D2 shading/tinting
How do D2 3Dfx or D2X shade textures? Regular D2 uses the .256 lighting files. I was wondering.
Pixel shaders are shiny new things that were unheard of until a year after D3's release, and are by no means lighting-centric. They *can* "shade" (shading in the sense of changing appearance based on light), but they can also be used to create a ton of different effects, including interesting materials, texture overlays, bumpmapping, specular mapping, shadowing, and so on.
Descent 1 and 2 both use light values assigned to cube corners, and to do the actual lighting they calculate what each corner's value should be based on light-producing objects (lasers, missiles, energy powerups, etc.). Then, when the actual rendering takes place, the corner values are interpolated to the position of the current pixel, the renderer selects the nearest of its 16 or 32 or so light levels (rather than 256, to save on speed and to make it possible to draw the scene consistently using a 256-color palette), and it changes the color value of whatever texel is right there based on that. The game uses a custom palette of colors every frame to achieve its wide variety of environments and lighting, and it does so in a very adaptive manner.
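Here's a minimal C sketch of that pipeline, just to make it concrete (the names, the corner ordering, and NUM_SHADES are my own guesses, not the actual Descent source):

/* Minimal sketch (not the actual Descent source) of palette-based
 * vertex lighting: interpolate the four corner lights across the face,
 * snap to one of a small number of shade levels, and remap the texel's
 * palette index through a precomputed shade table. */
#include <stdint.h>

#define NUM_SHADES 32   /* assumption: on the order of 16-32 levels */

/* shade_table[level][color] gives the palette index closest to `color`
 * darkened to `level`; built once from the 256-color palette (the job
 * the .256 lighting files do in stock D2). */
extern uint8_t shade_table[NUM_SHADES][256];

/* Bilinear interpolation of the four corner lights (0.0-1.0) at
 * normalized face coordinates (u, v); corners assumed ordered
 * clockwise from top-left. */
static float interp_corner_light(const float corner[4], float u, float v)
{
    float top    = corner[0] + (corner[1] - corner[0]) * u;
    float bottom = corner[3] + (corner[2] - corner[3]) * u;
    return top + (bottom - top) * v;
}

/* Shade one texel: quantize the interpolated light to a shade level,
 * then remap the texel's palette index through the table. */
uint8_t shade_texel(uint8_t texel, const float corner_light[4],
                    float u, float v)
{
    float light = interp_corner_light(corner_light, u, v);
    int level = (int)(light * (NUM_SHADES - 1) + 0.5f);
    if (level < 0) level = 0;
    if (level >= NUM_SHADES) level = NUM_SHADES - 1;
    return shade_table[level][texel];
}

The coarse quantization in shade_texel is also what produces the banding mentioned below.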
The D2 3Dfx and D2X versions with hardware rendering simply send all the cube polygons to the graphics card along with the light values and let it do the rest. No palette is used, so any light value is possible instead of a fixed set of them, which eliminates the "triangle" banding seen in software-rendered D1/D2.
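In plain OpenGL 1.x terms (illustrative only; this isn't D2X's actual renderer code, and draw_lit_side is a made-up helper), the hardware path boils down to something like:

/* Hedged sketch of the hardware path: the corner lights go to the
 * card as vertex colors, and GL_MODULATE multiplies them into the
 * texture, smoothly interpolated per pixel with no palette or
 * quantization involved. */
#include <GL/gl.h>

void draw_lit_side(GLuint texture, const float verts[4][3],
                   const float uvs[4][2], const float light[4])
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glBegin(GL_QUADS);
    for (int i = 0; i < 4; i++) {
        /* any light value in [0,1] is legal here, which is what
         * removes the software renderer's banding */
        glColor3f(light[i], light[i], light[i]);
        glTexCoord2f(uvs[i][0], uvs[i][1]);
        glVertex3fv(verts[i]);
    }
    glEnd();
}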
D3 uses a technique called lightmapping, where the texture of a polygon is multiplied by a light texture with values from 0 to 1 (0 being black, 1 being full brightness). This is convenient, but it makes it impossible to create washed-out effects from extreme brightness (an effect unheard of in D3's day anyway, since it takes way more computational power and capabilities that video cards didn't really have back then). Lightmapping is far more advanced than the vertex lighting D1/D2 use, since it allows different light values across the whole polygon, not just at the corners. The result is the highly detailed lighting seen in D3 ("highly detailed" of course being a relative term; it was extremely kickass in 1999).
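The multiply itself is trivial. A toy C version (apply_lightmap is a hypothetical helper, not anything from D3; in practice the card does this per pixel) also shows why it can't overbrighten:

/* Because the light value is clamped to [0,1], the result can only
 * darken the base texel, which is exactly why plain lightmapping
 * can't wash anything out. */
typedef struct { float r, g, b; } color_t;

color_t apply_lightmap(color_t texel, float light)
{
    /* light: 0.0 = black, 1.0 = full texture brightness */
    color_t out = { texel.r * light, texel.g * light, texel.b * light };
    return out;
}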
Lightmapping is also what the Unreal Engine uses. It's capable of some pretty wicked lighting and shadow effects, and you can adjust the density of the lightmap to produce more detailed shadows.
Sadly, Descent 3 doesn't seem to have particularly high resolution on its lightmaps. You can't, say, have a ladder-shaped structure under a light and expect the steps to occlude the light and cast a neat shadow.
It's just a billboarded texture (at least, it is in the game; I don't know anything about the editor, but I have no reason to believe it always draws things as they appear in-game). It's basically drawn like a halo, but with distance-based influence on its alpha value. You don't even need a shader for it, just an additive blend (blending is still fixed-function, since the framebuffer is not directly accessible from within a shader in the current model; some of the stranger cards have programmable blending, but not in the pixel shader). About the most use a shader has for an effect like that is calculating the fade, which is more efficiently done on the CPU, since there aren't exactly very many of these fading billboards, and only two polys of it are ever visible. The halo effect is more inventive than advanced, but it does add a bit to the game. It's exactly along the lines of what game devs like: cheap and good-looking.
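For the curious, it comes down to something like this in illustrative OpenGL 1.x (the function name and fade parameters are mine, not Descent 3's):

/* Rough sketch of the halo billboard: the alpha fade is computed on
 * the CPU from the viewer distance, and the quad is drawn with
 * fixed-function additive blending; no pixel shader anywhere. */
#include <GL/gl.h>

void draw_halo_billboard(GLuint halo_tex, const float quad[4][3],
                         const float uvs[4][2], float dist,
                         float fade_start, float fade_end)
{
    /* distance-based fade, done on the CPU */
    float alpha = 1.0f - (dist - fade_start) / (fade_end - fade_start);
    if (alpha <= 0.0f) return;          /* fully faded out: skip it */
    if (alpha > 1.0f) alpha = 1.0f;

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);  /* additive, scaled by alpha */
    glDepthMask(GL_FALSE);              /* a glow shouldn't write depth */
    glBindTexture(GL_TEXTURE_2D, halo_tex);

    glBegin(GL_QUADS);
    glColor4f(1.0f, 1.0f, 1.0f, alpha);
    for (int i = 0; i < 4; i++) {
        glTexCoord2f(uvs[i][0], uvs[i][1]);
        glVertex3fv(quad[i]);
    }
    glEnd();

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}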
I'm just itching to see Valve's HDR version of Source, though... True light blooms are way better than any billboarded effect.