I am in the process of converting my WebGL deferred renderer to one that uses high dynamic range. I've read a lot about the subject from various sources online, and I have a few questions that I hope can be clarified. Most of the reading I have done covers HDR image rendering, but my questions pertain to how a renderer might have to change to support HDR.
As I understand it, HDR is essentially trying to capture higher light ranges so that we can see detail in both extremely bright and extremely dark scenes. Typically in games we use an intensity of 1 to represent white light and 0 for black, but in HDR / the real world the ranges are far more varied: e.g. the sun in the engine might have an intensity of 10000 while a light bulb has 10.
To cope with these larger ranges you have to convert your renderer to use floating-point render targets (or ideally half floats, as they use less memory) for its light passes. In WebGL 1 this means enabling extensions such as OES_texture_float or OES_texture_half_float.
My first question is on the lighting. Besides the floating point render targets, does this simply mean that if previously I had a light representing the sun, which was of intensity 1, it could/should now be represented as 10000? I.e.
float spec = calcSpec();
vec4 diff = texture2D( sampler, uv );
// Where lightIntensity is now e.g. 10000.0 for the sun instead of 1.0?
vec4 color = diff * max( 0.0, dot( N, L ) ) * lightIntensity + spec;
return color;
Are there any other fundamental changes to the lighting system (other than float textures and higher ranges)?
Following on from this, we now have a float render target that has additively accumulated all the light values (in the higher ranges described above). At this point I might do some post-processing on the render target, such as bloom. Once complete, it needs to be tone-mapped before it can be sent to the screen, because the light ranges must be compressed back into the range of our monitors.
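For context, bloom in an HDR pipeline usually starts with a bright-pass that keeps only pixels above some threshold; the result is blurred and added back before tone mapping. A minimal sketch of such a bright-pass fragment shader (the sampler and threshold names are my own placeholders, not from any particular engine):

uniform sampler2D lightSample; // HDR light accumulation target (assumed name)
uniform float bloomThreshold;  // e.g. 1.0, so only "brighter than white" pixels bloom
varying vec2 texCoord;

void main()
{
    vec3 hdr = texture2D( lightSample, texCoord ).rgb;
    // Rec. 709 luminance of the HDR pixel
    float lum = dot( hdr, vec3( 0.2126, 0.7152, 0.0722 ) );
    // Pass through only the bright part; everything else contributes no bloom
    gl_FragColor = vec4( lum > bloomThreshold ? hdr : vec3( 0.0 ), 1.0 );
}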
So for the tone-mapping phase, I would presumably run a post-process pass that uses a tone-mapping formula to convert the HDR lighting to a low dynamic range. The technique I chose is John Hable's filmic curve from Uncharted 2:
const float A = 0.15;
const float B = 0.50;
const float C = 0.10;
const float D = 0.20;
const float E = 0.02;
const float F = 0.30;
const float W = 11.2;
vec3 Uncharted2Tonemap( vec3 x )
{
    // Hable's filmic curve: shoulder, linear section and toe in one fit
    return ( ( x * ( A * x + C * B ) + D * E ) / ( x * ( A * x + B ) + D * F ) ) - E / F;
}
... // in the main fragment shader
vec4 texColor = texture2D( lightSample, texCoord );
texColor *= 16.0; // hardcoded exposure adjustment
float ExposureBias = 2.0;
vec3 curr = Uncharted2Tonemap( ExposureBias * texColor.xyz );
vec3 whiteScale = 1.0 / Uncharted2Tonemap( vec3( W ) ); // normalise so W maps to white
vec3 color = curr * whiteScale;
// Gamma correction
color = pow( color, vec3( 1.0 / 2.2 ) );
gl_FragColor = vec4( color, 1.0 );
My second question relates to this tone-mapping phase. Is there much more to it than this technique? Is using higher light intensities and tweaking the exposure all that's required to be considered HDR, or is there more to it? I understand that some games have auto-exposure functionality to figure out the average luminance, but at the most basic level is that needed? Presumably you can just tweak the exposure manually?
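For what it's worth, auto exposure is commonly built on the scene's average log luminance: render the HDR target's luminance into a texture, downsample it to 1x1, and derive the exposure from that instead of a hard-coded constant. A rough sketch of the measurement pass, reusing the lightSample/texCoord names assumed above:

uniform sampler2D lightSample; // the same HDR light target as above
varying vec2 texCoord;

void main()
{
    vec3 hdr = texture2D( lightSample, texCoord ).rgb;
    float lum = dot( hdr, vec3( 0.2126, 0.7152, 0.0722 ) );
    // Log luminance averages more stably than a plain mean;
    // the small bias avoids log(0) on black pixels
    gl_FragColor = vec4( vec3( log( lum + 0.0001 ) ), 1.0 );
}

The tone-mapping pass would then compute something like exposure = key / exp( averageLogLum ), where key is an artist-chosen value (e.g. 0.18), in place of the hardcoded 16.0 above.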
Something else that's discussed in a lot of the documents is gamma correction. The gamma correction seems to be done in two places: first when textures are read, and then again when the final color is sent to the screen. When textures are read, they must simply be changed to something like this:
vec4 diff = texture2D( sampler, uv );
diff.rgb = pow( diff.rgb, vec3( 2.2 ) ); // sRGB -> linear; leave alpha untouched
Then, in the tone-mapping code above, the output correction is done by:
color = pow( color, vec3( 1.0 / 2.2 ) ); // linear -> sRGB
In John Hable's presentation he says that not all textures must be corrected like this: diffuse textures are authored in sRGB and must be, but things like normal maps, which store data rather than colors, don't necessarily have to be.
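In shader terms the distinction looks something like this (diffuseMap and normalMap are placeholder sampler names):

// Diffuse/albedo maps are authored in sRGB, so linearise them on read
vec3 albedo = pow( texture2D( diffuseMap, uv ).rgb, vec3( 2.2 ) );
// Normal maps store vectors, not colors, so sample them as-is
vec3 n = normalize( texture2D( normalMap, uv ).rgb * 2.0 - 1.0 );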
My third question is on this gamma correction. Is it necessary in order for HDR to work? Does it mean I have to change my engine everywhere diffuse maps are read?
That is my current understanding of what's involved in this conversion. Is it correct, and is there anything I have misunderstood or got wrong?
… GL_RGB10_A2 if you can sacrifice alpha (which is often the case in deferred shading). There is a similar small packed floating-point format (GL_R11F_G11F_B10F) that will get you really good performance if you can sacrifice precision in one color channel and eliminate alpha. They will give you about the same performance on modern hardware, and the packed floating-point format (via GL_APPLE_texture_packed_float in ES 2.0) is generally preferred. – Semivowel