I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to a second FBO with a texture attachment to resolve the samples, so I can read from that texture to perform post-processing shading while drawing to the backbuffer (FBO index 0).
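For reference, the setup described above looks roughly like this. This is a sketch, not my exact code; the names (`msFbo`, `resolveFbo`, sample count of 4, `GL_RGBA8` formats) are illustrative and error checking is omitted:

```c
GLuint msFbo, msColorRb, resolveFbo, resolveTex;

/* Multisampled FBO backed by a renderbuffer. */
glGenFramebuffers(1, &msFbo);
glGenRenderbuffers(1, &msColorRb);
glBindRenderbuffer(GL_RENDERBUFFER, msColorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msColorRb);

/* Single-sample FBO backed by a texture, for post-processing reads. */
glGenFramebuffers(1, &resolveFbo);
glGenTextures(1, &resolveTex);
glBindTexture(GL_TEXTURE_2D, resolveTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, resolveTex, 0);

/* Each frame: resolve the samples by blitting. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```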
Now I'd like to get correct sRGB output. The problem is that the program's behavior is inconsistent between OS X and Windows, and it also changes from machine to machine: on Windows with an Intel HD 3000 it does not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the Intel HD 3000 under OS X it also applies it.
So this probably means I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't find any tutorials that actually say when it ought to be enabled; they only ever mention that it's dead easy and comes at no performance cost.
I am currently not loading any textures, so I haven't yet needed to deal with linearizing their colors.
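(When I do get to textures, my understanding is that the usual approach is to declare an sRGB internal format so the GPU linearizes texel values at sampling time, rather than doing a manual `pow()` in the shader. A hedged sketch, with `w`, `h`, and `pixels` standing in for a hypothetical image load:

```c
/* Hypothetical future texture upload: GL_SRGB8_ALPHA8 tells the GPU the
   texel data is sRGB-encoded, so sampling returns linearized values. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```

)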
To force the program not to simply spit the linear color values back out, what I tried was commenting out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively leaves the setting enabled for the entire pipeline; I even redundantly force it back on every frame.
I don't know whether this is correct. It certainly does apply a nonlinearization to the colors, but I can't tell if it's being applied twice (which would be bad): it could apply the gamma as I render to my first FBO, and it could apply it again when I blit the first FBO to the second.
I've gone so far as to take screen shots of my final frame and compare raw pixel color values to the colors I set them to in the program:
I set the input color to RGB(1,2,3) and the output is RGB(13,22,28).
That seems like quite a lot of expansion at the low end, and it leads me to question whether the gamma is getting applied multiple times.
I have just now gone through the sRGB equation and can verify that the conversion is only being applied once: linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4) - 0.055. Given how large the expansion is for these low color values, it really should be obvious if the sRGB transform were getting applied more than once.
So I still haven't determined what the right thing to do is. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set it during my GL init routine and forget about it thereafter?
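In other words, would the set-and-forget approach below be correct? My reading of the spec is that the sRGB encoding is only applied on writes to attachments whose color encoding is actually sRGB-capable, which would make leaving it enabled globally harmless; but that is exactly the behavior that seems to vary across the drivers above:

```c
/* Set-and-forget: enable once during GL initialization and never touch
   it again. Per the spec, the encoding only takes effect when the
   destination attachment is sRGB-capable. */
glEnable(GL_FRAMEBUFFER_SRGB);
```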
Update: if I enable GL_FRAMEBUFFER_SRGB at initialization and don't touch it forever after, even the Intel HD 3000 on Windows does the correct transformation. It just seems to require being enabled for more of the code path than on Nvidia. – Maki