What exactly is the need for gamma correction?

I have trouble fully understanding the need for gamma correction. I hope you can help me.

Let’s assume we want to display 256 neighboring pixels that should form a smooth gradient from black to white. To denote their colors, we use linear gray values from 0..255. Due to the non-linearity of the human eye, the monitor must not simply turn these values into proportional (linear) luminance values. If the neighboring pixels had the luminance values (1/256)*I_max, (2/256)*I_max, et cetera, we would perceive too large a difference in brightness between adjacent pixels in the dark region (the gradient would not look smooth).

Fortunately, a monitor has the reciprocal non-linearity to the human eye. That means: if we put linear gray values 0..255 into the frame buffer, the monitor turns them into non-linear luminance values x^gamma. However, as our eye is non-linear the other way round, we perceive a smooth, linear gradient; the non-linearity of the monitor and that of our eye cancel each other out.

So why do we need gamma correction? I have read in books that we always want the monitor to produce linear luminance values. According to them, the non-linearity of the monitor must be compensated before writing the gray values to the frame buffer, and that compensation is the gamma correction. My problem is that, as far as I understand it, we would not perceive linear brightness (i.e., we would not perceive a smooth, steady gradient) if the monitor produced linear luminance values.

As far as I can see, it would be just perfect if we put linear gray values into the frame buffer. The monitor turns these values into non-linear luminance values, and our eye perceives linear brightness again, because the eye's non-linearity is the reciprocal of the monitor's. There would be no need to gamma-correct the gray values in the frame buffer and no need to force the monitor to produce linear luminance values.
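
For illustration, here is a minimal sketch of the cancellation argument above, assuming the display applies a pure 2.2 power curve and that perceived brightness is exactly the reciprocal power (both are simplifying assumptions, not exact models):

    # Assumed model: display emits x^2.2, eye perceives y^(1/2.2).
    GAMMA = 2.2

    for code in (32, 64, 128, 192, 255):
        value = code / 255.0                      # linear gray value in the frame buffer
        emitted = value ** GAMMA                  # non-linear luminance from the display
        perceived = emitted ** (1.0 / GAMMA)      # assumed reciprocal response of the eye
        print(f"code {code:3d}: emitted {emitted:.3f}, perceived {perceived:.3f}")

    # The perceived values come back equal to the original linear gray values,
    # which is exactly the cancellation described above.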

What is wrong with my way of looking at these things? Thanks

Deeprooted answered 19/10, 2012 at 13:48 Comment(0)

Allow me to ‘resurrect’ this question, since I am struggling with similar questions right now and I think I have found the answer; it may be useful for someone else. Or I might be wrong and someone could tell me :)

I think there is nothing wrong with your way of thinking. The thing is, you don't need to gamma-correct all the time, if you know what you are doing. It depends on what you want to achieve. Let's look at two different cases.

A) Light simulation (AKA rendering). You have a diffuse surface with a light pointing towards it. Then, the light's intensity is doubled.

Well, let's see what happens in the real world in such a situation. Assuming a purely diffuse surface, the intensity of the reflected light is the surface's albedo multiplied by the incoming light intensity and by the cosine of the angle between the incoming light direction and the surface normal. The key point is: when the incoming light intensity is doubled, the reflected light intensity is doubled too. This is why light transport is said to be a linear process. Funnily enough, you will not perceive the surface as twice as bright, because our perception is non-linear (this is modelled by the so-called Stevens' power law). Put again: in the real world the reflected light is doubled, but you do not perceive it as twice as bright.

Now, how would we simulate this? Well, if we have an sRGB texture with the surface's albedo, we need to linearize it first (by undoing the sRGB encoding, which roughly means applying an exponent of 2.2). Now that it is linear, and we have the light intensity, we can use the formula above to compute the reflected light intensity. Since we are in a linear space, doubling the light intensity doubles the output, just like in the real world. Then we gamma-correct the result.

Because of this, when the screen displays the rendered image it applies its gamma, so the overall response is linear: the light emitted by the screen for the twice-as-powerful light will have twice the intensity of the light emitted for the first one. So the light that arrives at your eyes from your screen is doubled, exactly as it would be if you were looking at the real surface under the real lights. You will not perceive the second render as twice as bright, of course, but, as we said earlier, that is exactly what would happen in the real situation. The same behavior in the real world and in the simulation means that the simulation (the render) is correct :)
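
Here is a minimal sketch of that pipeline. It assumes a plain 2.2 power curve as a stand-in for the exact piecewise sRGB transfer function, and the function names are made up for illustration:

    # Minimal sketch of gamma-correct diffuse shading.
    # Assumption: a plain 2.2 power curve approximates the sRGB transfer function.

    def srgb_to_linear(c):
        """Decode an sRGB value in [0, 1] to linear light (approximate)."""
        return c ** 2.2

    def linear_to_srgb(c):
        """Encode a linear-light value in [0, 1] back to sRGB (approximate)."""
        return c ** (1.0 / 2.2)

    def shade_diffuse(albedo_srgb, light_intensity, cos_theta):
        """Lambertian shading done in linear space, result encoded for display."""
        albedo_linear = srgb_to_linear(albedo_srgb)
        reflected_linear = albedo_linear * light_intensity * max(cos_theta, 0.0)
        return linear_to_srgb(min(reflected_linear, 1.0))

    # Doubling the light intensity doubles the *linear* reflected light (and hence
    # the light the screen emits), even though the encoded pixel value does not double:
    print(shade_diffuse(0.5, 0.2, 1.0))   # single-strength light
    print(shade_diffuse(0.5, 0.4, 1.0))   # double-strength light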

B) A different case is when you want a gradient that should 'look' (AKA be perceived as) linear.

Since you want the non-linear response of the screen to cancel out our non-linear visual perception, you can skip gamma correction altogether (as you suggest). Or, more accurately, keep operating in linear space and gamma-correcting, but build your gradient not with consecutive pixel values (1, 2, 3, ..., 255), which would be perceived non-linearly (because of Stevens' law), but with values transformed by the inverse of our perceptual brightness response (that is, applying an exponent of 1/0.5 = 2 to the normalized values, i.e., the reciprocal of Stevens' exponent for brightness).
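
A sketch of that construction, using the same rough numbers (a Stevens brightness exponent of 0.5 and a display gamma of 2.2; both are simplifications, and the function name is made up):

    # Build a 256-entry ramp intended to *look* linear, by first undoing the eye's
    # compression (exponent 1/0.5 = 2) and then gamma-encoding for the display.

    def perceptually_linear_ramp(n=256, stevens_exponent=0.5, display_gamma=2.2):
        ramp = []
        for i in range(n):
            t = i / (n - 1)                            # intended perceptual position 0..1
            linear = t ** (1.0 / stevens_exponent)     # inverse of the perceptual response
            encoded = linear ** (1.0 / display_gamma)  # gamma-encode for the frame buffer
            ramp.append(round(encoded * 255))
        return ramp

    ramp = perceptually_linear_ramp()
    print(ramp[:8], "...", ramp[-4:])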

As a matter of fact, if you look at a gamma-corrected linear gradient such as the one at http://scanline.ca/gradients/ you do not perceive it as linear at all: you see far more variation in the lower intensities than in the higher ones (as expected).

Well, at least this is my current understanding of the topic. I hope it helps someone. And again, please, if it is wrong I would be really grateful if someone could point it out...

Pongid answered 26/9, 2013 at 2:22 Comment(1)
I understand it the same way as you.Matsumoto

The problem really arises when doing color calculations. For example, if you are blending two colors, you need to use the linear intensities to do the calculations. To actually display the proper result, you then have to convert the linear intensities back to the gamma-corrected intensities.
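
As a small, hedged example (the 2.2 power curve below is only an approximation of the real sRGB transfer function, and the helper names are made up):

    # Blend two sRGB-encoded gray values by mixing their *linear* intensities.

    def decode(v):                     # sRGB byte -> approximate linear light in [0, 1]
        return (v / 255.0) ** 2.2

    def encode(x):                     # approximate linear light -> sRGB byte
        return round((x ** (1.0 / 2.2)) * 255)

    def blend(a, b, t=0.5):
        return encode((1 - t) * decode(a) + t * decode(b))

    # A naive average of the encoded bytes of black (0) and white (255) would give 128,
    # but mixing their linear intensities and re-encoding gives a brighter value:
    print(blend(0, 255))               # about 186 with a 2.2 curve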

How your eyes perceive the intensities isn't relevant. Color calculations have to be done according to the physical principles of optics, which operate on linear luminance values. Once you have calculated a color, you want those luminance values to be output by your monitor, regardless of how they are perceived, so you have to compensate for the fact that the monitor doesn't directly produce the luminance values that you want.

Robbegrillet answered 19/10, 2012 at 14:2 Comment(4)
"To actually display the proper result, you then have to convert the linear intensities back to the gamma-corrected intensities." <-- Why? Why do I have to compute gamma-corrected intensities at all? As I tried to explain, wouldn't it be fine if I put just the normal linear values into the frame buffer? The monitor would then turn them into non-linear luminance values (because of its non-linearity), yes. But as our eye has the reciprocal non-linearity, we should perceive the linear gradient. Were is my mistake?Deeprooted
@user1494080: How your eyes perceive the colors isn't relevant. Color calculations aren't done based on perceived intensities. You have calculated a color and that is the color you want the monitor to display, regardless of how it is perceived. You have to compensate for the fact that the monitor doesn't directly produce the intensities that you want.Robbegrillet
@user1494080, if monitors were designed to input linear values you'd be correct. But they're not, and the origins of that are way back in TV history.Furious
@Deeprooted there is a document I like a lot: "The Gamma FAQ", poynton.com/PDFs/GammaFAQ.pdf. Gamma is like big/little endian: we often get it wrong by going overboard. Let us all take the time to digest it fully. What Vaughn says is the key: you calculate a physical value (linear), the monitor displays it raised to the power 2.2, therefore you need to compensate by raising to the power 1/2.2 before writing to the frame buffer. Of course this is only true if you really are working in linear space, so be careful with your data source; mostly, data sources are in gamma space (textures, photos).Matsumoto

To actually answer what is wrong with your way of looking at this: nothing, really. It WOULD be great to have a linear frame buffer; but, as your gradient example shows, it's definitely not great to have an 8-bit linear frame buffer.

The fact that 8 bits are so easy to handle is pretty much the only justification for gamma-compressed frame buffers and color notations (think of HTML's #888: wouldn't it be uncool if you had to use #333 for middle gray instead of #888?).
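
To put rough numbers on the 8-bit argument, here is a small sketch; it reuses the 2.2 power curve as a crude stand-in for perceived lightness, which is an assumption rather than an exact model:

    # Why an 8-bit *linear* frame buffer wastes precision in the darks.

    def perceived(linear_value):
        # crude perceptual-lightness proxy: the display's own 2.2 curve inverted
        return linear_value ** (1.0 / 2.2)

    # One code step at the dark end of a linear buffer is a large perceptual jump:
    dark_step = perceived(2 / 255) - perceived(1 / 255)
    print(f"linear codes 1 -> 2: perceptual jump of about {dark_step:.3f}")   # ~0.03

    # The same two codes read as gamma-encoded values are almost adjacent perceptually:
    print(f"encoded codes 1 -> 2: jump of about {1 / 255:.4f}")               # ~0.004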

About the monitor: you want to be able to predict its response to your input, and sRGB tells you what that response should be. Normally that's all you need to know. Some people think it's somehow "correct" for the monitor to produce "linear" output, which can be simulated by compensating for the monitor's gamma. I advise steering clear of such a setup: it breaks all the apps that (correctly and sanely) assume standard gamma, in favour of un-breaking ill-conceived apps that assume linearity. Don't do that. Instead, fix those apps or dump them.

Lexicology answered 22/10, 2012 at 16:17 Comment(0)
