As I understand it, shadow-mapping is done by rendering the scene from the perspective of the light to create a depth map. Then you re-render the scene from the POV of the camera, and for each point (fragment in GLSL) in the scene you calculate the distance from there to the light source; if it matches what you have in your shadow map, then it's in the light, otherwise it's in the shadow.
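To make sure I'm describing that test correctly, here is my understanding of the per-fragment comparison as a toy Python sketch (the function names, the single-texel "shadow map", and the bias value are all mine, not from any tutorial):

```python
def squared_distance(a, b):
    """Squared Euclidean distance between two 3D points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def in_shadow(fragment_pos, light_pos, shadow_map_value, bias=1e-3):
    """A fragment is shadowed if it lies farther from the light than the
    closest occluder recorded in the shadow map (plus a small bias to
    avoid self-shadowing artifacts)."""
    return squared_distance(fragment_pos, light_pos) > shadow_map_value + bias

# Toy setup: the shadow map stores squared distance 4.0 for this texel,
# i.e. the closest occluder along this light ray is at distance 2.
light = (0.0, 5.0, 0.0)
stored_depth = 4.0

lit_fragment = (0.0, 3.0, 0.0)       # squared distance 4 -> matches the map
shadowed_fragment = (0.0, 0.0, 0.0)  # squared distance 25 -> behind the occluder

print(in_shadow(lit_fragment, light, stored_depth))       # False (lit)
print(in_shadow(shadowed_fragment, light, stored_depth))  # True (shadowed)
```

In a real shader this comparison happens per fragment against a depth texture, but the logic is the same: compare the fragment's distance to the light with the closest-occluder distance the map recorded.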
I was just reading through this tutorial to get an idea of how to do shadow mapping with a point/omnidirectional light.
Under section 12.2.2 it says:
We use a single shadow map for all light sources
And then under 12.3.6 it says:
1) Calculate the squared distance from the current pixel to the light source.
...
4) Compare the calculated distance value with the fetched shadow map value to determine whether or not we're in shadow.
Which is roughly what I stated above.
What I don't get is: if we've baked all our lights into one shadow map, which light's position do we compare the distance against? The distance stored in the map wouldn't correspond to any single light, since it's a blend of all of them, would it?
I'm sure I'm missing something, but hopefully someone can explain this to me.
Also, if we are using a single shadow map, how do we blend it for all the light sources?
For a single light source the shadow map just stores the distance from the light to the closest object (i.e., a depth map), but with multiple light sources, what would it contain?