Sampling from a depth buffer in a shader returns values between 0 and 1, as expected. Given the near- and far- clip planes of the camera, how do I calculate the true z value at this point, i.e. the distance from the camera?
From http://web.archive.org/web/20130416194336/http://olivers.posterous.com/linear-depth-in-glsl-for-real
// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x; // raw value from the depth buffer, in [0,1]
    float z_n = 2.0 * z_b - 1.0;                      // back to NDC depth, in [-1,1]
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear)); // eye-space depth: distance to the camera plane, in the same units as zNear/zFar
}
[edit] So here's the explanation (with 2 mistakes; see the correcting comment below):
An OpenGL perspective projection matrix looks like this:
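For a symmetric frustum (the kind gluPerspective builds), with f = 1/tan(fovy/2), the standard matrix is:

f/aspect   0    0                            0
0          f    0                            0
0          0    (zFar+zNear)/(zNear-zFar)    2*zFar*zNear/(zNear-zFar)
0          0   -1                            0

The two "big components" in the third row are A = (zFar+zNear)/(zNear-zFar) and B = 2*zFar*zNear/(zNear-zFar).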
When you multiply this matrix by a homogeneous point [x,y,z,1], it gives you: [don't care, don't care, Az+B, -z] (with A and B the 2 big components of the matrix).
OpenGL next does the perspective division: it divides this vector by its w component. This operation is not done in shaders (except in special cases like shadow mapping) but in hardware; you can't control it. Since w = -z, the Z value becomes -A/z - B.
We are now in Normalized Device Coordinates. The Z value is between 0 and 1. For some stupid reason, OpenGL requires that it be moved to the [-1,1] range (just like x and y). A scale and offset are applied.
This final value is then stored in the buffer.
The above code does the exact opposite:
- z_b is the raw value stored in the buffer
- z_n linearly remaps z_b from [0,1] to [-1,1]
- z_e is the same formula as z_n = -A/z_e - B, but solved for z_e instead. It's equivalent to z_e = -A / (z_n + B). A and B should be computed on the CPU and sent as uniforms, btw.
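A minimal sketch of that CPU side in plain C with raw GL calls (the program handle prog and the uniform names projA and projB are made up here, and a symmetric frustum with clip planes zNear/zFar is assumed):

/* The two "big components" of the projection matrix shown above. */
float A = -(zFar + zNear) / (zFar - zNear);
float B = -2.0f * zFar * zNear / (zFar - zNear);

glUseProgram(prog);
glUniform1f(glGetUniformLocation(prog, "projA"), A);
glUniform1f(glGetUniformLocation(prog, "projB"), B);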
The opposite function is :
varying float depth; // Linear depth, in world units
void main(void)
{
    float A = gl_ProjectionMatrix[2].z;
    float B = gl_ProjectionMatrix[3].z;
    gl_FragDepth = 0.5 * (-A * depth + B) / depth + 0.5;
}
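For completeness, the depth varying above can be fed from a vertex shader along these lines (a sketch using the same old-style built-ins as the snippet above):

varying float depth; // positive eye-space distance, matching the fragment shader above
void main(void)
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    depth = -eyePos.z; // eye space looks down -Z, so negate to get a positive distance
    gl_Position = gl_ProjectionMatrix * eyePos;
}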
When you divide Az+B by -z you get -A - B/z rather than -A/z - B. And then it is after the perspective divide that the value is in [-1,1] and needs to be scale-biased to [0,1] before writing to the depth buffer, and not the other way around (though your code does it right, it's just the explanation that's wrong). – Deon

z_n = z_b? – Nylon

I know this is an old, old question, but I've found myself back here more than once on various occasions, so I thought I'd share my code that does the forward and reverse conversions.
This is based on @Calvin1602's answer. These work in GLSL or plain old C code.
uniform float zNear = 0.1;
uniform float zFar = 500.0;
// depthSample from depthTexture.r, for instance
float linearDepth(float depthSample)
{
    depthSample = 2.0 * depthSample - 1.0;
    float zLinear = 2.0 * zNear * zFar / (zFar + zNear - depthSample * (zFar - zNear));
    return zLinear;
}

// result suitable for assigning to gl_FragDepth
float depthSample(float linearDepth)
{
    float nonLinearDepth = (zFar + zNear - 2.0 * zNear * zFar / linearDepth) / (zFar - zNear);
    nonLinearDepth = (nonLinearDepth + 1.0) / 2.0;
    return nonLinearDepth;
}
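As a quick sanity check, the two functions should be inverses of each other. A throwaway C test of that round trip (uniform qualifiers dropped and zNear/zFar hard-coded; not part of the original answer):

#include <assert.h>
#include <math.h>
#include <stdio.h>

static const float zNear = 0.1f;
static const float zFar = 500.0f;

static float linearDepth(float depthSample)
{
    depthSample = 2.0f * depthSample - 1.0f;
    return 2.0f * zNear * zFar / (zFar + zNear - depthSample * (zFar - zNear));
}

static float depthSample(float linearDepth)
{
    float nonLinearDepth = (zFar + zNear - 2.0f * zNear * zFar / linearDepth) / (zFar - zNear);
    return (nonLinearDepth + 1.0f) / 2.0f;
}

int main(void)
{
    for (float d = 0.0f; d <= 1.0f; d += 0.25f) {
        float z = linearDepth(d);    /* buffer depth -> eye-space depth */
        float back = depthSample(z); /* eye-space depth -> buffer depth */
        printf("depth %.2f -> z %.4f -> depth %.5f\n", d, z, back);
        assert(fabsf(back - d) < 1e-4f);
    }
    return 0;
}

At d = 0 this gives z = zNear and at d = 1 it gives z = zFar, as expected.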
I ended up here trying to solve a similar problem when Nicol Bolas's comment on this page made me realize what I was doing wrong. If you want the distance to the camera and not the distance to the camera plane, you can compute it as follows (in GLSL):
float GetDistanceFromCamera(float depth,
                            vec2 screen_pixel,
                            vec2 resolution) {
    float fov = ...   // vertical field of view, in radians
    float near = ...
    float far = ...

    // Distance to the camera plane (same linearization as above, with depth in [0,1])
    float distance_to_plane = near / (far - depth * (far - near)) * far;

    // Scale by the ratio of this pixel's view-ray length to the focal length
    vec2 center = resolution / 2.0f - 0.5;
    float focal_length = (resolution.y / 2.0f) / tan(fov / 2.0f);
    float diagonal = length(vec3(screen_pixel.x - center.x,
                                 screen_pixel.y - center.y,
                                 focal_length));
    return distance_to_plane * (diagonal / focal_length);
}
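For instance, a full-screen pass might call it like this (a sketch; the depthTex, uv and resolution names are illustrative, not from the original):

uniform sampler2D depthTex;
uniform vec2 resolution;
varying vec2 uv;

void main(void)
{
    float depth = texture2D(depthTex, uv).r; // raw [0,1] depth sample
    float dist = GetDistanceFromCamera(depth, gl_FragCoord.xy, resolution);
    gl_FragColor = vec4(vec3(dist / 100.0), 1.0); // crude visualization; scale to taste
}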
(Source: thanks to GitHub user cassfalg, https://github.com/carla-simulator/carla/issues/2287)