Precision in depth buffer on OpenGL ES
I'm trying to get a proper Z value from the depth buffer (which is rendered to a color texture and then read back with glReadPixels) and then unproject it to get a real 3D-space coordinate. On an iPad Air it works perfectly, but not on an iPad 3 or iPad 4.

The iPad 3/4 and iPad Air report:

iPad 3/4: OpenGL ES 2.0 IMGSGX554-97.7
iPad Air: OpenGL ES 2.0 Apple A7 GPU - 27.23
GLSL version: OpenGL ES GLSL ES 1.00 (both)
Depth bits: 24 (all devices)

glDepthFunc(GL_LEQUAL);
glDepthRangef(0, 1.0);
glClearDepthf(1.0);

In the fragment shader:

precision highp float;  

// .... some code and variables  

const float maxnum = 256.0;

vec4 pack (float depth)
{
    // Spread a [0,1] depth value across the four 8-bit channels:
    // x ends up holding the least significant bits, w the most significant.
    const vec4 bitSh = vec4(maxnum * maxnum * maxnum,
                            maxnum * maxnum,
                            maxnum,
                            1.0);
    const vec4 bitMsk = vec4(0.0,
                             1.0 / maxnum,
                             1.0 / maxnum,
                             1.0 / maxnum);
    vec4 comp = fract(depth * bitSh);
    // Subtract the bits that already live in the less significant neighbor,
    // so each channel keeps only its own 8 bits.
    comp -= comp.xxyz * bitMsk;
    return comp;
}

void main()
{
   gl_FragColor = pack(gl_FragCoord.z);
}
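
For reference, here is a minimal CPU-side inverse of pack (my sketch, not from the original post; unpack_depth is an illustrative name, and pixel holds the four bytes returned by glReadPixels with GL_RGBA / GL_UNSIGNED_BYTE):

float unpack_depth(const unsigned char pixel[4])
{
    /* Inverse of pack(): x is the least significant channel and w the
       most significant, matching bitSh = (256^3, 256^2, 256, 1). */
    return pixel[0] / 255.0f / (256.0f * 256.0f * 256.0f)
         + pixel[1] / 255.0f / (256.0f * 256.0f)
         + pixel[2] / 255.0f / 256.0f
         + pixel[3] / 255.0f;
}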

On iPad Air we can see:
[screenshot: correct packed-depth output]

On iPad 3/4:
[screenshot: packed-depth output with visible artifacts]

Ophir answered 18/4, 2014 at 15:36
The only precision difference I am aware of between the Apple A7 GPU and previous PowerVR SGX GPUs is that lowp and mediump are identical on the A7 (both 16-bit). The older models had lowp = 12-bit and mediump = 16-bit. – Cockleboat
@AndonM.Coleman ok, if that's right, then I can't use gl_FragCoord.z because it isn't accurate enough. I've tried using coordinates from the vertex shader, but that didn't help much. What would you suggest trying? – Ophir

I had two issues in my case:
1. A large distance between nearZ and farZ.
2. I tried to use gl_FragCoord.z, which has low precision. This was resolved by rendering to a framebuffer that has only a depth attachment (without a color buffer!), then rendering the resulting depth texture in a second pass to another framebuffer with a color renderbuffer, using a shader with the same pack function as in the question (a rough sketch of this two-pass setup follows below).
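
A rough sketch of that two-pass setup (illustrative names such as drawScene, drawFullscreenQuad, and colorFBO; assumes the GL_OES_depth_texture extension is available on the device):

/* Pass 1: framebuffer with only a depth texture attached, no color buffer. */
GLuint depthTex, depthFBO;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffers(1, &depthFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glClear(GL_DEPTH_BUFFER_BIT);
drawScene();                      /* illustrative: depth-only geometry pass */

/* Pass 2: framebuffer with a color renderbuffer; draw a fullscreen quad whose
   fragment shader samples depthTex and writes pack(depth), then read back. */
glBindFramebuffer(GL_FRAMEBUFFER, colorFBO);  /* colorFBO created elsewhere */
glBindTexture(GL_TEXTURE_2D, depthTex);
drawFullscreenQuad();             /* illustrative: uses the pack() shader */

GLubyte pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);  /* x, y: query point */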

[screenshot: corrected depth output after the two-pass fix]

The answer comes from the OpenGL.org FAQ:

12.050 Why is my depth buffer precision so poor?

The depth buffer precision in eye coordinates is strongly affected by the ratio of zFar to zNear, the zFar clipping plane, and how far an object is from the zNear clipping plane.

You need to do whatever you can to push the zNear clipping plane out and pull the zFar plane in as much as possible.

12.070 Why is there more precision at the front of the depth buffer?

After the projection matrix transforms the clip coordinates, the XYZ-vertex values are divided by their clip coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.

As in reality, motion toward or away from the eye has a less profound effect for objects that are already in the distance. For example, if you move six inches closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the other hand, if the computer screen were already 20 feet away from you, moving six inches closer would have little noticeable impact on its apparent size. The perspective divide takes this into account.

As part of the perspective divide, Z is also divided by W with the same results. For objects that are already close to the back of the view volume, a change in distance of one coordinate unit has less impact on Z/W than if the object is near the front of the view volume. To put it another way, an object coordinate Z unit occupies a larger slice of NDC-depth space close to the front of the view volume than it does near the back of the view volume.

In summary, the perspective divide, by its nature, causes more Z precision close to the front of the view volume than near the back.
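
To make that concrete, here is a small helper (my addition, assuming a standard perspective projection with near plane n and far plane f) that recovers eye-space distance from a [0,1] depth-buffer value:

float eye_z_from_depth(float depth, float n, float f)
{
    float z_ndc = 2.0f * depth - 1.0f;   /* window [0,1] -> NDC [-1,1] */
    return (2.0f * f * n) / (f + n - z_ndc * (f - n));
}

With n = 0.1 and f = 1000.0, a depth value of 0.5 maps to an eye distance of only about 0.2: half of the depth buffer's range is spent on the sliver of the scene just beyond the near plane.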

12.080 There is no way that a standard-sized depth buffer will have enough precision for my astronomically large scene. What are my options?

The typical approach is to use a multipass technique. The application might divide the geometry database into regions that don't interfere with each other in Z. The geometry in each region is then rendered, starting at the furthest region, with a clear of the depth buffer before each region is rendered. This way the precision of the entire depth buffer is made available to each region.
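
A rough sketch of that multipass idea (regions, setProjection, and drawRegion are illustrative names, not from the FAQ):

/* Render regions back to front; each one gets the full depth-buffer precision. */
for (int i = numRegions - 1; i >= 0; --i) {
    setProjection(regions[i].zNear, regions[i].zFar);  /* hypothetical */
    glClear(GL_DEPTH_BUFFER_BIT);
    drawRegion(&regions[i]);                           /* hypothetical */
}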

Ophir answered 22/4, 2014 at 13:15
Thanks, this solved my problem with WebGL. I used mat4.perspective with near=0 and far=100, which caused weird artifacts. Now, with values 0.01 and 0.4, everything works nicely. – Kwangchow
@User1 glad it helped you :) – Ophir
@SAKrisT: could you please post some code for point 2, rendering to a framebuffer that has just a texture attached for the depth component (without a color buffer!) and then rendering this depth texture to another framebuffer with an attached color renderbuffer? – Hensel
@Hensel it's a sequence of operations with two framebuffers; it would be hard to post something here that is easy to understand. – Ophir

Perhaps the value in the depth buffer is fine, but there is a precision issue during sampling?

Have you checked the declaration of the depth texture sampler in your GLSL to make sure it is declared highp?

If you request a precision that is too low for your needs (or get defaulted to lowp because you didn't specify one), some devices may give you more precision than you asked for and cover up the omission.
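
For example, a minimal sketch of what those declarations look like in a GLSL ES 1.00 fragment shader (u_depthTex is an illustrative name; samplers default to lowp if you don't say otherwise):

precision highp float;               /* default precision for floats */
uniform highp sampler2D u_depthTex;  /* explicit highp on the sampler */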

Questor answered 18/4, 2014 at 17:33
