Writing texture data onto depth buffer
I'm trying to implement the technique described at: Compositing Images with Depth.

The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.

The problem I face is that glDrawPixels is not available in OpenGL ES. Is there a way to accomplish the same thing on the iPhone?

Ensconce asked 26/12, 2010 at 16:4
The depth buffer is more hidden in OpenGL ES than you might expect: not only is glDrawPixels absent, but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values into the depth buffer the way you might push colours.

The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.

The disadvantages are that drawing will cost you an extra texture unit, and that textures store integer components, which limits the depth precision you can achieve.

EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low-precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:

precision mediump float;

void main()
{
    // write a depth-like value to the red channel; gl_FragCoord.w is
    // 1/w(clip), which grows as fragments get closer under a perspective
    // projection, so brighter red means closer
    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}

That stores depth in the red channel. So you've partially recreated the old depth-texture extension: you'll have an image that's brighter red for pixels that are closer and darker red for pixels that are further away. In your case, I think you'd actually load this image from disk.

To then use the texture in a future fragment shader, you'd do something like:

precision mediump float;

uniform sampler2D depthMap;
// assumed uniform: the viewport size in pixels, needed because
// gl_FragCoord.xy is in window coordinates but texture2D expects
// coordinates in [0, 1]
uniform mediump vec2 viewportSize;

void main()
{
    // read the stored depth from the red channel of the depth map
    lowp vec4 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy / viewportSize);

    // discard the current fragment if it is less close than the stored value
    if (colourFromDepthMap.r > gl_FragCoord.w) discard;

    // ... otherwise set gl_FragColor appropriately ...
}

EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
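
The classic trick for a smarter mapping is to spread a normalised depth value across all four 8-bit channels of an RGBA texel, giving roughly 32 bits of precision rather than 8. As a rough sketch, with helper names of my own invention, and not necessarily the exact scheme the linked document uses:

vec4 packDepth(highp float depth)
{
    // spread a depth in [0, 1) across the four channels, most significant in alpha
    const highp vec4 bitShift = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
    const highp vec4 bitMask = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
    highp vec4 result = fract(depth * bitShift);
    // remove the bits already captured by a more significant channel
    result -= result.xxyz * bitMask;
    return result;
}

highp float unpackDepth(highp vec4 rgba)
{
    // undo the packing by weighting each channel back into place
    const highp vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return dot(rgba, bitShift);
}

Bear in mind that highp support in ES 2.0 fragment shaders is optional, which is the precision ceiling mentioned in the final comment below.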

Hardesty answered 4/1, 2011 at 17:6
Thank you for your answer. Could you elaborate a bit on the on-GPU round trip you're talking about? I'm not quite sure I understand that part. – Ensconce
It's not completely relevant to the question you were asking — it's more relevant to the 'how do I use a depth map for shadowing' type of question, where you do actually draw something to create a depth map on the GPU and keep the map for later rather than loading it from disk. But I've added a quick bit of example code, which may still be a bit vague? – Hardesty
Having said all that, GL_OES_depth_texture does appear to exist on the iPhone 4 I have here now. I'm not sure if it's an OS version thing, but my understanding was that the hardware is practically identical to the iPad's, so I'm not otherwise able to explain it. Might be worth checking the specific iOS devices and OS versions you want to support. – Hardesty
I may ask this as a separate question, but when writing to a texture using a fragment shader to simulate gl_FragDepth, how do you prevent overwriting a fragment that may be nearer the viewer with one that may be farther away? It doesn't seem like we have the ability to query the texture being written to in order to see if there was something already written at that location. – Madeline
Shouldn't that be gl_FragCoord.z in the first fragment shader code example? – Phalansterian
@BradLarson did you ever find an answer to your question? – Phalansterian
@cheeesus - Yes I did. iOS 6.0 introduced the GL_EXT_shader_framebuffer_fetch extension, which allows you to read the current color at a fragment. Using this, you can read the previous fragment color which encodes depth, compare it to the current fragment's depth, and only write out the depth that's closer to the viewer. Before iOS 6, I used a maximum blend mode and a bucket-filling approach for the color components that only extended the dynamic range on depth from 256 to 1024. Now I can nominally get a 32-bit dynamic range, although highp precision limits you to 16 bits in practice. – Madeline
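
A minimal sketch of the framebuffer-fetch approach that last comment describes, sticking to the simple red-channel depth scheme from the answer rather than a full packed encoding (the extension requires iOS 6 or later):

#extension GL_EXT_shader_framebuffer_fetch : require

precision mediump float;

void main()
{
    // gl_LastFragData[0], provided by the extension, is the colour already in
    // the framebuffer at this fragment; with the red-channel scheme above,
    // .r holds the previously written depth
    mediump float storedDepth = gl_LastFragData[0].r;

    // same convention as before: gl_FragCoord.w grows as fragments get closer,
    // so a larger stored value means something nearer has already been drawn
    if (storedDepth > gl_FragCoord.w) discard;

    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}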
