Setting up OpenGL Multiple Render Targets

I've seen a lot of material on this subject, but the examples I've found differ in places, and I'm having a hard time getting a solid understanding of the correct process. Hopefully someone can tell me if I'm on the right track. I should also mention I'm doing this on OS X Snow Leopard with the latest version of Xcode 3.

For the sake of example, let's say that I want to write to two targets, one for normals and one for color. To do this I create one framebuffer and attach two color textures to it, as well as a depth texture:

glGenFramebuffersEXT(1, &mFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mFBO);

glGenTextures(1, &mTexColor);
glBindTexture(GL_TEXTURE_2D, mTexColor);
//<texture params>
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, mTexColor, 0);

glGenTextures(1, &mTexNormal);
glBindTexture(GL_TEXTURE_2D, mTexNormal);
//<Texture params>
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, mTexNormal, 0);

glGenTextures(1, &mTexDepth);
glBindTexture(GL_TEXTURE_2D, mTexDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, mTexDepth, 0);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

Before rendering, I would bind the framebuffer again and then do:

GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, buffers);

This would mean further draw calls would draw to my framebuffer. (I think?)
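In case it helps to see the whole pass, this is roughly what I have in mind for the per-frame flow (mFBO, w and h are the names from my setup code above):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mFBO);
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, buffers);
glViewport(0, 0, w, h);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... bind the MRT shader and draw the scene ...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the default framebuffer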

I'd then set my shaders and draw the scene. In my vertex shader I would process normals/positions/colors as usual, and pass the data to the fragment shader. The fragment would then do something like:

gl_FragData[0] = OutputColor;
gl_FragData[1] = OutputNormal;
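
For reference, the full fragment shader I'm picturing is a minimal GLSL 1.20 one along these lines (the varying names and the [0,1] normal packing are just my guesses at a reasonable approach):

#version 120
varying vec4 vColor;   // interpolated color from the vertex shader
varying vec3 vNormal;  // normal passed from the vertex shader

void main()
{
    gl_FragData[0] = vColor;                                     // written to GL_COLOR_ATTACHMENT0
    gl_FragData[1] = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);  // written to GL_COLOR_ATTACHMENT1
}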

At this point, I should have two textures; one with colors from all the rendered objects and one with normals. Is all of this correct? I should now be able to use those textures like any other, say rendering them to a fullscreen quad, right?
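What I have in mind for the fullscreen pass is roughly this (the program handle and sampler uniform names are placeholders):

glUseProgram(quadProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTexColor);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, mTexNormal);
glUniform1i(glGetUniformLocation(quadProgram, "uColorTex"), 0);
glUniform1i(glGetUniformLocation(quadProgram, "uNormalTex"), 1);
// ... draw the fullscreen quad; its fragment shader samples uColorTex and uNormalTex ...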

Observe answered 26/8, 2011 at 15:55 Comment(0)

Sounds and looks reasonable. This is indeed the common way to do it. If you don't need the depth data as a texture for further processing, you can also use a renderbuffer for the depth attachment, but a texture should work fine as well.
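For example, a depth renderbuffer attachment would look roughly like this (depthRB is just an illustrative name, set up in place of the mTexDepth texture above):

GLuint depthRB;
glGenRenderbuffersEXT(1, &depthRB);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRB);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthRB);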

You can also call glCheckFramebufferStatusEXT after all the setup is done, to see if the framebuffer is complete in its current configuration, but your code looks fine. If you don't have a problem and this was just for assurance, then rest assured that you're on the right track; otherwise tell us what's wrong.
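Something like this, placed right after the attachment calls and before unbinding the framebuffer, will tell you whether the setup is usable:

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // the framebuffer is incomplete; the status value tells you why
    // (e.g. GL_FRAMEBUFFER_UNSUPPORTED_EXT for an unsupported format combination)
}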

Electrochemistry answered 26/8, 2011 at 22:46 Comment(4)
Thanks for the answer! It was more of just an assurance-type question to make sure I wasn't allocating stuff I didn't need, and that the order looked okay. Would using a renderbuffer instead of a texture offer any performance benefits? – Observe
@Observe I don't know, probably not. – Electrochemistry
I know it's an old question, but yes: using a renderbuffer can gain performance compared to using a texture, since a renderbuffer can use more optimized internal storage and more streamlined paths for the drawing operations. – Animator
It seems that glDrawBuffers affects framebuffer state rather than global state, so one shouldn't need to call it again after rebinding the framebuffer. – Interoffice
