I've seen a lot of material on this subject, but the examples I've found differ from one another, and I'm having a hard time getting a solid understanding of the correct process. Hopefully someone can tell me if I'm on the right track. I should also mention that I'm doing this on OS X Snow Leopard with the latest version of Xcode 3.
For the sake of example, let's say that I want to render to two targets, one for normals and one for color. To do this I create one framebuffer and attach two color textures to it, as well as a depth texture:
glGenFramebuffersEXT(1, &mFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mFBO);
glGenTextures(1, &mTexColor);
glBindTexture(GL_TEXTURE_2D, mTexColor);
//<texture params> -- note: GL_TEXTURE_MIN_FILTER should be set to a non-mipmap
// mode such as GL_LINEAR here, since no mipmap levels are ever generated
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, mTexColor, 0);
glGenTextures(1, &mTexNormal);
glBindTexture(GL_TEXTURE_2D, mTexNormal);
//<texture params, same as above>
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, mTexNormal, 0);
glGenTextures(1, &mTexDepth);
glBindTexture(GL_TEXTURE_2D, mTexDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL); // GL_UNSIGNED_INT is the conventional type for a 24-bit depth texture; the data pointer is NULL anyway
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, mTexDepth, 0);
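// Before unbinding, it seems wise to verify completeness;
// glCheckFramebufferStatusEXT is part of EXT_framebuffer_object like the calls above
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // handle/log the specific status value here
}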
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
Before rendering, I would bind the framebuffer again and then do:
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, buffers);
This would mean subsequent draw calls would render into my framebuffer's two color attachments. (I think?)
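Putting that together, I imagine the full pre-render sequence looks roughly like this (just a sketch on my part; it assumes w and h are the texture dimensions from the setup above):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mFBO);
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0, 0, w, h);
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, buffers);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clears both color attachments and the depth texture
// ... issue scene draw calls here ...
glPopAttrib();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);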
I'd then set my shaders and draw the scene. In my vertex shader I would process normals/positions/colors as usual, and pass the data to the fragment shader. The fragment shader would then do something like:
gl_FragData[0] = OutputColor;
gl_FragData[1] = OutputNormal;
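Spelled out, the fragment shader I have in mind is roughly this (GLSL 1.20 style; the varying names are mine, and since the attachments are GL_RGBA8 I remap the normal from [-1, 1] into [0, 1] so it survives the unsigned-byte format):

varying vec3 vNormal;   // interpolated eye-space normal from the vertex shader
varying vec4 vColor;    // material/vertex color from the vertex shader

void main()
{
    gl_FragData[0] = vColor;                                     // -> GL_COLOR_ATTACHMENT0 (mTexColor)
    gl_FragData[1] = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);  // -> GL_COLOR_ATTACHMENT1 (mTexNormal)
}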
At this point, I should have two textures: one with colors from all the rendered objects and one with normals. Is all of this correct? I should now be able to use those textures like any other, say by rendering them to a fullscreen quad, right?
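For instance, to visualize the normal target I'd expect something like this to work (fixed-function, assuming identity modelview and projection matrices so the quad covers the screen in clip space):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // render to the window again
glBindTexture(GL_TEXTURE_2D, mTexNormal);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);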