OpenGL Render To Texture With Partial Transparency (Translucency) And Then Rendering That To The Screen

I've found a few places where this has been asked, but I've not yet found a good answer.

The problem: I want to render to texture, and then I want to draw that rendered texture to the screen IDENTICALLY to how it would appear if I skipped the render-to-texture step and just rendered directly to the screen. I am currently using the blend mode glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I have glBlendFuncSeparate to play around with as well.

I want to be able to render partially transparent, overlapping items to this texture. I know the blend function is currently messing up the RGB values based on the alpha. I've seen some vague suggestions to use "premultiplied alpha", but the descriptions are poor as to what that actually means. I make PNG files in Photoshop; I know they store translucency, and you can't easily edit the alpha channel independently as you can with TGA. If necessary I can switch to TGA, though PNG is more convenient.

For now, for the sake of this question, assume we aren't using images; I am just using full-color quads with alpha.

Once I render my scene to the texture, I need to render that texture into another scene, and I need to BLEND the texture assuming partial transparency again. Here is where things fall apart. The previous blending step already altered the RGB values based on alpha; doing it again works fine if alpha is 0 or 1, but if it is in between, the result is a further darkening of those partially translucent pixels.
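
For example, with a single pure red quad at 50% alpha, assuming the texture is cleared to (0, 0, 0, 0) and the screen behind it is black, the math works out to:

    direct to screen:           R = 0.5 * 1.0 = 0.5
    pass 1 (into texture):      R = 0.5 * 1.0 = 0.5,   A = 0.5 * 0.5 + 0.5 * 0.0 = 0.25
    pass 2 (texture to screen): R = 0.25 * 0.5 = 0.125   (much darker than 0.5)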

Playing with blend modes I've had very little luck. The best I can do is render to texture with:

glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);

I did discover that rendering multiple times with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) will approximate the right color (unless things overlap). But that's not exactly perfect: as you can see in the following image, the parts where the green/red/blue boxes overlap get darker, or accumulate alpha. (EDIT: If I do the multiple draws in the render-to-screen step and only render once to the texture, the alpha accumulation issue disappears and it does work. But why? I don't want to have to render the same texture hundreds of times to the screen to get it to accumulate properly.)

Here is an image detailing the issue. The multiple render passes use basic blending (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), and they are rendered multiple times in the texture rendering step. The three boxes on the right are rendered 100% red, green, or blue (0-255), but at alpha values of 50% for blue, 25% for red, and 75% for green:

[Image: side-by-side comparison of the render-to-texture result and the direct-to-screen result, showing darkening where the boxes overlap]

So, a breakdown of what I want to know:

  1. I set blend mode to: X?
  2. I render my scene to a texture. (Maybe I have to render with a few blend modes or multiple times?)
  3. I set my blend mode to: Y?
  4. I render my texture to the screen over an existing scene. (Maybe I need a different shader? Maybe I need to render the texture a few times?)

The desired behavior is that, at the end of those steps, the final pixel result is identical to what I would get if I just did this:

  1. I set my blend mode to: (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
  2. I render my scene to the screen.

And, for completeness, here is some of my code with my original naive attempt (just regular blending):

    //RENDER TO TEXTURE.
    void Clipped::refreshTexture(bool a_forceRefresh) {
        if(a_forceRefresh || dirtyTexture){
            auto pointAABB = basicAABB();
            auto textureSize = castSize<int>(pointAABB.size());
            clippedTexture = DynamicTextureDefinition::make("", textureSize, {0.0f, 0.0f, 0.0f, 0.0f});
            dirtyTexture = false;
            texture(clippedTexture->makeHandle(Point<int>(), textureSize));
            framebuffer = renderer->makeFramebuffer(castPoint<int>(pointAABB.minPoint), textureSize, clippedTexture->textureId());
            {
                renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                SCOPE_EXIT{renderer->defaultBlendFunction(); };

                renderer->modelviewMatrix().push();
                SCOPE_EXIT{renderer->modelviewMatrix().pop(); };
                renderer->modelviewMatrix().top().makeIdentity();

                framebuffer->start();
                SCOPE_EXIT{framebuffer->stop(); };

                const size_t renderPasses = 1; //Not sure?
                if(drawSorted){
                    for(size_t i = 0; i < renderPasses; ++i){
                        sortedRender();
                    }
                } else{
                    for(size_t i = 0; i < renderPasses; ++i){
                        unsortedRender();
                    }
                }
            }
            alertParent(VisualChange::make(shared_from_this()));
        }
    }

Here is the code I'm using to set up the scene:

    bool Clipped::preDraw() {
        refreshTexture();

        pushMatrix();
        SCOPE_EXIT{popMatrix(); };

        renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        SCOPE_EXIT{renderer->defaultBlendFunction();};
        defaultDraw(GL_TRIANGLE_FAN);

        return false; //returning false blocks the default rendering steps for this node.
    }

And the code to render the scene:

test = MV::Scene::Rectangle::make(&renderer, MV::BoxAABB({0.0f, 0.0f}, {100.0f, 110.0f}), false);
test->texture(MV::FileTextureDefinition::make("Assets/Images/dogfox.png")->makeHandle());

box = std::shared_ptr<MV::TextBox>(new MV::TextBox(&textLibrary, MV::size(110.0f, 106.0f)));
box->setText(UTF_CHAR_STR("ABCDE FGHIJKLM NOPQRS TUVWXYZ"));
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 0, 1, .5})->position({80.0f, 10.0f})->setSortDepth(100);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({80.0f, 40.0f})->setSortDepth(101);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 1, 0, .75})->position({80.0f, 70.0f})->setSortDepth(102);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 0, 1, .5})->position({110.0f, 10.0f})->setSortDepth(100);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({110.0f, 40.0f})->setSortDepth(101);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 1, 0, .75})->position({110.0f, 70.0f})->setSortDepth(102);

And here's my screen draw:

renderer.clearScreen();
test->draw(); //this is drawn directly to the screen.
box->scene()->draw(); //everything in here is in a clipped node with a render texture.
renderer.updateScreen();

EDIT: Framebuffer setup/teardown code:

void glExtensionFramebufferObject::startUsingFramebuffer(std::shared_ptr<Framebuffer> a_framebuffer, bool a_push){
    savedClearColor = renderer->backgroundColor();
    renderer->backgroundColor({0.0, 0.0, 0.0, 0.0});

    require(initialized, ResourceException("StartUsingFramebuffer failed because the extension could not be loaded"));
    if(a_push){
        activeFramebuffers.push_back(a_framebuffer);
    }

    glBindFramebuffer(GL_FRAMEBUFFER, a_framebuffer->framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, a_framebuffer->texture, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, a_framebuffer->renderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, roundUpPowerOfTwo(a_framebuffer->frameSize.width), roundUpPowerOfTwo(a_framebuffer->frameSize.height));

    glViewport(a_framebuffer->framePosition.x, a_framebuffer->framePosition.y, a_framebuffer->frameSize.width, a_framebuffer->frameSize.height);
    renderer->projectionMatrix().push().makeOrtho(0, static_cast<MatrixValue>(a_framebuffer->frameSize.width), 0, static_cast<MatrixValue>(a_framebuffer->frameSize.height), -128.0f, 128.0f);

    GLenum buffers[] = {GL_COLOR_ATTACHMENT0};
    //pglDrawBuffersEXT(1, buffers);


    renderer->clearScreen();
}

void glExtensionFramebufferObject::stopUsingFramebuffer(){
    require(initialized, ResourceException("StopUsingFramebuffer failed because the extension could not be loaded"));
    activeFramebuffers.pop_back();
    if(!activeFramebuffers.empty()){
        startUsingFramebuffer(activeFramebuffers.back(), false);
    } else {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindRenderbuffer(GL_RENDERBUFFER, 0);

        glViewport(0, 0, renderer->window().width(), renderer->window().height());
        renderer->projectionMatrix().pop();
        renderer->backgroundColor(savedClearColor);
    }
}

And my clear screen code:

void Draw2D::clearScreen(){
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
}
Subdue answered 21/6, 2014 at 22:54 Comment(10)
The issue you have (I think; it is kind of hard to understand what you are saying) is that you do not fully understand how blending works. If you render something that is partially transparent on top of something else that is partially transparent, the final color drawn by your graphics card will not have the same alpha as either of the things you rendered, unless you are using alpha values of 1 or 0. A way to fix this (I think, haven't tested it) would be to make the object's alpha a function of how many times it is rendered. – Exerciser
Right, I don't know how to achieve what I want because I have an incomplete understanding of blending functions. They aren't very intuitive to me. Assistance in this regard is exactly what this question is about. :) – Subdue
I can also change the shader used for either part of this, i.e. I can change the shader I use when rendering the texture to the screen. – Subdue
One thing it may be is that you are not clearing the texture every time you render to it. Clear it using glClear(GL_COLOR_BUFFER_BIT), or whatever function you use to clear the screen, every time you render to a texture. I believe that would be in your refreshTexture method. – Exerciser
@jamolnng I have edited my question to include my framebuffer setup/teardown code. I have my clear in there. – Subdue
@Subdue You should probably at least upvote the answer you ended up using in the end. ;) – Tat
@Tat I did; the guy who answered first with two methods has my upvote. I've given you one too. Thanks for your time, but you were beaten to the punch. – Subdue
@Subdue I actually posted the first answer with the method you are using. Timestamps don't lie. ;) Thanks for the upvote, though. – Tat
@Tat Fair enough! Reto Koradi posted the first working solution and suggested he knew about the second before your post. I didn't realize you posted before he updated his answer, but regardless, both of you figured it out. I'll read through some of your older answers and upvote quality ones to compensate. ;) – Subdue
@Subdue Nah, don't worry about it. Next time. (: – Tat

Based on some calculations and simulations I ran, I came up with two fairly similar solutions that seem to do the trick. One uses pre-multiplied colors in combination with a single (separate) blend function; the other works without pre-multiplied colors, but requires changing the blend function a couple of times in the process.

Option 1: Single Blend Function, Pre-Multiplication

This approach works with a single blend function through the entire process. The blend function is:

glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);

It requires pre-multiplied colors, which means that if your input color would normally be (r, g, b, a), you use (r * a, g * a, b * a, a) instead. You can perform the pre-multiplication in the fragment shader; a minimal sketch follows the list below.

The sequence is:

  1. Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE).
  2. Set render target to FBO.
  3. Render layers that you want rendered to FBO, using pre-multiplied colors.
  4. Set render target to default framebuffer.
  5. Render layers you want below FBO content, using pre-multiplied colors.
  6. Render FBO attachment, without applying pre-multiplication since the colors in the FBO are already pre-multiplied.
  7. Render layers you want on top of FBO content, using pre-multiplied colors.
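
A minimal sketch of this sequence follows. The fragment shader, the fbo handle, and the drawLayers*/drawFboQuad helpers are illustrative placeholders, not part of the question's framework:

// Pre-multiplying fragment shader bound for every regular layer
// (illustrative; adapt to your own vertex outputs and texturing).
const char* premultiplyFrag = R"(
    #version 330 core
    smooth in vec4 color;
    out vec4 colorResult;
    void main(){
        colorResult = vec4(color.rgb * color.a, color.a); // pre-multiply RGB by alpha
    }
)";

// 1. One blend function for the entire process:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // 2. render target = FBO
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawLayersForTexture();                   // 3. pre-multiplying shader bound

glBindFramebuffer(GL_FRAMEBUFFER, 0);     // 4. render target = default framebuffer
drawLayersBelow();                        // 5. pre-multiplying shader bound
drawFboQuad();                            // 6. plain textured shader; FBO texels are already pre-multiplied
drawLayersAbove();                        // 7. pre-multiplying shader bound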

Option 2: Switch Blend Functions, without Pre-Multiplication

This approach does not require pre-multiplication of the colors for any step. The downside is that the blend function has to be switched a few times during the process; a sketch follows the list below.

  1. Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE).
  2. Set render target to FBO.
  3. Render layers that you want rendered to FBO.
  4. Set render target to default framebuffer.
  5. (optional) Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
  6. Render layers you want below FBO content.
  7. Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
  8. Render FBO attachment.
  9. Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
  10. Render layers you want on top of FBO content.
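
The same steps as a code sketch (fbo and the drawLayers*/drawFboQuad helpers are again placeholders for your own code):

glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);    // 1.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);                 // 2.
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawLayersForTexture();                                 // 3.

glBindFramebuffer(GL_FRAMEBUFFER, 0);                   // 4.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);      // 5. (optional)
drawLayersBelow();                                      // 6.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);            // 7.
drawFboQuad();                                          // 8.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);      // 9.
drawLayersAbove();                                      // 10.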

Explanation and Proof

I think Option 1 is nicer and possibly more efficient, because it does not require switching blend functions during rendering. So the detailed explanation below is for Option 1. The math for Option 2 is pretty much the same, though. The only real difference is that Option 2 uses GL_SRC_ALPHA for the first term of the blend function to perform the pre-multiplication where necessary, whereas Option 1 expects pre-multiplied colors to come into the blend function.

To illustrate that this works, let's go through an example where 3 layers are rendered. I'll do all the calculations for the r and a components. The calculations for g and b would be equivalent to the ones for r. We will render three layers in the following order:

(r1, a1)  pre-multiplied: (r1 * a1, a1)
(r2, a2)  pre-multiplied: (r2 * a2, a2)
(r3, a3)  pre-multiplied: (r3 * a3, a3)

For the reference calculation, we blend these 3 layers with the standard GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend function. We don't need to track the resulting alpha here since DST_ALPHA is not used in the blend function, and we don't use the pre-multiplied colors yet:

after layer 1: (a1 * r1)
after layer 2: (a2 * r2 + (1.0 - a2) * a1 * r1)
after layer 3: (a3 * r3 + (1.0 - a3) * (a2 * r2 + (1.0 - a2) * a1 * r1)) =
               (a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1)

So the last term is our target for the final result. Now, we render layers 2 and 3 into an FBO. Later we will render layer 1 into the frame buffer, and then blend the FBO on top of it. The goal is to get the same result.

From now on, we will apply the blend function listed at the start, and use pre-multiplied colors. We will also need to calculate the alphas, since DST_ALPHA is used in the blend function. First, we render layers 2 and 3 into the FBO:

after layer 2: (a2 * r2, a2)
after layer 3: (a3 * r3 + (1.0 - a3) * a2 * r2, (1.0 - a2) * a3 + a2)

Now we render to the primary framebuffer. Since we don't care about the resulting alpha, I'll only calculate the r component again:

after layer 1: (a1 * r1)

Now we blend the content of the FBO on top of this. So what we calculated for "after layer 3" in the FBO is our source color/alpha, a1 * r1 is the destination color, and GL_ONE, GL_ONE_MINUS_SRC_ALPHA is still the blend function. The colors in the FBO are already pre-multiplied, so there will be no pre-multiplication in the shader while blending the FBO content:

srcR = a3 * r3 + (1.0 - a3) * a2 * r2
srcA = (1.0 - a2) * a3 + a2
dstR = a1 * r1
ONE * srcR + ONE_MINUS_SRC_ALPHA * dstR
    = srcR + (1.0 - srcA) * dstR
    = a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - ((1.0 - a2) * a3 + a2)) * a1 * r1
    = a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3 + a2 * a3 - a2) * a1 * r1
    = a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1

Compare the last term with the reference value we calculated above for the standard blending case, and you can tell that it's exactly the same.
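
If you want to check it numerically as well, this small stand-alone C++ program (with arbitrarily chosen values) runs the reference blend and the FBO-composite blend side by side and prints the same number for both:

#include <cstdio>

int main(){
    // Arbitrary test layers: (r, a)
    float r1 = 0.9f, a1 = 0.6f;
    float r2 = 0.3f, a2 = 0.25f;
    float r3 = 0.7f, a3 = 0.75f;

    // Reference: all three layers straight into the back buffer with
    // (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), background cleared to 0.
    float ref = 0.0f;
    ref = a1 * r1 + (1.0f - a1) * ref;
    ref = a2 * r2 + (1.0f - a2) * ref;
    ref = a3 * r3 + (1.0f - a3) * ref;

    // Option 1: layers 2 and 3 into an FBO cleared to (0, 0, 0, 0), using
    // pre-multiplied colors and (GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
    // GL_ONE_MINUS_DST_ALPHA, GL_ONE).
    float fboR = 0.0f, fboA = 0.0f;
    fboR = (r2 * a2) + (1.0f - a2) * fboR;  fboA = (1.0f - fboA) * a2 + fboA;
    fboR = (r3 * a3) + (1.0f - a3) * fboR;  fboA = (1.0f - fboA) * a3 + fboA;

    // Layer 1 into the back buffer, then the FBO content on top,
    // both with the same blend function.
    float dstR = a1 * r1;
    float composited = fboR + (1.0f - fboA) * dstR;

    std::printf("reference = %f, composited = %f\n", ref, composited);
    return 0;
}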

This answer to a similar question has some more background on the GL_ONE_MINUS_DST_ALPHA, GL_ONE part of the blend function: OpenGL ReadPixels (Screenshot) Alpha.

Resistor answered 24/6, 2014 at 6:57 Comment(5)
Thanks! I'll take a look tonight. This is the kind of advice/breakdown I was looking for. Very easy to understand and actionable. – Subdue
For curiosity and completeness, could you outline the other method that involves switching blend functions as well? It would be beneficial for others stumbling across this problem. – Subdue
Yes, I can add that tonight. – Resistor
Ok, I added details about the alternate approach I had mentioned in passing in the original version. – Resistor
Awesome! BOTH work. A million points to you, sir. I chose to go with your second option (switching blend modes) because I can implement it entirely within my "Clipped" class, and the shader option requires changing my default shader and blend function for everything. But both work! – Subdue

I achieved my goal. Now, let me share this information with the internet, since it exists nowhere else that I could find.

[Image: the corrected render-to-texture result, matching the direct-to-screen rendering]

  1. Create your framebuffer (glGenFramebuffers, glBindFramebuffer, etc.)
  2. Clear the framebuffer to 0, 0, 0, 0
  3. Set your viewport properly. This is all basic stuff I took for granted in the question, but want to include here.
  4. Now, render your scene to the framebuffer normally with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Make sure the scene is sorted (just as you would normally.)
  5. Now bind the included fragment shader. This will undo the damage dealt to the image color values via the blend function.
  6. Render the texture to your screen with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
  7. Go back to rendering as normal with a regular shader.

The code I included in the question remains basically untouched, except that I make sure to bind the shader listed below in my "preDraw" function. That function is specific to my own little framework, but it is basically the "draw to screen" call for my rendered texture.

I call this the "unblend" shader.

#version 330 core

smooth in vec4 color;
smooth in vec2 uv;

uniform sampler2D texture;

out vec4 colorResult;

void main(){
    vec4 textureColor = texture2D(texture, uv.st);

    textureColor/=sqrt(textureColor.a);

    colorResult = textureColor * color;
}

Why do I do textureColor/=sqrt(textureColor.a)? Because the original color is figured like this:

resultR = r * a, resultG = g * a, resultB = b * a, resultA = a * a

Now, if we want to undo that we need to figure out what a is. The easiest way to find is to solve for "a" here:

resultA = a * a

If a is .25 when originally rendering we have:

resultA = .25 * .25

Or:

resultA = 0.0625

When the texture is being drawn to the screen though, we don't have "a" anymore. We know what resultA is, it's the texture's alpha channel. So we can sqrt(resultA) to get .25 back. Now with that value we can divide to undo the multiply:

textureColor/=sqrt(textureColor.a);

And that fixes everything up undoing the blending!

EDIT: Well... kinda, at least. There is a slight inaccuracy; in this case I can show it by rendering over a clear color that is not identical to the framebuffer clear color. Some alpha information seems to be lost, probably in the RGB channels. This is still good enough for me, but I wanted to follow up with a screenshot showing the inaccuracy before signing off. If anyone has a solution, please provide it!

I have opened a bounty to bring this answer up to a canonical, 100% correct solution. Right now, if I render more partially transparent objects over the existing transparency, the transparency accumulates differently than it does on the right, resulting in a lightening of the final texture beyond what is shown on the right. Likewise, when rendered over a non-black background, it's clear the results of the existing solution differ slightly, as demonstrated above.

A proper solution would be identical in all cases. My existing solution cannot take the destination blending into account in the shader correction, only the source alpha.

[Image: screenshot showing the slight inaccuracy when rendering over a clear color that differs from the framebuffer clear color]

Subdue answered 22/6, 2014 at 3:12 Comment(0)

In order to do this in a single pass, you need support for separate color and alpha blend functions. First you render the texture, which has the foreground contribution stored in the alpha channel (i.e. 1 = fully opaque, 0 = fully transparent) and the pre-multiplied source color in the RGB channels. To create this texture, do the following operations:

  1. clear the texture to RGBA=[0, 0, 0, 0]
  2. set the color channel blending to src_color*src_alpha+dst_color*(1-src_alpha)
  3. set the alpha channel blending to src_alpha*(1-dst_alpha)+dst_alpha
  4. render the scene to the texture

To set the mode specified by 2) and 3), you can do: glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE) and glBlendEquation(GL_FUNC_ADD)

Next render this texture to the scene by setting the color blending to: src_color+dst_color*(1-src_alpha), i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and glBlendEquation(GL_FUNC_ADD)
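
As a code sketch, the two states would look something like this (the draw calls in the comments stand in for your own rendering code):

// While rendering the scene into the texture:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);
glBlendEquation(GL_FUNC_ADD);
// ... draw the scene into the FBO ...

// While drawing the resulting texture into the main scene:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
// ... draw the textured quad ...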

Tat answered 25/6, 2014 at 3:10 Comment(0)
T
0

Your problem is older than OpenGL, or personal computers, or indeed any living human. You're trying to blend two images together and make it look like they weren't blended at all. Printing presses face this exact problem. When ink is applied to paper, the result is a blend between the ink color and the paper color.

The solution is the same in paper as it is in OpenGL. You must alter your source image in order to control your final result. This is easy enough to figure out if you examine the math used to blend.

For each of R, G, B, the resultant color is (old * (1-opacity)) + (new * opacity). The basic scenario, and the one you'd like to emulate, is drawing a color directly onto the final back buffer at opacity A.

For example, opacity is 50% and your green channel has 0xFF. The result should be 0x7F on a black background (including unavoidable rounding error). You probably can't assume the background is black, so expect the green channel to vary between 0x7F and 0xFF.

You'd like to know how to emulate that result when you're really rendering to a texture, then rendering the texture to the back buffer. It turns out that the "vague suggestions to use 'premultiplied alpha'" were correct. Whereas your solution is to use a shader to unblend a previous blend operation in the last step, the standard solution is to multiply the colors of your original source texture by the alpha channel (aka premultiplied alpha). When compositing into the intermediate texture, the source RGB channels are blended without multiplying by alpha. When rendering the texture to the back buffer, the RGB channels are again blended without multiplying by alpha. Thus you neatly avoid the multiple-multiplication problem.
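
Written out per channel (a sketch using GL-style factor names, not any particular API), the difference is only where the multiply by alpha happens:

// Straight alpha, i.e. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
outR = srcR * srcA + dstR * (1.0f - srcA);

// Pre-multiplied alpha, i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA),
// where srcR was already multiplied by srcA before blending:
outR = srcR + dstR * (1.0f - srcA);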

Please consult these resources for a better understanding. I and most others are more familiar with this technique in DirectX, so you may have to search for the appropriate OGL flags.

Thorpe answered 24/6, 2014 at 1:14 Comment(2)
My problem is that if I have A, B, C, D, and E rects, all at different transparency, being rendered from a scene graph, I do not understand how to do premultiplied alpha in a generic way. Most premultiplied alpha resources I have read assume one source texture being rendered... but I want to apply this to the whole scene. Maybe I need to re-read your answer, but I'm not quite able to sort it out from just this description without possibly another deep dive in over my head through those references. More precise steps for my specific problem would be appreciated. – Subdue
I really appreciate you taking the time to answer, though; I'll upvote regardless. – Subdue
