Render to 1D texture
I'm trying several ways to implement a simple particle system. Here I am using the ping-pong technique between several textures, all attached to a single FBO.

I think all the bindings/setup are correct, because I can see that it writes to the B textures using the data from the A textures.

The problem is that only one texel of the B textures is being written to:

[image: only the center texel of each destination texture is filled]

In this image, I try to copy the texels from the source textures to the destination textures.

So let's get to the code:

Setup code

#define NB_PARTICLE 5

GLuint fbo;
GLuint tex[6];
GLuint tex_a[3];
GLuint tex_b[3];
int i;

// 3 textures for position/velocity/color
// we double them for ping-pong so 3 * 2 = 6 textures
glGenTextures(6, tex);

for (i = 0 ; i < 6 ; i++ )
{
    glBindTexture(GL_TEXTURE_1D, tex[i]);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, NB_PARTICLE, 0, GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // GL_CLAMP is not valid in a core profile
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}

for (i = 0; i < 3; i++)
{
    tex_a[i] = tex[i];
    tex_b[i] = tex[i + 3];
}

// Upload the initial particle data into the "a" textures
glBindTexture(GL_TEXTURE_1D, tex_a[0]);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, NB_PARTICLE, 0, GL_RGBA, GL_FLOAT, seed_pos);
glBindTexture(GL_TEXTURE_1D, tex_a[1]);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, NB_PARTICLE, 0, GL_RGBA, GL_FLOAT, seed_vit);
glBindTexture(GL_TEXTURE_1D, tex_a[2]);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, NB_PARTICLE, 0, GL_RGBA, GL_FLOAT, seed_color);

// Create the fbo
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

// Attach the textures to the corresponding FBO color attachments
// (i is already declared above)
for (i = 0; i < 6; i++)
    glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_1D, tex[i], 0);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

Render code

static int pingpong = 1;

glUseProgram(integratorShader);
glBindVertexArray(0); // note: a core profile context requires a non-zero VAO bound when drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

// Set the input textures to be the "a" textures
int i;
for (i = 0 ; i < 3 ; i++ )
{
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_1D, tex_a[i]);
}

// Set the draw buffers to the "b" texture attachments
// (glDrawBuffers takes a pointer to an array of attachment enums)
GLenum bufs[3] = { GL_COLOR_ATTACHMENT0 + (pingpong * 3),
                   GL_COLOR_ATTACHMENT0 + (pingpong * 3) + 1,
                   GL_COLOR_ATTACHMENT0 + (pingpong * 3) + 2 };
glDrawBuffers(3, bufs);

glViewport(0, 0, NB_PARTICLE, 1);

glDrawArrays(GL_POINTS, 0, NB_PARTICLE);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glUseProgram(0);

// Swap: A becomes B and B becomes A
swapArray(tex_a, tex_b, 3);
pingpong++;
pingpong %= 2;
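The swapArray helper is not shown above; presumably it swaps the two texture-name arrays element-wise so that the next pass reads what was just written. A minimal sketch of such a helper (the name and signature are inferred from the call site, not taken from the post):

```c
#include <stddef.h>

typedef unsigned int GLuint;  /* stand-in for the OpenGL typedef */

/* Swap the contents of two GLuint arrays element-wise. */
static void swapArray(GLuint *a, GLuint *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        GLuint tmp = a[i];
        a[i] = b[i];
        b[i] = tmp;
    }
}
```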

Vertex shader

#version 430

void main() {

    gl_Position = vec4(gl_VertexID, 0, 0, 1);

}

Fragment shader

#version 430

// binding = texture unit
layout (binding = 0) uniform sampler1D position_texture;
layout (binding = 1) uniform sampler1D vitesse_texture;
layout (binding = 2) uniform sampler1D couleur_texture;

// location = index in the "drawBuffers" array
layout (location = 0) out vec4 position_texel;
layout (location = 1) out vec4 vitesse_texel;
layout (location = 2) out vec4 couleur_texel;

void main() {

    vec4 old_position_texel = texelFetch(position_texture, int(gl_FragCoord.x), 0);
    vec4 old_vitesse_texel =  texelFetch(vitesse_texture, int(gl_FragCoord.x), 0);
    vec4 old_couleur_texel =  texelFetch(couleur_texture, int(gl_FragCoord.x), 0);

    position_texel = old_position_texel;
    vitesse_texel = old_vitesse_texel;
    couleur_texel = old_couleur_texel;
}

Since I'm using 1D textures, I thought the only data I need to send is an index, and that I could perfectly well use gl_VertexID for that. That's why I'm sending no attribute data at all.

I think the problem is the resulting gl_FragCoord values (and sadly that is the one variable I can't debug :( ).

Tetrapod answered 7/11, 2013 at 15:10

The problem is how you invoke the shaders. You have basically set up a standard GPGPU fragment-shader pipeline: process each texel of an input texture and write the result to the corresponding texel of the output texture. But your way of invoking this GPGPU pipeline, i.e. the way you render the geometry, is complete rubbish. All the other stuff, especially your fragment shader and your use of gl_FragCoord, is completely fine.

It seems you are confusing the computation pass (where you compute the particle positions) with the drawing pass (where you render the particles, probably as points). In this GPGPU stage there is absolutely no need to render N points, since you don't do anything useful in the vertex shader anyway. All you want is to generate a fragment for each pixel of the framebuffer, and the rasterizer already does that for you. When you want to draw a triangle in a "normal" graphics application, you don't subdivide that triangle into pixels yourself and draw those as GL_POINTS either, do you?

The exact reason it fails is your use of gl_VertexID as a coordinate. Since you don't apply any transformations, the output vertex coordinates have to lie in the [-1,1] box; everything outside gets clipped away. But your points are at positions (0,0), (1,0), (2,0), ..., so they don't cover the framebuffer at all, and only the point at (0,0) gets drawn, which is exactly the center pixel you see (the point at (1,0) probably gets clipped away too, due to rounding).
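To make the clipping argument concrete, you can run the fixed-function viewport transform by hand. A minimal sketch in plain C (no GL context needed; the function name is mine, but the formula is the standard NDC-to-window mapping):

```c
/* Fixed-function viewport transform for the x axis:
 * maps NDC x in [-1, 1] to window x in [0, width]. */
static float ndc_to_window_x(float ndc_x, int width)
{
    return (ndc_x + 1.0f) * 0.5f * (float)width;
}

/* With gl_Position.x = gl_VertexID and a 5-pixel-wide viewport:
 *   id 0 -> NDC 0.0 -> window x 2.5   (the center pixel)
 *   id 1 -> NDC 1.0 -> window x 5.0   (right on the edge)
 *   id 2..4 -> NDC 2.0..4.0           (outside [-1,1], clipped)
 */
```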

So what you have to do is just draw a single quad that covers the whole framebuffer. Without any transformations, that means it should cover the complete clip space and thus the [-1,1] square (in fact, when the framebuffer is just a single pixel high, a line would do). The rasterizer then generates all the fragments you need. You can still achieve this without any attributes, by just rendering a single quad (with no attribute arrays enabled):

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

And then use the vertex ID to select the appropriate corner of the quad:

const vec2 corners[4] = { 
    vec2(-1.0, 1.0), vec2(-1.0, -1.0), vec2(1.0, 1.0), vec2(1.0, -1.0) };

void main()
{
    gl_Position = vec4(corners[gl_VertexID], 0.0, 1.0);   
}

This will then generate a fragment for each pixel, and everything else should work fine.


As a side note, I'm not sure you really want a 1D texture here, as 1D textures have rather strict size constraints compared to buffers. You can just use a 2D texture; this won't change anything in your current processing (though it might require some small index arithmetic when actually drawing the particles).
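That index arithmetic amounts to mapping a linear particle index to a 2D texel coordinate. A sketch, assuming a row-major layout (the struct and function names are illustrative, not from the post):

```c
/* Map a linear particle index to a 2D texel coordinate,
 * assuming particles are packed row-major into a texture
 * of the given width. */
typedef struct { int x; int y; } Texel;

static Texel index_to_texel(int index, int tex_width)
{
    Texel t;
    t.x = index % tex_width;
    t.y = index / tex_width;
    return t;
}
```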

In fact, since you are already using OpenGL 4.3, the much more natural way to do GPGPU tasks like a particle engine is a compute shader. That way you can store your particle data in buffer objects and work directly on those from the compute shader, without packing it into textures (you probably want to render the particles later anyway, which would otherwise require a texture read in the vertex shader), without abusing the graphics pipeline for compute tasks, and without any ping-pong storage at all (just update the buffers in place).
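For reference, a minimal compute-shader sketch of such an in-place integration step (the particle struct layout, the binding point, and the dt uniform are assumptions, not code from the question):

```glsl
#version 430

layout (local_size_x = 64) in;

struct Particle {
    vec4 position;
    vec4 velocity;
    vec4 color;
};

// One SSBO holds all particles; no ping-pong storage needed
layout (std430, binding = 0) buffer Particles {
    Particle particles[];
};

uniform float dt;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= particles.length())
        return;
    // Update in place: integrate position from velocity
    particles[i].position += particles[i].velocity * dt;
}
```

Dispatch with glDispatchCompute((NB_PARTICLE + 63) / 64, 1, 1) and insert a glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT) before reading the buffer back for rendering.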

Cabin answered 7/11, 2013 at 16:53
The thing is, the previous versions of the particle system I did were working with arrays of uniforms, UBOs, and SSBOs, in which you explicitly say where you want to write data. But with textures, you need to work with the outputs of the vertex processing stage, which as you explained were completely wrong because of clipping. And yeah, I plan to try compute shaders too :) – Tetrapod

There is a problem in the vertex shader code:

gl_Position = vec4(gl_VertexID, 0, 0, 1);

The value of gl_VertexID will be 0, 1, 2, and so on for each point rendered in a call. For a point to be rendered, its coordinates have to lie within the range [-1.0, 1.0]. Only the gl_VertexID value of 0 falls in that range, so that is the only pixel you see written in the destination textures.

Alaniz answered 20/6, 2023 at 5:12 Comment(0)
