Resizing point sprites based on distance from the camera

I'm writing a clone of Wolfenstein 3D using only core OpenGL 3.3 for university and I've run into a bit of a problem with the sprites, namely getting them to scale correctly based on distance.

From what I can tell, previous versions of OGL would in fact do this for you, but that functionality has been removed, and all my attempts to reimplement it have resulted in complete failure.

My current implementation is passable at a distance, not too shabby at mid range, and bizarre at close range.

The main problem (I think) is that I have no understanding of the maths I'm using.
The target size of the sprite is slightly bigger than the viewport, so it should 'go out of the picture' as you get right up to it, but it doesn't. It gets smaller, and that's confusing me a lot.
I recorded a small video of this, in case words are not enough. (Mine is on the right)

[Video: Expected Result / Actual Result]

Can anyone direct me to where I'm going wrong, and explain why?

Code:
C++

// setup
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
glEnable(GL_PROGRAM_POINT_SIZE);

// Drawing
glUseProgram(StaticsProg);
glBindVertexArray(statixVAO);
glUniformMatrix4fv(uStatixMVP, 1, GL_FALSE, glm::value_ptr(MVP));
glDrawArrays(GL_POINTS, 0, iNumSprites);

Vertex Shader

#version 330 core

layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;

flat out int spriteNum;

uniform mat4 MVP;

const float constAtten  = 0.9;
const float linearAtten = 0.6;
const float quadAtten   = 0.001;

void main() {
    spriteNum = spriteNum_;
    gl_Position = MVP * vec4(pos.x + 1, pos.y, 0.5, 1); // Note: I have fiddled the MVP so that z is height rather than depth, since this is how I learned my vectors.
    float dist = distance(gl_Position, vec4(0,0,0,1));
    float attn = constAtten / ((1 + linearAtten * dist) * (1 + quadAtten * dist * dist));
    gl_PointSize = 768.0 * attn;
}

Fragment Shader

#version 330 core

flat in int spriteNum;

out vec4 color;

uniform sampler2DArray Sprites;

void main() {
    color = texture(Sprites, vec3(gl_PointCoord.s, gl_PointCoord.t, spriteNum));
    if (color.a < 0.2)
        discard;
}
Engrossment answered 22/12, 2011 at 19:33 Comment(2)
I think it's because he set the sprite center position to lower left, but he really wants the anchor point in the top center. – Lactometer
I want the attachment point to be in the center of the bottom edge, but because of a bunch of alignment issues I've not got around to sorting out, I've got to shift a lot of things around somewhat oddly. – Engrossment

First of all, I don't really understand why you use pos.x + 1.

Next, like Nathan said, you shouldn't use the clip-space point, but the eye-space point. This means you only use the modelview-transformed point (without projection) to compute the distance.

uniform mat4 MV;       //modelview matrix

vec3 eyePos = (MV * vec4(pos.x, pos.y, 0.5, 1)).xyz;   // note the .xyz, since MV * vec4 yields a vec4

Furthermore I don't completely understand your attenuation computation. At the moment a higher constAtten value means less attenuation. Why don't you just use the model that OpenGL's deprecated point parameters used:

float dist = length(eyePos);   //since the distance to (0,0,0) is just the length
float attn = inversesqrt(constAtten + linearAtten*dist + quadAtten*dist*dist);
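
For reference, this formula mimics the deprecated fixed-function point attenuation. A rough sketch of that legacy setup (compatibility profile only, not available in core 3.3, so purely for comparison; the coefficients are the ones from the question's shader):

// Legacy fixed-function equivalent (compatibility profile only, shown for comparison).
// The driver then derived: size * sqrt(1 / (a + b*dist + c*dist*dist)),
// clamped to the supported point size range.
GLfloat atten[3] = { 0.9f, 0.6f, 0.001f };   // a (constant), b (linear), c (quadratic)
glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, atten);
glPointSize(768.0f);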

EDIT: But in general I don't think this attenuation model is a good approach, because often you just want the sprite to keep its object-space size, and you have to fiddle quite a bit with the attenuation factors to achieve that.

A better way is to pass in the sprite's object-space size and compute the screen-space size in pixels (which is what gl_PointSize actually is) from that, using the current view and projection setup:

uniform mat4 MV;                //modelview matrix
uniform mat4 P;                 //projection matrix
uniform float spriteWidth;      //object space width of sprite (could also be a per-vertex input)
uniform float screenWidth;      //screen width in pixels

vec4 eyePos = MV * vec4(pos.x, pos.y, 0.5, 1); 
vec4 projCorner = P * vec4(0.5*spriteWidth, 0.5*spriteWidth, eyePos.z, eyePos.w);
gl_PointSize = screenWidth * projCorner.x / projCorner.w;
gl_Position = P * eyePos;

This way the sprite always gets the size it would have when rendered as a textured quad with a width of spriteWidth.
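
Put together, a minimal sketch of a complete vertex shader using this approach could look like the one below. It keeps the question's inputs and the hard-coded 0.5 z value; MV, P, spriteWidth and screenWidth are uniforms the application would have to supply instead of the premultiplied MVP:

#version 330 core

layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;

flat out int spriteNum;

uniform mat4 MV;            // modelview matrix
uniform mat4 P;             // projection matrix
uniform float spriteWidth;  // object-space width of the sprite
uniform float screenWidth;  // viewport width in pixels

void main() {
    spriteNum = spriteNum_;

    // eye-space position: modelview only, no projection yet
    vec4 eyePos = MV * vec4(pos.x, pos.y, 0.5, 1);

    // project a corner offset by half the sprite width at the same depth;
    // after the w-divide its x value is the half-width in NDC, and since
    // NDC spans 2 units across the screen, half-width * screenWidth is the
    // full width in pixels
    vec4 projCorner = P * vec4(0.5 * spriteWidth, 0.5 * spriteWidth, eyePos.z, eyePos.w);
    gl_PointSize = screenWidth * projCorner.x / projCorner.w;

    gl_Position = P * eyePos;
}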

EDIT: Of course you should also keep in mind the limitations of point sprites. A point sprite is clipped based on its center position. This means that when its center moves out of the screen, the whole sprite disappears. With large sprites (like in your case, I think) this might really be a problem.

Therefore I would rather suggest you use simple textured quads. This way you circumvent the whole attenuation problem, as the quads are just transformed like every other 3D object. You only need to implement the rotation toward the viewer, which can be done either on the CPU or in the vertex shader.
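
For the vertex-shader route, a rough sketch could look like the following (cornerOffset and spriteSize are made-up names for this example; each sprite would be submitted as four vertices sharing the same center position, drawn as a triangle strip or with indices):

#version 330 core

layout(location = 0) in vec2 pos;           // sprite center, as in the question
layout(location = 1) in vec2 cornerOffset;  // per-vertex corner, e.g. x in [-0.5, 0.5], y in [0, 1]

out vec2 texCoord;

uniform mat4 MV;           // modelview matrix
uniform mat4 P;            // projection matrix
uniform float spriteSize;  // object-space width/height of the sprite

void main() {
    // move the center into eye space, then offset the corner in the
    // eye-space xy plane so the quad always faces the camera
    vec4 eyePos = MV * vec4(pos.x, pos.y, 0.0, 1.0);
    eyePos.xy += cornerOffset * spriteSize;

    texCoord = cornerOffset + vec2(0.5, 0.0);   // remap the corner range to [0, 1]
    gl_Position = P * eyePos;
}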

Liaoning answered 22/12, 2011 at 20:9 Comment(6)
Your second bit of code causes this, and I'm not entirely sure what it's doing. Could you explain it further please? I will also look into the quads. – Engrossment
Yeah, the one with projCorner. – Engrossment
@Engrossment The second bit basically first transforms the sprite position into eye/view space. Then it constructs a point that is the upper-left corner of a sprite of the same size as the input sprite, located in the center of the screen and at the distance of the input sprite. This corner is then transformed by the projection matrix (and divided by w). The horizontal distance of this point to (0,0), i.e. its x value, is actually its screen size in [0,1], which just needs to be multiplied by the screen width to get the pixel size. I don't know why it doesn't work though. – Liaoning
@LexiR In other words, it takes a sprite that is in the screen center, at the camera distance of the input sprite and of the same size as the input sprite, transforms it into screen space, and takes its screen-space width in pixels as the point size. – Liaoning
Isn't w always 1 when describing positions though? – Engrossment
@LexiR Not after the projection transformation. When using a perspective projection, the resulting w is not 1, and the subsequent division by w is what actually realizes the perspective distortion. So this is the part where our sprite's size depends on its distance to the camera. – Liaoning

Based on Christian Rau's answer (last edit), I implemented a geometry shader that builds a billboard in view space, which seems to solve all my problems:

[Video: Expected Result / Actual Result]

Here are the shaders: (Note that I have fixed the alignment issue that required the original shader to add 1 to x)

Vertex Shader

#version 330 core

layout (location = 0) in vec4 gridPos;
layout (location = 1) in int  spriteNum_in;

flat out int spriteNum;

// simple pass-thru to the geometry generator
void main() {
    gl_Position = gridPos;
    spriteNum = spriteNum_in;
}

Geometry Shader

#version 330 core

layout (points) in;
layout (triangle_strip, max_vertices = 4) out;

flat in int spriteNum[];

smooth out vec3 stp;

uniform mat4 Projection;
uniform mat4 View;

void main() {
    // Transform into view (eye) space.
    vec4 pos = View * gl_in[0].gl_Position;

    int snum = spriteNum[0];

    // Bottom left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 0, snum);
    EmitVertex();

    // Top left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 1, snum);
    EmitVertex();

    // Bottom right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 0, snum);
    EmitVertex();

    // Top right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 1, snum);
    EmitVertex();

    EndPrimitive();
}

Fragment Shader

#version 330 core

smooth in vec3 stp;

out vec4 colour;

uniform sampler2DArray Sprites;

void main() {
    colour = texture(Sprites, stp);
    if (colour.a < 0.2)
        discard;
}
Papst answered 22/12, 2011 at 19:33 Comment(3)
So Christian Rau's answer is what got you to where you are, yet you accept your own answer? Wow, that is incredibly thoughtless after the effort he put in. Way to take people for granted. – Smallish
@NickWiggill I hadn't really thought about it like that, to be honest. I made and accepted my answer because I thought it would be more helpful to anyone else with my problem. – Engrossment
+1 on your answer in return for your consideration in reassigning the accepted answer. Yours will probably end up with more votes anyway. – Smallish

I don't think you want to base the distance calculation in your vertex shader on the projected position. Instead just calculate the position relative to your view, i.e. use the model-view matrix instead of the model-view-projection one.

Think about it this way -- in projected space, as an object gets closer to you, its distance in the horizontal and vertical directions becomes exaggerated. You can see this in the way the lamps move away from the center toward the top of the screen as you approach them. That exaggeration of those dimensions is going to make the distance get larger when you get really close, which is why you're seeing the object shrink.
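
In shader terms, that boils down to something like this sketch (assuming the modelview matrix is passed in as a separate MV uniform alongside the existing MVP):

// distance measured in eye space: modelview only, no projection
vec4 eyePos = MV * vec4(pos.x + 1, pos.y, 0.5, 1);
float dist = length(eyePos.xyz);   // the camera sits at the eye-space origin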

Lactometer answered 22/12, 2011 at 19:52 Comment(0)

At least in OpenGL ES 2.0, there is a maximum size limitation on gl_PointSize imposed by the OpenGL implementation. You can query the supported range with ALIASED_POINT_SIZE_RANGE.
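
For example, a query along these lines (the actual range is implementation-dependent):

GLfloat range[2];
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, range);
// range[0] is the smallest and range[1] the largest supported point size;
// larger requested sizes are clamped by the implementation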

Tacmahack answered 14/12, 2012 at 19:2 Comment(0)
