I am trying to make a particle system where, instead of a texture, each quad is rendered by a fragment shader like the one below.
uniform vec3 color;    // particle color
uniform float radius;  // particle radius in pixels
uniform float edge;    // width of the soft edge
uniform vec2 position; // particle center in window coordinates
uniform float alpha;
void main()
{
    // distance from this fragment to the particle center
    float dist = distance(gl_FragCoord.xy, position);
    // 1 inside the circle, 0 outside, with a smooth falloff of width 2*edge
    float intensity = smoothstep(dist - edge, dist + edge, radius);
    gl_FragColor = vec4(color, intensity * alpha);
}
Each particle is an object of a C++ class that wraps this shader and all of its variables together and draws it. I use openFrameworks, so the exact OpenGL calls are hidden from me.
I've read that particle systems are usually done with textures; however, I prefer this approach because it lets me add more functionality to each particle. The problem is that after only 30 particles, the framerate drops dramatically. Is there a more efficient way of doing this? I was thinking of putting the variables for each particle into arrays and sending those arrays to one fragment shader that renders all the particles in one go (see the sketch below). But this would mean the number of particles is fixed, because the uniform arrays in the shader would have to be declared with a size beforehand.
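For reference, a minimal sketch of that uniform-array idea; MAX_PARTICLES, the array names, and particleCount are my assumptions, and this shader would have to be drawn over a single fullscreen quad:

#define MAX_PARTICLES 64
uniform vec3 colors[MAX_PARTICLES];
uniform vec2 positions[MAX_PARTICLES];
uniform float radii[MAX_PARTICLES];
uniform float alphas[MAX_PARTICLES];
uniform float edge;
uniform int particleCount; // number of array entries actually in use
void main()
{
    vec4 result = vec4(0.0);
    for (int i = 0; i < particleCount; i++)
    {
        float dist = distance(gl_FragCoord.xy, positions[i]);
        float intensity = smoothstep(dist - edge, dist + edge, radii[i]);
        // accumulate all particles additively in one pass (blending choice is an assumption)
        result += vec4(colors[i], 1.0) * (intensity * alphas[i]);
    }
    gl_FragColor = result;
}

Note that every fragment on screen loops over every particle, so this scales as O(fragments × particles), and it needs a desktop GLSL version that allows non-constant loop bounds.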
Are non-texture-based particle systems simply too inefficient to be practical, or is there a way of designing this that I'm overlooking?
A draw call is emitted for every particle, and probably also some bindings. While this shouldn't be a huge problem for just 30 particles, the performance is not the best with that approach. The problem is not that you use a shader and not a texture, but that at least 2 API calls are done for every particle. A better way would be to store all particles in one VBO. You could either store their full geometry or use glDrawArraysInstanced and/or a geometry shader. – Sphygmograph
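To put the comment above into code: a minimal sketch of the instanced approach in raw OpenGL. The ParticleInstance layout, attribute locations, and function names are my assumptions; openFrameworks' own VBO wrappers could be used instead, and a matching vertex shader must forward the per-instance attributes to the fragment stage.

#include <cstddef>   // offsetof
#include <vector>
#include <GL/glew.h> // or whichever GL loader your setup uses

// Per-instance data, one entry per particle (layout is an assumption)
struct ParticleInstance {
    float pos[2];    // center in window coordinates
    float color[3];
    float radius;
    float alpha;
};

GLuint vao = 0, quadVbo = 0, instanceVbo = 0;

// One-time setup: one shared quad plus a buffer of per-instance attributes
void setupParticleBuffers()
{
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // A unit quad as a triangle strip; positioned and scaled per instance in the vertex shader
    const float quad[] = { -1,-1,  1,-1,  -1,1,  1,1 };
    glGenBuffers(1, &quadVbo);
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

    // Instance attributes: divisor 1 means "advance once per instance, not per vertex"
    glGenBuffers(1, &instanceVbo);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    const GLsizei stride = sizeof(ParticleInstance);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ParticleInstance, pos));
    glVertexAttribDivisor(1, 1);
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ParticleInstance, color));
    glVertexAttribDivisor(2, 1);
    glEnableVertexAttribArray(3);
    glVertexAttribPointer(3, 1, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ParticleInstance, radius));
    glVertexAttribDivisor(3, 1);
    glEnableVertexAttribArray(4);
    glVertexAttribPointer(4, 1, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ParticleInstance, alpha));
    glVertexAttribDivisor(4, 1);
}

// Every frame: re-upload all particle data, then issue ONE draw call for the whole system
void drawParticles(const std::vector<ParticleInstance>& particles)
{
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, particles.size() * sizeof(ParticleInstance),
                 particles.data(), GL_STREAM_DRAW);
    glBindVertexArray(vao);
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, (GLsizei)particles.size());
}

With this layout, the original fragment shader can stay almost unchanged: the per-particle uniforms just become varyings fed from the per-instance attributes, and the particle count is no longer fixed at compile time.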