Precise control over texture bits in GLSL
I am trying to implement an octree traversal scheme using OpenGL and GLSL, and would like to keep the data in textures. While there is a large selection of formats for the texture data (floats and integers of different sizes), I have trouble figuring out whether there is a way to get more precise control over the individual bits and thus achieve more efficient, compact storage. This might be a general problem, not one specific to OpenGL and GLSL.

As a simple toy example, let's say that I have a texel containing a 16-bit integer. I want to encode two 1-bit booleans, one 10-bit integer value and one 4-bit integer value into this texel. Is there a technique to encode these components when creating the texture, and then decode them when sampling the texture in a GLSL shader?
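
For concreteness, here is a minimal sketch of one way such a packing could be done on the CPU side before uploading the texture data (the layout used here, the two flags in the top bits followed by the 10-bit and the 4-bit field, is just an assumption for illustration):

#include <stdint.h>

// Pack two flags, a 10-bit value and a 4-bit value into one 16-bit texel.
// Assumed layout: bit 15 = b1, bit 14 = b2, bits 4..13 = i1, bits 0..3 = i2.
uint16_t pack_texel(int b1, int b2, uint16_t i1, uint16_t i2)
{
    return (uint16_t)((b1 ? 0x8000u : 0u) |
                      (b2 ? 0x4000u : 0u) |
                      ((i1 & 0x3FFu) << 4) |
                      (i2 & 0xFu));
}

Each packed value would then go into the array that is uploaded to the texture, and the shader can mask and shift the fields back out when sampling.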

Edit: Looks like I am in fact looking for bit manipulation techniques. Since they seem to be supported, I should be fine after some more research.

Shanon answered 19/2, 2013 at 15:26 Comment(3)
Are you asking how to do bit manipulation?Burnedout
Bit manipulation is possible in GLSL 1.30 (OpenGL 3.0); I don't know how you can read the raw int from the texture in GLSL, however... texture2D returns a float vec4Moretta
@NicolBolas: After some more searching, it looks like I am. I haven't done that a lot, so I was not sure. If GLSL does support it, I should be able to figure out where to start learning about it!Shanon

Integer and bit manipulation operations inside GLSL shaders have been supported since OpenGL 3 (and are thus present on DX10-class hardware, if that tells you more). So you can just do this bit manipulation on your own inside the shader.

But working with integers is one thing; getting them out of the texture is another. The standard OpenGL texture formats (the ones you may be used to) either store floats directly (like GL_R16F) or normalized fixed-point values (like GL_R16, effectively integers for the uninitiated ;)), but reading from them (using texture, texelFetch or whatever) will give you float values in the shader, from which you cannot easily or reliably reconstruct the original bit pattern of the internally stored integer.

So what you really need is an integer texture, which requires OpenGL 3, too (or maybe the GL_EXT_texture_integer extension, but hardware supporting that will likely have GL3 anyway). For your texture you need an actual integer internal format, e.g. GL_R16UI (a 1-component 16-bit unsigned integer), in contrast to the usual fixed-point formats (e.g. GL_R16 for a normalized [0,1] color with 16 bits of precision).

And then in the shader you need to use an integer sampler type, like e.g. usampler2D for an unsigned integer 2D texture (and likewise isampler... for the signed variants) to actually get an unsigned integer from your texture or texelFetch calls:

CPU:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, ..., GL_RED_INTEGER, GL_UNSIGNED_SHORT, data);
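
Two details of that call are easy to get wrong: for an integer internal format the pixel transfer format has to be GL_RED_INTEGER (plain GL_RED describes normalized data and makes the call fail with GL_INVALID_OPERATION), and integer textures are not filterable, so the sampling filters have to be set to nearest or the texture stays incomplete. A minimal setup sketch, with width, height and data as placeholders:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// integer textures cannot be linearly filtered, so sample them with GL_NEAREST
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// GL_RED_INTEGER marks the client data as unnormalized integers
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, data);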

GPU:

uniform usampler2D tex;

...
uint value = texture(tex, ...).r;
// bits 15 and 14 hold the two boolean flags
// (u-suffixed literals keep the operands unsigned, as needed before GLSL 4.00)
bool b1 = (value & 0x8000u) == 0x8000u,
     b2 = (value & 0x4000u) == 0x4000u;
// bits 4..13 hold the 10-bit value, bits 0..3 the 4-bit value
uint i1 = (value >> 4) & 0x3FFu,
     i2 = value & 0xFu;
Fotheringhay answered 19/3, 2013 at 10:5 Comment(1)
Hi, that glTexImage2D is returning an error for me, could you please give a look? #21626209Paleoasiatic