Your arguments to glTexImage2D are inconsistent. The 3rd argument (GL_RGB) suggests that you want a 3-component texture, while the 7th (GL_RED) suggests a one-component texture. Your other attempt then uses GL_RG, which suggests 2 components. You need to use an internal texture format that stores unsigned shorts, like GL_RGB16UI.
If you want one component, your call would look like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
If you want three components:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, 640, 480, 0, GL_RGB_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
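For completeness, here is a minimal sketch of the full one-component setup (the texture object name tex and the wrap modes are assumptions about the surrounding code). The GL_NEAREST filters matter: an integer texture is incomplete when linear filtering is selected, so sampling it would not return useful values.

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* Integer textures cannot be linearly filtered; NEAREST is required. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);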
You also need to make sure that the types used in your shader for sampling the texture match the type of the data stored in the texture. In this example, since you use a 2D texture containing unsigned integer values, your sampler type should be usampler2D, and you want to store the result of the sampling operation (the result of the texture() call in the shader) in a variable of type uvec4. (paragraph added based on suggestion by Andon)
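A hypothetical fragment shader for the one-component case could look like the sketch below, embedded here as a C string. The uniform name depth_tex, the input uv, and the grayscale visualization are assumptions made for illustration only.

/* Sketch of a GLSL 3.30 fragment shader sampling the GL_R16UI texture.
   The sampler must be usampler2D and the sampled value uvec4 to match the
   unsigned integer internal format. */
const char *fragment_src =
    "#version 330 core\n"
    "uniform usampler2D depth_tex;\n"
    "in vec2 uv;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    uvec4 depth = texture(depth_tex, uv);\n"                     /* uvec4, not vec4 */
    "    frag_color = vec4(vec3(float(depth.r) / 65535.0), 1.0);\n"   /* show as grayscale */
    "}\n";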
Some more background on the format/type arguments of glTexImage2D, since this is a source of fairly frequent misunderstandings:

The 3rd argument (internalFormat) is the format of the data that your OpenGL implementation will store in the texture (or at least the closest possible format if the hardware does not support the exact one), and that will be used when you sample from the texture.

The last 3 arguments (format, type, data) belong together. format and type describe what is in data, i.e. they describe the data you pass into the glTexImage2D call.
It is generally a good idea to keep the two matched, as in this case: the data you pass in is GL_UNSIGNED_SHORT, and the internal format GL_R16UI stores unsigned short values. In OpenGL ES the internal format is required to match format/type. Full OpenGL performs a conversion if necessary, which is undesirable for performance reasons, and also frequently not what you want, because the precision of the data in the texture will no longer be the same as the precision of your original data.
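To make the conversion point concrete, here is a hedged pair of calls for the one-component case (illustrative only):

/* Matched: 16-bit unsigned data into a 16-bit unsigned integer format.
   No conversion happens and no precision is lost. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);

/* Mismatched (accepted by desktop GL, rejected by OpenGL ES): the driver
   converts each 16-bit value to 8-bit normalized on upload, which costs
   time and discards the low 8 bits of precision. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 640, 480, 0,
             GL_RED, GL_UNSIGNED_SHORT, kinect_depth);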
Comments:

… kinect_depth? Why does your second example use GL_RG while your first uses GL_RGB? – Jumbala

Use GL_R16UI, GL_RG16UI or GL_RGB16UI, depending on the number of channels you have. – Pyuria

GL_UNSIGNED_SHORT in your call to glTexImage2D (...) has nothing to do with how the GPU stores your texture. That is only used by GL when it reads your image data, so it knows how to interpret the pixels. Chances are pretty good that GL_RGB (which is very vague, as it lacks a size) is going to turn out to be 8-bit unsigned normalized (GL_RGB8). – Poston