This is a weird question, but I was wondering why glTexImage2D cares about the pixel data type and format of NULL data. The signature for glTexImage2D is...
void glTexImage2D(GLenum target, GLint level, GLint internalFormat,
                  GLsizei width, GLsizei height, GLint border,
                  GLenum format, GLenum type, const GLvoid *data);
The internalFormat tells the graphics driver how you want the data stored on the GPU, while format and type tell the driver what to expect from the GLvoid *data pointer. So if I don't pass any data, by passing NULL for instance, why does the graphics driver care what the format and type are?

This is a weird question because sometimes it doesn't care. The cases where it does (and I haven't checked every combination) are specifically when creating a depth texture, and, something I've come across recently, the integer formats like GL_RED_INTEGER, GL_RG_INTEGER, etc. and their corresponding internal formats like GL_R8I or GL_RGBA32UI. In contrast, the "simple" internal formats like GL_RGBA8 or GL_RGBA32F don't require format and type to correspond to them in any way. For some reason, with the former the data types and formats have to match exactly even when you're not passing any data. Why is that?
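To make the behaviour concrete, here is roughly what I mean (a minimal sketch, assuming a GL 3.3+ context with a loader providing the enums, and a texture already bound to GL_TEXTURE_2D; width and height are placeholders):

#include <GL/gl.h>   /* sized-format enums assumed available via a loader */

/* Sketch of the behaviour described above; all calls pass NULL data. */
static void allocate_with_null_data(GLsizei width, GLsizei height)
{
    /* "Simple" normalized/float internal formats: format/type only need to
     * be legal pixel-transfer enums; they don't have to match GL_RGBA32F. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);            /* accepted */

    /* Integer internal formats: format must be one of the *_INTEGER enums,
     * even though data is NULL. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);     /* GL_INVALID_OPERATION */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_INT, NULL);     /* accepted */

    /* Depth internal formats: format must be GL_DEPTH_COMPONENT. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);     /* GL_INVALID_OPERATION */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);  /* accepted */
}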
It's the format, and not the internalFormat ;) You can even get away without specifying an exact format on older implementations - it is declared as GLint instead of GLenum because you used to (GL 2.x) be able to use 1, 2, 3 or 4 for the internalFormat to tell GL how many components the texture image needed. – Prank
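A short sketch of the legacy usage that comment describes (assuming a GL 2.x / compatibility context; width and height are placeholders):

/* Legacy GL 2.x style: internalFormat given as a bare component count (1-4),
 * which is why the parameter is declared GLint rather than GLenum.
 * Only accepted in legacy/compatibility contexts. */
glTexImage2D(GL_TEXTURE_2D, 0, 3,            /* "3" = three components (RGB) */
             width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

/* Modern equivalent using a sized internal format. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8,
             width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);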