See JWWalker's answer first. This answer is a supplement to it. A quick illustration:

`GL_RGBA` stored in `GL_UNSIGNED_INT_8_8_8_8`: `0xrrggbbaa`

`GL_RGBA` stored in `GL_UNSIGNED_BYTE`: `[0xrr, 0xgg, 0xbb, 0xaa]`
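
To make the two layouts concrete, here is a small sketch (the channel values are made up) that builds the same color both ways:

```cpp
#include <cstdint>

uint8_t r = 0x11, g = 0x22, b = 0x33, a = 0x44;

// GL_UNSIGNED_INT_8_8_8_8: one packed integer per pixel,
// with the first component (red) in the most significant byte.
uint32_t packed = (uint32_t{r} << 24) | (uint32_t{g} << 16) |
                  (uint32_t{b} << 8)  |  uint32_t{a};   // 0x11223344

// GL_UNSIGNED_BYTE: four separate bytes per pixel, in memory order.
uint8_t bytes[4] = {r, g, b, a};
```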
In both cases, RGBA is in the logical order r, g, b, a, but on a little-endian machine (the common architecture), `0xrrggbbaa` has its little (least-significant) byte stored first. If you read the `uint32_t` one byte at a time, you'll get `0xaa` first! A C++ example:
```cpp
#include <cstdint>

uint32_t color{0x11223344};
uint8_t first_byte{reinterpret_cast<uint8_t*>(&color)[0]};
```
`first_byte` will equal `0x44`.
One thing that's confusing is the word "first". It can mean "appearing first when written", as in "The red byte is first in `0xrrggbbaa`". That is different from "having the lowest memory address", as in "The alpha byte is first in `0xrrggbbaa` when encoded little-endian"! When you use `GL_RGBA`, it sure looks like red will be first, but when the color is packed into a 4-byte little-endian integer, red is only first in the hex representation.
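
To see both meanings of "first" at once, here is a small self-contained sketch that prints the packed color's bytes in increasing address order (output shown for a little-endian machine):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    uint32_t color = 0x11223344;               // 0x11 is "first" when written
    uint8_t bytes[4];
    std::memcpy(bytes, &color, sizeof color);  // bytes in increasing address order
    for (uint8_t b : bytes) {
        std::printf("%02x ", static_cast<unsigned>(b));
    }
    std::printf("\n");  // little endian prints: 44 33 22 11
}
```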
From a comment by Kingston:

> `GL_UNSIGNED_BYTE` and `GL_UNSIGNED_INT_...` in this example are the pixel transfer types. They do not say anything about how GL stores the color, only how the "packed" colors are interpreted by GL when the color data are sent to it. Sort of an important distinction, because usually the goal with these more exotic formats is to match the client (CPU) and server (GPU) formats so that GL does not need to perform data conversion and can do a simple block transfer.
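
A minimal sketch of that point (assuming a current GL context and a bound `GL_TEXTURE_2D`; the image size and pixel values are made up, and the header name varies by platform):

```cpp
#include <GL/gl.h>   // may need glext.h for GL_UNSIGNED_INT_8_8_8_8
#include <cstdint>

// One byte per channel: memory order is r, g, b, a regardless of endianness.
const uint8_t pixel_bytes[4] = {0x11, 0x22, 0x33, 0x44};
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0,
//              GL_RGBA, GL_UNSIGNED_BYTE, pixel_bytes);

// One packed uint32 per pixel: on little endian, 0x44 (alpha) sits at the
// lowest address, so the bytes in memory are a, b, g, r.
const uint32_t pixel_packed[1] = {0x11223344};
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0,
//              GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, pixel_packed);
```

Both calls describe the same logical RGBA image, but the byte order each one expects in client memory differs, which is exactly why picking the transfer type that matches your data lets GL skip the conversion.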