I'm loading this (very small) image using:
UIImage* image = [UIImage imageNamed:@"someFile.png"];
The image is 4x1 and it contains a red, green, blue and white pixel from left to right, in that order.
Next, I get the pixel data out of the underlying CGImage:
NSData* data = (NSData*)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
Now, for some reason, the pixel data is laid out differently depending on the iOS device.
When I run the app in the simulator or on my iPhone 4, the pixel data looks like this:
(255,0,0),(0,255,0),(0,0,255),(255,255,255)
So the pixels are 3 bytes each, with blue as the most significant byte and red as the least significant. I guess you'd call that BGR?
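(For reference, this is roughly how I'm printing the bytes. It's just a quick sketch: bytesPerPixel is hard-coded to what I observe on each device, and I'm ignoring any row padding since the image is only 4x1.)

const uint8_t* bytes = [data bytes];
NSUInteger bytesPerPixel = 3; // 4 on the first gen iPhone
for (NSUInteger i = 0; i < [data length]; i += bytesPerPixel) {
    printf("(");
    for (NSUInteger c = 0; c < bytesPerPixel; c++) {
        // Print each component of the pixel as a decimal byte value
        printf(c == 0 ? "%d" : ",%d", bytes[i + c]);
    }
    printf(")");
}
printf("\n");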
When I check the CGBitmapInfo, the kCGBitmapByteOrderMask bits come back as kCGBitmapByteOrderDefault, and I can't find anything that explains what "default" actually means.
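This is roughly how I'm checking it (assuming I'm applying the masks correctly):

CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(image.CGImage);
CGBitmapInfo byteOrder = bitmapInfo & kCGBitmapByteOrderMask;
if (byteOrder == kCGBitmapByteOrderDefault) {
    NSLog(@"byte order: default");
} else if (byteOrder == kCGBitmapByteOrder32Little) {
    NSLog(@"byte order: 32-bit little endian");
} else if (byteOrder == kCGBitmapByteOrder32Big) {
    NSLog(@"byte order: 32-bit big endian");
}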
On the other hand, when I run it on my first gen iPhone, the pixel data looks like this:
(0,0,255,255),(0,255,0,255),(255,0,0,255),(255,255,255,255)
So that's 4 bytes per pixel, with alpha as the most significant byte and blue as the least significant. So... that's called ARGB?
I've been looking at the CGBitmapInfo for clues on how to detect the layout. On the first gen iPhone, the kCGBitmapAlphaInfoMask bits are kCGImageAlphaNoneSkipFirst, which means the most significant bits are ignored; that much makes sense. The kCGBitmapByteOrderMask bits are kCGBitmapByteOrder32Little, but I don't know what that means or how to relate it back to how the R, G and B components are laid out in memory. Can anyone shed some light on this?
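My best guess (and this is exactly the part I'm unsure about) is that the alpha info tells you where the skipped/alpha component sits in the 32-bit pixel value, and the byte order tells you how that value is laid out in memory. Something like the sketch below is what I'm imagining, but I'd like confirmation that this reading is right:

CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(image.CGImage);
CGImageAlphaInfo alphaInfo = (CGImageAlphaInfo)(bitmapInfo & kCGBitmapAlphaInfoMask);
CGBitmapInfo byteOrder = bitmapInfo & kCGBitmapByteOrderMask;

if (alphaInfo == kCGImageAlphaNoneSkipFirst && byteOrder == kCGBitmapByteOrder32Little) {
    // My guess: the 32-bit pixel value is (skip, R, G, B), stored little-endian,
    // so the bytes in memory come out as B, G, R, skip.
    NSLog(@"memory layout: BGRX?");
} else if (alphaInfo == kCGImageAlphaNoneSkipFirst && byteOrder == kCGBitmapByteOrder32Big) {
    // Same pixel value stored big-endian: skip, R, G, B in memory.
    NSLog(@"memory layout: XRGB?");
}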
Thanks.
Doesn't kCIFormatRGBAh mean that in memory order red comes first and alpha last {red float16, green float16, blue float16, alpha float16}? If so, then kCIFormatBGRA8 could only sensibly mean the same thing: that blue comes in the first byte of memory. So in response to "that's called ARGB?", wouldn't it be BGRA? – Sinuous