Which CGImageAlphaInfo should we use?

The Quartz 2D programming guide defines the availability of the various alpha storage modes:

[Image: table of supported pixel formats for bitmap graphics contexts, from the Quartz 2D Programming Guide]

Which ones should we use for RGB contexts, and why?

For non-opaque contexts, kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast?

For opaque contexts, kCGImageAlphaNoneSkipFirst or kCGImageAlphaNoneSkipLast?

Does the choice of value affect performance?

Typically, I see kCGImageAlphaPremultipliedFirst for non-opaque and kCGImageAlphaNoneSkipFirst for opaque contexts. Some state that these perform better but I haven't seen any hard evidence or documentation about this.

A quick GitHub search shows that developers favor kCGImageAlphaPremultipliedFirst over kCGImageAlphaPremultipliedLast and kCGImageAlphaNoneSkipLast over kCGImageAlphaNoneSkipFirst. Sadly, this is little more than anecdotal evidence.
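
For reference, here is a minimal Swift sketch of the kind of bitmap-context creation these constants apply to (the dimensions and the device RGB color space are arbitrary assumptions; the kCGImageAlpha… constants surface in Swift as CGImageAlphaInfo cases):

```swift
import CoreGraphics

let width = 256, height = 256                    // arbitrary example size
let colorSpace = CGColorSpaceCreateDeviceRGB()

// Non-opaque content: premultiplied alpha, stored first (ARGB)...
let nonOpaque = CGContext(data: nil, width: width, height: height,
                          bitsPerComponent: 8, bytesPerRow: 0,   // 0 lets Quartz choose the row stride
                          space: colorSpace,
                          bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
// ...or last (RGBA): CGImageAlphaInfo.premultipliedLast.rawValue

// Opaque content: alpha byte skipped, first (XRGB)...
let opaque = CGContext(data: nil, width: width, height: height,
                       bitsPerComponent: 8, bytesPerRow: 0,
                       space: colorSpace,
                       bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
// ...or last (RGBX): CGImageAlphaInfo.noneSkipLast.rawValue
```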

Doradorado answered 18/5, 2014 at 15:26 Comment(6)
I'd bet my hat the "best" option is different depending on what GPU hardware is available. Since I don't personally have access to all of the ~50 different GPUs Apple has shipped in the last few years, I can't help you with any data on what is best. Write an app to bench test it for performance and also profile the exact colour output of each method, then run it on whatever hardware you have available.Penhall
The various options define how the pixel data is calculated and represented internally... when building images from raw pixel data (feeding in C arrays), for example, it makes a huge difference if you choose the wrong alpha representation.Berkin
@AbhiBeckert That's my gut feeling as well, but it would be nice to have some confirmation or consensus on this. I bet most developers just copy CGBitmapContextCreate code without giving much thought to whether they're using the right bitmap info.Doradorado
@Berkin It makes sense to make the bitmap info match the raw data format, but does the bitmap info affect draw performance as well? Maybe the gain at creation (which can be done in the background) is lost at draw time (on the main thread).Doradorado
My guess is it will affect performance and also accuracy. Video cards, especially consumer-grade video cards, do not always output accurate colours. That is one of the reasons they're so much faster than doing the same operations on a CPU. They just need to be good enough that the user will not notice any inaccuracies. If you care, then you need to test it.Penhall
Never prematurely optimize - if you see a performance issue, then build versions using one or the other and run tests. I have played around fairly extensively with various formats and have not identified issues that end users will see...Berkin

An Apple engineer confirmed at WWDC 2014 that we should use kCGImageAlphaPremultipliedFirst or kCGImageAlphaNoneSkipFirst, and that the choice does affect performance.

Doradorado answered 5/6, 2014 at 23:43 Comment(1)
If you're reading this from the future, take it with a grain of salt. It might have changed!Doradorado

The most universally used is the RGBA format (true colour), where the alpha byte is the last byte describing the pixel - kCGImageAlphaPremultipliedLast (32 bits). Not all formats are supported universally by all devices. Just an observation, but all the PNG and JPEG images that I have processed after downloading them from the web are RGBA (or turn into that when I convert a PNG to a UIImage) - I've never come across an ARGB-formatted file in the wild, although I know it is possible.

The different formats affect the file size and the colour quality of the image (in case you didn't know); the 8-bit formats are black and white (greyscale). A discussion of all of these can be found here: http://en.wikipedia.org/wiki/Color_depth
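
If you want to check this on your own assets, Quartz reports the layout it chose when decoding an image; a small sketch ("photo" is a hypothetical asset name, not from the answer):

```swift
import UIKit

// Inspect the pixel layout Quartz chose when decoding an image.
if let cgImage = UIImage(named: "photo")?.cgImage {
    print(cgImage.alphaInfo.rawValue)   // e.g. CGImageAlphaInfo.premultipliedLast for typical RGBA
    print(cgImage.bitsPerComponent)     // usually 8
    print(cgImage.bitsPerPixel)         // 32 for 8-bit RGBA/ARGB
}
```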

Resistor answered 26/5, 2014 at 1:30 Comment(4)
One other thing - if you use Core Image filters, you'll find that a lot of filters, particularly CIColorMatrix, expect the RGBA (true color) format. It's not that you won't be able to use the other formats, but you will need to convert them first...Resistor
Hi Paulo. ARGB is commonly used in iOS assets, although I don't know exactly why. I think your answer should be a comment, as it's more of an explanation of color formats than an attempt to answer my question(s).Doradorado
In fact, your comment about Core Image Filters is more relevant to the question than your actual answer.Doradorado
UIColor is typically RGBA, except for UIColor black or white, which are monotone. I have an app that deals with a lot of images and most of the functions expect RGBA; somehow, in that big app, I never had to deal with any ARGB objects. Anyway, I referred to the article because true-colour 32-bit RGBA is universally recognised and is the typical output of Photoshop and other commonly used image software; using RGBA ensures compatibility with a wide array of other applications, particularly if the image you produce is to be used interchangeably on other platforms.Resistor

For best performance, your bytes per row and data should be aligned to multiples of 16 bytes.

bits per component: the number of bits used for each color component. For 16-bit RGBA, that's 16 bits / 4 components = 4 bits per component.

bits per pixel: at least bits per component * number of components

bytes per row: ((bits per component * number of components + 7)/8 ) * width

From CGBitmapContext.h:

The number of bits for each component of a pixel is specified by 'bitsPerComponent'. The number of bytes per pixel is equal to '(bitsPerComponent * number of components + 7)/8'. Each row of the bitmap consists of 'bytesPerRow' bytes, which must be at least 'width * bytes per pixel' bytes; in addition, 'bytesPerRow' must be an integer multiple of the number of bytes per pixel.

Once you have the bytes per row given your desired pixel format and color space, if it's divisible by 16, you should be in good shape. If you are NOT correctly aligned, Quartz will perform some of these optimizations for you, which will incur overhead. For extra points you can also attempt to optimize the bytes per row to fit in a line of the L2 cache on the architecture you're targeting.
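
As a concrete illustration of that arithmetic (the width and the assumption of an 8-bit, 4-component layout are placeholders, not part of the answer):

```swift
import CoreGraphics

// Assuming 8 bits per component and 4 components (e.g. RGBA or ARGB);
// the width is an arbitrary example value.
let width = 1000
let bitsPerComponent = 8
let componentCount = 4
let bytesPerPixel = (bitsPerComponent * componentCount + 7) / 8   // = 4
let minimumBytesPerRow = width * bytesPerPixel                    // = 4000
let alignedBytesPerRow = (minimumBytesPerRow + 15) & ~15          // round up to a multiple of 16

// Pass the aligned value explicitly when creating the context.
let context = CGContext(data: nil, width: width, height: 800,
                        bitsPerComponent: bitsPerComponent,
                        bytesPerRow: alignedBytesPerRow,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
```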

This is covered well in the Quartz 2D Programming Guide as well as the documentation for CGImage. There is also a question that may be relevant.

All that said, it's best to try different things and profile in Instruments.

Isabelleisac answered 28/5, 2014 at 3:26 Comment(2)
Bytes per row is (bytes per pixel) * width, not (bits per component) * (number of components) * width: pixels can't straddle multiple storage units. If you use 16-bit RGB, one pixel is 16 bits, regardless of the one bit this leaves unused. Also, I think that Quartz only supports RGB over 16 bits (with 5 bpc), not RGBA.Dittmer
From CGBitmapContext.h: "The number of bytes per pixel is equal to (bitsPerComponent * number of components + 7)/8." Multiply that by the number of pixels per row (the width). kCGImageAlphaLast is an example of RGBA without premultiplication; kCGImageAlphaPremultipliedLast would be RGBA with premultiplication.Isabelleisac

I am using kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big and it works great.
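
For what it's worth, a minimal Swift sketch of that combination (the dimensions are arbitrary; the alpha and byte-order flags are OR-ed together as raw values):

```swift
import CoreGraphics

// kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, expressed in Swift.
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue |
                 CGBitmapInfo.byteOrder32Big.rawValue

let context = CGContext(data: nil, width: 64, height: 64,   // arbitrary example size
                        bitsPerComponent: 8, bytesPerRow: 0,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: bitmapInfo)
```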

Hillock answered 25/9, 2016 at 17:9 Comment(0)
