Core Image filter CISourceOverCompositing not appearing as expected with alpha overlay
I’m using CISourceOverCompositing to overlay text on top of an image and I’m getting unexpected results when the text image is not fully opaque. Dark colors are not dark enough and light colors are too light in the output image.

I recreated the issue in a simple Xcode project. It creates an image with orange, white, and black text drawn at 0.3 alpha, and that looks correct. I even dropped that image into Sketch, placed it on top of the background image, and it looks great; the image at the bottom of the screen shows how that looks in Sketch. The problem is, after overlaying the text on the background using CISourceOverCompositing, the white text is too opaque, as if its alpha were 0.5, and the black text is barely visible, as if its alpha were 0.1. The top image shows that programmatically created image. You can drag the slider to adjust the alpha (which defaults to 0.3) to recreate the result image.


The code is of course included in the project, but it's also shown here. This creates the text overlay with 0.3 alpha, which appears as expected.

let colorSpace = CGColorSpaceCreateDeviceRGB()
let alphaInfo = CGImageAlphaInfo.premultipliedLast.rawValue

let bitmapContext = CGContext(data: nil, width: Int(imageRect.width), height: Int(imageRect.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: alphaInfo)!
bitmapContext.setAlpha(0.3)
bitmapContext.setTextDrawingMode(CGTextDrawingMode.fill)
bitmapContext.textPosition = CGPoint(x: 20, y: 20)

let displayLineTextWhite = CTLineCreateWithAttributedString(NSAttributedString(string: "hello world", attributes: [.foregroundColor: UIColor.white, .font: UIFont.systemFont(ofSize: 50)]))
CTLineDraw(displayLineTextWhite, bitmapContext)

let textCGImage = bitmapContext.makeImage()!
let textImage = CIImage(cgImage: textCGImage)

Next, that text image is overlaid on top of the background image, and the result does not appear as expected.

let combinedFilter = CIFilter(name: "CISourceOverCompositing")!
combinedFilter.setValue(textImage, forKey: "inputImage")
combinedFilter.setValue(backgroundImage, forKey: "inputBackgroundImage")
let outputImage = combinedFilter.outputImage!
Peepul answered 25/2, 2018 at 0:59 Comment(3)
Wish I could help you. This question did receive my attention. The Apple doc suggested checking out the formula it uses ( keithp.com/~keithp/porterduff/p253-porter.pdf ). Have you checked it? Specifically page 4, section 4.3? It's a bit "Greek" for my current usage of CI, but maybe it'll help you? Seems like alpha multiplication may be happening unexpectedly?Content
Thanks @dfd that's a good thought, but I don't see how. I've updated the question with more details and a sample project if anything stands out to you!Peepul
@Joey you won't believe it, but I'm having the same issue. I tried a lot but didn't find any proper solution, so I tried one trick: I just put a white UIView behind the image and it works perfectly. Give it a try, maybe it will help :)Berezina
After a lot of back and forth trying different things, (thanks @andy and @Juraj Antas for pushing me in the right direction) I finally have the answer.

Drawing into a Core Graphics context produces the correct appearance, but it requires more resources to draw images that way. The problem seemed to be with CISourceOverCompositing, but it actually lies in the fact that, by default, Core Image filters work in linear sRGB space, whereas Core Graphics works in perceptual (gamma-encoded) sRGB space, which explains the different results: sRGB is better at preserving dark blacks, and linear sRGB is better at preserving bright whites. So the original code is fine, but you can output the image in a different way to get a different appearance.
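To make the difference concrete, here is a small illustrative sketch (not part of the original project) of the sRGB transfer functions; compositing the same alpha value before or after this encoding is what produces the different grays.

import Foundation

// Sketch of the sRGB transfer curve (component values in 0...1).
// Core Image composites linear values by default; Core Graphics composites the gamma-encoded values.
func srgbToLinear(_ s: Double) -> Double {
    return s <= 0.04045 ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4)
}

func linearToSrgb(_ l: Double) -> Double {
    return l <= 0.0031308 ? 12.92 * l : 1.055 * pow(l, 1.0 / 2.4) - 0.055
}

// Example: 0.3-alpha white over black.
// Composited in linear space: linearToSrgb(0.3 * srgbToLinear(1.0)) ≈ 0.58 (looks "too opaque").
// Composited in gamma-encoded space: 0.3 * 1.0 = 0.3 (what the question expects).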

You could create a Core Graphics image from the Core Image filter using a Core Image context that performs no color management. This essentially causes the color values to be interpreted "incorrectly" as device RGB (since that's the default when there is no color management), which can, for example, cause a red from the standard color range to appear as an even more saturated red from the wide color range. But it does address the original concern with alpha compositing.

let ciContext = CIContext(options: [.workingColorSpace: NSNull()])
let outputCGImage = ciContext.createCGImage(outputCIImage, from: outputCIImage.extent)

It is probably more desirable to keep color management enabled and specify the working color space to be sRGB. This too resolves the issue and results in "correct" color interpretation. Note that if the image being composited were Display P3, you'd need to specify displayP3 as the working color space to preserve the wide colors.

let workingColorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
let ciContext = CIContext(options: [.workingColorSpace: workingColorSpace])
let outputCGImage = ciContext.createCGImage(outputCIImage, from: outputCIImage.extent)

It is probably even better to use CIBlendKernel to blend the two images, specifying the color space directly, instead of using the source-over compositing filter. This lets you leave the working color space alone (changing it affects the entire filter chain), and it brings performance improvements and reduced memory usage, since you no longer need to create a CGImage.

let extendedColorSpace = CGColorSpace(name: CGColorSpace.extendedSRGB)!
let outputImage = CIBlendKernel.sourceOver.apply(foreground: textImage, background: backgroundImage, colorSpace: extendedColorSpace)!
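If you only need to display the blended result, one option (my addition, not part of the original answer) is to hand the CIImage straight to UIKit without creating a CGImage yourself:

// Hypothetical display step; topImageView is the image view from the sample project.
topImageView.image = UIImage(ciImage: outputImage)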
Peepul answered 24/3, 2018 at 19:12 Comment(1)
I also added outputColorSpace. For Swift 4: let ciContext = CIContext(options: [CIContextOption.outputColorSpace: NSNull(), CIContextOption.workingColorSpace: NSNull()])Zenaidazenana
For black-and-white text

If you're using the .normal compositing operation, you'll definitely not get the same result as with .hardLight. Your picture shows the result of the .hardLight operation.

The .normal operation is the classical OVER op, with the formula: (Image1 * A1) + (Image2 * (1 – A1)).
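As an illustrative sketch (my wording, not part of the original answer), the OVER op for a single non-premultiplied channel is simply:

import CoreGraphics

// Classical OVER for one color channel; fg, bg, and fgAlpha are in 0...1.
func over(fg: CGFloat, bg: CGFloat, fgAlpha: CGFloat) -> CGFloat {
    return fg * fgAlpha + bg * (1 - fgAlpha)
}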

Here the text is premultiplied (RGB*A), so in this particular case the RGB values depend on A's opacity. The RGB of the text image can contain any color, including black. If A=0 (black alpha) and RGB=0 (black color) and the image is premultiplied, the whole image is totally transparent; if A=1 (white alpha) and RGB=0 (black color), the image is opaque black.

If your text has no alpha when you use the .normal operation, you'll get the ADD op: Image1 + Image2.


To get what you want, you need to set the compositing operation to .hardLight.

The .hardLight compositing operation works as .multiply if the alpha of the text image is less than 50 percent (A < 0.5, the image is almost transparent).

Formula for .multiply: Image1 * Image2


The .hardLight compositing operation works as .screen if the alpha of the text image is greater than or equal to 50 percent (A >= 0.5, the image is semi-opaque).

Formula 1 for .screen: (Image1 + Image2) – (Image1 * Image2)

Formula 2 for .screen: 1 – (1 – Image1) * (1 – Image2)

The .screen operation has a much softer result than .plus, and it keeps the alpha from exceeding 1 (the plus operation adds the alphas of Image1 and Image2, so you might end up with alpha = 2 if you have two alphas). The .screen compositing operation is good for making reflections.



func editImage() {

    print("Drawing image with \(selectedOpacity) alpha")
    
    let text = "hello world"
    let backgroundCGImage = #imageLiteral(resourceName: "background").cgImage!
    let backgroundImage = CIImage(cgImage: backgroundCGImage)
    let imageRect = backgroundImage.extent
    
    //set up transparent context and draw text on top
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let alphaInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    
    let bitmapContext = CGContext(data: nil, width: Int(imageRect.width), height: Int(imageRect.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: alphaInfo)!
    bitmapContext.draw(backgroundCGImage, in: imageRect)
    
    bitmapContext.setAlpha(CGFloat(selectedOpacity))
    bitmapContext.setTextDrawingMode(.fill)

    //TRY THREE COMPOSITING OPERATIONS HERE 
    bitmapContext.setBlendMode(.hardLight)
    //bitmapContext.setBlendMode(.multiply)
    //bitmapContext.setBlendMode(.screen)
    
    //white text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
    let displayLineTextWhite = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.white, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextWhite, bitmapContext)
    
    //black text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: 20 * UIScreen.main.scale)
    let displayLineTextBlack = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.black, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextBlack, bitmapContext)
    
    let outputImage = bitmapContext.makeImage()!
    
    topImageView.image = UIImage(cgImage: outputImage)
}

So to recreate this compositing operation you need the following logic (a Swift sketch of it follows the pseudocode):

//rgb1 – text image 
//rgb2 - background
//a1   - alpha of text image

if a1 >= 0.5 { 
    //use this formula for compositing: 1–(1–rgb1)*(1–rgb2) 
} else { 
    //use this formula for compositing: rgb1*rgb2 
}
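Here is a direct Swift transcription of that pseudocode, as an illustrative per-channel sketch only (the comments below point out that the value driving the branch behaves more like lightness than alpha):

import CoreGraphics

// Per-channel sketch of the hard-light-style selection described above (values in 0...1).
func hardLightChannel(text rgb1: CGFloat, background rgb2: CGFloat, selector a1: CGFloat) -> CGFloat {
    if a1 >= 0.5 {
        return 1 - (1 - rgb1) * (1 - rgb2)   // screen
    } else {
        return rgb1 * rgb2                   // multiply
    }
}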

I recreated the image using the compositing app The Foundry NUKE 11. Offset=0.5 here is the same as Add=0.5.

I used the property Offset=0.5 because transparency=0.5 is the pivot point of the .hardLight compositing operation.



For color text

You need to use the .sourceAtop compositing operation if you have ORANGE (or any other color) text in addition to black-and-white text. Applying the .sourceAtop case of the setBlendMode method makes Core Graphics use the luminance of the background image to determine what to show. Alternatively, you can employ the CISourceAtopCompositing Core Image filter instead of CISourceOverCompositing.

bitmapContext.setBlendMode(.sourceAtop)

or

let compositingFilter = CIFilter(name: "CISourceAtopCompositing")

The .sourceAtop operation has the following formula: (Image1 * A2) + (Image2 * (1 – A1)). As you can see, you need two alpha channels: A1 is the alpha of the text and A2 is the alpha of the background image.
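A minimal sketch of the Core Image route (assuming textImage and backgroundImage are the CIImages from the question; note from the comments below that the background image needs an alpha channel for this to behave differently from source-over):

let atopFilter = CIFilter(name: "CISourceAtopCompositing")!
atopFilter.setValue(textImage, forKey: kCIInputImageKey)
atopFilter.setValue(backgroundImage, forKey: kCIInputBackgroundImageKey)
let atopOutput = atopFilter.outputImage!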

bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
let displayLineTextOrange = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.orange, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
CTLineDraw(displayLineTextOrange, bitmapContext)


Resile answered 6/3, 2018 at 12:10 Comment(44)
Almost correct, but which blend mode is used depends on the color of the image, not the alpha.Nonparticipating
It depends on both in this case, RGB and A: the math is applied to RGB, and alpha controls opacity, because here we have an RGB*A pattern (a premultiplied image).Resile
Do you know how A is computed in your math?Nonparticipating
Would you mind writing it here? I am testing in a custom CIFilter.Nonparticipating
What exactly should I write?Resile
How to get a float from a vec4; in other words, from the input rgba you need to get a single number that you compare to 0.5 to choose the blending formula.Nonparticipating
For instance: you've got a premultiplied rgba image. When you use the slider, you must use only alpha, because lowering A lowers RGB. Use a variable for alpha in the vec4.Resile
The image that is generated is not using premultiplied alpha; that would be only rgb, but it is rgba. What I am trying to do now is exactly recreate .hardLight in a custom CIFilter.Nonparticipating
developer.apple.com/documentation/coregraphics/cgblendmode/…Resile
This is definitely controlled by alpha.Resile
whole image has alpha == 0.3Nonparticipating
@IBAction func sliderValueChanged(_ sender: UISlider) { selectedOpacity = sender.value editImage() }Resile
Yes, alpha == 0.3, slider controls Alpha, and Alpha controls RGB.Resile
So, A*RGB, and if R=1, B=1, G=1, RGB == 0.3Resile
That's what I said.Resile
And for R=0, G=0, B=0, A=0.3 -> A*RGB == 0.0? That would mean it is lower than 0.5 in both cases and multiply would always be used... not true.Nonparticipating
The logic is wrong. A controls RGB, not the other way around.Resile
A can be 0.5 or 0.1. So A*RGB. And RGB=0.5 or RGB=0.1.Resile
An RGB image can contain any color, including black. If A=0 and RGB=0 and the image is premultiplied, the whole image is transparent; if A=1 and RGB=0, the image is opaque black.Resile
That is something I know, but it does not help me recreate the right blend mode.Nonparticipating
What compositing operation are you recreating? .hardLight?Resile
if a1 >= 0.5 { // 1 – (1 – rgb1) * (1 – rgb2) } else { // rgb1 * rgb2 }Resile
The problem is in A: if it is alpha, then it is a constant 0.3; if it is RGB*A, then again it is 0.3 or 0, depending on whether the text is white or black... so there is no case where it is bigger than 0.5 ;) This is why I think it is computed from the color, not the alpha.Nonparticipating
You were right: alpha has no impact on the hardLight operation, only on transparency.Resile
Hey Andy, thanks for your answer. Hard light does look correct for purely black and white text, but unfortunately for colored text, like UIColor.orange for example, it does not appear as expected; it is inappropriately blended with the background, so it can appear red rather than uniformly orange. I'm really looking to achieve the normal blend mode, not hard light, multiply, or screen.Peepul
@Joey, if it's recreated with preliminary color correction (which you can see in the NUKE interface), it looks OK.Resile
@Joey, if you're looking to achieve the .normal blend mode (this is the classical OVER compositing operation), the formula is: RGB1*A1 + RGB2*(1–A1)Resile
The problem is the color space: Generic RGB on iOS, sRGB in graphics programs. Without conversion between the spaces, the result will never be the same.Nonparticipating
The Gamma property can compensate for the sRGB curve.Resile
For orange/black/white colors, use the ".sourceAtop" operation.Resile
Yep, sourceAtop is all that's needed! Interestingly, I also tried CISourceAtopCompositing but it appears the exact same as CISourceOverCompositing from what I can tell. Odd. Anyways, if you'll update the answer I'll accept it! Thank you!Peepul
I did try the CISourceAtopCompositing CIFilter, but it did not result in the same outcome as using sourceAtop blend mode. It appears the same as CISourceOverCompositing. But your answer says it would obtain the same results. I'd think it would too, but it doesn't, strangely.Peepul
You need to add an alpha channel to the background image first.Resile
Hey @andy, can you expand upon what you mean by adding an alpha channel to the background image first to make this work using CISourceAtopCompositing? I made part of the background image transparent but I continue to get the same result.Peepul
The formula for the Atop operation is Ab + B(1 – a). The resulting image shows the shape of image B, with A covering B where the images overlap.Resile
a is the alpha channel of the foreground image, b is the alpha channel of the background image.Resile
A is the RGB channels of the foreground image, B is the RGB channels of the background image.Resile
As the formula says, you definitely need alpha on the background.Resile
Ok, hm, with alpha on the background image I'm getting the same unexpected outcome. Can you try it with the sample project to see if you can get it working with CISourceAtopCompositing?Peepul
At the moment I can't reach Xcode. Only tomorrow)))Resile
You're definitely right about the problem being caused by the background image. If I create a new image by drawing the background into the context and getting it out via makeImage() to use that instead, it appears as expected. But that's really inefficient and uses a lot of RAM for large photos. I'm wondering what's an efficient way to get that background image in the right format.Peepul
I tried it today. It seems OK with the compositing op.Resile
I suppose the only efficient way to get the background image in the right format is to prepare it in a compositing app like Nuke or Fusion.Resile
Let us continue this discussion in chat.Resile
Final answer: the formula in CISourceOverCompositing is a good one. It is the right thing to do.

BUT

It is working in the wrong color space. In graphics programs you most likely have the sRGB color space. On iOS, the Generic RGB color space is used. This is why the results don't match.

Using a custom CIFilter, I recreated the CISourceOverCompositing filter.
s1 is the text image.
s2 is the background image.

The kernel for it is this:

 kernel vec4 opacity( __sample s1, __sample s2) {
     vec3 text = s1.rgb;
     float textAlpha = s1.a;
     vec3 background = s2.rgb;

     vec3 res = background * (1.0 - textAlpha) + text;
     return vec4(res, 1.0);
 }
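As a hedged sketch of how that kernel could be wrapped and run (textImage and backgroundImage are the CIImages from the question; this uses the string-based CIColorKernel API):

import CoreImage

let kernelSource = """
kernel vec4 opacity(__sample s1, __sample s2) {
    vec3 text = s1.rgb;
    float textAlpha = s1.a;
    vec3 background = s2.rgb;
    vec3 res = background * (1.0 - textAlpha) + text;
    return vec4(res, 1.0);
}
"""

// s1 = text image, s2 = background image, matching the kernel above.
if let kernel = CIColorKernel(source: kernelSource) {
    let composited = kernel.apply(extent: backgroundImage.extent,
                                  arguments: [textImage, backgroundImage])
    // composited is an optional CIImage that can be rendered with a CIContext.
}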

So to fix this color 'issue' you must convert the text image from RGB to sRGB. I guess your next question will be how to do that ;)
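One possible approach (a sketch and an assumption on my part, not something confirmed in this thread) is to tag the text bitmap with an explicit sRGB color space when wrapping it in a CIImage, so Core Image interprets its values as sRGB; whether this alone is enough depends on the rendering context.

// Hedged sketch: ask Core Image to interpret the text bitmap's pixel values as sRGB.
let sRGB = CGColorSpace(name: CGColorSpace.sRGB)!
let sRGBTextImage = CIImage(cgImage: textCGImage, options: [.colorSpace: sRGB])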

Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead. Apple doc about color spaces

test image with RGB and sRGB color spaces

Nonparticipating answered 5/3, 2018 at 15:49 Comment(11)
Thanks for your answer! Unfortunately, while this appears as expected using the 50% gray test I had, it doesn't look correct on real photos due to the overlay blend mode. I updated the project to use a real photo that more clearly demonstrates the issue, and I added your proposed changes that you can uncomment to try it with the real background image.Peepul
Try exchanging .overlay with .hardLight. Is the result what you are after? If not, the only other way is a custom CIFilter.Nonparticipating
Hm, hard light works well for black and white text, but with orange text it appears red.Peepul
I am testing a custom CIFilter; I need to find a good math formula for that "normal" blend mode, using the formulas from here: simplefilter.de/en/basics/mixmods.htmlNonparticipating
With a custom CIFilter I got the same result as with CISourceOverCompositing. The CISourceOverCompositing formula is: vec3 res = background * (1.0 - textAlpha) + text; where vec3 is the rgb color vector. The question remains what the right formula is for the composition that graphics programs use for the "normal" blend mode. I'm having a hard time finding it.Nonparticipating
I am starting to think that the problem is not in the formula but in the color space. Graphics programs are using sRGB...Nonparticipating
Hmm I tried replacing CGColorSpaceCreateDeviceRGB() with CGColorSpace(name: CGColorSpace.sRGB) but I'm getting the same result.Peepul
Yeah, I know. There is a catch: "Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead." developer.apple.com/library/content/documentation/…Nonparticipating
In other words, Apple is saying that you need to do it yourself. It is not that hard (but not exactly easy either; color spaces are tricky): ryanjuckett.com/programming/rgb-color-space-conversion/?start=2Nonparticipating
Thanks for your answer! I was able to obtain the desired results simply by using the sourceAtop blend mode instead of hardLight, if you hadn't seen that in the other comments.Peepul
Yes, got it. The simple solution is always the best. Andy's answer is good, except that A in his equations is not alpha (alpha in your test image is never bigger than 0.3); it is lightness, and can be computed as: float lightness = 0.2126*b.r + 0.7152*b.g + 0.0722*b.b. He should fix that. HardLight is only good for grey or near-grey images, as you get a color shift if you use colors.Nonparticipating
