Change color of certain pixels in a UIImage

For a given multi-color PNG UIImage (with transparency), what is the best/Swift-idiomatic way to:

  1. create a duplicate UIImage
  2. find all black pixels in the copy and change them to red
  3. (return the modified copy)

There are a few related questions on SO but I haven't been able to find something that works.

Ruwenzori answered 27/7, 2015 at 18:52 Comment(3)
One thing I can suggest is going through every pixel and manually changing it if it's black.Hayton
Indeed... my question is how to do that :) I am new to Swift and am so unfamiliar with the APIs that I don't even know what to Google.Ruwenzori
@Ruwenzori wasn't processing speed a barrier for you? I'm using the same code below for exactly the same scenario you mentioned, but it's taking 4-5 secs.Operant

You have to extract the pixel buffer of the image, at which point you can loop through, changing pixels as you see fit. At the end, create a new image from the buffer.

In Swift 3, this looks like:

func processPixels(in image: UIImage) -> UIImage? {
    guard let inputCGImage = image.cgImage else {
        print("unable to get cgImage")
        return nil
    }
    let colorSpace       = CGColorSpaceCreateDeviceRGB()
    let width            = inputCGImage.width
    let height           = inputCGImage.height
    let bytesPerPixel    = 4
    let bitsPerComponent = 8
    let bytesPerRow      = bytesPerPixel * width
    let bitmapInfo       = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }
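    // Draw the original image into the bitmap context so its pixel data is copied into a buffer we can read and modify.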
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else {
        print("unable to get context data")
        return nil
    }

    // Treat the raw bitmap memory as a buffer of RGBA32 values, one per pixel.
    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

    // Scan every pixel, replacing opaque black with opaque red.
    for row in 0 ..< height {
        for column in 0 ..< width {
            let offset = row * width + column
            if pixelBuffer[offset] == .black {
                pixelBuffer[offset] = .red
            }
        }
    }

    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)

    return outputImage
}

struct RGBA32: Equatable {
    private var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }

    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }

    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }

    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }        

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        let red   = UInt32(red)
        let green = UInt32(green)
        let blue  = UInt32(blue)
        let alpha = UInt32(alpha)
        color = (red << 24) | (green << 16) | (blue << 8) | (alpha << 0)
    }

    static let red     = RGBA32(red: 255, green: 0,   blue: 0,   alpha: 255)
    static let green   = RGBA32(red: 0,   green: 255, blue: 0,   alpha: 255)
    static let blue    = RGBA32(red: 0,   green: 0,   blue: 255, alpha: 255)
    static let white   = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
    static let black   = RGBA32(red: 0,   green: 0,   blue: 0,   alpha: 255)
    static let magenta = RGBA32(red: 255, green: 0,   blue: 255, alpha: 255)
    static let yellow  = RGBA32(red: 255, green: 255, blue: 0,   alpha: 255)
    static let cyan    = RGBA32(red: 0,   green: 255, blue: 255, alpha: 255)

    // Bitmap layout for the CGContext: premultiplied alpha last, 32-bit little-endian byte order.
    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }
}
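
To use it, call the function and work with the returned copy, for example (the image name "myImage" and the imageView shown here are just placeholders, not something from the question):

if let original = UIImage(named: "myImage"),
   let recolored = processPixels(in: original) {
    imageView.image = recolored   // black pixels are now red; all other pixels are untouched
}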

For the Swift 2 rendition, see the previous revision of this answer.

Gyrose answered 27/7, 2015 at 19:21 Comment(29)
Rob, thank you for your prompt and thorough response! I am running into the following runtime error when I try your code: <Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 352 bytes/row. fatal error: unexpectedly found nil while unwrapping an Optional value - Could it be the colorSpace?Ruwenzori
Resizable iPad / iOS 8.4 (12H141). Thank you again for your help.Ruwenzori
Yeah, that simulator works fine for me. The kCGImageAlphaNone in your error message is highly suspect. It's almost like you're passing 0 for the last parameter of CGBitmapContextCreate. Double check that bitmapInfo parameter. Try simply CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue) for bitmapInfo.Gyrose
Yep, that was it! It works great now. Thank you again for your time and expertise!Ruwenzori
I am trying the code in this answer, but the image I get as a result seems to be magnified, as if there were a confusion between pixels and points. Am I missing something?Pouncey
@Pouncey - I don't think this would really magnify it, though it might seem that way if you provided it a retina image (e.g. if you provided an image with a scale of 2, you'd get an image whose dimensions were twice as great but with a scale of 1, which is really the same set of pixels, just with a different scale factor applied). But I've modified the routine above to preserve the scale (and the orientation) to avoid that confusion.Gyrose
This is how I build theImage before feeding it to processPixelsInImage. Is something wrong? UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0); view.layer.renderInContext(UIGraphicsGetCurrentContext()!) let theImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()Pouncey
I have only one version of your code, the one in this post (answer). I don't know if this is the revised version or not. Besides, if I do the same as you and do not get the same result, I must be making a mistake when setting the new image; I am actually not sure about the correct way to do that. I tried 2 ways, and they both expand the image. How do you do it?Pouncey
In response to your earlier comment, I changed the code in my answer. See its revision history, where I create the UIImage from the CGImage, now preserving the scale (and orientation) of the original image.Gyrose
Now that works as you say. Thanks a lot for your help. It will allow me to move forward in my project.Pouncey
Great answer: Briefly explain what has to be done, then provide working code.Gruel
@SNos - It works fine for me on Swift 2.2. I'd suggest you post a separate question illustrating your problem. BTW, I have added a Swift 3.0 implementation, though. (And I also got rid of those global red, green, blue, etc., functions and wrapped it all in a RGBA32 struct.)Gyrose
Just used it as a class and it works fine. How did you convert it to Swift 3?Calabrese
Usually you can let the Xcode converter do most of the heavy lifting. Here, I just went through the issues the compiler raised one by one. It's all fairly self-explanatory. It just takes a few minutes.Gyrose
Can we have the Objective-C version of the above?Ezar
You can clean this up, but this illustrates the basic Objective-C concept, which is nearly identical to the above: gist.github.com/robertmryan/b89cf29a4b4e69abb02fcfd6640bef51Gyrose
Here is the Swift 3 for a transparent color: let clear = RGBA32(red: 0, green: 0, blue: 0, alpha: 0)Botswana
It appears that the image that this returns has all zero color values. Any ideas?Skipjack
RGBA32 gives the compiler an error "Expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions" for the color definition in Swift 4.Jim
@VagueExplanation - Whenever that happens, split the offending line into separate statements. See how I did that in the init method in my revised answer, above.Gyrose
@Gyrose and all others, it does take some time due to the nested loop over the rows and columns (width and height). I am posting the Android code here, playing with Bitmap, which works fine with only one loop and is fast as well.Operant
@Gyrose thanks for your comment. I was trying the same code on an image of size 375×375, changing the pixel color of roughly 5% to 15% of the image, and it was taking 3-4 seconds for me on the Simulator.Operant
I don't get the second part of your comment @Gyrose The Interim Step......Operant
@RajaSaad Let us continue this discussion in chat.Gyrose
@Gyrose thanks a lot man, it's working fine on the Release build. Saved a lot of time.Operant
@Gyrose once again joining the comments section! I am not eligible for chat, so I'm commenting my concerns here. I have a 1080×1080 image with a size of 1.3 MB, and it's not working efficiently even in the ad hoc and release builds. I feel that using CGImage is causing this. What if I go for CIImage? Would it affect the performance?Operant
@RajaSaad - On my iPhone 12 Pro Max, the processing of a 1080×1080 px image with the above color substitution took 0.03 sec in a release build. You can parallelize the routine, but there just isn't enough going on computationally (at least with my simple color substitution) to justify the overhead. I only started to see processing time improvements from parallel processing when images started to exceed 10,000×10,000. You should probably just try both CIImage and CGImage approaches and benchmark both. Or post your example to codereview.stackexchange.com.Gyrose
codereview.stackexchange.com/questions/267889/… @GyroseOperant
@Gyrose thanks a lot for saving my time!! The way you described each and every issue and its solution was more than amazing. Now my code is working like a charm after I followed your instructions and your code snippet. To everyone facing this kind of issue, or wanting to change image pixel colors, please go through the question link in the above comment and enjoy the thoroughly explained answer by Rob #RespectOperant

For better results, we can match a range of colors in the image pixels rather than a single exact value. Referring to @Rob's answer, I made an update and now the result is better.

func processByPixel(in image: UIImage) -> UIImage? {

    guard let inputCGImage = image.cgImage else { print("unable to get cgImage"); return nil }
    let colorSpace       = CGColorSpaceCreateDeviceRGB()
    let width            = inputCGImage.width
    let height           = inputCGImage.height
    let bytesPerPixel    = 4
    let bitsPerComponent = 8
    let bytesPerRow      = bytesPerPixel * width
    let bitmapInfo       = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("Cannot create context!"); return nil
    }
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else { print("Cannot get context data!"); return nil }

    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

    for row in 0 ..< height {
        for column in 0 ..< width {
            let offset = row * width + column

            /*
             * Here I'm looking for the color RGBA32(red: 231, green: 239, blue: 247, alpha: 255)
             * and converting every pixel whose color falls within a small range of that color to transparent,
             * so the comparison can be done like this:
             * (pixelColorRedComp >= ourColorRedComp - 1 && pixelColorRedComp <= ourColorRedComp + 1, and likewise for green and blue)
             */

            if pixelBuffer[offset].redComponent >= 230 && pixelBuffer[offset].redComponent <= 232 &&
                pixelBuffer[offset].greenComponent >= 238 && pixelBuffer[offset].greenComponent <= 240 &&
                pixelBuffer[offset].blueComponent >= 246 && pixelBuffer[offset].blueComponent <= 248 &&
                pixelBuffer[offset].alphaComponent == 255 {
                pixelBuffer[offset] = .transparent
            }
        }
    }

    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)

    return outputImage
}
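
Note that .transparent is not one of the colors defined in the RGBA32 struct from the answer above; as noted in the comments, you can add it yourself, for example with a minimal extension like this (all components zero):

extension RGBA32 {
    // Fully transparent pixel, used by processByPixel above.
    static let transparent = RGBA32(red: 0, green: 0, blue: 0, alpha: 0)
}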

I hope this helps someone 🎉

Slut answered 6/12, 2018 at 8:38 Comment(1)
it does take some time due to the nested loop over the rows and columns (width and height). I can post the Android code here, playing with Bitmap, which works fine with only one loop and is fast as well.Operant
