Convert image to grayscale

I am trying to convert an image into grayscale in the following way:

#define bytesPerPixel 4
#define bitsPerComponent 8

-(unsigned char*) getBytesForImage: (UIImage*)pImage
{
    CGImageRef image = [pImage CGImage];
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);

    NSUInteger bytesPerRow = bytesPerPixel * width;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    return rawData;
}

-(UIImage*) processImage: (UIImage*)pImage
{   
    DebugLog(@"processing image");
    unsigned char *rawData = [self getBytesForImage: pImage];

    NSUInteger width = pImage.size.width;
    NSUInteger height = pImage.size.height;

    DebugLog(@"width: %d", width);
    DebugLog(@"height: %d", height);

    NSUInteger bytesPerRow = bytesPerPixel * width;

    for (int xCoordinate = 0; xCoordinate < width; xCoordinate++)
    {
        for (int yCoordinate = 0; yCoordinate < height; yCoordinate++)
        {
            int byteIndex = (bytesPerRow * yCoordinate) + xCoordinate * bytesPerPixel;

            //Getting original colors
            float red = ( rawData[byteIndex] / 255.f );
            float green = ( rawData[byteIndex + 1] / 255.f );
            float blue = ( rawData[byteIndex + 2] / 255.f );

            //Processing pixel data
            float averageColor = (red + green + blue) / 3.0f;

            red = averageColor;
            green = averageColor;
            blue = averageColor;

            //Assigning new color components
            rawData[byteIndex] = (unsigned char) red * 255;
            rawData[byteIndex + 1] = (unsigned char) green * 255;
            rawData[byteIndex + 2] = (unsigned char) blue * 255;


        }
    }

    NSData* newPixelData = [NSData dataWithBytes: rawData length: height * width * 4];
    UIImage* newImage = [UIImage imageWithData: newPixelData];

    free(rawData);

    DebugLog(@"image processed");

    return newImage;

}

So when I want to convert an image, I just call processImage:

imageToDisplay.image = [self processImage: image];

But imageToDisplay doesn't display anything. What might be the problem?

Thanks.

Couching answered 19/8, 2009 at 9:55 Comment(1)
Which cheeky monkey has added this to their favourites without upvoting it? Totally blinding lack of generosity! (Nepheline)
50

I needed a version that preserved the alpha channel, so I modified the code posted by Dutchie432:

@implementation UIImage (grayscale)

typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)convertToGrayscale {
    CGSize size = [self size];
    int width = size.width;
    int height = size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, 
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}

@end
Nabila answered 8/10, 2009 at 23:14 Comment(1)
It works, but does not support Retina displays; ruralcoder (below) updates this. (Legist)
45

Here is some code using only UIKit and the luminosity blend mode. A bit of a hack, but it works well.

// Transform the image in grayscale.
- (UIImage*) grayishImage: (UIImage*) inputImage {

    // Create a graphic context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, YES, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);

    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];

    // Get the resulting image.
    UIImage *filteredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return filteredImage;

}

To keep the transparency, you might just set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO. This needs to be checked.
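For reference, a minimal sketch of that variant, with a made-up method name (note the first comment below reports that this alone does not preserve alpha; the kCGBlendModeDestinationIn approach in a later answer does):

// Untested sketch of the suggested variant: request a non-opaque context.
- (UIImage*) grayishImageKeepingAlpha: (UIImage*) inputImage {
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];
    UIImage *filteredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return filteredImage;
}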

Whitson answered 5/6, 2011 at 19:29 Comment(2)
For readers: UIGraphicsBeginImageContextWithOptions is available only on iOS 4 and later. Setting opaque to NO does not preserve alpha. (Vinitavinn)
This method takes 10ms instead of 2ms for the accepted answer on my image. (Melodramatic)
37

This is based on Cam's code, with the ability to deal with the scale for Retina displays.

- (UIImage *) toGrayscale 
{
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);

    int width = imageRect.size.width;
    int height = imageRect.size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, 
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100); 

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale 
                                           orientation:UIImageOrientationUp];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}
Apocopate answered 11/3, 2011 at 18:6 Comment(6)
Works pretty well, even on PNGs with a transparent background! Thank you! (Ancillary)
Thanks a ton! I needed a way to autogenerate a "glyph" based on a changing image, and this is perfect! (Erickaericksen)
Awesome! Thanks so much! Works well and without the (feared) performance hit. (Pompano)
I have improved the performance a bit by replacing the floats with integers. It could be improved further by using a single loop and advancing the rgbaPixel pointer by 4 instead of recalculating its position on every iteration. (Ombre)
I am not sure if this is right, but RED corresponds to the most significant bits, so in this case it should be RED = 3. (Neuron)
Yes, just used this, and it seems RED and BLUE should be swapped. (Shushubert)
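Following up on the last two comments: with kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast, the bytes appear to run alpha, blue, green, red in memory (matching Cam's enum in the answer above), so a presumed fix is simply to swap the RED and BLUE indices:

// Presumed fix per the comments above (untested): swap RED and BLUE so the
// weighted sum reads the correct channels in this little-endian pixel layout.
const int BLUE = 1;
const int GREEN = 2;
const int RED = 3;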
31

I liked Mathieu Godart's answer, but it didn't seem to work properly for retina or alpha images. Here's an updated version that seems to work for both of those for me:

- (UIImage*)convertToGrayscale
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);

    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Draw a white background
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 1.0f);
    CGContextFillRect(ctx, imageRect);

    // Draw the luminosity on top of the white background to get grayscale
    [self drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0f];

    // Apply the source image's alpha
    [self drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];

    UIImage* grayscaleImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return grayscaleImage;
}
Foible answered 15/1, 2013 at 21:58 Comment(1)
Makes my semi-transparent colored image too light. (Gamble)
30

What exactly takes place when you use this function? Is the function returning an invalid image, or is the display not showing it correctly?

This is the method I use to convert to greyscale.

- (UIImage *) convertToGreyscale:(UIImage *)i {

    int kRed = 1;
    int kGreen = 2;
    int kBlue = 4;

    int colors = kGreen | kBlue | kRed;
    int m_width = i.size.width;
    int m_height = i.size.height;

    uint32_t *rgbImage = (uint32_t *) malloc(m_width * m_height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImage, m_width, m_height, 8, m_width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, NO);
    CGContextDrawImage(context, CGRectMake(0, 0, m_width, m_height), [i CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // now convert to grayscale
    uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height);
    for(int y = 0; y < m_height; y++) {
        for(int x = 0; x < m_width; x++) {
            uint32_t rgbPixel=rgbImage[y*m_width+x];
            uint32_t sum=0,count=0;
            if (colors & kRed) {sum += (rgbPixel>>24)&255; count++;}
            if (colors & kGreen) {sum += (rgbPixel>>16)&255; count++;}
            if (colors & kBlue) {sum += (rgbPixel>>8)&255; count++;}
            m_imageData[y*m_width+x]=sum/count;
        }
    }
    free(rgbImage);

    // convert from a gray scale image back into a UIImage
    uint8_t *result = (uint8_t *) calloc(m_width * m_height *sizeof(uint32_t), 1);

    // process the image back to rgb
    for(int i = 0; i < m_height * m_width; i++) {
        result[i*4]=0;
        int val=m_imageData[i];
        result[i*4+1]=val;
        result[i*4+2]=val;
        result[i*4+3]=val;
    }

    // create a UIImage
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(result, m_width, m_height, 8, m_width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);

    free(m_imageData);

    // make sure the data will be released by giving it to an autoreleased NSData
    [NSData dataWithBytesNoCopy:result length:m_width * m_height * sizeof(uint32_t)];

    return resultUIImage;
}
Scorcher answered 19/8, 2009 at 15:11 Comment(10)
Thanks, that is working, but I get an image that is rotated by 90 degrees from the original position. How can I fix it? (Couching)
Check out cocoadev.com/index.pl?UIImage (or just google "How to rotate a UIImage"). (Scorcher)
I notice that this function doesn't respect the original image's alpha transparency mask. (Zemstvo)
Wikipedia and others seem to imply that the correct distribution is 0.3*RED + 0.59*GREEN + 0.11*BLUE, not just averaging the three color components. (Fogg)
This is a function I've been using for quite some time with zero problems. Take it or leave it :) (Scorcher)
FYI, there is a memory leak in Dutchie432's answer: uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height); is never freed and should be. (Chane)
@graemer957: Good catch. Looks like Ivan Vučica has corrected this. (Pickle)
This code does not appear to be properly converting the RGB image data to equivalent gray levels. It's worse than what is described in @mahboudz's otherwise valid comment, since it isn't even averaging the three color components. Rather, due to what appears to be some sort of bug, it actually ends up just taking the green component of each pixel and making that the gray value. Since the eye is more responsive to green than to the other two components, it's understandable why the answerer (and others) might have thought everything was working fine... (Pickle)
I don't really have the means to test this anymore, but I will take your word for it. +1 for the investigation. (Scorcher)
I think the problem is with the line int colors = kGreen;, which seems to force processing of only the green component of the pixels. To correct it, try int colors = kGreen | kBlue | kRed;. (Fogg)
13

A different approach, using CIFilter. It preserves the alpha channel and works with a transparent background:

+ (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];

    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(0.0) forKey:kCIInputSaturationKey];

    CIImage *outputImage = filter.outputImage;

    CGImageRef cgImageRef = [context createCGImage:outputImage fromRect:outputImage.extent];

    UIImage *result = [UIImage imageWithCGImage:cgImageRef];
    CGImageRelease(cgImageRef);

    return result;

}
Rogan answered 4/7, 2015 at 18:4 Comment(2)
This doesn't seem to preserve the size correctly. Using this scaled my images up 2x. (Ayesha)
Since I can't edit my comment: use [UIImage imageWithCGImage:cgImageRef scale:self.scale orientation:self.imageOrientation]; to ensure Retina support. (Ayesha)
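Folding those two comments into the answer, the last lines of the method would become something like this sketch (inside this class method, the scale and orientation have to come from the image argument rather than self):

// Presumed fix per the comments above: carry over scale and orientation
// so Retina images keep their original point size.
UIImage *result = [UIImage imageWithCGImage:cgImageRef
                                      scale:image.scale
                                orientation:image.imageOrientation];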
11

A Swift extension to UIImage, preserving alpha:

extension UIImage {

    private func convertToGrayScaleNoAlpha() -> CGImageRef {
        let colorSpace = CGColorSpaceCreateDeviceGray();
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, colorSpace, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage)
        return CGBitmapContextCreateImage(context)
    }


    /**
        Return a new image in shades of gray + alpha
    */
     func convertToGrayScale() -> UIImage {
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.Only.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, nil, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage);
        let mask = CGBitmapContextCreateImage(context)
        return UIImage(CGImage: CGImageCreateWithMask(convertToGrayScaleNoAlpha(), mask), scale: scale, orientation:imageOrientation)!
    }
}
Unknown answered 15/1, 2015 at 9:22 Comment(0)
9

Here is another good solution as a category method on UIImage. It's based on this blog post and its comments. But I fixed a memory issue here:

- (UIImage *)grayScaleImage {
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self CGImage]);
    // Create bitmap image info from pixel data in current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    // release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    // make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, nil, kCGImageAlphaOnly);
    // draw image into context with no colorspace
    CGContextDrawImage(context, imageRect, [self CGImage]);
    // create alpha bitmap mask from current context
    CGImageRef mask = CGBitmapContextCreateImage(context);
    // release graphics context
    CGContextRelease(context);
    // make UIImage from grayscale image with alpha mask
    CGImageRef cgImage = CGImageCreateWithMask(grayImage, mask);
    UIImage *grayScaleImage = [UIImage imageWithCGImage:cgImage scale:self.scale orientation:self.imageOrientation];
    // release the CG images
    CGImageRelease(cgImage);
    CGImageRelease(grayImage);
    CGImageRelease(mask);
    // return the new grayscale image
    return grayScaleImage;
}
Champlain answered 14/8, 2012 at 23:11 Comment(0)
9

A fast and efficient Swift 3 implementation for iOS 9/10. I feel this is efficient, having now tried every image-filtering method I could find for processing hundreds of images at a time (when downloading using AlamofireImage's ImageFilter option). I settled on this method as FAR better than any other I tried, for my use case, in terms of memory and speed.

func convertToGrayscale() -> UIImage? {

    UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
    let imageRect = CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height)
    let context = UIGraphicsGetCurrentContext()

    // Draw a white background
    context!.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
    context!.fill(imageRect)

    // optional: increase contrast with colorDodge before applying luminosity 
    // (my images were too dark when using just luminosity - you may not need this)
    self.draw(in: imageRect, blendMode: CGBlendMode.colorDodge, alpha: 0.7)


    // Draw the luminosity on top of the white background to get grayscale of original image
    self.draw(in: imageRect, blendMode: CGBlendMode.luminosity, alpha: 0.90)

    // optional: re-apply alpha if your image has transparency - based on user1978534's answer
    // (I haven't tested this, as I didn't have transparency - I just know this would be the syntax)
    // self.draw(in: imageRect, blendMode: CGBlendMode.destinationIn, alpha: 1.0)

    let grayscaleImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return grayscaleImage
}


Re the use of colorDodge: I initially had issues getting my images light enough to match the grayscale coloring produced by CIFilter("CIPhotoEffectTonal") - my results turned out too dark. I was able to get a decent match by applying CGBlendMode.colorDodge at ~0.7 alpha, which seems to increase the overall contrast.

Other color blend effects might work too - but I think you would want to apply them before luminosity, which is the grayscale filtering effect. I found this page very helpful as a reference on the different blend modes.



Re the efficiency gains I found: I needed to process hundreds of thumbnail images as they are loaded from a server (using AlamofireImage for async loading, caching, and applying a filter). I started to experience crashes when the total size of my images exceeded the cache size, so I experimented with other methods.

The CPU-based Core Image CIFilter approach was the first one I tried, and it wasn't memory-efficient enough for the number of images I'm handling.

I also tried applying a CIFilter via the GPU using EAGLContext(api: .openGLES3), which was actually even more memory-intensive - I got memory warnings at 450+ MB of usage while loading 200+ images.

I tried bitmap processing (i.e. CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)), which worked well, except I couldn't get a high enough resolution for a modern Retina device. Images were very grainy even when I added context.scaleBy(x: scaleFactor, y: scaleFactor).

So out of everything I tried, this method (drawing in a UIGraphics image context) proved VASTLY more efficient in speed and memory when applied as a filter with AlamofireImage: less than 70 MB of RAM when processing my 200+ images, and they load basically instantly rather than in the roughly 35 seconds the OpenGL ES methods took. I know these are not very scientific benchmarks. I will instrument it if anyone is very curious, though :)


And lastly, if you do need to pass this or another grayscale filter into AlamofireImage, this is how to do it (note that you must import AlamofireImage into your class to use ImageFilter):

public struct GrayScaleFilter: ImageFilter {
    public init() {
    }

    public var filter: (UIImage) -> UIImage {
        return { image in
            return image.convertToGrayscale() ?? image
        }
    }
}

To use it, create the filter and pass it into af_setImage like so:

let filter = GrayScaleFilter()
imageView.af_setImage(withURL: url, filter: filter)
Nonmoral answered 14/1, 2017 at 2:35 Comment(1)
Really fast! Tested! (Algid)
6
@interface UIImageView (Settings)

- (void)convertImageToGrayScale;

@end

@implementation UIImageView (Settings)

- (void)convertImageToGrayScale
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.image.size.width, self.image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.image.size.width, self.image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self.image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    self.image = newImage;
}

@end
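
Usage is then a one-line call on the image view, for example:

// Converts the image view's current image to grayscale in place.
[self.imageView convertImageToGrayScale];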
Etherize answered 9/1, 2017 at 11:25 Comment(1)
This answer needs more upvotes. It's almost as concise and elegant as using blend modes, and more performant. I just posted a similar answer that adds more details around opacity and Retina support. (Modulus)
2

I have yet another answer. This one is extremely performant and handles retina graphics as well as transparency. It expands on Sargis Gevorgyan's approach:

+ (UIImage*) grayScaleFromImage:(UIImage*)image opaque:(BOOL)opaque
{
    // NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];

    CGSize size = image.size;

    CGRect bounds = CGRectMake(0, 0, size.width, size.height);

    // Create bitmap content with current image size and grayscale colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = opaque ? 1 : 2;
    size_t bytesPerRow = bytesPerPixel * size.width * image.scale;
    CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, bitsPerComponent, bytesPerRow, colorSpace, opaque ? kCGImageAlphaNone : kCGImageAlphaPremultipliedLast);

    // create image from bitmap
    CGContextDrawImage(context, bounds, image.CGImage);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage* result = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace); // added: the color space was leaked in the original

    // performance results on iPhone 6S+ in Release mode.
    // Results are in photo pixels, not device pixels:
    //  ~ 5ms for 500px x 600px
    //  ~ 15ms for 2200px x 600px
    // NSLog(@"generating %d x %d @ %dx grayscale took %f seconds", (int)size.width, (int)size.height, (int)image.scale, [NSDate timeIntervalSinceReferenceDate] - start);

    return result;
}

Using blending modes instead is elegant, but copying to a grayscale bitmap is more performant because you only use one or two color channels instead of four. The opacity bool is meant to take in your UIView's opaque flag so you can opt out of using an alpha channel if you know you won't need one.
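
As a usage sketch (the answer doesn't say where this method is declared, so the ImageUtils class here is hypothetical):

// ImageUtils is a hypothetical host class for the method above.
UIImage *gray = [ImageUtils grayScaleFromImage:imageView.image
                                        opaque:imageView.opaque];
imageView.image = gray;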

I haven't tried the Core Image based solutions in this answer thread, but I would be very cautious about using Core Image if performance is important.

Modulus answered 23/6, 2017 at 17:20 Comment(1)
How would this compare to Natalia's solution? (Purslane)
0

That's my attempt at a fast conversion, drawing directly into a grayscale colorspace without enumerating each pixel. It works 10x faster than the CIFilter solutions.

@implementation UIImage (Grayscale)

static UIImage *grayscaleImageFromCIImage(CIImage *image, CGFloat scale)
{
    CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, image, kCIInputBrightnessKey, @0.0, kCIInputContrastKey, @1.1, kCIInputSaturationKey, @0.0, nil].outputImage;
    CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, kCIInputEVKey, @0.7, nil].outputImage;
    CGImageRef ref = [[CIContext contextWithOptions:nil] createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];
    CGImageRelease(ref);
    return result;
}

static UIImage *grayscaleImageFromCGImage(CGImageRef imageRef, CGFloat scale)
{
    NSInteger width = CGImageGetWidth(imageRef) * scale;
    NSInteger height = CGImageGetHeight(imageRef) * scale;

    NSMutableData *pixels = [NSMutableData dataWithLength:width*height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(pixels.mutableBytes, width, height, 8, width, colorSpace, 0);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(ref);

    return result;
}

- (UIImage *)grayscaleImage
{
    if (self.CIImage) {
        return grayscaleImageFromCIImage(self.CIImage, self.scale);
    } else if (self.CGImage) {
        return grayscaleImageFromCGImage(self.CGImage, self.scale);
    }

    return nil;
}

@end
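
Usage, assuming the category header is imported, is simply:

// Returns a new grayscale UIImage (or nil if the image has no backing data).
UIImage *gray = [image grayscaleImage];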
Begone answered 15/4, 2016 at 6:15 Comment(2)
Converts a transparent background to black. (Fivestar)
@Fivestar looks like additional image masking can help: incurlybraces.com/… (Begone)
0

This is an updated Swift solution based on @Natalia's answer, but without the now-deprecated UIGraphics begin/end image context methods:

extension UIImage {
   
    var grayscale: UIImage {
    
        let grayImage = UIGraphicsImageRenderer(size: size).image { (ctx) in
        
            let context = ctx.cgContext
            let imageRect = CGRect(origin: .zero, size: size)
        
            // Fill with a white background
            context.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
            context.fill(imageRect)
        
            // Optional: increase contrast with colorDodge before applying luminosity
            draw(in: imageRect, blendMode: .colorDodge, alpha: 0.7)
        
            // Draw the luminosity on top of the white background to get grayscale of original image
            draw(in: imageRect, blendMode: .luminosity, alpha: 0.9)
        
            // Optional: re-apply alpha if your image has transparency
            draw(in: imageRect, blendMode: .destinationIn, alpha: 1.0)
        }
    
        return grayImage
    }
}
Humoral answered 25/4 at 19:39 Comment(0)
