Getting a CGImage from CIImage
I have a UIImage which is loaded from a CIImage with:

tempImage = [UIImage imageWithCIImage:ciImage];

I need to crop tempImage to a specific CGRect, and the only way I know how to do that is via its CGImage. The problem is that the iOS 6.0 documentation says:

CGImage
If the UIImage object was initialized using a CIImage object, the value of the property is NULL.

How can I convert a CIImage to a CGImage? I'm using this code, but it has a memory leak (and I can't work out where):

+(UIImage*)UIImageFromCIImage:(CIImage*)ciImage {  
    CGSize size = ciImage.extent.size;  
    UIGraphicsBeginImageContext(size);  
    CGRect rect;  
    rect.origin = CGPointZero;  
    rect.size   = size;  
    UIImage *remImage = [UIImage imageWithCIImage:ciImage];  
    [remImage drawInRect:rect];  
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();  
    UIGraphicsEndImageContext();  
    remImage = nil;  
    ciImage = nil;  
    //
    return result;  
}
Applecart answered 18/1, 2013 at 15:44 Comment(3)
You mention that you need the CGImage to do the crop. As Joris Kluivers said, you can do the crop without the CGImage by using the CICrop filter on the CIImage. Is there anything else you need the CGImage for? If so, what? – Appaloosa
Also, regarding the memory leak, did you try the Leaks template in Instruments? Between the Leaks instrument and the Allocations instrument's Heapshot tool, you should be able to nail down where your app is leaking or accumulating memory. – Appaloosa
@PeterHosey I did, and I found that for some reason I have over 200 live instances of CIImage and over 100 of CGImage, all originating from this method. I just don't see where. – Applecart
See the CIContext documentation for createCGImage:fromRect:

CGImageRef img = [myContext createCGImage:ciImage fromRect:[ciImage extent]];

From an answer to a similar question: https://mcmap.net/q/410714/-save-ciimage-by-converting-to-cgimage-throws-an-error

Also, since you have a CIImage to begin with, you could use a Core Image filter (CICrop) to crop the image before rendering it.
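For example, a minimal sketch of cropping before rendering; faceRect is a hypothetical crop rectangle in Core Image's bottom-left-origin coordinate space, and imageByCroppingToRect: is the convenience wrapper around the CICrop filter:

```objective-c
// Crop lazily on the CIImage, then render only the cropped pixels.
CIImage *cropped = [ciImage imageByCroppingToRect:faceRect];
CIContext *context = [CIContext contextWithOptions:nil];
// Only the pixels inside the crop rect are ever computed.
CGImageRef cgImage = [context createCGImage:cropped fromRect:[cropped extent]];
UIImage *result = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage:fromRect: returns a +1 reference
```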

Yerkovich answered 18/1, 2013 at 15:59 Comment(7)
Could and should: Core Image is extremely lazy, in that createCGImage:fromRect: (or any other method that requires finished pixels) is the point at which all the work actually gets done; no actual filtering has taken place before then. Cropping with a Core Image filter will actually save quite a bit of work (proportional to however much you crop out), since then you won't be asking for the cropped-out pixels, so they won't be rendered at all. (Of course, the other way would be to pass the crop rectangle as the fromRect:, which will have the same effect.) – Appaloosa
@JorisKluivers Thanks for the answer, but I get the same result: CIContext *context = [CIContext new]; CGImageRef ref = [context createCGImage:ciImage fromRect:[ciImage extent]]; tempImage = [UIImage imageWithCGImage:ref]; CGImageRelease(ref); NSLog(@"tempImage: %f %f", tempImage.size.width, tempImage.size.height); outputs: tempImage: 0.000000 0.000000 – Applecart
@PeterHosey You're right, but unfortunately I don't have the crop info when I'm doing the conversion, and I need to convert first because I use the CGImage. Thanks – Applecart
@AndreiStoleru: You might try using contextWithOptions: instead of new to create the context. Also, what do you get if you log the value of [ciImage extent]? – Appaloosa
@PeterHosey NSLog(@"extent: %f", ciImage.extent.size.width); DEBUG -[CamViewController captureOutput:didOutputSampleBuffer:fromConnection:]:180 - extent: 1280.000000 Also tried with contextWithOptions:, but same result. – Applecart
Here is how you create your context to get the CGImage: CIContext *myContext = [CIContext contextWithOptions:nil]; CGImageRef imgRef = [myContext createCGImage:ciImage fromRect:[ciImage extent]]; UIImage *imageFromCGImage = [UIImage imageWithCGImage:imgRef]; CGImageRelease(imgRef); – Prolific
Better not to run this conversion on the main thread; it costs a lot of time. – Rye
Swift 3, Swift 4 and Swift 5

Here is a nice little function to convert a CIImage to CGImage in Swift.

func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    return context.createCGImage(inputImage, from: inputImage.extent)
}

On macOS or tvOS, you would typically use:

let ctx = CIContext(options: [.useSoftwareRenderer: false])
let cgImage = ctx.createCGImage(output, from: output.extent)

Several other CIContextOption hints are available, such as .allowLowPower, .cacheIntermediates, .highQualityDownsample, the priority options, and so on.

Notes:

  • CIContext(options: nil) may fall back to a software renderer and can be quite slow. To improve performance, create the context with CIContext(options: [CIContextOption.useSoftwareRenderer: false]) - this forces rendering to run on the GPU, which can be much faster.
  • If you convert more than once, create the CIContext once and reuse it, as Apple recommends - contexts are expensive to create.
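A minimal caching sketch (the SharedCIContext name is just for illustration):

```swift
import CoreImage

// Create the (expensive) context once and reuse it for every conversion.
enum SharedCIContext {
    static let context = CIContext(options: [.useSoftwareRenderer: false])

    static func cgImage(from image: CIImage) -> CGImage? {
        context.createCGImage(image, from: image.extent)
    }
}
```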
Entomostracan answered 22/8, 2017 at 10:19 Comment(0)
After some googling I found this method which converts a CMSampleBufferRef to a CGImage:

+ (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    // Assumes the capture output is configured for kCVPixelFormatType_32BGRA.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);       // Lock the image buffer

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); // BGRA buffers are non-planar
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Do NOT release imageBuffer here: it is owned by the sample buffer.

    return newImage; // Caller is responsible for calling CGImageRelease()
}

(but I closed the tab so I don't know where I got it from)

Applecart answered 18/1, 2013 at 17:50 Comment(4)
This doesn't involve a CIImage at all. So, really you were intending to create a CGImage from a CMSampleBuffer all along, and the CIImage was just the means you had in mind to do that? – Appaloosa
@PeterHosey As you could probably deduce, I'm getting the sampleBuffer from AVCaptureOutput, and I'm using the CIImage for face detection. The final goal is to crop just the face from the captured image, and because I'm too stupid to understand CIImage and CGImage, I searched for another solution: CMSampleBuffer. PS: I accepted the answer from Joris because that's the right answer to my question. – Applecart
Ah, face detection. That's legitimate. Have you looked at AVCaptureMetadataOutput and AVMetadataFaceObject yet? – Appaloosa
@PeterHosey Thanks for the suggestion, it seems it's faster than CIDetector ;) – Applecart
