What is the most efficient way to display CVImageBufferRef on iOS

I have CMSampleBufferRefs which I decode using VTDecompressionSessionDecodeFrame, which produces a CVImageBufferRef once decoding of a frame has completed, so my question is...

What would be the most efficient way to display these CVImageBufferRefs in a UIView?

I have succeeded in converting the CVImageBufferRef to a CGImageRef and displaying those by setting the CGImageRef as the CALayer's contents, but that requires the decompression session to be configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
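
(For context, a minimal sketch of where that dictionary goes when the session is created; `formatDescription` and `decompressionOutputCallback` are assumed to exist elsewhere in the app.)

    // Sketch: creating a VTDecompressionSession that outputs 32BGRA pixel buffers.
    // Requires #import <VideoToolbox/VideoToolbox.h>
    NSDictionary *destinationAttributes = @{ (id)kCVPixelBufferPixelFormatTypeKey:
                                                 @(kCVPixelFormatType_32BGRA) };

    VTDecompressionOutputCallbackRecord callbackRecord;
    callbackRecord.decompressionOutputCallback = decompressionOutputCallback; // your VTDecompressionOutputCallback
    callbackRecord.decompressionOutputRefCon   = (__bridge void *)self;

    VTDecompressionSessionRef decompressionSession = NULL;
    OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                   formatDescription,   // built from the stream's SPS/PPS
                                                   NULL,                // decoder specification
                                                   (__bridge CFDictionaryRef)destinationAttributes,
                                                   &callbackRecord,
                                                   &decompressionSession);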

Here is example code showing how I've converted a CVImageBufferRef to a CGImageRef (note: the CVPixelBuffer data has to be in 32BGRA format for this to work):

    CVPixelBufferLockBaseAddress(cvImageBuffer, 0);
    // get image properties
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
    size_t bytesPerRow   = CVPixelBufferGetBytesPerRow(cvImageBuffer);
    size_t width         = CVPixelBufferGetWidth(cvImageBuffer);
    size_t height        = CVPixelBufferGetHeight(cvImageBuffer);

    /* Create a CGImageRef from the CVImageBufferRef */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef    cgContext  = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);

    // release context and colorspace, and unlock the pixel buffer
    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);

    // now the CGImageRef can be displayed either by setting it as a CALayer's
    // contents or by creating a [UIImage imageWithCGImage:cgImage] that can be
    // displayed in a UIImageView ...

The WWDC 2014 session 513 (https://developer.apple.com/videos/wwdc/2014/#513) hints that the YUV -> RGB colorspace conversion (on the CPU?) can be avoided if YUV-capable OpenGL ES magic is used - I wonder what that might be and how it could be accomplished?

Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera using OpenGL ES 2.0, with separate textures for the Y and UV components and a fragment shader program that does the YUV to RGB colorspace conversion on the GPU - is all of that really required, or is there some more straightforward way this can be done?
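
For reference, the heart of that GLCameraRipple-style path is CVOpenGLESTextureCache, which maps the Y and CbCr planes of a bi-planar (kCVPixelFormatType_420YpCbCr8BiPlanar*) pixel buffer into GL textures without a CPU copy. A minimal sketch, assuming the decompression session was configured for a bi-planar YUV output format and that `_textureCache` was created once with CVOpenGLESTextureCacheCreate() against the current EAGLContext:

    CVOpenGLESTextureRef lumaTexture = NULL, chromaTexture = NULL;
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // Plane 0: luminance (Y), one 8-bit component per pixel.
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer,
        NULL, GL_TEXTURE_2D, GL_LUMINANCE, (GLsizei)width, (GLsizei)height,
        GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);

    // Plane 1: chrominance (CbCr), two 8-bit components per pixel at half resolution.
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer,
        NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, (GLsizei)(width / 2), (GLsizei)(height / 2),
        GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chromaTexture);

    // Bind both textures and let the fragment shader do the YUV -> RGB math on the GPU.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture), CVOpenGLESTextureGetName(lumaTexture));
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture), CVOpenGLESTextureGetName(chromaTexture));

    // After drawing: CFRelease() both textures and CVOpenGLESTextureCacheFlush(_textureCache, 0).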

NOTE: In my use case I'm unable to use AVSampleBufferDisplayLayer, due to the way the input to the decompression becomes available.

Segmentation answered 29/9, 2015 at 17:16 Comment(0)

Update: The original answer below does not work because kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey is unavailable on iOS.


UIView is backed by a CALayer whose contents property supports multiple types of images. As detailed in my answer to a similar question for macOS, it is possible to use CALayer to render a CVPixelBuffer’s backing IOSurface. (Caveat: I have only tested this on macOS.)
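
Roughly, the macOS version of that idea looks like the sketch below; the names are illustrative, and (per the update above) the pixel buffer attribute involved is not available on iOS:

    // Ask VideoToolbox / CoreVideo for IOSurface-backed, Core Animation compatible buffers.
    NSDictionary *destinationAttributes = @{
        (id)kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey: @YES
    };

    // In the decompression output callback, hand the buffer's IOSurface to a CALayer.
    static void displayPixelBuffer(CVPixelBufferRef pixelBuffer, CALayer *videoLayer)
    {
        CFRetain(pixelBuffer); // keep the buffer alive while its surface is on screen
        dispatch_async(dispatch_get_main_queue(), ^{
            videoLayer.contents = (__bridge id)CVPixelBufferGetIOSurface(pixelBuffer);
            // a real implementation would CFRelease() this buffer once the next frame replaces it
        });
    }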

Alunite answered 22/1, 2019 at 3:39 Comment(0)

If you're getting your CVImageBufferRef from a CMSampleBufferRef that you're receiving from captureOutput:didOutputSampleBuffer:fromConnection:, you don't need to make that conversion and can get the image data directly out of the CMSampleBufferRef. Here's the code:

NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
UIImage *frameImage = [UIImage imageWithData:imageData];

The API description doesn't provide any info about whether 32BGRA is supported or not, and it produces the imageData, along with any metadata, in JPEG format without any compression applied. If your goal is to display the image on screen or use it with a UIImageView, this is the quick way.
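
A hedged usage note: the capture delegate runs on a background queue, so hand the result to UIKit on the main thread (`self.imageView` here is a hypothetical outlet):

    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = frameImage; // UIKit must be used from the main thread
    });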

Crinoline answered 2/10, 2015 at 20:10 Comment(1)
Thanks, but the frames are captured elsewhere and the encoded H.264 NALUs are transmitted over the network among other application data. Thus the application receiving these encoded frames over the network first needs to decode the CMSampleBuffers containing the H.264 within their CMBlockBuffers to get access to the decoded frames and pixel data, and after that it needs to display those decoded video frames. Now I'm looking for the most efficient way of doing this (less memory copying between CPU and GPU memory, best battery and overall computing efficiency...) – Segmentation
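
For anyone with the same receive-and-decode pipeline, below is a rough sketch of wrapping one received length-prefixed (AVCC) H.264 frame in a CMSampleBuffer for VTDecompressionSessionDecodeFrame; `naluData`, `naluLength`, `formatDescription` (built from the SPS/PPS via CMVideoFormatDescriptionCreateFromH264ParameterSets) and `decompressionSession` are assumptions standing in for whatever the receiving app already has:

    // Wrap the frame bytes in a CMBlockBuffer without copying them.
    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                         naluData, naluLength,
                                                         kCFAllocatorNull,   // we keep ownership of the memory
                                                         NULL, 0, naluLength, 0,
                                                         &blockBuffer);

    // Wrap the block buffer in a CMSampleBuffer tagged with the H.264 format description.
    CMSampleBufferRef sampleBuffer = NULL;
    const size_t sampleSizes[] = { naluLength };
    if (status == kCMBlockBufferNoErr) {
        status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, NULL,
                                      formatDescription,
                                      1,          // one sample
                                      0, NULL,    // timing info omitted in this sketch
                                      1, sampleSizes,
                                      &sampleBuffer);
    }

    // Feed it to the decoder; decoded CVImageBufferRefs arrive in the output callback.
    if (status == noErr && sampleBuffer) {
        VTDecompressionSessionDecodeFrame(decompressionSession, sampleBuffer,
                                          kVTDecodeFrame_EnableAsynchronousDecompression,
                                          NULL, NULL);
        CFRelease(sampleBuffer);
    }
    if (blockBuffer) CFRelease(blockBuffer);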
