I have CMSampleBufferRefs which I decode using VTDecompressionSessionDecodeFrame, which produces a CVImageBufferRef once decoding of a frame has completed. So my question is:
What would be the most efficient way to display these CVImageBufferRefs in a UIView?
I have succeeded in converting the CVImageBufferRef to a CGImageRef and displaying it by setting the CGImageRef as a CALayer's contents, but that only works because the DecompressionSession has been configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
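For context, here is roughly how that attribute dictionary gets passed when the session is created (just a sketch; formatDesc and callbackRecord stand in for the CMVideoFormatDescriptionRef and VTDecompressionOutputCallbackRecord I already have):

#import <VideoToolbox/VideoToolbox.h>

NSDictionary *destinationAttributes = @{ (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA) };
VTDecompressionSessionRef session = NULL;
OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                               formatDesc,
                                               NULL, // decoder specification
                                               (__bridge CFDictionaryRef)destinationAttributes,
                                               &callbackRecord,
                                               &session);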
Here is how I've converted a CVImageBufferRef to a CGImageRef (note: the pixel buffer data has to be in 32BGRA format for this to work):
CVPixelBufferLockBaseAddress(cvImageBuffer, 0);

// get image properties
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cvImageBuffer);
size_t width = CVPixelBufferGetWidth(cvImageBuffer);
size_t height = CVPixelBufferGetHeight(cvImageBuffer);

// create a CGImageRef from the CVImageBufferRef (BGRA byte ordering)
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);

// release context and colorspace, and unlock the pixel buffer
CGContextRelease(cgContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);

// now the CGImageRef can be displayed either by setting it as a CALayer's
// contents, or by creating a [UIImage imageWithCGImage:cgImage] that can be
// shown in a UIImageView ...
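And for completeness, this is how I currently put the resulting CGImageRef on screen (videoLayer here is just the CALayer backing my view):

dispatch_async(dispatch_get_main_queue(), ^{
    videoLayer.contents = (__bridge id)cgImage;
    CGImageRelease(cgImage); // the layer retains the image, so drop this reference
});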
The #WWDC14 session 513 (https://developer.apple.com/videos/wwdc/2014/#513) hints that the YUV -> RGB colorspace conversion (done on the CPU?) can be avoided if YUV-capable GLES magic is used. I wonder what that might be and how it could be accomplished?
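From what I can gather, that "magic" would be CVOpenGLESTextureCacheCreateTextureFromImage, which maps the pixel buffer's planes straight into GL textures without a CPU copy. A rough sketch of what I imagine (assuming the session is configured for kCVPixelFormatType_420YpCbCr8BiPlanarFullRange output and glContext is my current EAGLContext; both names are mine):

#import <CoreVideo/CVOpenGLESTextureCache.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, glContext, NULL, &textureCache);

size_t width  = CVPixelBufferGetWidth(cvImageBuffer);
size_t height = CVPixelBufferGetHeight(cvImageBuffer);

CVOpenGLESTextureRef lumaTexture = NULL;
CVOpenGLESTextureRef chromaTexture = NULL;

// plane 0: full-resolution Y (single-channel, needs the EXT_texture_rg extension)
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    cvImageBuffer, NULL, GL_TEXTURE_2D, GL_RED_EXT,
    (GLsizei)width, (GLsizei)height, GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &lumaTexture);

// plane 1: half-resolution interleaved CbCr (two-channel)
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    cvImageBuffer, NULL, GL_TEXTURE_2D, GL_RG_EXT,
    (GLsizei)width / 2, (GLsizei)height / 2, GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &chromaTexture);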
Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera, using OpenGL ES with two separate textures for the Y and UV components and a fragment shader program that does the YUV to RGB colorspace conversion on the GPU. Is all of that really required, or is there some more straightforward way to do it?
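For reference, the per-pixel conversion such a shader performs would look roughly like this sketch (the varying/uniform names and the BT.601 full-range matrix are mine, not copied from the Apple sample; video-range buffers would additionally need yuv.x = (yuv.x - 16.0/255.0) * 1.164):

static const char *kFragmentShader =
    "varying highp vec2 texCoord;\n"
    "uniform sampler2D samplerY;\n"
    "uniform sampler2D samplerUV;\n"
    "void main() {\n"
    "    mediump vec3 yuv;\n"
    "    yuv.x  = texture2D(samplerY,  texCoord).r;\n"
    "    yuv.yz = texture2D(samplerUV, texCoord).rg - vec2(0.5, 0.5);\n"
    "    // BT.601 full-range YCbCr -> RGB, column-major mat3\n"
    "    mediump mat3 m = mat3(1.0,    1.0,    1.0,\n"
    "                          0.0,   -0.344,  1.772,\n"
    "                          1.402, -0.714,  0.0);\n"
    "    gl_FragColor = vec4(m * yuv, 1.0);\n"
    "}\n";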
NOTE: In my use case I'm unable to use AVSampleBufferDisplayLayer, due to the way the input to the decompression session becomes available.