iOS 6: How to use the YUV to RGB conversion feature from CVPixelBufferRef to CIImage
Since iOS 6, Apple has provided support for creating a CIImage directly from native YUV data through this call:

initWithCVPixelBuffer:options:

In the Core Image Programming Guide, they mention this feature:

Take advantage of the support for YUV images in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color transform.

options = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

But I am unable to use it properly. I have raw YUV data, so this is what I did:

void *YUV[3] = {data[0], data[1], data[2]};
size_t planeWidth[3] = {width, width/2, width/2};
size_t planeHeight[3] = {height, height/2, height/2};
size_t planeBytesPerRow[3] = {stride, stride/2, stride/2};

CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                                  width,
                                                  height,
                                                  kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                                  nil,
                                                  width*height*1.5,
                                                  3,
                                                  YUV,
                                                  planeWidth,
                                                  planeHeight,
                                                  planeBytesPerRow,
                                                  nil,
                                                  nil,
                                                  nil,
                                                  &pixelBuffer);

NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                           @(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };

CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];

I am getting nil for image. Any idea what I am missing?

EDIT: I added lock and unlock of the base address around the call. I also dumped the data of the pixel buffer to ensure the pixel buffer properly holds the data, so it looks like something is wrong with the init call only. The CIImage object still returns nil.

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Pryor answered 8/11, 2013 at 4:21 Comment(2)
Were you able to resolve it? If yes, please post your solution. – Gusman
Hi Rugger, please provide the solution if you were able to solve it. Thanks. – Demilune

There should be an error message in the console: initWithCVPixelBuffer failed because the CVPixelBufferRef is not IOSurface backed. See Apple's Technical Q&A QA1781 for how to create an IOSurface-backed CVPixelBuffer:

Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed...

...To do that, you must specify kCVPixelBufferIOSurfacePropertiesKey in the pixelBufferAttributes dictionary when creating the pixel buffer using CVPixelBufferCreate().

NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSDictionary dictionary], (id)kCVPixelBufferIOSurfacePropertiesKey,
    nil];
// You may add other keys as appropriate, e.g. kCVPixelBufferPixelFormatTypeKey,
// kCVPixelBufferWidthKey, kCVPixelBufferHeightKey, etc.

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(... (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);

Alternatively, you can make IOSurface-backed CVPixelBuffers using CVPixelBufferPoolCreatePixelBuffer() from an existing pixel buffer pool, if the pixelBufferAttributes dictionary provided to CVPixelBufferPoolCreate() includes kCVPixelBufferIOSurfacePropertiesKey.
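Putting QA1781's advice together with the question's setup, here is a minimal sketch of the whole flow, assuming the question's raw planar data in data[0..2] and its width, height, and stride variables: allocate an IOSurface-backed buffer with CVPixelBufferCreate(), copy the planes into it, then create the CIImage. I use the bi-planar format here because it is the camera-native layout this iOS 6 feature targets, which means interleaving the separate Cb and Cr planes by hand; treat this as an illustration rather than tested code.

#import <CoreVideo/CoreVideo.h>
#import <CoreImage/CoreImage.h>

// Request an IOSurface-backed pixel buffer (an empty dictionary is enough).
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };

CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault,
                                   width,
                                   height,
                                   kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                   (__bridge CFDictionaryRef)attrs,
                                   &pixelBuffer);
if (ret != kCVReturnSuccess) {
    // handle the error
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Copy the luma plane row by row; the buffer's own bytes-per-row may be padded.
uint8_t *dstY = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t dstYStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (size_t row = 0; row < height; row++) {
    memcpy(dstY + row * dstYStride, (uint8_t *)data[0] + row * stride, width);
}

// Interleave the separate Cb and Cr planes into the single bi-planar CbCr plane.
uint8_t *dstUV = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t dstUVStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (size_t row = 0; row < height / 2; row++) {
    uint8_t *srcCb = (uint8_t *)data[1] + row * (stride / 2);
    uint8_t *srcCr = (uint8_t *)data[2] + row * (stride / 2);
    uint8_t *dst   = dstUV + row * dstUVStride;
    for (size_t col = 0; col < width / 2; col++) {
        dst[2 * col]     = srcCb[col];
        dst[2 * col + 1] = srcCr[col];
    }
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// With an IOSurface-backed buffer, this should no longer return nil.
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer);

The same idea may also work with the three-plane kCVPixelFormatType_420YpCbCr8PlanarFullRange if Core Image accepts it on your OS version; the key point from QA1781 is simply that the buffer must come from CVPixelBufferCreate() (or a pool) with kCVPixelBufferIOSurfacePropertiesKey set, not from CVPixelBufferCreateWithPlanarBytes().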

Deterge answered 28/10, 2014 at 7:0 Comment(0)

I am working on a similar problem and kept finding that same quote from Apple without any further information on how to work in a YUV color space. I came upon the following:

By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See Quartz 2D Programming Guide for information on creating and using CGColorspace objects.) With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video source such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.

I note that there are no YUV color spaces, only Gray and RGB; and their calibrated cousins. I'm not sure how to convert the color space yet, but will certainly report here if I find out.
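For reference, this is roughly the per-pixel math such a YUV-to-RGB conversion performs, using the full-range BT.601 matrix; the helper names below (clamp8, ycbcr601FullRangeToRGB) are my own and not from the documentation quoted above.

#include <stdint.h>

// Clamp a float to the 0...255 range of an 8-bit channel.
static inline uint8_t clamp8(float v) {
    return (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

// Full-range BT.601 conversion of one Y'CbCr pixel to RGB.
static void ycbcr601FullRangeToRGB(uint8_t yp, uint8_t cb, uint8_t cr,
                                   uint8_t *r, uint8_t *g, uint8_t *b) {
    float y = yp;
    float u = cb - 128.0f;   // Cb centered around zero
    float v = cr - 128.0f;   // Cr centered around zero
    *r = clamp8(y + 1.402f * v);
    *g = clamp8(y - 0.344136f * u - 0.714136f * v);
    *b = clamp8(y + 1.772f * u);
}

Doing this per pixel on the CPU is exactly the cost the iOS 6 initWithCVPixelBuffer:options: path is meant to avoid, so the IOSurface-backed route described in the other answer is still preferable when it is available.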

Tactical answered 4/12, 2014 at 0:5 Comment(0)
