I have a program that displays the camera input in real time and reads the color value of the middle pixel. In my captureOutput: delegate method I grab the CMSampleBuffer from an AVCaptureSession output (whose image buffer I read as a CVPixelBuffer), and then I pull the RGB values of a pixel with the following code:
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Get a pointer to the start of the pixel data
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
NSLog(@"Middle pixel: %hhu", pixel[((width*height)*4)/2]);
// Read the channels, assuming 4 bytes per pixel in BGRA order
int red = pixel[(((width*height)*4)/2)+2];
int green = pixel[(((width*height)*4)/2)+1];
int blue = pixel[((width*height)*4)/2];
int alpha = 1;
UIColor *color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha)];
// Unlock the base address once we're done reading
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
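For context, the snippet above runs inside the video data output's delegate callback; my setup looks roughly like the following (the configuration details are from memory, so treat this as approximate rather than my verbatim code):

// The pixel-reading code above lives in this AVCaptureVideoDataOutputSampleBufferDelegate
// callback; the output itself is added to the AVCaptureSession elsewhere.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // ... pixel-reading code shown above ...
}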
I thought that the formula ((width*height)*4)/2 would get me the middle pixel, but instead it gives me the top-middle pixel of the image. What formula do I need to use to access the pixel in the middle of the screen? I'm kind of stuck because I don't really know the internal structure of these pixel buffers.
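My current guess, and it is only a guess since I haven't verified the layout, is that the buffer is stored row by row with bytesPerRow bytes per row, so the offset should probably be built from the row and column separately rather than from width*height. The 4-bytes-per-pixel BGRA assumption comes from the fact that I'm already reading blue, green, and red at offsets 0, 1, and 2. Is something like this closer to what I should be doing?

// Guess: offset = row * bytesPerRow + column * bytesPerPixel,
// assuming 4 bytes per pixel (BGRA) and that bytesPerRow may include
// padding beyond width * 4.
size_t middleRow = height / 2;
size_t middleColumn = width / 2;
size_t offset = (middleRow * bytesPerRow) + (middleColumn * 4);
int blue = pixel[offset];
int green = pixel[offset + 1];
int red = pixel[offset + 2];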
In the future I would like to grab the 4 middle pixels and average them for a more accurate color reading, but for now I would just like to get an idea of how these things work.
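To show what I mean by averaging, this is roughly what I have in mind for the 2x2 block around the center, built on the same untested row/column guess as above:

// Sum the 2x2 block of pixels around the center of the frame,
// using the same row * bytesPerRow + column * 4 guess as above.
size_t centerColumn = width / 2;
size_t centerRow = height / 2;
int redSum = 0, greenSum = 0, blueSum = 0;
for (size_t dy = 0; dy < 2; dy++) {
    for (size_t dx = 0; dx < 2; dx++) {
        size_t offset = ((centerRow - 1 + dy) * bytesPerRow) + ((centerColumn - 1 + dx) * 4);
        blueSum += pixel[offset];
        greenSum += pixel[offset + 1];
        redSum += pixel[offset + 2];
    }
}
// Average the four samples and build the color.
UIColor *averageColor = [UIColor colorWithRed:(redSum / 4.0f) / 255.0f
                                        green:(greenSum / 4.0f) / 255.0f
                                         blue:(blueSum / 4.0f) / 255.0f
                                        alpha:1.0f];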