Getting pixel data from UIImageView -- works on simulator, not device
Based on the responses to a previous question, I've created a category on UIImageView for extracting pixel data. This works fine in the simulator, but not when deployed to the device. I should say not always -- the odd thing is that it does fetch the correct pixel colour if point.x == point.y; otherwise, it gives me pixel data for a pixel on the other side of that line, as if mirrored. (So a tap on a pixel in the lower-right corner of the image gives me the pixel data for a corresponding pixel in the upper-left, but tapping on a pixel in the lower-left corner returns the correct pixel colour). The touch coordinates (CGPoint) are correct.

What am I doing wrong?

Here's my code:

@interface UIImageView (PixelColor)
- (UIColor*)getRGBPixelColorAtPoint:(CGPoint)point;
@end

@implementation UIImageView (PixelColor)

- (UIColor*)getRGBPixelColorAtPoint:(CGPoint)point
{
    UIColor* color = nil;

    CGImageRef cgImage = [self.image CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    NSUInteger x = (NSUInteger)floor(point.x);
    NSUInteger y = height - (NSUInteger)floor(point.y);

    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8* data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;
        UInt8 red = data[offset];
        UInt8 blue = data[offset+1];
        UInt8 green = data[offset+2];
        UInt8 alpha = data[offset+3];
        CFRelease(bitmapData);
        color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
    }

    return color;
}

@end
Di answered 15/12, 2009 at 23:59 Comment(3)
can you check the value of self.image.imageOrientation? it's possible that the image you're using is in UIImageOrientationLeftMirrored which would be the reflection you're seeing, though I don't know why it would be that way only on the device....Arbil
It's UIImageOrientationUp on both simulator and device. I'm not sure that "left mirrored" is the correct interpretation of what's happening, because the reflection is (seemingly) taking place along the diagonal y=x, and UIImageOrientationLeftMirrored is a simple rotation of the entire image "90 deg CCW" according to the SDK.Di
More info: if I swap how x and y are being calculated, the behaviour is reversed -- it works on the device, but not on the simulator. (Although the exact positioning on the device seems to be off by a few degrees CCW, but it could be the imprecision of my finger versus using the mouse)Di

I think your R B G ordering is wrong. You have:

UInt8 red =   data[offset];     
UInt8 blue =  data[offset+1];
UInt8 green = data[offset+2];

But don't you really mean R G B? :

UInt8 red =   data[offset];     
UInt8 green = data[offset+1];
UInt8 blue =  data[offset+2];

But even with that fixed there's still a problem: it turns out Apple byte-swaps the R and B values on the device, but not in the simulator.

I had a similar simulator/device issue with a PNG's pixel buffer returned by CFDataGetBytePtr.

This resolved the issue for me:

#if TARGET_IPHONE_SIMULATOR
        UInt8 red =   data[offset];
        UInt8 green = data[offset + 1];
        UInt8 blue =  data[offset + 2];
#else
        //on device
        UInt8 blue =  data[offset];       //notice red and blue are swapped
        UInt8 green = data[offset + 1];
        UInt8 red =   data[offset + 2];
#endif

Not sure if this will fix your issue, but your misbehaving code looks close to what mine looked like before I fixed it.

One last thing: I believe the simulator will let you access your pixel buffer data[] even after CFRelease(bitmapData) is called. On the device this is not the case in my experience. Your code shouldn't be affected, but in case this helps someone else I thought I'd mention it.

Juristic answered 10/6, 2010 at 13:1 Comment(1)
Took me some time to finally get back to this, but your code was bang-on. The combination of mixing up G and B, and not knowing about the byte ordering change Xcode does behind the scenes, was a fatal combination.Di

You could try the following alternative approach:

  • create a CGBitmapContext
  • draw the image into the context
  • call CGBitmapContextGetData on the context to get the underlying data
  • work out your offset into the raw data (based on how you created the bitmap context)
  • extract the value

This approach works for me on the simulator and device.

Quackenbush answered 17/12, 2009 at 8:34 Comment(3)
I'm trying to avoid that because of the overhead involved (this function could be called more than once a second)Di
You could cache the raw data, then the overhead is reduced to an array lookup, unless the image is continually changing.Quackenbush
That's true. However, I still would like to know what the underlying problem is in my code above.Di

It looks like in the code posted in the original question, instead of:

NSUInteger x = (NSUInteger)floor(point.x);
NSUInteger y = height - (NSUInteger)floor(point.y);

It should be:

NSUInteger x = (NSUInteger)floor(point.x);
NSUInteger y = (NSUInteger)floor(point.y);
Groggy answered 26/12, 2010 at 19:12 Comment(0)
