Detect touches only on non-transparent pixels of UIImageView, efficiently
How would you detect touches only on non-transparent pixels of a UIImageView, efficiently?

Consider an image like the one below, displayed in a UIImageView. The goal is to make the gesture recognisers respond only when the touch happens in the non-transparent (black, in this case) area of the image.

[image: a black zero (0) glyph on a transparent background]

Ideas

  • Override hitTest:withEvent: or pointInside:withEvent:, although this approach might be terribly inefficient as these methods get called many times during a touch event.
  • Checking if a single pixel is transparent might create unexpected results, as fingers are bigger than one pixel. Checking a circular area of pixels around the hit point, or trying to find a transparent path towards an edge might work better.

Bonus

  • It'd be nice to differentiate between outer and inner transparent pixels of an image. In the example, the transparent pixels inside the zero should also be considered valid.
  • What happens if the image has a transform?
  • Can the image processing be hardware accelerated?
Halfhour asked 8/11/2012 at 15:20 · Comment (1)
I open-sourced what we ended up doing here: github.com/robotmedia/RMShapedImageView – Halfhour
Here's my quick implementation (based on Retrieving a pixel alpha value for a UIImage):

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Based on https://mcmap.net/q/129266/-retrieving-a-pixel-alpha-value-for-a-uiimage
    // Render the single pixel under `point` into a 1x1 alpha-only context.
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0f;
    BOOL transparent = alpha < 0.01f;
    return !transparent;
}

This assumes that the image is in the same coordinate space as the point. If any scaling is applied, you may have to convert the point before checking the pixel data.
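For the scaled case, a minimal sketch of that point conversion (the function name is mine; this assumes the image is stretched to fill the view, i.e. UIViewContentModeScaleToFill, so aspect-fit letterboxing is not handled):

```c
typedef struct { double x, y; } Point;

/* Map a touch point in view coordinates to image pixel coordinates,
 * assuming the image is stretched to fill the view (scale-to-fill).
 * Aspect-fit/fill content modes would also need the letterbox offset. */
Point viewPointToImagePoint(Point p, double viewW, double viewH,
                            double imageW, double imageH) {
    Point q = { p.x * imageW / viewW, p.y * imageH / viewH };
    return q;
}
```

For example, a touch at (60, 60) in a 200x200 view maps to (30, 30) in a 100x100 image, matching the ratio approach suggested in the comments below.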

It appears to work quickly for me: I measured approx. 0.1-0.4 ms per call. It doesn't handle the interior (enclosed) transparent space, and is probably not optimal.

Jud answered 8/11/2012 at 17:42 · Comments (2)
Any idea on how to make this work when the image has been scaled? I can't seem to make it work; it works fine, though, if the image is full size (not scaled). – Axiology
I would use the ratio of the x and y distances, then do the same test on the original image based on the converted ratio. E.g.: 30% of 200 pixels is 60 pixels; if the original image is 100 pixels wide, then test at 30 pixels. – Tinytinya
On GitHub, you can find a project by Ole Begemann that extends UIButton so that it only detects touches where the button's image is not transparent.

Since UIButton is a subclass of UIView, adapting it to UIImageView should be straightforward.

Hope this helps.

Aubrey answered 8/11/2012 at 15:26 · Comments (2)
Thanks Sergio. The thing with that project is that it only checks a single pixel, and the way it does it is not very efficient when extended to many pixels. Still, it's a great starting point. :) – Halfhour
Thanks Sergio. You saved my day with that link! Cheers! – Dogy
Well, if you need to do it really fast, you need to precalculate the mask.

Here's how to extract it:

UIImage *image = [UIImage imageNamed:@"some_image.png"];
NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage)));
const unsigned char *pixels = (const unsigned char *)data.bytes;
BOOL *mask = (BOOL *)malloc(data.length / 4); // one flag per 4-byte RGBA pixel
for (NSUInteger i = 0; i < data.length; i += 4) {
  mask[i >> 2] = pixels[i + 3] == 0xFF; // assumes alpha is the last byte; check CGImageGetAlphaInfo
}
// TODO: save mask somewhere

Or you could use the 1x1 bitmap context solution to precalculate the mask. Having a mask means you can check any point with the cost of one indexed memory access.
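A sketch of that single-access lookup, assuming the mask is stored row-major with one flag per pixel (the function name is mine, not from the answer):

```c
#include <stdbool.h>

/* Look up one point in the precalculated mask: row-major layout, one
 * flag per pixel, so a query is a bounds test plus a single load. */
static bool maskContainsPoint(const bool *mask, int width, int height,
                              int x, int y) {
    if (x < 0 || y < 0 || x >= width || y >= height)
        return false; /* outside the image counts as transparent */
    return mask[y * width + x];
}
```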

As for checking an area bigger than one pixel: I would check pixels on a circle centered at the touch point. About 16 points on the circle should be enough.

Detecting inner pixels as well requires another precalculation step: find the convex hull of the mask, e.g. with the Graham scan algorithm (http://softsurfer.com/Archive/algorithm_0109/algorithm_0109.htm). Then either fill that area in the mask, or save the polygon and use a point-in-polygon test instead.
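The point-in-polygon part can be the standard even-odd (ray casting) test; a sketch under the assumption that the hull is stored as an array of vertices:

```c
#include <stdbool.h>

typedef struct { double x, y; } Pt;

/* Even-odd (ray casting) point-in-polygon test: cast a ray to the
 * right of (x, y) and toggle on each edge crossing. The polygon would
 * be the convex hull produced by the Graham scan step above. */
static bool pointInPolygon(const Pt *poly, int n, double x, double y) {
    bool inside = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        bool crosses = (poly[i].y > y) != (poly[j].y > y);
        if (crosses &&
            x < (poly[j].x - poly[i].x) * (y - poly[i].y) /
                (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;
    }
    return inside;
}
```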

And finally, if the image has a transform, you need to convert the point coordinates from screen space to image space before checking the precalculated mask.
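In UIKit you would do this with CGAffineTransformInvert and CGPointApplyAffineTransform; the underlying math is a 2x3 affine inverse, sketched here in plain C (struct and function names are mine):

```c
/* Undo a Core-Graphics-style affine transform
 * (x' = a*x + c*y + tx,  y' = b*x + d*y + ty)
 * to map a screen-space point back into image space. */
typedef struct { double a, b, c, d, tx, ty; } Affine2D;
typedef struct { double x, y; } P2;

static P2 applyInverse(Affine2D t, P2 p) {
    double det = t.a * t.d - t.b * t.c; /* assumed non-zero (invertible) */
    double X = p.x - t.tx, Y = p.y - t.ty;
    P2 q = { (t.d * X - t.c * Y) / det,
             (t.a * Y - t.b * X) / det };
    return q;
}
```

For example, under a transform that scales by 2 and translates x by 10, the screen point (16, 8) maps back to the image point (3, 4).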

Tiffin answered 8/11/2012 at 19:16
