CGPathRef intersection

Is there a way to find out whether two CGPathRefs intersect? In my case, all of the CGPaths are closed.

For example, I have two paths: one is a rectangle rotated by some angle, and the other is a curved path. The origins of both paths change frequently, and at some point they may intersect. I want to know when that happens. Please let me know if you have a solution.

Thanks in advance

Psychoneurosis answered 20/6, 2009 at 15:3 Comment(1)
There is finally an intersects method.Social

Make one path the clipping path, draw the other path, then search for pixels that survived the clipping process:

// initialise and erase context
CGContextAddPath(context, path1);
CGContextClip(context);

// set fill colour to intersection colour
CGContextAddPath(context, path2);
CGContextFillPath(context);

// search for pixels that match intersection colour

This works because clipping = intersecting.

Don't forget that intersection depends on the definition of interiority, of which there are several. This code uses the winding-number fill rule; you might want the even-odd rule or something else again. If interiority doesn't keep you up at night, then this code should be fine.
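The last step, scanning for surviving pixels, is just a linear walk over the bitmap's memory. A minimal sketch in plain C (assuming an 8-bit greyscale buffer of the kind CGBitmapContextGetData would hand back; the dimensions, row stride, and threshold are yours to choose):

```c
#include <stddef.h>
#include <stdbool.h>

/* Return true as soon as any pixel survived the clip, i.e. its
   greyscale value exceeds the threshold. */
static bool buffer_has_ink(const unsigned char *data,
                           size_t width, size_t height,
                           size_t bytesPerRow,
                           unsigned char threshold)
{
    for (size_t y = 0; y < height; y++) {
        const unsigned char *row = data + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            if (row[x] > threshold)
                return true;   /* a filled pixel survived: paths intersect */
        }
    }
    return false;              /* nothing survived the clip */
}
```

Bail out on the first hit; on average that makes the common "they do intersect" case much cheaper than a full scan.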

My previous answer involved drawing transparent curves to an RGBA context. This solution is superior to the old one because it is

  1. simpler
  2. uses a quarter of the memory, as an 8-bit greyscale context suffices
  3. obviates the need for hairy, difficult-to-debug transparency code

Who could ask for more?

I guess you could ask for a complete implementation, ready to cut'n'paste, but that would spoil the fun and obfuscate an otherwise simple answer.

OLDER, HARDER TO UNDERSTAND AND LESS EFFICIENT ANSWER

Draw both CGPathRefs separately at 50% opacity into a zeroed, CGBitmapContextCreate-ed RGBA memory buffer and check for any pixel values > 128. This works on any platform that supports Core Graphics (i.e. iOS and OS X).

In pseudocode

// zero memory

CGContextRef context;
context = CGBitmapContextCreate(memory, wide, high, 8, wide*4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaPremultipliedLast);
CGContextSetRGBFillColor(context, 1, 1, 1, 0.5);  // now everything you draw will be at 50%

// draw your path 1 to context

// draw your path 2 to context

// for each pixel in memory buffer
if(*p > 128) return true; // curves intersect  
else p+= 4; // keep looking

Let the resolution of the rasterised versions be your precision and choose the precision to suit your performance needs.
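A quick sanity check on why the "> 128" test works: with premultiplied source-over compositing, one pass of 50%-alpha white lands at 128, and a second pass over that rises to roughly 191. A small C sketch of the arithmetic (an integer approximation of Porter-Duff "over"; Core Graphics' exact rounding may differ by a count or two):

```c
/* One channel of premultiplied source-over compositing,
   8-bit values in 0..255: result = src + dst * (1 - src_alpha). */
static int composite_over(int src, int dst, int src_alpha)
{
    return src + dst * (255 - src_alpha) / 255;
}
```

Over a zeroed buffer, one path gives composite_over(128, 0, 128) == 128, while a second overlapping path gives composite_over(128, 128, 128), about 191 — so any value strictly greater than 128 marks an intersection.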

Kakemono answered 20/6, 2009 at 20:4 Comment(7)
Thanks for the solution. But I'm not that good at Core Graphics. How can we retrieve and check for non-white pixels?Psychoneurosis
Downvoted, because this doesn't appear to work - it won't ADD the color values on iOS, it just replaces them. I looked through the docs and tried different configurations of colorspace to see if there was a way of making it working, but no successDrees
If you can't find a way to get your transparency code to work, start a new question.Kakemono
If you've got code that answers the question, that would be great. Right now, it's a theoretical answer that doesn't appear to work in practice (question is tagged iPhone, and your answer says "if .. iPhone ... probably have to". My experience: tried it, didn't work). I don't want others to go down the same blind alley I went down.Drees
This isn't a cut and paste answer, you will need to do some work. Anyone can see that iOS CAN draw transparent paths, so your code must be bugged. Start a "Why doesn't this code draw transparent CGPathRefs?" question. I've added some more details to my answer but I think either you've got a misconfigured CGContext, or you're not drawing the paths separately or any other number of problems. Once you do get it working feel free to post your code, that would be a superior answer.Kakemono
Rather than creating an RGB colorspace, you can create an alpha-only colorspace, which only requires 1-byte per pixel and is faster. See robnapier.net/blog/clipping-cgrect-cgpath-531 for an example of implementing this kind of solution. It works very well and is quite fast. Your clipping solution is excellent, but doesn't work if you're looking for intersection rather than overlap, or for open shapes that can't be filled.Galacto
Great! I did not know about kCGImageAlphaOnly. I've been using CGColorSpaceCreateDeviceGray() + kCGImageAlphaNone and pretending that it was alpha. It looked fine, but maybe this will look better! And I can delete a few lines of code. Thanks!Kakemono

1) There isn't any CGPath API to do this. But, you can do the math to figure it out. Take a look at this wikipedia article on Bezier curves to see how the curves in CGPath are implemented.

2) You could fill both paths into a buffer in different colors (say, red and blue, with alpha = 0.5) and then iterate through the buffer to find any pixels that occur at intersections. I would expect this to be quite slow on the iPhone.
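For the math route in #1, the building block is the cubic Bézier segment that CGPathAddCurveToPoint produces. A minimal C sketch of evaluating one segment (the names here are illustrative; an intersection test would then subdivide the curves or flatten them to line segments on top of this):

```c
typedef struct { double x, y; } Point2D;

/* Evaluate a cubic Bezier curve at parameter t in [0, 1]
   using the Bernstein polynomial form. */
static Point2D cubic_bezier(Point2D p0, Point2D p1,
                            Point2D p2, Point2D p3, double t)
{
    double u = 1.0 - t;
    Point2D q;
    q.x = u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x;
    q.y = u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y;
    return q;
}
```

A common strategy from here is recursive subdivision with bounding-box tests, or flattening both curves to polylines at a tolerance matched to your precision needs.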

Candlemas answered 20/6, 2009 at 19:12 Comment(2)
#2 is actually very fast, and is an excellent solution for certain kinds of problems (such as looking for intersections of the stroke rather than the fill).Galacto
My apologies for chiming in an old question, but there is now an intersects method that does this for us.Social

For iOS, the alpha blend seems to be ignored.

Instead, you can do a color blend, which will achieve the same effect, but doesn't need alpha:

CGContextSetBlendMode(context, kCGBlendModeColorDodge);
CGFloat semiTransparent[] = { .5,.5,.5,1};

Pixels in output Image will be:

  • RGB = 0,0,0 = (0.0f) ... no path
  • RGB = 64,64,64 = (0.25f) ... one path, no intersection
  • RGB = 128,128,128 = (0.5f) ... two paths, intersection found

Complete code for drawing:

-(void) drawFirst:(CGPathRef) first second:(CGPathRef) second into:(CGContextRef)context
{
    /** setup the context for DODGE (everything gets lighter if it overlaps) */
    CGContextSetBlendMode(context, kCGBlendModeColorDodge);

    CGFloat semiTransparent[] = { .5,.5,.5,1};

    CGContextSetStrokeColor(context, semiTransparent);
    CGContextSetFillColor(context, semiTransparent);

    CGContextAddPath(context, first);
    // Fill and stroke in one pass; calling CGContextFillPath first would
    // consume the path, leaving nothing for CGContextStrokePath to draw.
    CGContextDrawPath(context, kCGPathFillStroke);

    CGContextAddPath(context, second);
    CGContextDrawPath(context, kCGPathFillStroke);
}

Complete code for checking output:

[self drawFirst:YOUR_FIRST_PATH second:YOUR_SECOND_PATH into:context];

// Now we can get a pointer to the image data associated with the bitmap
// context.
BOOL result = FALSE;
unsigned char* data = CGBitmapContextGetData (context);
if (data != NULL) {

    for( int y=0; y<height && !result; y++ )
        for( int x=0; x<width; x++ )
        {
            // offset locates the pixel at (x,y) in the data:
            // 4 bytes per pixel, and the context was created alpha-first,
            // so the byte layout is A,R,G,B.
            int offset = 4*((width*y)+x);
            int alpha = data[offset];
            int red   = data[offset+1];
            int green = data[offset+2];
            int blue  = data[offset+3];

            if( red > 254 )
            {
                result = TRUE;
                break;
            }
        }
}

And, finally, here's slightly modified code from another SO answer ... complete code for creating an RGB context on iOS 4 and iOS 5 that will support the functions above:

- (CGContextRef) createARGBBitmapContextWithFrame:(CGRect) frame
{
   /** NB: this requires iOS 4 or above - it uses the auto-allocating behaviour of Apple's method, to reduce a potential memory leak in the original StackOverflow version */
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    size_t          bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = frame.size.width;
    size_t pixelsHigh = frame.size.height;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of alpha, red, green,
    // and blue.
    bitmapBytesPerRow = pixelsWide * 4;

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate (NULL,
                                     pixelsWide,
                                     pixelsHigh,
                                     8,      // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedFirst
                                     //kCGImageAlphaFirst
                                     );
    if (context == NULL)
    {
        fprintf (stderr, "Context not created!\n");
    }

    // Make sure to release the colorspace before returning
    CGColorSpaceRelease( colorSpace );

    return context;
}
Drees answered 22/12, 2011 at 16:10 Comment(5)
iOS doesn't ignore alpha. Check your code - have you tried creating the CGContext with kCGImageAlphaPremultipliedLast?Kakemono
I think your transparency problem was due to the fact that you created an ARGB bitmap yet specified an RGBA colour. In ARGB your semiTransparent colour is a pretty cornflower blue with 50% alpha. Try the unambiguous CGContextSetRGBFillColor and let me know if that fixes it. p.s. I've updated my solution to avoid this vexing transparency stuff.Kakemono
kCGImageAlphaPremultipliedLast (and all variants) failed.Drees
Method is called "ARGB" but there's nothing in there that specifies where the alpha channel goes - AFAICS from reading Apple's docs, bitmaps don't have an explicit alpha channel, Apple handles it internally based on the incoming color ref. Apple's docs define that color ref array as RGBA. Or have I missed something here?Drees
The alpha comes first because of kCGImageAlphaPremultipliedFirst, documented in CGImage.h. Bitmaps can have an alpha channel, CGColorRef surely does define the order, but you're using a naked component array (cut'n'paste mixup?) which is most likely dependent on the bitmap layout. No idea why kCGImageAlphaPremultipliedLast fails, it's been working for me since 2009. There's nothing special about RGBA, tho, except that it matches up nicely with OpenGLES 1 textures.Kakemono
