iOS get CGPoint from openCV cv::Point
[image: points drawn on a photo by an OpenCV rectangle-detection routine]

In the image above, we can see points that an OpenCV algorithm has drawn on the image.

I want to place a UIView handle on each of those points so that the user can crop the image.

I don't understand how to access those points in device coordinates so that I can add the UIView handles.

I tried reading the cv::Point values, but they are different from (larger than) the view's width and height.

static cv::Mat drawSquares( cv::Mat& image, const std::vector<std::vector<cv::Point>>& squares )
{
    for( size_t i = 0; i < squares.size(); i++ )
    {
        const cv::Point* p = &squares[i][0];   // first vertex of this square
        int n = (int)squares[i].size();        // number of vertices

        NSLog(@"Square: %d vertices, first vertex (%d, %d)", n, p->x, p->y);

        // Draw the closed polygon in green, 3 px wide, anti-aliased.
        cv::polylines(image, &p, &n, 1, true, cv::Scalar(0, 255, 0), 3, cv::LINE_AA);
    }

    return image;
}

In the code above, the drawSquares method draws the squares. I have logged the x and y coordinates of the points with NSLog, but these values are not in the device coordinate system.

Can someone explain how this can be achieved, or suggest an alternative that meets my requirement?

Thanks

Jerkin answered 1/6, 2015 at 10:34 Comment(4)
So the problem here is that you are not able to plot the CropView (polygon) on top of the image view, right? – Inmost
No, I just want to convert the cv::Point to a CGPoint, so that I can add UIView points at those locations and add a crop feature. cv::Point values are not device coordinates. Is there a formula to convert them? – Jerkin
@muku I have a similar problem; were you able to find a solution for this cv::Point to CGPoint conversion? – Hayley
FYI: the Apple class CIDetector offers similar rectangle-detection functionality. – Flaunty
This is in Swift 3. In the Swift class that receives the cv::Points:

  1. Get the width and height of the image produced by your camera AVCaptureSession.
  2. Divide the width and height of the UIView you're using to display the image by the capture session's image width and height.
  3. Multiply each point's x and y coordinates by those per-axis scale factors.

{
    let imageScaleX = imgView.bounds.width / (newCameraHelper?.dimensionX)!
    let imageScaleY = imgView.bounds.height / (newCameraHelper?.dimensionY)!
    for point in Squares {
        let x = CGFloat(point.x) * imageScaleX
        let y = CGFloat(point.y) * imageScaleY
        let viewPoint = CGPoint(x: x, y: y)   // point in the view's coordinate space
    }
}
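The per-axis scaling in the steps above can be sketched as a plain function (C++ here so it can be checked in isolation; `mapToView` and all dimension values are hypothetical, not part of the answer's code):

```cpp
// Map a point from capture-image coordinates to view coordinates by
// scaling each axis independently, as in steps 1-3 above.
struct Pt { double x, y; };

Pt mapToView(Pt p, double imgW, double imgH, double viewW, double viewH) {
    double scaleX = viewW / imgW;   // step 2: per-axis scale factors
    double scaleY = viewH / imgH;
    return { p.x * scaleX, p.y * scaleY };   // step 3: apply the scale
}
```

For example, with a 3000x2464 capture image shown in a 375x308 view, both scale factors are 0.125, so a detected point at (1500, 1232) lands at (187.5, 154) in the view.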
Aindrea answered 30/3, 2017 at 15:53 Comment(0)

Actually, because of the image size, the coordinates are mapped differently.

For example, if the image fits within the bounds of the screen, there is no issue: you can use the cv::Point directly as a CGPoint.

But if the image is, say, 3000x2464 (roughly the size of a camera-captured photo), you have to apply a formula.

Below is an approach I found online that let me derive a CGPoint from a cv::Point when the image is larger than the screen dimensions.

Get the scale factor of the image:

- (CGFloat)contentScale
{
    CGSize imageSize = self.image.size;
    // Aspect-fit scale: the smaller of the two axis ratios.
    CGFloat imageScale = fminf(CGRectGetWidth(self.bounds) / imageSize.width,
                               CGRectGetHeight(self.bounds) / imageSize.height);
    return imageScale;
}

Suppose `_pointA` holds the point you have; then you can convert it using the formula below:

tmp = CGPointMake((_pointA.frame.origin.x) / scaleFactor, (_pointA.frame.origin.y) / scaleFactor);
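A self-contained sketch of the same idea (C++; the function names and dimensions are illustrative, not from the answer): `contentScale` is the aspect-fit scale, and dividing a view-space coordinate by it, as the formula above does, yields the corresponding image-space coordinate.

```cpp
#include <algorithm>

// Aspect-fit scale factor, as in contentScale above: the image is
// fitted into the view using the smaller of the two axis ratios.
double contentScale(double imgW, double imgH, double viewW, double viewH) {
    return std::min(viewW / imgW, viewH / imgH);
}

struct Pt { double x, y; };

// The division in the formula above: a view-space coordinate divided
// by the scale factor gives the image-space coordinate.
Pt viewToImage(Pt p, double scale) {
    return { p.x / scale, p.y / scale };
}
```

With a 3000x2464 image in a 375x308 view, the scale is 0.125, so a view point at (50, 40) corresponds to image point (400, 320); going the other direction (image to view), you would multiply instead.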
Jerkin answered 8/7, 2015 at 3:46 Comment(0)
