OpenCV: warpPerspective on whole image

I'm detecting markers on images captured with my iPad. Because I want to calculate the translations and rotations between them, I want to change the perspective of these images so it looks like I'm capturing them from directly above the markers.

Right now I'm using

points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));

cv::Mat M = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));

Which gives me these results (look at the bottom-right corner for the result of warpPerspective):

[photo 1] [photo 2] [photo 3]

As you can see, the result image contains the recognized marker in its top-left corner. My problem is that I want to keep the whole image (without cropping) so I can detect other markers on that image later.

How can I do that? Maybe I should use the rotation/translation vectors from the solvePnP function?
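
For reference, this is roughly what I mean by using solvePnP (just a sketch, not tested; cameraMatrix and distCoeffs would be my camera intrinsics, which aren't shown in the code above):

// marker corners in marker coordinates (assuming a 50x50 planar marker, z = 0)
std::vector<cv::Point3f> objectPoints;
objectPoints.push_back(cv::Point3f(0, 0, 0));
objectPoints.push_back(cv::Point3f(50, 0, 0));
objectPoints.push_back(cv::Point3f(50, 50, 0));
objectPoints.push_back(cv::Point3f(0, 50, 0));

// imagePoints are the detected marker corners in the photo
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
// rvec/tvec describe the marker pose relative to the camera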

EDIT:

Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker ends up in the top-left corner of the output image.

For example, when I doubled the size using:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

I received these images:

[photo 4] [photo 5]

Antimacassar answered 30/10, 2013 at 23:25 Comment(0)

Your code doesn't seem to be complete, so it is difficult to say what the problem is.

In any case, the warped image might have completely different dimensions than the input image, so you will have to adjust the size parameter you are using for warpPerspective.

For example, try doubling the size:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

Edit:

To make sure the whole image fits inside the output, all corners of your original image must be warped to positions inside the resulting image. So simply calculate the warped destination of each corner point and adjust the destination points (and the output size) accordingly.

To make it clearer, some sample code:

// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);

// calculate warped position of all corners

cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0/a.z);

cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0/b.z);

cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0/c.z);

cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0/d.z);

// to make sure all corners end up inside the result, shift everything by the
// most negative corner coordinate (0 if no corner is negative)
float x = std::ceil(std::max(0.f, -std::min(std::min(a.x, b.x), std::min(c.x, d.x))));
float y = std::ceil(std::max(0.f, -std::min(std::min(a.y, b.y), std::min(c.y, d.y))));

// and the result must be large enough to contain the rightmost/bottommost corner
float width  = std::ceil(std::max(std::max(a.x, b.x), std::max(c.x, d.x))) + x;
float height = std::ceil(std::max(std::max(a.y, b.y), std::max(c.y, d.y))) + y;

// adjust target points accordingly
for (int i=0; i<4; i++) {
    points2D[i] += cv::Point2f(x,y);
}

// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);

// get result (WARP_INVERSE_MAP because M maps from the target rectangle into the image)
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);
Charlie answered 1/11, 2013 at 10:39 Comment(3)
I know that the output image can have different dimensions - I've tried doubling them (see the edited question for the results), but it didn't give me useful results. You say that my code is incomplete - what should I add then? I'm using getPerspectiveTransform to get the transformation matrix, and I'm using the coordinates of the corners of the detected marker as the dst matrix there (according to the OpenCV documentation - docs.opencv.org/modules/imgproc/doc/…).Antimacassar
Well, I think your target coordinates of (-6,-6) etc. are a bit weird, because that means your target rectangle will be at those coordinates in the resulting image. To move it, simply move the target rectangle to the middle of the target image.Charlie
Thanks - I've figured out that I posted the wrong points2D array (I've edited it in the question now). Shifting these points a little has moved the whole output image, but I'm still cropping some parts of it.Antimacassar

I implemented littleimp's answer in Python in case anyone needs it. Note that this will not work properly if the vanishing points of the polygons fall within the image.

    import math

    import cv2
    import numpy as np
    from PIL import Image, ImageDraw


    def get_transformed_image(src, dst, img):
        # calculate the transformation that maps the src points onto the dst points
        mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))

        # corners of the input image as (x, y) points (PIL's img.size is (width, height))
        corners = np.array([
            [0, img.size[1]],
            [0, 0],
            [img.size[0], 0],
            [img.size[0], img.size[1]]
        ])

        # transform the corners of the image
        corners_transformed = cv2.perspectiveTransform(
            np.array([corners.astype("float32")]), mat)

        # bounding box of the transformed corners
        x_mn = math.floor(min(corners_transformed[0].T[0]))
        y_mn = math.floor(min(corners_transformed[0].T[1]))

        x_mx = math.ceil(max(corners_transformed[0].T[0]))
        y_mx = math.ceil(max(corners_transformed[0].T[1]))

        width = x_mx - x_mn
        height = y_mx - y_mn

        # scale the output so that its height is 1000 px
        analogy = height / 1000
        n_height = height / analogy
        n_width = width / analogy

        # shift the transformed corners into positive coordinates and scale them
        dst2 = corners_transformed
        dst2 -= np.array([x_mn, y_mn])
        dst2 = dst2 / analogy

        # recalculate the transformation, now mapping the image corners to the
        # shifted/scaled destination corners
        mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                           dst2.astype("float32"))

        img_warp = Image.fromarray(
            cv2.warpPerspective(np.array(img),
                                mat2,
                                (int(n_width), int(n_height))))
        return img_warp


    # dimensions of the synthetic test image (any size covering the src points works)
    img_width, img_height = 1500, 1300

    # image coordinates of the marker
    src = np.array([[ 789.72, 1187.35],
                    [ 789.72,  752.75],
                    [1277.35,  730.66],
                    [1277.35, 1200.65]])

    # known coordinates of the marker
    dst = np.array([[   0, 1000],
                    [   0,    0],
                    [1092,    0],
                    [1092, 1000]])

    # create the test image with the marker outline drawn on it
    image = Image.new('RGB', (img_width, img_height))
    image.paste((200, 200, 200), [0, 0, image.size[0], image.size[1]])
    draw = ImageDraw.Draw(image)
    draw.line([tuple(p) for p in src] + [tuple(src[0])], width=4, fill="blue")
    # image.show()

    warped = get_transformed_image(src, dst, image)
    warped.show()
Standard answered 30/10, 2020 at 12:20 Comment(1)
how to convert this code into c++?Thurifer

There are two things you need to do:

  1. Increase the size of the output of cv2.warpPerspective
  2. Translate the warped source image so that its center matches the center of the cv2.warpPerspective output image

Here is how the code will look:

import cv2
import numpy as np

# `image` is the source image and `H` is the 3x3 homography that warps it
# center of the source image in homogeneous (x, y, 1) coordinates
h, w = image.shape[:2]
si_c = [w // 2, h // 2, 1]
# find where the center of the source image lands after warping,
# without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = [x / wsi_c[2] for x in wsi_c]
# size of the warping output image (width, height)
stitched_frame_size = (2 * w, 2 * h)
# center of the warping output image
wf_c = (w, h)
# calculate the offset for translating the warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]
# translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
# translate the homography matrix
translated_H = np.dot(T, H)
# warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)
Gonium answered 25/8, 2020 at 13:43 Comment(0)
