findHomography, getPerspectiveTransform, & getAffineTransform

This question is about the OpenCV functions findHomography, getPerspectiveTransform, and getAffineTransform.

  1. What is the difference between findHomography and getPerspectiveTransform? My understanding from the documentation is that getPerspectiveTransform computes the transform using exactly 4 correspondences (the minimum required to compute a homography/perspective transform), whereas findHomography computes the transform even if you provide more than 4 correspondences (presumably using something like a least squares method?). Is this correct? (In which case, is the only reason OpenCV still supports getPerspectiveTransform legacy?)

  2. My next question is whether there is an equivalent to findHomography for computing an affine transformation, i.e. a function which uses least squares or an equivalently robust method to compute an affine transformation. According to the documentation, getAffineTransform takes in only 3 correspondences (the minimum required to compute an affine transform).

Best,

Frogmouth answered 28/6, 2012 at 4:8 Comment(1)
Maybe estimateRigidTransform would fit your needs. – Hypothyroidism

Q #1: Right, findHomography tries to find the best transform between two sets of points. It uses something smarter than least squares, called RANSAC, which has the ability to reject outliers: if at least 50% + 1 of your data points are OK, RANSAC will do its best to find them and build a reliable transform.
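To make the RANSAC idea concrete, here is a minimal sketch for the simplest possible model, a 1-D line y = m*x + c. This is not OpenCV's implementation; the helper name fitLineRansac is made up for illustration:

```cpp
// Minimal RANSAC sketch for a line model y = m*x + c.
// NOT OpenCV's implementation; fitLineRansac is a made-up name.
#include <cstdlib>
#include <cmath>
#include <vector>

struct Line { double m, c; };

// Fit y = m*x + c exactly through two points with distinct x.
static Line lineThrough(double x1, double y1, double x2, double y2) {
    double m = (y2 - y1) / (x2 - x1);
    return { m, y1 - m * x1 };
}

// Repeatedly pick a random minimal sample (2 points), fit the model,
// and keep the candidate that agrees with the most data points.
Line fitLineRansac(const std::vector<double>& xs,
                   const std::vector<double>& ys,
                   double tol, int iters) {
    Line best = {0, 0};
    int bestInliers = -1;
    for (int it = 0; it < iters; ++it) {
        int i = int(std::rand() % xs.size());
        int j = int(std::rand() % xs.size());
        if (i == j || xs[i] == xs[j]) continue; // degenerate sample
        Line cand = lineThrough(xs[i], ys[i], xs[j], ys[j]);
        int inliers = 0; // count points within tol of the candidate line
        for (std::size_t k = 0; k < xs.size(); ++k)
            if (std::fabs(cand.m * xs[k] + cand.c - ys[k]) < tol)
                ++inliers;
        if (inliers > bestInliers) { bestInliers = inliers; best = cand; }
    }
    return best;
}
```

Lines fitted through outlier samples agree with very few points, so they lose the vote; that is the whole trick behind the "50% + 1" remark above.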

getPerspectiveTransform has a lot of good reasons to stay: it is the base for findHomography, and it is useful in many situations where you only have 4 points and you know they are the correct ones. findHomography is usually used with sets of points detected automatically: you can find many of them, but with low confidence. getPerspectiveTransform is good when you know 4 corners for sure, e.g. from manual marking or automatic detection of a rectangle.

Q #2: There is no equivalent for affine transforms. You can use findHomography, because affine transforms are a subset of homographies.

Perithecium answered 28/6, 2012 at 5:58 Comment(1)
The response to Q#2 isn't true as of OpenCV 4.x: there are estimateAffine2D and estimateAffine3D, which are the equivalent of findHomography for affine transforms. – Driblet

I concur with everything @vasile has written. I just want to add some observations:

getPerspectiveTransform() and getAffineTransform() are meant to work on 4 or 3 points (respectively) that are known to be correct correspondences. On real-life images taken with a real camera, you can never get correspondences that accurate, whether you mark the corresponding points automatically or manually.

There are always outliers. Just look at the simple case of wanting to fit a curve through points (e.g. take a generative equation with noise, y1 = f(x) = 3.12x + gauss_noise or y2 = g(x) = 0.1x^2 + 3.1x + gauss_noise): it will be much easier to find a good quadratic function to estimate the points in both cases than a good linear one. A quadratic might be overkill, but in most cases it will not be (after removing outliers), and if you want to fit a straight line there, you had better be mightily sure that is the right model, otherwise you are going to get unusable results.

That said, if you are mightily sure that affine transform is the right one, here's a suggestion:

  • use findHomography, which has RANSAC incorporated into its functionality, to get rid of the outliers and get an initial estimate of the image transformation
  • select 3 correct matches (correspondences) that fit the homography found, or reproject 3 points from the 1st image to the 2nd (using the homography)
  • use those 3 matches (that are as close to correct as you can get) in getAffineTransform()
  • wrap all of that in your own findAffine() if you want - and voila!
Serendipity answered 28/6, 2012 at 9:8 Comment(4)
Is there a way to find the best "affine" matrix? I want to force the last row of the homography to be [0, 0, 1]. – Ingrid
@Drazick The "algorithm" I have written does almost that: it uses findHomography to get rid of outliers so that you don't have to code your own RANSAC, and then you can use getAffineTransform() on any 3 points to get a close-to-best affine. Alternatively, you could code your own RANSAC algorithm with getAffineTransform() instead of getPerspectiveTransform() as the core function. – Serendipity
@penleope I found a way to compute the best (L2-wise) affine transform using SVD, in a manner similar to the way you estimate the best homography. – Ingrid
@Drazick Great. Describe it in an answer? – Serendipity

Re Q#2, estimateRigidTransform is the over-determined (many-point) equivalent of getAffineTransform. I don't know if it was in OpenCV when this was first posted, but it's available in 2.4.

Bronchi answered 17/8, 2014 at 23:44 Comment(0)

There is an easy solution for finding the affine transform of an over-determined system of equations.

  1. Note that, in general, an affine transform finds a solution to the over-determined system of linear equations Ax = B by using a pseudo-inverse or a similar technique, so

x = (AᵀA)⁻¹ Aᵀ B

Moreover, this is handled in the core OpenCV functionality by a simple call to solve(A, B, X).

  2. Familiarize yourself with the code of the affine transform in opencv/modules/imgproc/src/imgwarp.cpp: it really does just two things:

    a. rearranges inputs to create a system Ax=B;

    b. then calls solve(A, B, X);

NOTE: ignore the function comments in the OpenCV code - they are confusing and don't reflect the actual ordering of the elements in the matrices. If you are solving [u, v]' = Affine * [x, y, 1]', the rearrangement is:

         x1 y1 1  0  0  0
         0  0  0  x1 y1 1
         x2 y2 1  0  0  0
    A =  0  0  0  x2 y2 1
         x3 y3 1  0  0  0
         0  0  0  x3 y3 1

    X = [Affine11, Affine12, Affine13, Affine21, Affine22, Affine23]'

    B = [u1, v1, u2, v2, u3, v3]'

All you need to do is add more points. To make solve(A, B, X) work on an over-determined system, add the DECOMP_SVD parameter. To see the powerpoint slides on the topic, use this link. If you'd like to learn more about the pseudo-inverse in the context of computer vision, the best source is Computer Vision, see chapter 15 and appendix C.
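For intuition, the same rearrangement plus a normal-equations solve can be sketched without OpenCV at all (fitAffine is our own hypothetical helper; Gaussian elimination stands in for solve(), and DECOMP_SVD would be numerically safer in practice):

```cpp
// Dependency-free sketch: build the 2n x 6 system A x = B for n point
// pairs, then solve the normal equations (A'A) x = A'B. Conceptual
// only; SVD (as in OpenCV's DECOMP_SVD) is more robust numerically.
#include <cmath>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Returns the 6 affine parameters [a11 a12 a13 a21 a22 a23].
std::vector<double> fitAffine(const std::vector<Pt>& src,
                              const std::vector<Pt>& dst) {
    int n = int(src.size());
    std::vector<std::vector<double>> A(2 * n, std::vector<double>(6, 0.0));
    std::vector<double> B(2 * n);
    for (int i = 0; i < n; ++i) {
        A[2*i][0]   = src[i].x; A[2*i][1]   = src[i].y; A[2*i][2]   = 1;
        A[2*i+1][3] = src[i].x; A[2*i+1][4] = src[i].y; A[2*i+1][5] = 1;
        B[2*i] = dst[i].x; B[2*i+1] = dst[i].y;
    }
    // Normal equations: N = A'A (6x6), augmented with r = A'B in column 6.
    double N[6][7] = {};
    for (int c = 0; c < 6; ++c)
        for (int d = 0; d < 6; ++d)
            for (int k = 0; k < 2 * n; ++k) N[c][d] += A[k][c] * A[k][d];
    for (int c = 0; c < 6; ++c)
        for (int k = 0; k < 2 * n; ++k) N[c][6] += A[k][c] * B[k];
    // Gauss-Jordan elimination with partial pivoting.
    for (int c = 0; c < 6; ++c) {
        int p = c;
        for (int r = c + 1; r < 6; ++r)
            if (std::fabs(N[r][c]) > std::fabs(N[p][c])) p = r;
        for (int d = 0; d <= 6; ++d) std::swap(N[c][d], N[p][d]);
        for (int r = 0; r < 6; ++r) {
            if (r == c) continue;
            double f = N[r][c] / N[c][c];
            for (int d = 0; d <= 6; ++d) N[r][d] -= f * N[c][d];
        }
    }
    std::vector<double> x(6);
    for (int c = 0; c < 6; ++c) x[c] = N[c][6] / N[c][c];
    return x;
}
```

With 3 exact correspondences this reproduces getAffineTransform; with more than 3 it returns the least-squares fit.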

If you are still unsure how to add more points see my code below:

// extension for n points;
cv::Mat getAffineTransformOverdetermined( const Point2f src[], const Point2f dst[], int n )
{
    Mat M(2, 3, CV_64F), X(6, 1, CV_64F, M.data); // output
    double* a = (double*)malloc(12*n*sizeof(double));
    double* b = (double*)malloc(2*n*sizeof(double));
    Mat A(2*n, 6, CV_64F, a), B(2*n, 1, CV_64F, b); // input

    for( int i = 0; i < n; i++ )
    {
        int j = i*12;   // 2 equations (in x, y) with 6 members: skip 12 elements
        int k = i*12+6; // second equation: skip extra 6 elements
        a[j] = a[k+3] = src[i].x;
        a[j+1] = a[k+4] = src[i].y;
        a[j+2] = a[k+5] = 1;
        a[j+3] = a[j+4] = a[j+5] = 0;
        a[k] = a[k+1] = a[k+2] = 0;
        b[i*2] = dst[i].x;
        b[i*2+1] = dst[i].y;
    }

    solve( A, B, X, DECOMP_SVD );
    free(a);
    free(b);
    return M;
}

// call original transform
vector<Point2f> src(3);
vector<Point2f> dst(3);
src[0] = Point2f(0.0, 0.0);src[1] = Point2f(1.0, 0.0);src[2] = Point2f(0.0, 1.0);
dst[0] = Point2f(0.0, 0.0);dst[1] = Point2f(1.0, 0.0);dst[2] = Point2f(0.0, 1.0);
Mat M = getAffineTransform(Mat(src), Mat(dst));
cout<<M<<endl;
// call new transform
src.resize(4); src[3] = Point2f(22, 2);
dst.resize(4); dst[3] = Point2f(22, 2);
Mat M2 = getAffineTransformOverdetermined(src.data(), dst.data(), (int)src.size());
cout<<M2<<endl;
Dyna answered 16/1, 2015 at 22:55 Comment(0)

getAffineTransform: an affine transform is a combination of translation, scale, shear, and rotation. https://www.mathworks.com/discovery/affine-transformation.html https://www.tutorialspoint.com/computer_graphics/2d_transformation.htm

getPerspectiveTransform: a perspective transform is a projective mapping.

Elegancy answered 15/5, 2019 at 9:24 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.