How to correctly use cv::triangulatePoints()
I am trying to triangulate some points with OpenCV and I found the cv::triangulatePoints() function. The problem is that there is almost no documentation for it, and almost no usage examples.

I have two questions about it.

  1. What method does it use? I've done a little research on triangulation and there are several methods (linear, linear LS, eigen, iterative LS, iterative eigen, ...), but I can't find which one OpenCV uses.

  2. How should I use it? It seems that as input it needs a projection matrix and 3xN homogeneous 2D points. I have them defined as std::vector<cv::Point3d> pnts, but as output it needs 4xN arrays, and obviously I can't create a std::vector<cv::Point4d> because it doesn't exist, so how should I define the output vector?

For the second question I tried cv::Mat pnts3D(4, N, CV_64F); and cv::Mat pnts3d;, but neither seems to work (both throw an exception).

Anaesthesiology answered 30/4, 2013 at 8:37 Comment(6)
Did you look on OpenCV documentation website?Circumlocution
@sgar91 indeed I did, but that documentation does not solve any of my questions!Anaesthesiology
check this.Foetus
@Willy checked and tested already :D. I came prepared! That code is not completely correct: in the function IterativeLinearLSTriangulation() the iteration always breaks on the second pass, because the variables u, u1, P and P1 are never updated, which makes the condition true and breaks the loop. I am trying to read the original book and correct the code, but it is not straightforward :SAnaesthesiology
@Willy I also checked the result without iteration (it should work), but it seems it doesn't. The results I get are not crazy, but surely not correct.Anaesthesiology
Read the source for more info: opencv\modules\calib3d\src\triangulate.cppFoetus

1.- The method used is least squares. There are more complex algorithms than this one, but it is the most common, as the other methods may fail in some cases (e.g. some fail if the points are on a plane or at infinity).

The method can be found in Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman (p. 312).

2.- The usage:

cv::Mat pnts3D(1, N, CV_64FC4);
cv::Mat cam0pnts(1, N, CV_64FC2);
cv::Mat cam1pnts(1, N, CV_64FC2);

Fill the 2 channel point Matrices with the points in the images.

cam0 and cam1 are Mat3x4 camera matrices (intrinsic and extrinsic parameters). You can construct them by multiplying A*RT, where A is the intrinsic parameter matrix and RT is the rotation translation 3x4 pose matrix.

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);

NOTE: pnts3D needs to be defined as a 4-channel 1xN cv::Mat (an exception is thrown otherwise), but the result is actually a cv::Mat(4, N, CV_64FC1) matrix. Really confusing, but it is the only way I didn't get an exception.


UPDATE: As of version 3.0 or possibly earlier, this is no longer true, and pnts3D can also be of type Mat(4, N, CV_64FC1) or may be left completely empty (as usual, it is created inside the function).

Anaesthesiology answered 30/4, 2013 at 12:27 Comment(11)
Thanks, a good starting point. Man, OpenCV's documentation is BAD. I mean, it's better than no documentation at all, but still...Carmella
This reply was very useful thanks...opencv documentation lacks such an exampleSubjection
Very nice answer. OpenCV docs suck. Great elaboration on such poor documentation.Chant
The last paragraph is not true anymore; in the current version (I have 3.0 rc installed), Mat pnts3D(4, N, CV_64FC1) also works.Unknot
Still can't understand why triangulation methods aren't well implemented in OpenCV yet.Chibouk
Is triangulatePoints actually working for arbitrary camera poses (like "forward stereo" from a monocular camera sequence, some may call this SfM) or does it only work for stereo camera (say both cameras have mostly x-offset = are next to each other). I see triangulatePoints is used itself in recoverPose function (via modules/calib3d/src/five-point.cpp) so it looks like arbitrary poses should be possible. I get the triangulation working with KITTI stereo sequences but when I try to do stereo from monocular (one KITTI camera) with R&t via recoverPose then the triangulation outputs garbage.Forestation
@Forestation I suggest you ask a separate question. This was asked 5 years ago for opencv v1...Anaesthesiology
Hi Ander, thanks for adding a reference to your answer and a page number, but I would appreciate it if you would also add the edition that you are using, and the section name and number of that edition because page numbers change. I have the 2nd edition in front of me (2003) and I am pretty sure you are referring to "12.2 Linear triangulation methods", if I am right here could you edit your answer to add these details? Thanks.Epithelioma
@Epithelioma please do edit the answer with the information yourself (with your current information). I wrote this in 2013, very very long ago! I don't even live in the same country where the book I read is, so I have no access to that information anymore.Anaesthesiology
@AnderBiguri can you just edit it? I would have to go through the suggested edit system and I am a bit worried that edit reviewers wouldn't be able to review an edit like that.Epithelioma
@Epithelioma I can approve it myself. Be clear in the edit description and should be fine. I just do not have the info requiredAnaesthesiology

A small addition to @Ander Biguri's answer. You should get your image points from the original (distorted) image and then call undistortPoints() on cam0pnts and cam1pnts, because cv::triangulatePoints expects the 2D points in normalized coordinates (independent of the camera). cam0 and cam1 should then be plain [R|t] matrices; you do not need to multiply them by A.

Mammalogy answered 23/4, 2015 at 10:5 Comment(6)
I am not sure if they have changed the code or not, but I am pretty sure you needed to at the time of the answer. A moderator did modify my answer, adding a bit more explanation there, agreeing with me.Anaesthesiology
Dunno, but I could not make it work the way you posted (however it does make perfectly sense), and wanted to share the way I did, maybe someone will bump into this again. ;)Twiddle
Weird, I did make it work with quite high accuracy in my system (0.02 pixels), so I guess it works. However, it was too long ago!Anaesthesiology
I would agree with Bálint Kriván, as Ander Biguri's solution is using distorted points. Multiplying by the camera matrix A only transfers those points into the camera system without dealing with any distortion at all. In my opinion this could work well with small distortion, but undistorted points should definitely be used.Chibouk
So how is it in the end? Do I need to multiply with A if I run my point detection on undistorted images? Thanks.Claudy
you should do undistortPoints() instead of undistort() on the image, then you don't need A.Twiddle

Thanks to Ander Biguri! His answer helped me a lot, but I always prefer the alternative with std::vector, so I edited his solution to this:

std::vector<cv::Point2d> cam0pnts;
std::vector<cv::Point2d> cam1pnts;
// You fill them, both with the same size...

// You can pick any of the following 2 (your choice)
// cv::Mat pnts3D(1,cam0pnts.size(),CV_64FC4);
cv::Mat pnts3D(4,cam0pnts.size(),CV_64F);

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);

So you just need to push_back the points. Main advantage: you do not need to know the size N before you start filling them. Unfortunately, there is no cv::Point4f, so pnts3D must be a cv::Mat...

Boyt answered 8/5, 2017 at 2:0 Comment(3)
Fortunately this function seems to accept better inputs now (2017); when I wrote the question/answer (2013) it was a mess!Anaesthesiology
The doc is still a little bit messy, so your answer helped me do it! Thanks!Anoxemia
Much cleaner. Thanks for the answer.Chant

I tried cv::triangulatePoints, but somehow it calculates garbage. I was forced to implement a linear triangulation method manually, which returns a 4x1 matrix for the triangulated 3D point:

// Linear least-squares (DLT) triangulation: builds the 4x3 system A*X = b
// from the projective equations x ~ P*X of both views and solves it with SVD.
Mat triangulate_Linear_LS(Mat mat_P_l, Mat mat_P_r, Mat warped_back_l, Mat warped_back_r)
{
    Mat A(4,3,CV_64FC1), b(4,1,CV_64FC1), X(3,1,CV_64FC1), X_homogeneous(4,1,CV_64FC1), W(1,1,CV_64FC1);
    W.at<double>(0,0) = 1.0;
    A.at<double>(0,0) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(0,0);
    A.at<double>(0,1) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(0,1);
    A.at<double>(0,2) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(0,2);
    A.at<double>(1,0) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(1,0);
    A.at<double>(1,1) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(1,1);
    A.at<double>(1,2) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(1,2);
    A.at<double>(2,0) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(0,0);
    A.at<double>(2,1) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(0,1);
    A.at<double>(2,2) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(0,2);
    A.at<double>(3,0) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(1,0);
    A.at<double>(3,1) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(1,1);
    A.at<double>(3,2) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(1,2);
    b.at<double>(0,0) = -((warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(0,3));
    b.at<double>(1,0) = -((warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(1,3));
    b.at<double>(2,0) = -((warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(0,3));
    b.at<double>(3,0) = -((warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(1,3));
    solve(A,b,X,DECOMP_SVD);     // least-squares solution via SVD
    vconcat(X,W,X_homogeneous);  // append w = 1 to form a homogeneous 4x1 point
    return X_homogeneous;
}

The input parameters are the two 3x4 camera projection matrices and a corresponding left/right pixel pair (x, y, w).

Upholsterer answered 12/9, 2014 at 15:9 Comment(2)
I'll try the OpenCV method some day. My purpose is to provide an alternative function in case you have problems with the OpenCV method or simply don't want to use it. To show that it is precise, see the triangulated result imagizer.imageshack.us/v2/150x100q90/674/3tYH9R.png from the original imagizer.imageshack.us/v2/150x100q90/540/wyg6bC.pngUpholsterer
Definitely very good to have an alternative method around here.Anaesthesiology

In addition to Ginés Hidalgo's comments: if you did a stereo calibration and estimated the fundamental matrix from it (calculated from a checkerboard), you can use the correctMatches function to refine the detected keypoints:

std::vector<cv::Point2f> pt_set1_pt_c, pt_set2_pt_c;
cv::correctMatches(F, pt_set1_pt, pt_set2_pt, pt_set1_pt_c, pt_set2_pt_c);
Westbrooks answered 21/12, 2021 at 14:41 Comment(0)
