OpenCV undistortPoints and triangulatePoint give odd results (stereo)
I'm trying to get 3D coordinates of several points in space, but I'm getting odd results from both undistortPoints() and triangulatePoints().

Since the two cameras have different resolutions, I've calibrated them separately and got RMS errors of 0.34 and 0.43, then used stereoCalibrate() to get the matrices relating the two cameras (RMS of 0.708), and then used stereoRectify() to get the rectification and projection matrices. With those in hand I started working on the gathered coordinates, but I get weird results.
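For context, the calibration sequence is roughly the following. This is a minimal sketch with OpenCV 3.x signatures; objectPoints, imgPointsOne/imgPointsTwo and sizeOne/sizeTwo are placeholders, not my actual variables:

// Calibrate each camera on its own (they have different resolutions).
cv::Mat intrinsicOne, distCoeffsOne, intrinsicTwo, distCoeffsTwo;
std::vector<cv::Mat> rvecs, tvecs;
double rms1 = cv::calibrateCamera(objectPoints, imgPointsOne, sizeOne,
                                  intrinsicOne, distCoeffsOne, rvecs, tvecs);
double rms2 = cv::calibrateCamera(objectPoints, imgPointsTwo, sizeTwo,
                                  intrinsicTwo, distCoeffsTwo, rvecs, tvecs);

// Stereo calibration returns the rotation/translation between the cameras
// plus the essential and fundamental matrices.
cv::Mat R, T, E, F;
double rmsStereo = cv::stereoCalibrate(objectPoints, imgPointsOne, imgPointsTwo,
                                       intrinsicOne, distCoeffsOne,
                                       intrinsicTwo, distCoeffsTwo,
                                       sizeOne, R, T, E, F,
                                       cv::CALIB_FIX_INTRINSIC);

// stereoRectify yields the rectification rotations R1/R2 and the projection
// matrices P1/P2 that are passed to undistortPoints() further down.
cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(intrinsicOne, distCoeffsOne, intrinsicTwo, distCoeffsTwo,
                  sizeOne, R, T, R1, R2, P1, P2, Q);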

For example, the input is (935, 262) and the undistortPoints() output is (1228.709125, 342.79841) for one point; for another the input is (934, 176) and the output is (1227.9016, 292.4686). That's weird, because both of these points are very close to the middle of the frame, where distortion is smallest. I didn't expect it to move them by nearly 300 pixels.

When passed to triangulatePoints(), the results get even stranger. I've measured the distances between three points in real life (with a ruler) and calculated the distances between the corresponding pixels in each picture. Because this time the points lay on a fairly flat plane, the two ratios (pixel and real) matched: |AB|/|BC| was around 4/9 in both cases. However, triangulatePoints() gives me results that are completely off, with |AB|/|BC| coming out as 3/2 or 4/2.

This is my code:

// Build 1x1 two-channel Mats holding the measured image points.
double pointsBok[2] = { bokList[j].toFloat()+xBok/2, bokList[j+1].toFloat()+yBok/2 };
cv::Mat imgPointsBokProper = cv::Mat(1,1, CV_64FC2, pointsBok);

double pointsTyl[2] = { tylList[j].toFloat()+xTyl/2, tylList[j+1].toFloat()+yTyl/2 };
cv::Mat imgPointsTylProper = cv::Mat(1,1, CV_64FC2, pointsTyl);

// Undistort and rectify in place; R1/P1 and R2/P2 come from stereoRectify().
cv::undistortPoints(imgPointsBokProper, imgPointsBokProper,
      intrinsicOne, distCoeffsOne, R1, P1);
cv::undistortPoints(imgPointsTylProper, imgPointsTylProper,
      intrinsicTwo, distCoeffsTwo, R2, P2);

// Triangulate the rectified points into a homogeneous 4x1 vector.
cv::Mat point4D;
cv::triangulatePoints(P1, P2, imgPointsBokProper, imgPointsTylProper, point4D);

// Divide by w to get Euclidean coordinates.
double wResult = point4D.at<double>(3,0);
double realX = point4D.at<double>(0,0)/wResult;
double realY = point4D.at<double>(1,0)/wResult;
double realZ = point4D.at<double>(2,0)/wResult;
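(Side note: the manual division by w above is equivalent to cv::convertPointsFromHomogeneous(); a sketch, assuming point4D is the 4x1 CV_64F result from triangulatePoints():)

cv::Mat point3D;
cv::convertPointsFromHomogeneous(point4D.t(), point3D); // 1x4 row -> one 3D point
cv::Vec3d real = point3D.at<cv::Vec3d>(0);              // (realX, realY, realZ)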

The angles between points are sometimes roughly right, but usually not:

`7.16816    168.389    4.44275` vs `5.85232    170.422    3.72561` (degrees)
`8.44743    166.835    4.71715` vs `12.4064    158.132    9.46158`
`9.34182    165.388    5.26994` vs `19.0785    150.883    10.0389`

I've tried using undistort() on the entire frame, but got results just as odd. The distance between points B and C should stay pretty much constant at all times, and yet this is what I get:

7502.42
4876.46
3230.13
2740.67
2239.95

Frame by frame.
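The full-frame test uses the plain undistort call, roughly like this (a sketch; frame is a placeholder for the captured image, with the intrinsics from the calibration above):

cv::Mat undistortedFrame;
cv::undistort(frame, undistortedFrame, intrinsicOne, distCoeffsOne);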

[Image: |BC| distance - pixel distance (bottom) vs. real distance (top); the two should be very similar]

Angle:

[Image: ABC angle]

Also, shouldn't both undistortPoints() and undistort() give the same results (another set of videos here)?

Angeles answered 26/8, 2015 at 13:10 Comment(11)
Could you include the code you used for calibration and some sample images for the triangulation part?Soapwort
This might be difficult... there's a lot of it. I'll try to post only relevant parts tomorrowAngeles
This seems like a messed up calibration -- if you use undistort on an image from just one camera (ignoring the stereo calibration), does it work? You should be able to calibrate the intrinsics of each camera individually and ensure that they are OK and then pass those values to the stereo calibration as an initial guess using the CV_CALIB_USE_INTRINSIC_GUESS flag.Sachsen
@Sachsen As for calibrating intrinsics, I do exactly that. I calibrate each camera individually (because they have different resolutions and aspect ratios), and then I calibrate the stereo system with what I get from the previous calibrations. After calibration, I immediately apply undistort() to both images, and the image is straight as an arrow. I did try undistortPoints() and undistort() on the same set of videos; graphs are above.Angeles
Are there any updates on this problem? Was it just a calibration problem, an OpenCV problem, or something else?Jaffna
@ancajic I never found out. The end users (researchers) are still finishing up their previous project, and the whole thing takes way too much time to set up and record for me to try on my own; I'd need another person tooAngeles
You should rectify your images (using stereoRectify) before triangulation (and after undistortion) so that corresponding points have the same height; otherwise the disparity map would be inaccurate.Douro
You've mentioned stereoRectify but I don't see it between undistortPoints and triangulatePointsDouro
@Douro I've never seen stereoRectify used between undistortPoints and triangulatePoints. I needed something out of it earlier, so I used it much earlier. I calibrate the cameras and save the coordinates separatelyAngeles
@Angeles After undistortion they're no longer rectified so the results would be wrong. For examples and further explanation you can have a look at chapter 12 of Learning OpenCV by Gary Bradski and Adrian KaehlerDouro
@Douro huh. I'll have to look into thatAngeles

The function cv::undistortPoints() does undistortion and reprojection in one go. It performs the following list of operations:

  1. undo the camera projection (multiplication with the inverse of the camera matrix)
  2. undo the lens distortion (iteratively applying the distortion model)
  3. rotate the points by the provided rectification rotation R1/R2
  4. reproject the points to the image using the provided projection matrix P1/P2

If you pass the matrices R1, P1 resp. R2, P2 from cv::stereoRectify(), the input points will be undistorted and rectified. Rectification means that the images are transformed in such a way that corresponding points have the same y-coordinate. There is no unique solution for image rectification, as you can apply any translation or scaling to both images without changing the alignment of corresponding points. That being said, cv::stereoRectify() can shift the center of projection quite a bit (e.g. by 300 pixels). If you want pure undistortion, you can pass an identity matrix (instead of R1) and the original camera matrix K (instead of P1). This should lead to pixel coordinates similar to the original ones.
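A sketch of that last suggestion, using the variable names from the question (cv::noArray() stands in for the identity rotation):

// Pure undistortion: identity rotation, reproject with the original camera
// matrix K, so the output stays close to the input pixel coordinates.
cv::Mat pureUndistorted;
cv::undistortPoints(imgPointsBokProper, pureUndistorted,
                    intrinsicOne, distCoeffsOne,
                    cv::noArray(), intrinsicOne);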

Disheveled answered 11/11, 2016 at 15:14 Comment(0)
