Is cv2.triangulatePoints just not very accurate?

Summary

I'm trying to triangulate points from 2 images but am not getting accurate results at all.

Details

Here's what I'm doing:

  1. Measure my 16 object points in real-world coordinates.

  2. Determine the pixel coordinates of the 16 object points for each image.

  3. Use cv2.solvePnP() to get the tvecs and rvecs for each camera.

  4. Use cv2.projectPoints to verify that the tvecs and rvecs re-project a given 3D point to the correct image coordinates (which they do). For example:

    # projectPoints returns (imagePoints, jacobian), so unpack the tuple.
    img_point_right, _ = cv2.projectPoints(np.array([[0, 0, 39]], np.float64),
                                           right_rvecs,
                                           right_tvecs,
                                           right_intrinsics,
                                           right_distortion)
    
  5. With that verified, use cv2.Rodrigues to get the rotation matrices:

    left_rotation, jacobian = cv2.Rodrigues(left_rvecs)
    right_rotation, jacobian = cv2.Rodrigues(right_rvecs)
    

    and then the projection matrices:

    RT = np.zeros((3, 4))
    RT[:3, :3] = left_rotation
    RT[:3, 3] = left_tvecs.ravel()  # tvec from solvePnP is (3, 1)
    left_projection = np.dot(left_intrinsics, RT)

    RT = np.zeros((3, 4))
    RT[:3, :3] = right_rotation
    RT[:3, 3] = right_tvecs.ravel()
    right_projection = np.dot(right_intrinsics, RT)
    
  6. Before triangulating, undistort the points using cv2.undistortPoints. For example:

    left_undist = cv2.undistortPoints(left_points,
                                      cameraMatrix=left_intrinsics,
                                      distCoeffs=left_distortion)
    
  7. Triangulate the points. For example:

    # Take the 0th index of each points matrix to drop the extra dimension
    # (it doesn't affect the output), then transpose into OpenCV's 2xN format.
    left_points_t = np.array(left_undist[0]).transpose()
    right_points_t = np.array(right_undist[0]).transpose()

    triangulation = cv2.triangulatePoints(left_projection, right_projection,
                                          left_points_t, right_points_t)
    homog_points = triangulation.transpose()

    euclid_points = cv2.convertPointsFromHomogeneous(homog_points)
    

Unfortunately, when I look at the output of the last step, the triangulated point doesn't even have a positive Z coordinate, despite the 3D point I'm trying to recover having a positive Z position.

For reference, positive Z is forward, positive Y is down, and positive X is right.

For example, the 3D point (0, 0, 39), i.e. a point 39 feet in front of you, triangulates to (4.47, -8.77, -44.81).
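
For what it's worth, a minimal synthetic check like the sketch below (the intrinsics and camera poses are made up purely for illustration) should recover a known point when the projection matrices and pixel coordinates are consistent:

    import cv2
    import numpy as np

    # Two synthetic cameras sharing made-up intrinsics.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])

    # Left camera at the world origin; right camera 1 unit to its right,
    # so its world-to-camera translation is -1 along X.
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

    # A known world point 39 units straight ahead, in homogeneous coordinates.
    X = np.array([[0.0], [0.0], [39.0], [1.0]])

    # Project into each image to get consistent pixel coordinates (2x1 arrays).
    x_left = (P_left @ X)[:2] / (P_left @ X)[2]
    x_right = (P_right @ X)[:2] / (P_right @ X)[2]

    # Triangulate and de-homogenize; this prints approximately [0, 0, 39].
    X_h = cv2.triangulatePoints(P_left, P_right, x_left, x_right)
    print((X_h[:3] / X_h[3]).ravel())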

Questions

Is this a valid way to triangulate points?

If so, is cv2.triangulatePoints simply not a good method for triangulating points? If that's the case, are there any alternatives you'd suggest?

Thank you for your help.

Milk answered 25/2, 2021 at 3:27

Well, it turns out that if I don't call undistortPoints before triangulatePoints, I get reasonable results. The reason is that undistortPoints normalizes the points using the intrinsic parameters while it removes the distortion, yet I was still passing triangulatePoints projection matrices that also include the intrinsics, so the intrinsics were effectively applied twice.
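
In code, that first fix looks roughly like this (a sketch reusing the variable names from my question; lens distortion is simply ignored here):

    # Raw pixel coordinates (no undistortPoints) with the projection matrices
    # that already include the intrinsics. Distortion is ignored entirely.
    left_points_t = np.asarray(left_points, np.float64).reshape(-1, 2).T
    right_points_t = np.asarray(right_points, np.float64).reshape(-1, 2).T

    triangulation = cv2.triangulatePoints(left_projection, right_projection,
                                          left_points_t, right_points_t)
    euclid_points = cv2.convertPointsFromHomogeneous(triangulation.transpose())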

I can get even better results, however, by undistorting the points and then calling triangulatePoints with projection matrices built using the identity matrix as the intrinsic matrix.
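
Concretely, that looks something like this sketch (again reusing names from my question; with identity intrinsics the projection matrix reduces to [R | t]):

    # Normalized, undistorted points with P = [R | t], i.e. the identity
    # used in place of the intrinsic matrix.
    left_P = np.hstack([left_rotation, left_tvecs.reshape(3, 1)])
    right_P = np.hstack([right_rotation, right_tvecs.reshape(3, 1)])

    # With no P argument, undistortPoints returns normalized coordinates.
    left_norm = cv2.undistortPoints(left_points, left_intrinsics, left_distortion)
    right_norm = cv2.undistortPoints(right_points, right_intrinsics, right_distortion)

    triangulation = cv2.triangulatePoints(left_P, right_P,
                                          left_norm.reshape(-1, 2).T.astype(np.float64),
                                          right_norm.reshape(-1, 2).T.astype(np.float64))
    euclid_points = cv2.convertPointsFromHomogeneous(triangulation.transpose())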

Problem solved!

Milk answered 25/2, 2021 at 14:39

I was having the same problem as you the day before. It turns out that undistortPoints returns the results in pixels if you pass the camera matrix as its P parameter (otherwise it assumes P is the identity and returns normalized coordinates):

    left_undist = cv2.undistortPoints(left_points,
                                      cameraMatrix=left_intrinsics,
                                      distCoeffs=left_distortion,
                                      P=left_intrinsics)

This way you don't need to mess with the intrinsics. The result will be the same.

Also, be sure to use float in the parameters passed to triangulatePoints.

    import cv2
    import numpy as np

    projMat1 = mtx1 @ cv2.hconcat([np.eye(3), np.zeros((3, 1))])  # Cam1 is the origin
    projMat2 = mtx2 @ cv2.hconcat([R, T])                         # R, T from stereoCalibrate

    # points1 is a (N, 1, 2) float32 array from cornerSubPix
    points1u = cv2.undistortPoints(points1, mtx1, dist1, None, mtx1)
    points2u = cv2.undistortPoints(points2, mtx2, dist2, None, mtx2)

    points4d = cv2.triangulatePoints(projMat1, projMat2, points1u, points2u)
    points3d = (points4d[:3, :] / points4d[3, :]).T
Windcheater answered 26/2, 2021 at 13:21
Comment: Hey, I tried your solution but I'm running into a bit of a problem with undistortPoints. I posted my question here: #68639674 (Transpontine)
