How to calculate 3D object points from 2D image points using stereo triangulation?
I have a stereo camera system calibrated with OpenCV and Python, and I am trying to use it to compute the 3D position of image points. I have collected the intrinsic and extrinsic matrices, as well as the E, F, R, and T matrices, but I am confused about how to triangulate the 2D image points into 3D object points. I have read the following post but I am still unclear on the process (In a calibrated stereo-vision rig, how does one obtain the "camera matrices" needed for implementing a 3D triangulation algorithm?). From reading around, I get the impression that the fundamental matrix (F) is important, but I haven't found a clear way to link it to the projection matrix (P). Can someone please walk me through the steps from 2D to 3D?

I appreciate any help I can get.

Casmey answered 11/3, 2014 at 18:51
If you calibrated your stereo camera, you should have the intrinsics K1, K2 for each camera, and the rotation R12 and translation t12 from the first to the second camera. From these, you can form the camera projection matrices P1 and P2 as follows:

P1 = K1 * [I3 | 0]
P2 = K2 * [R12 | t12]

Here, I3 is the 3x3 identity matrix, and the notation [R | t] means stacking R and t horizontally.
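In NumPy, the two projection matrices can be assembled directly from the calibration results. The numeric values below are placeholders standing in for your own calibration output:

```python
import numpy as np

# Placeholder calibration results; substitute your own K1, K2, R12, t12.
K1 = np.array([[800.0,   0.0, 320.0],
               [  0.0, 800.0, 240.0],
               [  0.0,   0.0,   1.0]])
K2 = K1.copy()
R12 = np.eye(3)                          # rotation from camera 1 to camera 2
t12 = np.array([[-60.0], [0.0], [0.0]])  # translation, e.g. baseline in mm

# P1 = K1 * [I | 0],  P2 = K2 * [R12 | t12]
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R12, t12])
```

Note that `np.hstack` performs exactly the horizontal stacking written as [R | t] above, and both matrices come out 3x4 as required by `cv2.triangulatePoints`.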

Then, you can use the function triangulatePoints (documentation), which implements sparse stereo triangulation given the two camera matrices and the corresponding image points.
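Under the hood, triangulatePoints performs a linear (DLT) triangulation. A pure-NumPy sketch of the same idea for a single point pair, useful for understanding what the OpenCV call does:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point correspondence.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Returns the 3D point in the first camera's coordinate frame.
    """
    # Each image point contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # divide by the homogeneous coordinate
```

`cv2.triangulatePoints(P1, P2, pts1, pts2)` does the same for a whole 2xN array of points at once, returning the results as 4xN homogeneous coordinates (see the comment below about dividing by the fourth row).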

If you want dense triangulation or depth-map estimation, there are several functions for that. You first need to rectify the two images using stereoRectify (documentation) and then perform stereo matching, for example using StereoBM (documentation).

Nydianye answered 11/3, 2014 at 20:22
FYI: to get accurate measurements, remember to take the 4xN output matrix from triangulatePoints and divide the first three rows (x, y, z) by the fourth row: output /= output[3] (Casmey)
It's critical that each point be of type float, otherwise you'll get unexpected results (even segfaults) (Oarlock)
Apologies for necromancing this question and answer, but does this procedure assume that the image planes of the two cameras coincide, or does it apply equally in the general case where the cameras are arbitrarily placed and may not even share the same intrinsic parameters? (Lenrow)
@matanster it applies to arbitrary cameras, but the mentioned functions assume the cameras' intrinsic and extrinsic parameters are calibrated. (Nydianye)