Converting 2D point to 3D location
I have a fixed camera with known cameraMatrix and distCoeffs. I also have a chessboard, which is fixed too, and the translation and rotation vectors have already been calculated using solvePnP.

I'm wondering how it is possible to get the 3D location of a 2D point that lies on the same plane as the chessboard, as in the picture below:

[picture: a point marked on the chessboard plane]

One thing is for sure: the Z coordinate of that point is 0. But how do I get its X and Y?

Groos asked 19/10, 2019 at 19:48 Comment(6)
With your translation and rotation vectors, are you able to express all of the chessboard corners in 3D? – Ataliah
If you say that Z will be 0, is it OK for you to just get the plane coordinates of that point? Like "going 10 cm in the red direction and minus 15 cm in the green direction"? – Ataliah
@Ataliah This won't work, because pixels closer to the camera represent a larger area. – Groos
It is easy to get the plane coordinates with a perspective homography. But if you need the 3D points in your camera-centered 3D space, you have to transform the plane according to your rotation and translation vectors afterwards. – Ataliah
Can you provide the expected coordinates of this point? – Missy
Also the dimensions of the chessboard? – Missy
You can solve this with 3 simple steps:

Step 1:

Compute the 3d direction vector, expressed in the coordinate frame of the camera, of the ray corresponding to the given 2d image point by inverting the camera projection model:

std::vector<cv::Point2f> imgPt = {{u,v}}; // Input image point
std::vector<cv::Point2f> normPt;
cv::undistortPoints(imgPt, normPt, cameraMatrix, distCoeffs);
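// Note: undistortPoints already applies the inverse of the intrinsics, so 'normPt' is in
// normalized image coordinates (not pixels)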
cv::Matx31f ray_dir_cam(normPt[0].x, normPt[0].y, 1);
// 'ray_dir_cam' is the 3d direction of the ray in camera coordinate frame
// In camera coordinate frame, this ray originates from the camera center at (0,0,0)

Step 2:

Compute the 3d direction vector of this ray in the coordinate frame attached to the chessboard, using the relative pose between the camera and the chessboard:

// solvePnP typically gives you 'rvec_cam_chessboard' and 'tvec_cam_chessboard'
// Invert this pose to get the transform that maps camera coordinates to chessboard coordinates
cv::Matx33f R_cam_chessboard;
cv::Rodrigues(rvec_cam_chessboard, R_cam_chessboard);
cv::Matx33f R_chessboard_cam = R_cam_chessboard.t();
cv::Matx31f t_cam_chessboard = tvec_cam_chessboard;
cv::Matx31f pos_cam_wrt_chessboard = -R_chessboard_cam*t_cam_chessboard;
// Map the ray direction vector from camera coordinates to chessboard coordinates
cv::Matx31f ray_dir_chessboard = R_chessboard_cam * ray_dir_cam;

Step 3:

Find the desired 3d point by computing the intersection between the 3d ray and the chessboard plane with Z=0:

// Expressed in the coordinate frame of the chessboard, the ray originates from the
// 3d position of the camera center, i.e. 'pos_cam_wrt_chessboard', and its 3d
// direction vector is 'ray_dir_chessboard'
// Any point on this ray can be expressed parametrically using its depth 'd':
// P(d) = pos_cam_wrt_chessboard + d * ray_dir_chessboard
// To find the intersection between the ray and the plane of the chessboard, we
// compute the depth 'd' for which the Z coordinate of P(d) is equal to zero
float d_intersection = -pos_cam_wrt_chessboard.val[2]/ray_dir_chessboard.val[2];
cv::Matx31f intersection_point = pos_cam_wrt_chessboard + d_intersection * ray_dir_chessboard;
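As a sanity check, you can reproject the recovered point back into the image with cv::projectPoints and verify that it lands near the original pixel (u, v). A minimal sketch, reusing the intrinsics and the solvePnP pose from above (only the local variable names are new):

// Reproject the estimated 3D point (chessboard frame) back into the image
std::vector<cv::Point3f> objPts = {cv::Point3f(intersection_point.val[0],
                                               intersection_point.val[1],
                                               intersection_point.val[2])};
std::vector<cv::Point2f> reprojected;
cv::projectPoints(objPts, rvec_cam_chessboard, tvec_cam_chessboard,
                  cameraMatrix, distCoeffs, reprojected);
// 'reprojected[0]' should be very close to the original pixel (u, v)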
Valdemar answered 24/10, 2019 at 19:55 Comment(1)
I think the undistorted points must be converted to 3D space using the camera intrinsic parameters; otherwise the results will be in pixels. I believe this is especially important when the principal point is not in the center. That's why it might be better to use ray_dir_cam((normPt[0].x - c_x) / f_x, (normPt[0].y - c_y) / f_y, 1). Please correct me if I am wrong. – Cark
Since your case is limited to planes, the simple way is to use a homography.

First undistort your image. Then use findHomography to calculate the homography matrix that transforms your pixel coordinates (image) into real-world coordinates (Euclidean space, e.g. in cm). Something similar to this:

#include <opencv2/calib3d.hpp>
//...

//points on the undistorted image (in pixels); example values, use your own correspondences (more is better)
vector<Point2f>  src_points = { Point2f(123,321), Point2f(456,654), Point2f(789,987), Point2f(321,123) };
//points on chessboard (e.g. in cm)
vector<Point2f>  dst_points = { Point2f(0, 0), Point2f(12.5, 0), Point2f(0, 16.5), Point2f(12.5, 16.5) }; 
Mat H = findHomography(src_points, dst_points, RANSAC);

//map a new pixel on the undistorted image to plane coordinates (e.g. in cm)
Mat p = H * Mat(Point3d(125, 521, 1)); //homogeneous pixel coordinate (x, y, 1)
p = p / p.at<double>(2);               //normalize by the homogeneous coordinate
cout << p << endl;
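Alternatively, cv::perspectiveTransform applies the homography and performs the division by the homogeneous coordinate for you; a minimal sketch with the same H (the variable names here are just examples):

//map the same pixel using perspectiveTransform
vector<Point2f> pixels = { Point2f(125, 521) };
vector<Point2f> planePoints;
perspectiveTransform(pixels, planePoints, H);
cout << planePoints[0] << endl; //coordinates on the chessboard plane (e.g. in cm)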
Scarlettscarp answered 24/10, 2019 at 12:9 Comment(4)
I did what you said: vector<Point2f> corners; vector<Point2f> objectPoints2d; findChessboardCorners(img, patternSize, corners); calcChessboardCorners(patternSize, squareSize, objectPoints2d); chessboardHomography = findHomography(corners, objectPoints2d, RANSAC); – Groos
It does not work, and the coordinates it returns are not correct. – Groos
Even if you multiply the homography matrix by the pixel located at chessboard point [0,0,0], it returns [-192, -129, 0.33]. – Groos
@Groos Do you undistort the image first? Check that objectPoints2d is correct. Even print and check them manually. – Scarlettscarp
