reprojectImageTo3D() in OpenCV

I've been trying to compute real world coordinates of points from a disparity map using the reprojectImageTo3D() function provided by OpenCV, but the output seems to be incorrect.

I have the calibration parameters, and compute the Q matrix using

stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);

I believe this first step is correct, since the stereo frames are being rectified properly, and the distortion removal I'm performing also seems all right. The disparity map is being computed with OpenCV's block matching algorithm, and it looks good too.

The 3D points are being calculated as follows:

cv::Mat XYZ(disparity8U.size(), CV_32FC3);
reprojectImageTo3D(disparity8U, XYZ, Q, false, CV_32F);

But for some reason they form some sort of cone, and are not even close to what I'd expect, considering the disparity map. I found out that other people had a similar problem with this function, and I was wondering if someone has the solution.

Thanks in advance!

[EDIT]

stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs,frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);

initUndistortRectifyMap(left_cam_matrix, left_dist_coeffs, R1, P1, frame_size,CV_32FC1, left_undist_rect_map_x, left_undist_rect_map_y);
initUndistortRectifyMap(right_cam_matrix, right_dist_coeffs, R2, P2, frame_size, CV_32FC1, right_undist_rect_map_x, right_undist_rect_map_y);
cv::remap(left_frame, left_undist_rect, left_undist_rect_map_x, left_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::remap(right_frame, right_undist_rect, right_undist_rect_map_x, right_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);

cv::Mat imgDisparity32F(left_undist_rect.rows, left_undist_rect.cols, CV_32F);
StereoBM sbm(StereoBM::BASIC_PRESET,80,5);
sbm.state->preFilterSize  = 15;
sbm.state->preFilterCap   = 20;
sbm.state->SADWindowSize  = 11;
sbm.state->minDisparity   = 0;
sbm.state->numberOfDisparities = 80;
sbm.state->textureThreshold = 0;
sbm.state->uniquenessRatio = 8;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 0;

// Compute disparity
sbm(left_undist_rect, right_undist_rect, imgDisparity32F, CV_32F );

// Compute world coordinates from the disparity image
cv::Mat XYZ(imgDisparity32F.size(), CV_32FC3);
reprojectImageTo3D(imgDisparity32F, XYZ, Q, false, CV_32F);
print_3D_points(imgDisparity32F, XYZ);

[EDIT]

Adding the code used to compute 3D coords from disparity:

cv::Vec3f *StereoFrame::compute_3D_world_coordinates(int row, int col,
  shared_ptr<StereoParameters> stereo_params_sptr){

 cv::Mat Q_32F;

 stereo_params_sptr->Q_sptr->convertTo(Q_32F,CV_32F);
 cv::Mat_<float> vec(4,1);

 vec(0) = col;
 vec(1) = row;
 vec(2) = this->disparity_sptr->at<float>(row,col);

 // Discard points with 0 disparity    
 if(vec(2)==0) return NULL;
 vec(3)=1;              
 vec = Q_32F*vec;
 vec /= vec(3);
 // Discard points that are too far from the camera, and thus are highly
 // unreliable
 if(std::fabs(vec(0)) > 10 || std::fabs(vec(1)) > 10 || std::fabs(vec(2)) > 10) return NULL;

 cv::Vec3f *point3f = new cv::Vec3f();
 (*point3f)[0] = vec(0);
 (*point3f)[1] = vec(1);
 (*point3f)[2] = vec(2);

    return point3f;
}
Gelasius answered 15/3, 2014 at 2:36 Comment(5)
Can you show the disparity map you obtained and the parameters you gave to the stereo block matching algorithm?Vogt
Sure, you can see the left frame and the disparity here: postimg.org/image/yuimlj5u7 And these are the parameters I use to compute disparity: StereoBM sbm(StereoBM::BASIC_PRESET,80,5); sbm.state->preFilterSize=15; sbm.state->preFilterCap=20; sbm.state->SADWindowSize=11; sbm.state->minDisparity=0; sbm.state->numberOfDisparities=80; sbm.state->textureThreshold=0; sbm.state->uniquenessRatio=8; sbm.state->speckleWindowSize=0; sbm.state->speckleRange=0;Gelasius
You're right, the disparity map seems OK. Could the cone shape when reprojected to 3D be due to the noise in the disparity map? Can you show what you get in 3D?Vogt
Hi, thanks for your help so far! The 3D points can be seen here: postimg.org/image/9lunzg917 . Also, as I said in the post, there are many people with the exact same issue. Here is the link to one of the discussion forums where they posted their problem (opencv-users.1802565.n2.nabble.com/…), in case you want more details. Do you think this might be related to the format of the disparity image? The StereoBM algorithm returns a CV_16S image, and I'm converting it to CV_8U. This is how I do the conversion:Gelasius
double minVal; double maxVal; minMaxLoc( imgDisparity16S, &minVal, &maxVal ); imgDisparity16S.convertTo( imgDisparity8U, CV_8UC1, 255/(maxVal - minVal));Gelasius

Your code seems fine to me. It could be a bug in reprojectImageTo3D. Try replacing it with the following code, which plays the same role:

cv::Mat_<cv::Vec3f> XYZ(disparity32F.rows, disparity32F.cols);   // Output point cloud
cv::Mat_<float> vec_tmp(4, 1);
// Note: Q must be CV_32F for the product with vec_tmp to be valid; if
// stereoRectify returned it as CV_64F, convert it first with Q.convertTo(Q, CV_32F).
for(int y = 0; y < disparity32F.rows; ++y) {
    for(int x = 0; x < disparity32F.cols; ++x) {
        vec_tmp(0) = x; vec_tmp(1) = y; vec_tmp(2) = disparity32F.at<float>(y,x); vec_tmp(3) = 1;
        vec_tmp = Q * vec_tmp;
        vec_tmp /= vec_tmp(3);
        cv::Vec3f &point = XYZ.at<cv::Vec3f>(y,x);
        point[0] = vec_tmp(0);
        point[1] = vec_tmp(1);
        point[2] = vec_tmp(2);
    }
}

I have never used reprojectImageTo3D myself, but I am successfully using code similar to the snippet above.

[Initial answer]

As explained in the documentation for StereoBM, if you request a CV_16S disparity map, you have to divide each disparity value by 16 before using it, because the values are stored in fixed-point format with 4 fractional bits.

Hence, you should convert the disparity map as follows before using it:

imgDisparity16S.convertTo( imgDisparity32F, CV_32F, 1./16);

You can also directly request a CV_32F disparity map from the StereoBM structure, in which case you directly get the true disparities.

Vogt answered 15/3, 2014 at 15:19 Comment(16)
Thanks again for your fast reply! I tried to use both methods, but the results are still strange: postimg.org/image/w97qfoq0hGelasius
OK, can you then edit your post and show the code you are using (everything between stereoRectify and reprojectImageTo3D)?Vogt
Done. Let me know if you need more details.Gelasius
@Gelasius I edited my answer, try to replace reprojectImageTo3D with the code I mentioned.Vogt
Thanks for the snippet! Now the resulting point cloud (postimg.org/image/aj0vgkeyr) looks better and we can see the points corresponding to the chair and to the other objects, but I still don't understand why there is a cone-shaped group of points. Also, it seems that all the points have the same depth. Looking at the file where I store them, I noticed most of the 3D points have indeed the same value of 59016.1.Gelasius
@Gelasius I think the red points are behind the camera and correspond to areas where no matches could be found between the images. The cone-shaped points seem to be the "usable" part of the point cloud. Are you sure you did not mix up the left and right cameras and associated images (i.e. put the left image in right_frame, or the left intrinsics in right_cam_matrix, and vice versa)?Vogt
I'm checking the code again and again, but can't find any of these mistakes. I'll keep working on it, and will let you know if I have any progress. Thanks for your time and interest!Gelasius
after many hours spent and a lot of tweaking, I've finally managed to generate a seemingly correct (or at least coherent) point cloud from the stereo pair. Here is the screenshot of the first reasonable result: postimg.org/image/pfv25ppkx. You can see that the points seem flipped when compared to the image, so I used -T instead of T (the translation vector obtained from the calibration) available at github.com/lessthanoptimal/BoofCV-Data/tree/master/applet/…, and got the following result: postimg.org/image/p8pnn7pel ...Gelasius
... There were also some issues with the type of the matrices/variables used in the computation, but apparently they were solved.Gelasius
[UPDATE] It seems that the flip on the point cloud is actually a flaw in the PCL visualizer (See pcl-users.org/Flipped-point-clouds-in-visualizer-td4021086.html and github.com/PointCloudLibrary/pcl/issues/116)Gelasius
@Gelasius Those results look good. Could you explain what tweaking you did? I've got the same problem using the Python API. My disparity map looks great but the point cloud looks like a cone. What helped in your case?Underglaze
@Daniel-Lee I edited the question to add the code used to get 3D coordinates from the disparity image (note that it's based on previous answers, but I've added some validity checks). I think you can easily adapt it to your needs and use it instead of OpenCV's function. Hope it helps!Gelasius
Sure @aledalgrande, here is the Q matrix obtained by the stereoRectify function: Q= [1, 0, 0, -160.7056255340576; 0, 1, 0, -123.4229545593262; 0, 0, 0, 245.3052543487613; 0, 0, -8.329822043183768, 0] Let me know if you need anything else.Gelasius
@user3921720 I edited the code to make it clearer and to fix the typo.Vogt
I still get a cone/pyramid 3D scene after using your codeEvesham
None of the links in the comments are aliveRochellrochella

© 2022 - 2024 — McMap. All rights reserved.