OpenCV StereoRectifyUncalibrated to 3D Point Cloud
I have code that works through all of the steps up to computing the rectification transforms with cv::stereoRectifyUncalibrated. However, I am not sure where to go from there to get a 3D point cloud.

I have code that works for the calibrated case: it gives me a Q matrix, which I then use with StereoBM and reprojectImageTo3D to produce a point cloud.

I want to compare the results of the two methods, since sometimes I will not be able to calibrate the camera.

Lorenalorene asked 19/11, 2012 at 22:29

http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html I think this will be helpful. It includes code that converts a disparity image to a point cloud using PCL and displays it in a 3D viewer.

Since you have Q, the two images, and the other camera parameters (from calibration), you can get a depth map with reprojectImageTo3D.

Use StereoBM or StereoSGBM to compute a disparity map, then pass that disparity map together with Q to reprojectImageTo3D:

StereoBM sbm; // OpenCV 2.x block-matching stereo; tune the parameters for your images
sbm.state->SADWindowSize = 9;         // matching block size (must be odd)
sbm.state->numberOfDisparities = 112; // must be divisible by 16
sbm.state->preFilterSize = 5;
sbm.state->preFilterCap = 61;
sbm.state->minDisparity = -39;
sbm.state->textureThreshold = 507;
sbm.state->uniquenessRatio = 0;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 8;
sbm.state->disp12MaxDiff = 1;

sbm(g1, g2, disp); // g1 and g2 are the two rectified grayscale images
reprojectImageTo3D(disp, Image3D, Q, true, CV_32F); // Image3D holds per-pixel (X, Y, Z)
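
Note that this is the old OpenCV 2.x interface. On OpenCV 3.x and later the same step would look roughly like the sketch below; the parameter values are carried over from the snippet above, so treat them as starting points:

cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(112, 9); // numDisparities, blockSize
bm->setPreFilterSize(5);
bm->setPreFilterCap(61);
bm->setMinDisparity(-39);
bm->setTextureThreshold(507);
bm->setUniquenessRatio(0);
bm->setSpeckleWindowSize(0);
bm->setSpeckleRange(8);
bm->setDisp12MaxDiff(1);

cv::Mat disp16, disp;
bm->compute(g1, g2, disp16);                // CV_16S, fixed point: true disparity * 16
disp16.convertTo(disp, CV_32F, 1.0 / 16.0); // scale back to real disparity values
cv::reprojectImageTo3D(disp, Image3D, Q, true);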

The code below then converts the disparity map into a point cloud by applying Q manually: for each pixel it evaluates [X Y Z W]^T = Q * [u v d 1]^T and divides by W.

pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);

// Entries of Q used below (Qij = Q.at<double>(i, j))
double Q03 = Q.at<double>(0, 3), Q13 = Q.at<double>(1, 3), Q23 = Q.at<double>(2, 3);
double Q32 = Q.at<double>(3, 2), Q33 = Q.at<double>(3, 3);

double px, py, pz;
uchar pr, pg, pb;

for (int i = 0; i < img_rgb.rows; i++)
{
    uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
    uchar* disp_ptr = img_disparity.ptr<uchar>(i);
    for (int j = 0; j < img_rgb.cols; j++)
    {
        // Get 3D coordinates: Q * [j, i, d, 1]^T, then divide by W
        uchar d = disp_ptr[j];
        if (d == 0) continue; // discard pixels with no disparity

        double pw = -1.0 * static_cast<double>(d) * Q32 + Q33; // W (sign flipped as in the linked post, so Z > 0)
        px = static_cast<double>(j) + Q03; // X = u - cx
        py = static_cast<double>(i) + Q13; // Y = v - cy
        pz = Q23;                          // Z = f

        // Normalize by the homogeneous coordinate
        px = px / pw;
        py = py / pw;
        pz = pz / pw;

        // Get RGB info (OpenCV stores pixels as BGR)
        pb = rgb_ptr[3 * j];
        pg = rgb_ptr[3 * j + 1];
        pr = rgb_ptr[3 * j + 2];

        // Insert the point into the cloud
        pcl::PointXYZRGB point;
        point.x = px;
        point.y = py;
        point.z = pz;
        uint32_t rgb = (static_cast<uint32_t>(pr) << 16 |
                        static_cast<uint32_t>(pg) << 8 |
                        static_cast<uint32_t>(pb));
        point.rgb = *reinterpret_cast<float*>(&rgb); // PCL packs RGB into a float
        point_cloud_ptr->points.push_back(point);
    }
}

point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;

//Create visualizer
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
viewer = createVisualizer( point_cloud_ptr );
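
createVisualizer is a helper defined in the linked post; a minimal version along these lines (a sketch against the PCL 1.x visualization API) sets up the viewer and renders the colored cloud:

boost::shared_ptr<pcl::visualization::PCLVisualizer> createVisualizer(
    pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr cloud)
{
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer(
        new pcl::visualization::PCLVisualizer("3D Viewer"));
    viewer->setBackgroundColor(0, 0, 0);
    pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(cloud);
    viewer->addPointCloud<pcl::PointXYZRGB>(cloud, rgb, "reconstruction");
    viewer->setPointCloudRenderingProperties(
        pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "reconstruction");
    viewer->initCameraParameters();
    return viewer;
}

// Keep the window alive until the user closes it
while (!viewer->wasStopped())
    viewer->spinOnce(100);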
Kroo answered 19/3, 2013 at 15:01. Comments (4):
While this may theoretically answer the question, it would be preferable to include the essential parts of the answer here and provide the link for reference. (Equivocal)

Actually, that link has code which converts an OpenCV disparity map to a PCL point cloud and shows it in 3D. (Kroo)

The problem is that you have only given a link, and in the event the link ever fails, this answer will become obsolete. You should provide an overview of the link, code, etc. here and include the link only as a reference. (Equivocal)

In the uncalibrated case, he does not have the Q matrix, so this answer is not valid. (Auld)
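
As the last comment points out, cv::stereoRectifyUncalibrated does not produce a Q matrix; it only returns the two rectifying homographies H1 and H2. A sketch of the uncalibrated path, assuming matched point sets pts1/pts2 have already been found; without intrinsics the result is only defined up to a projective ambiguity:

// Fundamental matrix from point matches, then rectifying homographies
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3, 0.99);
cv::Mat H1, H2;
cv::stereoRectifyUncalibrated(pts1, pts2, F, g1.size(), H1, H2);

// Warp both images into the rectified frame
cv::Mat r1, r2;
cv::warpPerspective(g1, r1, H1, g1.size());
cv::warpPerspective(g2, r2, H2, g2.size());

// Disparity on the rectified pair, exactly as in the calibrated case
cv::Mat disp;
sbm(r1, r2, disp);

// There is no valid Q here: reprojectImageTo3D would need one assembled
// from guessed intrinsics (focal length, principal point, baseline), so the
// reconstruction is at best up to scale and in general only projective.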
