Bad disparity map using OpenCV StereoBM

I am trying to use StereoBM to get a disparity map from two images. I tried some sample code and images, and they work fine. However, when I try my own images, I get a very bad, very noisy map.

[input images and the resulting noisy disparity map]

My StereoBM parameters:

sbm.state->SADWindowSize = 25;
sbm.state->numberOfDisparities = 128;
sbm.state->preFilterSize = 5;
sbm.state->preFilterCap = 61;
sbm.state->minDisparity = -39;
sbm.state->textureThreshold = 507;
sbm.state->uniquenessRatio = 0;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 8;
sbm.state->disp12MaxDiff = 1;

My questions are:

  1. Is there any problem with my images?
  2. Is it possible to get a good disparity map without camera calibration? Do I need to rectify the images before running StereoBM?

Thanks.

Here is my code for rectifying the images:

Mat img_1 = imread( "image1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( "image2.jpg", CV_LOAD_IMAGE_GRAYSCALE );

int minHessian = 430;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );

//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L1, true);   //BFMatcher matcher(NORM_L2);

std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );

double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < matches.size(); i++ )
{ double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

std::vector< DMatch > good_matches;
vector<Point2f>imgpts1,imgpts2;
for( int i = 0; i < matches.size(); i++ )
{
    if( matches[i].distance <= max(4.5*min_dist, 0.02) ){
        good_matches.push_back( matches[i]);
        imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt);
        imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt);
    }

}

std::vector<uchar> status;
cv::Mat F = cv::findFundamentalMat(imgpts1, imgpts2, cv::FM_8POINT, 3., 0.99, status);   //FM_RANSAC

Mat H1,H2;
cv::stereoRectifyUncalibrated(imgpts1, imgpts2, F, img_1.size(), H1, H2);

cv::Mat rectified1(img_1.size(), img_1.type());
cv::warpPerspective(img_1, rectified1, H1, img_1.size());

cv::Mat rectified2(img_2.size(), img_2.type());
cv::warpPerspective(img_2, rectified2, H2, img_2.size());

StereoBM sbm;
sbm.state->SADWindowSize = 25;
sbm.state->numberOfDisparities = 128;
sbm.state->preFilterSize = 5;
sbm.state->preFilterCap = 61;
sbm.state->minDisparity = -39;
sbm.state->textureThreshold = 507;
sbm.state->uniquenessRatio = 0;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 8;
sbm.state->disp12MaxDiff = 1;

Mat disp,disp8;
sbm(rectified1, rectified2, disp);
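
StereoBM writes 16-bit fixed-point disparities (4 fractional bits) into disp; to actually view it as the 8-bit disp8 declared above, it can be scaled for display, for example:

// scale the fixed-point disparities into a viewable 8-bit image
normalize(disp, disp8, 0, 255, NORM_MINMAX, CV_8U);
imshow("disparity", disp8);
waitKey();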

The rectified images and the resulting disparity map are shown here:

[rectified images and the resulting disparity map]

Emeric answered 9/3, 2016 at 23:3 Comment(2)
Yes, they need to be rectified, as you can read in the docs. (Papilloma)
@Papilloma Can images be rectified without any info about the camera? Sorry, I am new to this. I read stereo_match.cpp; it requires the camera's intrinsic parameters. (Emeric)
  1. There is no particular problem with your images. However, if computation time is not crucial, I'd suggest using a larger resolution. Also, you had better use an uncompressed image format if possible.

  2. You calibrate your stereo cameras in order to rectify your stereo pictures. You do need to rectify the pictures, but it is also possible to rectify them without calibrated cameras. If you have only a few pictures to process, you can do it in Photoshop or the like by shifting or rotating the images so that matching points end up on the same line. If you have a larger number of pictures to process, you can do it the way you tried in your code.

I did not go through your code in detail, but I suppose you should check whether matching points really end up on the same row, e.g. with the quick check sketched below.
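
A quick way to check this, reusing the matched points imgpts1/imgpts2 and the homographies H1/H2 from your code (just a sketch), is to warp the points with the rectifying homographies and look at the remaining vertical offset:

// warp the matched keypoints with the rectifying homographies and
// measure how far corresponding points are from sharing a row
std::vector<cv::Point2f> rectpts1, rectpts2;
cv::perspectiveTransform(imgpts1, rectpts1, H1);
cv::perspectiveTransform(imgpts2, rectpts2, H2);

double meanDy = 0;
for (size_t i = 0; i < rectpts1.size(); i++)
    meanDy += std::abs(rectpts1[i].y - rectpts2[i].y);
meanDy /= rectpts1.size();
std::cout << "mean vertical offset after rectification: " << meanDy << " px" << std::endl;

If this offset is more than a pixel or two, the uncalibrated rectification did not really align the rows, and block matching along rows has little chance of finding correct correspondences.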

In your sample pictures this is actually the case, and using StereoSGBM instead of StereoBM I got a better, yet still very noisy, result.

[disparity map computed with StereoSGBM]

It takes a bit of parameter tuning to get good results with StereoSGBM. Also note that the result for the block in the back is much better than for the objects in the front, because the block has a textured surface.

Here are the parameters I used:

    Ptr<StereoSGBM> sgbm = StereoSGBM::create(0,    //int minDisparity
                                        96,     //int numDisparities
                                        5,      //int blockSize (SADWindowSize in the pre-3.0 API)
                                        600,    //int P1 = 0
                                        2400,   //int P2 = 0
                                        20,     //int disp12MaxDiff = 0
                                        16,     //int preFilterCap = 0
                                        1,      //int uniquenessRatio = 0
                                        100,    //int speckleWindowSize = 0
                                        20,     //int speckleRange = 0
                                        true);  //int mode: 1 == StereoSGBM::MODE_HH, full two-pass DP (the old fullDP)

sgbm->compute(left, right, disp);
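
For reference, OpenCV's stereo_match sample derives the two smoothness penalties from the block size, roughly like this (cn is the number of image channels, 1 for grayscale input):

int cn = 1;                                // grayscale input
int blockSize = 5;                         // same block size as passed to create() above
int P1 = 8  * cn * blockSize * blockSize;  // 200: penalty on disparity changes of +/-1 between neighbours
int P2 = 32 * cn * blockSize * blockSize;  // 800: penalty on larger disparity jumps, must be > P1

The 600/2400 used above keep the same 1:4 ratio but penalize disparity changes more strongly, which smooths the map further; the last argument selects the full two-pass mode (MODE_HH), which is slower and uses more memory but tends to give a cleaner result.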
Luhe answered 13/3, 2016 at 23:55 Comment(1)
Thanks for your answer. I think the textured surface is the key. I got a better result by using a textured object. (Emeric)

If your cameras are planar, don't pass the rotation matrix returned from stereoRectify to initUndistortRectifyMap. The rectification and undistortion process makes the epipolar lines horizontal.

When done correctly, the projections of the same three-dimensional point should lie in the same row of each image. It doesn't look like they do here.
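
For reference, the calibrated pipeline this answer refers to looks roughly like the sketch below; the intrinsics M1/D1/M2/D2, the stereo rotation R and translation T, and imgSize are assumed to come from a prior cv::stereoCalibrate run:

// compute the rectification transforms: R1/R2 rotate each camera so the
// epipolar lines become horizontal, P1/P2 are the new projection matrices
cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(M1, D1, M2, D2, imgSize, R, T, R1, R2, P1, P2, Q);

// build the undistort+rectify lookup maps; for an already parallel rig
// R1/R2 come out close to identity
cv::Mat map1x, map1y, map2x, map2y;
cv::initUndistortRectifyMap(M1, D1, R1, P1, imgSize, CV_32FC1, map1x, map1y);
cv::initUndistortRectifyMap(M2, D2, R2, P2, imgSize, CV_32FC1, map2x, map2y);

// warp the original images; corresponding points should now share a row
cv::Mat rectLeft, rectRight;
cv::remap(img_1, rectLeft,  map1x, map1y, cv::INTER_LINEAR);
cv::remap(img_2, rectRight, map2x, map2y, cv::INTER_LINEAR);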

Denver answered 16/4, 2018 at 21:28 Comment(0)
