OpenCV depth estimation from disparity map
I'm trying to estimate depth from a stereo image pair with OpenCV. I have the disparity map, and depth can be estimated as:

            (baseline * focal)
depth = --------------------------
        (disparity * SensorSize)
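Plugging consistent units into the formula shows how it behaves; a quick sketch with made-up values (Python used only for illustration, none of these numbers come from the question):

```python
# Illustrative values only (assumptions, not from the question)
baseline_mm = 60.0          # distance between the two camera centres, mm
focal_mm = 4.0              # lens focal length, mm
sensor_element_mm = 0.006   # physical width of one pixel, mm (6 um)
disparity_px = 20.0         # disparity of a matched pixel, in pixels

# depth = (baseline * focal) / (disparity * sensor element size)
depth_mm = (baseline_mm * focal_mm) / (disparity_px * sensor_element_mm)
print(round(depth_mm, 3))  # 2000.0 -> the point is about 2 m away
```

Note the units: the disparity in pixels is converted to millimetres by the sensor-element size, so the depth comes out in the same unit as the baseline.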

I used the block matching technique to find corresponding points in the two rectified images. OpenCV lets you set several block matching parameters, for example BMState->numberOfDisparities.

After block matching process:

cvFindStereoCorrespondenceBM( frame1r, frame2r, disp, BMState ); // disp is CV_16S; stored values are disparity * 16
cvConvertScale( disp, disp, 16, 0 );
cvNormalize( disp, vdisp, 0, 255, CV_MINMAX );                   // vdisp is rescaled to 0-255 for display only
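One detail worth noting here: the block matcher's CV_16S output is fixed point, with the true disparity multiplied by 16 (4 fractional bits), so recovering pixel disparities means dividing by 16 rather than multiplying. A minimal sketch of that conversion (plain Python stand-in; the factor 16 matches OpenCV's StereoBM convention, the sample values are made up):

```python
# Raw fixed-point values as stored by block matching: true disparity * 16
raw_fixed_point = [320, 160, 48]

# Undo the 4 fractional bits to get disparity in (sub)pixels
true_disparity = [v / 16.0 for v in raw_fixed_point]
print(true_disparity)  # [20.0, 10.0, 3.0]
```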

I found depth value as:

if( cvGet2D(vdisp, y, x).val[0] > 0 )
{
    depth = (baseline * focal) / (cvGet2D(vdisp, y, x).val[0] * SENSOR_ELEMENT_SIZE);
}
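A likely cause of the mismatch: `vdisp` was normalized to 0-255 for display, so its values no longer encode disparity in pixels. A sketch of the per-pixel depth computed from the raw block-matching value instead (Python for illustration; the 16x fixed-point scale is OpenCV's StereoBM convention, the other numbers are made-up assumptions):

```python
# Illustrative calibration values (assumptions, not from the question)
baseline_mm = 60.0
focal_mm = 4.0
sensor_element_mm = 0.006

def depth_from_raw(raw_value):
    """Depth in mm from a raw 16x fixed-point block-matching disparity."""
    disparity_px = raw_value / 16.0        # undo the 4 fractional bits
    if disparity_px <= 0:
        return None                        # no valid match at this pixel
    return (baseline_mm * focal_mm) / (disparity_px * sensor_element_mm)

print(round(depth_from_raw(320), 3))  # raw 320 -> disparity 20 px -> 2000.0 mm
```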

But the depth value obtained this way differs from the value given by the formula above, because BMState->numberOfDisparities changes the resulting values.

How should I set this parameter? What does changing it actually do?

Thanks

Ar answered 6/10, 2013 at 16:35 Comment(0)

The simple formula is valid if and only if the motion from the left camera to the right one is a pure translation (in particular, parallel to the horizontal image axis).

In practice this is hardly ever the case. It is common, for example, to perform the matching after rectifying the images, i.e. after warping them using a known fundamental matrix, so that corresponding pixels are constrained to lie on the same row. Once you have matches on the rectified images, you can remap them onto the original images using the inverse of the rectifying warp, and then triangulate in 3D space to reconstruct the scene. OpenCV has a routine for this: reprojectImageTo3D
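For intuition, reprojectImageTo3D applies the 4x4 Q matrix produced by stereoRectify to each pixel, [X Y Z W]^T = Q [x y d 1]^T, then divides by W. A stripped-down sketch of that math for one pixel (plain Python; the calibration numbers are invented and sign conventions are simplified, so treat it as a sketch, not the exact OpenCV implementation):

```python
# Invented calibration of an ideal rectified pair (assumptions)
fx = 700.0               # focal length in pixels
cx, cy = 320.0, 240.0    # principal point
Tx = 60.0                # baseline, mm

def reproject_pixel(x, y, d):
    """Q-matrix reprojection: [X Y Z W]^T = Q [x y d 1]^T, then divide by W."""
    W = d / Tx                     # from Q's last row (sign convention simplified)
    return ((x - cx) / W, (y - cy) / W, fx / W)

X, Y, Z = reproject_pixel(340.0, 240.0, 20.0)
print(round(Z, 1))  # 2100.0 -> same as depth = fx * Tx / d
```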

Armoury answered 6/10, 2013 at 20:15 Comment(2)
Thanks for your answer. I already give the block matching process the two rectified images, so the previous formula should work, right? Now I'll try the reprojectImageTo3D function. And if I use reprojectImageTo3D instead, should I extract only the Z value? Thanks for the reply. – Ar
Define "should work"? The parallel-camera formula gives you a depth at a given pixel with respect to an ideal camera that observes the rectified image. Its reconstruction will be projectively, but not metrically, accurate. [Yes, I do speak Italian, but this is an English-only forum] – Armoury

The formula you mentioned above won't work when the camera plane and the plane being imaged are not the same, i.e. the camera is mounted at some height and the plane it captures is the ground. So you have to modify the formula slightly. One way is to fit the disparity values and known distances to a polynomial by curve fitting; the resulting coefficients can then be used for other, unknown distances. A second way is to create a 3D point cloud using the warp (Q) matrix and reprojectImageTo3D (OpenCV API).
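The curve-fitting idea can be sketched quickly: in the ideal model depth is proportional to 1/disparity, so fitting measured distances against 1/disparity with a straight line already captures it, and higher-degree polynomials can absorb deviations like the tilt described above. A minimal sketch (plain Python, least squares by hand; the (disparity, distance) samples are invented):

```python
# Invented calibration samples: (disparity in px, measured distance in mm)
samples = [(40.0, 1000.0), (20.0, 2000.0), (10.0, 4000.0)]

# Least-squares fit of depth = a * (1/d) + b
xs = [1.0 / d for d, _ in samples]
ys = [z for _, z in samples]
n = len(samples)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict_depth(disparity_px):
    """Depth predicted from the fitted coefficients."""
    return a / disparity_px + b

print(round(predict_depth(16.0)))  # 2500 for this ideal data
```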

Otalgia answered 7/10, 2013 at 5:4 Comment(0)
