cvReprojectImageTo3D: 3D modelling from 2D images
I badly need your help with this issue. I am trying to model a simple scene in 3D from 2D images. I am using two images (left and right) of the famous Tsukuba scene: http://www.cc.gatech.edu/classes/AY2003/cs7495_fall/ProblemSets/Data/tsukuba-right.bmp From these I get a disparity map like this one: http://www.robots.ox.ac.uk/~ojw/2op/tsukuba_score.png

From here I have some questions. I think the steps should be:

1. cvStereoRectify (to get Q)
2. cvReprojectImageTo3D (disparity map, 3D image, Q)
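(For readers following along: step 2 can be illustrated with plain NumPy. reprojectImageTo3D multiplies each (x, y, disparity, 1) vector by Q and divides by the resulting homogeneous coordinate. This is a minimal sketch with made-up calibration values, assuming the standard Q layout; sign conventions vary, so Tx is chosen as the negative baseline so that depth comes out positive.)

```python
import numpy as np

# Hypothetical calibration values, for illustration only
f = 500.0               # focal length in pixels
cx, cy = 320.0, 240.0   # principal point
B = 0.1                 # baseline in metres
Tx = -B                 # translation of the right camera (sign convention assumed)

# Q in the standard disparity-to-depth layout (cx' == cx)
Q = np.array([
    [1.0, 0.0, 0.0, -cx],
    [0.0, 1.0, 0.0, -cy],
    [0.0, 0.0, 0.0,  f ],
    [0.0, 0.0, -1.0 / Tx, 0.0],
])

def reproject_pixel(x, y, d, Q):
    """What reprojectImageTo3D does per pixel: homogeneous transform by Q."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return np.array([X / W, Y / W, Z / W])

p = reproject_pixel(400, 300, 16.0, Q)
# depth p[2] should equal f * B / d = 500 * 0.1 / 16 = 3.125
```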

But I don't know what to pass as inputs to stereoRectify. I only have the two images; I have no information about the cameras. (Maybe I can use stereoRectifyUncalibrated instead? If so, how?)

Please help. Thanks!

Scribble answered 19/7, 2011 at 8:11

Comment: "Was the answer ok, or would you need more information?" - Theophany

Extract from the OpenCV documentation:

" The function stereoRectify computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, that makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. On input the function takes the matrices computed by stereoCalibrate() and on output it gives 2 rotation matrices and also 2 projection matrices in the new coordinates. "

Answer:

This means there are three options:

  • Either you have two images and you know the model of your camera (intrinsics), loaded from an XML file for instance: loadXMLFromFile() => stereoRectify() => reprojectImageTo3D()

  • Or you don't have them, but you can calibrate your camera: stereoCalibrate() => stereoRectify() => reprojectImageTo3D()

  • Or you can't calibrate the camera (your case, since you don't have the camera that filmed the Tsukuba scene). Then you need to find pairs of keypoints in both images with SURF or SIFT, for instance (you can use any blob detector, actually), compute descriptors for these keypoints, match the keypoints of the right image to those of the left image according to their descriptors, and then estimate the fundamental matrix from the matched pairs. The processing is much harder and looks like this: detect keypoints (SURF, SIFT) => extract descriptors (SURF, SIFT) => compare and match descriptors (BruteForce or Flann-based approaches) => find the fundamental matrix (findFundamentalMat()) from these pairs => stereoRectifyUncalibrated() => reprojectImageTo3D()

I hope this helps; if not, please let me know.

Julien,

Theophany answered 19/7, 2011 at 16:21
