I have computed the fundamental matrix between two cameras using OpenCV's findFundamentalMat. When I plot the epipolar lines in the image, they pass through the matched points correctly.
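For reference, the lines are drawn roughly like this (a sketch; pts1, pts2 and img2 are placeholder names for the matched point arrays and the second image):

import cv2
import numpy as np

# Epipolar lines in image 2 corresponding to points in image 1.
# pts1 is an (N, 1, 2) float32 array of matched points.
lines2 = cv2.computeCorrespondEpilines(pts1, 1, F).reshape(-1, 3)
h, w = img2.shape[:2]
for (a, b, c), (x, y) in zip(lines2, pts2.reshape(-1, 2)):
    # Line a*x + b*y + c = 0, drawn from x = 0 to x = w
    # (assuming b != 0, i.e. the line is not vertical).
    p0 = (0, int(round(-c / b)))
    p1 = (w, int(round(-(c + a * w) / b)))
    cv2.line(img2, p0, p1, (0, 255, 0), 1)
    cv2.circle(img2, (int(x), int(y)), 4, (0, 0, 255), -1)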
Now I try to get the pose from that fundamental matrix, first computing the essential matrix and then decomposing it using the Hartley & Zisserman approach:
import cv2
import numpy as np

K2 = np.mat(self.calibration.getCameraMatrix(1))
K1 = np.mat(self.calibration.getCameraMatrix(0))
E = K2.T * np.mat(F) * K1

w, u, vt = cv2.SVDecomp(np.mat(E))
# Make sure both factors are proper rotations (det = +1).
if np.linalg.det(u) < 0:
    u *= -1.0
if np.linalg.det(vt) < 0:
    vt *= -1.0

# Find R and t from Hartley & Zisserman.
W = np.mat([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
R = np.mat(u) * W * np.mat(vt)
t = u[:, 2]  # u3; already unit norm, and only defined up to sign
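Note that the Hartley & Zisserman decomposition actually yields four candidate poses: R = U*W*Vt or R = U*W.T*Vt, each combined with t = +u3 or t = -u3. The code above keeps only one of them; enumerating all four would look like this (a sketch, leaving out the cheirality check that picks the pose whose triangulated points land in front of both cameras):

R1 = np.mat(u) * W * np.mat(vt)
R2 = np.mat(u) * W.T * np.mat(vt)
# Four candidate (R, t) pairs; only one of them reconstructs
# points in front of both cameras.
candidates = [(R1, t), (R1, -t), (R2, t), (R2, -t)]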
To check that everything up to here is correct, I recompute E and F from R and t and plot the epipolar lines again:
# Recompute E = [t]x * R and F = K2^-T * E * K1^-1.
tx, ty, tz = np.ravel(t)
S = np.mat([[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]])  # [t]x, skew-symmetric
E = S * np.mat(R)
F = np.linalg.inv(K2).T * np.mat(E) * np.linalg.inv(K1)
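Note that F is only defined up to scale, so a fair element-wise comparison should normalize both matrices first, something like this (F_orig and F_rec are placeholder names for the original and recomputed matrices):

F0 = F_orig / F_orig[2, 2]  # scale so the bottom-right entry is 1
F1 = F_rec / F_rec[2, 2]
print(np.abs(F0 - F1).max())  # should be ~0 if the round trip preserved F

Even after that normalization, the two matrices printed below clearly differ.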
But, surprisingly, the lines have moved and no longer go through the points. Have I done something wrong?
It might be related to this question: http://answers.opencv.org/question/18565/pose-estimation-produces-wrong-translation-vector/, but no solution was given there.
The matrices I get are:
Original F =
[[ -1.62627683e-07  -1.38840952e-05   8.03246936e-03]
 [  5.83844799e-06  -1.37528349e-06  -3.26617731e-03]
 [ -1.15902181e-02   1.23440336e-02   1.00000000e+00]]

E =
[[-0.09648757 -8.23748182 -0.6192747 ]
 [ 3.46397143 -0.81596046  0.29628779]
 [-6.32856235 -0.03006961 -0.65380443]]

R =
[[  9.99558381e-01  -2.72074658e-02   1.19497464e-02]
 [  3.50795548e-04   4.12906861e-01   9.10773189e-01]
 [ -2.97139627e-02  -9.10366782e-01   4.12734058e-01]]

t =
[[ -8.82445166e-02]
 [  8.73204425e-01]
 [  4.79298380e-01]]

Recomputed E =
[[-0.0261145  -0.99284189 -0.07613091]
 [ 0.47646462 -0.09337537  0.04214901]
 [-0.87284976 -0.01267909 -0.09080531]]

Recomputed F =
[[ -4.40154169e-08  -1.67341327e-06   9.85070691e-04]
 [  8.03070680e-07  -1.57382143e-07  -4.67389530e-04]
 [ -1.57927152e-03   1.47100268e-03   2.56606003e-01]]