OpenCV: get perspective matrix from translation & rotation

I'm trying to verify my camera calibration, so I'd like to rectify the calibration images. I expect that this will involve using a call to warpPerspective, but I do not see an obvious function that takes the camera matrix and the rotation and translation vectors to generate the perspective matrix for this call.

Essentially I want to do the process described here (see especially the images towards the end) but starting with a known camera model and pose.

Is there a straightforward function call that takes the camera intrinsic and extrinsic parameters and computes the perspective matrix for use in warpPerspective?

I'll be calling warpPerspective after having called undistort on the image.

In principle, I could derive the solution by solving the system of equations defined at the top of the OpenCV camera calibration documentation after specifying the constraint Z=0, but I figure that there must be a canned routine that will allow me to orthorectify my test images.

In my searches, I'm finding it hard to wade through all of the stereo calibration results -- I only have one camera, but want to rectify the image under the constraint that I'm only looking at a planar test pattern.

Hermosa answered 24/4, 2014 at 17:32 Comment(4)
So, by rectify do you mean remove rotation effects? Meaning, you only have one view, not two views relative to each other, which you want to rectify (align epipolar lines).Fathead
@DavidNilosek yes, I have an oblique image of an array of calibration circles, I'd like to recover the "top down" view.Hermosa
I'm trying to think through it, but I'm unfortunately running low on time. Try using the inverse of the rotation matrix as the perspective matrix for warpPerspective, if that works I can write something that explains it a little better.Fathead
You want to convert the image viewed by a perspective camera into an image viewed by an orthographic camera, I don't think this can be done using the inverse rotation matrix. One straight-forward method would be to use cv::getPerspectiveTransform with 4 appropriate points. However it might be possible to derive the transformation directly from the camera calibration, I'll look into it.Peradventure

Actually there is no need to involve an orthographic camera. Here is how you can get the appropriate perspective transform.

If you calibrated the camera using cv::calibrateCamera, you obtained a camera matrix K, a vector of lens distortion coefficients D for your camera and, for each image that you used, a rotation vector rvec (which you can convert to a 3x3 matrix R using cv::Rodrigues) and a translation vector T. Consider one of these images and the associated R and T. After you have called cv::undistort using the distortion coefficients, the image will be as if it were acquired by a camera with projection matrix K * [ R | T ].
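In Python, a minimal sketch of how those quantities come out of the calibration (here obj_points, img_points, image_size, and image stand in for your own calibration data):

    import cv2
    import numpy as np

    # Sketch, assuming obj_points/img_points are your calibration
    # correspondences and image is one of the calibration shots.
    rms, K, D, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)

    R, _ = cv2.Rodrigues(rvecs[0])  # 3x3 rotation matrix for the first image
    T = tvecs[0]                    # its translation vector

    undistorted = cv2.undistort(image, K, D)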

Basically (as @DavidNilosek intuited), you want to cancel the rotation and get the image as if it were acquired by a projection matrix of the form K * [ I | -C ], where C = -R.inv() * T is the camera position. For that, you have to apply the following transformation:

Hr = K * R.inv() * K.inv()

The only potential problem is that the warped image might go outside the visible part of the image plane. Hence, you can use an additional translation to solve that issue, as follows:

     [ 1  0  |         ]
Ht = [ 0  1  | -K*C/Cz ]
     [ 0  0  |         ]

where Cz is the component of C along the Oz axis (i.e. the third element of C).

Finally, with the definitions above, H = Ht * Hr is a rectifying perspective transform for the considered image.
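In Python/NumPy this boils down to a few lines. A minimal sketch, assuming K, R, and T come from the calibration as above and undistorted is the output of cv2.undistort; since the third component of K*C/Cz is identically 1, Ht is written here as a pure pixel translation by the first two components of -K*C/Cz:

    import cv2
    import numpy as np

    C = (-R.T @ T).reshape(3)        # camera position in world coordinates
    Cz = C[2]                        # third element of C

    Hr = K @ R.T @ np.linalg.inv(K)  # cancels the rotation

    t = -(K @ C) / Cz                # -K*C/Cz
    Ht = np.eye(3)
    Ht[0, 2], Ht[1, 2] = t[0], t[1]  # keep Ht[2,2] = 1 (a pure pixel translation)

    H = Ht @ Hr                      # rectifying perspective transform
    h, w = undistorted.shape[:2]
    rectified = cv2.warpPerspective(undistorted, H, (w, h))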

Peradventure answered 25/4, 2014 at 12:51 Comment(8)
To get the image centered I had to do the following: let u0 = -K*C/Cz (the translation part of Ht above); shift it by half of the input image size: u[0] = u0[0] - image_size[0], u[1] = u0[1] - image_size[1], u[2] = u0[2], and then use this shifted u vector in place of -K*C/Cz in the construction of Ht. This is related to the fact that the origin of my world coordinate system is at the center of my calibration grid.Hermosa
There are typos in the code snippets in my previous comment: I shift by 1/2 of the image size.Hermosa
Hi, Aldur. Could you take a look at my similar question #46679922. I also tried changing the matPerspective used in warpPerspective to H you mentioned above. The result is empty. Thanks.Levona
@Dave: To get a correct re-projection, I needed to invert 'u[2]'.Absorptance
@AlduurDisciple Could you please explain what Cz means in the final Ht equation?Marysa
@AbhijitBalaji see my edit. This is the 3rd element of vector C.Peradventure
@AldurDisciple I followed your steps and I am getting an all black image. Could you please clarify? ThanksMarysa
@AldurDisciple Could you please take a look at this? #48576587 ThanksMarysa

This is a sketch of what I mean by "solving the system of equations" (in Python):

import cv2
import numpy as np  # the original used scipy by habit; numpy is fine too

# rvec = the rotation vector
# tvec = the translation vector
# A    = the 3x3 camera intrinsic matrix

def unit_vector(v):
    return v / np.sqrt(np.sum(v * v))

(fx, fy) = (A[0, 0], A[1, 1])
Ainv = np.array([[1.0 / fx, 0.0, -A[0, 2] / fx],
                 [0.0, 1.0 / fy, -A[1, 2] / fy],
                 [0.0, 0.0, 1.0]], dtype=np.float32)
R, _ = cv2.Rodrigues(rvec)  # cv2.Rodrigues returns (matrix, jacobian)
Rinv = R.T

# displacement between camera and world coordinate origin, in world coordinates
u = Rinv.dot(tvec.ravel())

# corners of a 640x480 image in homogeneous (x, y, 1) pixel coordinates,
# hard-coded here
pixel_corners = [np.array(c, dtype=np.float32)
                 for c in [(0 + 0.5, 0 + 0.5, 1),
                           (640 - 0.5, 0 + 0.5, 1),
                           (640 - 0.5, 480 - 0.5, 1),
                           (0 + 0.5, 480 - 0.5, 1)]]
scene_corners = []
for c in pixel_corners:
    # direction of the ray that the corner images, in world coordinates
    lhat = Rinv.dot(Ainv.dot(c))
    s = u[2] / lhat[2]
    # now we have the case that (s*lhat - u)[2] == 0,
    # i.e. s is how far along the line of sight we need
    # to move to get to the Z == 0 plane
    g = s * lhat - u
    scene_corners.append((g[0], g[1]))

# now we have: 4 pixel_corners (image coordinates) and 4 corresponding
# scene_corners; we can call cv2.getPerspectiveTransform on them and so on...
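To make the last step concrete, here is a hypothetical completion (the pixels-per-world-unit scale and the output size are arbitrary choices, not part of the recipe above):

    # map the scene corners (world units on the Z=0 plane) into output pixels
    scale = 50.0  # hypothetical resolution: 50 output pixels per world unit
    scene = np.array(scene_corners, dtype=np.float32)
    dst = (scene - scene.min(axis=0)) * scale

    src = np.array([c[:2] for c in pixel_corners], dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)

    out_size = (int(np.ceil(dst[:, 0].max())), int(np.ceil(dst[:, 1].max())))
    rectified = cv2.warpPerspective(image, H, out_size)  # image = undistorted input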
Hermosa answered 25/4, 2014 at 13:35 Comment(1)
I suspect that this is an inelegant way of achieving the same effect as @AldurDisciple's answerHermosa

For anyone struggling with the alignment of the image when following @BConic's answer, a practical solution is to warp the image corner points using Hr, and define Ht to offset the result:

import cv2
import numpy as np

# K is the 3x3 intrinsic matrix, R the 3x3 rotation (e.g. from cv2.Rodrigues)
Hr = K @ R.T @ np.linalg.pinv(K)

# warp image corner points:
w, h = image_size
points = [[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]
points = np.array(points, np.float32).reshape(-1, 1, 2)

warped_points = cv2.perspectiveTransform(points, Hr).squeeze()

# get size and offset of warped corner points:
xmin, ymin = warped_points.min(axis=0)
xmax, ymax = warped_points.max(axis=0)

# size:
warped_image_size = int(round(xmax - xmin)), int(round(ymax - ymin))

# offset:
Ht = np.eye(3)
Ht[0, 2] = -xmin
Ht[1, 2] = -ymin

H = Ht @ Hr
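With H and warped_image_size in hand, the warp itself is then, e.g. (assuming image is the undistorted input):

    warped = cv2.warpPerspective(image, H, warped_image_size)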
Oberon answered 6/7, 2023 at 6:49 Comment(0)
