opencv update homography matrix to fit on an image double the size
I'm doing video stabilization using optical flow. To make calcOpticalFlowPyrLK work faster I'm downscaling the original image 2x and running the function on that.

How can I modify the homography matrix (retrieved via findHomography) so that I can warpPerspective the original, larger image?

Ralli answered 29/5, 2012 at 21:52 Comment(0)

Let B be the transformation you have computed. You can multiply B by another homography, A, to get AB = C, where C is a homography that performs both transformations; this is equivalent to applying first B and then A. To find A you can use getPerspectiveTransform.

Edit: by AB I meant matrix multiplication, not element-wise multiplication.

Edit 2: to get A you pass the four corners of the two images in the same order to getPerspectiveTransform such that the corners of the downsampled image are the source points and the corners of the original image are the destination points.
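To illustrate the matrix multiplication in AB = C, here is a minimal pure-Python sketch. The 3x3 helpers and the numeric values of A and B are invented for demonstration; in practice A would come from getPerspectiveTransform and B from findHomography:

```python
# Composing two homographies: C = A @ B applies B first, then A.
# All matrices are 3x3 nested lists; values are made up for illustration.

def matmul3(X, Y):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_h(H, x, y):
    """Apply homography H to point (x, y) and dehomogenize."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A: a pure 2x upscaling (what getPerspectiveTransform would return for
# matching corners of the half-size and full-size images).
A = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]

# B: an example homography as if estimated on the downscaled image.
B = [[1.01, 0.02, 3.0], [-0.01, 0.99, -2.0], [0.0, 0.0, 1.0]]

C = matmul3(A, B)

# Applying C directly matches applying B first, then A:
cx, cy = apply_h(C, 10, 20)
ax, ay = apply_h(A, *apply_h(B, 10, 20))
```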

Divulsion answered 29/5, 2012 at 22:7 Comment(7)
I'm getting B via findHomography, as you mentioned. Do you mean I should get A via getPerspectiveTransform? What should I send to getPerspectiveTransform? I have original image, original_image_gray_downscaled, optical_flow_points_downscaled, homography_downscaled.Ralli
getPerspectiveTransform should be really efficient, and a 3x3 matrix multiplication should take less than 1 ms on a decent machine.Divulsion
It's not working. It's showing me only a corner of the image: gist.github.com/93c82a7ddb638ac43e9aRalli
I don't want to change the size of the image, I want to change the parameters from findHomography so I can apply them to the bigger, original, image.Ralli
Does it work? If not, why did you mark this as the correct answer?Institutor
I'm also interested in whether it worked out, since I have the exact same problem. I'm stitching multiple pictures (terrain shot from a UAV), and according to the stitching pipeline in OpenCV's documentation we need to downsize the images, match them, find homographies (or fundamental matrices, depending on the situation) and then apply those homographies to the original images. Obviously there's a huge performance boost when extracting features from a downsized image, but somehow it doesn't seem to be working. The source code of the stitcher is also quite confusing sometimes.Woodsman
I got it to work by doing getPerspectiveTransform(src, dst) * B * getPerspectiveTransform(dst, src)Bronk

This is a little late, and the answer you have works fine, but I have one thing to add. I don't like taking functions like getPerspectiveTransform for granted, and in this case it is easy to build the matrix yourself. Image reductions by powers of 2 are especially easy. Suppose you have a point and you want to move it to an image with twice the resolution:

float newx = (oldx+.5)*2 - .5;
float newy = (oldy+.5)*2 - .5;

conversely, to go to an image of half the resolution...

float newx = (oldx+.5)/2 - .5;
float newy = (oldy+.5)/2 - .5;

Draw yourself a diagram if you need to, and convince yourself that it works; remember 0-indexing. Instead of thinking about making your transformation work at other resolutions, think about moving every point to the resolution of your transform, applying your transform, and then moving the result back. Fortunately, you can do all of this in a single matrix; we just need to build that matrix. First, build a matrix for each of the three steps:

//move a point to an image of half resolution;
//note this is equivalent to the equations above
project_down = (0.5,   0, -0.25,
                  0, 0.5, -0.25,
                  0,   0,     1)

//move a point to an image of twice the resolution;
//these two matrices are inverses of one another
project_up = (2, 0, 0.5,
              0, 2, 0.5,
              0, 0,   1)

To make your final transformation, just multiply them together:

final_transform = project_up * your_homography * project_down;

The nice thing is you only have to do this once for any given homography. This should work the same as getPerspectiveTransform (and probably run faster). Hopefully understanding this will help you deal with other questions you may run into regarding image resolution changes.
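The "project down, transform, project up" composition above can be checked numerically. Here is a pure-Python sketch; the helper functions and the example homography values are invented for illustration and are not OpenCV calls:

```python
# Verify that composing project_up @ H @ project_down into one matrix
# gives the same result as doing the three steps point by point.

def matmul3(X, Y):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_h(H, x, y):
    """Apply homography H to point (x, y) and dehomogenize."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# The two resolution-change matrices from the answer above.
project_down = [[0.5, 0, -0.25], [0, 0.5, -0.25], [0, 0, 1]]
project_up   = [[2, 0, 0.5],     [0, 2, 0.5],     [0, 0, 1]]

# An example homography, as if estimated on the half-size image
# (values made up for illustration).
your_homography = [[1.02, 0.01, 4.0], [-0.02, 0.98, -3.0], [0, 0, 1]]

final_transform = matmul3(project_up, matmul3(your_homography, project_down))

# Three explicit steps for one full-resolution point:
x, y = 100.0, 60.0
sx, sy = (x + 0.5) / 2 - 0.5, (y + 0.5) / 2 - 0.5    # down to half resolution
tx, ty = apply_h(your_homography, sx, sy)            # transform there
ex, ey = (tx + 0.5) * 2 - 0.5, (ty + 0.5) * 2 - 0.5  # back to full resolution

# One combined step; should agree with (ex, ey).
fx, fy = apply_h(final_transform, x, y)
```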

Gloucester answered 23/7, 2012 at 23:45 Comment(0)
