In my application:

1. I track an object.
2. I get where its corners are in the current frame.
3. I find the homography between its corners from the last frame and the current frame.
4. I use that homography to do a perspectiveTransform on the corners found in the current frame, to get transformed_corners.
5. I use the transformed_corners to find the homography between them and the overlay_image.
6. I apply the above homography M to overlay_image using warpPerspective, to get what I call the warped_image. This is the slow part.
7. Then, using masking operations, I paint the warped_image onto the current frame where the object was found.
After reading this blog article here, I now know why warpPerspective is slow.
I'm getting ~300 ms per frame in the 6th step alone, all because of warpPerspective. It's significantly affecting the FPS of my application: it went down from 12 FPS without warping to 2 FPS with warping on every frame.
Is there a faster alternative? It's all done on Android, using NDK r9. What alternatives or optimizations could bring the warp time down from ~300 ms to under 50 ms?
INTER_LINEAR is being used. See here. – Regale

Try INTER_NEAREST, but from my experience it isn't much faster. Can you give some information about input/output image sizes etc.? Maybe some code is necessary to tell whether there is any optimization possible. – Making

We are re-using Mats and passing them around here and there. Image sizes vary: 1920x1080 for currentFrame and around 390x293 for logoImage. It's obvious that full HD resolution for currentFrame is too big. We tried smaller sizes, 960x540 and below, but the time stays around ~250-350 ms. – Regale

Which snippet are you referring to? From the algorithm description it isn't visible whether you re-use an allocated warped_image, what the chosen size of warped_image is, etc. – Making