How to know if an image is similar to another (slightly different angle but same point of view)
I've checked methods like Phasher to find similar images. Basically: resize the image to 8x8, convert it to grayscale, compute the average pixel value, and build a binary hash by marking each pixel as above or below that average.

This method is very well explained here: http://hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
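As a sketch, the average-hash comparison described in that article can be written like this in Python (numpy only; a real pipeline would first resize the photo to 8x8 with something like Pillow, which is skipped here by working directly on synthetic 8x8 arrays):

```python
import numpy as np

def average_hash(pixels_8x8):
    """64-bit hash: 1 where a pixel is above the image mean, 0 otherwise."""
    avg = pixels_8x8.mean()
    bits = (pixels_8x8 > avg).astype(np.uint8).flatten()
    return ''.join(map(str, bits))

def hamming_similarity(h1, h2):
    """Fraction of hash bits that agree (1.0 means identical hashes)."""
    same = sum(a == b for a, b in zip(h1, h2))
    return same / len(h1)

# Synthetic 8x8 grayscale images standing in for the resized photos.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, (8, 8))
img2 = img1.copy()
img2[0, 0] = 255  # the "coin": a tiny local change

h1, h2 = average_hash(img1), average_hash(img2)
print(hamming_similarity(h1, h2))  # expect a high fraction, i.e. "similar"
```

A small local change shifts the average only slightly, so at most a handful of bits flip and the similarity stays well above the 90% threshold.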

Example where this works:

- image 1: a computer on a table
- image 2: the same scene, but with a coin added


This works because, using the hash of a heavily reduced grayscale image, both hashes will be almost (or exactly) the same. So you can conclude they are similar when 90% or more of the pixels are the same (in the same place!).

My problem is with images that are taken from the same point of view but at a slightly different angle, for example these two:

[image: two photos of houses taken from the same spot at slightly different angles]

In this case, the hash "fingerprints" generated are so shifted relative to each other that we cannot compare the hashes bit by bit; they will be very different.

The pixels are "similar", but they are not in the same place: in the second photo there's more sky, and the houses start lower than in the first one.

So the hash comparison results in "they are different images".

Possible solution:

I was thinking about creating a larger hash for the first image, then taking 10 random "sub-hashes" from the second one, and checking whether those 10 sub-hashes appear somewhere in the first, larger hash (i.e. whether a substring is contained in another, bigger one).

The problem here, I think, is CPU time when working with thousands of images, since you have to compare 1 image against 1000, and for each one compare 10 sub-hashes against a big one.
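For what it's worth, the proposed sub-hash check could be sketched like this (hypothetical hash strings; plain substring search, which is exactly what makes it expensive when repeated against thousands of images):

```python
def subhash_matches(big_hash, sub_hashes, min_hits=7):
    """Count how many sub-hashes occur somewhere inside the big hash
    (plain substring containment, as proposed above)."""
    hits = sum(1 for s in sub_hashes if s in big_hash)
    return hits >= min_hits, hits

# Hypothetical bit strings standing in for real image hashes.
big = "0011010111001010110100101100"
subs = [big[3:9], big[10:16], "999999"]  # the last one cannot match

ok, hits = subhash_matches(big, subs, min_hits=2)
```

Each `in` test is a linear scan of the big hash, so the total cost grows with (number of images) x (number of sub-hashes) x (hash length), which matches the CPU concern above.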

Other solutions? ;-)

Hyaline answered 27/5, 2014 at 9:14 Comment(1)
I know it's been 3 years but did you by any chance find a good solution to this? – Hagiography

One option is to detect a set of "interesting" points for each image and store that alongside your hash. It's somewhat similar to the solution you suggested.

We want those points to be unlikely to vary between images like yours that have shifts in perspective. These lecture slides give a good overview of how to find such points with fairly straightforward linear algebra. I'm using Mathematica because it has built-in functions for a lot of this; ImageKeypoints does what we want here.

After we have our interesting points, we need to find which ones match between the images we're comparing. If your images are very similar, like the ones in your examples, you could probably just take an 8x8 greyscale patch around each interesting point and compare each patch from one image with the patches at nearby interesting points in the other image. I think you could reuse your existing algorithm for that.
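A minimal numpy sketch of that per-keypoint comparison, with hand-picked keypoint coordinates standing in for a real detector's output (ImageKeypoints here, or ORB/SIFT in other toolkits) and a synthetic vertical shift standing in for the perspective change:

```python
import numpy as np

def patch_hash(img, y, x, size=8):
    """Average-hash of the size x size patch centred at (y, x)."""
    h = size // 2
    patch = img[y - h:y + h, x - h:x + h]
    return (patch > patch.mean()).astype(np.uint8).flatten()

def patch_similarity(a, b):
    """Fraction of matching hash bits between two patches."""
    return np.mean(a == b)

rng = np.random.default_rng(1)
img_a = rng.integers(0, 256, (64, 64)).astype(float)
img_b = np.roll(img_a, shift=(5, 0), axis=(0, 1))  # same scene, shifted down

# Hypothetical keypoints: in practice these come from a detector.
kp_a = (20, 30)
kp_b = (25, 30)  # the corresponding point, shifted by the same 5 rows

ha = patch_hash(img_a, *kp_a)
hb = patch_hash(img_b, *kp_b)
```

Because the comparison is anchored at the keypoints rather than at fixed grid positions, the global shift between the photos no longer matters.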

If you wanted to use a more advanced algorithm like SIFT, you'd need to look at the properties ImageKeypoints can return, like scale and orientation.

The ImageKeypoints documentation has this example you can use to get a small piece of the image for each interesting point (it uses the scale property instead of a fixed size):

MapThread[ImageTrim[img, {#1}, 2.5 #2] &, 
 Transpose@
  ImageKeypoints[img, {"Position", "Scale"}, 
   "KeypointStrength" -> .001]]

Finding a certain number of matching points might be enough to say that the images are similar, but if not you can use something like RANSAC to figure out the transformation you need to align your hash images (the 8x8 images you're already able to generate) enough that your existing algorithm works.
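To give an idea of the RANSAC step, here is a minimal pure-numpy sketch that recovers a simple translation from point matches containing some bad pairs (synthetic data; aligning real photos with a perspective change would estimate a homography instead, but the sample-score-keep-best loop is the same):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical matched keypoint pairs: most follow a (dy, dx) = (7, -3)
# shift between the two images, a few are bad matches (outliers).
true_shift = np.array([7.0, -3.0])
pts_a = rng.uniform(0, 100, (20, 2))
pts_b = pts_a + true_shift
pts_b[:4] = rng.uniform(0, 100, (4, 2))  # corrupt 4 matches

def ransac_translation(a, b, iters=50, tol=1.0):
    """Pick the translation supported by the largest number of matches."""
    best_shift, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(a))       # a translation needs only 1 pair
        shift = b[i] - a[i]
        inliers = np.sum(np.linalg.norm(b - (a + shift), axis=1) < tol)
        if inliers > best_inliers:
            best_inliers, best_shift = inliers, shift
    return best_shift, best_inliers

shift, inliers = ransac_translation(pts_a, pts_b)
```

The bad matches never gather much support, so the winning hypothesis is the shift agreed on by the good pairs, which you could then apply to your hash images before comparing them.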

I should point out that Mathematica has ImageCorrespondingPoints, which does all of this (using ImageKeypoints) much better. But I don't know how you could have it cache the intermediate results so that it scales for what you're trying to do. You might want to look into its ability to constrain matching points to a perspective transform, though.

Here's a plot of the matching points for your example images to give you an idea of what parts end up matching:

[plot: matching keypoints between the two example images]

So you can precalculate the interesting points for your database of images, and the greyscale hashes for each point. You'll have to compare several hash images for each image in your database, rather than just two, but it will scale to within a constant factor of your current algorithm.

Nickens answered 4/6, 2014 at 19:27 Comment(0)

You can try an upper bound: if the hashes don't match, compare how many pixels of the 8x8 grid do match. Maybe you can also try to match the colors, like in a photo mosaic: Photo Mosaic Algorithm. How to create a mosaic photo given the basic image and a list of tiles?

Houseboat answered 30/5, 2014 at 8:50 Comment(2)
Phpdna, that's exactly what Phasher does... but the problem is not getting the "fingerprint". Also, comparing pixels is too random: two photos with too much sky will have the same "average" pixels, as I described in the example above – Hyaline
I mean compare the 8x8 pixels. From the example they look very similar. Define an upper bound or threshold and you're done. – Houseboat