Metric for SURF
I'm searching for a usable similarity metric for SURF: how well one image matches another on a scale from, let's say, 0 to 1, where 0 means no similarity and 1 means an identical image.

SURF provides the following data:

  • interest points (and their descriptors) in query image (set Q)
  • interest points (and their descriptors) in target image (set T)
  • using a nearest-neighbor algorithm, pairs can be created from the two sets above

I was trying something so far but nothing seemed to work too well:

  1. metric using the sizes of the two sets: d = N / min(size(Q), size(T)), where N is the number of matched interest points. This gives fairly similar images a pretty low rating, e.g. 0.32, even when 70 interest points were matched out of about 600 in Q and 200 in T. I think 70 is a really good result. I was thinking about applying some logarithmic scaling so that only really low numbers get low results, but I can't seem to find the right equation. With d = log(9*d0+1) I get 0.59, which is pretty good, but it still kind of destroys the power of SURF.

  2. metric using the distances within pairs: I did something like finding the K best matches and summing their distances. The smaller the distance, the more similar the two images are. The problem with this is that I don't know the maximum and minimum values for an interest point descriptor element, from which the distance is calculated, so I can only rank results relative to one another (pick the best among many inputs). As I said, I would like to scale the metric to exactly between 0 and 1. I need this to compare SURF to other image metrics.
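The two attempts above could be sketched roughly as below. The function names are illustrative, not SURF API; the `max_dist=2.0` bound in the second metric rests on the assumption that SURF descriptors are L2-normalized, so the Euclidean distance between any two of them is at most 2.

```python
import math

def count_metric(num_matches, size_q, size_t):
    """Metric 1: fraction of the possible interest points that matched."""
    return num_matches / min(size_q, size_t)

def distance_metric(match_distances, k=10, max_dist=2.0):
    """Metric 2: average of the K best (smallest) match distances,
    mapped to [0, 1] where 1 means identical descriptors.

    max_dist=2.0 assumes L2-normalized descriptors (an assumption,
    not something SURF guarantees in every implementation)."""
    best = sorted(match_distances)[:k]
    if not best:
        return 0.0
    avg = sum(best) / len(best)
    return 1.0 - min(avg / max_dist, 1.0)
```

If the normalization assumption holds, both functions stay in [0, 1], which makes them comparable and combinable.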

The biggest problem with these two is that each excludes the other: one doesn't take into account the number of matches, the other doesn't take into account the distance between matches. I'm lost.

EDIT: For the first one, the equation log(x*10^k)/k, where k is 3 or 4, gives a nice result most of the time. The min is not good, though: in some rare cases it can make d bigger than 1, but without it the small results come back.
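One way to keep that rescaled score inside [0, 1] regardless of the input is simply to clamp it. This is a minimal sketch, assuming log means log base 10 (consistent with the earlier d = log(9*d0+1) giving 0.59):

```python
import math

def rescale(d0, k=3):
    """Clamp log(d0 * 10^k) / k into [0, 1].

    k controls how strongly low match ratios are boosted;
    d0 <= 0 is mapped to 0 since the log is undefined there."""
    if d0 <= 0:
        return 0.0
    return max(0.0, min(math.log10(d0 * 10**k) / k, 1.0))
```

With this, d0 = 1 maps to exactly 1, anything at or below 10^-k maps to 0, and values above 1 (the rare cases mentioned) can no longer push the score past 1.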

Discordant answered 15/6, 2011 at 23:30 Comment(1)
Take a look at reference.wolfram.com/mathematica/ref/…. They introduced an additional parameter (transformation) that could perhaps shed some light on yours. – Bestow

You can easily create a metric that is the weighted sum of both metrics. Use machine learning techniques to learn the appropriate weights.
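A minimal sketch of that weighted sum, assuming both component scores already live in [0, 1] (the weight w would be fit on a labelled training set, e.g. with logistic regression, rather than hand-picked; all names here are illustrative):

```python
def combined_score(count_score, distance_score, w=0.5):
    """Convex combination of the two [0, 1] metrics; stays in [0, 1]."""
    return w * count_score + (1.0 - w) * distance_score
```

Because the weights sum to 1, the combined metric inherits the 0-to-1 range the question asks for.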

What you're describing is closely related to the field of Content-Based Image Retrieval (CBIR), which is a very rich and diverse field. Googling that will get you lots of hits. While SURF is an excellent general-purpose low- to mid-level feature detector, it is far from sufficient on its own. SURF and SIFT (from which SURF was derived) are great at duplicate or near-duplicate detection but not that great at capturing perceptual similarity.

The best-performing CBIR systems usually use an ensemble of features, optimally combined via some training set. Some interesting descriptors to try include GIST (a fast and cheap descriptor best used for distinguishing man-made vs. natural environments) and Object Bank (a histogram-based descriptor built from the outputs of hundreds of object detectors).

Bathos answered 15/6, 2011 at 23:56 Comment(1)
Thanks for the methods, but I can't really turn back now. I had to use SURF because I was instructed to (you can't really argue with a professor once they've got something into their head, can you?). Anyway, +1 for the weighted sum (why didn't I think of it?), but that doesn't solve the whole problem: I still can't use the 2nd metric. – Iconolatry
