Nearest neighbors in high-dimensional data?

I asked a question a few days ago about how to find the nearest neighbors for a given vector. My vector is now 21-dimensional, and before I proceed further, because I come from neither a machine-learning nor a math background, I am beginning to ask myself some fundamental questions:

  • Is Euclidean distance a good metric for finding the nearest neighbors in the first place? If not, what are my options?
  • In addition, how does one go about deciding the right threshold for determining the k-neighbors? Is there some analysis that can be done to figure this value out?
  • Previously, I was advised to use kd-trees, but the Wikipedia page clearly says that for high dimensions, a kd-tree is almost equivalent to a brute-force search. In that case, what is the best way to find nearest neighbors efficiently in a million-point dataset?

Can someone please clarify some (or all) of the above questions?

Cosmopolitan answered 22/4, 2011 at 0:10 Comment(8)
Try asking on metaoptimize.comEspinoza
"High dimension" is 20 for some people and some data, 50 or 100 or 1000 for others. Please give numbers if you can, e.g. "I've done dim 21, 1000000 data points, using xx".Retrocede
kD-Tree splits the data in two along one dimension at a time. If you have 20 dimensions and only 1M data points, you get about 1 level of tree - where level means split on every axis. Since there is no real depth, you don't get the benefit of ignoring branches of the tree. It's helpful not to think of it so much as a binary tree, but more like a quad-tree, octtree, etc. even though it's implemented like a binary tree.Tirado
@denis, was 'dim 21, 1000000 data points' for the Higgs dataset?Galvanoscope
@nikk, no, just made that up. Can you point to real data online ? That would be useful for NN programs and people.Retrocede
Here is the link to download the Higgs dataset. 11 Million observations with 28 attributes. The last column is the label: 1 for signal, zero for noise. archive.ics.uci.edu/ml/datasets/HIGGSGalvanoscope
I had a similar problem. I used ANN, but the approximation was not good enough for me, so I used a brute-force kNN algorithm on the GPU.Hyperostosis
I’m voting to close this question because Machine learning (ML) theory questions are off-topic on Stack Overflow - gift-wrap candidate for Cross-ValidatedLecompte
A
203

I currently study such problems -- classification, nearest neighbor searching -- for music information retrieval.

You may be interested in Approximate Nearest Neighbor (ANN) algorithms. The idea is that you allow the algorithm to return sufficiently near neighbors (perhaps not the nearest neighbor); in doing so, you reduce complexity. You mentioned the kd-tree; that is one example. But as you said, kd-tree works poorly in high dimensions. In fact, all current indexing techniques (based on space partitioning) degrade to linear search for sufficiently high dimensions [1][2][3].

Among ANN algorithms proposed recently, perhaps the most popular is Locality-Sensitive Hashing (LSH), which maps a set of points in a high-dimensional space into a set of bins, i.e., a hash table [1][3]. But unlike traditional hashes, a locality-sensitive hash places nearby points into the same bin.

LSH has some huge advantages. First, it is simple. You just compute the hash for all points in your database, then make a hash table from them. To query, just compute the hash of the query point, then retrieve all points in the same bin from the hash table.
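To make the build/query steps concrete, here is a minimal single-table sketch of the random-hyperplane variant of LSH (hashing by the sign of inner products with random Gaussian vectors, which targets angular/cosine similarity; the p-stable scheme in [1] is the one to use for general Lp norms). All names and sizes below are just illustrative, and real setups use several tables and tuned bit lengths:

import numpy as np
from collections import defaultdict

def hash_point(planes, x):
    # Random-hyperplane hash: the bit pattern of sign(<p_i, x>) for each plane p_i.
    return tuple((planes @ x > 0).astype(int))

def build_table(planes, data):
    table = defaultdict(list)
    for i, x in enumerate(data):
        table[hash_point(planes, x)].append(i)
    return table

def query(planes, table, data, q, k=5):
    # Candidates are the points sharing the query's bin; fall back to brute force if the bin is empty.
    candidates = table.get(hash_point(planes, q), range(len(data)))
    return sorted(candidates, key=lambda i: np.linalg.norm(data[i] - q))[:k]

rng = np.random.default_rng(0)
data = rng.standard_normal((100_000, 21))
planes = rng.standard_normal((12, 21))        # 12 random Gaussian hyperplanes -> 12-bit hashes
table = build_table(planes, data)
print(query(planes, table, data, data[0]))

In practice you build several independent tables (different random planes) and take the union of the matching bins, which raises the chance that the true nearest neighbor lands in at least one of them.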

Second, there is a rigorous theory that supports its performance. It can be shown that the query time is sublinear in the size of the database, i.e., faster than linear search. How much faster depends upon how much approximation we can tolerate.

Finally, LSH is compatible with any Lp norm for 0 < p <= 2. Therefore, to answer your first question, you can use LSH with the Euclidean distance metric, or you can use it with the Manhattan (L1) distance metric. There are also variants for Hamming distance and cosine similarity.

A decent overview was written by Malcolm Slaney and Michael Casey for IEEE Signal Processing Magazine in 2008 [4].

LSH has been applied seemingly everywhere. You may want to give it a try.


[1] Datar, Indyk, Immorlica, Mirrokni, "Locality-Sensitive Hashing Scheme Based on p-Stable Distributions," 2004.

[2] Weber, Schek, Blott, "A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces," 1998.

[3] Gionis, Indyk, Motwani, "Similarity search in high dimensions via hashing," 1999.

[4] Slaney, Casey, "Locality-sensitive hashing for finding nearest neighbors", 2008.

Abeokuta answered 24/4, 2011 at 20:33 Comment(20)
@Steve: Thank you for the reply. Do you have some suggestions on an LSH implementation? The only one I saw was the one from MIT. Are there any other packages floating around?Cosmopolitan
Besides that one, no, I don't know of others. I ended up writing my own in Python for my specific purposes. Essentially, each hash table is implemented as a Python dictionary, d, where d[k] is one bin with key k. d[k] contains the labels of all points whose hash is k. Then, you just need to compute the hash for each point. See Eq. (1) in [4], or Section 3 in [1].Abeokuta
@Steve: Thanks for your help. I will start implementing it now. Do you have any idea on how this methodology performs for large datasets by any chance?Cosmopolitan
You're welcome. It should work pretty well; in fact, the benefit of LSH is even more apparent for large datasets because of its sublinear complexity. I've done it for over 100,000 elements in a 2048-dimensional space.Abeokuta
@Steve: Great. In that case, I will explore this approach. As a last question, I had one doubt while implementing it in Python: doesn't this approach depend on the hash function that we are using? Otherwise, I am having trouble wrapping my head around the fact that it will work for nearest-neighbor queries. I think I need to read all the references you provided.Cosmopolitan
Yes; the hash function is everything. However, the right one also depends on your data and your distance metric. Not knowing what your data represents (semantically), I don't have an answer for that. But the basic goal of your hash is to reduce dimensionality. Here's a starter: h(x) = 0 if x1 > 0.5, and 1 otherwise. In this case, h maps R^n to a single bit. This is a valid hash (though perhaps not a high-performance one). Here's another: h(x) = sign(<p,x>), i.e., the sign of the inner product between p and x, where p is a random Gaussian vector.Abeokuta
+1: good summary, good links (Slaney ++). Caltech-image-search says it has C++/Matlab for both Kd-trees and LSH; has anyone used it to compare the two ?Retrocede
Sorry, I have not. But I did compare the kd-tree (in Python, scipy.spatial) with my own LSH code, and the kd-tree was far slower. Caveat: I did use the kd-tree "as-is", so perhaps it could be optimized.Abeokuta
@Steve, was that the cython cKDTree or pure-python KDTree ? And what dim, Npoints, Nquery please -- they vary so much that it would be good to have a rough idea "kd tree here, LSH there ..."Retrocede
Hmm, I cannot remember off the top of my head. It might have been the pure Python implementation; whichever is in scipy.spatial. As a general statement, I think that dimensionalities above 100 would favor LSH, but that may not be universally true.Abeokuta
Another reference supporting LSH: Comparing Nearest Neighbor Algorithms in High-Dimensional Space, Hendra Gunadi, 2011. cs.anu.edu.au/student/projects/11S2/Reports/Hendra%20Gunadi.pdfBaize
@SteveTjoa: Found it hard to visually grasp keywords and embedded formula. As you had a single highlight on LSH already, I supplemented it. With only the best intentions. Feel free to revert, though. It's your answer after all. :)Bowlds
I'll read some of those links shortly, but right away a question popped into my head: If each object is mapped to exactly 1 bin, and we only look in 1 bin to answer an ANN query, then won't LSH tend to perform very badly near the "edges" of those bins? E.g. suppose we have a set of points (i, i) for 0 <= i <= 1000, and we use k bins. Won't we find that there will be at least k-1 "boundary" points (i, i) for which the ANN query returns only points (j, j) with j <= i, ignoring the nearly-equal number of points where j > i?Amphictyon
Mahout 0.8 has Minhash which can be used as a guidance. (0.9 does not have it.)Gyron
There is a survey on LSH, very useful before checking out other papers and for getting a grasp on LSH under several different metrics. research.microsoft.com/en-us/um/people/jingdw/…Larocca
Nice answer, +1, I added a new more recent answer here, what do you think? :)Herta
"To query, just compute the hash of the query point, then retrieve all points in the same bin from the hash table.". What do you do if your bin is empty?Methylnaphthalene
"LSH is compatible with any Lp norm for 0 < p <= 2." How do you define an Lp norm for 0 < p < 1?Psychro
I didn't see anyone mention this: www1.cs.columbia.edu/CAVE/publications/pdfs/Nene_TR95.pdfSurvivor
Can LSH return all matches for a small cluster of adjacent bins (so that you don't miss a point in the case where the query point is at the edge of one bin and the nearest neighbor is just barely over the edge in the adjacent bin)? Or would you have to hash the query point multiple times within a given radius to discover the bin indices for all neighboring bins within that radius?Profession
D
90

I. The Distance Metric

First, the number of features (columns) in a data set is not a factor in selecting a distance metric for use in kNN. There are quite a few published studies directed to precisely this question, and the usual bases for comparison are:

  • the underlying statistical distribution of your data;

  • the relationship among the features that comprise your data (are they independent--i.e., what does the covariance matrix look like); and

  • the coordinate space from which your data was obtained.

If you have no prior knowledge of the distribution(s) from which your data was sampled, at least one (well documented and thorough) study concludes that Euclidean distance is the best choice.

The Euclidean metric is used in mega-scale web recommendation engines as well as in current academic research. Distances calculated with the Euclidean metric have intuitive meaning and the computation scales--i.e., Euclidean distance is calculated the same way whether the two points are in two-dimensional or twenty-two-dimensional space.

It has only failed for me a few times, and in each of those cases Euclidean distance failed because the underlying (Cartesian) coordinate system was a poor choice. You'll usually recognize this because, for instance, path lengths (distances) are no longer additive--e.g., when the metric space is a chessboard, Manhattan distance is better than Euclidean; likewise, when the metric space is the Earth and your distances are trans-continental flights, a distance metric suitable for a polar coordinate system is a good idea (e.g., London to Vienna is 2.5 hours and Vienna to St. Petersburg is another 3 hours, more or less in the same direction, yet London to St. Petersburg isn't 5.5 hours; instead, it's a little over 3 hours).

But apart from those cases in which your data belongs in a non-Cartesian coordinate system, the choice of distance metric is usually not material. (See this blog post from a CS student comparing several distance metrics by examining their effect on a kNN classifier--chi-square gives the best results, but the differences are not large. A more comprehensive study is the academic paper Comparative Study of Distance Functions for Nearest Neighbors--Mahalanobis, essentially Euclidean normalized to account for dimension covariance, was the best in that study.)

One important proviso: for distance metric calculations to be meaningful, you must re-scale your data--rarely is it possible to build a kNN model that generates accurate predictions without doing this. For instance, if you are building a kNN model to predict athletic performance, and your predictor variables are height (cm), weight (kg), bodyfat (%), and resting pulse (beats per minute), then a typical data point might look something like this: [ 180.4, 66.1, 11.3, 71 ]. Clearly the distance calculation will be dominated by height, while the contribution of bodyfat % will be almost negligible. Put another way, if the data were instead reported differently, so that body weight was in grams rather than kilograms, then the original value of 66.1 would become 66,100, which would have a large effect on your results--exactly what you don't want. Probably the most common scaling technique is subtracting the mean and dividing by the standard deviation (mean and sd are calculated separately for each column, or feature, in the data set; X refers to an individual entry/cell within a data row):

X_new = (X_old - mu) / sigma
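A minimal numpy sketch of that per-column standardization (the extra rows are made-up examples):

import numpy as np

# rows = athletes; columns = height (cm), weight (kg), bodyfat (%), resting pulse (bpm)
X = np.array([[180.4, 66.1, 11.3, 71.0],
              [165.0, 58.2, 18.9, 64.0],
              [192.3, 88.7,  9.1, 58.0]])

mu = X.mean(axis=0)            # per-column mean
sigma = X.std(axis=0)          # per-column standard deviation
X_new = (X - mu) / sigma       # every feature now contributes on a comparable scale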


II. The Data Structure

If you are concerned about the performance of the kd-tree structure, a Voronoi tessellation is a conceptually simple container that will drastically improve performance and scales better than kd-trees.


This is not the most common way to persist kNN training data, though the application of VT for this purpose, as well as the consequent performance advantages, is well documented (see e.g. this Microsoft Research report). The practical significance of this is that, provided you are using a 'mainstream' language (e.g., one in the TIOBE Index), you ought to be able to find a library to perform VT. I know that in Python and R there are multiple options for each language (e.g., the voronoi package for R available on CRAN).

Using a VT for kNN works like this:

From your data, randomly select w points--these are your Voronoi centers. A Voronoi cell encapsulates all neighboring points that are nearest to its center. Imagine assigning a different color to each Voronoi center, so that each point assigned to a given center is painted that color. As long as you have sufficient density, doing this will nicely show the boundaries of each Voronoi cell (as the boundary that separates two colors).

How do you select the Voronoi centers? I use two orthogonal guidelines. After randomly selecting the w points, calculate the VT for your training data. Next, check the number of data points assigned to each Voronoi center--these values should be about the same (given uniform point density across your data space). In two dimensions, this would produce a VT with tiles of the same size. That's the first rule; here's the second: select w by iteration--run your kNN algorithm with w as a variable parameter, and measure performance (time required to return a prediction by querying the VT).

So imagine you have one million data points. If the points were persisted in an ordinary 2D data structure, or in a kd-tree, you would perform on average a couple million distance calculations for each new data point whose response variable you wish to predict. Of course, those calculations are performed on a single data set. With a VT, the nearest-neighbor search is performed in two steps, one after the other, against two different populations of data--first against the Voronoi centers, then, once the nearest center is found, the points inside the cell corresponding to that center are searched to find the actual nearest neighbor (by successive distance calculations). Combined, these two look-ups are much faster than a single brute-force look-up. That's easy to see: for 1M data points, suppose you select 250 Voronoi centers to tessellate your data space. On average, each Voronoi cell will have 4,000 data points. So instead of performing on average 500,000 distance calculations (brute force), you perform far fewer, on average just 125 + 2,000.
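To make the two-step look-up concrete, here is a rough numpy-only sketch (a smaller N so it runs quickly; all names are mine, and note that a query near a cell boundary may get an approximate answer unless you also check neighboring cells):

import numpy as np

rng = np.random.default_rng(1)
data = rng.random((100_000, 21))     # stand-in for the training set
w = 250
centers = data[rng.choice(len(data), size=w, replace=False)]

# Offline step: assign every training point to its nearest Voronoi center,
# in chunks so the intermediate distance matrix stays small.
assignments = np.empty(len(data), dtype=int)
for start in range(0, len(data), 2_000):
    chunk = data[start:start + 2_000]
    d2 = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assignments[start:start + 2_000] = d2.argmin(axis=1)
cells = {c: np.where(assignments == c)[0] for c in range(w)}

def vt_nearest(query):
    # Step 1: find the nearest of the w centers (w distance calculations).
    c = ((centers - query) ** 2).sum(axis=1).argmin()
    # Step 2: brute-force search only inside that center's cell.
    members = cells[c]
    return members[((data[members] - query) ** 2).sum(axis=1).argmin()]

print(vt_nearest(rng.random(21)))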

III. Calculating the Result (the predicted response variable)

There are two steps to calculating the predicted value from a set of kNN training data. The first is identifying n, or the number of nearest neighbors to use for this calculation. The second is how to weight their contribution to the predicted value.

W/r/t the first component, you can determine the best value of n by solving an optimization problem (very similar to least squares optimization). That's the theory; in practice, most people just use n=3. In any event, it's simple to run your kNN algorithm over a set of test instances (to calculate predicted values) for n=1, n=2, n=3, etc. and plot the error as a function of n. If you just want a plausible value for n to get started, again, just use n = 3.
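For instance, a quick sweep over n with scikit-learn's KNeighborsRegressor and cross-validation (a sketch on made-up data, assuming scikit-learn is available) could look like this:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 21)
y = 2 * X[:, 0] + 0.1 * np.random.randn(500)       # toy response variable

for n in range(1, 11):
    model = KNeighborsRegressor(n_neighbors=n)
    mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    print(n, round(mse, 4))                         # pick the n where the error bottoms out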

The second component is how to weight the contribution of each of the neighbors (assuming n > 1).

The simplest weighting technique is just multiplying each neighbor by a weighting coefficient of 1/(dist * K): the inverse of the distance from that neighbor to the test instance, often multiplied by some empirically derived constant, K. I am not a fan of this technique because it often over-weights the closest neighbors (and concomitantly under-weights the more distant ones); the significance of this is that a given prediction can be almost entirely dependent on a single neighbor, which in turn increases the algorithm's sensitivity to noise.

A much better weighting function, which substantially avoids this limitation, is the Gaussian function, which in Python looks like this:

import math

def weight_gauss(dist, sig=2.0):
    return math.e ** (-dist**2 / (2 * sig**2))

To calculate a predicted value using your kNN code, you would identify the n nearest neighbors to the data point whose response variable you wish to predict (the 'test instance'), then call the weight_gauss function once for each of the n neighbors, passing in the distance between that neighbor and the test point. This function returns the weight for each neighbor, which is then used as that neighbor's coefficient in the weighted-average calculation.
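Putting the pieces of this section together, a minimal sketch of such a prediction (brute-force distances, reusing weight_gauss from above; the function name and toy data are mine):

import numpy as np

def predict(test_x, train_X, train_y, n=3, sig=2.0):
    dists = np.sqrt(((train_X - test_x) ** 2).sum(axis=1))      # distance to every training point
    nearest = np.argsort(dists)[:n]                             # indices of the n nearest neighbors
    weights = np.array([weight_gauss(d, sig) for d in dists[nearest]])
    return (weights * train_y[nearest]).sum() / weights.sum()   # weighted average of responses

train_X = np.random.rand(1000, 21)
train_y = np.random.rand(1000)
print(predict(np.random.rand(21), train_X, train_y))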

Diurnal answered 24/4, 2011 at 10:14 Comment(3)
Great answer! Comprehensive and accurate relative to my experience.Enochenol
Nice answer, +1, I added a new more recent answer here, is it good?Herta
"So imagine you have one million data points..... If the points were persisted in an ordinary 2D data structure, or in a kd-tree, you would perform on average a couple million distance calculations for each new data points whose response variable you wish to predict." Disagree. It can be proven that KD-trees have O(sqrt(n)) search complexity in 2D.Roadstead
Q
18

What you are facing is known as the curse of dimensionality. It is sometimes useful to run an algorithm like PCA or ICA to make sure that you really need all 21 dimensions, and possibly to find a linear transformation that would let you use fewer than 21 dimensions with approximately the same result quality.

Update: I encountered them in a book called Biomedical Signal Processing by Rangayyan (I hope I remember it correctly). ICA is not a trivial technique, but it was developed by researchers in Finland, and I think Matlab code for it is publicly available for download. PCA is a more widely used technique, and you should be able to find an R or other implementation. PCA boils down to an eigendecomposition of the data's covariance matrix (which can be computed iteratively); I did it too long ago to remember the details. = )

The idea is that you break your signals up into independent eigenvectors (discrete eigenfunctions, really) and their eigenvalues, 21 in your case. Each eigenvalue shows the amount of contribution the corresponding eigenfunction provides to each of your measurements. If an eigenvalue is tiny, you can represent the signals very closely without using its corresponding eigenfunction at all, and that's how you get rid of a dimension.
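A rough numpy sketch of the PCA half of that idea (eigendecomposition of the covariance matrix; sklearn.decomposition.PCA and FastICA do the same with less code, and the 95% variance threshold here is arbitrary):

import numpy as np

X = np.random.randn(1000, 21)                 # stand-in for your 21-dimensional data
Xc = X - X.mean(axis=0)                       # center each feature

eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]             # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
keep = explained.cumsum() <= 0.95             # keep roughly 95% of the variance
X_reduced = Xc @ eigvecs[:, keep]             # project onto the retained eigenvectors
print(X_reduced.shape)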

Quass answered 22/4, 2011 at 0:54 Comment(2)
+1 Thank You. This is a very interesting suggestion and makes perfect sense. As a final request, are you familiar with any hands-on tutorial (either in python or R or some other language) that explains how to do this interactively (I mean explaining step by step the whole process). I have read a few documents since yesterday but most of them seem way out of my understanding. Any suggestions?Cosmopolitan
Nitpicking: ICA is not a dimension reduction algorithm. It does not know how to score the components and should not be used as such.Kajdan
H
14

Top answers are good but old, so I'd like to add a 2016 answer.


As said, in a high-dimensional space the curse of dimensionality lurks around the corner, making traditional approaches such as the popular k-d tree as slow as a brute-force search. As a result, we turn our interest to Approximate Nearest Neighbor Search (ANNS), which gives up some accuracy to speed up the process. You get a good approximation of the exact NN, with good probability.


Hot topics that might be worth checking:

  1. Modern approaches to LSH, such as Razenshteyn's.
  2. RKD forest: forest(s) of randomized k-d trees (RKD), as described in FLANN, or in a more recent approach I was part of, kd-GeRaF.
  3. LOPQ, which stands for Locally Optimized Product Quantization, as described here. It is very similar to the newer approach by Babenko and Lempitsky.

You can also check my relevant answers:

  1. Two sets of high dimensional points: Find the nearest neighbour in the other set
  2. Comparison of the runtime of Nearest Neighbor queries on different data structures
  3. PCL kd-tree implementation extremely slow
Herta answered 15/7, 2016 at 20:9 Comment(0)
E
12

To answer your questions one by one:

  • No, Euclidean distance is a bad metric in high-dimensional space. Basically, in high dimensions all pairwise distances become large and roughly similar, which shrinks the relative difference between the distance from a given data point to its nearest neighbour and to its farthest neighbour (see the small simulation after this list).
  • There is a lot of research on high-dimensional data, but most of it requires considerable mathematical sophistication.
  • A kd-tree is bad for high-dimensional data ... avoid it by all means.
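A small simulation of that first point, showing how the contrast between the nearest and the farthest neighbour shrinks as the dimension grows (uniform random data, made-up sizes):

import numpy as np

rng = np.random.default_rng(0)
for dim in (2, 21, 100, 1000):
    X = rng.random((2000, dim))               # uniform random points
    q = rng.random(dim)                       # one query point
    d = np.sqrt(((X - q) ** 2).sum(axis=1))
    # relative contrast: how much farther the farthest point is than the nearest
    print(dim, round((d.max() - d.min()) / d.min(), 3))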

Here is a nice paper to get you started in the right direction: "When Is 'Nearest Neighbor' Meaningful?" by Beyer et al.

I work with text data with dimensionality of 20K and above. If you want some text-related advice, I might be able to help you out.

Empathize answered 22/4, 2011 at 3:0 Comment(4)
+1 I am printing out that paper to read it now. In the mean time, do you have suggestions on how else to figure out nearest neighbors? If both the distance metric and the definition of the neighbor itself is flawed, then how do people generally solve higher dimension problems where they want to do approximate matching based on feature vectors? Any suggestions?Cosmopolitan
In case of text we use cosine similarity a lot. I am working in text classification myself and find that for high dimensions, SVM with linear kernels seem to be the most effective.Empathize
@Empathize How do you define your space? I mean, is it based on a bag-of-words vector or an embedded vector?Tomy
@user3487667, The space depends on how you formulate your problem. I was talking about a simple bag-of-words model.Empathize
I
5

Cosine similarity is a common way to compare high-dimensional vectors. Note that since it's a similarity, not a distance, you'd want to maximize it, not minimize it. You can also use a domain-specific way to compare the data; for example, if your data were DNA sequences, you could use a sequence similarity that takes into account probabilities of mutations, etc.
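A minimal sketch, just to make the sign convention explicit (higher means more similar, so you rank candidates by descending similarity rather than ascending distance):

import numpy as np

def cosine_similarity(a, b):
    # 1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite direction
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

candidates = np.random.randn(1000, 21)
query = np.random.randn(21)
best = max(range(len(candidates)), key=lambda i: cosine_similarity(query, candidates[i]))
print(best)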

The number of nearest neighbors to use varies depending on the type of data, how much noise there is, etc. There are no general rules; you just have to find what works best for your specific data and problem by trying all values within a range. People have an intuitive understanding that the more data there is, the fewer neighbors you need. In the hypothetical situation where you had all possible data, you would only need to look at the single nearest neighbor to classify.

The k Nearest Neighbor method is known to be computationally expensive. It's one of the main reasons people turn to other algorithms like support vector machines.

Icj answered 22/4, 2011 at 3:19 Comment(3)
This is interesting. Can you elaborate more on how I could utilize SVMs in my case? I thought k-nearest neighbors was more like unsupervised and SVMs are supervised. Please correct me if I am wrong.Cosmopolitan
Both methods are supervised, because your training data is annotated with the correct classes. If you only have the feature vectors, and don't know the classes they belong in, then you can't use kNN or SVMs. Unsupervised learning methods are usually referred to as clustering algorithms. They can identify groups of similar data, but they don't tell you what the groups mean.Icj
Thank you for the clarification. You are right. It is indeed a supervised technique. I just did not realize what I called categories were actually classes too :)Cosmopolitan
M
4

kd-trees indeed won't work very well on high-dimensional data, because the pruning step no longer helps much: the closest edge (a one-dimensional deviation) will almost always be smaller than the full-dimensional distance to the nearest neighbors found so far.

But furthermore, kd-trees only work well with Lp norms as far as I know, and there is the distance-concentration effect that makes distance-based algorithms degrade with increasing dimensionality.

For further information, you may want to read up on the curse of dimensionality, and the various variants of it (there is more than one side to it!)

I'm not convinced there is a lot of use in just blindly approximating Euclidean nearest neighbors, e.g., using LSH or random projections. It may be necessary to use a much more carefully tuned distance function in the first place!

Mho answered 1/1, 2013 at 18:42 Comment(2)
Do you have references for your 1st and 2nd paragraphs?Xanthochroism
No, but they should be fairly obvious from the usual "curse of dimensionality" instantiations (c.f., survey) & try to find any k-d-tree that supports anything else than Euclidean... supporting other distances is possible, but not common (ELKI allows all Minkowski distances + squared Euclidean, but most will only have Euclidean). Just consider that k-d-trees use one dimension only for pruning, and compare this to the distance involving all dimensions. Plus, your splits will not be able to split in each dimension.Mho
V
3

A lot depends on why you want to know the nearest neighbors. You might look into the mean shift algorithm http://en.wikipedia.org/wiki/Mean-shift if what you really want is to find the modes of your data set.

Venepuncture answered 22/4, 2011 at 0:29 Comment(1)
As far as I know, Mean-Shift is not suited for clustering high-dimensional data. K-Means may be a better choice.Barnwell
J
3

I think cosine similarity on tf-idf of boolean features would work well for most problems. That's because it's a time-proven heuristic used in many search engines such as Lucene. Euclidean distance, in my experience, shows bad results for any text-like data. Selecting different weights and k-examples can be done with training data and brute-force parameter selection.
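With today's tooling, a minimal scikit-learn sketch of that cosine-on-tf-idf recipe (the toy documents are made up; binary=True gives the "boolean features" variant):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "a cat and a dog",
        "stock prices fell sharply"]

tfidf = TfidfVectorizer(binary=True).fit_transform(docs)   # boolean term features, idf-weighted
sims = cosine_similarity(tfidf[0], tfidf)                  # document 0 against all documents
print(sims)   # 1.0 for document 0 itself, then the most similar document next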

Jesuit answered 23/4, 2011 at 7:16 Comment(0)
R
3

KD-trees work fine for 21 dimensions if you quit early, after looking at, say, 5 % of all the points. FLANN does this (and other speedups) to match 128-dim SIFT vectors. (Unfortunately FLANN supports only the Euclidean metric, and the fast and solid scipy.spatial.cKDTree supports only Lp metrics; these may or may not be adequate for your data.) There is of course a speed-accuracy tradeoff here.

(If you could describe your Ndata, Nquery, data distribution, that might help people to try similar data.)

Added 26 April, run times for cKDTree with cutoff on my old mac ppc, to give a very rough idea of feasibility:

kdstats.py p=2 dim=21 N=1000000 nask=1000 nnear=2 cutoff=1000 eps=0 leafsize=10 clustype=uniformp
14 sec to build KDtree of 1000000 points
kdtree: 1000 queries looked at av 0.1 % of the 1000000 points, 0.31 % of 188315 boxes; better 0.0042 0.014 0.1 %
3.5 sec to query 1000 points
distances to 2 nearest: av 0.131  max 0.253

kdstats.py p=2 dim=21 N=1000000 nask=1000 nnear=2 cutoff=5000 eps=0 leafsize=10 clustype=uniformp
14 sec to build KDtree of 1000000 points
kdtree: 1000 queries looked at av 0.48 % of the 1000000 points, 1.1 % of 188315 boxes; better 0.0071 0.026 0.5 %
15 sec to query 1000 points
distances to 2 nearest: av 0.131  max 0.245
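The runs above use a cutoff on the number of points visited, which stock scipy.spatial.cKDTree does not expose; with stock scipy, the closest knob is the eps parameter for approximate queries. A rough sketch (sizes matching the runs above):

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((1_000_000, 21))
queries = rng.random((1000, 21))

tree = cKDTree(data, leafsize=10)
# eps > 0 allows approximate answers: returned neighbors are guaranteed to be
# within (1 + eps) of the true nearest distance, which lets the search prune more.
dist, idx = tree.query(queries, k=2, eps=0.1, p=2)
print(dist.mean(axis=0))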
Retrocede answered 25/4, 2011 at 12:13 Comment(0)
P
3

iDistance is probably the best for exact kNN retrieval in high-dimensional data. You can view it as an approximate Voronoi tessellation.

Preemie answered 1/4, 2013 at 19:14 Comment(0)
K
3

I've experienced the same problem and can say the following.

  1. Euclidean distance is a good distance metric; however, it's computationally more expensive than the Manhattan distance, and it sometimes yields slightly poorer results, so I'd choose the latter.

  2. The value of k can be found empirically. You can try different values and check the resulting ROC curves or some other precision/recall measure in order to find an acceptable value.

  3. Both Euclidean and Manhattan distances respect the Triangle inequality, thus you can use them in metric trees. Indeed, KD-trees have their performance severely degraded when the data have more than 10 dimensions (I've experienced that problem myself). I found VP-trees to be a better option.

Kindergarten answered 9/1, 2014 at 16:53 Comment(0)
L
2

You could try a Z-order curve. It's easy for 3 dimensions.

Lissa answered 24/4, 2011 at 11:0 Comment(0)
A
1

I had a similar question a while back. For fast Approximate Nearest Neighbor Search you can use the annoy library from spotify: https://github.com/spotify/annoy

This is some example code for the Python API, which is optimized in C++.

from annoy import AnnoyIndex
import random

f = 40
t = AnnoyIndex(f, 'angular')  # Length of item vector that will be indexed
for i in range(1000):
    v = [random.gauss(0, 1) for z in range(f)]
    t.add_item(i, v)

t.build(10) # 10 trees
t.save('test.ann')

# ...

u = AnnoyIndex(f, 'angular')
u.load('test.ann') # super fast, will just mmap the file
print(u.get_nns_by_item(0, 1000)) # will find the 1000 nearest neighbors

They provide different distance measures. Which distance measure you want to apply depends highly on your individual problem. Also consider prescaling (i.e., weighting) certain dimensions by importance first. Those dimension or feature-importance weights might be calculated by something like entropy loss or, if you have a supervised learning problem, Gini impurity gain or a permutation-style measure, where you check how much worse your machine-learning model performs if you scramble that dimension's values.
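As one concrete option for the scramble-the-dimension idea, a hedged sketch using scikit-learn's permutation importance (the classifier choice and toy data are just placeholders):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X = np.random.rand(1000, 21)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)          # toy labels driven by the first two features

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
imp = np.clip(result.importances_mean, 0, None)    # drop negative (noise) importances
weights = imp / imp.sum()                          # per-dimension weights

X_weighted = X * weights                           # prescale before building the Annoy index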

Often the direction of a vector is more important than its absolute value, for example in the semantic analysis of text documents, where we want document vectors to be close when their semantics are similar, not when their lengths are. Thus we can either normalize those vectors to unit length or use angular distance (i.e., cosine similarity) as the distance measure.

Hope this is helpful.

Astrolabe answered 25/11, 2020 at 16:51 Comment(0)
M
0

Is Euclidean distance a good metric for finding the nearest neighbors in the first place? If not, what are my options?

I would suggest soft subspace clustering, a pretty common approach nowadays, where feature weights are calculated to find the most relevant dimensions. You can use these weights with Euclidean distance, for example. See curse of dimensionality for common problems; this article may also shed some light:

A k-means type clustering algorithm for subspace clustering of mixed numeric and categorical datasets

Margrettmarguerie answered 5/4, 2016 at 16:45 Comment(0)
