Nearest neighbor search with periodic boundary conditions
In a cubic box I have a large collection of points in R^3. I'd like to find the k nearest neighbors for each point. Normally I'd think to use something like a k-d tree, but in this case I have periodic boundary conditions. As I understand it, a k-d tree works by recursively partitioning the space with hyperplanes of one lower dimension, i.e. in 3D we would split the space with 2D planes. Any given point is either on such a plane, above it, or below it. However, with periodic boundary conditions, a point could be considered to be on either side of a splitting plane!

What's the most efficient method of finding and maintaining a list of nearest neighbors with periodic boundary conditions in R^3?

Approximations are not sufficient, and the points will only be moved one at a time (think Monte Carlo not N-body simulation).

Delusive answered 2/8, 2011 at 19:44 Comment(5)
I'm not familiar with the term "periodic boundary condition"; can you elaborate on this? – Foretime
A periodic boundary condition is best understood with an example. In 1D, imagine the possible points run from 0 to 1. The point 0 is the same as the point 1. The distance between points at .2 and .3 is .1 as normal, but the distance between points at .1 and .9 is .2, since it loops around. This generalizes to higher dimensions: in 3D the x, y, and z axes all loop around. – Delusive
I'm wondering, did you ever manage to implement this? If so, did you see any improvement in speed? Thanks. – Lignify
@Lignify I haven't, and I usually fall back on creating the 27 voxels in 3D to compute nearest neighbors. It's ugly and slow but foolproof, and I can drop it into a KDTree. – Delusive
Ok, thank you. I tried to implement PBC in the distance calculation within the k-d tree, but I end up with comparable calculation times. I will try copying the points as well. – Lignify
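The wraparound distance described in the comments is the minimum-image convention: each separation component is wrapped so its magnitude never exceeds half the box length. A minimal sketch, assuming a cubic box of side `box` (the function name is illustrative):

```python
import math

def periodic_distance(p, q, box=1.0):
    """Euclidean distance between p and q under periodic boundary conditions."""
    total = 0.0
    for a, b in zip(p, q):
        d = abs(a - b) % box   # raw separation along this axis
        d = min(d, box - d)    # minimum-image convention: wrap around the box
        total += d * d
    return math.sqrt(total)

# The 1D examples from the comment above:
# periodic_distance([.2], [.3]) -> 0.1, and
# periodic_distance([.1], [.9]) -> 0.2 because of wraparound.
```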
Even in the Euclidean case, a point and its nearest neighbor may be on opposite sides of a hyperplane. The core of nearest-neighbor search in a k-d tree is a primitive that determines the distance between a point and a box; the only modification necessary for your case is to take the possibility of wraparound into account.
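To illustrate the modified primitive this answer describes: the point-to-box lower bound can account for wraparound by checking the box's periodic images, shifted by -L, 0, and +L along each axis. A sketch under those assumptions (names are illustrative):

```python
import math

def point_box_distance_periodic(p, lo, hi, box=1.0):
    """Lower bound on the periodic distance from point p to the axis-aligned
    box [lo, hi] (per-axis sequences), inside a periodic cube of side `box`."""
    total = 0.0
    for x, a, b in zip(p, lo, hi):
        best = float("inf")
        for shift in (-box, 0.0, box):  # consider the box's periodic images
            xs = x + shift
            if xs < a:
                d = a - xs              # query coordinate below the interval
            elif xs > b:
                d = xs - b              # query coordinate above the interval
            else:
                d = 0.0                 # inside the interval
            best = min(best, d)
        total += best * best
    return math.sqrt(total)
```

Using this bound in place of the ordinary point-to-box distance is the only change needed to make a standard k-d tree search correct under periodic boundaries.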

Alternatively, you could implement cover trees, which work on any metric.

Doubs answered 2/8, 2011 at 20:34 Comment(0)
(I'm posting this answer even though I'm not fully sure it works. Intuitively it seems right, but there might be an edge case I haven't considered)

If you're working with periodic boundary conditions, then you can think of space as being cut into a series of blocks of some fixed size that are all superimposed on top of one another. Suppose that we're in R^2. Then one option would be to tile nine copies of the block into a 3x3 grid, with the original block in the center. Given this, if we find the nearest neighbor of any single node in the central square, then either

  1. The nearest neighbor is inside the central square, in which case it is the true nearest neighbor, or
  2. The nearest neighbor is in a square other than the central square. In that case, the point in the central square that the neighbor corresponds to is the nearest neighbor of the original test point under the periodic boundary conditions.

In other words, we just replicate the elements enough times so that the Euclidean distance between points lets us find the corresponding distance in the modulo space.

In n dimensions, you would need to make 3^n copies of all the points, which sounds like a lot, but in R^3 that is only a 27x increase over the original data size. This is certainly a big increase, but if it's within acceptable limits you should be able to use this trick to harness a standard k-d tree (or other spatial tree).
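The replication trick above can be sketched in R^3 using SciPy's `cKDTree` as the standard k-d tree (the wrapper function name is illustrative): every point is copied 27 times, shifted by each combination of {-L, 0, +L} per axis, and a query hit is mapped back to its original point via the index modulo N.

```python
import itertools
import numpy as np
from scipy.spatial import cKDTree

def periodic_kdtree_query(points, k, box=1.0):
    """For each point, return the indices of its k nearest neighbors
    under periodic boundary conditions, via 27x replication."""
    points = np.asarray(points)
    n = len(points)
    # All 27 shift vectors: each axis shifted by -box, 0, or +box.
    shifts = np.array(list(itertools.product((-box, 0.0, box), repeat=3)))
    images = (points[None, :, :] + shifts[:, None, :]).reshape(-1, 3)
    tree = cKDTree(images)
    # k+1 because each query point finds itself at distance 0.
    _, idx = tree.query(points, k=k + 1)
    return idx[:, 1:] % n  # map replica indices back to original points
```

As an aside, recent SciPy versions can handle periodicity directly via `cKDTree(points, boxsize=box)`, which avoids the 27x blowup entirely.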

Hope this helps! (And hope this is correct!)

Foretime answered 3/8, 2011 at 0:54 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.