The data structure doesn't have to change, but the search procedure does. Represent each point by coordinates (x, y) in [0, w) * [0, h), where w is the width of the map, h is the height, and * denotes a Cartesian product. Store these points in a normal KD tree.
The fundamental primitive for searching a KD tree is: given a point (x, y) and a rectangle [a, b] * [c, d], determine the squared distance from the point to the rectangle. Normally this is g(x, a, b)^2 + g(y, c, d)^2, where
g(z, e, f) = e - z   if z < e
             0       if e <= z <= f
             z - f   if f < z
is the one-dimensional distance of z to [e, f]. In a toroidal space, we modify g slightly to account for wraparound.
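As a quick sanity check of the flat (non-toroidal) case, here is a minimal sketch of g and the point-to-rectangle squared distance it builds; the function names are mine, not part of any particular KD tree library:

```python
def g(z, e, f):
    """One-dimensional distance from coordinate z to the interval [e, f]."""
    if z < e:
        return e - z      # z lies below the interval
    if z > f:
        return z - f      # z lies above the interval
    return 0.0            # z is inside the interval

def rect_dist_sq(x, y, a, b, c, d):
    """Squared distance from point (x, y) to the rectangle [a, b] x [c, d]."""
    return g(x, a, b) ** 2 + g(y, c, d) ** 2
```

For example, the point (0, 0) is at squared distance 1 + 1 = 2 from the unit square [1, 2] x [1, 2], and any point inside the rectangle is at distance 0.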
g(z, e, f, v) = min(e - z, (z + v) - f)   if z < e
                0                         if e <= z <= f
                min(z - f, (e + v) - z)   if f < z.
The distance squared is then g(x, a, b, w)^2 + g(y, c, d, h)^2. I expect the running times of this variant to be comparable. (I'd redo the recurrences, but the worst case for regular KD trees is already much worse than typical behavior: O(n^(1/2)) to find the nearest neighbor in 2D among n points.)
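A sketch of the toroidal variant under the same assumptions (again, the names are mine): the only change from the flat version is that when z falls outside [e, f], we also consider the path that wraps around the circle of circumference v and take the minimum.

```python
def g_torus(z, e, f, v):
    """Toroidal 1-D distance from z to [e, f], wrapping at period v.

    Assumes 0 <= z < v and 0 <= e <= f < v.
    """
    if z < e:
        # Either approach e directly, or wrap past 0 to reach f from above.
        return min(e - z, (z + v) - f)
    if z > f:
        # Either approach f directly, or wrap past v to reach e from below.
        return min(z - f, (e + v) - z)
    return 0.0

def torus_rect_dist_sq(x, y, a, b, c, d, w, h):
    """Squared toroidal distance from (x, y) to [a, b] x [c, d] on a w-by-h torus."""
    return g_torus(x, a, b, w) ** 2 + g_torus(y, c, d, h) ** 2
```

For instance, on a torus of width 10, the coordinate z = 9 is at distance 1 from the interval [0, 1], because wrapping past 10 back to 0 is shorter than travelling down from 9 to 1.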