How to calculate distance between 2 coordinates below a certain threshold in R?

I have 44,000 US zip codes and their corresponding centroid lat/long in R, from the 'zipcode' package. I need to calculate the distance between each pair of zipcodes and keep those pairs that are less than 5 miles apart. The problem is that calculating all pairwise distances would require a 44,000 x 44,000 matrix, which I can't create due to memory constraints.

I checked through existing posts; the closest to my requirement is one that returns the minimum distance between two datasets of lat/long points:

library(geosphere)  # distGeo()
library(dplyr)      # bind_rows()

DB1 <- data.frame(location_id = 1:7000, LATITUDE = runif(7000, min = -90, max = 90), LONGITUDE = runif(7000, min = -180, max = 180))
DB2 <- data.frame(location_id = 7001:12000, LATITUDE = runif(5000, min = -90, max = 90), LONGITUDE = runif(5000, min = -180, max = 180))

DistFun <- function(ID){
  TMP <- DB1[DB1$location_id == ID, ]
  TMP1 <- distGeo(TMP[, 3:2], DB2[, 3:2])  # columns 3:2 give (longitude, latitude), the order distGeo expects
  TMP2 <- data.frame(DB1ID = ID, DB2ID = DB2[which.min(TMP1), 1], DistanceBetween = min(TMP1))
  print(ID)
  return(TMP2)
}

DistanceMatrix <- bind_rows(lapply(DB1$location_id, DistFun))  # rbind_all() is deprecated; bind_rows() replaces it

Even if this code could be modified to return all distances <= 5 miles (for example), it is extremely slow to execute.

Is there an efficient way to arrive at all zip code combinations whose centroids are <= 5 miles from each other?

Suffer answered 18/4, 2016 at 5:49 Comment(3)
I suspect you can skip the distance check for any combination whose numeric zip codes differ by more than some number on the order of 500. It would make the combinatorial load much easier to manage. – Gunas
How about filtering out pairs that don't lie within a square of length 10 miles centered at point 1, and then calculating distances only for those that lie within it (see the sketch below)? data.table should help you do that filtering very efficiently. – Bekelja
How would you like to store the distances? – Elkeelkhound
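
A minimal sketch of the bounding-box pre-filter suggested above, shown for a single focal zipcode. It uses the rough approximation that 1 degree of latitude is about 69 miles and widens the longitude margin by the cosine of the latitude; looping over zipcodes, or a data.table non-equi join, would extend it to all of them.

library(zipcode)
library(data.table)
library(geosphere)

data(zipcode)
zc <- as.data.table(zipcode)[!is.na(latitude) & !is.na(longitude)]

focal <- zc[1]  # one focal zipcode

# 5 miles expressed in degrees of latitude/longitude (approximate)
lat_margin <- 5 / 69
lon_margin <- 5 / (69.17 * cos(focal$latitude * pi / 180))

# cheap bounding-box filter first, exact geodesic distance only for the survivors
cand <- zc[abs(latitude - focal$latitude) <= lat_margin &
           abs(longitude - focal$longitude) <= lon_margin]
cand[, dist_miles := distGeo(cbind(focal$longitude, focal$latitude),
                             cbind(longitude, latitude)) / 1609.34]
cand[dist_miles <= 5 & zip != focal$zip]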

Generating the whole distance matrix at once would be very RAM consuming, while looping over each combination of unique zipcodes would be very time consuming. Let's find a compromise.

I suggest chunking the zipcode data.frame into pieces of (for example) 100 rows (with the help of the chunk() function from the bit package), then calculating the distances between all 44,336 points and the 100 points of the current chunk, filtering by the target distance threshold, and then moving on to the next chunk. In my example I convert the zipcode data into a data.table to gain some speed and save RAM.

library(zipcode)
library(data.table)
library(magrittr)
library(geosphere)

data(zipcode)

setDT(zipcode)
zipcode[, dum := NA] # constant join key; used below to build a cartesian (cross) join

Just for information: this is the approximate size in RAM of one chunk after the join.

merge(zipcode, zipcode[1:100], by = "dum", allow.cartesian = T) %>% 
  object.size() %>% print(unit = "Mb")
# 358.2 Mb

The code itself.

lapply(bit::chunk(1, nrow(zipcode), 1e2), function(ridx) {
  merge(zipcode, zipcode[ridx[1]:ridx[2]], by = "dum", allow.cartesian = T)[
    , dist := distGeo(matrix(c(longitude.x, latitude.x), ncol = 2), 
                      matrix(c(longitude.y, latitude.y), ncol = 2))/1609.34 # meters to miles
      ][dist <= 5 # keep only pairs within the 5-mile distance threshold
      ][, dum := NULL]
  }) %>% rbindlist -> zip_nearby_dt

zip_nearby_dt # not the whole result (first 10 chunks only)

       zip.x          city.x state.x latitude.x longitude.x zip.y     city.y state.y latitude.y longitude.y     dist
    1: 00210      Portsmouth      NH   43.00590   -71.01320 00210 Portsmouth      NH   43.00590   -71.01320 0.000000
    2: 00210      Portsmouth      NH   43.00590   -71.01320 00211 Portsmouth      NH   43.00590   -71.01320 0.000000
    3: 00210      Portsmouth      NH   43.00590   -71.01320 00212 Portsmouth      NH   43.00590   -71.01320 0.000000
    4: 00210      Portsmouth      NH   43.00590   -71.01320 00213 Portsmouth      NH   43.00590   -71.01320 0.000000
    5: 00210      Portsmouth      NH   43.00590   -71.01320 00214 Portsmouth      NH   43.00590   -71.01320 0.000000
---                                                                                                              
15252: 02906      Providence      RI   41.83635   -71.39427 02771    Seekonk      MA   41.84345   -71.32343 3.688747
15253: 02912      Providence      RI   41.82674   -71.39770 02771    Seekonk      MA   41.84345   -71.32343 4.003095
15254: 02914 East Providence      RI   41.81240   -71.36834 02771    Seekonk      MA   41.84345   -71.32343 3.156966
15255: 02916         Rumford      RI   41.84325   -71.35391 02769   Rehoboth      MA   41.83507   -71.26115 4.820599
15256: 02916         Rumford      RI   41.84325   -71.35391 02771    Seekonk      MA   41.84345   -71.32343 1.573050

On my machine it took 1.7 minutes to process 10 chunks, so the whole run may take 70-80 minutes; not fast, but it may be acceptable. We can increase the chunk size to 200 or 300 rows, depending on the available RAM, which should shorten the processing time by a factor of 2 or 3 respectively.

The drawback of this solution is that the resulting data.table contains "duplicated" rows: both the distance from point A to point B and from B to A are present. This may need some additional filtering.
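
One simple way to do that filtering, assuming zip codes compare lexicographically as character strings, is to keep only one ordering of each pair (this also drops the zero-distance self-matches):

zip_nearby_unique <- zip_nearby_dt[zip.x < zip.y] # keeps A-B, drops B-A and A-A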

Berti answered 18/4, 2016 at 14:38 Comment(0)

I guess the most efficient algorithms would first translate the spatial locations into a tree-like data structure. You don't need to do this explicitly, though: if you have an algorithm that can 1) bin lat/longs into a spatial index and 2) tell you the neighbors of that index, then you can use it to filter the square of all pairwise comparisons. (This will be less efficient than building a tree, but probably easier to implement.)

geohash is such an algorithm. It turns continuous lat/long into 2-d bins. There is a (quite new) package providing geohash in R. Here's one idea of how you could use it for this problem:

First, with geohash do some preliminary calibration:

  1. translate lat/long to a hash with bin precision p (say)

  2. assess whether the hash cells are sized similarly to the distances you're interested in (say, 3-7 miles between neighboring centroids); if not, return to step 1 and adjust the precision p

This yields a zipcode-to-hash-value relationship; a minimal sketch of this encoding step follows.
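
For illustration, here is one way the encoding step could look, assuming the geohashTools package (the text above doesn't name a specific package; any geohash implementation would do). At precision 4 a geohash cell is roughly 39 km x 19.5 km, comfortably wider than 5 miles, so a cell plus its 8 neighbors is guaranteed to contain every point within 5 miles of any point in that cell.

library(zipcode)
library(geohashTools)

data(zipcode)
zc <- zipcode[!is.na(zipcode$latitude) & !is.na(zipcode$longitude), ]

# bin every centroid into a geohash cell at precision p
p <- 4
zc$hash <- gh_encode(zc$latitude, zc$longitude, precision = p)

# rough calibration check: how many zipcodes share a cell at this precision?
summary(as.integer(table(zc$hash)))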

Then, compute distances for each (unique) hash value

  1. determine its nearest neighbors (8 of them, because hashes form a 2-D grid), which together with the cell itself gives 9 hash values

  2. calculate pairwise distances among all zips within the 9 hashes (using, e.g., distGeo as in the question)

  3. return all zip-zip pairwise distances for the hash value (e.g., in a matrix)

This yields, for each hash value, a zip-to-zip distance object.

(In step 2 it'd clearly be optimal to only calculate each nearest-neighbor pair once. But this might not be necessary.)

Finally, for each zip

  1. use the above two steps (with the hash value as the key) to get the zip-zip distance object for that zip
  2. filter the object to the distances from the focal zip (recall, it's all pairwise distances within a set of hashes adjacent to that of the focal zip)
  3. only keep distances < 5 miles

This yields a zip-to-zips-within-5-miles object. (The zips within 5 miles of each focal zip could be stored as a list-column in a data frame next to a column of focal zips, or as a separate list with the focal zips as names.) A rough end-to-end sketch follows.
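
Continuing the sketch above, here is one way to assemble the candidate pairs and apply the threshold. It assumes geohashTools' gh_neighbors() returns, for each input hash, the cell itself plus its 8 neighbors aligned with the input (check the exact return format of whichever geohash package you use), and it reuses zc and its hash column from the earlier block.

library(data.table)
library(geosphere)

setDT(zc)

# for every occupied cell, list the cell itself and its 8 neighbors
cells <- unique(zc$hash)
nb <- gh_neighbors(cells, self = TRUE)  # assumed: the cell plus 8 neighbor vectors, aligned with 'cells'
neighborhood <- unique(data.table(hash = rep(cells, times = length(nb)),
                                  cell = unlist(nb, use.names = FALSE)))

# candidate pairs: every zip joined to all zips lying in its 9-cell neighborhood
cand <- zc[, .(zip, lon = longitude, lat = latitude, hash)][
  neighborhood, on = "hash", allow.cartesian = TRUE][
  zc[, .(zip2 = zip, lon2 = longitude, lat2 = latitude, cell = hash)],
  on = "cell", allow.cartesian = TRUE, nomatch = NULL]

# exact distances only for the candidates, then the 5-mile threshold
# (each pair still appears in both directions, as in the other answer)
cand[, dist_miles := distGeo(cbind(lon, lat), cbind(lon2, lat2)) / 1609.34]
zip_pairs <- cand[dist_miles <= 5 & zip != zip2]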

Elkeelkhound answered 20/4, 2016 at 6:29 Comment(0)

The following is a solution using the spatialrisk package. Its core functions are written in C++ and are therefore very fast. On my machine the whole computation takes about 25 seconds.

library(zipcodeR)
library(spatialrisk)
library(dplyr)
library(tidyr)   # unnest()

# Zip code data (zip_code_db is already a data frame)
zipcode <- zipcodeR::zip_code_db

# Radius in meters (note: the 5 miles in the question is roughly 8,047 meters)
radius_meters <- 5000

# Keep only zipcodes with valid coordinates
zipcode_sel <- zipcode %>%
  select(zipcode, lat, lon = lng) %>%
  filter(!is.na(lat), !is.na(lon))

# For every zipcode, find all zipcodes within the radius;
# [-1, ] drops the first row of each result, which is the point itself
sel <- zipcode_sel %>%
  mutate(zipcode_within_radius = purrr::map2(lon, lat, ~points_in_circle(zipcode_sel, .x, .y, radius = radius_meters)[-1, ])) %>%
  unnest(cols = c(zipcode_within_radius), names_repair = "unique")

Ascending answered 26/4, 2021 at 11:45 Comment(0)
