Overlap join with start and end positions
Consider the following data.tables. The first defines a set of regions with start and end positions for each group 'x':

library(data.table)

d1 <- data.table(x = letters[1:5], start = c(1,5,19,30, 7), end = c(3,11,22,39,25))
setkey(d1, x, start)

#    x start end
# 1: a     1   3
# 2: b     5  11
# 3: c    19  22
# 4: d    30  39
# 5: e     7  25

The second data set has the same grouping variable 'x', and positions 'pos' within each group:

d2 <- data.table(x = letters[c(1,1,2,2,3:5)], pos = c(2,3,3,12,20,52,10))
setkey(d2, x, pos)

#    x pos
# 1: a   2
# 2: a   3
# 3: b   3
# 4: b  12
# 5: c  20
# 6: d  52
# 7: e  10

Ultimately I'd like to extract the rows in 'd2' where 'pos' falls within the range defined by 'start' and 'end', within each group x. The desired result is

#    x pos start  end
# 1: a   2     1    3
# 2: a   3     1    3
# 3: c  20    19   22
# 4: e  10     7   25

The start/end positions for any group x will never overlap but there may be gaps of values not in any region.

Now, I believe I should be using a rolling join. From what I can tell, I cannot use the "end" column in the join.

I've tried

d1[d2, roll = TRUE, nomatch = 0, mult = "all"][start <= end]

and got

#    x start end
# 1: a     2   3
# 2: a     3   3
# 3: c    20  22
# 4: e    10  25

which is the right set of rows I want; however, "pos" has become "start" and the original "start" has been lost. Is there a way to preserve all the columns in the rolling join so I could report "start", "pos", and "end" as desired?

Quaver answered 29/6, 2014 at 20:34

Overlap joins were implemented with commit 1375 in data.table v1.9.3, and are available in the current stable release, v1.9.4. The function is called foverlaps. From NEWS:

29) Overlap joins #528 is now here, finally!! Except for type="equal" and maxgap and minoverlap arguments, everything else is implemented. Check out ?foverlaps and the examples there on its usage. This is a major feature addition to data.table.

Let's consider x, an interval defined as [a, b], where a <= b, and y, another interval defined as [c, d], where c <= d. The interval y is said to overlap x at all iff d >= a and c <= b. And y is entirely contained within x iff a <= c and d <= b. For the different types of overlaps implemented, please have a look at ?foverlaps.
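As a quick illustration (a sketch only; the helper names below are mine, not part of data.table), these two conditions translate directly into vectorised R predicates:

```r
# Interval x = [a, b], interval y = [c, d]; function names are illustrative only.
overlaps_at_all <- function(a, b, c, d) d >= a & c <= b   # y overlaps x at all
is_within       <- function(a, b, c, d) a <= c & d <= b   # y entirely inside x

overlaps_at_all(1, 3, 3, 5)   # TRUE: the intervals share the point 3
is_within(7, 25, 10, 10)      # TRUE: the zero-width interval [10, 10] lies in [7, 25]
is_within(1, 3, 3, 5)         # FALSE: [3, 5] sticks out past b = 3
```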

Your question is a special case of an overlap join: in d1 you have true physical intervals with start and end positions. In d2, on the other hand, there are only positions (pos), not intervals. To be able to do an overlap join, we need to create intervals in d2 as well. This is achieved by creating an additional variable pos2, which is identical to pos (d2[, pos2 := pos]). Thus, we now have an interval in d2, albeit with identical start and end coordinates. This 'virtual, zero-width interval' in d2 can then be used in foverlaps to do an overlap join with d1:

require(data.table) ## 1.9.3
setkey(d1)
d2[, pos2 := pos]
foverlaps(d2, d1, by.x = names(d2), type = "within", mult = "all", nomatch = 0L)
#    x start end pos pos2
# 1: a     1   3   2    2
# 2: a     1   3   3    3
# 3: c    19  22  20   20
# 4: e     7  25  10   10

by.y by default is key(y), so we skipped it. by.x by default takes key(x) if it exists, and otherwise takes key(y). But no key is set on d2, and we can't reuse y's columns because they don't have the same names, so we set by.x explicitly.

The type of overlap is within, and we want all matches (mult = "all"), but only when there is a match (nomatch = 0L).
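To illustrate those defaults (a sketch using the question's data; requires data.table 1.9.4+ for foverlaps): if d2 is keyed on all three columns, by.x can be dropped, since it then defaults to key(x):

```r
library(data.table)
d1 <- data.table(x = letters[1:5], start = c(1, 5, 19, 30, 7), end = c(3, 11, 22, 39, 25))
d2 <- data.table(x = letters[c(1, 1, 2, 2, 3:5)], pos = c(2, 3, 3, 12, 20, 52, 10))
d2[, pos2 := pos]              # the zero-width interval trick from above
setkey(d1, x, start, end)
setkey(d2, x, pos, pos2)       # by.x now defaults to key(d2), so it can be omitted
foverlaps(d2, d1, type = "within", nomatch = 0L)
# same four matching rows as before (a/2, a/3, c/20, e/10)
```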

NB: foverlaps uses data.table's binary search feature (along with roll where necessary) under the hood, but some function arguments (types of overlaps, maxgap, minoverlap etc..) are inspired by the function findOverlaps() from the Bioconductor package IRanges, an excellent package (and so is GenomicRanges, which extends IRanges for Genomics).


So what's the advantage?

Benchmarking the code above on your data shows foverlaps() to be slower than Gabor's answer (timings: Gabor's data.table solution = 0.004 vs foverlaps = 0.021 seconds). But does it really matter at this granularity?

What would be really interesting is to see how well it scales, in terms of both speed and memory. In Gabor's answer, we join based on the key column x and then filter the results.

What if d1 has about 20K rows and d2 has 100K rows (or more)? For each row of d2, every row of d1 with a matching x is joined and returned, only to be filtered later. Here's an example of your Q scaled only slightly:

Generate data:

require(data.table)
set.seed(1L)
n = 20e3L; k = 100e3L
idx1 = sample(100, n, TRUE)
idx2 = sample(100, n, TRUE)
d1 = data.table(x = sample(letters[1:5], n, TRUE), 
                start = pmin(idx1, idx2), 
                end = pmax(idx1, idx2))

d2 = data.table(x = sample(letters[1:15], k, TRUE), 
                pos1 = sample(60:150, k, TRUE))

foverlaps:

system.time({
    setkey(d1)
    d2[, pos2 := pos1]
    ans1 = foverlaps(d2, d1, by.x=1:3, type="within", nomatch=0L)
})
# user  system elapsed 
#   3.028   0.635   3.745 

This took ~1GB of memory in total, of which ans1 is 420MB. Most of the time here is actually spent on the subset; you can check this by setting the argument verbose = TRUE.

Gabor's solutions:

## new session - data.table solution
system.time({
    setkey(d1, x)
    ans2 <- d1[d2, allow.cartesian=TRUE, nomatch=0L][between(pos1, start, end)]
})
#   user  system elapsed 
# 15.714   4.424  20.324 

And this took a total of ~3.5GB.

I just noted that Gabor already mentions the memory required for intermediate results. So, trying out sqldf:

# new session - sqldf solution
system.time(ans3 <- sqldf("select * from d1 join 
            d2 using (x) where pos1 between start and end"))
#   user  system elapsed 
# 73.955   1.605  77.049 

Took a total of ~1.4GB. So it definitely uses less memory than the allow.cartesian approach shown above.

[The answers were verified to be identical after removing pos2 from ans1 and setting key on both answers.]

Note that this overlap join is designed for problems where d2 doesn't necessarily have identical start and end coordinates (ex: genomics, the field I come from, where d2 usually has about 30-150 million or more rows).


foverlaps() is stable, but is still under development, meaning some arguments and names might get changed.

NB: Since I mentioned GenomicRanges above, it is also perfectly capable of solving this problem. It uses interval trees under the hood, and is quite memory efficient as well. In my benchmarks on genomics data, foverlaps() is faster. But that's for another (blog) post, some other time.

Steiner answered 4/9, 2014 at 0:15 Comment(1)
Did you ever make that blog post? I'm attempting to overlap nearly every nucleotide in the human genome (scored for a relevant metric) with a set of ranges, and I killed GRanges's findOverlaps after about four hours to start off on the foverlaps tangent. Having this as a ready-made blog post with instructions would have saved me a lot of time today. :D – Savour

data.table v1.9.8+ has a new feature - non-equi joins. With that, this operation becomes even more straightforward:

require(data.table) #v1.9.8+
# no need to set keys on `d1` or `d2`
d2[d1, .(x, pos=x.pos, start, end), on=.(x, pos>=start, pos<=end), nomatch=0L]
#    x pos start end
# 1: a   2     1   3
# 2: a   3     1   3
# 3: c  20    19  22
# 4: e  10     7  25
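One subtlety worth a sketch (an illustration using the question's data, not part of the answer above): in a non-equi join, a column named in on= reports the i-side bound it was matched against, so pos without a prefix would equal start; the x. prefix pulls the original column from the outer table, d2:

```r
library(data.table)   # v1.9.8+ for non-equi joins
d1 <- data.table(x = letters[1:5], start = c(1, 5, 19, 30, 7), end = c(3, 11, 22, 39, 25))
d2 <- data.table(x = letters[c(1, 1, 2, 2, 3:5)], pos = c(2, 3, 3, 12, 20, 52, 10))

# Without the x. prefix, 'pos' is taken over by the join condition: it equals 'start'.
d2[d1, .(x, pos, start, end), on = .(x, pos >= start, pos <= end), nomatch = 0L]

# With x.pos, the original positions from d2 are returned.
d2[d1, .(x, pos = x.pos, start, end), on = .(x, pos >= start, pos <= end), nomatch = 0L]
```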
Steiner answered 15/7, 2016 at 10:46 Comment(4)
Would you mind adding memory usage and timing for this approach? – Ditch
It's helpful to note here that x is referring to two things. The first x is the column "x", and the second x, in x.pos, refers to d2. – Manda
@RyanWardValverde Why is it necessary to include pos=x.pos? Say there are more columns in d2, such as 'a' and 'b'. When selecting the columns, a and b don't need to be prefixed with x., but pos does. Furthermore, if you don't prefix pos with x. then pos == start. Do you know why that's expected? – Opener
Seems like a byproduct of the order of operations. Removing everything between the parens in the j position, i.e. .(...), gives us d2[d1, on = .(x, pos>=start, pos<=end), nomatch = NULL], and yields columns named pos and pos.1, which correspond to start and end, which is unexpected behavior to me. This seems to indicate that the join is done first, and that to access the data.table in the x position, it needs to be called explicitly. – Manda

1) sqldf This is not data.table, but complex join criteria are easy to specify in a straightforward manner in SQL:

library(sqldf)

sqldf("select * from d1 join d2 using (x) where pos between start and end")

giving:

  x start end pos
1 a     1   3   2
2 a     1   3   3
3 c    19  22  20
4 e     7  25  10

2) data.table For a data.table answer try this:

library(data.table)

setkey(d1, x)
setkey(d2, x)
d1[d2][between(pos, start, end)]

giving:

   x start end pos
1: a     1   3   2
2: a     1   3   3
3: c    19  22  20
4: e     7  25  10

Note that this does have the disadvantage of forming the possibly large intermediate result d1[d2], which SQL may not do. The remaining solutions may have this problem too.
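To make that intermediate step visible (a sketch with the question's data): d1[d2] alone performs the equi-join on x, and rows such as b/12 and d/52, whose pos lies outside [start, end], are only removed by the subsequent between() filter:

```r
library(data.table)
d1 <- data.table(x = letters[1:5], start = c(1, 5, 19, 30, 7), end = c(3, 11, 22, 39, 25))
d2 <- data.table(x = letters[c(1, 1, 2, 2, 3:5)], pos = c(2, 3, 3, 12, 20, 52, 10))
setkey(d1, x)
setkey(d2, x)

nrow(d1[d2])                              # 7: every d2 row is joined on x
nrow(d1[d2][between(pos, start, end)])    # 4: filtered down to true overlaps
```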

3) dplyr This suggests the corresponding dplyr solution. We also use between from data.table:

library(dplyr)
library(data.table) # between

d1 %>% 
   inner_join(d2) %>% 
   filter(between(pos, start, end))

giving:

Joining by: "x"
  x start end pos
1 a     1   3   2
2 a     1   3   3
3 c    19  22  20
4 e     7  25  10

4) merge/subset Using only the base of R:

subset(merge(d1, d2), start <= pos & pos <= end)

giving:

   x start end pos
1: a     1   3   2
2: a     1   3   3
3: c    19  22  20
4: e     7  25  10

Added: Note that the data.table solution here is much faster than the one in the other answer:

dt1 <- function() {
 d1 <- data.table(x=letters[1:5], start=c(1,5,19,30, 7), end=c(3,11,22,39,25))
 d2 <- data.table(x=letters[c(1,1,2,2,3:5)], pos=c(2,3,3,12,20,52,10))
 setkey(d1, x, start)
 idx1 = d1[d2, which=TRUE, roll=Inf] # last observation carried forwards

 setkey(d1, x, end)
 idx2 = d1[d2, which=TRUE, roll=-Inf] # next observation carried backwards

 idx = which(!is.na(idx1) & !is.na(idx2))
 ans1 <<- cbind(d1[idx1[idx]], d2[idx, list(pos)])
}

dt2 <- function() {
 d1 <- data.table(x=letters[1:5], start=c(1,5,19,30, 7), end=c(3,11,22,39,25))
 d2 <- data.table(x=letters[c(1,1,2,2,3:5)], pos=c(2,3,3,12,20,52,10))
 setkey(d1, x)
 ans2 <<- d1[d2][between(pos, start, end)]
}

all.equal(as.data.frame(ans1), as.data.frame(ans2))
## TRUE

benchmark(dt1(), dt2())[1:4]
##     test replications elapsed relative
##  1 dt1()          100    1.45    1.667  
##  2 dt2()          100    0.87    1.000  <-- from (2) above
Thew answered 29/6, 2014 at 21:9 Comment(5)
I do appreciate your comprehensiveness. I just really wanted something to take advantage of the optimized tree search implemented in the data.table rolling join. Thanks for the sqldf suggestion. I was accustomed to writing joins like sqldf("select * from d1 join d2 on d1.x==d2.x and d2.pos>=d1.start and d2.pos<=d1.end") in SQL Server, which were nice when you could add indexes to the tables. I think this may avoid creating a full outer join in memory (but didn't test). – Quaver
However, note that the data.table code here is shorter and faster. – Thew
You are right, between is faster. I also tested d1[d2, roll=T, nomatch=0, mult="all"][start<=end] with your harness and it was close to between. It just goes to show you have to test everything, because you never know what will be faster. Thanks for taking the time to check these. Very interesting. Maybe when @Steiner implements "range join" in data.table he can make it faster. – Quaver
Could you explain how d1[d2][between(pos, start, end)] works, please? – Amnesty
d1[d2] joins d1 and d2 along x, and [between(...)] selects those rows for which between(...) is TRUE. – Thew

Overlap joins are available in dplyr 1.1.0 via the function join_by.

With join_by, you can do overlap join with between, or manually with >= and <=:

library(dplyr)
inner_join(d2, d1, by = join_by(x, between(pos, start, end)))
#  x pos start end
#1 a   2     1   3
#2 a   3     1   3
#3 c  20    19  22
#4 e  10     7  25
inner_join(d2, d1, by = join_by(x, pos >= start, pos <= end))
#  x pos start end
#1 a   2     1   3
#2 a   3     1   3
#3 c  20    19  22
#4 e  10     7  25
Manhood answered 26/8, 2022 at 9:10 Comment(0)

Using fuzzyjoin :

result <- fuzzyjoin::fuzzy_inner_join(d2, d1, 
                           by = c('x' = 'x', 'pos' = 'start', 'pos' = 'end'),
                           match_fun = list(`==`, `>=`, `<=`))
result

#  x.x     pos x.y   start   end
#  <chr> <dbl> <chr> <dbl> <dbl>
#1 a         2 a         1     3
#2 a         3 a         1     3
#3 c        20 c        19    22
#4 e        10 e         7    25

Since fuzzyjoin returns all the columns, we might need to do some cleaning to keep only the ones we want.

library(dplyr)
result %>% select(x = x.x, pos, start, end)

# A tibble: 4 x 4
#  x       pos start   end
#  <chr> <dbl> <dbl> <dbl>
#1 a         2     1     3
#2 a         3     1     3
#3 c        20    19    22
#4 e        10     7    25
Adz answered 3/8, 2020 at 4:26 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.