And from v1.9.8 (NEWS item 16), you can use rowid() together with rleid():
dataset[, counter := rowid(rleid(input))]
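On a small vector the idea is easy to see: rleid() assigns one id per run of identical values, and rowid() then counts rows within each id. A quick sketch (the toy vector is made up for illustration):

```r
library(data.table)

x <- c("a", "b", "b", "a", "a", "c")
rleid(x)           # run ids:            1 2 2 3 3 4
rowid(rleid(x))    # within-run counter: 1 1 2 1 2 1
```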
timing code:
set.seed(1L)
library(data.table)
DT <- data.table(input=sample(letters, 1e6, TRUE))
DT1 <- copy(DT)
bench::mark(DT[, counter := seq_len(.N), by=rleid(input)],
DT1[, counter := rowid(rleid(input))])
timings:
  expression                                              min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc total_time
  <bch:expr>                                          <bch:t>  <bch:t>     <dbl> <bch:byt>    <dbl> <int> <dbl>   <bch:tm>
1 DT[, `:=`(counter, seq_len(.N)), by = rleid(input)] 613.8ms  613.8ms      1.63    18.8MB     8.15     1     5      614ms
2 DT1[, `:=`(counter, rowid(rleid(input)))]            60.5ms   71.4ms     12.7     26.4MB    14.5      7     8      553ms
An efficient and more straightforward version of the function written below is now available in the data.table package, called rleid. Using it, the task is just:
setDT(dataset)[, counter := seq_len(.N), by=rleid(input)]
See ?rleid for more on usage and examples. Thanks to @Henrik for the suggestion to update this post.
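On a toy column, the one-liner above produces the expected within-run counter (a small sketch; the `input` values are made up for illustration):

```r
library(data.table)

dataset <- data.frame(input = c("a", "b", "b", "a", "a", "c"))
setDT(dataset)[, counter := seq_len(.N), by = rleid(input)]
dataset$counter   # 1 1 2 1 2 1
```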
rle is definitely the most convenient way to do it (+1 to @Ananda's answer). But one can do better (in terms of speed) on bigger data, using the duplist and vecseq functions (not exported) from data.table, as follows:
require(data.table)
arun <- function(y) {
  # start index of each run (duplist marks positions whose value differs from the previous one)
  w = data.table:::duplist(list(y))
  # convert run start indices into run lengths
  w = c(diff(w), length(y) - tail(w, 1L) + 1L)
  # generate the 1..length sequence for every run in a single call
  data.table:::vecseq(rep(1L, length(w)), w, length(y))
}
x <- c("a","b","b","a","a","c","a","a","a","a","b","c")
arun(x)
# [1] 1 1 2 1 2 1 1 2 3 4 1 1
Benchmarking on big data:
set.seed(1)
x <- sample(letters, 1e6, TRUE)
# rle solution
ananda <- function(y) {
sequence(rle(y)$lengths)
}
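For comparison, the rle-based version is easy to trace on a small input: rle() compresses the vector into run lengths, and sequence() expands each length into 1..length (toy vector assumed for illustration):

```r
x <- c("a", "b", "b", "a", "a", "c")
rle(x)$lengths              # 1 2 2 1
sequence(rle(x)$lengths)    # 1 1 2 1 2 1
```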
require(microbenchmark)
microbenchmark(a1 <- arun(x), a2<-ananda(x), times=100)
Unit: milliseconds
             expr       min        lq    median       uq       max neval
    a1 <- arun(x)  123.2827  132.6777  163.3844  185.439  563.5825   100
  a2 <- ananda(x) 1382.1752 1899.2517 2066.4185 2247.233 3764.0040   100
identical(a1, a2) # [1] TRUE