I thought that, generally speaking, using %>% wouldn't have a noticeable effect on speed, but in this case it runs about 4x slower.
library(dplyr)
library(microbenchmark)

set.seed(0)
dummy_data <- dplyr::data_frame(
  id = floor(runif(10000, 1, 10000))
  , label = floor(runif(10000, 1, 4))
)

# Without a pipe inside summarise():
microbenchmark(dummy_data %>% group_by(id) %>% summarise(list(unique(label))))

# With a pipe inside summarise():
microbenchmark(dummy_data %>% group_by(id) %>% summarise(label %>% unique %>% list))
Without pipe:
      min       lq     mean   median       uq      max neval
 1.691441 1.739436 1.841157 1.812778 1.880713 2.495853   100

With pipe:
      min       lq     mean   median       uq      max neval
 6.753999 6.969573 7.167802 7.052744 7.195204 8.833322   100
Why is %>% so much slower in this situation? Is there a better way to write this?
EDIT:
I made the data frame smaller and incorporated Moody_Mudskipper's suggestions into the benchmarking.
microbenchmark(
  # baseline: no pipe inside summarise()
  nopipe    = dummy_data %>% group_by(id) %>% summarise(list(unique(label))),
  # magrittr pipe evaluated inside summarise()
  magrittr  = dummy_data %>% group_by(id) %>% summarise(label %>% unique %>% list),
  # magrittr functional sequence (. %>% ...) passed to summarise_at()
  magrittr2 = dummy_data %>% group_by(id) %>% summarise_at('label', . %>% unique %>% list),
  # same chain written with the %.% pipe ("fastpipe")
  fastpipe  = dummy_data %.% group_by(., id) %.% summarise(., label %.% unique(.) %.% list(.))
)
Unit: milliseconds
      expr       min        lq      mean    median        uq      max neval
    nopipe  59.91252  70.26554  78.10511  72.79398  79.29025 214.9245   100
  magrittr 469.09573 525.80084 568.28918 558.05634 590.48409 767.4647   100
 magrittr2  84.06716  95.20952 106.28494 100.32370 110.92373 241.1296   100
  fastpipe  93.57549 103.36926 109.94614 107.55218 111.90049 162.7763   100
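For reference, the . %>% unique %>% list expression in the magrittr2 row is a magrittr functional sequence: a chain that starts with a dot builds a reusable function instead of piping a value. A minimal illustration (the name unique_list is just a placeholder):

library(magrittr)

# A chain that starts with . becomes a function (a "functional sequence"):
unique_list <- . %>% unique %>% list
unique_list(c(1, 2, 2, 3))  # equivalent to list(unique(c(1, 2, 2, 3)))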
You can pass both snippets to a single microbenchmark call: microbenchmark(code1 = { ...first snippet... }, code2 = { ...second snippet... }) (or without the names), so you can compare the times directly. – Giffy
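Following that suggestion, a minimal sketch of one combined microbenchmark() call, reusing dummy_data and the two snippets from the question (the names code1/code2 are just the placeholders from the comment):

library(dplyr)
library(microbenchmark)

# One call times both expressions together, so the rows are directly comparable.
microbenchmark(
  code1 = dummy_data %>% group_by(id) %>% summarise(list(unique(label))),
  code2 = dummy_data %>% group_by(id) %>% summarise(label %>% unique %>% list),
  times = 100
)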