I know there are many questions here on SO about ways to convert a list of data.frames to a single data.frame using do.call or ldply, but this question is about understanding the inner workings of both methods and trying to figure out why I can't get either to work for concatenating a list of almost 1 million data.frames of the same structure, same field names, etc. into a single data.frame. Each data.frame has one row and 21 columns.
The data started out as a JSON file, which I converted to a list of lists with fromJSON. I then ran another lapply to extract the part of each list I needed and convert it to a data.frame, ending up with a list of data.frames.
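For reference, the pipeline looks roughly like this. This is a minimal sketch: the toy records, field names, and the assumption of one JSON record per element are mine, not from the real data (which has 21 fields per record).

```r
library(rjson)

# Toy stand-in for the real file: each element is one JSON record.
# The real data has 21 fields; only two are shown here.
json_lines <- c('{"id": 1, "value": "a"}',
                '{"id": 2, "value": "b"}')

# Parse each record, then convert each parsed list to a one-row data.frame.
parsed  <- lapply(json_lines, fromJSON)
df_list <- lapply(parsed, as.data.frame)
```

With the real input, df_list is the list of ~1 million one-row data.frames being concatenated below.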
I've tried:
df <- do.call("rbind", list)
df <- ldply(list)
but I've had to kill the process after letting it run for up to 3 hours without getting anything back.
Is there a more efficient way to do this? How can I troubleshoot what is happening and why it is taking so long?
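One way to troubleshoot is to time the same call on growing subsets of the list; if do.call(rbind, ...) scales worse than linearly, that shows up long before the full million-element run. A sketch (the toy list and subset sizes are arbitrary choices of mine):

```r
# Build a toy list of one-row data.frames to mimic the real input.
dfs <- replicate(10000, data.frame(a = 1, b = "x"), simplify = FALSE)

# Time rbind on growing subsets; superlinear growth in elapsed time
# suggests per-element overhead (e.g. repeated column-name checks)
# rather than raw data volume is the bottleneck.
for (n in c(1000, 2000, 4000)) {
  print(system.time(do.call(rbind, dfs[seq_len(n)])))
}
```

Doubling n should roughly double the elapsed time if the operation scales linearly; a 4x jump per doubling points at quadratic behavior.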
FYI - I'm using RStudio Server on a 72GB quad-core server running RHEL, so I don't think memory is the problem. sessionInfo below:
> sessionInfo()
R version 2.14.1 (2011-12-22)
Platform: x86_64-redhat-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=C LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] multicore_0.1-7 plyr_1.7.1 rjson_0.2.6
loaded via a namespace (and not attached):
[1] tools_2.14.1
Comments:

Reduce(list, f = rbind)? – Liger

data.table and rbindlist are the way to go here! – Inaugurate

Could you consider moving the accept to mnel's answer, please? However, I don't know how S.O. etiquette works when a better answer comes along a long time later, especially when that new answer uses new features not available originally. rbindlist is a conclusive solution though, which is many times faster than do.call("rbind", ...), and this question and its answers are all about speed for large data. – Chetnik

So do.call(rbind, ...) is slow even at that tiny size. But the output, at 250,000 rows and 2 columns, is just 3MB. So maybe it's related to this benchmark: a large number (50,000) of very small data.frames (5x2) with all those (identical) column names repeated over and over in the input. Perhaps do.call is checking all those column name vectors. Anyway, we could scale up from 40MB to 400MB at least. – Chetnik
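For later readers: as the comments suggest, data.table::rbindlist (added to data.table after this question was asked) binds a list of data.frames in a single pass in C, avoiding the repeated copying and column-name checking that make do.call(rbind, ...) slow here. A minimal sketch, using a toy list in place of the real input:

```r
library(data.table)

# Toy list of one-row data.frames standing in for the real ~1M-element list.
dfs <- replicate(1000, data.frame(a = 1, b = "x"), simplify = FALSE)

# rbindlist binds all elements in one pass and returns a data.table;
# setDF converts it back to a plain data.frame in place if needed.
dt <- rbindlist(dfs)
df <- setDF(dt)
```

Because a data.table is also a data.frame, the setDF step is only needed if downstream code checks the class explicitly.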