Fastest & most flexible way to chart over 2 million rows of flat file data?

I'm collecting some system data in a flatfile, which has this format:

YYYY-MM-DD-HH24:MI:SS DD1 DD2 DD3 DD4

Where DD1-DD4 are four items of data. An example of the file is this:

2011-02-01-13:29:53 16 8 7 68
2011-02-01-13:29:58 13 8 6 110
2011-02-01-13:30:03 26 25 1 109
2011-02-01-13:30:08 13 12 1 31
2011-02-01-13:30:14 192 170 22 34
2011-02-01-13:30:19 16 16 0 10
2011-02-01-13:30:24 137 61 76 9
2011-02-01-13:30:29 452 167 286 42
2011-02-01-13:30:34 471 177 295 11
2011-02-01-13:30:39 502 192 309 10

The file is over 2 million rows, with data points every five seconds.

I need to chart this data to be able to derive meaning from it.

What I've tried

So far I've tried gnuplot and rrdtool with a variety of Unix tools (awk, sed, etc.). Both of these work, but they seem to require a lot of chopping and re-cutting of the data each time I want to view it in a different way. My gut feeling is that rrdtool is the right way to go, but at the moment I'm struggling to get the data into it fast enough, partly because I have to convert my timestamps into Unix epoch time. My understanding is also that if I decide I want a new aggregation granularity I have to rebuild the rrd (which makes sense for real-time collection, but not for retrospective loads like this). These things make me think I may be using the wrong tool.
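
To make the conversion concrete, a timestamp in the format above turns into epoch seconds like this (a rough sketch in R; awk or date would do the same job):

## sketch: one of the timestamps above converted to Unix epoch seconds
## (the format string matches the YYYY-MM-DD-HH24:MI:SS layout)
ts <- as.POSIXct("2011-02-01-13:29:53", format = "%Y-%m-%d-%H:%M:%S", tz = "UTC")
as.numeric(ts)   ## seconds since 1970-01-01, which is what rrdtool wants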

The collection of the data to flat file is fixed - for example I cannot pipe the collection directly into rrdtool.

My Question

I would like people's opinions on the best way to do charting. I have these requirements:

  1. It should be as fast as possible to create a graph (not just render, but set up to render too)
  2. It should be as flexible as possible - I need to muck around with the graphs to work out the best granularity for the data (5 seconds is probably too granular)
  3. It should be able to aggregate (MAX/AVG/etc) where necessary
  4. It should be repeatable against new data files as they come in
  5. Ideally I want to be able to overlay DD1 vs DD2, or DD1 last week with DD1 this week
  6. Unix or Windows, don't care. Prefer *nix though :-)

Any suggestions?

Caphaitien asked 25/2, 2011 at 16:20 Comment(3)
I added the R tag because Kodner's at the beach. (asshole)Monomolecular
Thanks for everyone's answers, seems I can only accept one but they're all great!Caphaitien
@RobinMoffatt How is your question not opinionated and mine is? You are specifically asking a similar question as mine and yet you voted to close my question. #65072179 Sorry did not know another way to bring attentionElope

This is a really good question. I'm glad to see some R folks weighing in. I too think R is the right tool for the job, although it's my main hammer so everything looks a bit like a nail to me.

There are a handful of R concepts needed to tackle this challenge. As I see it, you need the following (references in parentheses):

  1. Import data into R. (R Import Export Guide)
  2. Get the Data into an appropriate time series structure. (XTS Vignette PDF)
  3. A little bit of plotting. (Quick-R intro to graphics)

Here's example code using 2 million points. Note that I don't illustrate plotting all 2 million points: it's slow and not that informative. But this should give you some ideas on getting started. Feel free to come back with more specific questions if you do decide to jump down the R rabbit hole!

require( xts )
require( lubridate )

## set up some example data
dataLength <- 2e6
startTime <- ymd_hms("2011-02-01-13-29-53")
fistFullOfSeconds <- 1:dataLength
date <- startTime + fistFullOfSeconds
DD1 <- rnorm( dataLength )
DD2 <- DD1 + rnorm(dataLength, 0, .1 )
DD3 <- rnorm( dataLength, 10, 2)
DD4 <- rnorm( dataLength )

myXts <- xts(matrix( c( DD1, DD2, DD3, DD4 ), ncol=4 ), date)

## now all the data are in the myXts object so let's do some
## summarizing and visualization

## grabbing just a single day from the data
## converted to data.frame to illustrate default data frame plotting
oneDay <- data.frame( myXts["2011-02-02"] ) 
plot( oneDay )

The relationship between DD1 and DD2 kinda jumps out:

[pairs plot of the oneDay data frame]

boxplot( oneDay )

The boxplot is the pie chart of statistical graphics: the plot you love to hate.

[boxplot of oneDay]

## look at the max value of each variable every minute
par(mfrow=c(4,1)) ## partitions the graph window
ep <- endpoints(myXts,'minutes')
plot(period.apply(myXts[,1],INDEX=ep,FUN=max))
plot(period.apply(myXts[,2],INDEX=ep,FUN=max))
plot(period.apply(myXts[,3],INDEX=ep,FUN=max))
plot(period.apply(myXts[,4],INDEX=ep,FUN=max))

Even at one-minute resolution I'm not sure this is informative; it should probably be subset or aggregated further.

[four stacked panels: per-minute maxima of the four variables]
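
If the minute-level maxima are still too dense, the same endpoints()/period.apply() idea works at any coarser period. Here's a rough sketch of hourly maxima for all four series in one multi-panel plot (epHour and hourMax are just names I've made up):

## sketch: aggregate to hourly maxima first, then plot all four
## series at once (plot.zoo draws one panel per column)
par(mfrow=c(1,1))   ## reset the 4x1 partition from above
epHour  <- endpoints(myXts, 'hours')
hourMax <- period.apply(myXts, INDEX = epHour, FUN = function(x) apply(x, 2, max))
colnames(hourMax) <- c("DD1", "DD2", "DD3", "DD4")
plot(as.zoo(hourMax))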

Monomolecular answered 25/2, 2011 at 19:42 Comment(5)
nice answer JD, good illustration of some of the add-on packages that make these things easier than base R functionality.Nidia
Agreed. But do play with a) 'alpha blending' to allow over-plotting (and not all graphics devices support it), b) the awesome hexbin package or, as JD did c) sub-sampling. Once every minute is still too fine-grained for the way the data is set up (with a second interval).Ailin
@Dirk re: every minute - indeed. I edited my answer to work on hourly aggregates after initially doing it at the minute time step, which was just a mess.Nidia
@JD Why did you have to load myXts as xts(matrix( c( DD1, DD2, DD3, DD4 ), ncol=4 ), date) rather than xts( c( DD1, DD2, DD3, DD4) ) ?Multiplier
@Lao, DD1-4 are each vectors. If you do c( DD1, DD2, DD3, DD4) you get a single vector four times as long. I want each of the 4 vectors to be "columns" in an xts objectMonomolecular

Here's some R code for playing around with 8,000,000 numbers in 4 columns of 2,000,000 rows:

> d=matrix(runif(8000000),ncol=4)
> dim(d)
[1] 2000000       4
> plot(d[1:1000,1])
> plot(d[1:1000,1],type='l')
> plot(d[1:10000,1],type='l')

now it starts to get a bit slow:

> plot(d[1:100000,1],type='l')

what about correlation of two columns:

> cor(d[,1],d[,2])
[1] 0.001708502

-- instant. Fourier transform?

> f=fft(d[,1])

also instant. Don't try and plot it though.

Let's plot a thinned version of one of the columns:

> plot(d[seq(1,2000000,len=1000),1],type='l')

-- instant.

What's really missing is an interactive plot where you could zoom and pan around the whole data set.
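
One option for that, though it's not base R, is the dygraphs package, which renders an xts object as a browser-based chart you can zoom and pan. A minimal sketch, assuming dygraphs is installed and reusing the myXts object from JD's answer above:

## sketch: interactive zoom/pan over a full series in the browser
## (assumes the dygraphs package and the myXts object from JD's answer)
library(dygraphs)
dygraph(myXts[, 1]) %>%
  dyRangeSelector()   ## adds a drag-to-zoom range selector under the chart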

Disqualification answered 25/2, 2011 at 19:18 Comment(4)
For giggles and fun, I loaded your d matrix and messed around with the alpha parameter in ggplot. It took ~2 minutes, but I was able to make this terribly informative plot with qplot(d[,1], d[,2], alpha = 1/100). I think your notion of a "thinned" version of the dataset is going to be the way to go with so much data here.Contrasty
@chase, I printed that out in color and hung it on my wall. I'm titling it "Darkness."Monomolecular
@jd - I think if we added in an colour = rnorm(100) or something in there, we could make a series out of them. Turn this "free and open source" thing into a profit generating mecca!Contrasty
Make them glow under a black light and all the stats students will have them hanging in their dorm rooms. We could have the next great "velvet Elvis" hereMonomolecular

Here is an example along the lines of the data you have, loaded into R, aggregated, etc.

First, some dummy data to write out to a file:

stime <- as.POSIXct("2011-01-01-00:00:00", format = "%Y-%m-%d-%H:%M:%S")
## dummy data
dat <- data.frame(Timestamp = seq(from = stime, by = 5, length = 2000000),
                  DD1 = sample(1:1000, replace = TRUE),
                  DD2 = sample(1:1000, replace = TRUE),
                  DD3 = sample(1:1000, replace = TRUE),
                  DD4 = sample(1:1000, replace = TRUE))
## write it out
write.csv(dat, file = "timestamp_data.txt", row.names = FALSE)

Then we can time reading in the 2 million rows. To speed this up, we tell R the classes of the columns in the file: "POSIXct" is one of the ways R stores the sort of timestamps you have.

## read it in:
system.time({
             tsdat <- read.csv("timestamp_data.txt", header = TRUE,
                                 colClasses = c("POSIXct",rep("integer", 4)))
            })

which takes about 13 seconds to read in and convert to internal Unix times on my modest laptop:

   user  system elapsed 
 13.698   5.827  19.643 
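
Reading your actual file should look much the same. A rough sketch, assuming the space-separated, header-less layout and the timestamp format from your question (the file name here is just a placeholder):

## sketch: read the space-separated flat file described in the question
## ("system_data.txt" is a placeholder name)
rawdat <- read.table("system_data.txt", header = FALSE,
                     col.names = c("Timestamp", "DD1", "DD2", "DD3", "DD4"),
                     colClasses = c("character", rep("integer", 4)))
rawdat$Timestamp <- as.POSIXct(rawdat$Timestamp, format = "%Y-%m-%d-%H:%M:%S")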

Aggregation can be done in lots of ways; one is using aggregate(). Say we aggregate to hourly means:

## Generate the hour and day-of-year factors that we'll aggregate over
tsdat <- transform(tsdat,
                   hours   = factor(strftime(tsdat$Timestamp, format = "%H")),
                   jday    = factor(strftime(tsdat$Timestamp, format = "%j")))
## compute the mean of the 4 variables for each hour of each day
out <- aggregate(cbind(Timestamp, DD1, DD2, DD3, DD4) ~ hours + jday, 
                 data = tsdat, FUN = mean)
## convert average Timestamp to a POSIX time
out <- transform(out,
                 Timestamp = as.POSIXct(Timestamp, 
                                        origin = ISOdatetime(1970,1,1,0,0,0)))

That (the line creating out) takes ~16 seconds on my laptop, and gives the following output:

> head(out)
  hours jday           Timestamp      DD1      DD2      DD3      DD4
1    00  001 2010-12-31 23:29:57 500.2125 491.4333 510.7181 500.4833
2    01  001 2011-01-01 00:29:57 516.0472 506.1264 519.0931 494.2847
3    02  001 2011-01-01 01:29:57 507.5653 499.4972 498.9653 509.1389
4    03  001 2011-01-01 02:29:57 520.4111 500.8708 514.1514 491.0236
5    04  001 2011-01-01 03:29:57 498.3222 500.9139 513.3194 502.6514
6    05  001 2011-01-01 04:29:57 515.5792 497.1194 510.2431 496.8056

Simple plotting can be achieved using the plot() function:

plot(DD1 ~ Timestamp, data = out, type = "l")

We can overlay more variables, e.g.:

ylim <- with(out, range(DD1, DD2))
plot(DD1 ~ Timestamp, data = out, type = "l", ylim = ylim)
lines(DD2 ~ Timestamp, data = out, type = "l", col = "red")

or via multiple panels:

layout(1:2)
plot(DD1 ~ Timestamp, data = out, type = "l", col = "blue")
plot(DD2 ~ Timestamp, data = out, type = "l", col = "red")
layout(1)
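
Requirement 5 in the question (e.g. DD1 last week overlaid on DD1 this week) is a small extension of the same idea; a sketch using the hourly out data frame from above (the week numbers are just for illustration):

## sketch: overlay DD1 for two consecutive weeks, aligned by hour-of-week
out$week <- as.integer(strftime(out$Timestamp, format = "%W"))
thisWeek <- subset(out, week == 2)
lastWeek <- subset(out, week == 1)
plot(thisWeek$DD1, type = "l", col = "blue",
     ylim = range(thisWeek$DD1, lastWeek$DD1),
     xlab = "hour of week", ylab = "DD1")
lines(lastWeek$DD1, col = "red")
legend("topright", legend = c("this week", "last week"),
       col = c("blue", "red"), lty = 1)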

This has all been done with base R functionality. Others have shown how add-on packages can make working with dates easier.
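
For comparison, here's a sketch of the same hourly aggregation using the xts package from the other answers; tsXts and hourlyMeans are just names I've chosen:

## sketch: hourly means via xts/period.apply instead of aggregate()
library(xts)
tsXts <- xts(tsdat[, c("DD1", "DD2", "DD3", "DD4")], order.by = tsdat$Timestamp)
hourlyMeans <- period.apply(tsXts, INDEX = endpoints(tsXts, "hours"), FUN = colMeans)
head(hourlyMeans)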

Nidia answered 25/2, 2011 at 20:16 Comment(0)
