Are document databases good for storing large amounts of Stock Tick data? [closed]

I was thinking of using a database like mongodb or ravendb to store a lot of stock tick data and wanted to know if this would be viable compared to a standard relational such as Sql Server.

The data would not really be relational and would be a couple of huge tables. I was also thinking that I could sum/min/max rows of data by minute/hour/day/week/month etc for even faster calculations.

Example data: 500 symbols * 60 min * 60sec * 300 days... (per record we store: date, open, high,low,close, volume, openint - all decimal/float)
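
For scale, a quick back-of-envelope check of those numbers, plus one possible shape for a single record (the layout below is just an illustration, not necessarily how it would be stored):

symbols, minutes, seconds, days = 500, 60, 60, 300
rows = symbols * minutes * seconds * days
print(f"{rows:,} rows")  # 540,000,000 one-second records over 300 trading days

sample_record = {
    "symbol": "AAPL",   # the symbol itself is implied by the question
    "date": "2010-07-08T14:30:00Z",
    "open": 250.0, "high": 251.2, "low": 249.8, "close": 250.9,
    "volume": 12000.0, "openint": 0.0,
}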

So what do you guys think?

Saturn answered 8/7, 2010 at 20:12 Comment(0)

The answer here will depend on scope.

MongoDB is a great way to get the data "in", and it's really fast at querying individual pieces. It's also nice as it is built to scale horizontally.

However, what you'll have to remember is that all of your significant "queries" are actually going to result from "batch job output".

As an example, Gilt Groupe has created a system called Hummingbird that they use for real-time analytics on their web site (presentation here). They're basically rendering pages dynamically based on performance data collected at tight intervals (15 minutes).

In their case, they have a simple cycle: post data to Mongo -> run map-reduce -> push data to the web servers for real-time optimization -> rinse / repeat.

This is honestly pretty close to what you probably want to do. However, there are some limitations here:

  1. Map-reduce is new to many people. If you're familiar with SQL, you'll have to accept the learning curve of Map-reduce (a minimal example follows this list).
  2. If you're pumping in lots of data, your map-reduces are going to be slower on those boxes. You'll probably want to look at replica sets / secondary reads if response times are a big deal.
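
To make the learning-curve point concrete, here is a minimal sketch of a per-minute min/max/sum rollup written as a MongoDB map-reduce job and submitted through pymongo. The collection name ("ticks") and the field names ("symbol", "ts", "price") are assumptions, and newer MongoDB releases would express the same rollup with the aggregation pipeline instead:

from bson.code import Code
from pymongo import MongoClient

db = MongoClient()["market"]          # assumes a local mongod and a "market" database

mapper = Code("""
function () {
    // Bucket each raw tick by symbol + minute.
    var minute = new Date(this.ts);
    minute.setSeconds(0, 0);
    emit({symbol: this.symbol, minute: minute},
         {min: this.price, max: this.price, sum: this.price, count: 1});
}
""")

reducer = Code("""
function (key, values) {
    var out = {min: Infinity, max: -Infinity, sum: 0, count: 0};
    values.forEach(function (v) {
        if (v.min < out.min) out.min = v.min;
        if (v.max > out.max) out.max = v.max;
        out.sum += v.sum;
        out.count += v.count;
    });
    return out;
}
""")

# Writes the per-minute rollup into the "ticks_by_minute" collection.
db.command("mapReduce", "ticks", map=mapper, reduce=reducer, out="ticks_by_minute")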

On the other hand, you'll run into different variants of these problems with SQL.

Of course there are some benefits here:

  1. Horizontal scalability. If you have lots of boxes then you can shard them and get somewhat linear performance increases on Map/Reduce jobs (that's how they work). Building such a "cluster" with SQL databases is a lot more complex and costly.
  2. Really fast speed, and as with point #1, you get the ability to add RAM horizontally to keep up the speed.

As mentioned by others though, you're going to lose access to ETL and other common analysis tools. You'll definitely be on the hook to write a lot of your own analysis tools.

Flint answered 9/7, 2010 at 4:59 Comment(1)
Thanks for the responses, looks like I will have to do a few test scenarios and play around first. But the analysis tools support is something I overlooked. Thanks.Saturn

Since this question was asked in 2010, several database engines have been released, or have developed features, that specifically handle time series such as stock tick data; InfluxDB, used in the examples below, is one of them.

With MongoDB or other document-oriented databases, if you target performance, the advice is to contort your schema and organize ticks into an object keyed by seconds (or an object of minutes, each minute being another object holding 60 seconds); a sketch of that bucketed layout follows the query below. With a specialized time-series database, you can query the data simply with

SELECT open, close FROM market_data
WHERE symbol = 'AAPL' AND time > '2016-09-14' AND time < '2016-09-21'
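
For contrast, this is roughly what the "contorted" per-minute bucket mentioned above tends to look like in a document store; the field names and the seconds-keyed sub-object are an illustration, not a prescribed schema:

minute_bucket = {
    "symbol": "AAPL",
    "minute": "2016-09-14T14:30:00Z",
    "ticks": {                           # one entry per second within the minute
        "0": {"price": 107.51, "volume": 1200},
        "1": {"price": 107.53, "volume": 800},
        # ... typically pre-allocated out to "59" so updates never grow the document
    },
}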

I was also thinking that I could sum/min/max rows of data by minute/hour/day/week/month etc for even faster calculations.

With InfluxDB, this is very straightforward. Here's how to get the daily minimums and maximums:

SELECT MIN("close"), MAX("close") FROM "market_data" WHERE symbol = 'AAPL'
GROUP BY time(1d)

You can group by time intervals which can be in microseconds (u), seconds (s), minutes (m), hours (h), days (d) or weeks (w).
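
If the rollup needs to be issued from application code rather than the CLI, a minimal sketch with the influxdb-python client looks like this (the host, database name, and measurement layout are assumptions, and this targets the InfluxDB 1.x query API):

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="market")
result = client.query(
    'SELECT MIN("close"), MAX("close") FROM "market_data" '
    "WHERE symbol = 'AAPL' GROUP BY time(1d)"
)
for point in result.get_points():
    # Each point carries the bucket timestamp plus the aggregated values.
    print(point["time"], point["min"], point["max"])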

TL;DR

Time-series databases are better choices than document-oriented databases for storing and querying large amounts of stock tick data.

Tenantry answered 15/9, 2016 at 0:32 Comment(4)
Could you provide some resources about what you call 'time-series databases'? Do you mean column-oriented databases like HBase or Cassandra? TxDesegregate
@bAN: To quote this post on the top TSDBs, "Databases built from the ground up for time series data are significantly faster than those that sit on top of non purpose built databases like Cassandra and Hadoop."Tenantry
@DanDascalescu your comment link brokeFertilizer
@JosephGarvin: luckily the Wayback Machine has archived it.Tenantry

Here's my reservation with the idea - and I'm going to openly acknowledge that my working knowledge of document databases is weak. I’m assuming you want all of this data stored so that you can perform some aggregation or trend-based analysis on it.

If you use a document-based DB to act as your source, loading and manipulating each row of data (CRUD operations) is very simple. Very efficient, very straightforward, basically lovely.
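
As an illustration of how simple that per-record handling is, here is a minimal pymongo sketch; the collection and field names are assumptions:

from pymongo import MongoClient

ticks = MongoClient()["market"]["ticks"]   # assumes a local mongod

# Create
ticks.insert_one({"symbol": "AAPL", "ts": "2010-07-08T14:30:00Z", "close": 250.9})
# Read (latest tick for a symbol)
doc = ticks.find_one({"symbol": "AAPL"}, sort=[("ts", -1)])
# Update
ticks.update_one({"_id": doc["_id"]}, {"$set": {"close": 251.0}})
# Delete
ticks.delete_one({"_id": doc["_id"]})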

What sucks is that there are very few, if any, options to extract this data and cram it into a structure more suitable for statistical analysis, e.g. a columnar database or a cube. If you load it into a basic relational database, there are a host of tools, both commercial and open source (such as Pentaho), that will accommodate the ETL and analysis very nicely.

Ultimately though, what you want to keep in mind is that every financial firm in the world has a stock analysis / auto-trader application; they just caused a major U.S. stock market tumble and they are not toys. :)

Indetermination answered 8/7, 2010 at 20:27 Comment(0)

A simple datastore such as a key-value or document database is also beneficial in cases where the analytics workload reasonably exceeds a single system's capacity (or would require an exceptionally large machine to handle the load). In these cases it makes sense to use a simple store, since the analytics require batch processing anyway. I would personally look for a horizontally scaling processing method for producing the per-unit/time analytics required.

I would investigate using something built on Hadoop for parallel processing. Either use the framework natively in Java/C++, or use some higher-level abstraction: Pig, Wukong, binary executables through the streaming interface, etc. Amazon offers reasonably cheap processing time and storage if that route is of interest. (I have no personal experience with it, but many do and depend on it for their businesses.)
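
As a sketch of the streaming-interface route, here are a mapper and a reducer that Hadoop Streaming could run over raw tick files to produce per-minute minimums and maximums; the CSV layout (symbol,timestamp,price) is an assumption:

# mapper.py -- emit "symbol,minute<TAB>price" for every input tick
import sys

for line in sys.stdin:
    symbol, ts, price = line.strip().split(",")
    minute = ts[:16]                     # e.g. "2010-07-08T14:30"
    print(f"{symbol},{minute}\t{price}")

# reducer.py -- input arrives sorted by key; print min and max per key
import sys

current_key, lo, hi = None, None, None
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    price = float(value)
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{lo}\t{hi}")
        current_key, lo, hi = key, price, price
    else:
        lo, hi = min(lo, price), max(hi, price)
if current_key is not None:
    print(f"{current_key}\t{lo}\t{hi}")

These would be submitted with something along the lines of: hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input ticks -output rollups, or the equivalent on Amazon's Elastic MapReduce.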

Effervescent answered 9/7, 2010 at 0:42 Comment(0)
