Azure Table Storage Performance from Massively Parallel Threaded Reading
Short version: Can we read from dozens or hundreds of table partitions in a multi-threaded manner to increase performance by orders of magnitude?

Long version: We're working on a system that stores millions of rows in Azure Table Storage. We partition the data into small partitions of about 500 records each, which represents a day's worth of data for a unit.

Since Azure Table Storage doesn't have a "sum" feature, to pull a year's worth of data we either have to pre-cache the sums or compute them ourselves in an Azure web or worker role.

Assuming the following:

- Reading one partition doesn't affect the performance of reading another
- Reading a partition is bottlenecked by network speed and server retrieval time

We can then guess that if we wanted to quickly sum a lot of data on the fly (1 year = 365 partitions), we could use a massively parallel algorithm, and it should scale almost perfectly with the number of threads. For example, we could use the .NET parallel extensions with 50+ threads and get a huge performance boost.
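To illustrate the fan-out pattern being proposed: each partition read is one high-latency round trip, so with N workers the wall-clock time is roughly (latency × partitions ÷ workers). A minimal sketch, in Python rather than C# for brevity, where `fetch_partition` is a hypothetical stand-in that simulates the Azure round trip (the real code would use the Azure storage client and the .NET parallel extensions):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_partition(partition_key):
    """Stand-in for querying one table partition (~500 rows).

    Simulates the network round trip and returns a dummy
    per-partition sum instead of real row data.
    """
    time.sleep(0.05)  # simulated round-trip latency
    return 1          # pretend each partition sums to 1

# One partition per day per unit, as in the question (hypothetical keys).
partition_keys = [f"unit1-day{d:03d}" for d in range(365)]

# Workers spend almost all their time waiting on I/O, so 50 threads
# cut 365 sequential round trips down to ~8 overlapping batches.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=50) as pool:
    total = sum(pool.map(fetch_partition, partition_keys))
elapsed = time.monotonic() - start

print(total)                      # 365: all partitions summed
print(elapsed < 365 * 0.05)      # True: far faster than a sequential scan
```

Because the work is latency-bound rather than CPU-bound, the thread count can comfortably exceed the core count, which is exactly why this workload suits multi-threading.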

We're working on setting up some experiments, but I wanted to see if this has been done before. Since the .NET side is basically idle waiting on high-latency operations, this seems perfect for multi-threading.

Nip answered 7/10, 2010 at 2:31 Comment(2)
Do you have any comment on this 6 years later? – Weltanschauung
Yes, it's still totally a good idea, especially since the scalability targets have been going up over time. Take a look at this page to understand the limits: learn.microsoft.com/en-us/azure/storage/… – Nip
There are limits on the number of transactions that can be performed against a storage account, and against a particular partition or storage server, in a given time period (somewhere around 500 requests/second). So in that sense there is a reasonable limit to the number of requests you can execute in parallel (before it starts to look like a DoS attack).

Also, in the implementation, I would be wary of the concurrent connection limits imposed on the client, such as by System.Net.ServicePointManager. I'm not sure whether the Azure storage client is subject to those limits; they might require adjustment.
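On the client side, the .NET fix is typically to raise `ServicePointManager.DefaultConnectionLimit` before opening connections. To stay under the service-side transaction limits described above, it also helps to cap the number of in-flight requests regardless of how many worker threads exist. A minimal sketch of that cap using a semaphore (Python here as a language-agnostic illustration; `throttled_request` is a hypothetical stand-in for a storage call):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 20  # stay well under the per-account/partition limits
gate = threading.BoundedSemaphore(MAX_IN_FLIGHT)

in_flight = 0
peak = 0
lock = threading.Lock()

def throttled_request(i):
    """Stand-in for one storage request, gated by the semaphore."""
    global in_flight, peak
    with gate:  # blocks once MAX_IN_FLIGHT requests are outstanding
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.01)  # simulated storage round trip
        with lock:
            in_flight -= 1
    return i

# Even with 100 worker threads, at most 20 requests run concurrently.
with ThreadPoolExecutor(max_workers=100) as pool:
    list(pool.map(throttled_request, range(200)))

print(peak <= MAX_IN_FLIGHT)  # True: concurrency never exceeds the cap
```

Decoupling the thread count from the in-flight request count this way lets you tune the request rate to the service limits without restructuring the fan-out code.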

Lahdidah answered 7/10, 2010 at 3:3 Comment(2)
The limit of 500 req/s is per partition. The limit for an account is "a few thousand" per second. Using a small VM, I've noticed very little performance improvement from using more than 20 threads. – Aretha
Update so far: in my testing I was able to read 365,000 rows using 365 threads, and I got the data in an average of about 7 seconds. For 30,000 rows spread over 30 partitions using 30 threads, I was averaging 1.4 seconds. Huge win! – Nip
