According to the official Storage limits section of the document Azure subscription and service limits, quotas, and constraints, the following limits apply to your scenario and cannot be worked around:
- Maximum request rate per storage account: 20,000 requests per second
- Max egress:
- for general-purpose v2 and Blob storage accounts (all regions): 50 Gbps
- for general-purpose v1 storage accounts (US regions): 20 Gbps if RA-GRS/GRS is enabled, 30 Gbps for LRS/ZRS
- for general-purpose v1 storage accounts (non-US regions): 10 Gbps if RA-GRS/GRS is enabled, 15 Gbps for LRS/ZRS
- Target throughput for single blob: Up to 60 MiB per second, or up to 500 requests per second
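As a back-of-envelope check of how these limits interact, the sketch below estimates how many blobs could stream at the full 60 MiB/s per-blob target before the 50 Gbps account egress cap (general-purpose v2) becomes the bottleneck. The numbers come straight from the list above; the calculation itself is just unit conversion.

```python
# How many blobs can stream at the full per-blob throughput target
# before hitting the account-level egress cap?
EGRESS_GBPS = 50          # general-purpose v2 egress limit (all regions)
PER_BLOB_MIB_S = 60       # target throughput for a single blob

egress_bytes_per_s = EGRESS_GBPS * 1e9 / 8        # Gbps -> bytes/s
per_blob_bytes_per_s = PER_BLOB_MIB_S * 1024**2   # MiB/s -> bytes/s

max_parallel_blobs = int(egress_bytes_per_s // per_blob_bytes_per_s)
print(max_parallel_blobs)  # -> 99
```

In other words, roughly a hundred blobs downloading at full speed saturate the account's egress, so beyond that point adding parallelism only divides the same bandwidth.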
If you move the data programmatically, then when downloading to a local environment you have to consider, beyond your own network bandwidth and stability, keeping the number of concurrent requests per blob under 500 and the total request rate against the account under 20,000 per second. So high-concurrency control is the key point.
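One simple way to enforce such a cap is a semaphore that bounds the number of in-flight requests per blob. The sketch below is a minimal illustration, not a full client: `fetch_range` is a hypothetical stand-in for whatever ranged-read call you actually use, and a semaphore only bounds concurrency, not requests per second, so a real implementation would add time-based rate limiting on top.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

PER_BLOB_LIMIT = 400  # stay safely below the 500 req/s per-blob target
per_blob_slots = threading.Semaphore(PER_BLOB_LIMIT)

CHUNK = 4 * 1024 * 1024  # 4 MiB ranged reads

def fetch_range(blob_name, offset, length):
    # Placeholder for a real ranged read (e.g. an HTTP GET with a
    # Range header against the blob endpoint).
    return b"\x00" * length

def throttled_fetch(blob_name, offset, length):
    # Blocks whenever PER_BLOB_LIMIT requests are already in flight.
    with per_blob_slots:
        return fetch_range(blob_name, offset, length)

# Download 10 chunks of a hypothetical blob with bounded concurrency.
with ThreadPoolExecutor(max_workers=32) as pool:
    chunks = list(pool.map(
        lambda off: throttled_fetch("my-blob", off, CHUNK),
        range(0, 10 * CHUNK, CHUNK),
    ))
print(len(chunks))  # 10 chunks fetched
```

The same pattern extends to the account-wide 20,000 req/s ceiling: share a second, larger semaphore across all blob workers.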
If you are just moving data within Azure, or not doing it programmatically, the best approach is to use the official data transfer tools: AzCopy (for Windows or Linux) or Azure Data Factory. Then you do not need to worry about these limits at all; just wait for the transfer to finish.
If you have any concerns, please feel free to let me know.
> *max concurrent number of requests per blob not over 500* — how do you get this? – Gideon