Very large HTTP request vs many small requests

I need to send a 2D array (as JSON) from the server to the client. It is around 400x400 entries, with each entry around 4 characters of text, so roughly 640 KB of data in total.

Which of the following extreme approaches is better?

  1. I make one large HTTP request for all the data in one go.
  2. I make 400 requests, each asking for a single row (around 1.6 KB).

I believe the optimal approach is somewhere in the middle. Could anyone give me an idea of the optimal single-request size for this data?

Thanks.

Twirl answered 29/6, 2010 at 6:36 Comment(0)

Unless you are dealing with slow (very slow by today's standards) connections and really need incremental updates, do it in one request.

That gives you better efficiency for compressing the response, and avoids the overhead of the extra HTTP requests and response headers.
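
As a rough illustration, here is a minimal client-side sketch of the single-request approach in TypeScript. The /api/grid endpoint is an assumption for this example; the point is that the browser advertises gzip via Accept-Encoding on its own, and one large, repetitive JSON body compresses very well:

    // Single-request variant: one round trip, one parse.
    // "/api/grid" is a hypothetical endpoint returning the full
    // 400x400 array as one JSON body (the browser negotiates gzip
    // compression automatically via Accept-Encoding).
    async function fetchGrid(): Promise<string[][]> {
      const response = await fetch("/api/grid");
      if (!response.ok) {
        throw new Error(`Grid request failed: ${response.status}`);
      }
      return response.json();
    }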

Azpurua answered 29/6, 2010 at 6:40 Comment(5)
+1 - and you avoid the overhead of the round trips. Even at 20 ms, 400 requests would add 8000 ms = 8 seconds of overhead. At 80 ms (far away), this would be 32 seconds wasted. – Lenzi
Thanks a lot to both David & Tom. That was really useful. :) – Twirl
Well, @Lenzi, you didn't factor in parallel requests! It's only 20 ms at the beginning and at the end on a perfect connection, if there's no limit on concurrent connections :) Correct me if I'm wrong, but it's only 400 x (time taken to process headers) and not 400 x RTT. – Excitor
@ZekeDran — Browsers impose a limit on the number of concurrent connections. – Azpurua
Assuming it's six, the total is reduced by a factor of 6, which is close to 1.33 seconds; hence, if there's no progressive processing, those 1.33 s would still be wasted! – Excitor
J
91

A couple of considerations when choosing one big request vs. several small ones:

  • In the single-request case, you can't do progressive data processing as the data arrives; you have to wait for the full response before you can do anything. If it fails, you have to start everything from scratch.
  • In the multiple-requests case, you can process the data progressively. However, you now have to consider the potential for multiple failures and how to recover from them.
  • Multiple requests incur per-request overhead. This is additional bandwidth your app will be consuming.
  • Some HTTP agents limit the number of concurrent requests to the same server, and you might need extra logic to work around that.
  • Response compression will work better in the single-request case.
  • Multiple requests won't require you to allocate the full memory for your data at once. Granted, 640 KB is not that big a chunk of memory, so this might not be a big consideration for you, depending on how often you allocate it.
  • If the process is terminated early (a Cancel button, the app being killed, or the browser navigating away from your page), the single request will still download the full response, whereas in the multiple-requests case any request your code hasn't started yet will never be executed.

Honestly, I wouldn't worry much about the last two and would base the choice on 1) whether progressive data processing is important; and 2) what your app's tolerance is for failures and partial data.
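
For comparison, here is a rough sketch of the chunked variant (the /api/grid/rows/:index endpoint, the concurrency value, and the renderRow callback are all assumptions for illustration). It processes rows as they arrive, caps the number of in-flight requests, and retries individual failed rows rather than restarting everything:

    // Fetch one row, retrying a couple of times before giving up.
    async function fetchRow(index: number, retries = 2): Promise<string[]> {
      for (let attempt = 0; ; attempt++) {
        const response = await fetch(`/api/grid/rows/${index}`);
        if (response.ok) return response.json();
        if (attempt >= retries) {
          throw new Error(`Row ${index} failed: ${response.status}`);
        }
      }
    }

    // Pull rows with a fixed number of concurrent "workers" and hand
    // each row to the caller as soon as it arrives.
    async function fetchGridInChunks(
      rowCount: number,
      concurrency: number,
      onRow: (index: number, row: string[]) => void,
    ): Promise<void> {
      let next = 0;
      const workers = Array.from({ length: concurrency }, async () => {
        while (next < rowCount) {
          const index = next++; // safe: JS is single-threaded between awaits
          onRow(index, await fetchRow(index));
        }
      });
      await Promise.all(workers);
    }

    // Usage: 400 rows, 6 at a time (a typical per-host connection limit);
    // renderRow is a hypothetical progressive-rendering callback.
    // fetchGridInChunks(400, 6, (i, row) => renderRow(i, row));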

Joan answered 29/6, 2010 at 6:49 Comment(2)
+1 I find this a much better answer than the currently accepted one for such a case, since it provides a good comparison of the two options rather than a direct answer. – Certain
Consider caching and cache-busting too. With a single cached request, any change means downloading the whole large payload again; split into multiple requests, a change only requires re-downloading the affected chunk. – Despotism

And keep in mind that servers may have vulnerabilities when handling very large requests.
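
For illustration, many servers let you cap accepted body sizes as one common mitigation; with Express, for example, the JSON body parser can reject oversized payloads outright (the 1 MB figure here is an arbitrary choice for this sketch):

    import express from "express";

    const app = express();
    // Requests with bodies larger than 1 MB are rejected with a 413.
    app.use(express.json({ limit: "1mb" }));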

Pelerine answered 27/12, 2013 at 20:19 Comment(0)
