Optimal buffer size for response stream of HttpWebResponse

What's the optimal buffer size to use with a stream from HttpWebResponse.GetResponseStream()?

Online examples vary from 256 bytes to as much as 5 KB. What gives? I guess buffer sizes might be situational. If so, which situations call for which buffer size?
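
For reference, the buffer in question is the one passed to Stream.Read in a download loop along these lines (a minimal sketch; the URL, the file name, and the 4096 figure are placeholders):

    using System.IO;
    using System.Net;

    class Download
    {
        static void Main()
        {
            // Placeholder URL, purely for illustration.
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/file.bin");

            using (var response = request.GetResponse())
            using (Stream stream = response.GetResponseStream())
            using (var output = File.Create("file.bin"))
            {
                byte[] buffer = new byte[4096]; // the size in question
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read);
                }
            }
        }
    }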

Thanks.

Flapdoodle answered 23/4, 2009 at 4:28 Comment(0)

Really, it doesn't matter very much.

Sure, if you use really small buffers, you may have to make a few extra calls down through the layers to get the bytes (though the stream is likely doing at least some buffering of its own -- I don't know what its defaults are). And sure, if you use really big buffers, you'll waste some memory and introduce some fragmentation. Since you're obviously doing IO here, any time you gain by tweaking the buffers is going to be dominated by the IO time.

As a general rule, I go with a power of two between 2048 (2k) and 8192 (8k). Just make sure you know what you're doing if you go with a buffer equal to or larger than 85,000 bytes (it's then a "large object" and subject to different GC rules).

In fact, more important than the buffer size is how long you hold it. For objects outside of the large object heap, the GC is very good at dealing with very short-lived objects (Gen 0 collections are fast), or very long-lived objects (Gen 2). Objects that live long enough to get to Gen 1 or 2 before being freed are comparatively more costly, and usually much more worth your time worrying about than how big the buffer is.
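
To make that concrete, a copy helper along these lines would follow both points above: the buffer is a power of two well below the 85,000-byte mark, and it is allocated and dropped inside the method, so it never lives past Gen 0 (sketch only):

    using System.IO;

    static class StreamCopy
    {
        // 8 KB: a power of two, comfortably below the 85,000-byte
        // large-object-heap threshold mentioned above.
        private const int BufferSize = 8192;

        public static void Copy(Stream source, Stream destination)
        {
            // Allocated per call and unreachable as soon as the method
            // returns, so it dies cheaply in a Gen 0 collection.
            byte[] buffer = new byte[BufferSize];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                destination.Write(buffer, 0, read);
            }
        }
    }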

One final note: if you think you have a performance issue because of the size of buffers you are using, test it. It's unlikely, but who knows, maybe you have an odd confluence of OS version, network hardware, and driver release that has some odd issue with certain-sized buffers.
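
A rough way to do that is just to time the same download with a few different sizes (sketch only; the URL is a placeholder, and you'd want several runs per size to average out network noise):

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Net;

    class BufferSizeTest
    {
        static void Main()
        {
            foreach (int size in new[] { 1024, 4096, 8192, 32768 })
            {
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/large-file.bin");
                var watch = Stopwatch.StartNew();
                long total = 0;

                using (var response = request.GetResponse())
                using (Stream stream = response.GetResponseStream())
                {
                    byte[] buffer = new byte[size];
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        total += read;
                    }
                }

                watch.Stop();
                Console.WriteLine("{0,6}-byte buffer: {1} bytes in {2} ms",
                                  size, total, watch.ElapsedMilliseconds);
            }
        }
    }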

Selfabsorbed answered 23/4, 2009 at 4:44 Comment(1)
Very useful post, but I think you meant 8192? Powers of 2 -> 2048, 4096, 8192 (8k). – Copier

My anecdotal experience has been that it really does depend on what you are doing, but typically anything in the range of 1024-4096 bytes (1-4 KB, i.e. powers of two) would give me comparable performance (with 4 KB being the "best" number I've seen).

Basically, you want a buffer large enough that you are not needlessly reading data from the stream in tiny pieces, but not so large that you hit diminishing returns. If your buffer is too big (~MBs), then you will increase your memory cache misses, which can actually start to decrease your performance. Of course, this varies a lot based on the actual H/W (bus speed, cache size, etc.), but I've seen cases where a 4 MB buffer was slower than a 4 KB buffer (both cases had long lifetimes, so GC was not an issue).

As Jonathan notes, test your current implementation before trying premature optimizations.

Nonprofit answered 23/4, 2009 at 5:6 Comment(0)

Actually, I ran into an issue when the buffer size was too small. I have tested and VERIFIED that the buffer size should not be set to a small value. In my example I set it to 2048 and the download became VERY SLOW compared to Firefox's (Firefox was not using download segmentation either, the same as mine).

After I set it to a big size, 409600, the download was MUCH FASTER. I think the extra calls add overhead that makes the download slow. Perhaps at the network level the incoming data exceeds your buffer size, so TCP needs to ask for the packet to be resent? (Just a guess, as I don't know exactly how TCP works.) In any case, a small buffer size definitely slowed down my download. I tested it by comparing Firefox's default download (without add-ons or segmentation) against my class, and the two were far apart.

Now it is much faster: every time it loops it reads about 200,000 bytes (200 KB), as the connection here is quite fast. But once I run two threads it gets much slower, probably because the connection has to be shared with the other thread.
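
One way to experiment with this (a sketch, not the code from the test above; the URL and sizes are placeholders) is to wrap the response stream in a BufferedStream, which asks the underlying stream for up to its buffer size on each refill even if the loop's own buffer stays small:

    using System.IO;
    using System.Net;

    class BufferedDownload
    {
        static void Main()
        {
            // Placeholder URL, for illustration only.
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/big-file.zip");

            using (var response = request.GetResponse())
            using (Stream raw = response.GetResponseStream())
            // 400 KB read-ahead buffer between the network stream and the loop.
            using (var buffered = new BufferedStream(raw, 409600))
            using (var output = File.Create("big-file.zip"))
            {
                byte[] chunk = new byte[2048]; // small working buffer, e.g. for progress updates
                int read;
                while ((read = buffered.Read(chunk, 0, chunk.Length)) > 0)
                {
                    output.Write(chunk, 0, read);
                }
            }
        }
    }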

Bismuthous answered 13/12, 2010 at 4:18 Comment(1)
The same happened to me when uploading a big file (2 GB): a bigger buffer made it much faster. – Robers
