Optimizing File Caching and HTTP/2

Our site is considering making the switch to HTTP/2.

My understanding is that HTTP/2 renders optimization techniques like file concatenation obsolete, since a server using HTTP/2 just sends one request.

Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.

It probably depends on the size of a website, but how small should a website's files be if it's using HTTP/2 and wants to focus on caching?

In our case, our many individual JS and CSS files fall in the 1 KB to 180 KB range. jQuery and Bootstrap might be more. Cumulatively, a fresh download of a page on our site is usually less than 900 KB.

So I have a few questions:

Are these file sizes small enough to be cached by browsers?

If they are small enough to be cached, is it good to concatenate files anyway for users whose browsers don't support HTTP/2?

Would it hurt to have larger file sizes in this case AND use HTTP/2? This way, it would benefit users running either protocol, because a site could be optimized for both HTTP/1.1 and HTTP/2.

Camarata answered 23/2, 2016 at 21:36 Comment(2)
By the way, I think this isn't a programming question, so I'm marking it off-topic. It might be a great fit for webmasters.stackexchange.com though.Sac
I'm voting to close this question as off-topic because it's more of a webmasters.stackexchange.com question.Sac

Let's clarify a few things:

My understanding is that HTTP/2 renders optimization techniques like file concatenation obsolete, since a server using HTTP/2 just sends one request.

HTTP/2 renders optimisation techniques like file concatenation somewhat obsolete, since HTTP/2 allows many files to download in parallel across the same connection. Previously, in HTTP/1.1, the browser could request a file and then had to wait until that file was fully downloaded before it could request the next file. This led to workarounds like file concatenation (to reduce the number of files required) and multiple connections (a hack to allow downloads in parallel).
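
As a quick illustration of that parallelism, here is a minimal client-side sketch. It assumes Python with the third-party httpx library (installed as httpx[http2]); the domain and asset paths are placeholders, not real endpoints:

```python
# A minimal sketch of HTTP/2 multiplexing from the client side.
# Assumes the third-party httpx library (pip install "httpx[http2]");
# the domain and asset paths below are placeholders.
import asyncio
import httpx

ASSETS = ["/css/site.css", "/js/app.js", "/js/vendor.js"]  # hypothetical paths

async def fetch_all(base_url: str) -> None:
    # One connection, many requests in flight at once: under HTTP/2 these
    # are multiplexed as streams rather than queued as in HTTP/1.1.
    async with httpx.AsyncClient(http2=True, base_url=base_url) as client:
        responses = await asyncio.gather(*(client.get(path) for path in ASSETS))
        for r in responses:
            print(r.url, r.http_version, len(r.content), "bytes")

asyncio.run(fetch_all("https://example.com"))
```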

However, there's a counter-argument that there are still overheads with multiple files: requesting them, caching them, reading them from cache... etc. That overhead is much reduced in HTTP/2 but not gone completely. Additionally, gzipping one large text file works better than gzipping lots of smaller files separately. Personally, however, I think the downsides of concatenation outweigh these concerns, and I think it will die out once HTTP/2 is ubiquitous.
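
You can see the compression point for yourself with a rough sketch like the following, using only the Python standard library (the file names are hypothetical - substitute your own assets):

```python
# Rough illustration: gzip usually compresses one large file better than
# the same content split into many small files, since each separate gzip
# stream pays its own header and dictionary warm-up costs.
# Standard library only; the file list is hypothetical.
import gzip
from pathlib import Path

files = [Path("a.js"), Path("b.js"), Path("c.js")]  # substitute real files
blobs = [f.read_bytes() for f in files]

concatenated = len(gzip.compress(b"".join(blobs)))
separate = sum(len(gzip.compress(b)) for b in blobs)

print(f"concatenated then gzipped: {concatenated} bytes")
print(f"gzipped separately:        {separate} bytes")
```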

Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.

It probably depends on the size of a website, but how small should a website's files be if it's using HTTP/2 and wants to focus on caching?

File size has no bearing on whether a file is cached or not (unless we are talking about truly massive files, bigger than the cache itself). The reason splitting files into smaller chunks is better for caching is that if you make any changes, any file which has not been touched can still be served from the cache. If you have all your JavaScript (for example) in one big .js file and you change one line of code, then the whole file needs to be downloaded again - even if it was already in the cache.
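
A common way to exploit this - many small files plus long cache lifetimes - is content-hash fingerprinting, so an edited file gets a new URL while untouched files keep theirs. A rough sketch in standard-library Python, with hypothetical paths:

```python
# Sketch of content-hash fingerprinting: rename each asset after a hash of
# its bytes so that unchanged files keep their URLs (and stay cached) while
# edited files get brand-new URLs. Paths are hypothetical. The renamed
# files can then be served with a far-future Cache-Control header such as
# "max-age=31536000, immutable".
import hashlib
from pathlib import Path

def fingerprint(asset: Path) -> Path:
    digest = hashlib.sha256(asset.read_bytes()).hexdigest()[:8]
    renamed = asset.with_name(f"{asset.stem}.{digest}{asset.suffix}")
    asset.rename(renamed)  # e.g. app.js -> app.3f9a2c1b.js
    return renamed

for f in sorted(Path("static/js").glob("*.js")):  # hypothetical directory
    print(fingerprint(f))
```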

Image sprites have the same problem: a sprite map is great for reducing separate image downloads in HTTP/1.1, but the whole sprite file has to be downloaded again if you ever edit it - to add one extra image, for example. Not to mention that the whole thing is downloaded even by pages which use just one of those sprited images.

However, saying all that, there is a train of thought that says the benefit of long-term caching is overstated. See this article, and in particular the section on HTTP caching, which goes to show that most people's browser cache is smaller than you think, so it's unlikely your resources will be cached for very long. That's not to say caching is not important - but it's more useful for browsing around within a session than for the long term. So each visit to your site will likely download all your files again anyway - unless your visitors come very frequently, have a very big cache, or don't surf the web much.

is it good to concatenate files anyway for users whose browsers don't support HTTP/2?

Possibly. However, other than on Android, HTTP/2 browser support is actually very good, so it's likely most of your visitors are already HTTP/2-enabled.
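
If you want to check what your own site actually negotiates, a short sketch with the same httpx library works (the URL is a placeholder):

```python
# Check which protocol a server negotiates. Third-party httpx library
# (pip install "httpx[http2]"); the URL is a placeholder.
import httpx

with httpx.Client(http2=True) as client:
    r = client.get("https://example.com/")
    print(r.http_version)  # "HTTP/2" if negotiated, otherwise "HTTP/1.1"
```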

Saying that, there are no extra downsides to concatenating files under HTTP/2 that weren't already there under HTTP/1.1. OK, it could be argued that a number of small files could be downloaded in parallel over HTTP/2 whereas a larger file has to be downloaded as one request, but I don't buy that this slows things down much, if at all. I have no proof of this, but gut feel suggests the data still needs to be sent either way, so you either have a bandwidth problem or you don't. Additionally, the overhead of requesting many resources, although much reduced in HTTP/2, is still there. Latency is still the biggest problem for most users and sites - not bandwidth. Unless your resources are truly huge, I doubt you'd notice the difference between downloading one big resource in one go and the same data split into 10 little files downloaded in parallel over HTTP/2 (though you would in HTTP/1.1). Not to mention the gzipping issues discussed above.

So, in my opinion, there's no harm in keeping concatenation for a little while longer. At some point you'll need to make the call as to whether the downsides outweigh the benefits, given your user profile.

Would it hurt to have larger file sizes in this case AND use HTTP/2? This way, it would benefit users running either protocol, because a site could be optimized for both HTTP/1.1 and HTTP/2.

Absolutely wouldn't hurt at all. As mentioned above, there are (basically) no extra downsides to concatenating files under HTTP/2 that weren't already there under HTTP/1.1. It's just not that necessary under HTTP/2 anymore, and it has downsides (potentially reduced cache use, an extra build step, harder debugging since deployed code isn't the same as source code... etc.).
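
For what it's worth, the build step in question is often no more than something like the following sketch (standard-library Python, hypothetical paths) - which is exactly the machinery you get to delete once you stop bundling:

```python
# A naive concatenation "build step" of the kind HTTP/2 makes less
# necessary. Standard library only; the paths are hypothetical.
from pathlib import Path

sources = sorted(Path("src/js").glob("*.js"))  # hypothetical source dir
bundle = Path("dist/bundle.js")
bundle.parent.mkdir(parents=True, exist_ok=True)

with bundle.open("w", encoding="utf-8") as out:
    for src in sources:
        out.write(f"/* --- {src.name} --- */\n")  # marker to ease debugging
        out.write(src.read_text(encoding="utf-8") + "\n")

print(f"wrote {bundle} from {len(sources)} source files")
```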

Use HTTP/2 and you'll still see big benefits for any site - except the simplest sites, which will likely see no improvement but also no negatives. And, as older browsers can stick with HTTP/1.1, there are no downsides for them. When, or if, to stop implementing HTTP/1.1 performance tweaks like concatenation is a separate decision.

In fact, the only reason not to use HTTP/2 is that implementations are still fairly bleeding edge, so you might not be comfortable running your production website on it just yet.

**** Edit August 2016 ****

This post from an image-heavy, bandwidth-bound site has recently caused some interest in the HTTP/2 community as one of the first documented examples of HTTP/2 actually being slower than HTTP/1.1. This highlights the fact that HTTP/2 technology, and the understanding of it, are still new and will require some tweaking for some sites. There is no such thing as a free lunch, it seems! Well worth a read, though worth bearing in mind that this is an extreme example, and most sites are far more impacted, performance-wise, by HTTP/1.1's latency issues and connection limitations than by bandwidth issues.

Blender answered 27/3, 2016 at 22:8 Comment(6)
Thanks for clearing that up - this answer is very helpful! The sources I had read seemed to discuss caching strategy under HTTP/2 as entirely different. Really, it just depends on how often you alter files.Camarata
Yup. All caching issues revolve around how often you alter your files really.Blender
The corollary to "browser cache is smaller than you think" is that web sites are using way too much useless junk. Drop two thirds of the JavaScript your site is probably running and optimize the images. Really, five far-too-nosy "tracker" and "user optimization" scripts, and three separate ad networks? Not you in particular - just the generic all of you out there.Sac
Couldn't agree more!Blender
Whether the downsides outweigh the benefits or not depends on the specific case. No general statement can be made. +1 otherwise; in particular, the better compression due to bigger chunk size is significant.Forgiveness
Fair point @Forgiveness - I was reminded of this when reading a recent blog post, so I've updated my answer to include an extra paragraph on that.Blender
