Serving gzipped content directly — bad thing to do?
I have my website configured to serve static content using gzip compression, like so:

<link rel='stylesheet' href='http://cdn-domain.com/css/style.css.gzip?ver=0.9' type='text/css' media='all' />

I don't see any other websites doing anything similar. So the question is: what's wrong with this? Should I expect any shortcomings?

Specifically, as I understand it, most websites are configured to serve plain static files (.css, .js, etc.) by default, and gzipped versions (.css.gz, .js.gz, etc.) only if the request comes with an Accept-Encoding: gzip header. Why should they bother with that when all browsers support gzip just the same?
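
For illustration, the usual negotiation looks roughly like this, written out in Python's standard http.server (just a sketch; real sites do this in their web-server config, e.g. nginx's gzip_static, and the paths and the hardcoded Content-Type below are placeholders):

from http.server import BaseHTTPRequestHandler, HTTPServer

class NegotiatingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        path = 'static' + self.path  # e.g. static/css/style.css
        wants_gzip = 'gzip' in self.headers.get('Accept-Encoding', '')
        try:
            if wants_gzip:
                # Serve the precompressed sibling (style.css.gz) only
                # because the client advertised support for it.
                body = open(path + '.gz', 'rb').read()
            else:
                body = open(path, 'rb').read()
        except OSError:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header('Content-Type', 'text/css')  # simplified
        if wants_gzip:
            self.send_header('Content-Encoding', 'gzip')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8000), NegotiatingHandler).serve_forever()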

PS: I am not seeing any performance issues at all, because all the static content is gzipped prior to uploading it to the CDN, which then simply serves the gzipped files. Therefore, there's no stress/strain on my server.
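
The pre-compression step itself is trivial, roughly this (a sketch: the directory layout and my non-standard .gzip extension just mirror how my setup happens to look):

import gzip
import pathlib

for asset in pathlib.Path('static').rglob('*.css'):
    data = asset.read_bytes()
    # mtime=0 makes the output byte-identical across runs, which keeps
    # CDN and browser caches stable.
    asset.with_name(asset.name + '.gzip').write_bytes(
        gzip.compress(data, compresslevel=9, mtime=0)
    )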


Just in case it's helpful, here's the HTTP Response Header information for the gzipped CSS file:

Screenshot 1

And here's the same for the gzipped favicon.ico file:

Screenshot 2

Undistinguished answered 25/7, 2012 at 15:39 Comment(6)
PS: Possible duplicate, I know—but that question is years old, and the answers based around browser support aren't relevant anymore, IMHO.Undistinguished
Why wouldn't they be relevant any more? I still stand by my answer there: all modern browsers support receiving content gzipped for transfer. How the content is pre-processed on the server side is irrelevant. You are doing something smart here, but it's not really a new question. As long as your server is still sending the right headers despite the preprocessing, you will be perfectly fine. Browsers these days may even save you if you don't send all the right headers.Procuration
For reference, many sites do actually do this but most won't keep the .gzip suffix (which should technically be only .gz).Procuration
@MatthewScharley The thing is, I suspect the issue is something more than browser support; that would explain why this isn't widely adopted (though I don't know what exactly it could be). Yes, all browsers today support gzip very, very well, so no issues on the browser end, or on mobile devices. PS: Updated my question with some info.Undistinguished
Amazon is returning everything just fine, so you should be right. I can't think of any reason why any device wouldn't support Content-Encoding: gzip, it's a very highly used mechanism these days. As I already mentioned, preprocessing the files for gzip compression isn't a new idea (think svgz especially), but most hosts will be set up to handle it transparently without relying on a file name suffix as you've used. Are you actually having an issue? If so, could you highlight that? Currently I'm totally missing the question in your question.Procuration
@MatthewScharley Please check if my updated question makes it more clear. :)Undistinguished

Supporting Content-Encoding: gzip isn't a requirement of any current HTTP specification; that's why there is a trigger in the form of the request header.

In practice? If your audience is using a web browser and you are only worried about legitimate users, then the chance that anyone will actually be affected by only having preprocessed gzipped versions available is very, very slim to none. The Accept-Encoding trigger is a remnant of a bygone age: browsers these days should handle being force-fed gzipped content even if they don't request it, as long as you also give them the correct headers for the content.

It's important to realise that an HTTP request/response is a conversation, and that most of the headers in a request are just that: a request. For the most part, the server on the other end is under no obligation to honor any particular header, and as long as it returns a valid response that makes sense, the client should do its best to make sense of what was returned. That includes gunzipping the body whenever the response says gzip was used.
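
You can watch this play out with a few lines of Python against the (hypothetical) URL from the question: explicitly decline gzip in the request, and note that a well-behaved client still keys off the Content-Encoding that actually comes back, not off what it asked for:

import gzip
import urllib.request

req = urllib.request.Request(
    'http://cdn-domain.com/css/style.css.gzip?ver=0.9',
    headers={'Accept-Encoding': 'identity'},  # explicitly decline gzip
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()
    # The server sends gzip anyway, so honour the response header.
    if resp.headers.get('Content-Encoding') == 'gzip':
        body = gzip.decompress(body)
print(body[:100])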

If your target is machine consumption, however, then be a little wary. Some people still think it's a smart idea to write their own HTTP/SMTP/etc. parsers, even though the topic has been done to death in multiple libraries for pretty much every language out there. All the libraries should support gzip just fine, but hand-rolled parsers usually won't.
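
Concretely, the step a hand-rolled client tends to forget is the decode on the receiving side. Something like this (a gzip-only sketch with a hypothetical helper name, ignoring chunked transfer and other encodings) has to happen before the body is usable:

import gzip

def decode_body(headers: dict, raw: bytes) -> bytes:
    # Content decoding is driven by the *response* headers; libraries do
    # this automatically, raw-socket code has to do it itself.
    if headers.get('Content-Encoding', '').lower() == 'gzip':
        return gzip.decompress(raw)
    return raw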

Procuration answered 26/7, 2012 at 13:38 Comment(4)
What do you mean by machine consumption and hand-rolled "parsers" exactly? Can you please give an example? Mine is a blog by the way.Undistinguished
@AahanKrish: machine consumption: anything that isn't being displayed to a user directly. Screen scrapers and the like or something similar in function to a REST API. Parsing is the term used to describe taking some complex element (in this case, say a CSS or HTML file or a HTTP conversation) and turning it into a data structure or structures that a computer can understand directly. 'Hand-rolled' refers to code that a developer creates themselves, ie. not a library.Procuration
@AahanKrish: Also, see my edits to the second paragraph expanding a little on why this will work out for you :)Procuration
Other examples of machine consumption include RSS feed readers, search engines, various bots and scrapers (eg HTML validators, web archives) and so on. The more "hand-made" ones of these are the ones that are less likely to accept gzipped content. You should be all right with search engines, for example. But using this on your RSS feeds might be iffy if you want joe blogg's mini RSS reader to support it.Gaitskell
