How to implement image compression on-the-fly with Nginx?

Problem to solve:

I'm on a CentOS platform. I would like to have static image assets (jpg/gif/png) compressed for optimized web serving (without resizing), while keeping the originals.

For example,

A request sent to http://server-A/images/image.jpg will be compressed on-the-fly (and cached) with a pre-configured lossless/lossy parameter.

I would like to achieve an effect similar to Cloudflare's Polish feature, but on my own web server.

What are the tools that can be used for such integration?

An alternative thought:

Is there a way to watch the path /originals/ for changes and, when one occurs, compress the image offline and output it to the /compressed/ path?
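For the offline alternative, a minimal sketch: on CentOS you could use inotifywait (from inotify-tools) to react to changes, or simply poll with a cron job. The scan below is a stdlib-only Python sketch; the /originals/, /compressed/ paths and the jpegoptim/optipng CLI tools are assumptions to adapt to your setup.

```python
import os

def files_needing_compression(src_dir, dst_dir,
                              exts=(".jpg", ".jpeg", ".gif", ".png")):
    """Return paths (relative to src_dir) whose compressed copy in
    dst_dir is missing or older than the original (mtime comparison)."""
    stale = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            if not name.lower().endswith(exts):
                continue
            src = os.path.join(root, name)
            rel = os.path.relpath(src, src_dir)
            dst = os.path.join(dst_dir, rel)
            if not os.path.exists(dst) or \
                    os.path.getmtime(dst) < os.path.getmtime(src):
                stale.append(rel)
    return stale

# A cron job (or a loop with time.sleep) could then copy each stale file
# to /compressed/ and run a CLI optimizer on the copy, e.g.:
#   shutil.copy2(src, dst)
#   subprocess.run(["jpegoptim", "--max=85", dst], check=True)
# (jpegoptim/optipng are assumed to be installed, e.g. via yum.)
```

This keeps the originals untouched and only recompresses files that actually changed since the last run.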

Coauthor answered 9/2, 2017 at 20:51

I think you could accomplish this in a couple ways.

Option 1: Use the PageSpeed module (mod_pagespeed for Apache; an Nginx port, ngx_pagespeed, also exists):

The PageSpeed Modules optimize your images to minimize their size and thus reduce their load time. They remove non-visible image information and apply high-efficiency compression techniques. This can result in a data saving of 50% or more.

Using PageSpeed Modules, you can focus on the content of your site, knowing your visitors will receive images in the best format and dimensions for their device and browser while using minimum bandwidth.

https://www.modpagespeed.com/doc/filter-image-optimize

Info on how to install it can be found in the official docs.

Option 2: Use a custom image compression service, and reverse-proxy image requests to it.

If the PageSpeed module won't work for some reason, you could set up a reverse-proxy cache. Nginx checks the cache first and gets a MISS; it then internally requests the image from your compression service and returns the compressed image, while also saving it to disk so the next request will HIT the cache.

Details on setup follow.

You would first create a location { ... } block with a regular expression that nginx will match on when it gets a request for your raw image.

server { 
  # your standard nginx server stuff here, then:

  # when reverse proxying, nginx needs a DNS resolver defined.
  # You may be able to skip this if you use an IP address instead of
  # example.com below.
  resolver               1.1.1.1 1.0.0.1 valid=300s;
  resolver_timeout       2s;
  
  location ~ ^/your/image/originals/(.*) {
       proxy_ssl_server_name on;
       proxy_pass https://www.example.com/your-custom-compression-service/$1;

       # We also need to define some caching directives, which I put
       # in another file for modularity and reuse later if needed.
       include "/absolute/path/to/nginx-includes/location-cache.config";
    }
}

In your "location-cache.config" (name it whatever you like), you name a cache; let's call it "images_cache":

proxy_cache            images_cache;
proxy_cache_valid      200 1d; # Cache HTTP 200 responses for up to 1 day.

# Keep serving a stale cache entry during the short time it takes to
# refresh it (so the backend doesn't get hit by a burst of requests at
# the instant the cache becomes invalid).
# See: https://www.nginx.com/blog/mitigating-thundering-herd-problem-pbs-nginx/
proxy_cache_use_stale  error timeout invalid_header updating
                        http_500 http_502 http_503 http_504;

# "Allows starting a background subrequest to update an expired cache item,
# while a stale cached response is returned to the client."
# See: https://www.nginx.com/blog/nginx-caching-guide/#proxy_cache_background_update
proxy_cache_background_update on;

Finally, in the http {...} block, you set up the "images_cache" cache we named above:

proxy_cache_path
    /tmp/nginx-cache

    # Use a two-level cache directory structure, because the default
    # (a single flat directory) can perform poorly on some filesystems
    # once it holds many files.
    levels=1:2

    # A 10MB zone keeps ~80,000 keys in memory. This helps quickly determine
    # if a request is a HIT or a MISS without having to go to disk.
    keys_zone=images_cache:10m

    # If something hasn't been used in quite a while (60 days), evict it.
    inactive=60d

    # Limit on total size of all cached files.
    max_size=100m;

In your custom image compression service (at example.com above), you'd write a small service (in Node, Python, Rust, or whatever) that takes the URL passed to it, reads the original file from disk (in .../images/originals or wherever), compresses it, and returns it. I'll leave that to the reader :-)

Thaothapa answered 22/3, 2021 at 23:28

A .jpg image is already a compressed, binary format. There is nothing here nginx can do for you.

If I understand your question correctly, you want to reduce the size of an image while keeping the same quality, which requires either a better compression algorithm or accepting an image of lower quality.

If I understand Cloudflare's approach correctly, for lossless they just strip away the image's metadata (Exif), which you could also achieve, though not with nginx but in your asset pipeline.
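Cloudflare's exact implementation isn't public, but the lossless idea, dropping the Exif (APP1) segment from a JPEG, can be sketched with nothing but the Python standard library:

```python
import struct

def strip_exif(jpeg_bytes):
    """Remove APP1 (Exif/XMP) segments from a JPEG byte string.

    Walks the segment markers up to Start-of-Scan (SOS) and drops any
    APP1 segment; everything from SOS onward is copied verbatim. A sketch
    only: it assumes well-formed input and no length-less markers before
    SOS.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows
            out += jpeg_bytes[i:]
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:  # keep everything except APP1
            out += segment
        i += 2 + length
    return bytes(out)
```

In practice you'd more likely run an existing tool (exiftool, jpegoptim --strip-all) over the assets, but this shows why the operation is lossless: the compressed pixel data after SOS is untouched.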

The second approach, lossy, uses a better image compression algorithm, which could also be deployed in your asset pipeline. See tools like TinyPNG.

Hope it helps, G

Lallans answered 20/9, 2018 at 16:9