I think you could accomplish this in a couple ways.
Option 1: Use Apache's PageSpeed module (aka mod_pagespeed):
The PageSpeed Modules optimize your images to minimize their size and thus reduce their load time. They remove non-visible image information and apply high-efficiency compression techniques. This can result in a data saving of 50% or more.
Using PageSpeed Modules, you can focus on the content of your site, knowing your visitors will receive images in the best format and dimensions for their device and browser while using minimum bandwidth.
https://www.modpagespeed.com/doc/filter-image-optimize
Info on how to install it can be found in the official docs.
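Once the module is installed and loaded, turning it (and its image filters) on is just a few directives in your Apache config. A rough sketch, using directive and filter names from the mod_pagespeed docs (the exact config file location varies by distro):

# e.g. in /etc/apache2/mods-available/pagespeed.conf (path varies by distro)
ModPagespeed on
# rewrite_images is already in the default ("core") filter set; the others
# opt in to more aggressive image optimization.
ModPagespeedEnableFilters rewrite_images,recompress_images,convert_jpeg_to_webp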
Option 2: Use a custom image compression service and reverse-proxy image requests to it.
If for some reason the Apache PageSpeed module won't work, you could set up a reverse-proxying cache in nginx. On a cache MISS, nginx would internally request the image from your compression service, return the compressed image to the client, and save it to disk so the next request is a HIT.
Details on setup follow.
You would first create a location { ... } block with a regular expression that nginx will match when it gets a request for one of your raw images.
server {
    # your standard nginx server stuff here, then:

    # When reverse proxying, nginx needs a DNS resolver defined.
    # You may be able to skip this if you use an IP address instead of
    # example.com below.
    resolver 1.1.1.1 1.0.0.1 valid=300s;
    resolver_timeout 2s;

    location ~ ^/your/image/originals/(.*) {
        proxy_ssl_server_name on;
        proxy_pass https://www.example.com/your-custom-compression-service/$1;

        # Plus we need to define some extra caching settings, which I put
        # in another file for modularity / reuse if later needed.
        include "/absolute/path/to/nginx-includes/location-cache.config";
    }
}
in your "location-cache.config" (name it anything), you name a cache, let's call it "images_cache":
proxy_cache images_cache;
proxy_cache_valid 200 1d;  # Cache HTTP 200 responses for up to 1 day.

# Keep serving a stale cache entry during the short time it takes to refresh
# it (so we don't let a burst of requests through at the instant the cache
# entry expires).
# See: https://www.nginx.com/blog/mitigating-thundering-herd-problem-pbs-nginx/
proxy_cache_use_stale error timeout invalid_header updating
                      http_500 http_502 http_503 http_504;

# "Allows starting a background subrequest to update an expired cache item,
# while a stale cached response is returned to the client."
# See: https://www.nginx.com/blog/nginx-caching-guide/#proxy_cache_background_update
proxy_cache_background_update on;
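If you want to confirm the cache is actually being hit, you can also expose nginx's built-in $upstream_cache_status variable as a response header in the same include file:

# Optional: report HIT / MISS / EXPIRED / etc. to the client, for debugging.
add_header X-Cache-Status $upstream_cache_status;

Requesting the same image twice (e.g. with curl -I) should then show X-Cache-Status: MISS followed by X-Cache-Status: HIT.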
Finally, in the http { ... } block, you set up the "images_cache" cache we named above:
proxy_cache_path
    /tmp/nginx-cache
    # Use a two-level cache directory structure, because the default
    # (a single directory) is said to lead to potential performance issues.
    # Why that is the default, then... your guess is as good as mine.
    levels=1:2
    # A 10 MB zone keeps roughly 80,000 keys in memory. This helps nginx
    # quickly determine whether a request is a HIT or a MISS without going
    # to disk.
    keys_zone=images_cache:10m
    # If an entry hasn't been used in quite a while (60 days), evict it.
    inactive=60d
    # Limit on the total size of all cached files.
    max_size=100m;
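For orientation, the pieces above end up nested roughly like this (same placeholder paths and names as before):

http {
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=images_cache:10m
                     inactive=60d max_size=100m;

    server {
        # the resolver, location ~ ^/your/image/originals/(.*), and
        # location-cache.config include shown earlier go here
    }
}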
For your custom image compression service (at example.com above), you'd probably want to write a little service (in Node, Python, Rust, or whatever) that grabs the URL path passed to it, reads the original image from disk (in .../images/originals or wherever), compresses it, and returns it. I'll leave the details to the reader :-)
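That said, if Python happens to be your pick, a minimal sketch using Flask and Pillow (my arbitrary choices, not a requirement; the originals directory below is a hypothetical placeholder) could look roughly like this:

from io import BytesIO
from pathlib import Path

from flask import Flask, abort, send_file
from PIL import Image

app = Flask(__name__)
ORIGINALS = Path("/var/www/images/originals")  # hypothetical; point at your originals

@app.route("/your-custom-compression-service/<path:name>")
def compress(name):
    src = (ORIGINALS / name).resolve()
    # Reject path-traversal attempts and missing files.
    if ORIGINALS.resolve() not in src.parents or not src.is_file():
        abort(404)
    buf = BytesIO()
    # Re-encode as JPEG at reduced quality; a real service would branch on
    # the original format (PNG, WebP, ...) instead of forcing JPEG.
    Image.open(src).convert("RGB").save(buf, "JPEG", quality=70, optimize=True)
    buf.seek(0)
    return send_file(buf, mimetype="image/jpeg")

Point the proxy_pass URL above at wherever this runs (behind gunicorn or similar), and nginx will cache whatever it returns.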