We have 10,000+ images in an Amazon S3 bucket. How can I set the Expires header on all of them in one go?
Just a heads up that I found a great solution using the AWS CLI:
aws s3 cp s3://bucketname/optional_path s3://bucketname/optional_path --recursive --acl public-read --metadata-directive REPLACE --cache-control max-age=2592000
This sets Cache-Control to a max-age of 30 days. Note that the --metadata-directive option lets you either copy or replace the previous header data. Since AWS automatically sets the correct Content-Type metadata for each media type, and I had some bad headers, I simply chose to overwrite everything.
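To sanity-check where the 2592000 value comes from, here is a quick sketch (not specific to AWS) converting 30 days into the seconds that Cache-Control: max-age expects:

```python
from datetime import timedelta

# 30 days expressed in seconds, the unit Cache-Control: max-age expects
max_age = int(timedelta(days=30).total_seconds())
print(f"Cache-Control: max-age={max_age}")  # prints max-age=2592000
```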
Note that with cp you need to repeat the path twice, since they are the source and destination. –
With Cache-Control set, do we still need Expires? Services like Pingdom and GTmetrix seem to balk at a missing Expires header even if Cache-Control is present. –
@Stalder You can make bulk changes to bucket objects with third-party apps that use the S3 API. Those apps will not set the headers in a single request, but they will automate the 10,000+ required requests.
The one I currently use is Cloudberry Explorer, a freeware utility for interacting with your S3 buckets. In this tool I can select multiple files and specify HTTP headers that will be applied to all of them.
An alternative would be to develop your own script or tool using the S3 API libraries.
An alternative solution is to add the response-expires parameter to your URL. It sets the Expires header of the response.
See the Request Parameters section in http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html for more detail.
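As a quick sketch (the bucket and key names here are made up), the override is just a query-string parameter appended to the object's GET URL:

```python
from urllib.parse import urlencode

def with_response_expires(object_url, http_date):
    """Append S3's response-expires override to an object URL."""
    return object_url + "?" + urlencode({"response-expires": http_date})

url = with_response_expires(
    "https://bucketname.s3.amazonaws.com/images/photo.jpg",  # hypothetical object
    "Thu, 01 Jan 2026 00:00:00 GMT",
)
print(url)
```

Keep in mind that, per the S3 docs, these response-* overrides only take effect on signed (authenticated) requests, so they won't change what anonymous visitors see.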
- Select the folder
- From the top menu, choose More
- Select Change Metadata
- Add the key Expires
- Add the value 2592000 (for example)
What about access plus 1 year? If we set the year etc. manually, we'll have to keep editing the Expires header by hand! –
@Stalder Cyberduck will edit headers as well.
- Select all the items
- Press Command-I (Get Info)
- It offers a GUI to edit various headers, with presets built in.
Just processed 6000 images in one bucket without a hitch.
Pretty sure it's not possible to do this in a single request. Instead you'll have to make 10,000 PUT requests, one for each key, with the new headers/metadata you want, along with the x-amz-copy-source header pointing to the same key (so that you don't need to re-upload the object data). The link I provided goes into more detail on the PUT-copy operation, but it's pretty much the way to change object metadata on S3.
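A sketch of that PUT-copy loop in Python with boto3 (the bucket name, prefix, and header value are assumptions, and actually running it requires AWS credentials):

```python
def copy_in_place_kwargs(bucket, key, cache_control="max-age=2592000"):
    """Build copy_object arguments that rewrite an object's headers in place."""
    return {
        "Bucket": bucket,
        "Key": key,
        # Copy the object onto itself so the data isn't re-uploaded...
        "CopySource": {"Bucket": bucket, "Key": key},
        # ...but replace the stored metadata/headers instead of copying them.
        "MetadataDirective": "REPLACE",
        "CacheControl": cache_control,
    }

def update_all_headers(bucket, prefix=""):
    import boto3  # imported lazily; needs AWS credentials to actually run
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            s3.copy_object(**copy_in_place_kwargs(bucket, obj["Key"]))
```

Called as e.g. update_all_headers("bucketname"), this issues one in-place copy per key, which is exactly the 10,000+ requests described above, just automated.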