Amazon S3 ACL for read-only and write-once access
I'm developing a web application and I currently have the following ACL assigned to the AWS account it uses to access its data:

{
  "Statement": [
    {
      "Sid": "xxxxxxxxx", // don't know if this is supposed to be confidential
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::cdn.crayze.com/*"
      ]
    }
  ]
}

However I'd like to make this a bit more restrictive so that if our AWS credentials were ever compromised, an attacker could not destroy any data.

From the documentation, it looks like I want to allow just the following actions: s3:GetObject and s3:PutObject, but I specifically want the account to only be able to create objects that don't exist already - i.e. a PUT request on an existing object should be denied. Is this possible?
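For reference, a read/write-only policy (same Resource as in the question; the Sid is arbitrary) might look like the sketch below. Note that S3 has no built-in policy condition that distinguishes creating a new object from overwriting an existing one, so granting s3:PutObject alone cannot express "write once":

```json
{
  "Statement": [
    {
      "Sid": "RestrictToReadWrite",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::cdn.crayze.com/*"
      ]
    }
  ]
}
```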

Realize answered 14/5, 2012 at 23:58 Comment(2)
Didn't know about language tags! Where are those listed? – Realize
I think the reason it doesn't support this is that S3 is sort of eventually-consistent, so there's no authoritative "object doesn't exist" semantics. – Mcclees
This is not possible in Amazon S3 in the way you probably envision it; however, you can work around the limitation by Using Versioning, which is a means of keeping multiple variants of an object in the same bucket and was developed with use cases like this in mind:

You might enable versioning to prevent objects from being deleted or overwritten by mistake, or to archive objects so that you can retrieve previous versions of them.

There are a couple of related FAQs as well, for example:

  • What is Versioning? - Versioning allows you to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.

  • Why should I use Versioning? - Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use Versioning for data retention and archiving. [emphasis mine]

  • How does Versioning protect me from accidental deletion of my objects? - When a user performs a DELETE operation on an object, subsequent default requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an Amazon S3 bucket can permanently delete a version. [emphasis mine]

If you are really concerned about the AWS credentials of the bucket owner (who may of course be different from the accessing users), you can take this one step further still; see How can I ensure maximum protection of my preserved versions?:

Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security. [...] If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession. [...]
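Enabling versioning (and, optionally, MFA Delete) is a one-time bucket setting. A sketch with the AWS CLI, using the bucket name from the question and a hypothetical MFA device ARN:

```shell
# Enable versioning on the bucket (bucket name taken from the question):
aws s3api put-bucket-versioning --bucket cdn.crayze.com \
    --versioning-configuration Status=Enabled

# Optionally also require MFA for permanent version deletion. This must
# be run as the root account, with your real MFA device ARN and a
# current six-digit token code:
aws s3api put-bucket-versioning --bucket cdn.crayze.com \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
```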

Amphibrach answered 15/5, 2012 at 1:4 Comment(4)
It is unfortunate that this is the only solution available for a very common and obvious backup requirement ("write new only"). If you use S3 versioning, it precludes using S3's lifecycle management policies. So now you are forced to choose between having solid backup security, or having a convenient way to remove old backups. I don't think it's too much to expect both. – Cardiff
I use both versioning and the lifecycle system within the same bucket quite often where it is needed - using one does not preclude the other. From the description of versioning within the S3 interface: You can use Lifecycle rules to manage all versions of your objects as well as their associated costs. Lifecycle rules enable you to automatically archive your objects to the Glacier Storage Class and/or remove them after a specified time period. – Lorna
Sounds good. Is it possible for an attacker to disable versioning? Or does it not matter, because they wouldn't be able to delete the already-versioned objects anyway? – Biggin
"Is it possible for an attacker to disable versioning". If an attacker has the "PutBucketVersioning" permission, they can disable versioning, which stops the creation of versions going forward, but does not delete previously created versions. But whatever has write-only access to the bucket probably shouldn't have "PutBucketVersioning" access. – Squawk
If it is accidental overwrites you are trying to avoid, and your business requirements allow a short window of inconsistency, you can do the rollback in a Lambda function:

  1. Make it a policy that no new object may reuse an existing name; most of the time this will not happen. To enforce it:
  2. Listen for S3:PutObject events in an AWS Lambda function.
  3. When the event is fired, check whether more than one version is present.
  4. If there is more than one version present, delete all but the one you want to keep.
  5. Notify the uploader what happened (it's useful to have the original uploader in x-amz-meta-* of the object. More info here).
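The rollback in steps 3–5 can be sketched as a pure function. The version records are assumed to have the "VersionId"/"LastModified" shape that boto3's list_object_versions returns; to undo an overwrite you keep the earliest version (the original upload) and delete the rest:

```python
def versions_to_delete(versions):
    """Given the version records for one key (dicts with "VersionId"
    and "LastModified", as returned by S3's ListObjectVersions), keep
    only the oldest version -- the original upload -- and return the
    version IDs that should be deleted.

    In a real Lambda you would fetch `versions` with a boto3 client's
    list_object_versions and remove each ID with delete_object.
    """
    if len(versions) <= 1:
        return []  # only the original exists; nothing to roll back
    ordered = sorted(versions, key=lambda v: v["LastModified"])
    # Everything newer than the original is an overwrite to undo.
    return [v["VersionId"] for v in ordered[1:]]
```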
Histrionic answered 30/6, 2016 at 11:56 Comment(1)
If you are trying to prevent overwrite, shouldn't #4 be "delete all but the oldest one"? – Convert
You can now lock versions of objects with S3 Object Lock. It must be enabled on the bucket, and it lets you place one of two kinds of WORM locks on individual object versions.

  • "retention period" - can't be changed
  • "legal hold" - can be changed by the bucket owner at any time

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html

As mentioned by @Kijana Woodard below, this does not prevent creation of new versions of objects.
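A sketch of both lock kinds with the AWS CLI (bucket and key names here are hypothetical; note that Object Lock has to be enabled when the bucket is created):

```shell
# Create a bucket with Object Lock enabled (required at creation time):
aws s3api create-bucket --bucket my-worm-bucket \
    --object-lock-enabled-for-bucket

# Place a legal hold on an object version (the bucket owner can lift
# this again at any time):
aws s3api put-object-legal-hold --bucket my-worm-bucket --key backup.tar \
    --legal-hold Status=ON

# Or place a retention period; in COMPLIANCE mode it cannot be
# shortened or removed until the date passes:
aws s3api put-object-retention --bucket my-worm-bucket --key backup.tar \
    --retention '{"Mode": "COMPLIANCE", "RetainUntilDate": "2025-01-01T00:00:00Z"}'
```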

Predicative answered 3/12, 2018 at 19:1 Comment(3)
This is a great solution to the often-requested functionality from S3. – Monet
You can still get new versions created. "Placing a retention period or legal hold on an object protects only the version specified in the request, and doesn't prevent new versions of the object from being created." – Giraldo
@KijanaWoodard strangely I could not find your quote in the documentation. It has likely been updated several times since then, but I don't see a similar note now. – Jolly
Edit: Applicable if you came here from this question.

Object Locks only work in versioned buckets. If you cannot enable versioning for your bucket, but can tolerate brief inconsistencies, where a file is presumed to exist while a DELETE on it is still in flight (S3 is only eventually consistent), so that a PUT-after-DELETE may fail intermittently in a tight loop, or conversely, successive PUTs may falsely succeed intermittently, then the following solution may be appropriate.

Given the object path, read the object's Content-Length header from its metadata (a HeadObject request). Write the object only if that request fails with a 404 (the object does not exist) or, where applicable, if the reported length is zero.
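A minimal sketch of the HEAD-then-PUT check. The `s3` argument is assumed to behave like a boto3 S3 client; the zero-length-placeholder case from the answer is omitted for brevity, and as discussed in the comments, the check is not atomic, so two racing writers can both pass it:

```python
def put_if_absent(s3, bucket, key, body):
    """PUT `body` at `key` only if no object exists there yet.

    `s3` is assumed to behave like a boto3 S3 client. NOTE: this is
    not atomic; two writers racing between the HEAD and the PUT can
    both see "no object" and both write.
    """
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return False  # object already exists; refuse to overwrite
    except Exception as err:
        # boto3 raises ClientError with a 404 code for a missing
        # object; anything else is a real failure and is re-raised.
        code = getattr(err, "response", {}).get("Error", {}).get("Code")
        if code not in ("404", "NoSuchKey"):
            raise
    s3.put_object(Bucket=bucket, Key=key, Body=body)
    return True
```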

Venusian answered 14/9, 2020 at 17:14 Comment(1)
Isn’t it still possible for the file to be overwritten if both processes read the head before attempting to write the file? Wouldn’t you need the head read and the object write to be in a transaction of some sort for this to work? – Trepidation

© 2022 - 2024 — McMap. All rights reserved.