AccessDenied when calling the CreateMultipartUpload operation in Django using django-storages and boto3

I want to use django-storages to store my model files in Amazon S3, but I get an Access Denied error. I have granted the user almost all S3 permissions (PutObject, ListBucketMultipartUploads, ListMultipartUploadParts, AbortMultipartUpload, etc.) on all resources, but this didn't fix it.

settings.py

...
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_S3_REGION_NAME = 'eu-west-1'
AWS_S3_CUSTOM_DOMAIN = 'www.xyz.com'
AWS_DEFAULT_ACL = None
AWS_STORAGE_BUCKET_NAME = 'www.xyz.com'
...

Using the Django shell, I tried to use the storage system as shown below.

Python 3.6.6 (default, Sep 12 2018, 18:26:19)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import os
>>> AWS_ACCESS_KEY_ID = os.environ.get( 'AWS_ACCESS_KEY_ID', 'anything' )
>>> AWS_SECRET_ACCESS_KEY = os.environ.get( 'AWS_SECRET_ACCESS_KEY', 'anything' )
>>> AWS_DEFAULT_ACL = 'public-read'
>>> from django.core.files.storage import default_storage
>>> file = default_storage.open('test', 'w')
...
>>> file.write('storage contents')
2018-09-27 16:41:42,596 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function validate_ascii_metadata at 0x7fdb5e848d08>
2018-09-27 16:41:42,596 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function sse_md5 at 0x7fdb5e848158>
2018-09-27 16:41:42,597 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function validate_bucket_name at 0x7fdb5e8480d0>
2018-09-27 16:41:42,597 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7fdb5c5d1128>>
2018-09-27 16:41:42,597 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function generate_idempotent_uuid at 0x7fdb5e846c80>
2018-09-27 16:41:42,598 botocore.hooks [DEBUG] Event before-call.s3.CreateMultipartUpload: calling handler <function add_expect_header at 0x7fdb5e848598>
2018-09-27 16:41:42,598 botocore.hooks [DEBUG] Event before-call.s3.CreateMultipartUpload: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7fdb5c5d1128>>
2018-09-27 16:41:42,598 botocore.endpoint [DEBUG] Making request for OperationModel(name=CreateMultipartUpload) with params: {'url_path': '/www.xyz.com/test?uploads', 'query_string': {}, 'method': 'POST', 'headers': {'Content-Type': 'application/octet-stream', 'User-Agent': 'Boto3/1.7.80 Python/3.6.6 Linux/4.14.67-66.56.amzn1.x86_64 Botocore/1.11.1 Resource'}, 'body': b'', 'url': 'https://s3.eu-west-1.amazonaws.com/www.xyz.com/test?uploads', 'context': {'client_region': 'eu-west-1', 'client_config': <botocore.config.Config object at 0x7fdb5c8e80b8>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': 'www.xyz.com'}}}
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event request-created.s3.CreateMultipartUpload: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fdb5c8db780>>
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event choose-signer.s3.CreateMultipartUpload: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7fdb5cabff98>>
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event choose-signer.s3.CreateMultipartUpload: calling handler <function set_operation_specific_signer at 0x7fdb5e846b70>
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event before-sign.s3.CreateMultipartUpload: calling handler <function fix_s3_host at 0x7fdb5e983048>
2018-09-27 16:41:42,600 botocore.utils [DEBUG] Checking for DNS compatible bucket for: https://s3.eu-west-1.amazonaws.com/www.xyz.com/test?uploads
2018-09-27 16:41:42,600 botocore.utils [DEBUG] Not changing URI, bucket is not DNS compatible: www.xyz.com
2018-09-27 16:41:42,601 botocore.auth [DEBUG] Calculating signature using v4 auth.
2018-09-27 16:41:42,601 botocore.auth [DEBUG] CanonicalRequest:
POST
/www.xyz.com/test
uploads=
content-type:application/octet-stream
host:s3.eu-west-1.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf343ddd27ae41e4649b934ca495991b7852b855
x-amz-date:20180927T164142Z

content-type;host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afb65gdfg33441e4649b934ca495991b7852b855
2018-09-27 16:41:42,601 botocore.auth [DEBUG] StringToSign:
AWS4-HMAC-SHA256
20180927T164142Z
20180927/eu-west-1/s3/aws4_request
8649ef591fb64412e923359a4sfvvffdd6d00915b9756d1611b38e346ae
2018-09-27 16:41:42,602 botocore.auth [DEBUG] Signature:
61db9afe5f87730a75692af5a95ggffdssd6f4e8e712d85c414edb14f
2018-09-27 16:41:42,602 botocore.endpoint [DEBUG] Sending http request: <AWSPreparedRequest stream_output=False, method=POST, url=https://s3.eu-west-1.amazonaws.com/www.xyz.com/test?uploads, headers={'Content-Type': b'application/octet-stream', 'User-Agent': b'Boto3/1.7.80 Python/3.6.6 Linux/4.14.67-66.56.amzn1.x86_64 Botocore/1.11.1 Resource', 'X-Amz-Date': b'20180927T164142Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fbdsdsffdss649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=X1234567890/20180927/eu-west-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=61db9afe5f87730a7sdfsdfs20b7137cf5d6f4e8e712d85c414edb14f', 'Content-Length': '0'}>
2018-09-27 16:41:42,638 botocore.parsers [DEBUG] Response headers: {'x-amz-request-id': '9E879E78E4883471', 'x-amz-id-2': 'ZkCfOMwLoD08Yy4Nzfxsdfdsdfds3y9wLxzqFw+o3175I+QEdtdtAi8vIEH1vi9iq9VGUC98GqlE=', 'Content-Type': 'application/xml', 'Transfer-Encoding': 'chunked', 'Date': 'Thu, 27 Sep 2018 16:41:42 GMT', 'Server': 'AmazonS3'}
2018-09-27 16:41:42,639 botocore.parsers [DEBUG] Response body:
b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9E879E78E4883471</RequestId><HostId>ZkCfOMwLoD08Yy4Nzfxo8RpzsdfsdfsxzqFw+o3175I+QEdtdtAi8vIEH1vi9iq9VGUC98GqlE=</HostId></Error>'
2018-09-27 16:41:42,639 botocore.hooks [DEBUG] Event needs-retry.s3.CreateMultipartUpload: calling handler <botocore.retryhandler.RetryHandler object at 0x7fdb5c618ac8>
2018-09-27 16:41:42,640 botocore.retryhandler [DEBUG] No retry needed.
2018-09-27 16:41:42,640 botocore.hooks [DEBUG] Event needs-retry.s3.CreateMultipartUpload: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7fdb5c5d1128>>
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/storages/backends/s3boto3.py", line 127, in write
    self._multipart = self.obj.initiate_multipart_upload(**parameters)
  File "/usr/local/lib/python3.6/dist-packages/boto3/resources/factory.py", line 520, in do_action
    response = action(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/boto3/resources/action.py", line 83, in __call__
    response = getattr(parent.meta.client, operation_name)(**params)
  File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied

These are the versions I'm using.

boto3==1.7.80
botocore==1.11.1
Django==2.1
s3transfer==0.1.13
django-storages==1.7.1

Why is it raising an exception?

Millipede answered 27/9, 2018 at 17:3 Comment(1)
I hit this error when I tried to upload something to the wrong AWS account (i.e. uploading a Prod artefact with Prod AWS keys to a Sandbox S3 URL).Severson

It turns out that I had to specify a policy that grants permission on every object (/*) under the bucket, not just on the bucket itself.

Before

...
"Resource": [
            "arn:aws:s3:::www.xyz.com"
            ]
...

After

...
"Resource": [
            "arn:aws:s3:::www.xyz.com/*"
            ]
...
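
If you want to confirm that the policy scoping (rather than django-storages) is the problem, a minimal boto3 check is to start and immediately abort a multipart upload with the same credentials. This is only a sketch; the bucket and key names are the placeholders from the question:

import boto3

# Assumes the same credentials and region as in settings.py.
s3 = boto3.client('s3', region_name='eu-west-1')

# CreateMultipartUpload is the call that was failing; abort it right away so
# nothing is left behind in the bucket.
mpu = s3.create_multipart_upload(Bucket='www.xyz.com', Key='test')
s3.abort_multipart_upload(Bucket='www.xyz.com', Key='test', UploadId=mpu['UploadId'])
print('CreateMultipartUpload succeeded')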
Millipede answered 28/9, 2018 at 8:8 Comment(3)
Why doesn't the policy generator take this into account?Cluster
You may need both of these listed, i.e. "arn:aws:s3:::www.xyz.com","arn:aws:s3:::www.xyz.com/*" (if you are writing objects into the root of the bucket?). I needed both to resolve this.Maladroit
It's even worse - if you have a subdirectory that you are uploading the file into, this also needs to be added as a resource. "arn:aws:s3:::www.xyz.com/*" will not allow you to upload files into any subdirectory in this bucket. You also need to whitelist resources like "arn:aws:s3:::www.xyz.com/foo/bar/*" (for instance).Aspiration

I also got this error, but I was making a different mistake. The django-storages function was creating the object with an ACL of "public-read". This is the default, which makes sense for a web framework, and indeed it is what I intended, but I had not included ACL-related permissions in my IAM policy.

  • PutObjectAcl
  • PutObjectVersionAcl

This policy worked for me (it is based on this one):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucketMultipartUploads",
                "s3:AbortMultipartUpload",
                "s3:PutObjectVersionAcl",
                "s3:DeleteObject",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/*",
                "arn:aws:s3:::bucketname"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bucketname"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
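
As an aside, if you don't actually need per-object ACLs, I believe you can avoid the PutObjectAcl/PutObjectVersionAcl requirement entirely by telling django-storages not to send an ACL at all, for example:

# Sketch: with no default ACL, django-storages should omit the ACL parameter,
# so objects simply follow the bucket's own access settings.
AWS_DEFAULT_ACL = None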
Kinglet answered 5/5, 2019 at 17:48 Comment(2)
I have s3* but I'm still getting this errorPterosaur
If you want to give access to all S3 actions, it should be s3:*, not s3*. I'm not sure the version without the colon works, since it would match any service whose identifier starts with those characters; the colon ensures you've typed the full service identifier without typos, so I would expect it not to work without it. Another thing to check is the Resource part: is it * or something else?Joslin

Another possible cause is that your bucket has encryption switched on. You'll want a second statement adding kms:GenerateDataKey and kms:Decrypt. Here's my statement for that:

        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
               "kms:Decrypt",
               "kms:GenerateDataKey"
            ],
            "Resource": "*"
        }

Note that I am using built-in keys, not CMKs. See AWS docs here for more.
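
To check whether this applies to your bucket, you can ask S3 for its default encryption configuration. This is just a sketch; the bucket name is a placeholder and the caller needs the s3:GetEncryptionConfiguration permission:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

try:
    enc = s3.get_bucket_encryption(Bucket='bucketname')
    # Look for 'aws:kms' in the rules; SSE-KMS is what needs the kms:* permissions above.
    print(enc['ServerSideEncryptionConfiguration']['Rules'])
except ClientError as e:
    if e.response['Error']['Code'] == 'ServerSideEncryptionConfigurationNotFoundError':
        print('No default encryption configured on this bucket')
    else:
        raise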

Maladroit answered 11/5, 2021 at 8:18 Comment(1)
This resolved my error. It's a pretty cryptic error from AWS; it should probably mention KMS in the error message.Macey

FYI, another cause of this is that your destination bucket does not have the proper bucket policy in place.

In my use case I was trying to copy S3 files from a bucket in AWS Account A to a bucket in AWS Account B. I created a role and policy that enabled this, but I did not add a bucket policy allowing the outside AWS role to write to the destination bucket. I was able to fix the issue by following this AWS doc: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/

(Ignore if above link works)

If the above link breaks, here is what the site says:

Important: Objects in Amazon S3 are no longer automatically owned by the AWS account that uploads it. By default, any newly created buckets now have the Bucket owner enforced setting enabled. It's also a best practice to use the Bucket owner enforced setting when changing Object Ownership. However, note that this option disables all bucket ACLs and ACLs on any objects in your bucket.

With the Bucket owner enforced setting in S3 Object Ownership, all objects in an Amazon S3 bucket are automatically owned by the bucket owner. The Bucket owner enforced feature also disables all access control lists (ACLs), which simplifies access management for data stored in S3. However, for existing buckets, an Amazon S3 object is still owned by the AWS account that uploaded it, unless you explicitly disable the ACLs. To change object ownership of objects in an existing bucket, see How can I change the ownership of publicly owned objects in my S3 bucket?

If your existing method of sharing objects relies on using ACLs, then identify the principals that use ACLs to access objects. For more information about how to review permissions before disabling any ACLs, see Prerequisites for disabling ACLs.

If you can't disable your ACLs, then follow these steps to take ownership of objects until you can adjust your bucket policy:

  1. In the source account, create an AWS Identity and Access Management (IAM) customer managed policy that grants an IAM identity (user or role) proper permissions. The IAM user must have access to retrieve objects from the source bucket and put objects back into the destination bucket. You can use an IAM policy similar to the following:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetObject" ], "Resource": [ "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET", "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET/" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET", "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/" ] } ] } Note: This example IAM policy includes only the minimum required permissions for listing objects and copying objects across buckets in different accounts. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, then you must also grant permissions for s3:GetObjectTagging. If you experience an error, try performing these steps as an admin user.

  2. In the source account, attach the customer managed policy to the IAM identity that you want to use to copy objects to the destination bucket.

  3. In the destination account, set S3 Object Ownership on the destination bucket to bucket owner preferred. After you set S3 Object Ownership, new objects uploaded with the access control list (ACL) set to bucket-owner-full-control are automatically owned by the bucket's account.

  4. In the destination account, modify the bucket policy of the destination bucket to grant the source account permissions for uploading objects. Additionally, include a condition in the bucket policy that requires object uploads to set the ACL to bucket-owner-full-control. You can use a statement similar to the following:

Note: Replace destination-DOC-EXAMPLE-BUCKET with the name of the destination bucket. Then, replace arn:aws:iam::222222222222:user/Jane with the Amazon Resource Name (ARN) of the IAM identity from the source account.

{ "Version": "2012-10-17", "Id": "Policy1611277539797", "Statement": [ { "Sid": "Stmt1611277535086", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::222222222222:user/Jane" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } }, { "Sid": "Stmt1611277877767", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::222222222222:user/Jane" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET" } ] } Note: This example bucket policy includes only the minimum required permissions for uploading an object with the required ACL. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, you must also grant permissions for s3:GetObjectTagging

  5. After you configure the IAM policy and bucket policy, the IAM identity from the source account must upload objects to the destination bucket. Make sure that the ACL is set to bucket-owner-full-control. For example, the source IAM identity must run the cp AWS CLI command with the --acl option:

aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/object.txt s3://destination-DOC-EXAMPLE-BUCKET/object.txt --acl bucket-owner-full-control

Kinfolk answered 13/4, 2022 at 5:30 Comment(0)

I was receiving this same error (An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied) using the following Python script:

import logging
import datetime

import boto3
from boto3.s3.transfer import TransferConfig


def create_boto3_client(s3_id, s3_secret_key):
    # Note: this actually returns a boto3 *resource*, not a low-level client.
    try:
        logging.info('####### Creating boto3Client... #######')
        s3_client = boto3.resource(
            's3',
            aws_access_key_id=s3_id,
            aws_secret_access_key=s3_secret_key,
        )
        logging.info('####### Successfully created boto3Client #######')
        return s3_client
    except Exception:
        logging.error('####### Failed to create boto3Client #######')
        raise


def upload_file_to_s3(s3_client, s3_bucket, aws_path, blob):
    try:
        ul_start = datetime.datetime.now()
        logging.info(f'####### Starting file upload at {ul_start} #######')
        # Force multipart uploads for anything over ~25 KB.
        config = TransferConfig(
            multipart_threshold=1024 * 25,
            max_concurrency=10,
            multipart_chunksize=1024 * 25,
            use_threads=True,
        )
        s3_client.Bucket(s3_bucket).upload_fileobj(blob, Key=aws_path, Config=config)
        ul_end = datetime.datetime.now()
        logging.info(f'####### File uploaded to AWS S3 bucket at {ul_end} #######')
        ul_duration = str(ul_end - ul_start)
        logging.info(f'####### Upload duration: {ul_duration} #######')
        return ul_start, ul_end, ul_duration
    except Exception as e:
        logging.error(f'####### Failed to upload file to AWS S3: {e} #######')
        raise

In my case, the aws_path (the Key passed to the upload) was incorrect. It was pointing to a path that the s3_client did not have access to.
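
For reference, a hypothetical call would look something like the following (the credentials, bucket and key here are only placeholders):

# Placeholders only; substitute your own credentials, bucket and key.
client = create_boto3_client(s3_id='AKIA...', s3_secret_key='...')
with open('report.pdf', 'rb') as blob:
    # aws_path is the object Key; this is the part that was wrong in my case.
    upload_file_to_s3(client, s3_bucket='my-bucket', aws_path='reports/2022/report.pdf', blob=blob)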

Lure answered 15/12, 2022 at 0:8 Comment(0)

In my case, uploading to S3 from GitHub Actions was failing with a similar error - An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied.

I validated the policies attached to the IAM user and the S3 bucket; they were fine, and an identical setup worked with a different IAM user and bucket.

Since GitHub doesn't let you view secrets after they're added, I rotated the security credentials for the IAM user and updated them in GitHub. That fixed it.

Eydie answered 12/8, 2022 at 5:42 Comment(0)

In my situation, there was a typo in the resource name.

Scorecard answered 22/5 at 11:49 Comment(0)
