AWS S3 Generating Signed URLs: "AccessDenied"

I am using Node.js to upload files to AWS S3. I want the client to be able to download the files securely, so I am trying to generate signed URLs that expire after one use. My code looks like this:

Uploading

const s3bucket = new AWS.S3({
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    Bucket: 'my-bucket-name',
})
const uploadParams = {
    Body: file.data,
    Bucket: 'my-bucket-name',
    ContentType: file.mimetype,
    Key: `files/${file.name}`,
}
s3bucket.upload(uploadParams, function (err, data) {
    // ...
})

Downloading

const url = s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})

Issue

When opening the URL I get the following:

This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
    <Code>AccessDenied</Code>
    <Message>
        There were headers present in the request which were not signed
    </Message>
    <HeadersNotSigned>host</HeadersNotSigned>
    <RequestId>D63C8ED4CD8F4E5F</RequestId>
    <HostId>
        9M0r2M3XkRU0JLn7cv5QN3S34G8mYZEy/v16c6JFRZSzDBa2UXaMLkHoyuN7YIt/LCPNnpQLmF4=
    </HostId>
</Error>

I couldn't manage to find the mistake. I would really appreciate any help :)

Vagrancy answered 5/10, 2018 at 14:6 Comment(5)
Anyone with valid security credentials can create a pre-signed URL. However, for the operation to succeed, the pre-signed URL must be created by someone who has permission to perform the operation that the URL is based upon (docs.aws.amazon.com/AmazonS3/latest/dev/…). Does your IAM policy have permission to access the S3 bucket? If the file is successfully created in your bucket but you cannot access it immediately after generating a signed URL, check that the filename and bucket you are passing to getSignedUrl are valid.Workable
Is there a way to check if the IAM policy has permissions to access the bucket?Vagrancy
Indeed, a quick way to check this is just to look at your bucket and confirm the object has been created. Since you're getting an AccessDenied response, try checking your bucket permissions and allow the user to read and view (enable read and view permissions).Workable
prntscr.com/l2lkwb: the account has permissions.Vagrancy
You can grant the role the AmazonS3FullAccess permission. If it works, then you know the problem lies with the access permissions granted to the role. Delete AmazonS3FullAccess and grant GetObject on your bucket and try it out. If it still does not work, you will have to do some research to find out which permissions you need, and also check that you are using the correct resource (i.e. bucket).Galatians

Your code is correct; double-check the following things:

  1. Your bucket access policy.

  2. Your bucket permission via your API key.

  3. Your API key and secret (see the identity-check sketch after this list).

  4. Your bucket name and key.
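
To double-check #2 and #3, first confirm which IAM identity your key pair actually resolves to; credentials from the environment or a shared credentials file can silently take precedence over the ones you think you are using. A minimal identity-check sketch, assuming the same placeholder credentials as in the question:

const AWS = require('aws-sdk')

// sts:GetCallerIdentity requires no IAM permissions and tells you
// which user/role these keys actually belong to.
const sts = new AWS.STS({
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
})
sts.getCallerIdentity({}, function (err, data) {
    if (err) console.error(err)
    else console.log(data.Arn) // e.g. arn:aws:iam::123456789012:user/your-user
})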

For the bucket policy (#1) you can use the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket/*"
        }
    ]
}

Replace bucket with your bucket name.

For users and access key permission (#2), you should follow these steps:

1. Go to AWS Identity and Access Management (IAM), click the Policies link, and click the "Create policy" button.

2. Select the JSON tab.

3. Enter the following statement, making sure to change the bucket name, then click the "Review policy" button.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::YOURBUCKETNAME"
        }
    ]
}

4. Enter a name for your policy and click the "Create policy" button.

5. Click the Users link and find your current username (you already have the access key and secret for that user).

6. Click the "Add permission" button.

7. Add the policy we created in the previous step and save.

Finally, make sure your bucket is not publicly accessible, add the correct content type to your file, and set signatureVersion: 'v4'.

The final code should look like this (thanks @Vaisakh PS):

const s3bucket = new AWS.S3({
    signatureVersion: 'v4',
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    Bucket: 'my-bucket-name',
})
const uploadParams = {
    Body: file.data,
    Bucket: 'my-bucket-name',
    ContentType: file.mimetype,
    Key: `files/${file.name}`,
}
s3bucket.upload(uploadParams, function (err, data) {
    // ...
})
const url = s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})
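
One more sanity check that fits here: getSignedUrl also accepts a callback, which surfaces signing problems (bad credentials, missing config) as an error instead of silently returning a URL that will 403 later. A small sketch of the same call in callback form:

s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
}, function (err, url) {
    if (err) console.error('signing failed:', err) // config/credential problems show up here
    else console.log('signed URL:', url)
})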
Zook answered 8/10, 2018 at 8:56 Comment(12)
I didn't put anything into the Bucket Policy... But I am also not sure what to put there 🙄Vagrancy
I updated the policy (prntscr.com/l3jj8q) but I am still getting the same error. I am pretty sure the 3rd and 4th points are correct, but I never used an API key.Vagrancy
So, make sure about your #2Zook
The documentation states nothing about an API key (aws.amazon.com/sdk-for-node-js/?nc1=h_ls). Where is this key supposed to be?Vagrancy
The answer has been updated with point #2. If it doesn't work, test all the steps again; maybe you missed one.Zook
Thank you for your detailed explanation! Unfortunately, I am getting the same error again. Could it be caused by the fact that I had to configure signatureVersion to v4? (Otherwise, I get this error: prntscr.com/l3l39v)Vagrancy
Could you update your question and share the whole code? As I mentioned before, if it doesn't work, test all the steps again; maybe you missed one step.Zook
I don't know the exact cause of the issue. I refactored everything and created a new bucket etc., but now it's working. Thank you so much!Vagrancy
Unfortunately, when downloading the files via the signed URL I get a warning from Chrome that the file is dangerous (prntscr.com/l3osrc). Do you know the reason for this? P.S.: the normal URL works without the warning.Vagrancy
Let us continue this discussion in chat.Zook
This bucket policy doesn't achieve the stated aim "I want the client to be able to download the files securely". That bucket policy makes all files public! Your answer is excellent, but now that troubleshooting is completed it would be best to go back and tighten up that policy or add a warning that it allows every principal access to GetObject.Shaw
It (the bucket policy) is about everyone who already has access to the bucket, not everyone on the Internet.Zook

The highest-voted answer here technically works, but it isn't practical since it opens the bucket up to the public.

I had the same problem, and it was due to the role that was used to generate the signed URL. The role I was using had this:

- Effect: Allow
  Action: 
    - "s3:ListObjects"
    - "s3:GetObject"
    - "s3:GetObjectVersion"
    - "s3:PutObject"
  Resource:
    - "arn:aws:s3:::(bucket-name-here)"

But the bucket name alone wasn't enough; I had to add a wildcard on the end to grant access to the whole bucket:

- Effect: Allow
  Action: 
    - "s3:ListObjects"
    - "s3:GetObject"
    - "s3:GetObjectVersion"
    - "s3:PutObject"
  Resource:
    - "arn:aws:s3:::(bucket-name-here)/*"
Unproductive answered 24/2, 2022 at 16:36 Comment(3)
In addition to this answer I think it's useful to know (as described in this answer) that credentials from your environment or command line might be used (and they might not be the ones you think you're using).Perceptual
Thanks for the info, which led me to my own solution: when you grant e.g. your Lambda permissions to an S3 bucket, you need to use either grantRead() or grantReadWrite(); grantWrite() does not include the read permissions.Piglet
This should be the correct answer. Resolved the issue using this with an AWS Lambda serverless application.Gillie

I kept having a similar problem, but mine was due to region settings. In our back end we had some configuration settings for the app.

One of them was "region": "us-west-2", so the presigned URL was created with this region, but when it was called on the front end the region was set to "us-west-1".

Changing it to be the same fixed the issue.
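
If you want to rule this out in code, pin the client to the bucket's region so the signed hostname matches where the bucket actually lives. A minimal sketch, assuming SDK v2 and a bucket in us-west-2 (use whatever region your bucket reports):

const AWS = require('aws-sdk')

// Signing against the bucket's own regional endpoint avoids
// region-mismatch AccessDenied/redirect errors.
const s3 = new AWS.S3({
    region: 'us-west-2',
    signatureVersion: 'v4',
})
const url = s3.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})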

Conversation answered 22/2, 2019 at 18:37 Comment(0)

I battled with this as well in an application using the Serverless Framework.

My fix was adding S3 permissions to the IAM Role inside of the serverless.yml file.

I'm not exactly sure how S3 creates the presigned URL, but it turns out your IAM role is taken into account.

Adding all s3 actions did the trick. This is what the IAM role looks like for S3 👇

iamRoleStatements:
  - Effect: Allow
    Action:
      - 's3:*'
    Resource:
      - 'arn:aws:s3:::${self:custom.imageBucket}/*'
Unstuck answered 1/4, 2021 at 20:9 Comment(4)
Nice hint, but it should be inside provider, like this: provider: name: aws iamRoleStatements: - Effect: Allow Action: - 's3:*' Resource: - 'arn:aws:s3:::${env:S3_BUCKET_NAME}/*'Sheep
This did the trick for me. The only permission the Lambda had was s3:PutObject; as soon as I changed it to s3:*, it worked. It would be nice to narrow down exactly which permissions are needed, as I feel s3:* is too permissive. If I find out I'll post another comment here. ThanksDeuteranopia
Update: I was missing s3:GetObject. This is the full policy for Lambda access to the bucket: new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ['s3:PutObject', 's3:GetObject'], resources: [`${imageBucket.bucketArn}/*`] })Deuteranopia
Another thing I was doing wrong was that the path to the object (the Key) in the params was wrong: s3Client.getSignedUrlPromise('getObject', { Bucket: bucket, Key: path, Expires: expires }). Once I fixed it, it worked. So in summary, I had two issues: (1) the Lambda role lacked the GetObject S3 permission, and (2) the wrong S3 object path was given to S3.getSignedUrlPromise(). Cheers and have a nice day!Deuteranopia

Your code looks good, but I think you are missing the signatureVersion: 'v4' parameter when creating the s3bucket object. Please try the updated code below.

const s3bucket = new AWS.S3({
    signatureVersion: 'v4',
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    Bucket: 'my-bucket-name',
})
const uploadParams = {
    Body: file.data,
    Bucket: 'my-bucket-name',
    ContentType: file.mimetype,
    Key: `files/${file.name}`,
}
s3bucket.upload(uploadParams, function (err, data) {
    // ...
})
const url = s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})

For more about signatureVersion: 'v4', see the links below:

https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html

https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html

You can also try the Node.js library below, which creates presigned URLs:

https://www.npmjs.com/package/aws-signature-v4

Initiation answered 8/10, 2018 at 16:54 Comment(2)
I am still getting the error. But I appreciate your help!Vagrancy
I fixed my problem and afterward I got another error... the new error was fixed by setting signatureVersion to v4, THANKS!Vagrancy

If your S3 files are encrypted, make sure that your policy also grants access to the encryption key and the related actions.
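
For example, with SSE-KMS the identity that signs a getObject URL also needs kms:Decrypt on the key (and an uploader needs kms:GenerateDataKey), otherwise the request fails even though the S3 permissions themselves are fine. A sketch of presigning an SSE-KMS upload, assuming an SDK v2 client named s3 as in the other answers; the object key and KMS key ID are placeholders:

const url = s3.getSignedUrl('putObject', {
    Bucket: 'my-bucket-name',
    Key: 'files/report.pdf',          // placeholder object key
    ContentType: 'application/pdf',
    ServerSideEncryption: 'aws:kms',
    SSEKMSKeyId: 'your-kms-key-id',   // placeholder; the key the bucket uses
    Expires: 300,
})

// The client's PUT must then send matching x-amz-server-side-encryption
// headers, and the signer's IAM policy needs the KMS actions above.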

Coadunate answered 11/9, 2020 at 13:42 Comment(1)
Amazon S3 evaluates and applies bucket policies before applying bucket encryption settings. Even if you enable bucket encryption settings, your PUT requests without encryption information will be rejected if you have bucket policies to reject such PUT requests. Check your bucket policy and modify it if required.Subcritical

After banging my head for many hours on this same issue, I noticed that my account had MFA set up, making generation of the signed URL with only the accessKeyId and secretAccessKey useless.

The solution was installing this https://github.com/broamski/aws-mfa

After running it, it asks you to create a .aws/credentials file, where you must input your access key ID / secret and aws_mfa_device. The latter will look something like:

aws_mfa_device = arn:aws:iam::youruserid:mfa/youruser

This value can be found on your user's page in the AWS console (website).

After that you will find that the credentials file is populated with new keys, valid for one week IIRC.

Then simply generate a URL again:

AWS.config.update({ region: 'xxx' });
var s3 = new AWS.S3();

var presignedPutUrl = s3.getSignedUrl('putObject', {
    Bucket: 'xxx',
    Key: 'xxx',      // filename
    Expires: xxx,    // time to expire, in seconds
    ContentType: 'xxx'
});

And this time it will work.

Remember NOT to pass any credentials to AWS.config, since they will be picked up automatically from the .aws/credentials file.
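
If you prefer to be explicit rather than rely on the default provider chain, SDK v2 can also load a named profile from that same file. A minimal sketch (the profile name is a placeholder):

var AWS = require('aws-sdk');

// Load the MFA-refreshed keys that aws-mfa wrote to ~/.aws/credentials,
// instead of hardcoding accessKeyId/secretAccessKey.
AWS.config.credentials = new AWS.SharedIniFileCredentials({ profile: 'default' });
AWS.config.update({ region: 'xxx' });

var s3 = new AWS.S3();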

Matos answered 1/6, 2022 at 8:32 Comment(1)
How do I not pass any credentials to AWS config? What should I pass in BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);??Coulomb

I had the same issue: when I tested my Lambda function locally it worked, but after deploying it didn't. Once I added S3 full access to the Lambda function, it worked.

Germaine answered 15/10, 2020 at 10:1 Comment(0)

I got this problem because I was using the wrong key. It appears S3 will generate a pre-signed URL for the provided key without checking whether the file actually exists. Double-check your key.
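
Since signing never validates existence, a cheap guard is to HEAD the object first and only hand out the URL if that succeeds. A minimal sketch, assuming an SDK v2 client named s3 as in the other answers:

const params = { Bucket: 'my-bucket-name', Key: 'file-key' }

// headObject fails fast ('NotFound' / 'Forbidden') when the key is wrong,
// so you never hand out a signed URL that points at nothing.
s3.headObject(params, function (err) {
    if (err) return console.error('object not reachable:', err.code)
    const url = s3.getSignedUrl('getObject', {
        Bucket: params.Bucket,
        Key: params.Key,
        Expires: 300,
    })
    console.log(url)
})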

Paez answered 11/1 at 22:48 Comment(0)

I saw this problem when moving from a bucket that was created a while ago to one created recently.

It appears that v2 pre-signed links (for now) continue to work against older buckets while new buckets are mandated to use v4.

Revised Plan – Any new buckets created after June 24, 2020 will not support SigV2 signed requests, although existing buckets will continue to support SigV2 while we work with customers to move off this older request signing method.

Even though you can continue to use SigV2 on existing buckets, and in the subset of AWS regions that support SigV2, I encourage you to migrate to SigV4, gaining some important security and efficiency benefits in the process.

https://docs.amazonaws.cn/AmazonS3/latest/API/sigv4-query-string-auth.html#query-string-auth-v4-signing-example

Our solution involved updating the AWS SDK to use this by default; I suspect newer versions probably already default this setting.

https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-other.html#config-setting-aws-s3-usesignatureversion4
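
That last link is for the .NET SDK; in the JavaScript SDK v2 used elsewhere on this page, the analogous setting is signatureVersion, either per client (which certainly works) or, I believe, globally via a service-scoped config key. A minimal sketch:

const AWS = require('aws-sdk')

// Per client:
const s3 = new AWS.S3({ signatureVersion: 'v4' })

// Or globally, for every S3 client created afterwards:
AWS.config.update({ s3: { signatureVersion: 'v4' } })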

Abdias answered 21/10, 2020 at 16:58 Comment(0)

To allow a signed URL for an S3 PUT to also be downloadable by anyone, add:

    const s3Params = {
      Bucket, Key, ContentType,
      // This ACL makes the uploaded object publicly readable. You must also uncomment
      // the extra permission for the Lambda function in the SAM template.
      ACL: 'public-read'
    }

The ACL: 'public-read' at the end is key to allowing you to download after upload.

But in order to set ACLs on the new file from a signed URL, the caller must have the s3:PutObjectAcl permission, so you'll also need to grant that permission to the URL signer:

        - Statement:
          - Effect: Allow
            Resource: (BUCKET_ARN)/*
            Action:
              - s3:putObjectAcl

where BUCKET_ARN is your bucket ARN, so something like:

  Resource: "arn:aws:s3:::My-Bucket-Name/*"

See this link for more.

I think it's also possible to just get away with only s3:PutObject if the whole bucket is marked public. This used to be easy to do (a checkbox) but now seems overly complex. However, I think you can just add the policy found in Step 2 at this link.
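
One gotcha with this approach: any parameter baked into the signature, like the ACL and ContentType above, must be sent as a matching header on the actual PUT, or S3 rejects the upload. A sketch of the client-side upload; presignedUrl, contentType, and fileData are placeholders for your own values:

fetch(presignedUrl, {
    method: 'PUT',
    headers: {
        'Content-Type': contentType, // must match the signed ContentType
        'x-amz-acl': 'public-read',  // must match the signed ACL
    },
    body: fileData,                  // the file bytes/blob
}).then(function (res) {
    console.log(res.status) // 200 on success; a signature error otherwise
})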

Biannulate answered 22/12, 2022 at 0:47 Comment(0)