Amazon S3 - How to fix 'The request signature we calculated does not match the signature' error?
Asked Answered
S

67

296

I have searched on the web for over two days now, and probably have looked through most of the online documented scenarios and workarounds, but nothing worked for me so far.

I am on AWS SDK for PHP V2.8.7 running on PHP 5.3.

I am trying to connect to my Amazon S3 bucket with the following code:

// Create an `Aws` object using a configuration file
$aws = Aws::factory('config.php');

// Get the client from the service locator by namespace
$s3Client = $aws->get('s3');

$bucket = "xxx";
$keyname = "xxx";

try {
    $result = $s3Client->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'Body' => 'Hello World!'
    ));

    $file_error = false;
} catch (Exception $e) {
    $file_error = true;

    echo $e->getMessage();

    die();
}

My config.php file is as follows:

return [
    // Bootstrap the configuration file with AWS specific features
    'includes' => ['_aws'],
    'services' => [
        // All AWS clients extend from 'default_settings'. Here we are
        // overriding 'default_settings' with our default credentials and
        // providing a default region setting.
        'default_settings' => [
            'params' => [
                'credentials' => [
                    'key'    => 'key',
                    'secret' => 'secret'
                ]
            ]
        ]
    ]
];

It is producing the following error:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

I've already checked my access key and secret at least 20 times, generated new ones, used different methods to pass in the information (i.e. profile and including credentials in code) but nothing is working at the moment.

Stationary answered 28/5, 2015 at 23:47 Comment(5)
So, the AWS SDK just implements a bunch of direct API calls. With AWS, every single call you make takes your private key (or secret above), and uses that to calculate a signature based on your access key, the current timestamp, plus a bunch of other factors. See docs.aws.amazon.com/general/latest/gr/…. It's a longshot, but given that they include the timestamp, perhaps your local environment's time is off?Footpound
Happened when we had passed an incorrect size (Content-Length) in object metadata. (Long version: we were directly passing the input stream from a Java HttpServletRequest to the S3 client, and passing in request.getContentLength() as Content-Length via metadata; when the servlet was (randomly) receiving chunked requests (Transfer-Encoding: chunked), getContentLength() was returning -1 - which led putObject to fail (randomly). Obscure; but clearly our fault because we were passing an incorrect object size.)Destiny
First time visitor: please go through the many answers; there are many scenarios in which you will get this error, and various solutions are given on this pageSurefire
In my case, for opensearch, i had given different info in path and URL...Crusted
This error was occurring for me because my query params were not included as part of my path when signing. So it should be path: "/default/resource?param1=1111&param2=11111", not just path: "/default/resource"Riordan
S
192

After two days of debugging, I finally discovered the problem...

The key I was assigning to the object started with a period i.e. ..\images\ABC.jpg, and this caused the error to occur.

I wish the API provided a more meaningful and relevant error message; alas, I hope this will help someone else out there!
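For anyone hitting the same thing, here is a minimal sketch (plain Python; the key is the one from this answer) of normalizing a key before handing it to putObject, so Windows-style separators and leading dot segments never reach S3:

```python
def sanitize_key(raw_key):
    # S3 object keys use forward slashes; leading ".", ".." or "/"
    # segments end up in the string-to-sign and can break the signature.
    key = raw_key.replace("\\", "/")
    parts = [p for p in key.split("/") if p not in ("", ".", "..")]
    return "/".join(parts)

print(sanitize_key(r"..\images\ABC.jpg"))  # images/ABC.jpg
```

This is only a defensive sketch; the real fix is simply not to build keys from relative filesystem paths.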

Stationary answered 29/5, 2015 at 1:37 Comment(26)
I had the state bucket and key backwards and this is the error you get (signature doesn't match). Wtf terraform?Truly
A leading slash also caused this issue for me. You need just path/to/file, not /path/to/fileGrooms
And for me the issue was white spaces inside the keyMinefield
Replacing /home/user/ with ~ and then changing it back again worked for meNikola
To add to this, I was getting this error message when having a plus sign + in my key.Clive
In my case it was on AWS, so with new S3('key', 'secret', true); the last additional optional useSSL = true needed to be set.Dine
In my case this was caused by having a path in the bucket parameter. Instead of bucket = "bucketname", I had bucket = "bucketname/something". This also gives the Signature does not match error.Pander
I was getting this when I did not provide the Content-Type header in my upload file requestRobbirobbia
I had to replace a plus sign (+) in my URL with %20.Balkh
I had a problem with Spanish accents: Alegría (note the í) was throwing an error.Pothole
I had a problem with an extra URL parameter that I was adding to the query string (&version=1.3). Can't have extra parametersMirthamirthful
I was stuck because my file ended right after the secret key, i.e. no line return...Heathenry
Mine was setting "OriginalFileName" in the header with a leading space / tabTrunkfish
I had a / in the middle of the SERVER_SECRET_KEY and solved after three hours of research...Congratulate
In my case, I was making a POST request, instead of PUT (getSignedUrlPromise method had an operation parameter 'putObject')Leonhard
Came here having this issue using Minio. I can confirm: an HTTP verb mismatch will trigger a signature fail, as will additional characters somewhere. Take this as an example of how NOT to do API error reporting.Dorinedorion
my secret key also has + and failed. how to resolve thisRuder
Adding to the laundry list of potential causes, for me it was the browser environment itself. Seems that some cookies, possibly from AWS logins may interfere causing this error message. Opening the link in Incognito mode has helped at times with the link then starting to work outside of Incognito too. Basically what I'm saying is that even though the link and associated credentials are 100% correct it can still malfunction and become utterly confusing.Ortegal
I have spent practically the entire workday to discover that my kotlin server was setting a content-type of image/jpeg, and my javascript library was setting a content-type header of image/jpg. That one little e... Nearly an entire workday... I read the min.io documentation, installed mc, did mc admin trace etc etc, stared at logs for hours... one e... jpeg... #@#!IM$#@M!Eachelle
In my case, I was copying the signed URL out of a quoted string, and I accidentally included a trailing ` \ ` (backslash) at the end of the URL that was meant to escape the final "Emblements
For .NET C# we have to use: var request = new GetPreSignedUrlRequest() { BucketName = bucketName, Key = objectKey, Verb = HttpVerb.PUT, Expires = DateTime.UtcNow.AddHours(1), ContentType = "image/jpeg" };Orebro
Had the same problem description, here is what helped - I've regenerated the key and that helped. Seems like AWS changed the format and our key was generated 4 years ago. We are sure that none of our credentials were changed before this action, so: regeneration helped. We are using AWS SDK for .NET.Hypertrophy
For me, it was adding region name in the presigning command, ie something like --region ap-south-1 --endpoint-url https://s3.ap-south-1.amazonaws.com in the aws cli commandAlli
agreed, watch out for any special characters for the attachment name.Pretermit
Swapping the bucket name and file name did it for me.Convulsion
Another cause: there seems to be a delay after setting up a brand new bucket before signed URLs will work. Been caught out multiple times by this.Dodd
D
77

I got this error with the wrong credentials. I think there were invisible characters when I pasted them originally.

Dacoit answered 5/9, 2015 at 7:4 Comment(7)
I simply double-clicked on key_hash_lala/key_hash_continues and it selected only one part. Alas, how hard is it to tell the user "wrong password, dude!"?Byram
The first time I had issues copying the key from the downloadable csv. For the second key I created, I just copied it from the browser and didn't have any issuesGunflint
+1 to @Gunflint - copying from the .csv caused a failure - copying directly from the browser and it works a treatMattland
For me, it was a result of wrong credentials as well. I missed a character in my credentials.Dight
for all of us that use double click to select and copy, it won't copy trailing "+" chars!!Meingolda
For me there were an invisible \n at the end of AWS_ACCESS_KEY_ID that were causing the errorHadst
Hello from 2022, same issue :) thank you!Myology
D
41

I had the same error in nodejs. But adding signatureVersion to the S3 constructor helped me:

const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
});
Dichlorodiphenyltrichloroethane answered 8/7, 2019 at 16:20 Comment(5)
Tried many things before i stumbled onto this! This was the answer for me.Prognosticate
Worked for me, file path ok, every else was ok, currently the same function is in use for other app and never give this error in that app. Thanks, OlegAuse
This solved it for me too.Janson
This is what worked for me as well signatureVersion. Would have been helpful if the document had mention about this docs.aws.amazon.com/sdk-for-php/v3/developer-guide/…Riannon
you are a life saver . i was facing this error for the past 8 hoursSubtorrid
P
33

I had the same problem when I tried to copy an object with some UTF8 characters. Below is a JS example:

var s3 = new AWS.S3();

s3.copyObject({
    Bucket: 'somebucket',
    CopySource: 'path/to/Weird_file_name_ðÓpíu.jpg',
    Key: 'destination/key.jpg',
    ACL: 'authenticated-read'
}, cb);

Solved by encoding the CopySource with encodeURIComponent()

Pembrook answered 5/4, 2016 at 20:31 Comment(1)
Thanks, worked for me! I also tried to encode the "Key", since the key also contains UTF8 characters, but it ends up in a wrong directory. Only encoding the CopySource works just fine.Dill
W
33

I've just encountered this and, I'm a little embarrassed to say, it was because I was using an HTTP POST request instead of PUT.

Despite my embarrassment, I thought I'd share in case it saves somebody an hour of head scratching.

Woodworker answered 20/1, 2022 at 13:25 Comment(2)
lol, I'm so glad you shared this -- I did the same thing and didn't even think to check that!Jiffy
You should not be embarrassed, saviour of my day!Baughman
H
26

My access key had some special characters in it that were not properly escaped.

I didn't check for special characters when I did the copy/paste of the keys. Tripped me up for a few mins.

A simple backslash fixed it. Example (not my real access key obviously):

secretAccessKey: 'Gk/JCK77STMU6VWGrVYa1rmZiq+Mn98OdpJRNV614tM'

becomes

secretAccessKey: 'Gk\/JCK77STMU6VWGrVYa1rmZiq\+Mn98OdpJRNV614tM'

Haplosis answered 18/11, 2020 at 20:38 Comment(2)
this is a good catch i too have it in mine, but this also didnt solved my issueBenetta
A quick double click copy and paste, happened to me, misses the / and the ending.Allegorical
E
25

This error seems to occur mostly if there is a space before or after your secret key.
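To guard against this, here is a minimal sketch (plain Python; the environment variable names are the standard AWS ones) that strips stray whitespace before the credentials ever reach a client:

```python
import os

def clean_credential(value):
    # Invisible characters (spaces, tabs, "\n") often sneak in when a
    # key is copy-pasted from a CSV file, browser, or chat client.
    return value.strip()

access_key = clean_credential(os.environ.get("AWS_ACCESS_KEY_ID", ""))
secret_key = clean_credential(os.environ.get("AWS_SECRET_ACCESS_KEY", ""))
```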

Editor answered 1/6, 2020 at 16:59 Comment(2)
Had same problem. Skype sometimes copies values with blank lines. Just paste it to notepad and then copy it without whitespaces.Cecilla
Yes ! Check also if you have spaces in any other headers.Halutz
O
16

For Python, set signature_version to s3v4 (note the Config import from botocore):

import boto3
from botocore.client import Config

s3 = boto3.client(
   's3',
   aws_access_key_id='AKIAIO5FODNN7EXAMPLE',
   aws_secret_access_key='ABCDEF+c2L7yXeGvUyrPgYsDnWRRC1AYEXAMPLE',
   config=Config(signature_version='s3v4')
)
Officiate answered 8/9, 2020 at 10:5 Comment(1)
Indeed. More info here: aws.amazon.com/premiumsupport/knowledge-center/…Ala
E
14

In my case I was using s3.getSignedUrl('getObject') when I needed to be using s3.getSignedUrl('putObject') (because I'm using a PUT to upload my file), which is why the signatures didn't match.

Esquimau answered 28/2, 2020 at 17:45 Comment(2)
Thank you! I was using POST instead of PUT... using PUT just worked.Profiterole
This also fixed my problem. chatgpt gave me wrong code =PEgomania
M
13

In a previous version of the aws-php-sdk, prior to the deprecation of the S3Client::factory() method, you were allowed to place part of the file path (or Key, as it is called in the S3Client->putObject() parameters) in the Bucket parameter. I had a file manager in production use, using the v2 SDK. Since the factory method still worked, I did not revisit this module after updating to ~3.70.0. Today I spent the better part of two hours debugging why I had started receiving this error, and it ended up being due to the parameters I was passing (which used to work):

$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2006-03-01'
]);
$result = $s3Client->putObject([
    'Bucket' => 'awesomecatpictures/catsinhats',
    'Key' => 'whitecats/white_cat_in_hat1.png',
    'SourceFile' => '/tmp/asdf1234'
]);

I had to move the catsinhats portion of my bucket/key path to the Key parameter, like so:

$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2006-03-01'
]);
$result = $s3Client->putObject([
    'Bucket' => 'awesomecatpictures',
    'Key' => 'catsinhats/whitecats/white_cat_in_hat1.png',
    'SourceFile' => '/tmp/asdf1234'
]);

What I believe is happening is that the Bucket name is now being URL Encoded. After further inspection of the exact message I was receiving from the SDK, I found this:

Error executing PutObject on https://s3.amazonaws.com/awesomecatpictures%2Fcatsinhats/whitecats/white_cat_in_hat1.png

AWS HTTP error: Client error: PUT https://s3.amazonaws.com/awesomecatpictures%2Fcatsinhats/whitecats/white_cat_in_hat1.png resulted in a 403 Forbidden

This shows that the / I provided to my Bucket parameter has been through urlencode() and is now %2F.

The way the signature works is fairly complicated, but the issue boils down to this: the bucket and key are used to generate the signature. If they do not match exactly on both the calling client and within AWS, the request will be denied with a 403. The error message does actually point out the issue:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

So, my Key was wrong because my Bucket was wrong.

Mosque answered 4/3, 2019 at 21:12 Comment(1)
Thank you for posting this, when I saw "check your key" I was thinking the access key or secret key was wrong. In my case it was the object key (and bucket). So moving around the bucket and object key values as you describe worked. Amazon needs to clarify what key they're complaining about IMO. Thanks againEustatius
C
9

Actually, in Java I was getting the same error. After spending 4 hours debugging it, I found that the problem was in the metadata of the S3 objects: there was a space when setting Cache-Control on the S3 files. This space was allowed in version 1.6.* but is disallowed in 1.11.*, and thus it was throwing the signature mismatch error.
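The same pitfall can be sketched in plain Python (the header name is just an example): strip metadata values before attaching them to the request.

```python
def clean_metadata(metadata):
    # Newer SDKs sign header values exactly as given; a stray leading or
    # trailing space in e.g. Cache-Control makes the two signatures diverge.
    return {name: value.strip() for name, value in metadata.items()}

print(clean_metadata({"Cache-Control": " max-age=3600 "}))  # {'Cache-Control': 'max-age=3600'}
```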

Cloche answered 21/2, 2017 at 10:33 Comment(1)
Also happens if you pass an incorrect Content-Length in the metadataDestiny
B
8

For me, I used axios, and by default it sends the header

content-type: application/x-www-form-urlencoded

so I changed it to send:

content-type: application/octet-stream

and also had to add this Content-Type to the AWS signature:

const params = {
    Bucket: bucket,
    Key: key,
    Expires: expires,
    ContentType: 'application/octet-stream'
}

const s3 = new AWS.S3()
s3.getSignedUrl('putObject', params)
Blackguardly answered 28/2, 2019 at 3:5 Comment(1)
Same, changing content-type did the trick.Clyve
G
7

Another possible issue might be that the meta values contain non US-ASCII characters. For me it helped to UrlEncode the values when adding them to the putRequest:

request.Metadata.Add(AmzMetaPrefix + "artist", HttpUtility.UrlEncode(song.Artist));
request.Metadata.Add(AmzMetaPrefix + "title", HttpUtility.UrlEncode(song.Title));
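The same idea sketched in Python for comparison (the metadata names and values here are made up): percent-encode user metadata so only US-ASCII reaches the request headers.

```python
from urllib.parse import quote

# S3 user-metadata header values must be US-ASCII, so non-ASCII values
# are percent-encoded on the way in (and decoded again when read back).
metadata = {
    "x-amz-meta-artist": quote("Alegría"),
    "x-amz-meta-title": quote("Señor Blues"),
}
print(metadata["x-amz-meta-artist"])  # Alegr%C3%ADa
```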
Gully answered 14/12, 2018 at 9:16 Comment(0)
A
7

I had the same issue; the problem was that I imported the wrong environment variable, which means my secret key for AWS was wrong. Based on reading all the answers, I would verify that your access ID and secret key are right and that there are no additional characters or anything.

Actinopod answered 6/1, 2021 at 19:57 Comment(0)
L
6

If none of the other mentioned solutions works for you, then try using

aws configure

This command (Getting started with the AWS CLI) will open a set of options asking for keys, region and output format.

Hope this helps!

Lumberjack answered 19/2, 2018 at 6:21 Comment(0)
H
5

In my case I parsed an S3 url into its components.

For example:

Url:    s3://bucket-name/path/to/file

Was parsed into:

Bucket: bucket-name
Path:   /path/to/file

Having the path part containing a leading '/' failed the request.
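A minimal Python sketch of the pitfall: urlparse keeps the leading slash on the path, which must be stripped before the path is used as an S3 key.

```python
from urllib.parse import urlparse

def parse_s3_url(url):
    parsed = urlparse(url)
    # parsed.path is "/path/to/file"; an S3 key must not keep that
    # leading slash, or the signed request will not match.
    return parsed.netloc, parsed.path.lstrip("/")

print(parse_s3_url("s3://bucket-name/path/to/file"))  # ('bucket-name', 'path/to/file')
```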

Hedonic answered 13/11, 2018 at 9:44 Comment(0)
S
5

I had the same issue. I had the default method, PUT, set to define the pre-signed URL, but was trying to perform a GET. The error was due to the method mismatch.

Sherl answered 29/5, 2019 at 5:41 Comment(2)
This worked for me. The HTTP verb (PUT, POST) used to generate the signed URL must be the same as the verb used when performing an upload with that URL.Colemancolemanite
It was the opposite for me, i.e. I was using GET to define the presigned URL and then was trying to use the URL with the PUT method, which obviously resulted in a 403.Archaism
S
5

When I knowingly gave a wrong secret key, of value "secret", it gave this error. I was expecting some more useful error message detail, like "authentication failed" or similar.

Surefire answered 15/3, 2021 at 14:23 Comment(0)
S
4

Most of the time it happens because of the wrong key (AWS_SECRET_ACCESS_KEY). Please cross-verify your AWS_SECRET_ACCESS_KEY. Hope it will work...

Serajevo answered 26/11, 2019 at 5:25 Comment(0)
H
4

This issue happened to me because I was accidentally assigning the value of the ACCESS_KEY_ID to SECRET_ACCESS_KEY_ID. Once this was fixed everything worked fine.

Horizontal answered 19/8, 2021 at 10:46 Comment(0)
I
3

I just experienced this uploading an image to S3 using the AWS SDK with React Native. It turned out to be caused by the ContentEncoding parameter.

Removing that parameter "fixed" the issue.

Instalment answered 9/3, 2018 at 18:17 Comment(0)
E
3

Generating a fresh access key worked for me.

Elisavetpol answered 1/8, 2019 at 11:22 Comment(1)
fresh access key worked for me too - thankfully i got the hint from reading github.com/aws/aws-sdk-js/issues/86#issuecomment-153433220 and in my case it was SQS that was throwing the exception in the title. The keys I was earlier using (when getting exception) were 97 days old with exclamation mark in the IAM dashboardPerforation
A
3

After debugging and spending a lot of time: in my case, the issue was with the access_key_id and secret_access_key. Double-check your credentials, or generate new ones if possible, and make sure you are passing the credentials in params.

Amparoampelopsis answered 27/6, 2020 at 18:13 Comment(1)
When I read the above answer, I double-checked my secret key and realized that I have added / at the end.Realist
G
2

Like others, I also had a similar issue, but in the Java SDK v1. For me, the following 2 fixes helped:

  1. My key to the object looked like this: /path/to/obj/. First, I removed the / at the beginning.
  2. Point 1 alone did not solve the issue. I also upgraded my SDK version from 1.9.x to 1.11.x.

After applying both fixes, it worked. So my suggestion is: don't slog it out. If nothing else is working, just try upgrading the lib.

Gollin answered 4/6, 2021 at 17:43 Comment(0)
C
2

I have spent 8 hours trying to fix this issue. For me, everything mentioned in all the answers was fine. The keys were correct and tested through the CLI. I was using SDK V3, which is the latest and doesn't need the signature version. It finally turned out to be a matter of passing a wrong object in the Body! (Not text, nor an array buffer.) Yes, it's one of the most stupid error messages I have ever seen in my 16-year career. AWS sometimes drives me crazy.

Curtsy answered 14/6, 2023 at 9:35 Comment(1)
I literally spent 1.5 days to fix this issue. This suggestion helped. I was uploading a blob object earlier but then I updated it to array buffer with the content type as 'application/octet-stream' and it worked.Shermy
A
1

I had a similar error, but for me it seemed to be caused by re-using an IAM user to work with S3 in two different Elastic Beanstalk environments. I treated the symptom by creating an identically permissioned IAM user for each environment and that made the error go away.

Arrant answered 14/6, 2017 at 23:55 Comment(0)
K
1

I don't know if anyone else hit this issue while trying to test the outputted URL in the browser, but if you are using Postman and copy the generated AWS URL from the Raw tab, the escaped backslashes will give you the above error.

Use the Pretty tab to copy and paste the URL to see if it actually works.

I ran into this issue recently and this solution solved it for me. It's for testing purposes, to see if you actually retrieve the data through the URL.

This answer is a reference for those who try to generate a download or temporary link from AWS, or generally generate a URL from AWS to use.

Kumasi answered 6/3, 2019 at 8:5 Comment(1)
can you please tell me how you solved that issue? it is working fine in postman but not in nodejsTanbark
P
1

If you are an Android developer and are using the signature function from the AWS sample code, you are most likely wondering why ListS3Object works but not GetS3Object. This is because when you set setDoOutput(true) and use the GET HTTP method, Android's HttpURLConnection switches the request to a POST, thus invalidating your signature. Check my original post of the issue.

Pacha answered 30/11, 2021 at 8:10 Comment(0)
T
1

I was getting this error in our shared environment where the SDK was being used; with the same key/secret and the aws cli, it worked fine. The build system script had a space after the key, secret, and session keys, which the code read in as well. So the fix for me was to adjust the build script to remove the spaces after the variables being used.

Just adding this for anyone who might miss that frustrating invisible space at the end of their creds.

Teshatesla answered 24/2, 2022 at 1:2 Comment(0)
C
0

In my case, the issue was that we were using the wrong bucket name. AWS S3 buckets have specific naming conventions that we need to follow. You can find the naming convention rules in the link below:

Bucket Naming Rules

For example:

Bucket Name: g7asset-shwe (throwing error)

Bucket Name: g7asset (working properly)

Additionally, it's important to note that S3 does not actually have a "folder" structure. Each object in a bucket has a unique key, and the object is accessed through that key.

While some S3 utilities, including the AWS console, simulate a "folder" structure, it's not directly related to how S3 functions. In other words, you don't need to worry about it. Simply create the object with a forward slash (/) in its key, and everything will work as expected.

Cotswolds answered 28/5, 2015 at 23:48 Comment(1)
This is incorrect. "Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-)."Rooftop
I
0

In my case the bucketname was wrong, it included the first part of the key (bucketxxx/keyxxx) - there was nothing wrong with the signature.

Insolvency answered 9/10, 2018 at 13:56 Comment(0)
H
0

In my case (Python) it failed because I had these two lines of code in the file, inherited from older code:

http.client.HTTPConnection._http_vsn = 10
http.client.HTTPConnection._http_vsn_str = 'HTTP/1.0'

Hambley answered 28/11, 2018 at 13:33 Comment(0)
T
0

I encountered this in a Docker image, with a non-AWS S3 endpoint, when using the latest awscli version available to Debian stretch, i.e. version 1.11.13.

Upgrading to CLI version 1.16.84 resolved the issue.

To install the latest version of the CLI with a Dockerfile based on a Debian stretch image, instead of:

RUN apt-get update
RUN apt-get install -y awscli
RUN aws --version

Use:

RUN apt-get update
RUN apt-get install -y python-pip
RUN pip install awscli
RUN aws --version
Thunderclap answered 7/1, 2019 at 12:40 Comment(0)
E
0

I had to set

Aws.config.update({
  credentials: Aws::Credentials.new(access_key_id, secret_access_key)
})

before, with the Ruby aws-sdk v2 (there is probably something similar to this in the other languages as well)

Enfield answered 16/1, 2019 at 14:7 Comment(0)
H
0

The issue in my case was the API Gateway URL used to configure Amplify, which had an extra slash at the end...

The queried url looked like https://....amazonaws.com/myapi//myendpoint. I removed the extra slash in the conf and it worked.

Not the most explicit error message of my life.
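A sketch of the kind of normalization that would have caught this (plain Python; the URL is made up), collapsing duplicate slashes in the path while leaving the scheme intact:

```python
import re

def collapse_slashes(url):
    # Only normalize the part after "scheme://"; the "https://" prefix
    # itself must keep its double slash.
    scheme, sep, rest = url.partition("://")
    return scheme + sep + re.sub(r"/{2,}", "/", rest)

print(collapse_slashes("https://example.amazonaws.com/myapi//myendpoint"))
```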

Hanover answered 29/3, 2019 at 17:8 Comment(0)
D
0

In my case, I was calling s3request.promise().then() incorrectly, which caused two executions of the request when only one call was made.

What I mean is that I was iterating through 6 objects, but 12 requests were made (you can check by logging in the console or debugging the network in the browser).

Since the timestamp for the second, unwanted request did not match the signature of the first one, this produced the issue.

Diphenyl answered 25/4, 2019 at 21:6 Comment(0)
H
0

Got this error while uploading document to CloudSearch through Java SDK. The issue was because of a special character in the document to be uploaded. The error "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method." is very misleading.

Henleigh answered 25/7, 2019 at 6:23 Comment(0)
A
0

Like others have said, I had this exact same problem and it turned out to be related to the password / access secret. I generated a password for my s3 user that was not valid, and it didn't inform me. When trying to connect with the user, it gave this error. It doesn't seem to like certain or all symbols in passwords (at least for Minio)

Altogether answered 25/9, 2019 at 17:22 Comment(0)
S
0

I was getting the same error for the following reason:

I had entered the right credentials, but via copy-paste, so junk characters may have been inserted. I entered them manually, ran the code, and now it's working fine.

Thank you

Selfrenunciation answered 26/9, 2019 at 6:7 Comment(0)
T
0

I solved this issue by adding apiVersion inside AWS.S3(); then it works perfectly for the S3 signed URL.

Change from

var s3 = new AWS.S3();

to

var s3 = new AWS.S3({apiVersion: '2006-03-01'});

For more detailed examples, can refer to this AWS Doc SDK Example: https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javascript/example_code/s3/s3_getsignedurl.js

Theory answered 26/11, 2019 at 15:15 Comment(0)
R
0

Just to add to the many different ways this can show up.

If you are using Safari on iOS and are connected to the Safari Technology Preview console, you will see the same problem. If you disconnect from the console, the problem will go away.

Of course it makes troubleshooting other issues difficult but it is a 100% repro.

I am trying to figure out what I can change in STP to stop it from doing this but have not found it yet.

Row answered 13/2, 2020 at 3:34 Comment(0)
G
0

I got this error while trying to copy an object. I fixed it by encoding the copySource. This is actually described in the method documentation:

Params: copySource – The name of the source bucket and key name of the source object, separated by a slash (/). Must be URL-encoded.

CopyObjectRequest objectRequest = CopyObjectRequest.builder()
                .copySource(URLEncoder.encode(bucket + "/" + oldFileKey, "UTF-8"))
                .destinationBucket(bucket)
                .destinationKey(newFileKey)
                .build();
Gisele answered 7/4, 2020 at 2:3 Comment(0)
L
0

This mostly happens when you take a SECRET key and pass it to the Elastic client.

e.g: Secret Key: ABCW1233**+OxMMMMMMM8x**

While configuring in the client, You should only pass: ABCW1233**(The part before the + sign).

Lund answered 18/5, 2020 at 9:39 Comment(2)
It looks like this is a real key, that is NOT a good idea to publish on a public website such as SOIngmar
It was not a real key but just random digits, however on your suggestion, I have made it look more like a total example key. Thank you.Lund
P
0

In my case, I was using S3 (uppercase) as the service name when making the request in Postman with the AWS Signature authorization method

Pollux answered 25/5, 2020 at 10:28 Comment(1)
can you please add more detail where to ad AWS Sign ?Joanjoana
L
0

Weirdly, I previously had the error The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. There was an answer on Stack Overflow which required adding AWS_S3_REGION_NAME = 'eu-west-2' (your region) and AWS_S3_SIGNATURE_VERSION = "s3v4".

After doing that, the previous error cleared, but I ended up with this signature error again. I searched for answers until I ended up removing AWS_S3_SIGNATURE_VERSION = "s3v4"; then it worked. Placing it here in case it helps someone. I am using Django, by the way.

Lambrequin answered 16/8, 2020 at 15:1 Comment(0)
W
0

I was facing the same issue. This is the code snippet that was causing it:

<iframe title="PDF Viewer" src={`${docPdfLink}&embed=true`} />

For me, there were two problems:

  1. Using embed at the end of the link was causing a signature mismatch.
  2. The file in S3 had a content type other than application/pdf, due to which I wasn't able to render the PDF even after fixing the 1st point.

So, Here is what I did in code:

<iframe title="PDF Viewer" src={docPdfLink} />

and here, in the S3 bucket, set the file's content type. Besides, we also need to make sure that whenever we add or create a PDF in S3, it has the content type application/pdf

Willi answered 23/6, 2021 at 15:16 Comment(0)
V
0

These changes worked for me. Modified the code

FROM: const s3 = new AWS.S3();

TO: const s3 = new AWS.S3({ apiVersion: '2006-03-01', signatureVersion: 'v4', });

Changed the method call from POST to PUT.

Viyella answered 3/9, 2021 at 14:56 Comment(0)
B
0

I had the same issue in C#. It turned out that the issue was coming from the way RestSharp returns the body when you try to access it directly. In our case, it was with the /feeds/2021-06-30/documents endpoint with this body:

{
    "contentType":"text/xml; charset=UTF-8"
}

The issue arises when signing the request: in the AWSSignerHelper class, the HashRequestBody method has the following code:

public virtual string HashRequestBody(IRestRequest request)
{
    Parameter body = request.Parameters.FirstOrDefault(parameter => ParameterType.RequestBody.Equals(parameter.Type));
    string value = body != null ? body.Value.ToString() : string.Empty;
    return Utils.ToHex(Utils.Hash(value));
}

At this point the value of body.Value.ToString() will be:

{contentType:text/xml; charset=UTF-8}

It is missing the double quotes, which RestSharp adds when it posts the request; when you access the value like that, however, it doesn't, which gives an invalid hash because the value isn't the same as the one sent.

I replaced the code with this for the moment, and it works:

public virtual string HashRequestBody(IRestRequest request)
{
    Parameter body = request.Parameters.FirstOrDefault(parameter => ParameterType.RequestBody.Equals(parameter.Type));
    string value = body != null ? body.Value.ToString() : string.Empty;
    if (body?.ContentType == "application/json")
    {
        value = Newtonsoft.Json.JsonConvert.SerializeObject(body.Value);
    }
    return Utils.ToHex(Utils.Hash(value));
}
Bloodline answered 7/9, 2021 at 15:4 Comment(0)
Y
0

As per the Java docs on uploading files to an S3 bucket: if you are uploading Amazon Web Services KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure Amazon Web Services Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

So you may need to configure the signature version 4.

Yeager answered 1/12, 2021 at 10:10 Comment(0)
S
0

In my case the bucket was missing a CORS configuration. This helped:

[{
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET","HEAD","POST","PUT"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
}]
Strepphon answered 26/6, 2022 at 12:4 Comment(0)
C
0

I am using the Java SDK and got the same error. For me it was because I was sending special characters in the request: the Korean letters of a file name. The specific location was:

com.amazonaws.services.s3.model.PutObjectRequest request.metadata.userMetadata

I realised that I didn't really need to send this information, so removing it fixed my error.
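If you can't simply drop the metadata, here is a hedged sketch (a hypothetical helper, not part of any SDK) of filtering out non-ASCII entries before the upload. S3 user-defined metadata travels as x-amz-meta-* HTTP headers, which is why non-ASCII values can break the signature calculation in some SDKs:

```python
def ascii_safe_metadata(metadata: dict) -> dict:
    """Keep only metadata entries whose key and value are pure ASCII.

    Non-ASCII values (e.g. Korean file names) in user metadata can
    corrupt the signed-headers calculation; dropping or re-encoding
    them avoids the signature mismatch.
    """
    return {
        key: value
        for key, value in metadata.items()
        if key.isascii() and str(value).isascii()
    }
```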

Concertize answered 19/7, 2022 at 0:42 Comment(0)
H
0

I was getting this same error while downloading an S3 file during a CloudFormation::Init procedure. The issue was that the folder name in S3 had a space in it. I moved the files to a new folder with an underscore instead of a space, and that fixed the issue.
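A sketch of why the space matters when a URL is built by hand (hypothetical helper; bucket, region, and key are placeholders): the key must be percent-encoded, otherwise the URL the client signs differs from the one S3 reconstructs:

```python
from urllib.parse import quote

def s3_object_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style S3 URL, percent-encoding the key.

    A space in a folder or file name must become %20 in the URL;
    quote() leaves the '/' separators intact by default.
    """
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"
```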

Hypochondria answered 20/10, 2022 at 15:15 Comment(0)
M
0

I was facing the same issue with CloudFront in front of S3, and my solution was to stop forwarding the "Host" header in the CloudFront origin request policy.

Hope it solves someone else's problem.


Manufactory answered 3/2, 2023 at 7:23 Comment(2)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.Penalize
Please add further details to your answers.Enow
U
0

Also stuck on this for hours... it turns out to be an issue/bug on the AWS side, as per this GitHub issue comment. The suggested solution is to specify the AWS endpoint directly:

import boto3

region = "eu-west-1"  # substitute your bucket's region

s3 = boto3.client(
    's3',
    endpoint_url=f'https://s3.{region}.amazonaws.com',
    config=boto3.session.Config(s3={'addressing_style': 'virtual'})
)
User answered 21/6, 2023 at 16:29 Comment(0)
S
0

I encountered the same error message when using the Amazon SES SDK to instantiate an AmazonSimpleEmailServiceClient object and subsequently GetSendStatistics.

I was using my administrative-level IAM user's credentials to connect ... which failed with the familiar error: "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."

I resolved this by creating an access key under My Security Credentials for my IAM user. When I used the new access key's credentials, my connection to Amazon SES via the SDK worked.

Slickenside answered 26/7, 2023 at 18:56 Comment(0)
C
0

Check that all the metadata values you are sending to S3 are of type string; S3 doesn't support non-string values.
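A hedged sketch (hypothetical helper, not part of any SDK) of coercing metadata values to strings before the upload:

```python
def stringify_metadata(metadata: dict) -> dict:
    """Coerce every metadata value to str before handing it to S3.

    Numbers, booleans, etc. in the metadata dict can make the signed
    headers differ from what is actually sent on the wire.
    """
    return {key: str(value) for key, value in metadata.items()}
```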

Corrales answered 1/8, 2023 at 16:35 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.Penalize
T
0

In my case, I was using "aws-sdk" (version 2) for S3 functionality in my Node.js application. Switching to @aws-sdk/client-s3 (version 3) resolved this issue.

Trim answered 28/9, 2023 at 20:41 Comment(0)
S
0

I have two web applications: one in production and one in development. The S3 upload works fine in the production application but not in the development one. On closer inspection I noticed that the S3 SDK versions were different: the development application uses the latest S3 SDK, while the production application uses an older version. After downgrading the new application to the same S3 SDK version as the production application, the S3 upload worked there too. So clearly, and not surprisingly, the SDK versions differ in how bucket/folder paths are handled, among other things.

Solorio answered 16/11, 2023 at 21:41 Comment(0)
E
0

I'm working with Go (GoLang, AWS SDK v2), and the problem was that if you want to set an expiration date for your presigned request, you must set it in the optFns ...func(*s3.PresignOptions) argument, i.e., the third and optional argument to PresignPutObject.

I had this:

const validity = time.Second * 60 * 5
expires := time.Now().Add(validity)

request, err := c.client.PresignPutObject(context.TODO(), &s3.PutObjectInput{
    Bucket:      aws.String(appconfig.Get().S3Bucket),
    Key:         aws.String(key),
    Expires:     &expires,
})

But this is what you actually need:

const validity = time.Second * 60 * 5

request, err := c.client.PresignPutObject(context.TODO(), &s3.PutObjectInput{
    Bucket:      aws.String(appconfig.Get().S3Bucket),
    Key:         aws.String(key),
}, func(opts *s3.PresignOptions) { opts.Expires = validity })
Electrokinetics answered 14/2 at 18:27 Comment(0)
K
0

For me, I'm using React and Axios to send the API request.

it was withCredentials: true:

const instance = axios.create({
    withCredentials: true,
});

Removing withCredentials: true made it work:

const instance = axios.create();

withCredentials: true in Axios enables sending cookies and authorization headers with cross-origin requests.

Kneeland answered 6/3 at 22:45 Comment(0)
L
-1

I solved this issue by setting environment variables.

export AWS_ACCESS_KEY=
export AWS_SECRET_ACCESS_KEY=

In IntelliJ + py.test, I set environment variables with [Run] > [Edit Configurations] > [Configuration] > [Environment] > [Environment variables]

Lipase answered 14/12, 2018 at 2:21 Comment(0)
T
-1

I got this when I had quotes around the key in ~/.aws/credentials.

aws_secret_access_key = "KEY"
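For reference, the credentials file expects bare, unquoted values (a sketch with placeholder values, default profile assumed):

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```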

Timeworn answered 24/3, 2020 at 4:45 Comment(0)
O
-1

For no good reason that I could see, deleting the bucket and re-creating it worked for me.

Overbid answered 22/4, 2021 at 6:8 Comment(0)
G
-1

It may not be 100% the answer to the OP, but some people might find this useful. In my case, it was one of those times when the IDE autocompletes the code and you don't check afterwards:

My bean had

new BasicAWSCredentials(storageProperties.getAccessKey(), storageProperties.getAccessKey())))

So basically getAccessKey() twice instead of getSecret() as the second argument, so it should be:

new BasicAWSCredentials(storageProperties.getAccessKey(), storageProperties.getSecret())))
Guillema answered 15/11, 2022 at 1:24 Comment(0)
C
-1

I had the same error [1] when trying to get a file from S3 using Ansible. My mistake was to reuse the presigned URL returned by aws_s3 when I put the file to S3, in order to download the file later in my Ansible role.

- name: Upload CVE report to S3
  amazon.aws.aws_s3:
    profile: "{{ wazuh_cve_report_aws_boto_profile }}"
    bucket: "{{ wazuh_cve_report_aws_s3_bucket }}"
    object: "{{ wazuh_cve_report_aws_s3_object_prefix }}"
    src: "{{ wazuh_cve_report_generated_reports_dir }}/vulnerabilities.csv"
    region: "{{ wazuh_cve_report_aws_region }}"
    mode: put
    overwrite: different
    encrypt: true
  register: wazuh_cve_report_s3_object_register

- name: Debug S3 object
  ansible.builtin.debug:
    msg: "{{ wazuh_cve_report_s3_object_register.url }}"

Trying to get the file using the URL in wazuh_cve_report_s3_object_register.url results in the SignatureDoesNotMatch error code.

To remediate this problem, I had to use another task with mode geturl to get a presigned URL valid for downloading the file I had just uploaded.

- name: Upload CVE report to S3
  amazon.aws.aws_s3:
    profile: "{{ wazuh_cve_report_aws_boto_profile }}"
    bucket: "{{ wazuh_cve_report_aws_s3_bucket }}"
    object: "{{ wazuh_cve_report_aws_s3_object_prefix }}"
    src: "{{ wazuh_cve_report_generated_reports_dir }}/vulnerabilities.csv"
    region: "{{ wazuh_cve_report_aws_region }}"
    mode: put
    overwrite: different
    encrypt: true

- name: Get CVE report presigned URL for Downloading from S3
  amazon.aws.aws_s3:
    profile: "{{ wazuh_cve_report_aws_boto_profile }}"
    bucket: "{{ wazuh_cve_report_aws_s3_bucket }}"
    object: "{{ wazuh_cve_report_aws_s3_object_prefix }}"
    src: "{{ wazuh_cve_report_generated_reports_dir }}/vulnerabilities.csv"
    region: "{{ wazuh_cve_report_aws_region }}"
    # 7 days
    expiry: 604800
    mode: geturl
  register: wazuh_cve_report_s3_object_register

- name: Debug S3 object
  ansible.builtin.debug:
    msg: "{{ wazuh_cve_report_s3_object_register.url }}"

We can't GET a file with a presigned URL that was signed for the PUT method.

When you create a presigned URL, you must provide your security credentials, and then specify the following:

  • An Amazon S3 bucket
  • An object key (when downloading, the object in your Amazon S3 bucket; when uploading, the name of the file to upload)
  • An HTTP method (GET for downloading objects or PUT for uploading)
  • An expiration time interval [2]

Hoping my answer may help someone.

Regards

[1] The request signature we calculated does not match the signature you provided. Check your key and signing method.

[2] https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html

Coumas answered 4/7, 2023 at 11:37 Comment(0)
M
-2

In my case I had to wait for a couple of hours between uploading files into the bucket and generating pre-signed URLs for them.

Marrin answered 8/4, 2020 at 11:57 Comment(1)
It is not even a solution. Nobody will wait for hours to upload a file.Cherin
W
-2

In my case, the incorrect order of the API call parameters caused this.

For example, when I called /api/call1?parameter1=x&parameter2=y I received the following message:

"The signature of the request did not match calculated signature."

Upon swapping the parameters (/api/call1?parameter2=y&parameter1=x), the API call worked as expected.

Very frustrating, as the API documentation itself listed the parameters in a different order. This also wasn't the only call this happened for.

Westphal answered 11/8, 2021 at 5:38 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.