AWS Lambda attached to S3 ObjectCreated event returns "NoSuchKey: The specified key does not exist:"

I am uploading a file from an Android device to an S3 bucket with this code:

TransferUtility transferManager = new TransferUtility(s3, context);
transferManager.upload(..,..,..);

After that, I have a Lambda trigger attached to the S3:ObjectCreated event.

When the Lambda is executed, I try to get the file via the S3.getObject() function. Unfortunately, I sometimes receive a "NoSuchKey: The specified key does not exist:" error. After that, the Lambda retries a couple of times, successfully gets the file, and proceeds with its execution.
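For context, the handler itself does nothing unusual. A simplified sketch of what it does (Node.js with the AWS SDK for JavaScript v2; the names here are illustrative, not my exact code):

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Bucket and key come straight from the S3 event record,
    // so the key is exactly what S3 reported as created.
    const record = event.Records[0].s3;
    const params = {
        Bucket: record.bucket.name,
        Key: decodeURIComponent(record.object.key.replace(/\+/g, " "))
    };

    s3.getObject(params, function(err, data) {
        if (err) {
            // This is where "NoSuchKey: The specified key does not exist" shows up.
            return callback(err);
        }
        // ... process data.Body ...
        callback(null, "done");
    });
};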

In my opinion, the Lambda function is executed before the file in S3 is available. But that should not happen by design: the trigger should fire only after the file upload to S3 is complete.

According to the announcement on Aug 4, 2015:

Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.

Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3.

But prior to this:

All regions except US Standard (renamed to US East (N. Virginia)) supported read-after-write consistency for new objects uploaded to Amazon S3.

My bucket is in the US East (N. Virginia) region and it was created before Aug 4, 2015. I don't know whether this could be the issue...

EDIT: 20.10.2016

According to the documentation, an EVENTUALLY CONSISTENT READ operation may return NO RESULT even if two or more WRITE operations were completed before it:

In this example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2 (read 2). For a consistent read, R1 and R2 both return color = ruby. For an eventually consistent read, R1 and R2 might return color = red, color = ruby, or no results, depending on the amount of time that has elapsed.

[Figure from the Amazon S3 documentation: consistency example]

Plectrum answered 17/8, 2016 at 8:56 Comment(6)
Please mention the endpoint that you are using and add more code to your question. This may help: forums.aws.amazon.com/ann.jspa?annID=3112 (Huertas)
I am using "s3.amazonaws.com". I believe that link is outdated, because it says "Amazon S3 buckets in all Regions provide read-after-write consistency" without any additional information. (Plectrum)
I'm having a similar issue, though not always. I find this happens with large files. I have an event firing a Lambda, and most of the time the Lambda then tries to move the file and succeeds. On larger files (38 MB JPG), it says the file does not exist and fails. Once the Lambda re-initialises to retry after the failure, it works fine. It seems ridiculous that the event would fire before the file is accessible. (Accad)
Same here. Even with small files. @Accad (Plectrum)
Has anyone had any luck fixing this yet? (Osi)
+1, we use the Singapore region, and it faces this issue for new files. On average, this happens once for every 500 files uploaded. (Bayreuth)

Sometimes when files are large, they are uploaded using multipart upload, and a trigger is sent to the Lambda before the file is fully uploaded. Presumably, this is related to the event type that triggers the Lambda function. In the event configuration of the Lambda trigger, make sure you add both put and complete multipart upload to the events, as sketched below.
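If you manage the bucket notification in code rather than through the console, a rough sketch of such a configuration with the AWS SDK for JavaScript v2 (the bucket name and Lambda ARN are placeholders) would be:

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

s3.putBucketNotificationConfiguration({
    Bucket: "<bucket-name>",
    NotificationConfiguration: {
        LambdaFunctionConfigurations: [{
            LambdaFunctionArn: "<lambda-function-arn>",
            // Cover both plain PUTs and completed multipart uploads;
            // "s3:ObjectCreated:*" would also cover every creation type.
            Events: [
                "s3:ObjectCreated:Put",
                "s3:ObjectCreated:CompleteMultipartUpload"
            ]
        }]
    }
}, function(err) {
    if (err) console.error(err);
});

The same two event types can also be ticked in the S3 console under the bucket's event notification settings.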

Junction answered 16/9, 2016 at 23:36 Comment(6)
I wasn't aware this was an option, but I will definitely look into it. (Accad)
This issue is not connected to the file size. It happens even with small files (~200 kB). (Plectrum)
@Accad can you update us on what happened when you added the multipart upload to the event? Thanks. (Junction)
@AhmedAbouhegaza unfortunately I'm no longer working on the project, so it's unlikely I'll get a chance to test this. Did you have any luck with it? (Accad)
@Accad I faced the issue, added the multipart event, and it worked fine for me. It had even happened for small files. (Junction)
@AhmedAbouhegaza unfortunately no. Maybe it's a good idea to check whether you are doing OVERWRITE PUT operations; in that case the read-after-write consistency is EVENTUAL, and this could cause the errors. P.S.: I am not doing such operations and still get the same errors. (Plectrum)

To protect against this issue, an S3 SDK waiter can be used: once the notification has been received, we can make sure the object is actually there before reading it. For example, for the AWS JavaScript SDK you can use the following snippet:

s3.waitFor("objectExists", {
    Bucket: "<bucket-name>",
    Key: "<object-key>"
}, callback);

Please note that waitFor will increase the execution time of your Lambda, so you will need to extend its timeout. According to the documentation, the check is performed every 5 seconds, up to 20 times, so the waiter alone can take up to about 100 seconds in the worst case. Setting the Lambda timeout generously (a minute or more) should help to avoid execution timeout exceptions.
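Putting it together, a rough sketch of a handler that waits for the object and then reads it (AWS SDK for JavaScript v2 with an async handler; the details are illustrative, not a definitive implementation):

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

exports.handler = async (event) => {
    const record = event.Records[0].s3;
    const params = {
        Bucket: record.bucket.name,
        Key: decodeURIComponent(record.object.key.replace(/\+/g, " "))
    };

    // Poll until the object is visible (by default every 5 seconds,
    // up to 20 attempts), then read it.
    await s3.waitFor("objectExists", params).promise();
    const data = await s3.getObject(params).promise();
    return { size: data.ContentLength };
};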

Link to the documentation: AWS JavaScript SDK S3 Class

Obliging answered 4/1, 2017 at 23:14 Comment(0)
