Uploading file to AWS S3 through Chalice API call
I'm trying to upload a file to my S3 bucket through Chalice (I'm playing around with it currently, still new to this). However, I can't seem to get it right.

I have AWS set up correctly, and working through the tutorial successfully returns some messages. Then I try to do some uploading/downloading, and the problem shows up.

s3 = boto3.resource('s3', region_name='us-west-2')  # Oregon
BUCKET = 'mybucket'
UPLOAD_FOLDER = os.path.abspath('')  # the file I want to upload is in the same folder as my app.py, so I simply get the current folder name

@app.route('/upload/{file_name}', methods=['PUT'])
def upload_to_s3(file_name):
    s3.meta.client.upload_file(UPLOAD_FOLDER+file_name, BUCKET, file_name)
    return Response(body='upload successful',
                    status_code=200,
                    headers={'Content-Type': 'text/plain'})

Please don't worry about how I set my file path, unless that's the issue, of course.

I got the error log:

No such file or directory: ''

In this case, file_name is just mypic.jpg.

I'm wondering why the UPLOAD_FOLDER part is not being picked up. Also, for reference, it seems like using an absolute path is troublesome with Chalice (while testing, I've seen the code being moved to /var/task/).
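A side note on the path handling: UPLOAD_FOLDER + file_name concatenates with no path separator, so the resulting path is wrong even locally. A minimal sketch of the safer join (assuming the file actually ships with the deployment package, which is a separate question):

import os

# os.path.join inserts the separator that plain string concatenation drops
file_path = os.path.join(UPLOAD_FOLDER, file_name)
s3.meta.client.upload_file(file_path, BUCKET, file_name)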

Does anyone know how to set it up correctly?

EDIT:

The complete script:

from chalice import Chalice, Response

import boto3

app = Chalice(app_name='helloworld')  # I'm just modifying the script I used for the tutorial 
s3 = boto3.client('s3', region_name='us-west-2')
BUCKET = 'chalicetest1'

@app.route('/')
def index():
    return {'status_code': 200,
            'message': 'welcome to test API'}

@app.route('/upload/{file_name}', methods=['PUT'], content_types=['application/octet-stream'])
def upload_to_s3(file_name):
    try:
        body = app.current_request.raw_body
        temp_file = '/tmp/' + file_name
        with open(temp_file, 'wb') as f:
            f.write(body)
        s3.upload_file(temp_file, BUCKET, file_name)
        return Response(body='upload successful',
                        headers={'Content-Type': 'text/plain'},
                        status_code=200)
    except Exception as e:
        app.log.error('error occurred during upload: %s' % e)
        return Response(body='upload failed',
                        headers={'Content-Type': 'text/plain'},
                        status_code=400)
Convoke asked 12/11, 2017 at 10:56
Which file, from where, do you want to upload to what location? Your app.py is deployed to AWS API Gateway and Lambda functions, i.e. "same folder" probably is not true after it is deployed.Brawn
The file is in the same folder as app.py. I also tried using an absolute path for the file; neither works. I guess the correct question should be "what is the correct way of doing it?"Convoke

I got it running and this works for me as app.py in an AWS Chalice project:

from chalice import Chalice, Response
import boto3

app = Chalice(app_name='helloworld')

BUCKET = 'mybucket'  # bucket name
s3_client = boto3.client('s3')


@app.route('/upload/{file_name}', methods=['PUT'],
           content_types=['application/octet-stream'])
def upload_to_s3(file_name):

    # get raw body of PUT request
    body = app.current_request.raw_body

    # write body to tmp file
    tmp_file_name = '/tmp/' + file_name
    with open(tmp_file_name, 'wb') as tmp_file:
        tmp_file.write(body)

    # upload tmp file to s3 bucket
    s3_client.upload_file(tmp_file_name, BUCKET, file_name)

    return Response(body='upload successful: {}'.format(file_name),
                    status_code=200,
                    headers={'Content-Type': 'text/plain'})

You can test this with curl and its --upload-file option directly from the command line:

curl -X PUT https://YOUR_API_URL_HERE/upload/mypic.jpg --upload-file mypic.jpg --header "Content-Type:application/octet-stream"
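The same request can also be issued from Python; here is a minimal client sketch using the requests library (the library and the placeholder URL are assumptions, not part of the Chalice app):

import requests

# substitute the endpoint printed by `chalice deploy`
url = 'https://YOUR_API_URL_HERE/upload/mypic.jpg'

with open('mypic.jpg', 'rb') as f:
    resp = requests.put(url, data=f,
                        headers={'Content-Type': 'application/octet-stream'})

print(resp.status_code, resp.text)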

To get this running, you have to manually attach a policy that allows writing to S3 to the role of your Lambda function. This role is auto-generated by Chalice. Attach the policy (e.g. AmazonS3FullAccess), next to the existing policy, in the AWS IAM web interface to the role created by your Chalice project.
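The same attachment can be done from the command line with the AWS CLI; the role name helloworld-dev below is an assumption based on the project name, so check the IAM console for the actual name Chalice generated:

aws iam attach-role-policy \
    --role-name helloworld-dev \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess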

Things to mention:

  • You cannot write to the working directory /var/task/ of the Lambda function, but you have some space at /tmp/, see this answer (a sketch of safer temp-file handling follows this list).
  • You have to specify the accepted content-type 'application/octet-stream' for the @app.route (and upload the file accordingly via curl).
  • HTTP PUT places a file or resource at a specific URI, so to use PUT, the file has to be sent to the API in the body of the HTTP request.
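Since file_name comes straight from the URL path, it is worth normalizing it before building the temp-file path. A minimal sketch of that idea (the basename guard is my own addition, not something Chalice does for you):

import os

def safe_tmp_path(file_name):
    # strip any directory components smuggled into the URL,
    # so a name like '../etc/passwd' cannot escape /tmp
    return os.path.join('/tmp', os.path.basename(file_name))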
Brawn answered 12/11, 2017 at 20:02
I'm still getting a 500 following your steps. I've created permission for the chalice user to have full access to S3, following this guy's instructions: brianpark.ca/blog/… I put in a try/except clause to log any error with app.log.error(blhblah) and nothing is being logged.Convoke
I attached the policy in AWS IAM web interface -> Roles -> helloworld-dev -> "Attach policy". Is your bucket name correct?Brawn
And this runs as a Python 3.6 Lambda functionBrawn
Yes, the bucket name is correct. Also, I'm running 2.7, but I don't think there's any difference in terms of code in this case?Convoke
Really strange. I tested it at my place again (in a new Chalice project with the code above) and it worked. Are you using the same curl command as mentioned above?Brawn
Yea, I tried your command and I also tried using HTTPie; both return 500. Usually an internal server error means something is wrong with the code. I was re-examining the other parts of my code, and it just doesn't seem like anything is going wrong. I'll add the rest of the script in an EDITConvoke
Okay, I think I'm getting somewhere. I updated the policies on the IAM console directly, and every time I redeploy, the policy gets reset. I tried modifying .chalice/policy-dev.json as well, but when I redeploy, it informs me that it's going to remove the two statements I addedConvoke
I'm selecting your answer as the correct answer as I think my issue comes from some other parts. I believe your solution solves the question I asked. I just have to figure out the permission part. If you have some take on that, please share with me :)Convoke
Thx. This is a screenshot of my IAM settings, if that helps. I added the S3 access via "Attach policy"Brawn
ah.....I see what you mean. I was editing the chalice role's (in your case, the hello-chalice-dev) permissions directlyConvoke
This way (as in the picture) it is not overwritten after a chalice deploy.Brawn
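A note on the policy reset discussed in these comments: Chalice regenerates the role's policy on every deploy unless policy autogeneration is turned off for the stage. A minimal sketch of the relevant .chalice/config.json (with autogen_policy set to false, Chalice deploys the hand-written .chalice/policy-dev.json instead of overwriting it; verify against the Chalice docs for your version):

{
  "version": "2.0",
  "app_name": "helloworld",
  "stages": {
    "dev": {
      "autogen_policy": false
    }
  }
}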
