Direct (and simple!) AJAX upload to AWS S3 from (AngularJS) Single Page App
I know there's been a lot of coverage on upload to AWS S3. However, I've been struggling with this for about 24 hours now and I have not found any answer that fits my situation.

What I'm trying to do

Upload a file to AWS S3 directly from my client to my S3 bucket. The situation is:

  1. It's a Single Page App, so upload request must be in AJAX
  2. My server and my client are not on the same domain
  3. The S3 bucket is of the newest sort (Frankfurt), for which some signature-generating libraries don't work (see below)
  4. Client is in AngularJS
  5. Server is in ExpressJS

What I've tried

  • Heroku's article on direct upload to S3. Doesn't fit my client/server configuration (plus it really does not fit harmoniously with Angular)
  • Ready-made directives like ng-s3upload. These don't work because their signature-generating algorithm is not accepted by recent S3 buckets.
  • Manually creating a file upload directive and logic on the client, as in this article (using FormData and Angular's $http). It consisted of getting a signed URL from AWS on the server (that part worked), then AJAX-uploading to that URL. It failed with a mysterious CORS-related message (although I did set a CORS config on Heroku)

It seems I'm facing 2 difficulties: having a file input that works in my Single Page App, and getting AWS's workflow right.

The kind of solution I'm looking for

If possible, I'd like to avoid 'all included' solutions that manage the whole process while hiding all of the complexity, making them hard to adapt to special cases. I'd much rather have a simple explanation breaking down the flow of data between the various components involved, even if it requires some more plumbing from me.

Penney answered 26/2, 2015 at 10:29 Comment(0)

I finally managed. The key points were:

  1. Let go of Angular's $http, and use native XMLHttpRequest instead.
  2. Use the getSignedUrl feature of AWS's SDK, instead of implementing my own signature-generating workflow like many libraries do.
  3. Set the AWS configuration to use the proper signature version (v4 at the time of writing) and region ('eu-central-1' in the case of Frankfurt).

Below is a step-by-step guide of what I did; it uses AngularJS on the client and NodeJS on the server, but should be rather easy to adapt to other stacks, especially because it deals with the most pathological cases (SPA on a different domain than the server, with a bucket in a recent - at the time of writing - region).


Workflow summary

  1. The user selects a file in the browser; your JavaScript keeps a reference to it.
  2. The client sends a request to your server to obtain a signed upload URL.
  3. Your server chooses a name for the object to put in the bucket (make sure to avoid name collisions!).
  4. The server obtains a signed URL for your object using the AWS SDK, and sends it back to the client. This involves the object's name and the AWS credentials.
  5. Given the file and the signed URL, the client sends a PUT request directly to your S3 Bucket.
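To illustrate step 3's naming concern, here is one hypothetical way to avoid collisions; the helper name and the timestamp-plus-random-suffix scheme are my own choices, not part of the AWS SDK:

```javascript
// Hypothetical helper: prefix the original file name with a timestamp
// and a short random suffix so two uploads of "cat.jpg" never clash.
function makeObjectName(fileName) {
  var stamp = Date.now().toString(36);
  var rand = Math.random().toString(36).slice(2, 8);
  return stamp + '-' + rand + '-' + fileName;
}
```

You could also namespace keys per user (e.g. prefixing with a user ID and a `/`) if your bucket serves several accounts.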

Before you start

Make sure that:

  • Your server has the AWS SDK
  • Your server has AWS credentials with proper access rights to your bucket
  • Your S3 bucket has a proper CORS configuration for your client.
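For reference, a minimal CORS configuration for the bucket (in S3's classic XML format) could look like the following; the allowed origin is a placeholder you'd replace with the domain your SPA is actually served from:

```xml
<CORSConfiguration>
  <CORSRule>
    <!-- replace with your client's origin -->
    <AllowedOrigin>https://my-client.example.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```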

Step 1: setup a SPA-friendly file upload form / widget.

All that matters is to have a workflow that eventually gives you programmatic access to a File object - without uploading it.

In my case, I used the ng-file-select and ng-file-drop directives of the excellent angular-file-upload library. But there are other ways of doing it (see this post, for example).

Note that you can access useful information in your file object such as file.name, file.type etc.
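As a sketch of what the request to your server might carry, a tiny helper (my own, not from any library) can extract the two File properties the server typically needs; `name` and `type` are standard File API fields:

```javascript
// Hypothetical helper: given a File-like object (e.g. from an
// <input type="file"> change event), build the payload to send to
// the server when requesting a signed upload URL.
function uploadRequestParams(file) {
  return {
    objectName: file.name,
    // file.type can be empty for unknown extensions; fall back to a generic type
    contentType: file.type || 'application/octet-stream'
  };
}

// In the browser you would call it with a real File:
// var params = uploadRequestParams(inputElement.files[0]);
```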

Step 2: Get a signed URL for the file on your server

On your server, you can use the AWS SDK to obtain a secure, temporary URL to PUT your file from someplace else (like your frontend).

In NodeJS, I did it this way:

// ---------------------------------
// some initial configuration
var aws = require('aws-sdk');

aws.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  signatureVersion: 'v4',
  region: 'eu-central-1'
});

// ---------------------------------
// now, say you want to fetch a URL for an object named `objectName`
var s3 = new aws.S3();
var s3_params = {
  Bucket: MY_BUCKET_NAME,
  Key: objectName,
  Expires: 60,
  ACL: 'public-read'
};
s3.getSignedUrl('putObject', s3_params, function (err, signedUrl) {
  // send signedUrl back to client
  // [...]
});

You'll probably also want to know the URL from which to GET your object later (typically if it's an image). To get it, I simply removed the query string from the signed URL:

var url = require('url');
// ...
var parsedUrl = url.parse(signedUrl);
parsedUrl.search = null;
var objectUrl = url.format(parsedUrl);

Step 3: send the PUT request from the client

Now that your client has your File object and the signed URL, it can send the PUT request to S3. My advice in Angular's case is to just use XMLHttpRequest instead of the $http service:

var signedUrl, file;
// ...
var d_completed = $q.defer(); // since I'm working with Angular, I use $q for asynchronous control flow, but it's not mandatory
var xhr = new XMLHttpRequest();

xhr.onreadystatechange = function () {
  if (this.readyState === 4) {
    if (this.status >= 200 && this.status < 300) {
      // done uploading! HURRAY!
      d_completed.resolve(true);
    } else {
      d_completed.reject(this.status);
    }
  }
};
xhr.open('PUT', signedUrl, true);
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
xhr.send(file);

Acknowledgements

I would like to thank emil10001 and Will Webberley, whose publications were very valuable to me for this issue.

Penney answered 1/3, 2015 at 14:48 Comment(5)
Thanks for this! Why did you decide to use XMLHttpRequest instead of Angular's $http? – Safety
I had to add xhr.setRequestHeader('enctype','multipart/form-data'); and added a check in the onreadystatechange handler to only resolve my promise if the status code was 2xx (successful). – Safety
Note that if you're sending the credentials (using PresignedPost), the file must be the last object appended to the FormData object to be sent, as S3 ignores everything after a file key. – Safety
@BrunoPeres: why no $http? I have to admit I just couldn't get it to work :) no rational reason here. My guess is that it has some bad defaults in this case. – Penney
I don't really know why, but I think the same; it must be some default config in Angular's $http. Angular 1.4.7 released a $xhrFactory that seems to do the job, but I haven't tested it. – Safety

You can use the ng-file-upload $upload.http method in conjunction with the aws-sdk getSignedUrl to accomplish this. After you get the signedUrl back from your server, this is the client code:

var fileReader = new FileReader();
fileReader.readAsArrayBuffer(file);
fileReader.onload = function (e) {
  $upload.http({
    method: 'PUT',
    headers: {'Content-Type': file.type !== '' ? file.type : 'application/octet-stream'},
    url: signedUrl,
    data: e.target.result
  }).progress(function (evt) {
    var progressPercentage = parseInt(100.0 * evt.loaded / evt.total);
    console.log('progress: ' + progressPercentage + '% ' + file.name);
  }).success(function (data, status, headers, config) {
    console.log('file ' + file.name + ' uploaded. Response: ' + data);
  });
};
Aerobe answered 5/4, 2015 at 13:13 Comment(0)

For multipart uploads, or files larger than 5 GB, this process gets a bit more complicated, as each part needs its own signature. Conveniently, there is a JS library for that:

https://github.com/TTLabs/EvaporateJS

via https://github.com/aws/aws-sdk-js/issues/468

Dwanadwane answered 16/8, 2016 at 23:18 Comment(0)

Use the s3FileUpload open-source directive, which offers dynamic data binding and automatic callbacks - https://github.com/vinayvnvv/s3FileUpload

Bushcraft answered 3/1, 2017 at 11:12 Comment(0)
