Being able to upload files to Amazon S3, especially from HTML5, has been a goal for quite some time. While it was somehow possible in Flash and Silverlight, HTML5 was out of the game: Amazon S3 simply refused to send the Access-Control-Allow-Origin header - that single miraculous one that lets AJAX requests reach the server across domains. Finally, after continuous lament from users, with desperate shouts like "Two and a half year later, still no cigar?..", Amazon made it happen.

So now it's possible.

Disclaimer

  • We assume that you already have an active S3 account.
  • We describe the most generic scenario. Feel free to customize it to your needs.
  • This is a working draft. Suggestions are welcome.

Preface

Do not expect it to just work. It is definitely achievable, but it still requires some effort (not that much, though). Each runtime has its own specifics and requirements. Flash and Silverlight are similar in some respects and can generally share exactly the same configuration, both server- and client-side, although each can also be configured separately (more on that below).

Prepare server-side (S3)

First, you need to create a bucket. This may sound simple, but there are some implications to take into account; specifically, do not use the . (dot) character in the bucket name, as it causes weird problems with HTTPS connections (see #779).
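If you prefer to script this step rather than use the AWS console, here is a minimal sketch that creates the bucket with the AWS SDK for PHP (v3). The SDK, region and bucket name below are assumptions for illustration, not part of the original setup:

<?php
// A minimal sketch, assuming the AWS SDK for PHP (v3) installed via Composer.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1', // adjust to your bucket's region
    'credentials' => [
        'key'    => 'ACCESS_KEY_ID',
        'secret' => 'SECRET_ACCESS_KEY',
    ],
]);

// Note the dashes instead of dots in the bucket name, to avoid the HTTPS issue above
$s3->createBucket(['Bucket' => 'my-plupload-bucket']);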

Note that you should NOT grant Upload/Delete permissions to Everyone on this bucket (earlier versions of this tutorial stated otherwise). The whole point of sending a signed policy along with the request is to make uploads possible while keeping standard permissions. (If you grant permissions to everyone, uploading files is possible by just sending the file and the key parameter.)

... for HTML5 runtime

In the Permissions section of the bucket there is an option to Add CORS Configuration (if you do not know what CORS is, HTML5 Rocks has an excellent write-up about it):

Add CORS Configuration

As always, there are some options, but we will use the most generic configuration to make sure that the S3 bucket is indeed compatible with our HTML5 upload:

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedHeader>*</AllowedHeader>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>

What we say here is that we allow cross-domain access from any domain, with any request headers, via GET or POST. Preflight requests will be cached for 3000 seconds.
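If you would rather apply these rules programmatically than paste XML into the console, here is a hedged sketch that does the same thing with the AWS SDK for PHP (v3); it reuses the $s3 client from the bucket creation sketch above and a $bucket variable, both assumptions rather than part of the original tutorial:

<?php
// Applies the same CORS rules as the XML above, via the AWS SDK for PHP (v3).
// $s3 and $bucket are assumed to exist (see the bucket creation sketch earlier).
$s3->putBucketCors([
    'Bucket' => $bucket,
    'CORSConfiguration' => [
        'CORSRules' => [
            [
                'AllowedOrigins' => ['*'],
                'AllowedHeaders' => ['*'],
                'AllowedMethods' => ['GET', 'POST'],
                'MaxAgeSeconds'  => 3000,
            ],
        ],
    ],
]);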

... for Flash runtime

To support cross-origin requests, Flash requires crossdomain.xml - a special policy file at the root of your bucket, with contents like this, for example:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-access-from domain="*" secure="false" />
</cross-domain-policy>

Again, we simply allow all domains here. You might want to restrict this to specific ones only. Also notice the secure="false" attribute - it makes your bucket accessible via both HTTP and HTTPS. If you want to allow only secure connections (HTTPS), set this attribute to true.

Finally, do not forget to make crossdomain.xml public - files do not automatically inherit bucket permissions, so you have to set this manually (one way to do so is shown below).
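One way to handle this is to upload the file with a public-read ACL in the first place. The sketch below assumes the AWS SDK for PHP (v3) and the $s3 client from the earlier sketches; it is an illustration, not the only way to set the permission:

<?php
// Uploads crossdomain.xml to the bucket root and makes it publicly readable.
$s3->putObject([
    'Bucket'      => $bucket,
    'Key'         => 'crossdomain.xml',
    'Body'        => file_get_contents('crossdomain.xml'),
    'ACL'         => 'public-read',
    'ContentType' => 'text/xml',
]);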

... for Silverlight runtime

In fact, you can stop here and do nothing else for Silverlight. Although Silverlight uses its own security policy file - clientaccesspolicy.xml, with a slightly different format - for the most generic case (when all domains are allowed) it can fall back to crossdomain.xml. It will basically request clientaccesspolicy.xml first and, if it's not there, request crossdomain.xml and try to use that instead.

But if you do not like that single 404 Not Found in your network log, or you would like to allow only specific domains rather than giving access to everyone, then you have to provide an actual clientaccesspolicy.xml file. Here is what a typical policy file looks like:

<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

You can familiarize yourself with the peculiarities of the process here.

Prepare Client-side

To begin with, you require an Access Key ID and a Secret Access Key to sign your requests. You can generate both under the Security Credentials section of your AWS account.

Generate Policy

In order to upload successfully to S3, you need to accompany each request with a special Base64-encoded document - the policy - basically a set of rules your request must conform to; if it doesn't, Amazon will reject it and respond with 403 Forbidden. Additionally, you need to sign your request with a signature (more on this below).

While we could generate the policy (and signature) on the client-side, this would obviously defeat the whole point of signing, since your secret key would inevitably get exposed to anyone interested. Therefore we need to do it on the server-side, during page construction, and simply inject the already generated values into the Plupload configuration.

Here is how we do it in PHP (there is a little one-file Sinatra app that illustrates the same approach in Ruby):

<?php 
// important variables that will be used throughout this example
$bucket = 'BUCKET';

// these can be found on your Account page, under Security Credentials > Access Keys
$accessKeyId = 'ACCESS_KEY_ID';
$secret = 'SECRET_ACCESS_KEY';

$policy = base64_encode(json_encode(array(
  // ISO 8601 - date('c') generates an incompatible date, so it's better to do it manually
  'expiration' => date('Y-m-d\TH:i:s.000\Z', strtotime('+1 day')),  
  'conditions' => array(
    array('bucket' => $bucket),
    array('acl' => 'public-read'),
    array('starts-with', '$key', ''),
    array('starts-with', '$Content-Type', ''), // accept all files
    // Plupload internally adds name field, so we need to mention it here
    array('starts-with', '$name', ''),  
    // One more field to take into account: Filename - gets silently sent by FileReference.upload() in Flash
    // http://docs.amazonwebservices.com/AmazonS3/latest/dev/HTTPPOSTFlash.html
    array('starts-with', '$Filename', ''), 
  )
)));

As you can see, the policy is simply a Base64-encoded JSON document.
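If you want to double-check what actually goes over the wire, you can decode it back. This snippet is purely for inspection and is not required for the upload to work:

<?php
// Decode the policy back into readable JSON to verify the expiration and conditions
echo json_encode(json_decode(base64_decode($policy)), JSON_PRETTY_PRINT);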

Generate Signature

In addition to sending the policy, you must sign your request with a Base64-encoded HMAC-SHA1 signature.

Quote from Amazon S3 documentation:

The algorithm takes as input two byte-strings: a key and a message. For Amazon S3 Request authentication, use your AWS Secret Access Key (YourSecretAccessKeyID) as the key, and the UTF-8 encoding of the StringToSign as the message. The output of HMAC-SHA1 is also a byte string, called the digest. The Signature request parameter is constructed by Base64 encoding this digest.

Example in PHP:

<?php
$signature = base64_encode(hash_hmac('sha1', $policy, $secret, true));

More details here: Signing and Authenticating REST Requests

Configuring Plupload

{
  // General settings
  runtimes : 'html5,flash,silverlight',

  flash_swf_url : '../../src/moxie/bin/flash/Moxie.swf',
  silverlight_xap_url : '../../src/moxie/bin/silverlight/Moxie.xap',

  // S3 specific settings
  url : "https://<?php echo $bucket; ?>.s3.amazonaws.com:443/",

  multipart_params: {
    'key': '${filename}', // use filename as a key
    'Filename': '${filename}', // adding this to keep consistency across the runtimes
    'acl': 'public-read',
    'Content-Type': '',
    'AWSAccessKeyId' : '<?php echo $accessKeyId; ?>',   
    'policy': '<?php echo $policy; ?>',
    'signature': '<?php echo $signature; ?>'
  }
}

Putting it all together

We bundle a full PHP example with Plupload itself.

There is also an example written as a Sinatra app that illustrates the same approach in Ruby.
