Flipping to Amazon

A good friend of mine has been nice enough to host this site for me. When I moved to the beautiful redwoods, it became apparent that with the fluctuating power situation, I would no longer have the same kind of uptime.

With the transition, Christopher was also nice enough to let me host my site over HTTPS. I am one of those people who believes in encryption everywhere, even though my site does not transmit any user data.

In the last few weeks, two events occurred:

  1. My SSL certificate was up for renewal.
  2. Amazon announced their own certificate service, which lets you create a certificate that is usable with their infrastructure.

This was good timing. I enjoy using Amazon's services, provided they stay cheap, so I decided to modify my git post-commit hook.

First things first, though: I needed to create an S3 bucket for my site:

# s3cmd mb s3://
Bucket 's3://' created

I then had to set up the bucket policy for a static website, which I did through the control panel, as I don't know how to do that in s3cmd.
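For reference, the policy itself can be generated programmatically. Here is a minimal sketch in Python; the bucket name `example-bucket` is a placeholder, not my real bucket.

```python
import json

# Placeholder bucket name -- substitute your own.
bucket = "example-bucket"

# The standard public-read policy for S3 static website hosting:
# anyone may GetObject on any key in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

# Write it to a file so a CLI tool can apply it.
with open("policy.json", "w") as f:
    json.dump(policy, f, indent=2)

print(json.dumps(policy))
```

With the policy saved to a file, something like `s3cmd setpolicy policy.json s3://your-bucket` should apply it without a trip to the control panel, though I haven't verified that subcommand on every s3cmd version.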

I can, however, get the settings, so here:

# s3:// (bucket):
   Location:  us-east-1
   Payer: BucketOwner
   Expiration Rule: none
   policy:    {"Version":"2012-10-17","Statement":[{"Sid":"AddPerm","Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"*"}]}
   cors:      none
   ACL:       mcarlson72: READ
   ACL:       mcarlson72: WRITE
   ACL:       mcarlson72: READ_ACP
   ACL:       mcarlson72: WRITE_ACP
   ACL:       *anon*: READ

I could have left the site just like this, but as I mentioned earlier, I am ideologically sold on encryption everywhere. S3 static site hosting does not support HTTPS with your own domain certificate.

CloudFront however, does.

It was pretty simple to create a CloudFront distribution, and point its origin source to my new S3 bucket:

# s3cmd cflist
DistId:         cf://E312GMVV6Y9V86
Status:         Deployed
Enabled:        True

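The essential settings of such a distribution can be sketched as a config document. This is an illustrative sketch only: the bucket, domain, and certificate ARN below are all placeholders, and the real distribution config has many more fields.

```python
import json

# All identifiers below are placeholders, not the real values from this site.
bucket = "example-bucket"
domain = "example.com"
cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/abc-123"

# The essentials: an origin pointing at the S3 *website* endpoint
# (so S3's index-document handling keeps working behind CloudFront),
# the apex domain as an alias, and an ACM certificate for HTTPS.
distribution_config = {
    "Origins": [
        {
            "Id": f"S3-{bucket}",
            "DomainName": f"{bucket}.s3-website-us-east-1.amazonaws.com",
        }
    ],
    "Aliases": [domain],
    "DefaultCacheBehavior": {
        "TargetOriginId": f"S3-{bucket}",
        "ViewerProtocolPolicy": "redirect-to-https",
    },
    "ViewerCertificate": {
        "ACMCertificateArn": cert_arn,
        "SSLSupportMethod": "sni-only",
    },
    "Enabled": True,
}

print(json.dumps(distribution_config, indent=2))
```

Redirecting viewers to HTTPS at the cache-behavior level is what makes the encryption-everywhere goal stick even for visitors who type the plain `http://` URL.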
The tricky part was getting this integrated with Route 53. You normally use a CNAME record with S3 or CloudFront, but since I like to have “” and not “”, that meant I would have to change the apex record from an A record to a CNAME… which is illegal.

Thankfully, I read the documentation here and learned that I needed to use an Alias record type in Route 53.
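The alias record can be expressed as a Route 53 change batch. A sketch, with a placeholder domain and distribution name; `Z2FDTNDATAQYW2` is the fixed hosted zone ID that Route 53 uses for every CloudFront distribution.

```python
import json

# Placeholders: substitute your own apex domain and the domain name
# of your CloudFront distribution.
apex = "example.com"
cf_domain = "d1234abcd.cloudfront.net"

# An alias record behaves like an A record at the zone apex but
# resolves to another AWS resource, sidestepping the no-CNAME-at-apex
# restriction.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": apex,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": cf_domain,
                    "EvaluateTargetHealth": False,
                },
            },
        }
    ]
}

print(json.dumps(change_batch, indent=2))
```

Saved to a file, this is the shape that `aws route53 change-resource-record-sets --change-batch file://change.json` expects, though the control panel gets you to the same place.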

With ALL of that now set up, I could finally sync my generated content to the S3 bucket:

s3cmd --acl-public --cf-invalidate --delete-removed --no-progress sync public/* s3://

The --cf-invalidate flag is crucial, but only if you are using CloudFront: it tells s3cmd to invalidate the CloudFront cache for the files that changed, so the CDN does not keep serving stale copies.

After a few hours of getting the infrastructure lined up and finally syncing my content, it all went pretty smoothly. There was a bit of downtime while the content was pushed out.

I normally pay around $5 a month for AWS, which is mostly the Route 53 costs. So far, I'm projected to pay $5. This site is pretty low-traffic, and a full CDN is complete overkill, but I do not anticipate the costs being much higher. If they are, back to titan-project I go!