Static Website Hosting with AWS S3 new Console UI 2017


The topic of hosting a static site in AWS S3 has been beaten to death. We’ve never written a post on the basics of hosting a site with S3 as you can easily find that somewhere else. However, AWS recently revamped their S3 Console with a new updated UI in May 2017, so let’s complete our series on hosting with S3 and walk through how to host a static site in the new S3 Console UI.

AWS S3 series:

  1. Host a Static Website in AWS S3 (this post)
  2. Naked domain redirection in S3
  3. Host S3 with HTTPS using AWS CloudFront and Certificate Manager
  4. Password protect a regular HTTP S3 site by mimicking HTTP Basic Authentication
  5. Password protect an HTTPS S3 site

Quick summary of Hosting a Static Website in AWS S3:

  1. Create an S3 bucket
  2. Set static website hosting
  3. Set public read
  4. Upload files to S3
  5. Set DNS (optional)

AWS S3 meaning of static

In the context of AWS S3, hosting a Static Website in S3 simply means hosting a website that uses pure Frontend technologies (HTML/CSS/JS). The word static here has nothing to do with whether the site has static vs dynamic content. In AWS S3 context, static means static files (aka flat files) where the files are returned straight from S3 to the browsers without being processed by any Backend interpreter or compiler.

Static Website Hosting with AWS S3 means hosting a website in AWS S3 that contains only static files. This means you can fully host a site with dynamic content on S3 using a pure Frontend framework such as AngularJS that utilizes only flat files HTML/CSS/JS.


1. CREATE AN S3 BUCKET

Let’s get started by creating a new S3 bucket on AWS. Pick a good naming convention and stick with it. Here are 2 options:

  • prefix.domain
  • domain-postfix


Option 1: prefix.domain

The Bucket Name represents the full Hostname: the subdomain as the prefix followed by the domain name.

This is the most natural bucket naming convention and it maps nicely to a hosted site. In order to use a CNAME record later to map the Hostname to the bucket, AWS requires the Bucket Name to be the same as the Hostname. This naming convention satisfies that.


Option 2: domain-postfix

The Bucket Name starts with the domain name followed by the subdomain as a postfix.

This option groups all related buckets for a domain under the prefix domain-, so buckets look organized as groups in the S3 Console UI when you have to manage lots of buckets/sites. This naming convention won’t work for mapping a hosted S3 site to a Hostname through a CNAME record.

Only use this if you meet these 2 conditions:

  • You plan to have more than a dozen buckets on S3 (hence the need for grouping related buckets under prefix domain-).
  • All of your hosted buckets only need to be accessible through the long S3 Endpoint URL (since CNAME mapping between the Hostname and S3 won’t work with this convention). If you need a site to be accessible through a hostname, you have to use CloudFront on top of S3 (CNAME mapping between a Hostname and CloudFront works).

If you’re unsure about any of these, go with the first naming convention prefix.domain.
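To make the tradeoff concrete, here’s a minimal Python sketch of the two conventions and the CNAME constraint (the helper names and example domain are hypothetical):

```python
def option1_bucket(subdomain: str, domain: str) -> str:
    """prefix.domain: the Bucket Name is the full Hostname."""
    return f"{subdomain}.{domain}"

def option2_bucket(subdomain: str, domain: str) -> str:
    """domain-postfix: group related buckets under a 'domain-' prefix."""
    return f"{domain.split('.')[0]}-{subdomain}"

def cname_mappable(bucket: str, hostname: str) -> bool:
    """AWS only serves a CNAME-mapped site when Bucket Name == Hostname."""
    return bucket == hostname

hostname = "blog.example.com"
print(option1_bucket("blog", "example.com"))  # blog.example.com
print(option2_bucket("blog", "example.com"))  # example-blog
print(cname_mappable(option1_bucket("blog", "example.com"), hostname))  # True
print(cname_mappable(option2_bucket("blog", "example.com"), hostname))  # False
```

Option 1 passes the CNAME check; Option 2 never can, which is why it forces you onto the raw Endpoint URL or CloudFront.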

For our walkthrough, we’ll be using the natural prefix.domain convention to create a bucket that will map through a CNAME record to our Hostname.

Create a bucket:

Fill in Bucket Name and Region:

Keep the default Set properties. Continue:

Keep the default Set permissions. Continue:

Verify info and finish:

Here is our newly created S3 bucket:


2. SET STATIC WEBSITE HOSTING

Our S3 bucket is accessible through the S3 Console UI or an S3 client. We now want to host a website with it and make it publicly accessible.

Navigate to the Bucket > Properties > Static website hosting:

Fill in the default page (normally index.html) and hit Save. Note down the S3 Endpoint URL which we’ll use to access the site later:

One more step before we can access the site through the Endpoint URL: make it publicly accessible.
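As an aside, the Endpoint URL follows a predictable pattern built from the Bucket Name and Region. A sketch, assuming the dash-style form most regions used at the time of writing (some newer regions use s3-website.<region> with a dot instead):

```python
def website_endpoint(bucket: str, region: str) -> str:
    # Dash-style S3 website endpoint; some regions use "s3-website.<region>".
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

print(website_endpoint("demo-bucket", "us-west-2"))
# http://demo-bucket.s3-website-us-west-2.amazonaws.com
```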


3. SET PUBLIC READ

By default, files on S3 are private. To make the site accessible by the public, we must set public read on it. There are a few ways to do so:

  • Use S3 Console UI
  • Use an S3 client
  • Use S3 Policy

Use S3 Console UI to set permissions of files/folders

With the S3 Console UI, you can navigate to files and folders to set their Permissions, granting Everyone the Read permission for Object access.

You can also use the Make public menu dropdown to set public read permission for multiple files or folders at once. Do it at the root level of your bucket where you can select all files and folders. Note that Make public on a folder only applies to the files currently inside the folder, not to new files that you upload later.

Select all files at the root level of the Bucket. Then choose More > Make public:

The above step is equivalent to going through each individual file and setting its permission.

Choose a File > Permissions > Manage public permissions:

Everyone > Object access > Read:

This is the least convenient and the most tedious way of setting public read. When new files get uploaded, you have to hunt them down in the S3 Console UI to adjust their permissions.

Use an S3 client to set permissions of files/folders

A more convenient way of setting public read permission is doing so at the same time you upload the files. Depending on your S3 client, there should be an option that allows you to set public read on files being uploaded, e.g., the --acl-public flag in the s3cmd client (see more below).

Use S3 Policy on the bucket level

The most convenient way, which I recommend, is to use an S3 Policy to apply permissions to the entire bucket, where it will affect all current and future files/folders.

Navigate to the Bucket > Permissions > Bucket Policy:

This policy allows public read on our bucket (replace BUCKET_NAME with your Bucket Name):

    {
        "Version": "2012-10-17",
        "Id": "Public access",
        "Statement": [
            {
                "Sid": "Allow public access",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "*"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::BUCKET_NAME/*"
            }
        ]
    }
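If you script your bucket setup, the same policy can be generated programmatically. A minimal sketch (the function name is hypothetical; pass your own Bucket Name):

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Build a bucket policy JSON string granting public s3:GetObject."""
    policy = {
        "Version": "2012-10-17",
        "Id": "Public access",
        "Statement": [
            {
                "Sid": "Allow public access",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(public_read_policy("demo-bucket"))
```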

We can now go to the Endpoint URL to access our site. But we’ll get a 404 Not Found error because we haven’t yet uploaded any files, including the index page index.html.


4. UPLOAD FILES TO S3

Create an IAM credential, then use either an S3 GUI client (Cyberduck on Mac) or an S3 command-line client (s3cmd, AWS CLI) to upload your site content to S3.

Here is our demo-s3 code for a simple demo site:

├── index.html
└── main.css

<!-- index.html -->
<!DOCTYPE html>
<html>
<head>
  <title>Demo - Static Website Hosting with AWS S3 new Console UI</title>
  <link href="main.css" rel="stylesheet">
</head>
<body>
  <h1>AWS S3 Demo</h1>
  <p>This is hosted with an S3 bucket</p>
  <p>There is an "index.html" file (this file) and a "<a href="main.css">main.css</a>" file</p>
  <div class="blue">This should have a White text on Blue background color if CSS is working</div>
</body>
</html>

/* main.css */
html {
  font-family: "Helvetica Neue", Arial, sans-serif;
}

.blue {
  color: white;
  background-color: #4885ed;
}

Upload to S3 with s3cmd

$ s3cmd sync -r --delete-removed --no-preserve demo-s3/* s3://BUCKET_NAME/

Optional 1: correct file type

Depending on your particular s3cmd version and your system, you may run into a situation where s3cmd uploads files to S3 with the wrong file type. This happened in our CI pipeline using Jenkins and s3cmd on Ubuntu Trusty, where occasionally *.css files were uploaded with the wrong type text/plain instead of the correct type text/css.

If after uploading files to S3, your site renders the majority of content but the format/style looks funky, double check the file type of your CSS on S3:

  • Navigate to the S3 bucket > Objects tab
  • Navigate to the CSS file in question, and click on it
  • In that file, navigate to the Properties tab > Metadata and look at the Key Content-Type and verify the Value

If the Value for Content-Type Key isn’t text/css, we can correct it either by using the S3 Console UI or with s3cmd.

With S3 Console UI, click on the input box for Value and change it to text/css. If the dropdown list of Value doesn’t have that string option, type it in the box using your keyboard.

With s3cmd we’ll use the modify command to alter the file type of those CSS files:

# modify all CSS files at root level
$ s3cmd modify s3://BUCKET_NAME/*.css --add-header="content-type:text/css"

# modify all CSS files inside nested folders
$ s3cmd modify s3://BUCKET_NAME/**/*.css --add-header="content-type:text/css"

This will modify all *.css files in all folders on our site; the ** glob pattern matches files in nested folders.
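You can also check locally what Content-Type a file should carry before uploading; Python’s standard mimetypes module makes the same extension-based guess that well-behaved S3 clients use:

```python
import mimetypes

# Guess the Content-Type that should be sent for each file on upload.
for name in ["index.html", "main.css"]:
    content_type, _encoding = mimetypes.guess_type(name)
    print(f"{name} -> {content_type}")
# index.html -> text/html
# main.css -> text/css
```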

Optional 2: set public read

If you use s3cmd exclusively to upload files to S3, you can use its flag --acl-public to set public read on the uploaded files and can skip the policy method above.

$ s3cmd sync -r --delete-removed --no-preserve --acl-public demo-s3/* s3://BUCKET_NAME/

However, I don’t recommend relying on this. Occasionally you will need to use another tool (say Cyberduck) to troubleshoot and upload some file. It’s more convenient to set the whole bucket public with an S3 Policy than to set individual files with s3cmd --acl-public.

5. SET DNS (optional)

Our demo site should now be fully accessible at the S3 Endpoint URL. That long AWS S3 URL is sufficient for a quick demo or a test project. Most likely you will want to map it to an actual Hostname with your own or your client’s domain.

Set DNS CNAME record for your subdomain

For our demo site, we have set a CNAME record to map the Hostname to the S3 Endpoint URL.

A reminder: the Bucket Name must match the Hostname.

You can check your DNS records with dig:

$ dig HOSTNAME


We can now access the site at both the Hostname and the S3 Endpoint URL.

Host the site at the root/naked domain

If you want to host your site at the root/naked domain, you need to use AWS Route 53 Alias. This is the only solution that works.

You may have read somewhere else that you could use a CNAME for the root/naked domain. DON’T. Some DNS providers may allow you to set a CNAME record on your root/naked domain (most won’t), but it will definitely mess up your email delivery.
