Is there any way to host a static website on AWS S3 without giving public access? [closed]

I wish to host a static website on Amazon S3 without actually making the bucket public. I am using a client's AWS account in which public access is blocked for all buckets; when I try to configure my bucket as public, the console redirects me to a page where I would have to grant public access to all the buckets.

Syllabub answered 9/9, 2021 at 12:56 Comment(2)
Do you want your "static website" to be publicly accessible? If so, the only way to do that is to make it public. If you do not want it to be public, how would you determine which users can access it?Placatory
@JohnRotenstein I want the static content to be publicly accessible, but the S3 bucket itself must not be public; that is where CloudFront comes into the picture.Syllabub

Yes, it is possible. All you need to do is serve S3 through CloudFront: Client -> Route53 -> CloudFront -> S3 (public access blocked)

In CloudFront

  • Create a CloudFront function (from the left menu); it rewrites request URIs so that index.html is appended. For example: example.com/home becomes example.com/home/index.html

    'use strict';
    function handler(event) {
        var request = event.request;
        var uri = request.uri;

        // Check whether the URI is missing a file name.
        if (uri.endsWith('/')) {
            request.uri += 'index.html';
        }
        // Check whether the URI is missing a file extension.
        else if (!uri.includes('.')) {
            request.uri += '/index.html';
        }
        return request;
    }
    
  • Create the origin access control (from the left menu); this will be used in the distribution's origin (a sketch of its configuration appears after this list)

  • In Distributions

    • In the Origin tab

      • Create an origin of S3 type by choosing the S3 bucket

      • Under Origin access, select the origin access control created in the earlier step

    • Edit the general settings and set index.html as the default root object.

    • Edit Behaviors: under Function associations, select the CloudFront function for the viewer request. There is no need to use a Lambda@Edge function.
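
If you script the origin access control step instead of using the console, its configuration is roughly the JSON below. This is a sketch rather than part of the original answer; the name and description are placeholder values, and the field names assume CloudFront's CreateOriginAccessControl API.

  {
    "Name": "static-site-oac",
    "Description": "Origin access control for the static site bucket",
    "OriginAccessControlOriginType": "s3",
    "SigningBehavior": "always",
    "SigningProtocol": "sigv4"
  }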

In S3

  • In Properties, disable static website hosting

  • In permissions

    • Block all public access

    • Edit the bucket policy as below (replace BUCKET_NAME, ACC_NUMBER, and DISTRIBUTION_ID with your own values):

      {
          "Version": "2008-10-17",
          "Id": "PolicyForCloudFrontPrivateContent",
          "Statement": [
              {
                  "Sid": "AllowCloudFrontServicePrincipal",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "cloudfront.amazonaws.com"
                  },
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::BUCKET_NAME/*",
                  "Condition": {
                      "StringEquals": {
                          "AWS:SourceArn": "arn:aws:cloudfront::ACC_NUMBER:distribution/DISTRIBUTION_ID"
                      }
                  }
              }
          ]
      }

In Route53

  • Create an A (alias) record pointing to the CloudFront distribution (see the sketch below)
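
If you create the record through the Route 53 API rather than the console, the alias record looks roughly like the change batch below. This is a sketch, not part of the original answer; example.com and the distribution domain name are placeholders, and Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 uses for CloudFront alias targets.

  {
    "Changes": [
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "example.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "d111111abcdef8.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }
    ]
  }
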
Pastis answered 9/11, 2022 at 15:38 Comment(0)

You can front your static site with an Amazon CloudFront distribution. In addition to providing the benefits of an integrated CDN, you can configure an Origin Access Identity that ensures that the bucket can only be accessed through CloudFront, not through public S3.
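
For illustration, a bucket policy that grants read access only to a CloudFront Origin Access Identity looks roughly like the sketch below; BUCKET_NAME and the OAI ID are placeholders, not values from the answer.

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXXXXXXXXXXXXX"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::BUCKET_NAME/*"
      }
    ]
  }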

Teamwork answered 9/9, 2021 at 19:57 Comment(1)
Now, it is recommended to use Origin Access Control instead of Origin Access Identity. Migration guide.Adala

Speaking about just one S3 bucket hosting a static site, you can add a bucket policy under the Permissions tab that allows or denies specific IP addresses. There are some great examples in the AWS documentation, and I've added a simplified example below allowing certain IPs. In this case, granting access to the other account's VPC NAT gateway IP address should work. https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html

{
  "Id":"PolicyId54",
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"AllowIPmix",
      "Effect":"Allow",
      "Principal":"*",
      "Action":"s3:*",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "54.240.143.0/24",
            "2001:DB8:1234:5678::/64"
          ]
        }
      }
    }
  ]
}

Note that you still need to turn "Block public access" off in addition to applying the above policy.

Used answered 9/9, 2021 at 13:35 Comment(4)
Granting s3:* to anonymous users is dangerous. As written, this policy will allow anonymous users from the specified IP to delete objects from the bucket. I'm sure this is not what the OP wants.Mastership
@Mastership agreed. I should've clarified. You can grant access to the other account's VPC NAT Gateway IP address. I've updated my answer accordingly.Used
The OP clarified in comments on the question that the site needs to be publicly accessible. This is still not a good answer.Mastership
@Mastership in the original question it was not clear; that was only clarified in the comments afterwards. Based on the question as originally written, this was a legitimate answer, and I answered before the clarification.Used

Similar to what @PaulG said, you can also use a bucket policy with an aws:SourceVpc condition, which lets you set up a VPC endpoint to the bucket and allow access to the bucket only from that VPC. I tested this setup a few months back, and it worked: the website was only accessible from the VPC. A sketch of such a policy follows.
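
As a rough illustration (not part of the original answer), such a policy could look like the sketch below; BUCKET_NAME and the VPC ID are placeholders, and it assumes requests reach S3 through a gateway VPC endpoint so that the aws:SourceVpc key is present on the request.

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowReadOnlyFromVpc",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::BUCKET_NAME/*",
        "Condition": {
          "StringEquals": {
            "aws:SourceVpc": "vpc-0example1234567890"
          }
        }
      }
    ]
  }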

Jeramie answered 9/9, 2021 at 21:47 Comment(0)
