How to solve "Error loading state: AccessDenied: Access Denied status code: 403" when trying to use S3 for the Terraform backend?

My simple Terraform file is:

provider "aws" {
  region = "region"
  access_key = "key" 
  secret_key = "secret_key"
}

terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "great-name-terraform-state-2"
  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "great-name-locks-2"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}

All I am trying to do is move my backend from local storage to S3. I am doing the following:

  1. terraform init (while the terraform {} block is commented out)

  2. terraform apply - I can see in my AWS account that the bucket was created, and the DynamoDB table as well.

  3. Now I uncomment the terraform block, run terraform init again, and I get the following error:

Error loading state:
    AccessDenied: Access Denied
        status code: 403, request id: xxx, host id: xxxx

My IAM user has administrator access. I am using Terraform v0.12.24. As one can observe, I am writing my AWS key and secret directly in the file.

What am I doing wrong?

I appreciate any help!

Fustian answered 17/5, 2020 at 12:37 Comment(0)

I encountered this before. The following steps will help you get past that error:

  1. Delete the .terraform directory
  2. Place the access_key and secret_key under the backend block, as in the code below
  3. Run terraform init
  backend "s3" {
    bucket = "great-name-terraform-state-2"
    key    = "global/s3/terraform.tfstate"
    region = "eu-central-1"
    access_key = "<access-key>"
    secret_key = "<secret-key>"
  }
}
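
For reference, steps 1 and 3 together look like this in the shell (a minimal sketch, run from the project root):

rm -rf .terraform/   # step 1: remove the stale local backend metadata
terraform init       # step 3: re-initialize against the S3 backend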

The error should be gone.

Wicked answered 26/12, 2020 at 15:13 Comment(4)
You can also set the AWS profile name instead of the access and secret keys.Anelace
Best practices would advise against storing sensitive material like your access and secret keys in your Terraform files. This is especially true if you also use a code repository like GitHub. As @Anelace points out, all you need to do is include a line in the backend like this: profile = your_profile_name_from_the_aws_credentials_file. Also, deleting your .terraform directory is entirely unnecessary.Whitmer
Additionally, you can use shared_credentials_file to point to a credentials file at a location other than ~/.aws/credentials if needed.Anelace
I confirm that the only thing needed is to add the profile property. Don't delete the .terraform dir and ideally don't put the access_key or secret_key in there, use the profile instead.Deltoro

I knew that my credentials were fine by running terraform init on other projects that shared the same S3 bucket for their Terraform backend.

What worked for me:

rm -rf .terraform/

Edit

Make sure to run terraform init again after deleting your local .terraform directory so that the required providers and modules are reinstalled.

Backblocks answered 17/2, 2022 at 11:49 Comment(1)
After running this, you should also terraform init again.Bathsheeb

I also faced the same issue. I manually removed the state file from my local system - you can find the terraform.tfstate file under the .terraform/ directory - and ran init again. Also, in case you have multiple profiles configured in the AWS CLI, not mentioning a profile under the aws provider configuration will make Terraform use the default profile.
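
A minimal sketch of that cleanup, assuming the stale state copy sits at the default path inside .terraform/:

rm .terraform/terraform.tfstate   # remove the stale local copy of the backend state
terraform init                    # re-initialize; the default AWS CLI profile is used unless one is set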

Groveman answered 10/11, 2020 at 4:4 Comment(0)

For better security, you may use shared_credentials_file and profile like so:

provider "aws" {
  region = "region"
  shared_credentials_file = "$HOME/.aws/credentials # default
  profile = "default" # you may change to desired profile
}

terraform {
  backend "s3" {
    profile = "default" # change to desired profile
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}
Hallway answered 21/4, 2021 at 5:45 Comment(0)

I googled around but nothing helped. Hope this will solve your problem. My case: I was migrating the state from local to an AWS S3 bucket.

  1. Comment out the terraform block:
provider "aws" {
  region = "region"
  access_key = "key" 
  secret_key = "secret_key"
}

#terraform {
#  backend "s3" {
#    # Replace this with your bucket name!
#    bucket         = "great-name-terraform-state-2"
#    key            = "global/s3/terraform.tfstate"
#    region         = "eu-central-1"
#    # Replace this with your DynamoDB table name!
#    dynamodb_table = "great-name-locks-2"
#    encrypt        = true
#  }
#}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "great-name-terraform-state-2"
  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "great-name-locks-2"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
  2. Run
terraform init
terraform plan -out test.tfplan
terraform apply "test.tfplan"

to create the resources (the S3 bucket and the DynamoDB table)

  3. Then uncomment the terraform block and run
AWS_PROFILE=REPLACE_IT_WITH_YOUR  TF_LOG=DEBUG   terraform init

If you get errors, search the debug output for X-Amz-Bucket-Region:

-----------------------------------------------------
2020/08/14 15:54:38 [DEBUG] [aws-sdk-go] DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 403 Forbidden
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Fri, 14 Aug 2020 08:54:37 GMT
Server: AmazonS3
X-Amz-Bucket-Region: eu-central-1
X-Amz-Id-2: REMOVED
X-Amz-Request-Id: REMOVED

Copy the value of X-Amz-Bucket-Region; in my case it is eu-central-1.
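
Alternatively, assuming the AWS CLI is configured, you can query the bucket's region directly instead of digging through the debug log:

aws s3api get-bucket-location --bucket great-name-terraform-state-2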

  4. Change the region in your terraform backend configuration to the corresponding value:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}
Georgettageorgette answered 14/8, 2020 at 10:8 Comment(1)
Setting env var AWS_PROFILE explicitly did the trick! 🎉Cleavers

As Mintu said, we need to include the credentials in the backend configuration. Another way to do that (without embedding the keys themselves) is:

  backend "s3" {
    bucket = "great-name-terraform-state-2"
    key    = "global/s3/terraform.tfstate"
    region = "eu-central-1"
    profile = "AWS_PROFILE"
  }
}

Note that the AWS profile needs to be configured on the machine:

aws configure

or

nano .aws/credentials
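
For reference, a minimal ~/.aws/credentials entry looks like this (the profile name and key values are placeholders):

[AWS_PROFILE]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-key>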

One thing to watch out for here: when you need to apply Terraform from inside an EC2 instance, the instance may have an IAM role assigned, and that may produce a conflict in the permissions.

Northerly answered 21/8, 2022 at 9:53 Comment(0)

I had the same issue; my IAM role didn't have the correct permissions to do List on the bucket. To check, use:

aws s3 ls

and see if you have access. If not, add the proper IAM role.
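
If you would rather grant just what the backend needs instead of full admin, here is a rough Terraform sketch of a least-privilege policy (the policy name is hypothetical; the bucket, key, and table names are the ones from the question):

resource "aws_iam_policy" "terraform_backend" {
  name = "terraform-backend-access" # hypothetical name, pick your own

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # the backend lists the bucket when loading state
        Effect   = "Allow"
        Action   = "s3:ListBucket"
        Resource = "arn:aws:s3:::great-name-terraform-state-2"
      },
      {
        # read and write the state object itself
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::great-name-terraform-state-2/global/s3/terraform.tfstate"
      },
      {
        # acquire and release the DynamoDB state lock
        Effect   = "Allow"
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
        Resource = "arn:aws:dynamodb:eu-central-1:*:table/great-name-locks-2"
      }
    ]
  })
}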

Mesdames answered 27/8, 2022 at 14:25 Comment(0)

It's not possible to create the S3 bucket that you are planning to use as remote state storage within the same terraform project. You will have to create another terraform project where you provision your state buckets (+ lock tables) or just create the bucket manually.

For a more detailed answer, please read this.

Jacie answered 17/5, 2020 at 20:59 Comment(6)
I created another project to use the previous bucket and the DynamoDB table, and made the folder structure match the key. When I ran terraform init I got: Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes. Error refreshing state: AccessDenied: Access Denied status code: 403, request id: xxx, host id: xxxFustian
In most cases it is easier to just create it by hand, especially when you don't have to do it often. What I meant by "create another TF project" is: imagine you are working in a DevOps team and you have to create new dynamic Terraform projects on the fly to provide to your team. Then, instead of creating the state bucket manually, you could write a simple Terraform file which has a local state and provisions an S3 bucket and a DynamoDB table. Afterwards you take these two components and reference them by name in your terraform { backend "s3" {} } block.Jacie
I would be interested to see what output you get when you create the bucket by hand.Jacie
Sorry for the late reply - nothing works. I tried to make the bucket and table from a different project - didn't work. I also tried to create them manually - always the same error.Fustian
You can try to debug the terraform init command with TF_LOG=DEBUG terraform init. Maybe it's worth having a look at your ~/.aws/credentials file (or your environment variables: echo $AWS_ACCESS_KEY_ID, echo $AWS_SECRET_ACCESS_KEY and echo $AWS_SESSION_TOKEN) in case there are different credentials that override the ones you set.Jacie
The best bet would be to look at the TF_LOG=DEBUG output. Maybe also have a look at this GitHub issue for more information.Jacie

I was getting the same issue after running terraform apply; terraform init worked fine. None of the suggestions here worked but switching my shell from zsh to bash solved it.
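
If you hit the same shell-dependent behavior, one thing worth checking is whether the two shells export different AWS_* environment variables, since those override the credentials file (a guess, not confirmed as the cause here):

env | grep '^AWS_'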

Ratsbane answered 24/11, 2021 at 17:55 Comment(0)

This happened to me and the problem was that I was trying to create a bucket with a name that already exists!

Lebbie answered 4/2, 2022 at 15:27 Comment(0)

What worked for me was the answer from @Exequiel Barriero (Case 2) on the topic "Error refreshing state: state data in S3 does not have the expected content".

Link: Answer from @Exequiel Barriero

But there is also a different reason you can get this error that is not related to the backend: if you try to create a Lambda function with a layer and you pass a wrong ARN. In my case, one extra character in the ARN caused me this headache, so please review your ARN carefully.

Bullen answered 26/7, 2022 at 19:31 Comment(0)

Mostly what we do is comment out the S3 and DynamoDB table configuration, or else check the bucket and DynamoDB table values - sometimes those values are mismatched, and in that case we also face this issue.
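
A quick way to confirm that the names in the backend block match real resources, assuming the AWS CLI is configured with the same credentials Terraform uses:

aws s3api head-bucket --bucket great-name-terraform-state-2
aws dynamodb describe-table --table-name great-name-locks-2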

Sommers answered 30/6, 2023 at 7:19 Comment(0)

There might be a wrong aws_access_key_id or aws_secret_access_key in the .aws/config file.

When I erased the two offending lines from the file, it worked!
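
A quick way to check which identity your current credentials actually resolve to, so you can tell whether the bad keys are still being picked up:

aws sts get-caller-identity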

Tapia answered 29/8, 2023 at 7:17 Comment(0)

I got the same error & I found that my aws user (in the IAM service) wasn't having Admin permission .. So, the steps to solve it was the following:

  1. AWS Account
  2. IAM service >> users >> click on ur user name
  3. permissions >> Add Permission >> choose: AdministratorAccess >> confirm/submit

Then it worked with me
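
If you prefer the CLI over the console, the equivalent is roughly this (the user name is a placeholder):

aws iam attach-user-policy --user-name <your-user> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess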

Tameshatamez answered 30/9, 2023 at 12:44 Comment(0)

I got the same error while creating an S3 bucket using Terraform.

To fix that error, go to your IAM user:

  1. Click on the user you created for this task
  2. Add the "AdministratorAccess" permission
  3. Try terraform plan and then terraform apply again

Now it will work.

Marinara answered 31/1 at 10:57 Comment(1)
Welcome to StackOverflow! While this fixed your specific issue, OP explicitly mentioned his account already had administrator-level access.Bice

If you are getting a 403 error:

Make the ACL public.

Go to the bucket; there is an option there to make the ACL public. Select it.

It will work.

Alit answered 28/4 at 18:38 Comment(1)
Opening your bucket to the world by making the ACL public is not good security practice. There are better, more precise ways to allow the identities you want access to the bucket without allowing the whole world to access itSphery
