Terraform aws_s3_bucket_website_configuration keeps creating website block of aws_s3_bucket resource - terraform-provider-aws

I'm using ~> 3.0 as the AWS provider version constraint in Terraform, and the last terraform init downloaded 3.75.1. When I ran terraform plan, a warning came up:
Warning: Argument is deprecated
on main.tf line 14, in resource "aws_s3_bucket" "xxx":
14: resource "aws_s3_bucket" "xxx" {
Use the aws_s3_bucket_website_configuration resource instead
My bucket resource looked like this:
resource "aws_s3_bucket" "bucket" {
bucket = "bucket"
acl = "public-read"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket/*"
}
]
}
EOF
website {
index_document = "index.html"
error_document = "index.html"
}
}
Because of the provider changes and the deprecation warning they introduced, I split my bucket resource into three resources, as below:
resource "aws_s3_bucket" "bucket" {
bucket = "bucket"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket/*"
}
]
}
EOF
}
resource "aws_s3_bucket_acl" "bucket-acl" {
bucket = aws_s3_bucket.bucket.id
acl = "public-read"
}
resource "aws_s3_bucket_website_configuration" "bucket-website-config" {
bucket = aws_s3_bucket.bucket.id
index_document {
suffix = "index.html"
}
error_document {
key = "index.html"
}
}
I ran terraform plan; the output was as follows:
# aws_s3_bucket.bucket will be updated in-place
~ resource "aws_s3_bucket" "bucket" {
    ~ acl  = "public-read" -> "private"
      id   = "bucket"
      tags = {}
      # (13 unchanged attributes hidden)

    - website {
        - error_document = "index.html" -> null
        - index_document = "index.html" -> null
      }
      # (1 unchanged block hidden)
  }

# aws_s3_bucket_acl.bucket-acl will be created
+ resource "aws_s3_bucket_acl" "bucket-acl" {
    + acl    = "public-read"
    + bucket = "bucket"
    + id     = (known after apply)

    + access_control_policy {
        + grant {
            + permission = (known after apply)

            + grantee {
                + display_name  = (known after apply)
                + email_address = (known after apply)
                + id            = (known after apply)
                + type          = (known after apply)
                + uri           = (known after apply)
              }
          }

        + owner {
            + display_name = (known after apply)
            + id           = (known after apply)
          }
      }
  }

# aws_s3_bucket_website_configuration.bucket-website-config will be created
+ resource "aws_s3_bucket_website_configuration" "bucket-website-config" {
    + bucket           = "bucket"
    + id               = (known after apply)
    + website_domain   = (known after apply)
    + website_endpoint = (known after apply)

    + error_document {
        + key = "index.html"
      }

    + index_document {
        + suffix = "index.html"
      }
  }
Despite the confusion (I couldn't understand the changes to aws_s3_bucket, since I'm basically using the same configuration values), I ran terraform apply to see what would happen.
After every change was applied, I ran terraform plan again to make sure everything was up to date. From this point on, my environment entered a kind of vicious circle.
The second terraform plan output was:
# aws_s3_bucket.bucket will be updated in-place
~ resource "aws_s3_bucket" "bucket" {
      id   = "bucket"
      tags = {}
      # (14 unchanged attributes hidden)

    - website {
        - error_document = "index.html" -> null
        - index_document = "index.html" -> null
      }
      # (1 unchanged block hidden)
  }
As you can see, it tries to remove the website configuration from the bucket. I ran terraform apply for this as well, and afterwards I ran terraform plan for the third time:
# aws_s3_bucket_website_configuration.bucket-website-config will be created
+ resource "aws_s3_bucket_website_configuration" "bucket-website-config" {
    + bucket           = "bucket"
    + id               = (known after apply)
    + website_domain   = (known after apply)
    + website_endpoint = (known after apply)

    + error_document {
        + key = "index.html"
      }

    + index_document {
        + suffix = "index.html"
      }
  }
When I apply this, Terraform tries to remove the website configuration again, and this circle of changes goes on and on.
Is this a bug? Has anyone else stumbled upon this issue? Is there any solution other than adding an ignore_changes block or downgrading the provider version?
Any help will be appreciated, thank you very much.

I had exactly the same case, and I ran into it because my provider version was too old.
I was also using a ~> 3.62 AWS provider.
According to the provider changelog, some of these resources were only added in 4.0.0:
New Resource: aws_s3_bucket_website_configuration (#22648)
New Resource: aws_s3_bucket_acl (#22853)
I switched to version >= 4.4 of the AWS provider, and afterwards everything worked as expected (just to mention it, I chose 4.4 for additional reasons not related to this problem; 4.0 should already have been enough).
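For reference, a minimal sketch of pinning the provider (assuming the usual hashicorp/aws source; adjust the constraint to whatever you settle on) looks like this:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # 4.0.0 introduced aws_s3_bucket_website_configuration and aws_s3_bucket_acl;
      # >= 4.4 is what worked in my case
      version = ">= 4.4"
    }
  }
}

After bumping the constraint, terraform init -upgrade installs the newer provider.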

As #lopin said, it's a problem with an old provider version. In addition to #Oguzhan Aygun's lifecycle workaround, you can use the old provider's approach, which is the website block inside the aws_s3_bucket resource, like the following:
resource "aws_s3_bucket" "b" {
bucket = "s3-website-test.hashicorp.com"
website {
index_document = "index.html"
error_document = "error.html"
routing_rules = ...
}```
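A minimal sketch of the lifecycle workaround mentioned above, assuming it is applied by keeping the new resources and telling the bucket resource to ignore the arguments they now own:

resource "aws_s3_bucket" "bucket" {
  bucket = "bucket"

  # assumption: the separate aws_s3_bucket_acl and aws_s3_bucket_website_configuration
  # resources manage these settings, so the 3.x provider should stop reverting them
  lifecycle {
    ignore_changes = [acl, website]
  }
}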

Related

Google-beta seems to be using a non-existent project with google_firebase_project. What should I do?

Objective
I am trying to fix a Firebase deployment managed in Terraform. My module looks something like this...
data "google_client_config" "default_project" {
provider = google-beta
}
data "google_project" "project" {
provider = google-beta
project_id = var.gcp_project
}
resource "google_firebase_project" "default" {
provider = google-beta
project = var.gcp_project
}
# enable billing API
resource "google_project_service" "cloud_billing" {
provider = google-beta
project = google_firebase_project.default.id
service = "cloudbilling.googleapis.com"
}
# enable firebase
resource "google_project_service" "firebase" {
provider = google-beta
project = google_firebase_project.default.id
service = "firebase.googleapis.com"
}
# enable access context manage api
resource "google_project_service" "access_context" {
provider = google-beta
project = google_firebase_project.default.id
service = "accesscontextmanager.googleapis.com"
}
resource "google_firebase_web_app" "app" {
provider = google-beta
project = data.google_project.project.project_id
display_name = "firestore-controller-${google_firebase_project.default.display_name}"
depends_on = [
google_firebase_project.default,
google_project_service.firebase,
google_project_service.access_context,
google_project_service.cloud_billing
]
}
data "google_firebase_web_app_config" "app" {
provider = google-beta
web_app_id = google_firebase_web_app.app.app_id
}
resource "google_storage_bucket" "storage" {
provider = google-beta
name = "firestore-controller-${google_firebase_project.default.display_name}"
location = "US"
}
locals {
firebase_config = jsonencode({
appId = google_firebase_web_app.app.app_id
apiKey = data.google_firebase_web_app_config.app.api_key
authDomain = data.google_firebase_web_app_config.app.auth_domain
databaseURL = lookup(data.google_firebase_web_app_config.app, "database_url", "")
storageBucket = lookup(data.google_firebase_web_app_config.app, "storage_bucket", "")
messagingSenderId = lookup(data.google_firebase_web_app_config.app, "message_sender_id", "")
measurementId = lookup(data.google_firebase_web_app_config.app, "measurement_id", "")
})
}
resource "google_storage_bucket_object" "firebase_config" {
provider = google-beta
bucket = google_storage_bucket.storage.name
name = "firebase-config.json"
content = local.firebase_config
}
Issue
Unfortunately, this fails at google_firebase_project.default with the following message:
{
│ "#type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "domain": "googleapis.com",
│ "metadata": {
│ "consumer": "projects/764086051850",
│ "service": "firebase.googleapis.com"
│ },
│ "reason": "SERVICE_DISABLED"
│ }
This is strange because a project with that number does not exist (unless it's some kind of root project that I'm having trouble finding). If this is the project number for some child of the project I am providing to google_firebase_project.default, that is also strange; var.gcp_project certainly has this service enabled.
What I've tried thus far
Removing tfstate.
Refactoring back and forth from legacy modules.
I have double-checked and confirmed that the google-beta provider does indeed recognize the correct project in its configuration when using data.google_project without specifying a project_id.
Where is this mysterious projects/764086051850 coming from?

Flow log for a specific ENI

I am supposed to be able to create a VPC flow log for a specific instance network interface.
I have been able to create a VPC flow log for the entire VPC, but not for a specific instance network interface. If I create an instance, it comes with an ENI, so I would think I should be able to inspect the instance to find the ENI and get its ID.
For this source code:
resource "aws_instance" "master_inst" { ...}
resource "aws_flow_log" "example-instance-flow-log" {
provider = aws.region_master
iam_role_arn = aws_iam_role.master-vpc-flow-log-role.arn
log_destination = aws_cloudwatch_log_group.master-instance-flow-log.arn
traffic_type = "ALL"
eni_id = aws_instance.master_inst.network_interface.id
}
resource "aws_cloudwatch_log_group" "master-instance-flow-log" {
provider = aws.region_master
name = "master-instance-flow-log"
}
I am getting
Error: Cannot index a set value
│
│ on ../../modules/instances.tf line 78, in resource "aws_flow_log" "example-instance-flow-log":
│ 78: eni_id = aws_instance.master_inst.network_interface.id
│
│ Block type "network_interface" is represented by a set of objects, and set elements do not have addressable keys. To find elements matching specific criteria, use a "for" expression with an "if"
│ clause.
The configuration below does the trick; the key change is referencing the instance's primary_network_interface_id attribute instead of trying to index the network_interface set.
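Isolated to just that change, a sketch using the resource names from the question looks like this:

resource "aws_flow_log" "example-instance-flow-log" {
  provider        = aws.region_master
  iam_role_arn    = aws_iam_role.master-vpc-flow-log-role.arn
  log_destination = aws_cloudwatch_log_group.master-instance-flow-log.arn
  traffic_type    = "ALL"
  # primary_network_interface_id is a plain string attribute, so no set indexing is needed
  eni_id          = aws_instance.master_inst.primary_network_interface_id
}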
One caveat: for terraform destroy to clean up the log group, the role needs permission to delete the log group. Unfortunately, even with the delete action in the policy, roughly one time in three the log group is not actually deleted, so you have to keep the console open to remove it manually.
resource "aws_iam_role" "flowlog-role" {
provider = aws.region_master
name = "flowlog-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "vpc-flow-logs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "flowlog-role-policy" {
provider = aws.region_master
name = "flowlog-role-policy"
role = aws_iam_role.flowlog-role.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:DeleteLogGroup",
"logs:CreateLogStream",
"lots:DeleteLogStream",
"logs:PutLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_cloudwatch_log_group" "master-instance-flowlog-grp" {
count = var.enable_instance_flowlog? 1 : 0
provider = aws.region_master
name = "master-instance-flowlog-grp"
retention_in_days = 3 ## need to specify number of days otherwise terraform destroy will not remove log group
}
resource "aws_flow_log" "master-instance-flowlog" {
count = var.enable_instance_flowlog? 1 : 0
provider = aws.region_master
iam_role_arn = aws_iam_role.flowlog-role.arn
log_destination = aws_cloudwatch_log_group.master-instance-flowlog-grp[count.index].arn
traffic_type = "ALL"
eni_id = aws_instance.master_instance.primary_network_interface_id
}

Kubernetes Nginx ingress - failed to ensure load balancer: could not find any suitable subnets for creating the ELB

I would like to deploy a minimal Kubernetes cluster on AWS with Terraform and install an Nginx Ingress Controller with Helm.
The Terraform code:
provider "aws" {
region = "us-east-1"
}
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
variable "cluster_name" {
default = "my-cluster"
}
variable "instance_type" {
default = "t2.large"
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.11"
}
data "aws_availability_zones" "available" {
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.0.0"
name = "k8s-${var.cluster_name}-vpc"
cidr = "172.16.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
public_subnets = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "12.2.0"
cluster_name = "eks-${var.cluster_name}"
cluster_version = "1.18"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
worker_groups = [
{
name = "worker-group-1"
instance_type = "t3.small"
additional_userdata = "echo foo bar"
asg_desired_capacity = 2
},
{
name = "worker-group-2"
instance_type = "t3.small"
additional_userdata = "echo foo bar"
asg_desired_capacity = 1
},
]
write_kubeconfig = true
config_output_path = "./"
workers_additional_policies = [aws_iam_policy.worker_policy.arn]
}
resource "aws_iam_policy" "worker_policy" {
name = "worker-policy-${var.cluster_name}"
description = "Worker policy for the ALB Ingress"
policy = file("iam-policy.json")
}
The installation completes successfully:
helm install my-release nginx-stable/nginx-ingress
NAME: my-release
LAST DEPLOYED: Sat Jun 26 22:17:28 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.
The kubectl describe service my-release-nginx-ingress returns:
Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
The VPC is created and the public subnets seem to be correctly tagged. What is missing to make the Ingress aware of the public subnets?
In the eks module you are prefixing the cluster name with eks-:
cluster_name = "eks-${var.cluster_name}"
However you do not use the prefix in your subnet tags:
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
Drop the prefix from cluster_name and add it to the cluster name variable (assuming you want the prefix at all). Alternatively, you could add the prefix to your tags to fix the issue, but that approach makes it easier to introduce inconsistencies.
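A minimal sketch of the suggested fix (the variable default here is just illustrative): keep the prefix in one place, the variable itself, and pass it through unchanged so the tags and the cluster name cannot drift apart:

variable "cluster_name" {
  default = "eks-my-cluster" # prefix baked into the variable, if you want one at all
}

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "12.2.0"
  cluster_name    = var.cluster_name # no extra "eks-" prefix here
  cluster_version = "1.18"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id
}

With that, the existing "kubernetes.io/cluster/${var.cluster_name}" subnet tags match the actual cluster name, which is what the ELB subnet discovery relies on.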

Error: Error getting Backup Vault: AccessDeniedException:

Can someone please help me understand what this error is about? I was configuring AWS Backup and got this error message. I have tried many things (IAM policies, etc.) but no luck. Any assistance is much appreciated.
Error: Error getting Backup Vault: AccessDeniedException:
status code: 403, request id: 501c0713-0ce9-4879-93f6-1887322a38be
I ran into this issue using Terraform. I figured it out by adding the "backup-storage:MountCapsule" permission to the policy of the role I am using to create the resource. Here is a slightly edited policy and role configuration; hopefully this helps someone.
data "aws_iam_policy_document" "CloudFormationServicePolicy" {
statement {
sid = "AllResources"
effect = "Allow"
actions = [
"backup:*",
"backup-storage:MountCapsule",
...
]
resources = ["*"]
}
statement {
sid = "IAM"
effect = "Allow"
actions = ["iam:PassRole"]
resources = ["*"]
}
}
resource "aws_iam_policy" "CloudFormationServicePolicy" {
name = "${local.resource_name}-CloudFormationServicePolicy"
description = "policy for the IAM role "
path = "/${local.metadata["project"]}/${local.metadata["application"]}/"
policy = data.aws_iam_policy_document.CloudFormationServicePolicy.json
}
resource "aws_iam_role" "CloudFormationServiceRole" {
name = "${local.resource_name}-CloudFormationServiceRole"
description = "Allow cluster to manage node groups, fargate nodes and cloudwatch logs"
force_detach_policies = true
assume_role_policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
"Action" : "sts:AssumeRole",
"Principal" : {
"Service" : ["cloudformation.amazonaws.com", "ecs-tasks.amazonaws.com"]
},
"Effect" : "Allow",
"Sid" : "TrustStatement"
},
{
"Effect" : "Allow",
"Principal" : {
"AWS" : "arn:aws:iam::xxxxxxx:role/OrganizationAdministratorRole"
},
"Action" : "sts:AssumeRole"
}
]
})
}
resource "aws_iam_role_policy_attachment" "CloudFormationService_task_role_policy_attachment" {
role = aws_iam_role.CloudFormationServiceRole.name
policy_arn = aws_iam_policy.CloudFormationServicePolicy.arn
}
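For context, the "Error getting Backup Vault" message is Terraform reading the vault it manages; a minimal sketch of the resource that triggers that call (the name here is a placeholder) is simply:

resource "aws_backup_vault" "example" {
  name = "example-backup-vault"
}

So the role running Terraform needs the backup:* actions and, as noted above, backup-storage:MountCapsule.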

Pass variables from Terraform to ARM template

I am deploying an ARM template with Terraform.
We deploy all our Azure infrastructure with Terraform, but for AKS there are some preview features that are not in Terraform yet, so we want to deploy an AKS cluster with an ARM template.
If I create a Log Analytics workspace with Terraform, how can I pass the workspace ID to the ARM template?
resource "azurerm_resource_group" "test" {
name = "k8s-test-bram"
location = "westeurope"
}
resource "azurerm_log_analytics_workspace" "test" {
name = "lawtest"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "PerGB2018"
retention_in_days = 30
}
Here is a snippet of the AKS ARM template where I want to enable monitoring and refer to the workspace resource ID. But how do I define/declare the parameter so it gets the ID of the workspace I created with Terraform?
"properties": {
"kubernetesVersion": "[parameters('kubernetesVersion')]",
"enableRBAC": "[parameters('EnableRBAC')]",
"dnsPrefix": "[parameters('DnsPrefix')]",
"addonProfiles": {
"httpApplicationRouting": {
"enabled": false
},
omsagent": {
"enabled": true,
"config": {
"logAnalyticsWorkspaceResourceID": "[parameters('workspaceResourceId')]"
}
}
},
You could use the parameters argument of the azurerm_template_deployment resource to pass in parameters:
parameters = {
  "workspaceResourceId" = "${azurerm_log_analytics_workspace.test.id}"
}
I think it should look more or less like that; here is the official documentation on this.
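Putting it together, a rough sketch of the deployment resource (the deployment name and template file path are placeholders) could look like this:

resource "azurerm_template_deployment" "aks" {
  name                = "aks-deployment" # placeholder name
  resource_group_name = azurerm_resource_group.test.name
  deployment_mode     = "Incremental"
  template_body       = file("${path.module}/aks.json") # placeholder path to the ARM template

  parameters = {
    "workspaceResourceId" = "${azurerm_log_analytics_workspace.test.id}"
  }
}

The ARM template then just needs a matching workspaceResourceId entry of type string in its parameters section.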
