Facing error with count.index in terraform - terraform-provider-aws

I am preparing to deploy an Aurora MySQL cluster, and I have defined the following in "variable.tf" under root:
variable "db_subnet_id" {
type = list
description = `DB Subnet IDs`
default = [“subnet-02ddc8565555aaa9b”,“subnet-1d30f3a41ce19635d”] #db-sub-eu-central-1a , db-sub-eu-central-1b
}
variable `db-sg` {
type = list
description = “List of standard security groups”
default = [`“sg-00cfed28101ea95ab”,“sg-08017f86bc12348e8”,“sg-0e8c67c7a3cd79a79”`]
}
variable `db_az` {
type = list
description = “Frankfurt Availability Zones”
default = [“eu-central-1a”,“eu-central-1b”]
}
How can I use those variables with the following parameters:
availability_zones
db_subnet_group_name
vpc_security_group_ids
I have tried availability_zones = var.db_az[count.index], but the following error shows up:
The "count" object can only be used in "module", "resource", and "data" blocks, and only when the "count" argument is set.
Please advise: how can I use those variables with those parameters?
Thanks
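For context, a minimal sketch of how these list variables can be passed without count.index, assuming an aws_rds_cluster resource (the resource names and cluster settings below are illustrative placeholders): the list-typed arguments accept the whole list, and the subnet IDs go through an aws_db_subnet_group because db_subnet_group_name expects a group name, not subnet IDs.

# Sketch only: identifiers and cluster settings are placeholders, not taken from the question.
resource "aws_db_subnet_group" "aurora" {
  name       = "aurora-db-subnet-group"
  subnet_ids = var.db_subnet_id # whole list, no count.index needed
}

resource "aws_rds_cluster" "aurora" {
  cluster_identifier     = "aurora-cluster-demo"
  engine                 = "aurora-mysql"
  master_username        = "admin"
  master_password        = "change-me-please" # placeholder
  availability_zones     = var.db_az          # list argument takes the full list
  db_subnet_group_name   = aws_db_subnet_group.aurora.name
  vpc_security_group_ids = var.db-sg
}

count.index itself is only valid inside a module, resource, or data block that actually sets count.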

Related

Having trouble assigning a list variable to SSM Parameter as a stringList

I receive an error ("value must be a string") when trying to set an SSM parameter with type = "StringList" to a variable of type list using Terraform.
resource "aws_ssm_parameter" "customer_list_stg" {
name = "/corp/stg/customer_list"
type = "StringList"
value = var.customer_list
tags = {
environment = var.environment
}
}
customer_list = ["sar", "smi", "heath","first","human","stars","ther","ugg","stars","well"]
Running terraform apply, I expected an SSM parameter holding a list of 10 strings, but instead received this error:
Inappropriate value for attribute "value": string required.
I have tried tostring, jsonencode and flatten without success.
Just use list as the type; StringList isn't a valid HCL type. Please see the documentation here:
https://developer.hashicorp.com/terraform/language/expressions/types
Regardless of type = "StringList" on the resource, the value attribute of aws_ssm_parameter always expects a string.
Please refer to the AWS API docs.
Hence the correct code would be as follows:
resource "aws_ssm_parameter" "customer_list_stg" {
name = "/corp/stg/customer_list"
type = "StringList"
value = join(",", var.customer_list) ## if you want to use list(string) input
# value = "sar,smi,heath,first,human,stars,ther,ugg,stars,well" ## as a string literal (you can put this value in a variable too)
tags = {
environment = var.environment # define this variable in your config too.
}
}
variable "customer_list" {
type = list(string)
description = "(optional) Customer List"
default = ["sar", "smi", "heath", "first", "human", "stars", "ther", "ugg", "stars", "well"]
}
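If other configuration later needs the values as a list again, the comma-separated string can be split back apart. A small sketch (the data source label here is hypothetical):

data "aws_ssm_parameter" "customer_list_stg" {
  name = "/corp/stg/customer_list"
}

locals {
  # StringList parameters are returned as a single comma-separated string
  customer_list = split(",", data.aws_ssm_parameter.customer_list_stg.value)
}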

Get a filtered list from a terraform map of objects?

I have a map of users with characteristics like this, and I want to create a local variable that contains the names of the users in the "maker" group.
variable "users" {
type = map(object({
groups = list(string)
}))
default = {
"kevin.mccallister" = {
groups = ["kids", "maker"],
},
"biff" = {
groups = ["kids", "teens", "bully"],
},
}
}
I want to write the local like this, but it complains:
Error: Invalid 'for' expression ... Key expression is required when building an object.
locals {
  makers_list = flatten({
    for user, attr in var.users : user
    if contains(attr.groups, "makers")
  })
}
How can I take that map of objects and get out a list of names based on group affiliation?
flatten() is not required for this. Also, the {} forces the for expression to build an object, which is why Terraform asks for a key expression. Build a list with [] instead, and it will produce a list of the users filtered by their group association.
makers_list = [
  for user, attr in var.users : user
  if contains(attr.groups, "maker") # note: the sample data uses "maker", not "makers"
]
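As a quick sanity check (assuming the local above is defined inside a locals block), the result can be exposed with an output; with the sample var.users it evaluates to ["kevin.mccallister"]:

output "makers_list" {
  value = local.makers_list # ["kevin.mccallister"] with the sample data above
}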

terraform and kms key aliases

I am using the aws provider and trying to create an aws_workspaces_workspace with encrypted volumes.
I created an aws_kms_key with an associated alias (aws_kms_alias).
I specified the key alias (as a string) for volume_encryption_key.
The resource is created as expected and I can verify in the console that the volumes are encrypted with the specified key.
My issue is that every time I re-run terraform apply, Terraform reports that the aws_workspaces_workspace needs to be replaced because of an update to the key value (from a key ID to the alias).
How can I prevent this from happening? Is this a bug? Am I doing something incorrectly? Some of the relevant code is below.
resource "aws_workspaces_workspace" "workspace" {
directory_id = aws_workspaces_directory.ws-ad.id
bundle_id = var.bundle_id
user_name = var.username
root_volume_encryption_enabled = true
user_volume_encryption_enabled = true
volume_encryption_key = "alias/workspace-volume"
workspace_properties {
compute_type_name = "POWER"
user_volume_size_gib = 80
root_volume_size_gib = 50
running_mode = "AUTO_STOP"
running_mode_auto_stop_timeout_in_minutes = 60
}
}
resource "aws_kms_key" "kms-ws-volume" {
description = "Workspace Volume Encryption Key"
key_usage = "ENCRYPT_DECRYPT"
deletion_window_in_days = 30
is_enabled = true
}
resource "aws_kms_alias" "kms-ws-volume-alias" {
name = "alias/workspace-volume"
target_key_id = aws_kms_key.kms-ws-volume.key_id
}
Here's what terraform apply reports:
  # aws_workspaces_workspace.workspace["1"] must be replaced
-/+ resource "aws_workspaces_workspace" "workspace" {
      ~ computer_name         = "WSAMZN-T34E23BK" -> (known after apply)
      ~ id                    = "ws-v98b0y17z" -> (known after apply)
      ~ ip_address            = "10.0.0.45" -> (known after apply)
      ~ state                 = "STOPPED" -> (known after apply)
        tags                  = {
            "Name"    = "workspace-user1-env1"
            "Owner"   = "mario"
            "Profile" = "dev"
            "Stack"   = "env1"
        }
      ~ volume_encryption_key = "arn:aws:kms:us-west-2:927743275319:key/09de3db9-ecdd-4be1-a781-705fdd0294f9" -> "alias/workspace-volume" # forces replacement
        # (6 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
Use the ARN of the key: aws_kms_key.kms-ws-volume.arn.
The provider stores the ARN of the key in volume_encryption_key, so when the configuration specifies the alias the plan detects a change.
The example at https://registry.terraform.io/providers/hcavarsan/aws/latest/docs/resources/workspaces_workspace might be misleading in this regard, even though an alias also works.
Something similar happens with kms_key_id of aws_instance: it stores the ARN and not the key_id, and the plan always requires a volume replacement when using key_id instead of the ARN. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#kms_key_id
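A minimal sketch of the change, reusing the resources from the question (only the relevant argument is shown):

resource "aws_workspaces_workspace" "workspace" {
  # ... other arguments unchanged from the question ...
  volume_encryption_key = aws_kms_key.kms-ws-volume.arn # ARN instead of "alias/workspace-volume"
}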

Terraform: pass variable list to container environment variables

I am trying to find a way to pass/map a list variable to kubernetes_deployment container environment variables. I run an ASP.NET Core application in the container and am trying to modify the app by setting environment variables that override its settings.json. Overriding single values from settings.json is not a problem; the problem is when I need to define/override a whole array there.
I have a list variable like this in Terraform:
variable "allowed_cars" {
type = list(
object({
manufacturer = string
model = string
})
)
}
Then I have the resource definition of kubernetes_deployment with containers.
I believe it could work if I set the env variables for the container like this:
env {
  name  = "App__AllowedCars__0__Manufacturer"
  value = "xxx"
}
env {
  name  = "App__AllowedCars__0__Model"
  value = "xxx"
}
env {
  name  = "App__AllowedCars__1__Manufacturer"
  value = "xxx"
}
env {
  name  = "App__AllowedCars__1__Model"
  value = "xxx"
}
...
Is there a way to pass these env variables to the container dynamically, based on the allowed_cars Terraform variable? I don't know how many items will be defined for each environment, etc.
Thank you very much.
Something along these lines (example using azurerm_container_group in Azure):
environment_variables = "${merge(var.env_vars, var.secure_env_vars, local.master_env)}"
and then the variables can be passed in:
variable "env_vars" {
type = "map"
description = "envvaars"
default = {
WEB_USER = "locust"
HATCH_RATE = 25
LOCUST_COUNT = 50
LOCUST_FILE = "/locust/locustfile.py"
ATTACKED_HOST = "https://api-perf.yrdy.com"
}
}
variable "secure_env_vars" {
type = "map"
description = "secure env vars"
default = {
WEB_PASSWORD = "dummy"
API_KEY = "test"
}
}
As #ydaetskcoR mentioned, the dynamic block was the solution for me:
https://www.terraform.io/docs/language/expressions/dynamic-blocks.html
dynamic "env" {
for_each = var.allowed_cars
content {
name = "App__AllowedCars__0__${env.key}__Manufacturer"
value = env.value["manufacturer"]
}
}
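Since each list item needs two variables (Manufacturer and Model), one possible extension, a sketch that is not part of the original answer, is to flatten the list into name/value pairs first and iterate over those:

locals {
  # Build one {name, value} pair per setting so a single dynamic block can emit all of them.
  allowed_car_env = flatten([
    for idx, car in var.allowed_cars : [
      { name = "App__AllowedCars__${idx}__Manufacturer", value = car.manufacturer },
      { name = "App__AllowedCars__${idx}__Model", value = car.model },
    ]
  ])
}

# Inside the container block of the kubernetes_deployment resource:
dynamic "env" {
  for_each = { for e in local.allowed_car_env : e.name => e.value }
  content {
    name  = env.key
    value = env.value
  }
}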

How to pass multiple bootstrap actions in AWS EMR using Terraform?

I am trying to create a Terraform module for an AWS EMR cluster. I need to run multiple bootstrap scripts in EMR, which is where I am getting errors.
For example:
main.tf
...
variable bootstrap_actions { type = "list"}
...
resource "aws_emr_cluster" "emr-cluster" {
name = "${var.emr_name}"
release_label = "${var.release_label}"
applications = "${var.applications}"
termination_protection = true
ec2_attributes {
subnet_id = "${data.aws_subnet.subnet.id}"
emr_managed_master_security_group = "${data.aws_security_group.emr_managed_master_security_group.id}"
emr_managed_slave_security_group = "${data.aws_security_group.emr_managed_slave_security_group.id}"
service_access_security_group = "${data.aws_security_group.service_access_security_group.id}"
additional_slave_security_groups = "${var.additional_slave_security_groups_id}"
instance_profile = "${var.instance_profile}"
key_name = "${var.key_name}"
}
master_instance_type = "${var.master_instance_type}"
core_instance_type = "${var.core_instance_type}"
core_instance_count = "${var.core_instance_count}"
tags {
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
BUSINESS_REGION = "${var.BUSINESS_REGION}"
CLIENT = "${var.CLIENT}"
ENVIRONMENT = "${var.env}"
PLATFORM = "${var.PLATFORM}"
Name = "${var.emr_name}"
}
bootstrap_action = "${var.bootstrap_actions}"
configurations = "test-fixtures/emr_configurations.json"
service_role = "${var.service_role}"
autoscaling_role = "${var.autoscaling_role}"
}
I passed all the variables, including the bootstrap actions, as:
bootstrap_actions = [ "path=s3://bucket/bootstrap/hive/metastore/JSON41.sh,name=SERDE","path=s3://bucket/bootstrap/hive/hive-nofile-nproc-increase.sh,name=ulimit" ]
When I apply the plan, I get this error:
* aws_emr_cluster.emr-cluster: bootstrap_action.0: expected object, got invalid
* aws_emr_cluster.emr-cluster: bootstrap_action.1: expected object, got invalid
Does anyone have any idea about this? How can I pass multiple bootstrap actions here?
Please advise.
Thanks.
This works:
bootstrap_action = [
  {
    name = "custombootstrap_test1"
    path = "s3://${aws_s3_bucket.bucketlogs.bucket}/bootstrap-actions/master/configure-test1.sh"
  },
  {
    name = "custombootstrap_test2"
    path = "s3://${aws_s3_bucket.bucketlogs.bucket}/bootstrap-actions/master/configure-test2.sh"
  },
]
Doc: bootstrap_action - (Optional) list of bootstrap actions that will be run before Hadoop is started on the cluster nodes.
Your variable is a list of strings, not a list of objects. This variable is probably ok (not tested):
bootstrap_actions = [
  {
    path = "s3://bucket/bootstrap/hive/metastore/JSON41.sh"
    name = "SERDE"
  },
  {
    path = "s3://bucket/bootstrap/hive/hive-nofile-nproc-increase.sh"
    name = "ulimit"
  },
]
It should probably look more like this; see https://github.com/hashicorp/terraform-provider-aws/issues/22416:
bootstrap_action {
  path = "s3://${var.emrS3LogUri}/folder1/folder2/emr1.sh"
  name = "emr1"
  args = ["s3://${var.emrS3LogUri}/folder1/certs/", var.s3BucketRegion]
}

bootstrap_action {
  path = "s3://${var.emrS3LogUri}/folder1/folder2/emr2.sh"
  name = "emr2"
}
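To drive those repeated blocks from a variable, a sketch (assuming a list(object) variable like the one below; optional() in the type constraint needs Terraform 1.3+) using a dynamic block:

variable "bootstrap_actions" {
  type = list(object({
    path = string
    name = string
    args = optional(list(string), []) # optional() requires Terraform 1.3+
  }))
  default = []
}

resource "aws_emr_cluster" "emr-cluster" {
  # ... other arguments as in the question ...

  dynamic "bootstrap_action" {
    for_each = var.bootstrap_actions
    content {
      path = bootstrap_action.value.path
      name = bootstrap_action.value.name
      args = bootstrap_action.value.args
    }
  }
}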
