How to Output the Cosmos DB Primary and Secondary Connection Strings using Terraform

I need to output the Primary or Secondary Connection String so I can use it as an input value for an Azure Data Factory MongoApi Linked Service, which connects to the database to upload JSON files from an Azure storage account to Azure Cosmos DB. However, I get the error below when outputting the connection strings with Terraform.
Can someone please check and help me with this? A detailed explanation is much appreciated.
output "cosmosdb_connection_strings" {
value = data.azurerm_cosmosdb_account.example.connection_strings
sensitive = true
}
Error: Unsupported attribute
│
│ on outputs.tf line 21, in output "cosmosdb_connection_strings":
│ 21: value = data.azurerm_cosmosdb_account.example.connection_strings
│
│ This object has no argument, nested block, or exported attribute named "connection_strings"

I tried to reproduce the same in my environment:
resource "azurerm_cosmosdb_account" "db" {
name = "tfex-cosmos-db-31960"
location = "westus2"
resource_group_name = data.azurerm_resource_group.example.name
offer_type = "Standard"
kind = "MongoDB"
enable_automatic_failover = true
capabilities {
name = "EnableAggregationPipeline"
}
capabilities {
name = "mongoEnableDocLevelTTL"
}
capabilities {
name = "MongoDBv3.4"
}
capabilities {
name = "EnableMongo"
}
consistency_policy {
consistency_level = "BoundedStaleness"
max_interval_in_seconds = 300
max_staleness_prefix = 100000
}
geo_location {
location = "eastus"
failover_priority = 0
}
}
You can get the output using the code below:
output "cosmosdb_connectionstrings" {
  value     = "AccountEndpoint=${azurerm_cosmosdb_account.db.endpoint};AccountKey=${azurerm_cosmosdb_account.db.primary_key};"
  sensitive = true
}
I have the below Terraform azurerm provider version:
terraform {
  required_providers {
    azapi = {
      source  = "azure/azapi"
      version = "=0.1.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.2"
    }
  }
}
Try upgrading your Terraform and provider versions.
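For reference, a minimal sketch of an upgraded provider pin; the exact constraint here is an assumption, so use whichever recent azurerm 3.x release you have tested against:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # Assumed example constraint: any recent 3.x release that exports
      # connection_strings on azurerm_cosmosdb_account should work.
      version = "~> 3.0"
    }
  }
}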
You can even traverse the array of connection strings and output the required one with the code below:
output "cosmosdb_connectionstrings" {
value = tostring("${azurerm_cosmosdb_account.db.connection_strings[0]}")
sensitive = true
}
Result:
As they are sensitive, you cannot see the output values in the UI, but you can export them to the resource where they are needed.
I have created a Key Vault and exported the connection strings to it.
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "example" {
name = "kaexamplekeyvault"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"Get","List", "Backup", "Create"
]
secret_permissions = [
"Get","List", "Backup", "Delete", "Purge", "Recover", "Restore", "Set"
]
storage_permissions = [
"Get", "List", "Backup", "Delete", "DeleteSAS", "GetSAS", "ListSAS", "Purge", "Recover", "RegenerateKey", "Restore", "Set", "SetSAS", "Update",
]
}
}
resource "azurerm_key_vault_secret" "example" {
count = length(azurerm_cosmosdb_account.db.connection_strings)
name = "ASCosmosDBConnectionString-${count.index}"
value = tostring("${azurerm_cosmosdb_account.db.connection_strings[count.index]}")
key_vault_id = azurerm_key_vault.example.id
}
Then you can check the connection string values in your Key Vault.
Check the secret version and click Show Secret Value, from which you can copy the secret value, which is the connection string.
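If another configuration (for example the one that defines the Data Factory linked service) needs the string, a minimal sketch of reading it back from Key Vault could look like this; the vault, secret, and resource group names are assumptions and must match what was created above:
data "azurerm_key_vault" "cosmos_kv" {
  name                = "kaexamplekeyvault"        # assumed: the vault created above
  resource_group_name = "your-resource-group-name" # assumed placeholder
}

data "azurerm_key_vault_secret" "cosmos_conn" {
  name         = "ASCosmosDBConnectionString-0"    # assumed: the first secret written above
  key_vault_id = data.azurerm_key_vault.cosmos_kv.id
}

# Reference data.azurerm_key_vault_secret.cosmos_conn.value wherever a
# connection string input is required.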

I have found two ways and implemented both; both work.
The first way is to store the primary connection string of the Cosmos DB account using azurerm_cosmosdb_account.acc.connection_strings[0] with an index number, so it only stores the Primary Connection String.
resource "azurerm_key_vault_secret" "ewo11" {
name = "Cosmos-DB-Primary-String"
value = azurerm_cosmosdb_account.acc.connection_strings[0]
key_vault_id = azurerm_key_vault.ewo11.id
depends_on = [
azurerm_key_vault.ewo11,
azurerm_key_vault_access_policy.aduser,
azurerm_key_vault_access_policy.demo-terraform-automation
]
}
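Since the question also asks for the secondary connection string, here is a minimal sketch under the assumption that connection_strings[1] is the secondary read-write string (verify the list ordering for your provider version):
resource "azurerm_key_vault_secret" "cosmos_secondary" {
  # Assumption: index 1 holds the secondary read-write connection string.
  name         = "Cosmos-DB-Secondary-String"
  value        = azurerm_cosmosdb_account.acc.connection_strings[1]
  key_vault_id = azurerm_key_vault.ewo11.id
}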
The second way is to build the string manually using the join function. I found the common values in the connection string, constructed it accordingly, and was able to connect successfully with this string.
output "cosmosdb_account_primary_key" {
value = azurerm_cosmosdb_account.acc.primary_key
sensitive = true
}
locals {
kind = "mongodb"
db_name = azurerm_cosmosdb_account.acc.name
common_value = ".mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName="
}
output "cosmosdb_connection_strings" {
value = join("", [local.kind, ":", "//", azurerm_cosmosdb_account.acc.name, ":", azurerm_cosmosdb_account.acc.primary_key, "#", local.db_name, local.common_value, "#", local.db_name, "#"])
sensitive = true
}
resource "azurerm_key_vault_secret" "example" {
name = "cosmos-connection-string"
value = join("", [local.kind, ":", "//", azurerm_cosmosdb_account.acc.name, ":", azurerm_cosmosdb_account.acc.primary_key, "#", local.db_name, local.common_value, "#", local.db_name, "#"])
key_vault_id = data.azurerm_key_vault.example.id
}
I was able to fix the problem both ways.
If we want to see the sensitive values, we can check them in the terraform.tfstate file; they are available there once we reference them in outputs.
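Alternatively, for local debugging only, here is a sketch using Terraform's nonsensitive() function (0.15+) to surface the value in the CLI; remove it afterwards, since it defeats the purpose of the sensitive flag:
output "cosmosdb_connection_string_debug" {
  # Debug-only: strips the sensitive marking so the value prints on apply.
  value = nonsensitive(azurerm_cosmosdb_account.acc.connection_strings[0])
}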

Related

I need support to add the AWS variables in Terraform

I need support to add the following variables to my Terraform code so that the user can input the details and it can create the desired resources in AWS. I don't know how to do that; your kind support will be highly appreciated.
resource "aws_instance" "ec2" {
  ami                    = "ami-0fe0b2cf0e1f25c8a"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id
  key_name               = var.key
}

variable "security_group_id" {
  type        = string
  description = "Enter the SG"
}

variable "key" {
  description = "Enter the Keypair"
}

variable "subnet_id" {
  type        = string
  description = "Enter the Subnet"
}
How can I add the following AWS variables: Size, AZ, HDD, Port?
I have tried the code above, but due to a lack of knowledge I don't know how to add the required variables (e.g. there are resources + data sources).
resource "aws_instance" "ec2" {
ami = "ami-0fe0b2cf0e1f25c8a"
instance_type = "t2.micro"
vpc_security_group_ids = [var.security_group_id]
subnet_id = var.subnet_id
key_name = var.key
}
variable "security_group_id" {
type = string
description = "Enter the SG"
}
variable "key" {
description = "Enter the Kaypair"
}
variable "subnet_id" {
type = string
description = "Enter the Subnet"
}
@Malik, hopefully this will help you and give you a bit of an idea of how to initiate your config.
https://github.com/ishuar/terraform-eks/blob/main/examples/private_cluster/eks-ec2-private-jump-host.tf#L253
This only includes basic configuration for EC2 instances.
If you are looking for a module, then it's better to look into: https://github.com/terraform-aws-modules/terraform-aws-ec2-instance
Unfortunately, there are multiple ways of defining the resources you have to add to your EC2 instance, but the standard and basic way would be:
resource "aws_instance" "ec2" {
count = var.instance_count
ami = "ami-0fe0b2cf0e1f25c8a"
instance_type = "t2.micro"
vpc_security_group_ids = [var.security_group_id]
subnet_id = var.subnet_id
key_name = var.key
tags = { Name = "${var.name}-${count.index + 1}" }
## For the root volume attached to Ec2 instance
root_block_device {
delete_on_termination = var.delete_on_termination
encrypted = var.encrypted
iops = var.iops
volume_size = var.volume_size
volume_type = var.volume_type
throughput = var.throughput
}
## For the additional EBS block device attached to Ec2 instance
ebs_block_device {
delete_on_termination = var.delete_on_termination
device_name = var.device_name
encrypted = var.encrypted
iops = var.iops
kms_key_id = var.kms_key_id
snapshot_id = var.snapshot_id
volume_size = var.volume_size
volume_type = var.volume_type
throughput = var.throughput
}
}
Regarding ports, I assume you mean the ports used for access; those can be controlled via security groups.
I hope this info helps somewhat; even though it is not a complete aws_instance module, it should give you an idea.
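As a starting point, here is a sketch of some of the variable declarations the block above would rely on; the names, types, and defaults are assumptions, mapped loosely to the Size, AZ, HDD, and Port items from the question:
variable "instance_count" {
  type        = number
  description = "Number of EC2 instances to create"
  default     = 1
}

# "AZ": can be passed to the aws_instance availability_zone argument
variable "availability_zone" {
  type        = string
  description = "Availability Zone to launch the instance in"
  default     = null
}

# "HDD": volume settings consumed by root_block_device / ebs_block_device
variable "volume_size" {
  type        = number
  description = "Volume size in GiB"
  default     = 20
}

variable "volume_type" {
  type        = string
  description = "Volume type, e.g. gp3"
  default     = "gp3"
}

# "Port": opened on the security group rather than on the instance itself
variable "ingress_port" {
  type        = number
  description = "Inbound port to allow on the security group"
  default     = 22
}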

Terraform setproduct with inner loop

I'm trying to create CloudWatch alarms that cycle through the instances defined in data.tf and, for each one of these, cycle through the volume IDs.
data.tf
data "aws_instances" "instance_cloudwatch" {
instance_tags = {
Type = var.type
}
}
data "aws_ebs_volumes" "cw_volumes" {
tags = {
Name = var.name
}
}
data "aws_ebs_volume" "cw_volume" {
for_each = toset(data.aws_ebs_volumes.cw_volumes.ids)
filter {
name = "volume-id"
values = [each.value]
}
}
In the resource file I created:
locals {
  vol_map = {
    for pair in setproduct(data.aws_instances.instance_cloudwatch.ids, data.aws_ebs_volume.cw_volume.*.id) : "${pair[0]}-${pair[1]}" => {
      id  = pair[0]
      vol = pair[1]
    }
  }
}
And then I try to use these pairs in the alarm dimensions
resource "aws_cloudwatch_metric_alarm" "some_alarm" {
for_each = local.vol_map
...
dimensions = {
InstanceId = each.value.id
VolumeId = each.value.vol
}
When I run terraform apply I get this error
Error: Unsupported attribute
for pair in setproduct(data.aws_instances.instance_cloudwatch.ids, data.aws_ebs_volume.cw_volume.*.id) : "${pair[0]}-${pair[1]}" => {
This object does not have an attribute named "id".
I tried volume_id and got the same error.
The issue is that you can't use the .*. (splat) syntax, as in data.aws_ebs_volume.cw_volume.*.id, on a resource or data source created with for_each. The splat syntax only works when you use count, because it only works with lists, and for_each creates a map.
Try values(data.aws_ebs_volume.cw_volume).*.id inside the setproduct call instead. values() returns the data.aws_ebs_volume.cw_volume instances as a list instead of a map, so you can then use .*. on them.
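For clarity, here is a sketch of the corrected locals block with that change applied:
locals {
  vol_map = {
    for pair in setproduct(
      data.aws_instances.instance_cloudwatch.ids,
      values(data.aws_ebs_volume.cw_volume).*.id
    ) : "${pair[0]}-${pair[1]}" => {
      id  = pair[0]
      vol = pair[1]
    }
  }
}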

helm_release nginx-ingress-controller renames digitalocean_loadbalancer name

I have a Terraform config that creates a digitalocean_loadbalancer and then creates a helm_release with the nginx-ingress-controller chart.
The first part:
resource "digitalocean_loadbalancer" "do_lb" {
name = "do-lb"
region = "ams3"
size = "lb-small"
algorithm = "round_robin"
redirect_http_to_https = true
forwarding_rule {
entry_port = 80
entry_protocol = "http"
target_port = 80
target_protocol = "http"
}
forwarding_rule {
entry_port = 443
entry_protocol = "https"
target_port = 443
target_protocol = "https"
tls_passthrough = true
}
}
It creates the load balancer named "do-lb" successfully.
Then, after applying the helm_release
resource "helm_release" "nginx_ingress_chart" {
name = "nginx-ingress-controller"
namespace = "default"
repository = "https://charts.bitnami.com/bitnami"
chart = "nginx-ingress-controller"
set {
name = "service.type"
value = "LoadBalancer"
}
set {
name = "service.annotations.kubernetes\\.digitalocean\\.com/load-balancer-id"
value = digitalocean_loadbalancer.do_lb.id
}
depends_on = [
digitalocean_loadbalancer.do_lb,
]
}
it automatically renames the load balancer to something MD5-like.
The question is: how do I prevent this renaming?
The solution is to provide the service.beta.kubernetes.io/do-loadbalancer-name annotation.
Specifies a custom name for the Load Balancer. Existing Load Balancers will be renamed. The name must adhere to the following rules:
it must not be longer than 255 characters
it must start with an alphanumeric character
it must consist of alphanumeric characters or the '.' (dot) or '-' (dash) characters
except for the final character which must not be '-' (dash)
If no custom name is specified, a default name is chosen consisting of the character a appended by the Service UID.
Your case:
resource "helm_release" "nginx_ingress_chart" {
name = "nginx-ingress-controller"
namespace = "default"
repository = "https://charts.bitnami.com/bitnami"
chart = "nginx-ingress-controller"
set {
name = "service.type"
value = "LoadBalancer"
}
set {
name = "service.annotations.kubernetes\\.digitalocean\\.com/load-balancer-id"
value = digitalocean_loadbalancer.do_lb.id
}
set {
name = "service.annotations.kubernetes\\.digitalocean\\.com/load-balancer-name"
value = "do-lb"
}
depends_on = [
digitalocean_loadbalancer.do_lb,
]
}

list of objects (blocks for network)

In openstack_compute_instance_v2, Terraform can attach existing networks, and I have 1 to n networks to attach, in a module:
...
variable "vm_network" {
type = "list"
}
resource "openstack_compute_instance_v2" "singlevm" {
name = "${var.vm_name}"
image_id = "${var.vm_image}"
key_pair = "${var.vm_keypair}"
security_groups = "${var.vm_sg}"
flavor_name = "${var.vm_size}"
network = "${var.vm_network}"
}
in my .tf file:
module "singlevm" {
...
vm_network = {"name"="NETWORK1"}
vm_network = {"name"="NETWORK2"}
}
Terraform returns an "expected object, got invalid" error.
What am I doing wrong here?
That's not how you specify a list in your .tf file that sources the module.
Instead you should have something more like:
variable "vm_network" { default = [ "NETWORK1", "NETWORK2" ] }
module "singlevm" {
...
vm_network = "${var.vm_network}"
}
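If the goal is for the module to attach one network block per list element, here is a sketch of the resource inside the module using a dynamic block (Terraform 0.12+ syntax, so an assumption relative to the 0.11-style code above):
resource "openstack_compute_instance_v2" "singlevm" {
  name            = var.vm_name
  image_id        = var.vm_image
  key_pair        = var.vm_keypair
  security_groups = var.vm_sg
  flavor_name     = var.vm_size

  # One network block per entry in var.vm_network
  dynamic "network" {
    for_each = var.vm_network
    content {
      name = network.value
    }
  }
}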

Additional Datasource with Difference Table Name

I have a domain called Test that has been running look-up SQL queries to select from another database.
I want to get away from this implementation and move to the multiple-datasource support of Grails 2.0. However, the table in the other database is called Panel.
Is it possible to map the domain to the alternative database and also map which table it selects from?
// Datasource.groovy
development {
    dataSource {
        dbCreate = 'create-drop' // one of 'create', 'create-drop', 'update'
        url = "jdbc:sqlserver://machine\\SQLEXPRESS;databaseName=primarydb"
        username = "user"
        password = "password"
    }
    dataSource_otherdb {
        url = "jdbc:sqlserver://remoteserver:1433;databaseName=otherdb"
    }
}
// Test.groovy
class Test {
    String name
    int key
    String abbreviation
    boolean active = true

    static mapping = {
        sort name: "asc"
        datasource("otherdb")
    }
}
There is a "table"configuration in mapping closure for that.
// Test.groovy
class Test {
    String name
    int key
    String abbreviation
    boolean active = true

    static mapping = {
        table 'Panel' // customize the table name
        sort name: "asc"
        datasource("otherdb")
    }
}
