resource "aws_instance" "dove-web" {
  ami                    = var.AMIS[var.REGION]
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.dove-pub-1.id
  key_name               = aws_key_pair.rsa.key_name
  vpc_security_group_ids = [aws_security_group.dove_stack_sg.id]
  tags = {
    Name = "cool"
  }
  provisioner "file" {
    source      = "web.sh"
    destination = "/home/ubuntu/web.sh"
  }
  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/ubuntu/web.sh",
      "sudo /home/ubuntu/web.sh"
    ]
  }
  # private key
  resource "tls_private_key" "rsa" {
    algorithm = "RSA"
    rsa_bits  = 4096
  }
  # public key
  resource "aws_key_pair" "rsa" {
    key_name   = var.PUB_key
    public_key = tls_private_key.rsa.public_key_openssh
    provisioner "local-exec" { # Generate a .pem key file in the current directory
      command = <<-EOT
        echo '${tls_private_key.rsa.private_key_pem}' > ./'${var.PUB_key}'.pem
        chmod 400 ./'${var.PUB_key}'.pem
      EOT
    }
  }
  # store key locally
  /*resource "local_file" "TF-key" {
    content  = tls_private_key.rsa.private_key_pem
    filename = "tfkey"
  }
  */
}
output "PublicIP" {
  value = aws_instance.dove-web.public_ip
}
$ terraform validate
╷
│ Error: Reference to undeclared resource
│
│ on instance.tf line 6, in resource "aws_instance" "dove-web":
│ 6: key_name = aws_key_pair.rsa.key_name
│
│ A managed resource "aws_key_pair" "rsa" has not been declared in the root module.
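The validation fails because the `tls_private_key` and `aws_key_pair` resources are declared *inside* the `aws_instance` block; Terraform only recognizes resources declared at the top level of a module, so the nested `aws_key_pair.rsa` never exists as far as the instance is concerned. A minimal sketch of the corrected layout follows; the `connection` block is an assumption (the `file`/`remote-exec` provisioners need one, and the `ubuntu` login user assumes an Ubuntu AMI):

```hcl
# Resources cannot be nested inside other resources; declare them at top level.
resource "tls_private_key" "rsa" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "rsa" {
  key_name   = var.PUB_key
  public_key = tls_private_key.rsa.public_key_openssh
}

resource "aws_instance" "dove-web" {
  ami                    = var.AMIS[var.REGION]
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.dove-pub-1.id
  key_name               = aws_key_pair.rsa.key_name
  vpc_security_group_ids = [aws_security_group.dove_stack_sg.id]

  # The provisioners need connection details to reach the instance
  # (assumes an Ubuntu AMI, whose login user is "ubuntu"):
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = tls_private_key.rsa.private_key_pem
    host        = self.public_ip
  }

  provisioner "file" {
    source      = "web.sh"
    destination = "/home/ubuntu/web.sh"
  }
}
```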
I need to output the Primary or Secondary Connection String to use as an input value in an Azure Data Factory MongoApi Linked Service, to connect to the database and upload JSON files from an Azure storage account to Azure Cosmos DB. But I'm getting an error message when outputting the connection strings with Terraform.
Can someone please check and help me with this? A detailed explanation would be much appreciated.
output "cosmosdb_connection_strings" {
  value     = data.azurerm_cosmosdb_account.example.connection_strings
  sensitive = true
}
Error: Unsupported attribute
│
│ on outputs.tf line 21, in output "cosmosdb_connection_strings":
│ 21: value = data.azurerm_cosmosdb_account.example.connection_strings
│
│ This object has no argument, nested block, or exported attribute named "connection_strings"
I tried to reproduce the same in my environment:
resource "azurerm_cosmosdb_account" "db" {
  name                      = "tfex-cosmos-db-31960"
  location                  = "westus2"
  resource_group_name       = data.azurerm_resource_group.example.name
  offer_type                = "Standard"
  kind                      = "MongoDB"
  enable_automatic_failover = true
  capabilities {
    name = "EnableAggregationPipeline"
  }
  capabilities {
    name = "mongoEnableDocLevelTTL"
  }
  capabilities {
    name = "MongoDBv3.4"
  }
  capabilities {
    name = "EnableMongo"
  }
  consistency_policy {
    consistency_level       = "BoundedStaleness"
    max_interval_in_seconds = 300
    max_staleness_prefix    = 100000
  }
  geo_location {
    location          = "eastus"
    failover_priority = 0
  }
}
You can get the output using the code below:
output "cosmosdb_connectionstrings" {
  value     = "AccountEndpoint=${azurerm_cosmosdb_account.db.endpoint};AccountKey=${azurerm_cosmosdb_account.db.primary_key};"
  sensitive = true
}
I have the below Terraform azurerm provider version:
terraform {
  required_providers {
    azapi = {
      source  = "azure/azapi"
      version = "=0.1.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.2"
    }
  }
}
Try upgrading your Terraform provider version.
You can even traverse the array of connection strings and output the required one with the code below:
output "cosmosdb_connectionstrings" {
  value     = tostring(azurerm_cosmosdb_account.db.connection_strings[0])
  sensitive = true
}
Result:
As they are sensitive, you cannot see the output values in the UI, but you can export them to the required resource.
I have created a Key Vault and exported the connection strings to it.
data "azurerm_client_config" "current" {}
resource "azurerm_key_vault" "example" {
  name                        = "kaexamplekeyvault"
  location                    = data.azurerm_resource_group.example.location
  resource_group_name         = data.azurerm_resource_group.example.name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_retention_days  = 7
  purge_protection_enabled    = false
  sku_name                    = "standard"
  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id
    key_permissions = [
      "Get", "List", "Backup", "Create"
    ]
    secret_permissions = [
      "Get", "List", "Backup", "Delete", "Purge", "Recover", "Restore", "Set"
    ]
    storage_permissions = [
      "Get", "List", "Backup", "Delete", "DeleteSAS", "GetSAS", "ListSAS", "Purge", "Recover", "RegenerateKey", "Restore", "Set", "SetSAS", "Update",
    ]
  }
}
resource "azurerm_key_vault_secret" "example" {
  count        = length(azurerm_cosmosdb_account.db.connection_strings)
  name         = "ASCosmosDBConnectionString-${count.index}"
  value        = tostring(azurerm_cosmosdb_account.db.connection_strings[count.index])
  key_vault_id = azurerm_key_vault.example.id
}
Then you can check the connection string values in your Key Vault.
Check the secret's version and click Show Secret Value, from which you can copy the secret value, which is the connection string.
I have found two ways, and both worked when implemented.
The first way stores the primary connection string of the Cosmos DB using azurerm_cosmosdb_account.acc.connection_strings[0] with an index number, so it stores only the Primary Connection String.
resource "azurerm_key_vault_secret" "ewo11" {
  name         = "Cosmos-DB-Primary-String"
  value        = azurerm_cosmosdb_account.acc.connection_strings[0]
  key_vault_id = azurerm_key_vault.ewo11.id
  depends_on = [
    azurerm_key_vault.ewo11,
    azurerm_key_vault_access_policy.aduser,
    azurerm_key_vault_access_policy.demo-terraform-automation
  ]
}
The second way is to build the string manually using the join function. The connection strings share some common values, so I assembled the string from those parts and was able to connect with it successfully.
output "cosmosdb_account_primary_key" {
  value     = azurerm_cosmosdb_account.acc.primary_key
  sensitive = true
}
locals {
  kind         = "mongodb"
  db_name      = azurerm_cosmosdb_account.acc.name
  common_value = ".mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName="
}
output "cosmosdb_connection_strings" {
  value     = join("", [local.kind, ":", "//", azurerm_cosmosdb_account.acc.name, ":", azurerm_cosmosdb_account.acc.primary_key, "@", local.db_name, local.common_value, "@", local.db_name, "@"])
  sensitive = true
}
resource "azurerm_key_vault_secret" "example" {
  name         = "cosmos-connection-string"
  value        = join("", [local.kind, ":", "//", azurerm_cosmosdb_account.acc.name, ":", azurerm_cosmosdb_account.acc.primary_key, "@", local.db_name, local.common_value, "@", local.db_name, "@"])
  key_vault_id = data.azurerm_key_vault.example.id
}
Both ways fixed the problem.
If we want to see the sensitive values, we can check them in the terraform.tfstate file; they are available there once we reference them in outputs. Alternatively, terraform output -raw &lt;name&gt; prints a sensitive output on the command line.
I'm trying to create a CloudWatch alarm that will cycle through the instances defined in data.tf and, for each one of these, cycle through the volume IDs.
data.tf
data "aws_instances" "instance_cloudwatch" {
  instance_tags = {
    Type = var.type
  }
}
data "aws_ebs_volumes" "cw_volumes" {
  tags = {
    Name = var.name
  }
}
data "aws_ebs_volume" "cw_volume" {
  for_each = toset(data.aws_ebs_volumes.cw_volumes.ids)
  filter {
    name   = "volume-id"
    values = [each.value]
  }
}
In the resource file I created:
locals {
  vol_map = {
    for pair in setproduct(data.aws_instances.instance_cloudwatch.ids, data.aws_ebs_volume.cw_volume.*.id) : "${pair[0]}-${pair[1]}" => {
      id  = pair[0]
      vol = pair[1]
    }
  }
}
And then I try to use these pairs in the alarm dimensions:
resource "aws_cloudwatch_metric_alarm" "some_alarm" {
  for_each = local.vol_map
  ...
  dimensions = {
    InstanceId = each.value.id
    VolumeId   = each.value.vol
  }
}
When I run terraform apply I get this error:
Error: Unsupported attribute
for pair in setproduct(data.aws_instances.instance_cloudwatch.ids, data.aws_ebs_volume.cw_volume.*.id) : "${pair[0]}-${pair[1]}" => {
This object does not have an attribute named "id". I tried volume_id and got the same error.
The issue is that you can't use the .*. (splat) syntax, as in data.aws_ebs_volume.cw_volume.*.id, on a resource or data source that you created with for_each. The splat syntax only works when you use count, because it only works on lists, and for_each creates a map.
Try values(data.aws_ebs_volume.cw_volume).*.id instead. values() returns the data.aws_ebs_volume.cw_volume instances as a list instead of a map, so you can then use .*. on them.
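Putting that together, a sketch of the corrected locals block from the question would look like this:

```hcl
locals {
  # values() turns the map of data.aws_ebs_volume.cw_volume instances
  # (produced by for_each) into a list, which the .*. splat supports.
  vol_map = {
    for pair in setproduct(
      data.aws_instances.instance_cloudwatch.ids,
      values(data.aws_ebs_volume.cw_volume).*.id,
    ) : "${pair[0]}-${pair[1]}" => {
      id  = pair[0]
      vol = pair[1]
    }
  }
}
```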
Node.js has a process.cpuUsage() function, and Deno has Deno.memoryUsage() to get memory usage.
There is also a process module for Deno at https://deno.land/std@0.123.0/node/process.ts,
but it doesn't include anything like .cpuUsage().
So is there a way to get the current CPU usage in Deno?
At the time I write this answer, it's not natively possible to obtain sampled CPU load data in Deno.
If you want this data now, you can get it in one of two ways:
Use the Foreign Function Interface API
Use the subprocess API
I'll provide a code sample below showing how to get the data by installing Node.js and using the second method:
node_eval.ts:
type MaybePromise<T> = T | Promise<T>;
type Decodable = Parameters<TextDecoder['decode']>[0];
const decoder = new TextDecoder();
async function trimDecodable (decodable: MaybePromise<Decodable>): Promise<string> {
return decoder.decode(await decodable).trim();
}
/**
* Evaluates the provided script using Node.js (like
* [`eval`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval))
* and returns `stdout`.
*
 * Uses the resolved version of `node` according to the host environment.
*
* Requires `node` to be available in `$PATH`.
* Requires permission `--allow-run=node`.
*/
export async function evaluateUsingNodeJS (script: string): Promise<string> {
const cmd = ['node', '-e', script];
const proc = Deno.run({cmd, stderr: 'piped', stdout: 'piped'});
const [{code}, stderr, stdout] = await Promise.all([
proc.status(),
trimDecodable(proc.stderrOutput()),
trimDecodable(proc.output()),
]);
if (code !== 0) {
const msg = stderr ? `\n${stderr}` : '';
throw new Error(`The "node" subprocess exited with a non-zero status code (${code}). If output was emitted to stderr, it is included below.${msg}`);
}
return stdout;
}
mod.ts:
import {evaluateUsingNodeJS} from './node_eval.ts';
export const cpuTimesKeys: readonly (keyof CPUTimes)[] =
['user', 'nice', 'sys', 'idle', 'irq'];
export type CPUTimes = {
/** The number of milliseconds the CPU has spent in user mode */
user: number;
/**
* The number of milliseconds the CPU has spent in nice mode
*
* `nice` values are POSIX-only.
* On Windows, the nice values of all processors are always `0`.
*/
nice: number;
/** The number of milliseconds the CPU has spent in sys mode */
sys: number;
/** The number of milliseconds the CPU has spent in idle mode */
idle: number;
/** The number of milliseconds the CPU has spent in irq mode */
irq: number;
};
export type CPUCoreInfo = {
model: string;
/** in MHz */
speed: number;
times: CPUTimes;
};
/**
* Requires `node` to be available in `$PATH`.
* Requires permission `--allow-run=node`.
*/
export async function sampleCPUsUsingNodeJS (): Promise<CPUCoreInfo[]> {
const script = `console.log(JSON.stringify(require('os').cpus()));`;
const stdout = await evaluateUsingNodeJS(script);
try {
return JSON.parse(stdout) as CPUCoreInfo[];
}
catch (ex) {
const cause = ex instanceof Error ? ex : new Error(String(ex));
throw new Error(`The "node" subprocess output couldn't be parsed`, {cause});
}
}
/**
 * (Same as `CPUCoreInfo`, but) aliased in recognition of the transformation,
* in order to provide JSDoc info regarding the transformed type
*/
export type TransformedCoreInfo = Omit<CPUCoreInfo, 'times'> & {
/** Properties are decimal percentage of total time */
times: Record<keyof CPUCoreInfo['times'], number>;
};
/** Converts each time value (in ms) to a decimal percentage of their sum */
export function coreInfoAsPercentages (coreInfo: CPUCoreInfo): TransformedCoreInfo {
const timeEntries = Object.entries(coreInfo.times) as [
name: keyof CPUCoreInfo['times'],
ms: number,
][];
const sum = timeEntries.reduce((sum, [, ms]) => sum + ms, 0);
for (const [index, [, ms]] of timeEntries.entries()) {
timeEntries[index][1] = ms / sum;
}
const times = Object.fromEntries(timeEntries) as TransformedCoreInfo['times'];
return {...coreInfo, times};
}
example.ts:
import {
coreInfoAsPercentages,
cpuTimesKeys,
sampleCPUsUsingNodeJS,
type CPUCoreInfo,
} from './mod.ts';
function anonymizeProcessorAttributes <T extends CPUCoreInfo>(coreInfoArray: T[]): T[] {
return coreInfoArray.map(info => ({
...info,
model: 'REDACTED',
speed: NaN,
}));
}
// Get the CPU info
const cpuCoreInfoArr = await sampleCPUsUsingNodeJS();
// Anonymizing my personal device details (but you would probably not use this)
const anonymized = anonymizeProcessorAttributes(cpuCoreInfoArr);
// JSON for log data
const jsonLogData = JSON.stringify(anonymized);
console.log(jsonLogData);
// Or, for purely visual inspection,
// round the percentages for greater scannability...
const roundedPercentages = anonymized.map(coreInfo => {
const asPercentages = coreInfoAsPercentages(coreInfo);
for (const key of cpuTimesKeys) {
asPercentages.times[key] = Math.round(asPercentages.times[key] * 100);
}
return asPercentages;
});
// and log in tabular format
console.table(roundedPercentages.map(({times}) => times));
In the console:
% deno run --allow-run=node example.ts
[{"model":"REDACTED","speed":null,"times":{"user":2890870,"nice":0,"sys":2290610,"idle":17913530,"irq":0}},{"model":"REDACTED","speed":null,"times":{"user":218270,"nice":0,"sys":188200,"idle":22687790,"irq":0}},{"model":"REDACTED","speed":null,"times":{"user":2509660,"nice":0,"sys":1473010,"idle":19111680,"irq":0}},{"model":"REDACTED","speed":null,"times":{"user":221630,"nice":0,"sys":174140,"idle":22698480,"irq":0}},{"model":"REDACTED","speed":null,"times":{"user":2161140,"nice":0,"sys":1086970,"idle":19846200,"irq":0}},{"model":"REDACTED","speed":null,"times":{"user":221800,"nice":0,"sys":157620,"idle":22714800,"irq":0}},{"model":"REDACTED","speed":null,"times":{"user":1905230,"nice":0,"sys":897140,"idle":20291910,"irq":0}},{"model":"REDACTED","speed":null,"times":{"user":224060,"nice":0,"sys":146460,"idle":22723700,"irq":0}}]
┌───────┬──────┬──────┬─────┬──────┬─────┐
│ (idx) │ user │ nice │ sys │ idle │ irq │
├───────┼──────┼──────┼─────┼──────┼─────┤
│ 0 │ 13 │ 0 │ 10 │ 78 │ 0 │
│ 1 │ 1 │ 0 │ 1 │ 98 │ 0 │
│ 2 │ 11 │ 0 │ 6 │ 83 │ 0 │
│ 3 │ 1 │ 0 │ 1 │ 98 │ 0 │
│ 4 │ 9 │ 0 │ 5 │ 86 │ 0 │
│ 5 │ 1 │ 0 │ 1 │ 98 │ 0 │
│ 6 │ 8 │ 0 │ 4 │ 88 │ 0 │
│ 7 │ 1 │ 0 │ 1 │ 98 │ 0 │
└───────┴──────┴──────┴─────┴──────┴─────┘
You can use https://deno.land/std@0.123.0/node/os.ts, where cpus() gives CPUCoreInfo[].
With openstack_compute_instance_v2, Terraform can attach existing networks, and I have 1 to n networks to attach. In the module:
...
variable "vm_network" {
  type = "list"
}
resource "openstack_compute_instance_v2" "singlevm" {
  name            = "${var.vm_name}"
  image_id        = "${var.vm_image}"
  key_pair        = "${var.vm_keypair}"
  security_groups = "${var.vm_sg}"
  flavor_name     = "${var.vm_size}"
  network         = "${var.vm_network}"
}
in my .tf file:
module "singlevm" {
...
vm_network = {"name"="NETWORK1"}
vm_network = {"name"="NETWORK2"}
}
Terraform returns an "expected object, got invalid" error.
What am I doing wrong here?
That's not how you specify a list in the .tf file that sources the module.
Instead you should have something more like:
variable "vm_network" {
  default = ["NETWORK1", "NETWORK2"]
}
module "singlevm" {
  ...
  vm_network = "${var.vm_network}"
}
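Note that each network on openstack_compute_instance_v2 is specified by keys such as name or uuid, so if passing plain strings is rejected, a list of maps may be what the provider expects. A sketch under that assumption (network names are placeholders):

```hcl
# Each element supplies one "network" entry on the instance,
# keyed the way the provider's network block expects.
variable "vm_network" {
  type = "list"
  default = [
    { "name" = "NETWORK1" },
    { "name" = "NETWORK2" },
  ]
}
```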