I would like to trigger a deployment that attaches an authorizer to a route.
When I used this approach to redeploy after attaching an integration to a route in Terraform, terraform apply completed successfully:
resource "aws_apigatewayv2_deployment" "deployment_trigger" {
  api_id      = aws_apigatewayv2_api.logservice-apigw.id
  description = "trigger deployment"

  triggers = {
    redeployment = sha1(join(",", tolist([
      jsonencode(aws_apigatewayv2_integration.Lambda_Integration),
      jsonencode(aws_apigatewayv2_route.route),
    ])))
  }

  lifecycle {
    create_before_destroy = true
  }
}
But when I used the same approach to attach the authorizer to a route, terraform apply also completed successfully, yet the attachment actually failed in the AWS console.
resource "aws_apigatewayv2_authorizer" "authorizer_non_prod" {
  ###checkov:skip=CKV_AWS_76: "Ensure API Gateway has Access Logging enabled"
  name                              = var.authorizer_name
  authorizer_type                   = "REQUEST"
  authorizer_uri                    = data.aws_lambda_function.lambda_logservice[2].invoke_arn
  api_id                            = aws_apigatewayv2_api.logservice-apigw.id
  authorizer_payload_format_version = "2.0"
  identity_sources                  = ["$request.header.Authorization"]
  authorizer_result_ttl_in_seconds  = var.authorizer_result_ttl_in_seconds
  enable_simple_responses           = var.enable_simple_responses
  authorizer_credentials_arn        = aws_iam_role.log_service_lambda_role.arn
}

resource "aws_apigatewayv2_deployment" "authorization_triggers" {
  api_id      = aws_apigatewayv2_api.logservice-apigw.id
  description = "trigger deployment"

  triggers = {
    redeployment = sha1(join(",", tolist([
      jsonencode(aws_apigatewayv2_authorizer.authorizer_non_prod),
      jsonencode(aws_apigatewayv2_route.route),
    ])))
  }

  lifecycle {
    create_before_destroy = true
  }
}
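For reference, my understanding is that the attachment itself is configured on the route resource rather than on the deployment; a minimal sketch of what I believe it should look like (route key and resource names are assumptions from my config, untested):

```hcl
resource "aws_apigatewayv2_route" "route" {
  api_id    = aws_apigatewayv2_api.logservice-apigw.id
  route_key = "POST /logs" # placeholder route key
  target    = "integrations/${aws_apigatewayv2_integration.Lambda_Integration.id}"

  # Attach the Lambda REQUEST authorizer to this route
  authorization_type = "CUSTOM"
  authorizer_id      = aws_apigatewayv2_authorizer.authorizer_non_prod.id
}
```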
How can I fix this? Thanks!
I need to output the primary or secondary connection string to use as an input value in an Azure Data Factory MongoDB API linked service, which connects to the database to upload JSON files from an Azure storage account to Azure Cosmos DB. But I'm getting an error message when outputting the connection strings using Terraform.
Can someone please check and help me with this? A detailed explanation is much appreciated.
output "cosmosdb_connection_strings" {
  value     = data.azurerm_cosmosdb_account.example.connection_strings
  sensitive = true
}
Error: Unsupported attribute
│
│ on outputs.tf line 21, in output "cosmosdb_connection_strings":
│ 21: value = data.azurerm_cosmosdb_account.example.connection_strings
│
│ This object has no argument, nested block, or exported attribute named "connection_strings"
I tried to reproduce the same in my environment:
resource "azurerm_cosmosdb_account" "db" {
  name                      = "tfex-cosmos-db-31960"
  location                  = "westus2"
  resource_group_name       = data.azurerm_resource_group.example.name
  offer_type                = "Standard"
  kind                      = "MongoDB"
  enable_automatic_failover = true

  capabilities {
    name = "EnableAggregationPipeline"
  }

  capabilities {
    name = "mongoEnableDocLevelTTL"
  }

  capabilities {
    name = "MongoDBv3.4"
  }

  capabilities {
    name = "EnableMongo"
  }

  consistency_policy {
    consistency_level       = "BoundedStaleness"
    max_interval_in_seconds = 300
    max_staleness_prefix    = 100000
  }

  geo_location {
    location          = "eastus"
    failover_priority = 0
  }
}
You can get the output using the code below:
output "cosmosdb_connectionstrings" {
  value     = "AccountEndpoint=${azurerm_cosmosdb_account.db.endpoint};AccountKey=${azurerm_cosmosdb_account.db.primary_key};"
  sensitive = true
}
I have the below Terraform azurerm provider version:
terraform {
  required_providers {
    azapi = {
      source  = "azure/azapi"
      version = "=0.1.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.2"
    }
  }
}
Try upgrading your Terraform provider version.
You can even traverse the array of connection strings and output the required one with the code below:
output "cosmosdb_connectionstrings" {
  value     = tostring(azurerm_cosmosdb_account.db.connection_strings[0])
  sensitive = true
}
Result:
As they are sensitive, you cannot see the output values in the UI, but you can export them to another resource.
I have created a key vault and exported the connection strings to it.
data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "example" {
  name                        = "kaexamplekeyvault"
  location                    = data.azurerm_resource_group.example.location
  resource_group_name         = data.azurerm_resource_group.example.name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_retention_days  = 7
  purge_protection_enabled    = false
  sku_name                    = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    key_permissions = [
      "Get", "List", "Backup", "Create"
    ]

    secret_permissions = [
      "Get", "List", "Backup", "Delete", "Purge", "Recover", "Restore", "Set"
    ]

    storage_permissions = [
      "Get", "List", "Backup", "Delete", "DeleteSAS", "GetSAS", "ListSAS", "Purge", "Recover", "RegenerateKey", "Restore", "Set", "SetSAS", "Update",
    ]
  }
}
resource "azurerm_key_vault_secret" "example" {
  count        = length(azurerm_cosmosdb_account.db.connection_strings)
  name         = "ASCosmosDBConnectionString-${count.index}"
  value        = tostring(azurerm_cosmosdb_account.db.connection_strings[count.index])
  key_vault_id = azurerm_key_vault.example.id
}
Then you can check the connection string values in your key vault.
Check the version and click on "Show Secret Value", from which you can copy the secret value, which is the connection string.
I have found two ways and implemented both; both were working.
In the first way, I store the primary connection string of the Cosmos DB using azurerm_cosmosdb_account.acc.connection_strings[0] with an index number, so it stores only the primary connection string.
resource "azurerm_key_vault_secret" "ewo11" {
  name         = "Cosmos-DB-Primary-String"
  value        = azurerm_cosmosdb_account.acc.connection_strings[0]
  key_vault_id = azurerm_key_vault.ewo11.id

  depends_on = [
    azurerm_key_vault.ewo11,
    azurerm_key_vault_access_policy.aduser,
    azurerm_key_vault_access_policy.demo-terraform-automation
  ]
}
The second way is to build the connection string manually using the join function. I found the common values in the connection string, assembled it accordingly, and was able to connect successfully with this string.
output "cosmosdb_account_primary_key" {
  value     = azurerm_cosmosdb_account.acc.primary_key
  sensitive = true
}

locals {
  kind         = "mongodb"
  db_name      = azurerm_cosmosdb_account.acc.name
  common_value = ".mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName="
}

output "cosmosdb_connection_strings" {
  value     = join("", [local.kind, "://", azurerm_cosmosdb_account.acc.name, ":", azurerm_cosmosdb_account.acc.primary_key, "@", local.db_name, local.common_value, "@", local.db_name, "@"])
  sensitive = true
}

resource "azurerm_key_vault_secret" "example" {
  name         = "cosmos-connection-string"
  value        = join("", [local.kind, "://", azurerm_cosmosdb_account.acc.name, ":", azurerm_cosmosdb_account.acc.primary_key, "@", local.db_name, local.common_value, "@", local.db_name, "@"])
  key_vault_id = data.azurerm_key_vault.example.id
}
In both ways I was able to fix the problem.
If we want to see the sensitive values, we can check them in the terraform.tfstate file; they are available there once we declare them as outputs.
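Alternatively, a CLI sketch for inspecting a sensitive output without opening the state file (assuming a Terraform version that supports the -raw flag and the output name defined above):

```hcl
# terraform output cosmosdb_connection_strings        # prints <sensitive>
# terraform output -raw cosmosdb_connection_strings   # prints the actual string value
```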
If I call flowSession.getCounterpartyFlowInfo() from a unit test using MockNetwork, it returns FlowInfo(flowVersion=1, appName=<unknown>)
Here is my current MockNetwork configuration:
network = MockNetwork(
    MockNetworkParameters(
        cordappsForAllNodes = listOf(
            TestCordapp.findCordapp("com.example.contract"),
            TestCordapp.findCordapp("com.example.workflow")
        ),
        networkParameters = testNetworkParameters(
            minimumPlatformVersion = 5
        )
    )
)
Is there a way to specify the appName of an application running in a mock network?
I don't think there is a configuration for that. The appName is derived from the jar file name by removing the '.jar' extension.
For the MockNode, the packages are scanned and classes are loaded directly rather than from a jar, so the lookup falls through to "<unknown>".
Here is how it's derived:
val Class<out FlowLogic<*>>.appName: String
    get() {
        val jarFile = location.toPath()
        return if (jarFile.isRegularFile() && jarFile.toString().endsWith(".jar")) {
            jarFile.fileName.toString().removeSuffix(".jar")
        } else {
            "<unknown>"
        }
    }
I have an application that I am deploying to AWS on ECS using Terraform. I would like to set it up so that it can perform a health check over HTTP, but require authentication and perform all other activities over HTTPS. However, I have no idea how to set this up and nothing I'm trying is working. Ideally, the HTTPS port should be 8080 (not sure if this is possible, if it's not please advise; it's not a big deal to use a different port). The HTTP port would ideally also be 8080 but can be different as well (I know nothing about networking and not sure if I can hit the same port with 2 different protocols to access 2 different sets of endpoints).
Here's the most recent iteration of what I've tried (abbreviated to only show the parts I think are important); I've tried other things as well but this is the most recent:
resource "aws_ecs_service" "service" {
  name                              = "${var.app_name}-service"
  cluster                           = aws_ecs_cluster.main.id
  task_definition                   = aws_ecs_task_definition.app.arn
  desired_count                     = var.app_count
  launch_type                       = "FARGATE"
  health_check_grace_period_seconds = 240

  network_configuration {
    security_groups  = [var.security_group_id]
    subnets          = var.private_subnet_ids
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_alb_target_group.app.id
    container_name   = var.app_name
    container_port   = var.app_port
  }

  depends_on = [aws_alb_listener.listener]
}
resource "aws_alb_target_group" "app" {
  name        = "${var.app_name}-target-group"
  port        = 443
  protocol    = "HTTPS"
  vpc_id      = var.vpc_id
  target_type = "ip"

  health_check {
    interval            = "30"
    protocol            = "HTTPS"
    matcher             = "200-299"
    timeout             = "3"
    path                = var.app_health_check_path
    unhealthy_threshold = "3"
  }
}
resource "aws_alb_listener" "listener" {
  load_balancer_arn = var.alb_arn
  port              = var.alb_listener_port
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.ssl_cert

  default_action {
    type = "fixed-response"

    fixed_response {
      content_type = "text/plain"
      message_body = "contact admin for ${var.app_name} access"
      status_code  = 403
    }
  }
}
resource "aws_lb_listener_rule" "cf_auth_listener_rule" {
  listener_arn = aws_alb_listener.listener.arn
  priority     = 101

  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.app.arn
  }

  condition {
    http_header {
      http_header_name = <HEADER_NAME>
      values           = [<HEADER_VALUE>]
    }
  }
}
resource "aws_lb_listener_rule" "health_check_listener_rule" {
  listener_arn = aws_alb_listener.listener.arn
  priority     = 1

  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.app.arn
  }

  condition {
    path_pattern {
      values = [var.app_health_check_path]
    }
  }
}
The problem I'm having is that my service is starting, but then immediately shutting down due to failed health check. It seems the ALB can't contact the health check endpoint to perform the health check.
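One variation I have been considering is checking health over plain HTTP on a dedicated port while keeping HTTPS for traffic, by overriding the health-check protocol and port on the target group. A sketch only, assuming the container also listens for plain HTTP on port 8081 (that port number is my own placeholder, untested):

```hcl
resource "aws_alb_target_group" "app" {
  name        = "${var.app_name}-target-group"
  port        = 443
  protocol    = "HTTPS" # regular traffic stays on HTTPS
  vpc_id      = var.vpc_id
  target_type = "ip"

  health_check {
    protocol = "HTTP" # health check over plain HTTP...
    port     = "8081" # ...on a dedicated port, overriding the traffic port
    path     = var.app_health_check_path
    matcher  = "200-299"
  }
}
```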
I am using OpenStack Kilo and Terraform v0.10.
I need to attach multiple interfaces of same network to an instance.
I've made the following attempts:
Adding the network block three times in openstack_compute_instance_v2 using the same network:
resource "openstack_compute_instance_v2" "VM1" {
  name        = "VM1"
  count       = "1"
  image_name  = "image"
  flavor_name = "flavor"

  network {
    uuid = "${openstack_networking_network_v2.NET_1.id}"
  }

  network {
    uuid = "${openstack_networking_network_v2.NET_1.id}"
  }

  network {
    uuid = "${openstack_networking_network_v2.NET_1.id}"
  }
}
Creating three ports on the same network and adding them to the compute instance:
resource "openstack_compute_instance_v2" "VM1" {
  name        = "VM1"
  count       = "1"
  image_name  = "image"
  flavor_name = "flavor"

  network {
    port = "${openstack_networking_port_v2.port_1.id}"
  }

  network {
    port = "${openstack_networking_port_v2.port_2.id}"
  }

  network {
    port = "${openstack_networking_port_v2.port_3.id}"
  }
}
Unfortunately, neither attempt worked.
I am able to launch an instance with a single port.
Post-creation, I wanted to add the additional interfaces.
Literally, I wanted to do the below after creating the VM with a single interface:
nova interface-attach --net-id $NET_1 "$VM1"
nova interface-attach --net-id $NET_1 "$VM1"
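For reference, the closest Terraform equivalent of nova interface-attach that I am aware of is the interface-attach resource; a sketch assuming a provider version that supports it (resource names are placeholders, untested on Kilo / Terraform 0.10):

```hcl
# Pre-create two extra ports on the same network
resource "openstack_networking_port_v2" "extra_port" {
  count      = 2
  name       = "VM1-extra-${count.index}"
  network_id = "${openstack_networking_network_v2.NET_1.id}"
}

# Attach each port to the already-created instance, post-creation
resource "openstack_compute_interface_attach_v2" "attach" {
  count       = 2
  instance_id = "${openstack_compute_instance_v2.VM1.id}"
  port_id     = "${openstack_networking_port_v2.extra_port.*.id[count.index]}"
}
```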