Create Terraform rules from a list - OpenStack

I am using Terraform v0.11.11.
I want to write a deployment script for OpenStack that takes a list of IPs or IP ranges of arbitrary length that I want to whitelist in a VM for port 22, say
ip_list = ["11.11.0.0/16","22.22.22.0/8", "33.33.33.33" ...]
Is there a syntax that makes the rule apply properly?
This is not working:
"openstack_compute_secgroup_v2" "secgroup_1" {
name = "a_cluster"
description = "some security group"
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "${var.ip_list}"
}
It returns the following:
$ terraform apply
Error: module.openstack.openstack_compute_secgroup_v2.secgroup_1: rule.3.cidr must be a single value, not a list
But is there a way to do it right?

I didn't have an environment to test the code below, but the idea should be in the right direction and you should be able to adjust it. Note that this approach creates one security group per entry in the list:
resource "openstack_compute_secgroup_v2" "secgroup_1" {
count = "${length(${var.ip_list})}"
name = "a_cluster_${count.index}"
description = "some security group"
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "${element(var.ip_list, count.index)}"
}
}
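For completeness, a minimal sketch of a matching ip_list variable declaration in 0.11 syntax, using the example values from the question:
variable "ip_list" {
  type    = "list"
  default = ["11.11.0.0/16", "22.22.22.0/8", "33.33.33.33"]
}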

If you can upgrade to Terraform 0.12, use a dynamic nested block. With
ip_list = ["11.11.0.0/16","22.22.22.0/8", "33.33.33.33" ...]
the configuration becomes:
"openstack_compute_secgroup_v2" "secgroup_1" {
name = "a_cluster"
description = "some security group"
dynamic "rule" {
for_each = ${var.ip_list}
content{
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = rule
}
}
}
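In 0.12 the variable can also be declared with an explicit element type; a minimal sketch, again with the question's example values:
variable "ip_list" {
  type    = list(string)
  default = ["11.11.0.0/16", "22.22.22.0/8", "33.33.33.33"]
}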

How to retrieve the EKS kubeconfig?

I have defined an aws_eks_cluster and aws_eks_node_group as follows:
resource "aws_eks_cluster" "example" {
count = var.create_eks_cluster ? 1 : 0
name = local.cluster_name
role_arn = aws_iam_role.example[count.index].arn
vpc_config {
subnet_ids = [
aws_subnet.main2.id,
aws_subnet.main3.id
]
security_group_ids = [
module.network.security_group_allow_all_from_client_ip,
module.network.security_group_main_id
]
endpoint_private_access = true
endpoint_public_access = false
}
# Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
# Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.example-AmazonEKSVPCResourceController,
]
}
resource "aws_eks_node_group" "example" {
count = var.create_eks_cluster ? 1 : 0
cluster_name = aws_eks_cluster.example[count.index].name
node_group_name = random_uuid.deployment_uuid.result
node_role_arn = aws_iam_role.eks-node-group-example[count.index].arn
subnet_ids = [
aws_subnet.main2.id,
aws_subnet.main3.id
]
scaling_config {
desired_size = 1
max_size = 5
min_size = 1
}
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.example-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.example-AmazonEC2ContainerRegistryReadOnly,
]
}
How can I retrieve the KubeConfig?
I have seen that the kubeconfig is available as an output on the eks module.
Do I need to replace aws_eks_cluster and aws_eks_node_group with the eks module?
The EKS module composes a kubeconfig based on a template.
You can include that template alongside your own Terraform code.
You will need to provide default values for all the variables in the templatefile function call and reference your own EKS resource name. It's fine to drop all the coalescelist functions too.
e.g.:
locals {
  kubeconfig = templatefile("templates/kubeconfig.tpl", {
    kubeconfig_name                   = local.kubeconfig_name
    endpoint                          = aws_eks_cluster.example[0].endpoint
    cluster_auth_base64               = aws_eks_cluster.example[0].certificate_authority[0].data
    aws_authenticator_command         = "aws-iam-authenticator"
    aws_authenticator_command_args    = ["token", "-i", aws_eks_cluster.example[0].name]
    aws_authenticator_additional_args = []
    aws_authenticator_env_variables   = {}
  })
}

output "kubeconfig" {
  value = local.kubeconfig
}

Not able to initialise Vault

I am trying to initialise Vault with the configuration below.
vault.hcl
path "*"{
capabilities = [ "read", "list", "update","create" ]
}
vault.conf
backend "file" {
path = "/vault/vaultsecrets"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
tls_cert_file = "/vault/certs/host.pem"
tls_key_file = "/vault/certs/host.key"
tls_cipher_suites = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA"
tls_prefer_server_cipher_suites = "true"
}
disable_mlock = "true"
and it gives me the following error:
{'errors': ['failed to initialize barrier: failed to persist keyring: mkdir /vault/vaultsecrets/core: permission denied']}
I feel this has something to do with file permissions, but I am not sure where to change them.
Note: this works fine with vault:1.0.1 but throws the above error when I migrate to vault:1.4.3.
Thanks in advance

Issues after upgrading to Terraform 0.12

It's sad that Terraform is not backward compatible. This configuration worked in 0.11:
data "aws_security_group" "security_groupdev" {
filter {
name = "group-name"
values = ["SecurityGroupdev"]
}
}
resource "aws_instance" "ec2_instance" {
count = "${var.ec2_instance_count}"
...
}
resource "aws_network_interface_sg_attachment" "sg_attachment" {
security_group_id = "${data.aws_security_group.security_groupdev.id}"
network_interface_id = "${aws_instance.ec2_instance.primary_network_interface_id}"
}
But after upgrading to Terraform 0.12 I have started facing issues, and I cannot work out the equivalent 0.12 syntax.
Error: Missing resource instance key
on ..\resources\ec2_instance\main.tf line 101, in resource "aws_network_interface_sg_attachment" "sg_attachment":
101: network_interface_id = "${aws_instance.ec2_instance.primary_network_interface_id}"
Because aws_instance.ec2_instance has "count" set, its attributes must be
accessed on specific instances.
I tried "${aws_instance.ec2_instance[count.index].primary_network_interface_id}" but no luck.

Corda: Trying to put the RPC Permissions on an external database

I'm trying to put the RPC permissions, along with the users and their passwords, on an external database. I've followed the documentation for Corda v. 3.3 (https://docs.corda.net/clientrpc.html#rpc-security-management).
It says that I need to create a "security" field for the node in question and fill out all the necessary information. I've done that, but as soon as I try to deploy the node, it gives me this error:
"Could not set unknown property 'security' for object of type net.corda.plugins.Node."
The node's configuration looks like this in the build.gradle file:
node {
    name "O=myOrganisation,L=Lisbon,C=PT"
    p2pPort 10024
    rpcSettings {
        address("localhost:10025")
        adminAddress("localhost:10026")
    }
    security = {
        authService = {
            dataSource = {
                type = "DB"
                passwordEncryption = "SHIRO_1_CRYPT"
                connection = {
                    jdbcUrl = "localhost:3306"
                    username = "*******"
                    password = "*******"
                    driverClassName = "com.mysql.jdbc.Driver"
                }
            }
        }
    }
    cordapps = [
        "$project.group:cordapp:$project.version"
    ]
}
You are confusing two syntaxes:
The syntax for configuring a node block inside a Cordform task such as deployNodes
The syntax for configuring a node directly via node.conf
The security settings are for inside node.conf. You have to create the node first, then modify the node's node.conf with these settings once it has been created.
Corda 4 will introduce an extraConfig option for use inside Cordform node blocks, as described here.
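For illustration, this is roughly how the same security block from the question would look once moved into the node's node.conf (a sketch reusing the question's own placeholder values):
security = {
    authService = {
        dataSource = {
            type = "DB"
            passwordEncryption = "SHIRO_1_CRYPT"
            connection = {
                jdbcUrl = "localhost:3306"
                username = "*******"
                password = "*******"
                driverClassName = "com.mysql.jdbc.Driver"
            }
        }
    }
}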

Terraform and Docker networking

I have defined a Terraform configuration using the Docker provider like this:
provider "docker" {
host = "tcp://127.0.0.1:2375/"
}
# Create the network
resource "docker_network" "private_network" {
name = "${var.customer_name}_network"
}
resource "docker_container" "goagent" {
image = "${docker_image.goagent.latest}"
name = "${var.customer_name}_goagent"
command = [ "/bin/sh", "-c", "/usr/bin/supervisord" ]
network_mode = "bridge"
networks = [ "${docker_network.private_network.name}" ]
hostname = "${var.customer_name}_goagent"
}
resource "docker_image" "goagent" {
name = "local/goagent"
}
I would expect the container to be connected only to the network created on the fly (using the variable customer_name).
But what I see is that the container also gets connected to the default bridge network (172.17.0.0/16), so it ends up connected to two networks.
Is there a way to configure the container in Terraform so that it is connected only to the network I specify in the networks list?
Apparently this is an unresolved bug as of 0.10.8
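One thing that may be worth testing, as an untested sketch: newer releases of the Docker provider deprecate the networks list in favour of networks_advanced blocks, and the behaviour around the default bridge attachment may differ there:
resource "docker_container" "goagent" {
  image    = "${docker_image.goagent.latest}"
  name     = "${var.customer_name}_goagent"
  command  = ["/bin/sh", "-c", "/usr/bin/supervisord"]
  hostname = "${var.customer_name}_goagent"

  # networks_advanced replaces the deprecated networks argument in newer
  # provider releases; only the named network is declared here.
  networks_advanced {
    name = "${docker_network.private_network.name}"
  }
}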
