Terraform AWS - Creation of multiple AWS S2S VPN - terraform-provider-aws

The requirement is to establish connectivity from multiple offices to AWS through Terraform. Below is the code so far, where the customer gateways and virtual private gateways are created based on the variable map cgw (let's assume 5 customer gateways):
resource "aws_customer_gateway" "cgw-main" {
for_each = toset(var.cgw)
ip_address = each.value
bgp_asn = 65000
type = "ipsec.1"
}
resource "aws_vpn_gateway" "vpn-gw-main" {
count = length(var.cgw)
vpc_id = aws_vpc.myvpc.id
}
resource "aws_vpn_connection" "vpn-main" {
customer_gateway_id= ??
vpn_gateway_id = ??
type="ipsec.1"
static_routes_only = true
}
How do I dynamically pick up each created customer gateway and map it to a created VPN gateway in aws_vpn_connection for the 5 iterations?
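One approach that should work here (a sketch, not the original code; it assumes var.cgw is a map of office name to public IP, e.g. { office1 = "203.0.113.10", ... }): drive the VPN connections with for_each over the created customer gateways, and attach a single virtual private gateway to the VPC, since a VPC can only have one VGW attached at a time and all five connections can share it.

# Sketch: assumes var.cgw is a map of office name => office router public IP.
resource "aws_customer_gateway" "cgw-main" {
  # One customer gateway per office, keyed by office name.
  for_each   = var.cgw
  ip_address = each.value
  bgp_asn    = 65000
  type       = "ipsec.1"
}

resource "aws_vpn_gateway" "vpn-gw-main" {
  # A single virtual private gateway shared by all VPN connections.
  vpc_id = aws_vpc.myvpc.id
}

resource "aws_vpn_connection" "vpn-main" {
  # One VPN connection per customer gateway, keyed the same way.
  for_each            = aws_customer_gateway.cgw-main
  customer_gateway_id = each.value.id
  vpn_gateway_id      = aws_vpn_gateway.vpn-gw-main.id
  type                = "ipsec.1"
  static_routes_only  = true
}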

Related

AWS EKS load balancer creation fails for the EKS cluster

I have created an EKS cluster with two public subnets and successfully launched one worker node (instance type = t3.small) in one of the public subnets.
I am able to create an Nginx deployment and a NodePort service, query the deployment and other k8s objects, and access the web application through the node port (<Service-public-ip:nodeport>) successfully.
I am trying to create load balancers, an ALB and an NLB, but both are failing.
The whole setup is driven by Terraform files. I need help identifying why the load balancer creation (both types) fails and how I can fix it in my Terraform files.
Terraform file for network load balancer:
resource "kubernetes_service_v1" "nlb-nginx-service" {
metadata {
name = "nlb-nginx-service"
annotations = {
"service.beta.kubernetes.io/aws-load-balancer-type" = "external"
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type" = "ip"
"service.beta.kubernetes.io/aws-load-balancer-scheme" = "internet-facing"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
The error I get when I describe the NLB service:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Terraform file for application load balancer:
resource "kubernetes_service_v1" "alb-nginx-service" {
metadata {
name = "alb-nginx-service"
annotations = {
"kubernetes.io/ingress.class" = "alb"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
The error I get when I describe the ALB service:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 52s (x6 over 3m29s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 52s (x6 over 3m28s) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Steps I took to investigate, which unfortunately didn't help:
tried to create these services on the cluster one by one
checked the service logs but didn't get a clue
It seems the load balancer cannot find a public subnet to be placed in, as the event says 'could not find any suitable subnets for creating the ELB', but I am not aware of where to specify/assign a public subnet for my load balancer.
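One likely cause, going by the "could not find any suitable subnets" event (an assumption, not confirmed in the post): the controller discovers subnets by tag, so the public subnets need the kubernetes.io/role/elb role tag plus the cluster ownership tag. A minimal Terraform sketch, where the VPC reference, the CIDR layout and the cluster name my-eks-cluster are placeholders:

data "aws_availability_zones" "available" {
  state = "available"
}

# Public subnets tagged so the load balancer controller can discover them.
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    "kubernetes.io/role/elb"               = "1"
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
  }
}

Note also that the service.beta.kubernetes.io/aws-load-balancer-type = "external" annotation on the NLB service is handled by the AWS Load Balancer Controller, so that controller must be installed in the cluster for the NLB service to be reconciled.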

terraform aws_acm_certificate_validation.cert_api: Still creating... [4m21s elapsed] until timeout

The ACM certificate validation never completes; it times out after about 45 minutes. Looking at the AWS hosted zone for the domain, it does have a CNAME record. The run never reaches the API Gateway domain section.
main.tf
resource "aws_acm_certificate" "cert_api" {
domain_name = var.api_domain
validation_method = "DNS"
tags = {
Name = var.api_domain
}
}
resource "aws_acm_certificate_validation" "cert_api" {
certificate_arn = aws_acm_certificate.cert_api.arn
validation_record_fqdns = aws_route53_record.cert_api_validations.*.fqdn
}
resource "aws_route53_zone" "api" {
name = var.api_domain
}
resource "aws_route53_record" "cert_api_validations" {
allow_overwrite = true
count = length(aws_acm_certificate.cert_api.domain_validation_options)
zone_id = aws_route53_zone.api.zone_id
name = element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_name, count.index)
type = element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_type, count.index)
records = [element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_value, count.index)]
ttl = 60
}
resource "aws_route53_record" "api-a" {
name = aws_apigatewayv2_domain_name.api.domain_name
type = "A"
zone_id = aws_route53_zone.api.zone_id
alias {
name = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].target_domain_name
zone_id = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_apigatewayv2_domain_name" "api" {
domain_name = var.api_domain
domain_name_configuration {
certificate_arn = aws_acm_certificate.cert_api.arn
endpoint_type = "REGIONAL"
security_policy = "TLS_1_2"
}
}
If the hosted zone is destroyed and re-provisioned, new name server records are associated with the new hosted zone. However, the domain name might still have the previous name server records associated with it.
If AWS Route 53 is used as the domain name registrar, head to Route 53 > Registered domains > ${your-domain-name} > Add or edit name servers and add the newly associated name server records from the hosted zone to the registered domain.
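A small addition that can help with that step (not part of the original answer; the output name is illustrative) is to expose the hosted zone's name servers so they can be copied to the registered domain after terraform apply:

output "api_zone_name_servers" {
  # The authoritative name servers assigned to the new hosted zone.
  value = aws_route53_zone.api.name_servers
}

Separately, if the validation records are not being created at all, note that on AWS provider v3 and later aws_acm_certificate.domain_validation_options is a set of objects, so the count/element pattern in the question's main.tf has to be replaced with for_each.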

Using Terraform to create an AWS NFS file share

Any idea how I can use Terraform to create an NFS file share?
I need to create the S3 bucket, then create an NFS file share on an existing storage gateway, using the bucket name created in step 1.
Any idea how to do this in Terraform?
You will need 3 Terraform resources:
aws_s3_bucket
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-nfs-bucket"
acl = "private"
tags = {
Author = "me"
Environment = "dev"
}
}
aws_storagegateway_gateway
resource "aws_storagegateway_gateway" "my-storagegateway" {
gateway_ip_address = "1.2.3.4"
gateway_name = "storage-gateway"
gateway_timezone = "GMT"
gateway_type = "FILE_S3"
}
aws_storagegateway_nfs_file_share
resource "aws_storagegateway_nfs_file_share" "my_bucket" {
client_list = ["0.0.0.0/0"]
gateway_arn = aws_storagegateway_gateway.my-storagegateway.arn
location_arn = aws_s3_bucket.my_bucket.arn
role_arn = aws_iam_role.my-role.arn
}
For the role_arn value you will also need the ARN of the AWS Identity and Access Management (IAM) role that the file gateway assumes when it accesses the underlying storage (a sketch follows after the links below).
aws_iam_role
Managing your file gateway
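As an illustration only (the role name and trust policy are assumptions, not from the original answer), the role referenced by role_arn could be declared like this, with Storage Gateway allowed to assume it; a policy granting access to the bucket would be attached separately:

resource "aws_iam_role" "my-role" {
  name = "storage-gateway-nfs-share-role"

  # Allow AWS Storage Gateway to assume this role when accessing the bucket.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "storagegateway.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}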

GCP GKE cannot access Redis Memorystore

I have a Kubernetes cluster in GCP (GKE) that cannot connect to Memorystore (Redis).
All of the resources related to the project are in a dedicated network.
network module:
resource "google_compute_network" "my_project" {
name = "my_project"
auto_create_subnetworks = true
}
output "my_project_network_self_link" {
value = google_compute_network.my_project_network.self_link
}
I use the network in the GKE cluster (network = "${var.network_link}"):
resource "google_container_cluster" "my_project" {
name = "my_project-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b"]
network = "${var.network_link}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
Node pools omitted.
and I set the network as authorized_network in the memorystore configuration:
resource "google_redis_instance" "cache" {
name = "my_project-redis"
tier = "STANDARD_HA"
memory_size_gb = 1
authorized_network = "${var.network_link}"
# location_id = "us-central1-a"
redis_version = "REDIS_4_0"
display_name = "my_project Redis cache"
}
variable "network_link" {
description = "The link of the network instance is in"
type = string
}
I guess that the problem is related to the network, because this was working fine previously with the default network.
Currently the GKE nodes are in us-central1-a and us-central1-b (specified in the TF script) and the Memorystore instance is in us-central1-c. So the cluster and Redis are in the same VPC but in different sub-networks. Could this be the problem?
I had to add the following section to the cluster module in Terraform:
ip_allocation_policy {
  cluster_ipv4_cidr_block  = ""
  services_ipv4_cidr_block = ""
}
This seems to enable the VPC-native (alias IP) property of the cluster.
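For context, a sketch of how that block fits into the cluster resource shown above (not part of the original answer; the empty CIDR strings let GKE pick the secondary ranges automatically):

resource "google_container_cluster" "my_project" {
  name     = "my_project-cluster"
  location = "us-central1"
  network  = "${var.network_link}"

  # VPC-native (alias IP) clusters can reach the Memorystore instance on its
  # private IP over the authorized network.
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }

  remove_default_node_pool = true
  initial_node_count       = 1
}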

Advertised P2P messaging addresses change when compatibilityZoneURL is set

I'm trying to set up an environment on separate machines:
Server1: Node
Server2: Bridge
Server3: Float
When I run the node registration, or:
java -jar corda.jar --just-generate-node-info
the address in nodeInfo-XXX is generated correctly, pointing to the Server3 IP (float), which I've set as p2pAddress in node.conf.
But when I point the compatibilityZoneURL parameter at a configured Cordite Network Map service and start the node, nodeInfo-XXX and the "Advertised P2P messaging addresses" change to the Server1 IP, even though this IP doesn't appear anywhere in node.conf.
My node.conf:
myLegalName="O=Node Test,L=Sao Paulo,C=BR"
p2pAddress="float-server-IP-or-alias:10005"
rpcSettings {
useSsl = false
standAloneBroker = false
address="0.0.0.0:10031"
adminAddress="0.0.0.0:10061"
}
security {
authService {
dataSource {
type=INMEMORY
users=[
{
password=test
permissions=[
ALL
]
user=user1
}
]
}
}
}
useTestClock = false
enterpriseConfiguration = {
externalBridge = false
mutualExclusionConfiguration = {
on = true
updateInterval = 20000
waitInterval = 40000
}
}
devMode=false
compatibilityZoneURL : "http://10.102.32.106:8080/"
keyStorePassword = "cordacadevpass"
trustStorePassword = "trustpass"
Edit: I'm using Corda Enterprise v3.1
Are you able to try adding the following line to your node.conf:
detectPublicIp = false
From the docs:
This flag toggles the auto IP detection behaviour, it is enabled by default. On startup the node will attempt to discover its externally visible IP address first by looking for any public addresses on its network interfaces, and then by sending an IP discovery request to the network map service. Set to false to disable.
Let us know if this works.
