terraform aws_acm_certificate_validation.cert_api: Still creating... [4m21s elapsed] until timeout - terraform-provider-aws

The ACM certificate validation never completes; it times out after about 45 minutes. Looking at the AWS hosted zone for the domain, it does contain the CNAME validation record. The run never reaches the API Gateway domain creation step.
main.tf
resource "aws_acm_certificate" "cert_api" {
domain_name = var.api_domain
validation_method = "DNS"
tags = {
Name = var.api_domain
}
}
resource "aws_acm_certificate_validation" "cert_api" {
certificate_arn = aws_acm_certificate.cert_api.arn
validation_record_fqdns = aws_route53_record.cert_api_validations.*.fqdn
}
resource "aws_route53_zone" "api" {
name = var.api_domain
}
resource "aws_route53_record" "cert_api_validations" {
allow_overwrite = true
count = length(aws_acm_certificate.cert_api.domain_validation_options)
zone_id = aws_route53_zone.api.zone_id
name = element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_name, count.index)
type = element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_type, count.index)
records = [element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_value, count.index)]
ttl = 60
}
resource "aws_route53_record" "api-a" {
name = aws_apigatewayv2_domain_name.api.domain_name
type = "A"
zone_id = aws_route53_zone.api.zone_id
alias {
name = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].target_domain_name
zone_id = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_apigatewayv2_domain_name" "api" {
domain_name = var.api_domain
domain_name_configuration {
certificate_arn = aws_acm_certificate.cert_api.arn
endpoint_type = "REGIONAL"
security_policy = "TLS_1_2"
}
}

If the hosted zone is destroyed and re-provisioned, new name server (NS) records are associated with the new hosted zone. However, the registered domain might still point at the previous name servers. If AWS Route 53 is used as the domain registrar, head to Route 53 > Registered domains > ${your-domain-name} > Add or edit name servers and replace the registrar's name servers with the ones from the new hosted zone. Until the registrar's name servers match the hosted zone's, the DNS validation CNAME is not publicly resolvable and the ACM validation never completes.
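As a quick check (a minimal sketch, not part of the original answer), you can have Terraform print the zone's delegated name servers and compare them with what the registrar currently lists:

# Illustrative output only: the zone's name_servers must match the NS entries
# configured at the domain registrar for validation to succeed.
output "api_zone_name_servers" {
  value = aws_route53_zone.api.name_servers
}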

Related

Terraform AWS - Creation of multiple AWS S2S VPN

The requirement is to establish communication from multiple offices to AWS through Terraform. Below is the code so far, where the customer gateways and virtual private gateways are created based on the variable cgw (let's assume 5 customer gateways):
resource "aws_customer_gateway" "cgw-main" {
for_each = toset(var.cgw)
ip_address = each.value
bgp_asn = 65000
type = "ipsec.1"
}
resource "aws_vpn_gateway" "vpn-gw-main" {
count = length(var.cgw)
vpc_id = aws_vpc.myvpc.id
}
resource "aws_vpn_connection" "vpn-main" {
customer_gateway_id= ??
vpn_gateway_id = ??
type="ipsec.1"
static_routes_only = true
}
How do I dynamically pick up each created customer gateway and map it to a VPN gateway in aws_vpn_connection for all 5 iterations?
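One possible approach (a sketch, not from the original thread): key every resource off the same collection with for_each, so each VPN connection can look up its matching customer gateway by key. A single VPN gateway per VPC is usually enough, so the count on aws_vpn_gateway may not be needed:

# Sketch only: assumes var.cgw is a list/set of office public IPs.
resource "aws_customer_gateway" "cgw-main" {
  for_each   = toset(var.cgw)
  ip_address = each.value
  bgp_asn    = 65000
  type       = "ipsec.1"
}

# One virtual private gateway attached to the VPC.
resource "aws_vpn_gateway" "vpn-gw-main" {
  vpc_id = aws_vpc.myvpc.id
}

# One VPN connection per customer gateway, matched by the same for_each key.
resource "aws_vpn_connection" "vpn-main" {
  for_each            = aws_customer_gateway.cgw-main
  customer_gateway_id = each.value.id
  vpn_gateway_id      = aws_vpn_gateway.vpn-gw-main.id
  type                = "ipsec.1"
  static_routes_only  = true
}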

using terraform to create aws NFS file share

Any idea how I can use Terraform to create an NFS file share?
I need to create the S3 bucket,
then
create an NFS file share on an existing storage gateway, using the name of the bucket created in step 1.
Any idea how to do this in Terraform?
You will need three Terraform resources:
aws_s3_bucket
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-nfs-bucket"
acl = "private"
tags = {
Author = "me"
Environment = "dev"
}
}
aws_storagegateway_gateway
resource "aws_storagegateway_gateway" "my-storagegateway" {
gateway_ip_address = "1.2.3.4"
gateway_name = "storage-gateway"
gateway_timezone = "GMT"
gateway_type = "FILE_S3"
}
aws_storagegateway_nfs_file_share
resource "aws_storagegateway_nfs_file_share" "my_bucket" {
client_list = ["0.0.0.0/0"]
gateway_arn = aws_storagegateway_gateway.my-storagegateway.arn
location_arn = aws_s3_bucket.my_bucket.arn
role_arn = aws_iam_role.my-role.arn
}
For the role_arn argument you will also need the ARN of the AWS Identity and Access Management (IAM) role that the file gateway assumes when it accesses the underlying storage (see the sketch after the links below).
aws_iam_role
Managing your file gateway
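A minimal sketch of that role, assuming the policy attachment that grants S3 access is handled separately (the role name here is made up for illustration):

resource "aws_iam_role" "my-role" {
  name = "storage-gateway-nfs-role" # hypothetical name

  # Trust policy allowing Storage Gateway to assume the role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "storagegateway.amazonaws.com" }
    }]
  })
}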

RabbitMQ SSL Configuration: DotNet Client

I am trying to connect a .NET client to RabbitMQ. I enabled the peer verification option in the RabbitMQ config file.
_factory = new ConnectionFactory
{
    HostName = Endpoint,
    UserName = Username,
    Password = Password,
    Port = 5671,
    VirtualHost = "/",
    AutomaticRecoveryEnabled = true
};

sslOption = new SslOption
{
    Version = SslProtocols.Tls12,
    Enabled = true,
    AcceptablePolicyErrors = System.Net.Security.SslPolicyErrors.RemoteCertificateChainErrors
                             | System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch,
    ServerName = "", // ?
    Certs = X509CertCollection
};
Below are my client certificate details, which I am passing through "X509CertCollection".
CertSubject: CN=myhostname, O=MyOrganizationName, C=US // myhostname is the name of my client host.
So, if I pass the "myhostname" value into sslOption.ServerName, it works. If I pass some garbage value, it still works.
As per the RabbitMQ documentation, these two values should match, i.e. the certificate CN and ServerName. What should the value of sslOption.ServerName be here, and why?
My bad, I found the reason. Posting it as it might help someone.
Reason: I had included System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch in AcceptablePolicyErrors, which tells the client to ignore any mismatch between the certificate name and ServerName, so any value appears to work.

GCP GKE cannot access Redis Memorystore

I have a Kubernetes cluster in GCP (GKE) that cannot connect to Memorystore (Redis).
All of the resources related to the project are in a dedicated network.
network module:
resource "google_compute_network" "my_project" {
name = "my_project"
auto_create_subnetworks = true
}
output "my_project_network_self_link" {
value = google_compute_network.my_project_network.self_link
}
I use the network in the GKE cluster (network = "${var.network_link}"):
resource "google_container_cluster" "my_project" {
name = "my_project-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b"]
network = "${var.network_link}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
Node pools omitted.
and I set the network as authorized_network in the memorystore configuration:
resource "google_redis_instance" "cache" {
name = "my_project-redis"
tier = "STANDARD_HA"
memory_size_gb = 1
authorized_network = "${var.network_link}"
# location_id = "us-central1-a"
redis_version = "REDIS_4_0"
display_name = "my_project Redis cache"
}
variable "network_link" {
description = "The link of the network instance is in"
type = string
}
I guess the problem is related to the network, because this was working fine previously with the default network.
Currently the GKE nodes are in us-central1-a and us-central1-b (specified in the TF script) and the Memorystore instance is in us-central1-c. So the cluster and Redis are in the same VPC but in different sub-networks. Could this be the problem?
I had to add the following section to the cluster module in terraform:
ip_allocation_policy {
  cluster_ipv4_cidr_block  = ""
  services_ipv4_cidr_block = ""
}
This seems to enable the VPC-native (alias IP) property of the cluster.
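In context (a sketch of where the block sits, assuming the cluster resource shown above), the empty CIDR strings let GKE pick the secondary ranges automatically:

resource "google_container_cluster" "my_project" {
  # ... arguments as above ...

  # Enables VPC-native (alias IP) networking; empty strings let GKE
  # choose the pod and service secondary ranges automatically.
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }
}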

update existing terraform compute instance when adding new "components"

I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance. It could be anything, but here I have tried with a file provisioner and remote execution.
However, when I add these arguments to my compute instance, I notice that the instance is not updated. For example:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
After adding the file provisioner to the resource, when I run terraform plan or terraform apply, nothing changes on my instance. I get an info message telling me:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply my changes to my compute instance?
Following the Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you should destroy (terraform destroy) and create (terraform apply) the resource again.
There's no way for Terraform to check the state of a local or a remote execution; it's not like there's an API call that can tell it what happened in your custom code - bar-setup.sh.
That would be like magic, or actual Magic.
Terraform is for managing the infrastructure and the config of the instance, not really the content on the instance. Immutable content and recreating is the true path here: making a completely new instance. However, if it's your Hammer, there are ways.
If you taint the resource that you want to update, then the next time Terraform runs, the resource will be recreated and the provisioners re-executed. But heed what I said about Hammers.
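For example (a sketch using the Terraform CLI of that era, with the resource address taken from the config above):

# Mark the instance as tainted so the next apply destroys and recreates it,
# which re-runs the provisioners.
terraform taint openstack_compute_instance_v2.compute_instance
terraform apply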
Alternatively, you could leverage your CM tool of choice to manage the content of your instance - Chef/Ansible - or create the (i.e. immutable) images used by OpenStack via a tool like Packer and update those. I'd do the latter.
