AWS EKS load balancer creation fails for EKS cluster - terraform-provider-aws

I have created an EKS cluster with two public subnets, and one worker node (instance type = t3.small) was created successfully in one of the public subnets.
I am able to create an Nginx deployment and a NodePort service, query the deployment and other k8s objects, and access the web application via the node port (<Service-public-ip:nodeport>) successfully.
I am trying to create load balancers (both ALB and NLB), but both are failing.
The whole setup is defined in Terraform files. I need help identifying why load balancer creation (both types) is failing, and how I can fix this in my Terraform files.
Terraform file for the network load balancer:
resource "kubernetes_service_v1" "nlb-nginx-service" {
metadata {
name = "nlb-nginx-service"
annotations = {
"service.beta.kubernetes.io/aws-load-balancer-type" = "external"
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type" = "ip"
"service.beta.kubernetes.io/aws-load-balancer-scheme" = "internet-facing"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
Error I get when I describe the NLB service:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Terraform file for the application load balancer:
resource "kubernetes_service_v1" "alb-nginx-service" {
metadata {
name = "alb-nginx-service"
annotations = {
"kubernetes.io/ingress.class" = "alb"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
Error I get when I describe the ALB service:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 52s (x6 over 3m29s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 52s (x6 over 3m28s) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
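(Side note on the ALB variant: as far as I know, the kubernetes.io/ingress.class = "alb" annotation is normally consumed from an Ingress object rather than from a Service of type LoadBalancer, so the alb-nginx-service above is still handled by the in-tree service-controller, as the events show. A minimal sketch of the Ingress-based approach, assuming the AWS Load Balancer Controller is installed; the ingress name is illustrative and it reuses the service name from the question:
resource "kubernetes_ingress_v1" "alb_nginx_ingress" {
  metadata {
    name = "alb-nginx-ingress"   # hypothetical name
    annotations = {
      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
      "alb.ingress.kubernetes.io/target-type" = "ip"
    }
  }
  spec {
    ingress_class_name = "alb"
    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              # Backend service from the question
              name = "alb-nginx-service"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}
)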
Steps I took to identify the problem, which unfortunately didn't work:
tried to create these services on the cluster one by one
checked the service logs but didn't get a clue
It seems the load balancer is not able to find a public subnet in AWS to place the LB in, as it says 'could not find any suitable subnets for creating the ELB', but I am not aware of where to mention/assign a public subnet for my LB.
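(For reference, the "could not find any suitable subnets" error usually points at the cluster's subnets missing the discovery tags that the service controller and the AWS Load Balancer Controller look for. A minimal sketch, assuming the public subnets are managed in the same Terraform code; the resource name, VPC reference, CIDR, and var.cluster_name below are placeholders:
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.eks_vpc.id   # assumed VPC resource name
  cidr_block              = "10.0.1.0/24"        # placeholder CIDR
  map_public_ip_on_launch = true

  tags = {
    # Marks the subnet as usable for internet-facing load balancers
    "kubernetes.io/role/elb" = "1"
    # Associates the subnet with the cluster so controllers can discover it
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}
Note also that the service.beta.kubernetes.io/aws-load-balancer-type = "external" annotation on the NLB service is handled by the AWS Load Balancer Controller add-on, so that controller needs to be installed in the cluster for the NLB variant to reconcile.)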

Related

Terraform AWS - Creation of multiple AWS S2S VPN

The requirement is to establish communication from multiple offices to AWS through Terraform. Below is the code so far, where the customer gateway and virtual private gateway are created based on the variable map cgw (let's assume 5 customer gateways).
resource "aws_customer_gateway" "cgw-main" {
for_each = toset(var.cgw)
ip_address = each.value
bgp_asn = 65000
type = "ipsec.1"
}
resource "aws_vpn_gateway" "vpn-gw-main" {
count = length(var.cgw)
vpc_id = aws_vpc.myvpc.id
}
resource "aws_vpn_connection" "vpn-main" {
customer_gateway_id= ??
vpn_gateway_id = ??
type="ipsec.1"
static_routes_only = true
}
How do I dynamically pick up each created customer gateway and map it to the created VPN gateway in aws_vpn_connection for 5 iterations?
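(One possible sketch, not tested against the original variable definition: key the connections off the customer gateways with for_each and reference each instance by key. It assumes a single shared VPN gateway, since only one VGW can be attached to a VPC at a time:
resource "aws_customer_gateway" "cgw-main" {
  for_each   = toset(var.cgw)
  ip_address = each.value
  bgp_asn    = 65000
  type       = "ipsec.1"
}

# A VPC can only have one attached VPN gateway, so a single one is shared here
resource "aws_vpn_gateway" "vpn-gw-main" {
  vpc_id = aws_vpc.myvpc.id
}

# One VPN connection per customer gateway, iterating over the created gateways
resource "aws_vpn_connection" "vpn-main" {
  for_each            = aws_customer_gateway.cgw-main
  customer_gateway_id = each.value.id
  vpn_gateway_id      = aws_vpn_gateway.vpn-gw-main.id
  type                = "ipsec.1"
  static_routes_only  = true
}
)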

GCP GKE cannot access Redis Memorystore

I have a Kubernetes cluster in GCP (GKE) that cannot connect to Memorystore (Redis).
All of the resources related to the project are in a dedicated network.
network module:
resource "google_compute_network" "my_project" {
name = "my_project"
auto_create_subnetworks = true
}
output "my_project_network_self_link" {
value = google_compute_network.my_project_network.self_link
}
I use the network in the GKE cluster (network = "${var.network_link}"):
resource "google_container_cluster" "my_project" {
name = "my_project-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b"]
network = "${var.network_link}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
Node pools omitted.
And I set the network as the authorized_network in the Memorystore configuration:
resource "google_redis_instance" "cache" {
name = "my_project-redis"
tier = "STANDARD_HA"
memory_size_gb = 1
authorized_network = "${var.network_link}"
# location_id = "us-central1-a"
redis_version = "REDIS_4_0"
display_name = "my_project Redis cache"
}
variable "network_link" {
description = "The link of the network instance is in"
type = string
}
I guess that the problem is related to the network, because previously using the default network this was working fine.
Currently the GKE nodes are in us-central1-a and us-central1-b (specified in the TF script) and the Memorystore is in us-central1-c. So the cluster and the Redis are in the same VPC but in different sub-networks. Could this be the problem?
I had to add the following section to the cluster module in terraform:
ip_allocation_policy {
  cluster_ipv4_cidr_block  = ""
  services_ipv4_cidr_block = ""
}
This seems to enable the VPC-native (alias IP) property of the cluster.
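(For what it's worth, the same VPC-native behaviour can also be requested with named secondary ranges instead of empty CIDR strings; a sketch, assuming such ranges already exist on the subnetwork, with the range names as placeholders:
ip_allocation_policy {
  # Use pre-created secondary ranges on the subnetwork rather than
  # letting GKE choose CIDRs automatically
  cluster_secondary_range_name  = "gke-pods"      # placeholder range name
  services_secondary_range_name = "gke-services"  # placeholder range name
}
)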

Terraform Provisioner "local-exec" not working as expected | VPC Peering Connection Accept issue

I'm unable to get the auto-accept peering done through the workaround mentioned in the link (Why am I getting a permissions error when attempting to auto_accept vpc peering in Terraform?) via the provisioner option.
See my Terraform code below. Can someone help me out?
provider "aws" {
region = "us-east-1"
profile = "default"
}
provider "aws" {
region = "us-east-1"
profile = "peer"
alias = "peer"
}
data "aws_caller_identity" "peer" {
provider = "aws.peer"
}
resource "aws_vpc_peering_connection" "service-peer" {
vpc_id = "vpc-123a56789bc"
peer_vpc_id = "vpc-YYYYYY"
peer_owner_id = "012345678901"
peer_region = "us-east-1"
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
Output I'm getting:
Error: Error applying plan:
1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: 1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: Unable to modify peering options. The VPC Peering Connection "pcx-08ebd316c82acacd9" is not active. Please set `auto_accept` attribute to `true`, or activate VPC Peering Connection manually.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure
Whereas I'm able to run the AWS CLI command successfully via a Linux shell, outside the Terraform template. Let me know if I'm missing something in the Terraform script.
Try moving your "local-exec" out and adding a depends_on link to your VPC peering connection.
resource "null_resource" "peering-provision" {
depends_on = ["aws_vpc_peering_connection.service-peer"]
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
As Koe said, it may be better to use the auto_accept option.
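(For reference, the accepter{}/requester{} option blocks can only be applied once the connection is active, which is what the error message is pointing at. The provider also has a dedicated accepter resource that can perform the acceptance without shelling out to the CLI; a sketch reusing the existing peer provider alias, untested against the original setup:
# Accept the peering from the accepter side using the aliased "peer" provider
resource "aws_vpc_peering_connection_accepter" "service-peer-accept" {
  provider                  = "aws.peer"
  vpc_peering_connection_id = aws_vpc_peering_connection.service-peer.id
  auto_accept               = true
}
)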

How to create private subnet on OVH using Terraform?

How am I supposed to create a private network/subnet on OVH using Terraform?
There is a generic OpenStack resource (openstack_networking_subnet_v2) and an OVH-specific one (ovh_publiccloud_private_network_subnet) if you use the ovh provider.
I am asking because when I follow this guide, my private network interface does not get an IPv4 address assigned (it looks like the same problem was already described in this question: Private network creation with Terraform on OVH's Openstack). I can see an IP address in the Horizon control panel, but when I ssh to the instance using the Ext-Net IPv4 address and type ifconfig, I see there is no IPv4 address assigned to the private network interface. The interface is UP but has no IPv4 assigned. I just use the Terraform code from the guide, like this:
# Create a private sub network
resource "ovh_publiccloud_private_network_subnet" "private_subnet" {
  # Get the id of the resource ovh_publiccloud_private_network named
  # private_network
  network_id = "${ovh_publiccloud_private_network.private_network.id}"
  project_id = "${var.project_id}"   # With OS_TENANT_ID your tenant id's project
  region     = "WAW1"                # With OS_REGION_NAME the OS_REGION_NAME environment variable
  network    = "192.168.42.0/24"     # Global network
  start      = "192.168.42.2"        # First IP for the subnet
  end        = "192.168.42.200"      # Last IP for the subnet
  dhcp       = false                 # Deactivate the DHCP service
  provider   = "ovh.ovh"             # Provider's name
}
resource "openstack_compute_instance_v2" "front" {
  # Number of times the instance will be created
  count           = "${length(var.front_private_ip)}"
  provider        = "openstack.ovh"  # Provider's name
  name            = "front"          # Instance's name
  flavor_name     = "s1-2"           # Flavor's name
  image_name      = "CoreOS Stable"  # Image's name
  key_pair        = "${openstack_compute_keypair_v2.test_keypair.name}"
  security_groups = ["default"]      # Add into a security group

  network = [
    {
      name = "Ext-Net"               # Public interface name
    },
    {
      # Private interface name
      name = "${ovh_publiccloud_private_network.private_network.name}"
      # Give an IP address depending on count.index
      fixed_ip_v4 = "192.168.42.4"
    }
  ]
}
So, as I said, the above example does not work for me (because I have to manually assign the private IPv4 address on the interface, while I would like Terraform to do it for me). Then I discovered the terraform-ovh-publiccloud-network module on the OVH GitHub. I tried the simple example from this repo (copy-pasted from the README) and I can see that the second interface on the Bastion node gets an IPv4 address from the private range assigned successfully. From the module's code I can also see that the openstack_networking_subnet_v2 resource is used instead of the OVH-specific ovh_publiccloud_private_network_subnet. Why, and what is the difference between them? Which one am I supposed to use when I write my own Terraform definition from scratch?
My goal is just to create a private network/subnet and create a compute instance with two interfaces (connected to the public Ext-Net and the private subnet I just created). Please provide me a short working example for OVH if you have such experience, or let me know if I am missing something.
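(For comparison, a minimal sketch of the generic-OpenStack variant that the module uses internally; untested on OVH, the names and CIDR are placeholders, and DHCP is left enabled so the guest can pick up its IPv4 address automatically:
# Generic OpenStack resources (what the terraform-ovh-publiccloud-network
# module uses internally)
resource "openstack_networking_network_v2" "private_network" {
  name           = "private_network"   # placeholder name
  admin_state_up = true
  provider       = "openstack.ovh"     # Provider's name, as in the question
}

resource "openstack_networking_subnet_v2" "private_subnet" {
  name        = "private_subnet"       # placeholder name
  network_id  = "${openstack_networking_network_v2.private_network.id}"
  cidr        = "192.168.42.0/24"
  ip_version  = 4
  enable_dhcp = true                   # DHCP on, so the instance gets its IPv4
  provider    = "openstack.ovh"
}
)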
You can rent a /24 of public IPs from OVH for like $800. But you gotta do that first.

How to create a VM with multiple NICs with Terraform on Openstack

I am trying to use Terraform to deploy some machines on an OpenStack cloud.
I have no problem creating networks, subnets, keys, security groups and rules, floating IPs, and network ports (with security groups attached), but when I try to create compute instances with two NICs (the network ports created before), I get a syntax error with no hint to resolve it.
Could you help me please?
My code is:
resource "openstack_compute_instance_v2" "RNGPR-REBOND-01" {
name = "RNGPR-REBOND-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth0.id}"
access_network = true
}
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-REBOND-01.address}"
}
resource "openstack_compute_instance_v2" "RNGPR-LB-01" {
name = "RNGPR-LB-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth0.id}"
}
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-LB-01.address}"
}
And the syntax error is:
Error applying plan:
2 error(s) occurred:
* openstack_compute_instance_v2.RNGPR-REBOND-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
* openstack_compute_instance_v2.RNGPR-LB-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
From my experience, these error messages aren't very helpful.
I would first set TF_LOG=DEBUG and OS_DEBUG=1 wherever you are running terraform. This will print error messages that are actually beneficial.
One time I was trying to create a server with a key pair that my user didn't have access to in OpenStack. I was receiving the same error and didn't figure it out until debugging was enabled.
