Route Propagation on AWS via Terraform - terraform-provider-aws

My company uses AWS as our cloud provider and Terraform for our infrastructure as code. I need to make a change to the way our traffic routes in AWS. We currently have 1 NAT gateway, so if the AZ it lives in went down, we'd lose external connectivity from the instances that live on our private subnets.
I've created 2 extra NAT GWs, one in each AZ. I have done all of this through Terraform without issue, but I've run into a stumbling block when it comes to the routing.
I've created this type of setup, where each AZ has a route table for both its private and public subnets:
[Diagram: NAT GW architecture and routing]
We have a Direct Connect and use BGP to advertise our Datacentre networks to AWS. I can't seem to figure out how to enable route propagation on the private subnet route tables so that our on-prem networks get populated in these route tables.
resource "aws_route_table" "private-subnet-a-routes" {
vpc_id = "${aws_vpc.foo.id}"
propogating_vgws "${aws_vgw.foo.id}"
I have tried that but get the error below:
resource 'aws_route_table.private-subnet-a-routes' config: unknown resource 'aws_vgw.foo' referenced in variable aws_vgw.foo.id
Does anyone know how to set routes to be propagated on a route table from the main VGW in your VPC?
Thanks in advance
Chris

Not sure this answers your question, but it might give you some ideas:
resource "aws_route" "nat" {
count = "${var.num_availability_zones}"
route_table_id = "${element(aws_route_table.private.*.id, count.index)}"
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = "${element(aws_nat_gateway.nat.*.id, count.index)}"
depends_on = ["aws_internet_gateway.main", "aws_route_table.private"]
}
https://www.terraform.io/docs/providers/aws/r/route.html
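For the propagation question itself, the error comes from the resource type: the AWS provider has no aws_vgw resource; the VPN gateway resource is called aws_vpn_gateway. A minimal sketch, reusing the foo names from the question:

resource "aws_vpn_gateway" "foo" {
  vpc_id = "${aws_vpc.foo.id}"
}

resource "aws_route_table" "private-subnet-a-routes" {
  vpc_id = "${aws_vpc.foo.id}"

  # propagating_vgws (note the spelling) takes a list of VPN gateway IDs.
  propagating_vgws = ["${aws_vpn_gateway.foo.id}"]
}

The provider also offers a standalone aws_vpn_gateway_route_propagation resource, which attaches propagation to an existing route table without setting propagating_vgws inline.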

Related

Mirror requests from a Cloud Run service to another Cloud Run service

I'm currently working on a project where we are using Google Cloud. Within the cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
in addition to the existing service I deploy another instance of the service which contains the changes
I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. This is also deployed as a Cloud Run service. It accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured so that they can only be accessed internally and should therefore be reached via a VPC network.
I have tried all possible combinations for configuring these parts, but I always get 403 or 502 errors.
I have tried both the HTTP and HTTPS routes from NGINX to the service, and I have tried all the VPC connector settings. When I set the service's ingress to ALL, it works perfectly if I configure the service with HTTPS and port 443 in NGINX. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help.
If your Cloud Run services are internally accessible (ingress control set to internal only), you need to perform your requests from within your VPC.
Therefore, as you correctly did, you plugged a serverless VPC connector into your NGINX service.
That setup is correct. So why does it work when you route ALL the egress traffic to your VPC connector, and not when you route only the private traffic?
In fact, Cloud Run is a public resource with a public URL, even when you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not say "I'm plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic to your VPC, even the public traffic.
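A minimal Terraform sketch of the working combination, assuming the google provider; the service and connector names are made up, while the annotation keys are the standard Cloud Run ones:

resource "google_cloud_run_service" "nginx_mirror" {
  name     = "nginx-mirror" # hypothetical
  location = "europe-west1" # assumed region

  template {
    metadata {
      annotations = {
        # Connector assumed to be defined elsewhere as google_vpc_access_connector.mirror
        "run.googleapis.com/vpc-access-connector" = "${google_vpc_access_connector.mirror.name}"

        # "all-traffic" sends even requests to public URLs through the VPC;
        # the default "private-ranges-only" is why the internal-ingress
        # backends answered with 403/502.
        "run.googleapis.com/vpc-access-egress" = "all-traffic"
      }
    }

    spec {
      containers {
        image = "gcr.io/my-project/nginx-mirror" # hypothetical image
      }
    }
  }
}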

Is it possible to route a block of public IPs to an EC2 instance?

I am trying to associate a pool of public proxy IPs (1,024) with an EC2 instance on AWS. This EC2 instance is set up with Squid to proxy all of my HTTP requests. The goal is to be able to pass any of these public IPs in with my requests and have them routed to this instance, which communicates with the Internet Gateway. Per the AWS documentation, https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html, there is a maximum number of IPs that can be associated with an instance. Is there a way to get around this, or to somehow leverage a custom route table to make this work?

Kubernetes in GKE - Nginx Ingress deployment - Public IP assigned to Ingress resource

TL;DR: why does the Ingress resource have a public IP? Really, I'm seeking the why.
The description "Where 107.178.254.228 is the IP allocated by the Ingress controller to satisfy this Ingress." from the Kubernetes documentation doesn't really satisfy my need to understand it fully.
My understanding of the resource is that, in this instance, it acts as a pseudo-NGINX configuration, which makes sense to me based solely on the configuration elements. The sticking point is: why does the resource itself have a public IP? Following labs for this implementation, I also found that SSH is listening publicly on this resource, which I find strange. In testing from the controller, the network path to this IP egresses the network, so this isn't a case of another public IP being assigned on a VIF so that traffic can be routed on a local interface.
FWIW, my testing has been entirely in GKE but based on the documentation this seems to be simply "how it works" across platforms.
An Ingress used in GKE (or another cloud provider) has "the same effect" as a Service of type LoadBalancer, in that it creates a load balancer resource on your cloud provider. That explains why it has a public IP.
If you don't need a global load balancer (the gce class in the annotation), you can limit yourself to a simple LoadBalancer Service.
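For comparison, a minimal sketch of that simpler option using the Terraform kubernetes provider (all names are made up):

resource "kubernetes_service" "web" {
  metadata {
    name = "web" # hypothetical
  }

  spec {
    # LoadBalancer asks the cloud provider for a single regional load
    # balancer with a public IP; no global (gce-class) Ingress is involved.
    type = "LoadBalancer"

    selector = {
      app = "web" # assumes pods labelled app=web
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}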

Kubernetes update changes static+reserved external IPs for nodes in Google Cloud

I have three nodes in my Google Container Engine cluster.
Every time I perform a Kubernetes update through the web UI on the cluster, my external IPs change, and I have to manually assign the previous IPs to all three instances in the Google Cloud Console.
These are reserved static external IPs, set up using the following guide:
Reserving a static external IP
Has anyone run into the same problem? I'm starting to think this is a bug.
Perhaps you could set up the same static outbound external IP for all the instances to use, but I cannot find any information on how to do so. That would be a solution, as long as it persisted through updates; otherwise we've got the same issue.
It's only updates that cause this, not restarts.
I was having the same problem as you. We found some solutions:
KubeIP - but this needs a cluster on 1.10 or higher, and ours is 1.8.
NAT - the GCP documentation describes this method, but it was too complex for me.
Our Solution
We followed the documentation for assigning IP addresses on GCE, using the command line.
Using this method, we haven't had any problems so far. I don't know the risks of it yet; if anyone has an idea, it would be good to hear.
We basically just ran:
# Remove the node's current (ephemeral or stale) access config
gcloud compute instances delete-access-config [INSTANCE_NAME] --access-config-name [CONFIG_NAME]
# Re-add an access config bound to the reserved static address
gcloud compute instances add-access-config [INSTANCE_NAME] --access-config-name "external-nat-static" --address [IP_ADDRESS]
If anyone has any feedback on this solution, please share it.
#Ahmet Alp Balkan - Google
You should not rely on the IP addresses of individual nodes. Instances can come and go (especially when you use Cluster Autoscaler), and their IP addresses can change.
You should always expose your applications with a Service or Ingress; the IP addresses of the load balancers created by these resources do not change between upgrades. Furthermore, you can convert a load balancer's IP address into a static (reserved) one.
I see that you're assigning static IP addresses to your nodes, and I don't see any reason to do that. When you expose your services with Service/Ingress resources, you can associate a static external IP with them.
See this tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address
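A minimal Terraform sketch of that approach, assuming the google and kubernetes providers; the resource names are made up, and the annotation key is the static-IP one used for GCE-class Ingresses:

# Reserve a global static IP, then hand it to the Ingress by name,
# so cluster upgrades never change the public address.
resource "google_compute_global_address" "web" {
  name = "web-static-ip" # hypothetical
}

resource "kubernetes_ingress" "web" {
  metadata {
    name = "web" # hypothetical
    annotations = {
      "kubernetes.io/ingress.global-static-ip-name" = "${google_compute_global_address.web.name}"
    }
  }

  spec {
    backend {
      service_name = "web" # assumes an existing Service named "web"
      service_port = 80
    }
  }
}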

AWS Datapipeline ServiceAccessSecurityGroup

When I try to create an EmrCluster resource with these properties:
Emr Managed Master Security Group Id
Emr Managed Slave Security Group Id
I get this error: Terminated with errors. You must also specify a ServiceAccessSecurityGroup if you use custom security
Service Access Security Group: besides the firewall settings in the two security groups mentioned above, traffic between the AWS EMR service servers (you don't have any control over these; they are completely managed by AWS) and your slave EMR instances has to be allowed.
This security group contains two entries:
HTTPS (TCP 8443) from ElasticMapReduce-Slave-Private (sg-id)
HTTPS (TCP 8443) from the default security group of the VPC
Without this, EMR will not work with Data Pipeline, and Data Pipeline provides no way to list this in the pipeline definition. The AWS team is aware of this.
So, as a workaround, please use the custom template provided by AWS, then clone and edit it according to your needs.
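If you manage these groups yourself, a hedged Terraform sketch of the two entries above (all names are hypothetical; this mirrors the rules as described, not an official template):

# Service access security group carrying the two TCP 8443 entries.
resource "aws_security_group" "emr_service_access" {
  name   = "emr-service-access" # hypothetical
  vpc_id = "${aws_vpc.main.id}" # hypothetical VPC
}

# Entry 1: HTTPS (TCP 8443) from the ElasticMapReduce-Slave-Private group.
resource "aws_security_group_rule" "from_slave_private" {
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 8443
  to_port                  = 8443
  security_group_id        = "${aws_security_group.emr_service_access.id}"
  source_security_group_id = "${aws_security_group.emr_slave_private.id}" # assumed to exist
}

# Entry 2: HTTPS (TCP 8443) from the VPC's default security group.
resource "aws_security_group_rule" "from_default_sg" {
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 8443
  to_port                  = 8443
  security_group_id        = "${aws_security_group.emr_service_access.id}"
  source_security_group_id = "${aws_vpc.main.default_security_group_id}"
}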
Thanks, #blamblam, for pointing that out. The previous steps assume the servers have already been created in the private subnets and that you need communication to be allowed automatically.
For launching in a private subnet, we include one more setting, Subnet Id; this will launch your EMR cluster in the private subnets.
Hope that helps.
