I'm looking for a generic way to expose multiple GKE TCP services to the outside world. I want SSL that's terminated at cluster edge. I would also prefer client certificate based auth, if possible.
My current use case is to access PostgreSQL services deployed in GKE from private data centers (and only from there). But basically I'm interested in a solution that works for any TCP based service without builtin SSL and auth.
One option would be to deploy an nginx as a reverse proxy for the TCP service, expose the nginx with a service of type LoadBalancer (L4, network load balancer), and configure the nginx with SSL and client certificate validation.
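Concretely, that option might look something like the following nginx stream-module config; this is a minimal sketch, and all paths and the upstream host are illustrative placeholders:

```nginx
# Sketch: TLS termination plus client certificate validation in front of a
# TCP backend. Certificate paths and the upstream service name are assumptions.
stream {
    server {
        listen 5432 ssl;

        ssl_certificate         /etc/nginx/tls/server.crt;
        ssl_certificate_key     /etc/nginx/tls/server.key;

        # Require a client certificate signed by this CA
        ssl_client_certificate  /etc/nginx/tls/client-ca.crt;
        ssl_verify_client       on;

        # Forward the decrypted traffic to the in-cluster PostgreSQL service
        proxy_pass postgres.default.svc.cluster.local:5432;
    }
}
```

The nginx Deployment would then be exposed with a Service of type LoadBalancer, which on GKE provisions an L4 network load balancer.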
Is there a better, more GKE-native way to achieve this?
To the best of my knowledge, there is no GKE-native way to achieve exactly what you need.
If you were only dealing with HTTP-based traffic, you could simply use GKE Ingress for HTTP(S) Load Balancing. But taking into consideration:
But basically I'm interested in a solution that works for any TCP
based service without builtin SSL and auth.
this is not your use case.
So you can either stay with what you've already set up, as it seems to work well, or as an alternative you can use one of the following:
- nginx ingress, which unlike GKE Ingress is able to expose to the external world not only HTTP/HTTPS-based traffic, but can also proxy TCP connections coming in on arbitrary ports.
- A TLS termination proxy as a sidecar (something like this one or this one) behind an External TCP/UDP Network Load Balancer. As it is a pass-through LB rather than a proxy, it cannot provide SSL termination itself and will only pass the encrypted TCP traffic to the backend Pod, where it needs to be handled by the above-mentioned sidecar.
- Of the GCP-native load balancing solutions presented in this table, only SSL Proxy may seem useful at first glance, as it can handle TCP traffic with SSL offload. However, it supports only a limited set of well-known TCP ports, and as far as I understand you need to be able to expose arbitrary TCP ports, so this won't help you much:
SSL Proxy Load Balancing supports the following ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 3389, 5222, 5432, 5671, 5672, 5900, 5901, 6379, 8085, 8099, 9092, 9200, and 9300. When you use Google-managed SSL certificates with SSL Proxy Load Balancing, the frontend port for traffic must be 443 to enable the Google-managed SSL certificates to be provisioned and renewed.
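For the nginx ingress option, exposing raw TCP is configured through a ConfigMap rather than an Ingress resource. A hedged sketch, where the namespace, service name, and ports are assumptions for illustration:

```yaml
# ingress-nginx reads this ConfigMap (referenced by its --tcp-services-configmap
# flag) and exposes each key as a TCP port on the controller.
# "default/postgres:5432" is an assumed namespace/service:port mapping.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "default/postgres:5432"
```

The corresponding port also needs to be added to the controller's Service of type LoadBalancer so traffic reaches it. Note this proxies raw TCP; TLS for non-HTTP ports would still be handled by the backend or a sidecar.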
What port do communications to AWS DynamoDB use? Is it HTTPS on port 443?
The only references I can find relate to the stand-alone DynamoDB Local that AWS provides (not the cloud version), which looks like it uses HTTP over port 8000.
All AWS APIs use the standard HTTPS port 443.
I set up an Elastic Beanstalk environment with a load balancer forwarding port 80 to port 5000 on the EC2 instance. My EC2 instance listens on port 5000, not port 80, and has the private IP 172.31.14.151. From another EC2 instance in the same subnet as the one running the Spring Boot web server, I got HTTP responses for both of the following requests:
curl 172.31.14.151:5000
curl 172.31.14.151:80
I do not understand why I got an HTTP response from 172.31.14.151:80. The EC2 instance I am running the curl command on is in the same subnet as the one running the webserver, so the request should not go through any router, nor through the load balancer. But the webserver is running on port 5000, not port 80.
Is there an Nginx instance running on the EC2 instance with the webserver?
If I configure the webserver to listen on port 80 and let the Elastic load balancer forward port 80 to port 80 on the EC2 instance, I get an Nginx 502 Bad Gateway response from the curl request
curl 172.31.14.151:80
I don't know which Elastic Beanstalk solution stack you are using, but most AWS solution stacks come coupled with a proxy server by default. For example, if you're running Java SE the proxy server is NGINX, but if you're running Java with Tomcat the proxy server is Apache.
By default these proxies accept HTTP traffic on the default HTTP port (80), manage the connections, and then proxy the requests to the backing application server (in your case, on port 5000). This helps manage the connection to the backing application, as well as serve static content or, if you configure them correctly, customized error messages based on the HTTP status code. I'd suggest that, if you can, you send the load balancer traffic to port 80, because Apache or NGINX can usually handle connection load better than most custom applications.
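The wiring described above corresponds roughly to an nginx server block like the following; this is a simplified sketch, not the literal config Elastic Beanstalk generates:

```nginx
# Sketch of the platform's on-instance reverse proxy:
# nginx listens on 80 and forwards requests to the app on 5000.
server {
    listen 80;

    location / {
        proxy_pass          http://127.0.0.1:5000;
        proxy_set_header    Host $host;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

This would explain both observations: port 80 answers because nginx proxies to 5000, and moving the app itself to port 80 likely collides with nginx's own bind on 80, leaving nothing on the configured upstream port and producing the 502.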
Have you checked the inbound rules on the security group that you've used?
Is there an Nginx instance running on the EC2 instance with the webserver? - Yes, there is. When you create a new environment, you can choose a pre-configured platform, e.g. the Node.js platform.
If you have heterogeneous applications, it's better to use containers. You can deploy your containerized applications on Elastic Beanstalk or use Elastic Container Service instead.
I have an ELB (in EC2-Classic) running, and one of my clients wants to hardcode an IP into their firewall rule to access our site.
I know that an ELB doesn't provide a static IP, but is there a way to set up an instance (just for them) that they could hit, to be used as a gateway to our API?
(I was thinking of using HAProxy on OpsWorks, but it points directly to my instances, and I need something that points to my ELB because SSL termination happens at that level.)
Any recommendation would be very helpful.
I assume you are running one or more instances behind your ELB.
You should be able to assign an Elastic IP to one of those instances. Note that in EC2-Classic, an EIP is disassociated when the instance stops, so it will need to be reattached when you start it again.
I have the following scenario:
- Elastic Beanstalk with N instances
- An ELB load balancing across the Elastic Beanstalk instances
- An external datacenter with IP filtering
Since I can't filter by name (FQDN), and I can't filter on a single IP either, is there a way to make all the requests that come from the AWS machines share a single IP? Or maybe to use a third machine as a proxy for the AWS machines' calls and attach an EIP to it?
Not really. Or at least, if there's a way to do it, I'd love to hear about it. One of the biggest problems with Beanstalk is its requirement to exist outside of VPCs, and thus in arbitrary Amazon IP space. About the only workaround I've found, after talking to AWS engineers, is to forward traffic from the instances to something like a bastion server, and allow the bastion server to communicate with your data center firewall. Maybe there's something I'm missing, but I know of no other way to get this working without some server in between the Beanstalk instances and the data center; not if the IP of the server matters.
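The bastion approach might be sketched with HAProxy relaying TCP from the Beanstalk instances toward the data center; with an Elastic IP attached to the bastion, the data center firewall only ever sees that one source address. All names, ports, and addresses below are illustrative placeholders:

```haproxy
# Bastion relay sketch: Beanstalk instances send their data center traffic
# here, and the data center sees the bastion's Elastic IP as the source.
# The bind port and backend address are assumptions.
defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend from_beanstalk
    bind *:8443
    default_backend datacenter

backend datacenter
    server dc1 203.0.113.10:443 check
```

The Beanstalk application would then be configured to call the bastion's address instead of the data center endpoint directly.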