Is it possible to route a block of public IPs to an EC2 instance? - networking

I am trying to associate a pool of 1,024 public proxy IPs with an EC2 instance on AWS. This EC2 instance is set up with Squid to proxy all of my HTTP requests. The goal is to be able to pass any of these public IPs into my requests and have them routed to this instance, which communicates with the Internet Gateway. Per the AWS documentation (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html), there is a maximum number of IP addresses that can be associated with an instance. Is there a way to get around this, or somehow leverage a custom route table to get this to work?
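For reference, this is roughly how public IPs normally end up on an instance, which is where that limit bites: a hedged boto3 sketch, assuming an existing ENI (the region, interface ID, and counts are placeholders), that adds secondary private IPs and maps an Elastic IP onto each one.

```python
import boto3

# Sketch: attach additional private IPs to an existing network interface and
# map an Elastic IP onto each of them. Region, interface ID, and counts are
# placeholders. The "IPv4 addresses per interface" limit from the linked
# documentation is what caps how many of these you can have per instance.
ec2 = boto3.client("ec2", region_name="us-east-1")

ENI_ID = "eni-0123456789abcdef0"  # hypothetical network interface ID

# Add secondary private IPs; this call fails once the instance type's
# per-interface address limit is reached.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=ENI_ID,
    SecondaryPrivateIpAddressCount=10,
)

# Each public (Elastic) IP is mapped 1:1 onto one of the private IPs,
# so the same limit caps the number of public IPs as well.
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])["NetworkInterfaces"][0]
for addr in eni["PrivateIpAddresses"]:
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=allocation["AllocationId"],
        NetworkInterfaceId=ENI_ID,
        PrivateIpAddress=addr["PrivateIpAddress"],
    )
```

Getting to 1,024 addresses this way would require an instance type and ENI combination whose per-interface and per-instance limits allow it, which is exactly the constraint the question is about.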

Related

Mirror requests from one Cloud Run service to another Cloud Run service

I'm currently working on a project where we are using Google Cloud. Within the cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
in addition to the existing service, I deploy another instance of the service which contains the changes
I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored
This allows me to create a detailed evaluation of the differences between the two services without having to serve the user potentially worse responses.
For the implementation I have set up NGINX to mirror the requests. This is also deployed as a Cloud Run service. It now accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured in such a way that they can only be accessed internally and should therefore be reached via a VPC network.
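For illustration, the mirroring pattern described above could be sketched roughly in Python as follows; this is a hypothetical stand-in for the actual NGINX configuration, and the service URLs and the store_response helper are placeholders.

```python
# Hypothetical stand-in for the NGINX mirror setup described above:
# every request is sent to both services, both responses are stored,
# and only the primary response is returned to the caller.
import threading

import requests
from flask import Flask, request, Response

PRIMARY_URL = "https://primary-service.internal"   # placeholder
SHADOW_URL = "https://candidate-service.internal"  # placeholder

app = Flask(__name__)


def store_response(base_url, path, resp):
    # Placeholder: persist the response somewhere for the later comparison.
    pass


def call_and_store(base_url, path, method, headers, body):
    resp = requests.request(method, base_url + path, headers=headers, data=body, timeout=30)
    store_response(base_url, path, resp)
    return resp


@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def mirror(path):
    body = request.get_data()
    headers = {k: v for k, v in request.headers if k.lower() != "host"}

    # Fire-and-forget the mirrored copy so it cannot slow down the caller.
    threading.Thread(
        target=call_and_store,
        args=(SHADOW_URL, "/" + path, request.method, headers, body),
        daemon=True,
    ).start()

    # Only the primary service's response is returned to the client.
    primary = call_and_store(PRIMARY_URL, "/" + path, request.method, headers, body)
    return Response(primary.content, status=primary.status_code)
```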
I have tried all possible combinations for the configuration of these parts, but I always get 403 or 502 errors.
I have tried pointing the NGINX service at the target service over both HTTP and HTTPS, and I have tried all the VPC connector settings. When I set the service's ingress to ALL, it works perfectly if I configure NGINX to use HTTPS and port 443. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? I would be very grateful for any help.
If your Cloud Run services are only internally accessible (ingress control set to internal), you need to perform your requests from within your VPC.
Therefore, as you correctly did, you attached a serverless VPC connector to your NGINX service.
The setup is correct. Now, why does it work when you route ALL the egress traffic to your VPC connector, and not when you route only the private traffic?
In fact, Cloud Run is a public resource with a public URL, even if you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not mean "the service is plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic to your VPC connector, even the public traffic.
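If it helps, here is a rough sketch of making that change programmatically with the google-cloud-run (run_v2) Python client; the project, region, service, and connector names are placeholders, and the same setting can of course be changed in the console or with gcloud.

```python
# Sketch: route ALL egress traffic of the NGINX Cloud Run service through
# the serverless VPC connector, so that calls to the (public) URLs of the
# internal-only services are accepted. All names below are placeholders.
from google.cloud import run_v2

PROJECT = "my-project"
REGION = "europe-west1"
SERVICE = "nginx-mirror"
CONNECTOR = f"projects/{PROJECT}/locations/{REGION}/connectors/my-connector"

client = run_v2.ServicesClient()
name = f"projects/{PROJECT}/locations/{REGION}/services/{SERVICE}"

service = client.get_service(name=name)
service.template.vpc_access.connector = CONNECTOR
service.template.vpc_access.egress = run_v2.VpcAccess.VpcEgress.ALL_TRAFFIC

# Deploys a new revision with the updated egress setting.
client.update_service(service=service).result()
```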

Understanding the products of NGINX Plus: Controller, Ingress Controller, and Instance Manager

As far as I know, Instance Manager and the Controller have the same function, which is managing NGINX Plus and its instances, but that does not make much sense to me.
So my questions are:
What are the differences between Instance Manager and Controller?
What is the Ingress Controller?
NGINX Instance Manager: NGINX Instance Manager empowers you to:
Automate configuration and monitoring using APIs.
For example, if you have multiple servers running NGINX, Instance Manager provides a dashboard where all events can be monitored, including spikes on specific events, and it keeps an inventory of your fleet, so you can spot, say, which of many VMs has not been updated. To achieve this, the nginx-agent needs to be installed alongside the NGINX server on each host.
Ensure your fleet of NGINX web servers and proxies have fixes for active CVEs.
Seamlessly integrate with third-party monitoring solutions such as Prometheus and Grafana for insights.
NGINX Controller: NGINX Controller is cloud-agnostic and includes a set of enterprise-grade services that give you a clear line of sight to apps in development, test, or production. With per-app analytics, you gain new insights into app performance and reliability so you can pinpoint performance issues before they impact production.
Example: to use an Ingress resource, you first need an Ingress Controller to be deployed and enabled.
NGINX Ingress: Each LoadBalancer Service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
For example, the Ingress controller built into Google Kubernetes Engine.
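To make the host/path routing concrete, here is a minimal sketch using the official Kubernetes Python client; the host, service names, ports, and namespace are placeholders, and an Ingress controller must already be running in the cluster.

```python
# Sketch: one Ingress fronting two Services; the Host header and path of an
# incoming request decide which backend Service receives it.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod


def path(route, service, port):
    return client.V1HTTPIngressPath(
        path=route,
        path_type="Prefix",
        backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name=service,
                port=client.V1ServiceBackendPort(number=port),
            )
        ),
    )


ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="demo-ingress"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="example.com",  # placeholder host
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        path("/api", "api-service", 8080),  # /api -> api-service
                        path("/", "web-service", 80),       # everything else -> web-service
                    ]
                ),
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```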

Kubernetes in GKE - Nginx Ingress deployment - Public IP assigned to Ingress resource

TL;DR: why does the Ingress resource have a public IP? Really, I'm seeking the why.
The description "Where 107.178.254.228 is the IP allocated by the Ingress controller to satisfy this Ingress." from the Kubernetes documentation doesn't really satisfy my need to understand it fully.
My understanding of the resource is that, in this instance, it acts as a pseudo-NGINX configuration, which does make sense to me based solely on the configuration elements. The sticking point, though, is why does the resource itself have a public IP? Following labs for this implementation, I also found that SSH is listening publicly on this resource, which I find strange. In testing from the controller, the network path to this IP does egress the network, so this isn't a case of another public IP being assigned on a VIF so that traffic can be routed on a local interface.
FWIW, my testing has been entirely in GKE but based on the documentation this seems to be simply "how it works" across platforms.
An Ingress used in GKE or another cloud provider has "the same effect" as a LoadBalancer Service, in the sense that it creates a load balancer resource at your cloud provider. That explains why it has a public IP.
If you don't need a global load balancer (the gce ingress class annotation), you could limit yourself to a simple LoadBalancer Service.
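For comparison, the simple LoadBalancer Service alternative would look roughly like this with the Kubernetes Python client (name, ports, and selector are placeholders); the cloud provider then provisions a dedicated load balancer and public IP for this one Service.

```python
# Sketch: a LoadBalancer Service as the simpler alternative to an Ingress.
# The cloud provider creates one load balancer (and public IP) just for this
# Service. All names and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```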

Can AWS Load Balancer be configured to filter out requests?

I have a Django app deployed on AWS Elastic Beanstalk. Django is configured to only serve requests that come for a specific hostname (ALLOWED_HOSTS). If the host information in the request doesn't match, it returns a 500 response code, which is fine.
But I have noticed that I get quite a lot of those, sent either via the IP address or via other domain names. So, I would like to configure the setup so that the load balancer rejects the request if it doesn't carry the proper hostname in the header information.
Is this possible to do? I have been trying to go over the settings in the AWS Console, but cannot find any information on how to do this. I could patch the EC2 instances to reject those requests so they don't reach Django at all, but I would like to stop them as early as possible.
Flow now:
Client -> Load Balancer -> EC2 instance -> Nginx -> Django
<-500 error- Django
What I want:
Client -> Load Balancer
<-reject- Load Balancer
Elastic Load Balancer cannot be configured to filter out requests.
If your allowed connections are based on IP address, then you can use VPC ACLs to allow only connections from certain IP addresses. All others will receive failed connections at the ELB level.
If your allowed connections are not based on IP address, you can take a look at CloudFront in combination with AWS Web Application Firewall (WAF).
WAF can be configured to filter at the web request level by IP address, URL, query string, headers, etc.
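As a rough sketch, a Host-header allow rule with the current WAFv2 API via boto3 could look like this; the hostname and all names are placeholders, and the resulting web ACL still has to be associated with your CloudFront distribution (or, with Scope="REGIONAL", your load balancer).

```python
import boto3

# Sketch: allow only requests whose Host header matches the expected domain
# and block everything else. All names and the domain are placeholders.
# Scope "CLOUDFRONT" requires us-east-1; use "REGIONAL" for an ALB.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="allow-only-expected-host",
    Scope="CLOUDFRONT",
    DefaultAction={"Block": {}},  # anything not explicitly allowed is rejected
    Rules=[
        {
            "Name": "allow-expected-host",
            "Priority": 0,
            "Statement": {
                "ByteMatchStatement": {
                    "SearchString": b"www.example.com",  # placeholder hostname
                    "FieldToMatch": {"SingleHeader": {"Name": "host"}},
                    "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                    "PositionalConstraint": "EXACTLY",
                }
            },
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "AllowExpectedHost",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowOnlyExpectedHost",
    },
)
```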

Kubernetes Service Deployment

I have recently started exploring Kubernetes and have worked through practical implementations of pods, services, and replication controllers on Google Cloud. I have some doubts about services and network access.
First, where is the service deployed that will act as the load balancer for a group of pods?
Second, does a request to an application running in a pod via a service load balancer go through the master, or directly to the worker (minion) nodes?
A service proxy runs on each node on the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP where it will be captured by the proxy and directed to an appropriate pod.
If you follow that description, you can see that service requests do not go through the master. They bounce through a proxy running on the nodes.
As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, pods, but this proxy isn't in the packet path for services that you create within the cluster.
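As a small illustration of those paths, reading an existing LoadBalancer service with the Kubernetes Python client shows the addresses involved (service name and namespace are placeholders).

```python
# Sketch: inspect an existing LoadBalancer Service to see the addresses the
# answer above talks about. Service name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()
svc = client.CoreV1Api().read_namespaced_service(name="my-service", namespace="default")

# Virtual IP, only reachable in-cluster; the per-node service proxy
# (kube-proxy) intercepts traffic to it and forwards it to a matching pod.
print("ClusterIP:   ", svc.spec.cluster_ip)

# Port opened on every node; the external load balancer sends traffic here,
# where it is again picked up by the node's proxy, never by the master.
print("NodePort:    ", svc.spec.ports[0].node_port)

# Address of the cloud load balancer itself (empty until provisioned).
ingress = svc.status.load_balancer.ingress or []
print("External IP: ", ingress[0].ip if ingress else "pending")
```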
