In-cluster nginx proxy (sidecar) in Kubernetes to re-write headers

Background
I have an in-cluster Kubernetes pod/application that works fine when accessed via an nginx-ingress ingress controller (the application requires a specific Host HTTP header). However, it cannot be accessed by other in-cluster pods/applications (e.g. for testing), because those pods reach it via a different host name (e.g. service-name.namespace.svc.cluster.local) rather than the FQDN of the K8S master (on the LAN) that the application expects in the Host header.
Plan So Far
I think the only way to (easily) resolve this is to set up an in-cluster forward-proxy nginx instance. Ideally, the service would either be a sidecar for the pod that needs its headers re-written, or a general in-cluster proxy that multiple services can access (a rough sketch of the sidecar variant is included at the end of this question).
Question
How would I set up an in-cluster nginx forward proxy service?
Should it be a sidecar, or a general service any pod can access?
Work So Far
The linked "similar" questions (below) don't appear to be helpful for my use case: they either don't show how to configure an in-cluster proxy, or they focus on proxying to IPs external to the cluster, whereas I need to proxy HTTP requests to in-cluster resources and re-write their headers.
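For reference, the kind of sidecar I have in mind would look roughly like the sketch below. The image names, ports, and the expected Host value (app.example.lan) are placeholders, and it assumes the application listens on port 8080 inside the pod:

apiVersion: v1
kind: ConfigMap
metadata:
  name: host-rewrite-proxy-conf
data:
  default.conf: |
    server {
      listen 8081;
      location / {
        # Re-write the Host header to the value the application expects
        proxy_set_header Host app.example.lan;
        # The app container shares the pod's network namespace
        proxy_pass http://127.0.0.1:8080;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder application image
      ports:
        - containerPort: 8080
    - name: host-rewrite-proxy        # the nginx sidecar
      image: nginx:stable
      ports:
        - containerPort: 8081
      volumeMounts:
        - name: proxy-conf
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: proxy-conf
      configMap:
        name: host-rewrite-proxy-conf

Other in-cluster pods would then be pointed at port 8081 (the sidecar) instead of 8080, e.g. via a Service that exposes 8081, and the application would see the Host header it expects. For the general-proxy variant, the same nginx config could presumably run as its own Deployment + Service, with proxy_pass pointing at the application's Service name instead of 127.0.0.1.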

Related

Reverse proxy with http inbound, https outbound, and parent proxy

I have an application that needs to use a proxy (call it proxy1) to access some HTTPS endpoints outside of its network. The application doesn't support proxy settings, so I'd like to give it a reverse proxy URL. I would also prefer not to provision TLS certs for proxy1, so the application -> proxy1 leg would use plain HTTP.
I don't have access to the application host or forward proxy mentioned below, so I cannot configure networking there.
The endpoints the application needs are https, so proxy1 must make its outbound connections via https.
Finally, this whole setup is within a corporate network that requires a forward proxy (call it proxy2) for outbound internet, so my proxy1 needs to chain to proxy2 / use it as a parent.
I tried squid and it worked well for http only, but I couldn't get it to accept http inbound while using https outbound. Squid easily supported the parent proxy2.
I tried haproxy, but had the same result as with squid.
I tried nginx and it did what I wanted with http -> proxy -> https, but doesn't support a parent proxy. I considered setting up socat as in this answer, or using proxy_pass and proxy_set_header as in this answer, but I can't shake the feeling there's a cleaner way to achieve the requirements.
This doesn't seem like an outlandish setup, does it? Or is there a preferred approach for it? Ideally one using squid or nginx.
You can achieve this without the complexity by using a port forwarder like socat. Just install it on a host to do the forwarding (or locally on the app server if you wish) and create a listener that forwards connections through the proxy server. Then, on your application host, use a local name-resolution override to map the FQDN to the forwarder.
So the final configuration is: the app server uses a URI that points to the forwarding server (by address if no name resolution exists), which has a socat listener that points to the corporate proxy. No reverse proxy required.
socat TCP4-LISTEN:443,reuseaddr,fork \
    PROXY:{proxy_address}:{endpoint_fqdn}:443,proxyport={proxy_port}
Just update with your parameters.

How can I get the real IP of a client when using an ELB in front of my Nginx Ingress?

I have the following setup:
Client -> AWS ELB -> Nginx Ingress -> Pod
In the ELB logs, I can see the real IP of these clients. ELB sends it as the X-Forwarded-For header value to my Ingress controller.
I need to set the whitelist-source-range in the Ingress for the application, but the issue is that it uses the remote IP address, not the one in the X-Forwarded-For header.
I can see some solutions here:
Transform ALB into an NLB, so it preserves the originating client's IP
Make the Nginx controller source range whitelist based on the X-Forwarded-For header
Make the Nginx controller transform the request originating IP into the one in the header
The first is not ideal for me: I don't want to maintain and pay for another load balancer. I don't know if the second is possible. I think the third is feasible, yet I have no idea how to do it. I know the PROXY protocol is related, but I don't see how it works, and I don't want to add something I don't understand to my production environment.
The load balancer is for several applications in my Kubernetes environment, so adding these IPs to the whitelist in the security group is not ideal.
How could I solve this issue?
My last resort would be to use Cloudflare. I want to keep as much of my configuration as possible inside Kubernetes, but I'll go for it if this turns out to be impossible.
Edit: this doesn't solve my problem; I have CIDRs to whitelist, not a specific IP.
So if you are on AWS, why are you using an Nginx Ingress controller?
You can use the AWS Load Balancer Controller, which will provision and/or manage AWS ALBs automatically.
For your particular use case, you could add a WAF web ACL for a particular target in the ALB. You can do that manually, or use the alb.ingress.kubernetes.io/wafv2-acl-arn annotation of the AWS Load Balancer Controller.
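As a rough sketch (the ACL ARN, host name and service name below are placeholders), the annotation goes on the Ingress that the AWS Load Balancer Controller manages:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Attach an existing WAFv2 web ACL (placeholder ARN) to the provisioned ALB
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/abcd1234
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

The CIDR whitelist itself would then live in an IP set referenced by the web ACL's rules, rather than in the Ingress.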
Alternatively, you could run a separate ALB in a separate subnet and set up IP whitelisting via that ALB's security group, and then make sure that separate ALB is only used for the ingresses that need it.
Again, when using the AWS Load Balancer Controller instead of the Nginx Ingress Controller, this can be done automatically, without extra maintenance.

How to make Kubernetes service load balance based on client IP instead of NGINX reverse proxy IP

I have configured NGINX as a reverse proxy with web sockets enabled for a backend web application with multiple replicas. The request from NGINX does a proxy_pass to a Kubernetes service which in turn load balances the request to the endpoints mapped to the service. I need to ensure that the request from a particular client is proxied to the same Kubernetes back end pod for the life cycle of that access, basically maintaining session persistence.
I tried setting sessionAffinity: ClientIP on the Kubernetes Service; however, this routes based on the client IP as seen by the Service, which is the NGINX proxy's pod IP. Is there a way to make the Kubernetes Service apply the affinity based on the actual client IP from which the request originated, and not the NGINX pod's internal IP?
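For reference, the Service I tried looks roughly like this (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: backend-app
spec:
  selector:
    app: backend-app
  ports:
    - port: 80
      targetPort: 8080
  # Stick each client to one endpoint, keyed on the source IP kube-proxy sees
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800

Since every request reaches the Service from the NGINX proxy pod, the source IP the affinity is keyed on is always the proxy's pod IP, so all sessions land on the same backend pod.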
This is not an option with Nginx, or rather it's not an option with anything in userspace like this, without a lot of very fancy network manipulation. You'll need to find another option, usually app-specific proxy rules in the outermost HTTP proxy layer.

Differences between HTTP Web Server and Ingress?

I am learning the world of k8s and there is a lot of talk about ingress and ingress controllers. Conceptually, it sounds identical to a web server, which I will define as a service that proxies HTTP requests to web application servers. It can serve up certificates and do basic load balancing...
Whereas the Ingress documentation says: "Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Ingress may provide load balancing, SSL termination and name-based virtual hosting."
https://kubernetes.io/docs/concepts/services-networking/ingress/
They sound the same! So what exactly is the difference here? I can't be the only one confused by this right?
In general, a web server is responsible for accepting and fulfilling requests from clients.
A web server's fundamental job is to accept and fulfill requests from clients for static content from a website (HTML pages, files, images, video, and so on). The client is almost always a browser or mobile application and the request takes the form of a Hypertext Transfer Protocol (HTTP) message, as does the web server's response.
Commonly used web servers include Apache and Nginx.
Kubernetes Ingress, on the other hand, is an API object. From the IBM blog post "What is Kubernetes Ingress and why is it useful?":
Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. With Ingress, you can easily set up rules for routing traffic without creating a bunch of Load Balancers or exposing each service on the node. This makes it the best option to use in production environments.
The Kubernetes Ingress docs also point out that an Ingress needs an Ingress controller:
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
There are many ingress controllers like Nginx, Ambassador, Apache, etc.
To sum up:
To use an Ingress you need some web server acting as the Ingress controller.
A Kubernetes Ingress is a Kubernetes object that lets you configure a web server (like Nginx) inside a Kubernetes cluster.
As you quoted from the documentation, it allows you to configure HTTP/HTTPS routing, load-balance traffic, terminate SSL/TLS, etc.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
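For illustration, a minimal Ingress resource expressing the routing rules described above might look like this (host names, secret and service names are examples):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx            # which Ingress controller should implement these rules
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls    # certificate used for SSL/TLS termination
  rules:
    - host: app.example.com          # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service    # in-cluster Service that receives the traffic
                port:
                  number: 80

The Ingress object itself is only configuration; the Ingress controller (e.g. Nginx) is the web server that actually accepts the traffic and implements these rules.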

Forward HTTPS client ip from Google Container Engine

I'm running an nginx service in a Docker container on Google Container Engine, which forwards specific domain names to other services such as the API, the frontend, etc. I have a simple cluster for that with the services configured, and the nginx service is exposed as a load balancer.
The REMOTE_ADDR environment variable always contains an internal address from the Kubernetes cluster. I also looked for HTTP_X_FORWARDED_FOR, but it's missing from the request headers. Is it possible to configure the service to preserve the external client IP in the requests?
With the current implementation of L3 balancing (as of Kubernetes 1.4) it isn't possible to get the source IP address for a connection to your service.
It sounds like your use case might be well served by using an Ingress object (or by manually creating an HTTP/S load balancer), which will put the source IP address into the X-Forwarded-For HTTP header for easy retrieval by your backends.
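For example, on Google Container Engine / GKE, a minimal Ingress of the default gce class (names below are placeholders) provisions an HTTP(S) load balancer that adds the X-Forwarded-For header for you; note that the backing Service typically needs to be of type NodePort for this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: gce   # GKE's built-in HTTP(S) load balancer
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend         # must be reachable by the load balancer (e.g. type NodePort)
                port:
                  number: 80

Your backends can then read the original client address from X-Forwarded-For instead of REMOTE_ADDR.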
