I'm currently working on migrating our AWS EKS cluster to Azure AKS.
In our EKS cluster we use an external Nginx with the proxy protocol to identify the client's real IP and check whether it is whitelisted in our Nginx configuration.
In AWS we did this by adding the aws-load-balancer-proxy-protocol annotation to the Kubernetes Service, to support Nginx's proxy_protocol directive.
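Roughly like this, a minimal sketch from memory (the Service name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: nginx-external                 # placeholder name
  annotations:
    # Ask the AWS ELB to wrap incoming connections in the PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: nginx                         # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 443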
Now the day has come: we want to run our cluster on Azure AKS as well, and I'm trying to build the same mechanism.
I saw that the AKS Load Balancer hashes the IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things; I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the Kubernetes Service level, using the loadBalancerSourceRanges API instead of the Nginx level.
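Something like this (the CIDRs are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: nginx-external                 # placeholder name
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:            # only these client ranges should get through
    - 203.0.113.0/24                   # placeholder: whitelisted office range
    - 198.51.100.42/32                 # placeholder: single whitelisted client
  selector:
    app: nginx
  ports:
    - port: 443
      targetPort: 443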
But I think the Load Balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.
I'm stuck now, trying to understand where my knowledge is lacking. I tried to handle it from both ends (the load balancer and the Kubernetes Service), and neither seems to cooperate with me.
Given my failures, what is the "right" way of passing the client's real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
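If you are not installing through Helm, the equivalent is to set the field directly on the ingress controller's Service; a minimal sketch, assuming a stock nginx ingress controller (the name and labels are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller           # assumption: your controller's Service name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local             # preserve the client source IP
  selector:
    app: nginx-ingress                     # assumption: your controller's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443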
You can use the real_ip and geo modules to create the IP whitelist configuration. Alternatively, loadBalancerSourceRanges should let you whitelist client IP ranges; on AKS it works by updating the associated NSG.
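A minimal sketch of that nginx side, assuming externalTrafficPolicy: Local is already in place so the controller sees the real address (all CIDRs are placeholders):

http {
    # Restore the client address from X-Forwarded-For (ngx_http_realip_module)
    real_ip_header    X-Forwarded-For;
    set_real_ip_from  10.0.0.0/8;          # placeholder: LB / node subnet to trust

    # Map the restored client address to an allow/deny flag (ngx_http_geo_module)
    geo $denied {
        default         1;
        203.0.113.0/24  0;                 # placeholder: whitelisted client range
    }

    server {
        listen 80;
        if ($denied) {
            return 403;
        }
        location / {
            proxy_pass http://backend;     # placeholder upstream
        }
    }
}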
Related
I followed this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes on how to set up an Nginx Ingress with Cert-Manager on Kubernetes, with DigitalOcean as the cloud provider.
The tutorial worked fine; I was able to set everything up as written. However (as the tutorial itself states), you end up with three pods of which only one is "Running 1/1" while the other two are "Down". Judging by the comments section, this is quite a common problem, and if all the traffic gets routed to only one pod it is not really scalable. Or am I missing something? Quoting from their tutorial:
Note: By default the Nginx Ingress LoadBalancer Service has service.spec.externalTrafficPolicy set to the value Local, which routes all load balancer traffic to nodes running Nginx Ingress Pods. The other nodes will deliberately fail load balancer health checks so that Ingress traffic does not get routed to them.
Mainly my question is: is there a best practice that I am missing in order to host my website on Kubernetes? It seems I have to choose between scalability (having all the pods healthy and running) and getting the client visitor's IP.
And for whoever ever finds themselves in my situation, this is the reply I got from DigitalOcean Support:
Unfortunately with that Kubernetes setup it would show those other nodes as down without additional traffic configuration. It is possible to skip the nginx ingress part and just use a DigitalOcean load balancer, but this again requires a good deal of setup and can be more difficult than easy.
Their suggestion for having a website that is both scalable and has client IPs for analytics was to set up a droplet with Nginx and point a LoadBalancer at it. More specifically:
As for using a droplet, this would be a normal website configuration with Nginx as your webserver configured to serve content to your app. You would have full access to your application and the Nginx logs on the droplet itself. Putting a load balancer in front of this would require additional configuration, as load balancers do not pass the x-forward header, so the IP addresses of clients would not show up in the logs by default. You would need to configure proxy protocol on the load balancer and in your nginx configuration to be able to obtain those IPs.
https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/
This is also a bit more complex unfortunately.
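For what it's worth, a minimal sketch of the nginx side of the setup the support reply describes (the trusted CIDR is a placeholder for the load balancer's address range):

server {
    # Accept the PROXY protocol header the load balancer prepends
    listen 80 proxy_protocol;

    # Restore the real client address from that header (ngx_http_realip_module)
    set_real_ip_from 10.0.0.0/8;           # placeholder: the LB's address range
    real_ip_header   proxy_protocol;

    # The restored client IP now shows up in the access log
    access_log /var/log/nginx/access.log;

    location / {
        root /var/www/html;                # placeholder: your site content
    }
}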
Hope this saves someone some time.
I'm exploring the nginx container as a load balancer. I'm using Kubernetes Services in the upstream block.
Here is a snippet of my nginx.conf...
upstream backend {
    hash $remote_addr consistent;
    server mgmt-1;   # <-- these entries are Kubernetes Services
    server mgmt-2;
    server mgmt-3;
}
Right now each service maps to only ONE pod, which of course is not ideal.
So, the resolver directive for NGINX Plus is said to keep the endpoint IP up to date, in the event that the Service is changed/moved/deleted/ephemeralized, etc.
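i.e. something along these lines, if I understand the docs (a sketch; the kube-dns address and the default namespace are assumptions):

# NGINX Plus: periodically re-resolve the Service names
resolver kube-dns.kube-system.svc.cluster.local valid=10s;   # assumption: stock kube-dns

upstream backend {
    zone backend 64k;                  # shared memory zone, required for `resolve`
    hash $remote_addr consistent;
    server mgmt-1.default.svc.cluster.local resolve;         # assumption: default namespace
    server mgmt-2.default.svc.cluster.local resolve;
    server mgmt-3.default.svc.cluster.local resolve;
}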
Question: If I wanted to scale up my pods, resulting in multiple endpoints maintained by one Service, would the nginx resolver extract and load the entire set of endpoints? Or will it not know what to do? Thanks
You can test this for yourself. It's quite fun. Spin up a pod running Ubuntu, for example, then kubectl exec -it podname -- /bin/sh into the pod, and from there you can run nslookup mgmt-1 (if they're in the same namespace).
What you will find is that the service resolves to a single IP. This IP is unique to the service. When you actually send traffic to the service IP on a port it is listening on, it will make it to the endpoints.
To answer your question: I do not believe Nginx will be able to tell that mgmt-1 has three endpoints while mgmt-2 has one endpoint (for example). Nginx will only see the service IP.
To see the service IP for yourself you can run kubectl get svc/servicename -n namespacename
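For illustration, the kind of output you would expect (the names and addresses here are made up):

$ kubectl get svc/mgmt-1 -n default
NAME     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
mgmt-1   ClusterIP   10.96.12.34   <none>        80/TCP    5d

$ nslookup mgmt-1        # from a pod in the same namespace
Name:    mgmt-1.default.svc.cluster.local
Address: 10.96.12.34     # the single Service IP, not the pod endpoints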
I have two services deployed to a single cluster in K8s. One is IS4, the other is a client application.
According to leastprivilege, the internal service must also use the FQDN.
The issue I'm having when developing locally (via skaffold & Docker) is that the internal service resolves the FQDN to 127.0.0.1 (the cluster). Is there any way to ensure that it resolves correctly and routes to the correct service?
Another issue is that internally the services communicate over HTTP, while publicly they expose HTTPS. With a URL rewrite I'm able to resolve the DNS part, but I'm unable to change the HTTPS calls to HTTP, as NGINX isn't involved; the call goes directly to the service. If there were some inter-service ruleset I could hook into (similar to Ingress), I believe I could use it to terminate TLS and things would work.
Edit for clarification:
I mean that locally I'm not deploying to AKS. When deployed to AKS this isn't an issue.
HTTPS is exposed via the Nginx ingress, which terminates TLS.
I have a Load Balancer IP provided by OVH that I want to use with the Nginx Ingress Controller, but on an on-premises cluster. There are several guides for doing that with OVH Managed Kubernetes, but that's not possible for me since I already have a cluster.
I tried to use the loadBalancerIP option with Helm, and without Helm as well...
You should expose the Nginx Ingress Controller as a NodePort Service and point your OVH Load Balancer at your workers as endpoints; see the sketch below the diagram.
User ---> OVH LB ----> Nginx Ingress on workers
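A minimal sketch of that NodePort Service (the name, labels, and NodePort numbers are assumptions; the OVH LB would then target every worker on ports 30080/30443):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller               # assumption: your controller's Service name
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx      # assumption: your controller's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080                          # example fixed port for the OVH LB to hit
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443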
Thank you both for your answers. I tried what you recommended, but I think I'm missing a point. To be more clear:
1/ The user part -> I have an OVH LB connected to 3 server nodes; this LB selects a node to be used by a user (round robin).
2/ Once a node has been selected, the user should be able to access any service inside Kubernetes, even if the service is not on this node, by using the LoadBalancer IP.
For the 2nd point, I tried to expose/create an endpoint for the Nginx Ingress Controller where I gave the LB's IP, but I don't know if I have to create an Ingress object for each service (there are only 2-3, like Grafana, Prometheus...). I tried that, but it didn't work. I also tried to create an Ingress for the service where I gave the LB IP, but it didn't work. Note that my k8s cluster runs in LXD containers which sit on 3 connected servers (one LXD container per server node). Also, concerning the OVH LoadBalancer, I'm not very confident about the purpose of Outbound IPs (which is a CIDR range).
I understand that my OVH LB cannot be auto-provisioned, but since its job is done outside of k8s (just assigning a node to a user), the problem is: how can the node route to the right service based on a URL like grafana.example.com? I was using MetalLB as an internal LB and it worked fine, but now I'm struggling with the OVH LB...
I have some services running in Kubernetes. I need an NGINX in front of them, to redirect traffic according to the URLs, handle SSL encryption and load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and have a LoadBalancer Service expose NGINX? Do I understand it right that the gcloud LB would pass the configured ports (e.g. 80 + 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is already being fronted by a gcloud TCP load balancer: if for any reason one of your nginx pods is down, the gcloud load balancer will not forward traffic to it. Since you already have a gcloud load balancer, you will have to use a NodePort Service type and point your gcloud load balancer at all the nodes of your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the services inside your K8s cluster. I recommend you set up an nginx ingress controller, which will basically manage the nginx.conf for you through Ingress resources, and which you can also expose as a LoadBalancer Service type.
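For illustration, a minimal Ingress resource of the kind the controller turns into nginx.conf routing rules (the host, Secret, and Service names are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx                # route through the nginx ingress controller
  tls:
    - hosts:
        - app.example.com                # assumption: your domain
      secretName: app-example-tls        # assumption: Secret holding the TLS cert
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-backend         # assumption: your backend Service
                port:
                  number: 80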