I'm exploring the nginx container as a load balancer. I'm using Kubernetes Services in the upstream block.
Here is a snippet of my nginx.conf...
upstream backend {
    hash $remote_addr consistent;
    server mgmt-1; # <-- these entries are Kubernetes Services
    server mgmt-2;
    server mgmt-3;
}
Right now each service maps to only ONE pod, which of course is not ideal.
The resolver directive for NGINX Plus is said to continuously update the endpoint IP in the event that the Service is changed/moved/deleted/ephemeralized, etc.
Question: if I scale up my pods, so that one Service maintains multiple endpoints, will the nginx resolver extract and load the entire set of endpoints? Or will it not know what to do? Thanks
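For context, the configuration I have in mind looks roughly like this (just a sketch of the NGINX Plus resolve syntax; the resolver address and timings are assumptions for my cluster):

# Sketch only: requires NGINX Plus for the "resolve" parameter on server entries.
resolver kube-dns.kube-system.svc.cluster.local valid=10s;

upstream backend {
    zone backend 64k;      # shared memory zone, required for runtime DNS updates
    hash $remote_addr consistent;
    server mgmt-1 resolve; # re-resolve the Service name while nginx is running
    server mgmt-2 resolve;
    server mgmt-3 resolve;
}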
You can test this for yourself, and it's quite fun. Spin up a pod running Ubuntu, for example, then kubectl exec -it podname -- /bin/sh into the pod, and from there run nslookup mgmt-1 (if they're in the same namespace).
What you will find is that the service resolves to a single IP, unique to that service. When you send that service IP traffic on a port it is listening on, the traffic will make it to the endpoints.
To answer your question: I do not believe nginx will be able to tell that mgmt-1 has three endpoints (for example) while mgmt-2 has one endpoint (for example). Nginx will only see the service IP.
To see the service IP for yourself, you can run kubectl get svc/servicename -n namespacename
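For example, with the names from the question (assuming the default namespace):

# The ClusterIP assigned to the Service -- this is all nginx sees
kubectl get svc mgmt-1 -n default

# The pod IPs actually sitting behind the Service
kubectl get endpoints mgmt-1 -n default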
Related
I'm currently working on migrating an AWS EKS cluster to Azure AKS.
In our EKS setup we use an external Nginx with the proxy protocol to identify the client's real IP and check whether it is whitelisted in our Nginx.
In AWS we did this by adding the aws-load-balancer-proxy-protocol annotation to the Kubernetes service, to support Nginx's proxy_protocol directive.
Now the day has come and we want to run our cluster on Azure AKS as well, so I'm trying to build the same mechanism.
I saw that the AKS Load Balancer hashes the source IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things, and I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the Kubernetes service level using the loadBalancerSourceRanges field instead of at the Nginx level.
But I think the Load Balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.
I'm stuck now trying to understand where my knowledge falls short. I tried to handle it from both ends (the load balancer and the Kubernetes service), and neither seems to cooperate with me.
Given my failures, what is the "right" way of passing the client's real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
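For example, a Helm install along the lines of the linked doc (release and namespace names here are just the doc's examples, and this assumes the ingress-nginx Helm repo is already added):

helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-basic --create-namespace \
    --set controller.service.externalTrafficPolicy=Local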
You can use the real_ip and geo modules to create the IP whitelist configuration. Alternatively, the loadBalancerSourceRanges field should let you whitelist any client IP ranges by updating the associated NSG.
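A rough sketch of the nginx side of that (the trusted proxy subnet and the whitelisted CIDR below are placeholders):

# Trust the X-Forwarded-For header set by the load balancer / ingress in front of us
set_real_ip_from 10.0.0.0/8;        # placeholder: your proxy subnet
real_ip_header   X-Forwarded-For;

# Map the (now restored) client address to an allow/deny flag
geo $remote_addr $allowed {
    default        0;
    203.0.113.0/24 1;               # placeholder: a whitelisted range
}

server {
    listen 80;
    if ($allowed = 0) {
        return 403;
    }
    # ... locations / proxy_pass as usual
}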
I have a Load Balancer IP provided by OVH that I want to use with the Nginx Ingress Controller, but on an on-premises cluster. There are several guides for doing that with OVH Managed Kubernetes, but that is not an option for me since I already have a cluster.
I tried to use the loadBalancerIP option, both with Helm and without Helm as well...
You should expose the Nginx Ingress Controller as a NodePort Service and point your OVH Load Balancer at your workers as endpoints.
User ---> OVH LB ----> Nginx Ingress on workers
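A minimal sketch of that NodePort Service (names, labels, and port numbers are assumptions; NodePorts must fall in the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match your controller pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443

Then register each worker's IP as a backend in the OVH load balancer, forwarding 80 -> 30080 and 443 -> 30443.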
Thank you both for your answers. I tried what you recommended, but I think I'm missing a point. To be more clear:
1/ The user part -> I have an OVH LB connected to a set of 3 nodes; the LB selects a node for each user (round robin).
2/ Once a node has been selected, the user should be able to reach any service inside Kubernetes via the LoadBalancer IP, even if the service is not on that node.
For the 2nd point, I tried to expose/create an endpoint for the Nginx Ingress Controller where I gave the LB's IP, but I don't know if I have to create an Ingress object for each service (there are only 2-3, like grafana, prometheus...). I tried it, but it didn't work. I also tried to create an Ingress for the service where I gave the LB IP, but it didn't work. Note that my k8s cluster runs in LXD containers spread across 3 connected servers (one LXD container per server node). Also, concerning the OVH LoadBalancer, I'm not very confident about the purpose of Outbound IPs (which is a CIDR range)..
I understand that my OVH LB cannot be auto-provisioned, but since its job is done outside of k8s (just assigning a node to a user), the problem is: how can the node reach the right service based on a URL like grafana.example.com? I was using MetalLB as an internal LB and it worked fine, but now I'm struggling with the OVH LB..
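For reference, my understanding of "an Ingress object for each service" is something like the following (the namespace, service name, and port are placeholders from my setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring              # placeholder
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana        # placeholder: the Grafana Service
                port:
                  number: 3000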
I have some services running in Kubernetes. I need an NGINX in front of them to route traffic according to the URLs, handle SSL encryption, and do the load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and have a load-balancing Service expose NGINX? Do I understand it right that the gcloud LB would pass the configured ports (e.g. 80 + 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is already fronted by a gcloud TCP load balancer, and if one of your nginx pods goes down, the gcloud load balancer will simply stop forwarding traffic to it. Since you already have a gcloud load balancer, you will have to use a NodePort Service type and point the load balancer at all the nodes of your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the services inside your K8s cluster. I recommend you set up an nginx ingress controller instead, which will manage the nginx.conf for you through Ingress resources, and which you can also expose as a LoadBalancer Service type.
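For example, a single Ingress resource can express both the URL routing and the SSL handling from your nginx.conf (hostnames, the TLS secret, and service names below are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls    # TLS certificate stored as a Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80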
I have created a service using this manual: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address-service/
In this example the service has an IP (10.32.0.16, from the kubectl describe services example-service command), so we can create a proxy_pass rule in an external (outside the cluster) nginx: proxy_pass http://10.32.0.16:8080;
This IP is different every time (it depends on the number of services, etc.). How can I refer to this service from my external nginx?
An alternative that I found very powerful is to set up nginx inside the cluster using the official nginx ingress controller.
That way you get a load-balanced/HA nginx, and Kubernetes automatically updates its config from Ingress rules.
At the moment Traefik seems very popular for such cases.
It's taking over nginx ingress too...
You can either:
specify a fixed IP for a service
proxy to the service DNS name
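Sketches of both options (all IPs and names are placeholders; the DNS route also requires that your external nginx can reach a resolver that serves the cluster domain):

# Option 1: pin the ClusterIP at creation time.
# It must be unused and inside the cluster's service CIDR.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  clusterIP: 10.32.0.16
  selector:
    app: example
  ports:
    - port: 8080
      targetPort: 8080

# Option 2: proxy to the DNS name from the external nginx.
resolver 10.32.0.2 valid=30s;   # placeholder: a DNS server that knows cluster names

server {
    listen 80;
    location / {
        # using a variable forces nginx to re-resolve at request time
        set $upstream http://example-service.default.svc.cluster.local:8080;
        proxy_pass $upstream;
    }
}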
I'm looking for a way, if possible, to route a request for e.g. team.mysite.com to team.default.svc.cluster.local using nginx. This way I could have multiple WordPress sites using different subdomains of my domain, working as explained above. Basically, calling xyz.mysite.com would have the request forwarded to xyz.default.svc.cluster.local, provided the service exists.
Note:
I have the kube-dns service running at 10.254.0.2
Is this possible? And how exactly would I do this?
Thanks.
Edit:
Going over this again, I could possibly use variables in the nginx.conf, i.e. proxy to $host.default.svc.cluster.local, where $host holds the xyz part of xyz.mydomain.com.
I'd need a way to let nginx resolve the kube-dns services, and also a way to parse out the xyz in xyz.mydomain.com in the nginx.conf and assign it to $host.
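Roughly what I have in mind (untested sketch; 10.254.0.2 is the kube-dns address mentioned above, and I'm assuming everything lives in the default namespace):

server {
    listen 80;
    # capture the xyz of xyz.mysite.com into $subdomain
    server_name ~^(?<subdomain>[^.]+)\.mysite\.com$;

    # kube-dns, so nginx can resolve *.svc.cluster.local at request time
    resolver 10.254.0.2 valid=10s;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://$subdomain.default.svc.cluster.local;
    }
}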
If your nodes have a public IP address you could use an Ingress resource to do exactly what you describe. An Ingress usually defines a list of paths and their target Service, so your service should be running before you try to expose it.
Ingress controllers then dynamically configure Web servers that listen on a defined list of ports for you (typically the host ports 80 and 443 as you may have guessed) using the Ingress configuration mentioned above, and can be deployed using the Kubernetes resource type of your choice: Pod, ReplicationController, DaemonSet.
Once your Ingress rules are configured and your Ingress controller is deployed, you can point the A or CNAME DNS records of your domain to the node(s) running the actual Web server(s).
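For the team.mysite.com example from the question, a rule would look something like this (shown in the current networking.k8s.io/v1 schema; the service port is an assumption):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team
spec:
  rules:
    - host: team.mysite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team           # the Service in the default namespace
                port:
                  number: 80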