Handling CONNECT request with Nginx Ingress on GCP GKE - nginx

I have a cluster of proxy servers on GKE, and I'm trying to figure out how to load balance CONNECT requests to these.
Without GKE, I'm using the nginx stream module (http://nginx.org/en/docs/stream/ngx_stream_core_module.html) which works perfectly.
GCP load balancers do not accept CONNECT requests, so I'm trying to take my existing nginx configuration file and apply it to an nginx ingress resource for GKE. Is this possible?
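One way to carry the stream-module setup over to GKE, as an added example that is not part of the original question: ingress-nginx can expose arbitrary TCP ports through its tcp-services ConfigMap, and it implements those entries with the same stream module, so CONNECT is never parsed at layer 7 on the way to the proxy pods. Everything below is assumed: the controller lives in the ingress-nginx namespace with the Helm chart's default labels, the proxy pods sit behind a ClusterIP Service named forward-proxy in namespace proxies on port 3128, and the CIDR is a placeholder for the clients that are allowed to use the proxy.

```yaml
# Added example (not from the original question): layer-4 passthrough of a
# CONNECT proxy pool through ingress-nginx on GKE.
# Assumptions: namespace/service names, port 3128 and the CIDR are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  # ingress-nginx renders each entry as a stream {} server, so the bytes of a
  # CONNECT session are relayed untouched to one of the proxy pods.
  "3128": "proxies/forward-proxy:3128"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer                 # GCP provisions a TCP (layer-4) network LB
  externalTrafficPolicy: Local       # keeps the real client IP for proxy logs/ACLs
  loadBalancerSourceRanges:
    - 203.0.113.0/24                 # placeholder: only these clients may reach the proxy
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
    - name: proxy-connect            # the extra port the tcp-services entry listens on
      port: 3128
      targetPort: 3128
      protocol: TCP
```

The controller also has to be started with `--tcp-services-configmap=ingress-nginx/tcp-services`; when ingress-nginx is installed from its Helm chart, setting the `tcp` value (for example `--set tcp.3128=proxies/forward-proxy:3128`) generates both the ConfigMap entry and the Service port. The `loadBalancerSourceRanges` restriction is the part worth keeping: a CONNECT proxy reachable from any address is a relay for whoever finds it, so the allowed client ranges belong in the manifest next to the port.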

Related

Can an nginx controller deployment create a HTTP(S) Load Balancer on GKE?

I have deployed nginx and it created the Internal TCP Network Load Balancer, which is a Layer 4 LB, on GKE. The application works as expected.
However, if I want to use GKE's Layer 7 HTTP(S) Load Balancer, is there a way through nginx? I know there are some annotations for AWS, but I'm not sure about GKE.
We tried creating an HTTP(S) Load Balancer using GKE Ingress. It was created, but there are some issues with it and we are unable to use the application. So can we use the nginx controller to create an L7 Load Balancer?

Getting client original ip address with azure aks

I'm currently working on copying AWS EKS cluster to Azure AKS.
In our EKS we use external Nginx with proxy protocol to identify the client real IP and check if it is whitelisted in our Nginx.
In AWS to do so we added to the Kubernetes service annotation aws-load-balancer-proxy-protocol to support Nginx proxy_protocol directive.
Now the day has come and we want to run our cluster also on Azure AKS and I'm trying to do the same mechanism.
I saw that the AKS Load Balancer hashes the IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things. I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the level of the Kubernetes service using the loadBalancerSourceRanges api instead on the Nginx level.
But I think that the Load Balancer sends the IP to the cluster already hashed (is that the right term?) and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.
I'm stuck now trying to understand where I lack the knowledge, I tried to handle it from both ends (load balancer and kubernetes service) and they both seem not to cooperate with me.
Given my failures, what is the "right" way of passing the client real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
"If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work."
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
You can use the real_ip and geo modules to create the IP whitelist configuration. Alternatively, the loadBalancerSourceRanges should let you whitelist any client IP ranges by updating the associated NSG.
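The setup the answer describes, as an added example that is not part of the answer: the ingress-nginx Service with externalTrafficPolicy set to Local so the controller pod sees the real client address as $remote_addr, and an allowlist evaluated by the controller on that preserved address. Assumed here: ingress-nginx is installed from its Helm chart into the ingress-nginx namespace, the backend Service is named api in namespace apps on port 8080, and the host name and CIDRs are placeholders.

```yaml
# Added example (not from the answer above): client source IP preservation on
# AKS with ingress-nginx, plus an allowlist checked on the preserved address.
# Assumptions: names, host and CIDRs are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Local: kube-proxy does not SNAT the packet, so the nginx pod sees the client
  # address as $remote_addr. Only nodes running a controller pod pass the load
  # balancer health check, which is expected with this policy.
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  namespace: apps
  annotations:
    # Rendered as nginx allow/deny rules against $remote_addr, i.e. the address
    # the load balancer delivered, not an X-Forwarded-For value a client could
    # have set itself.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24,198.51.100.7/32"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
```

The controller then forwards the preserved address to the backend in X-Forwarded-For, so the application should accept that header only for requests arriving from the controller pods. Leave the controller ConfigMap key `use-forwarded-headers` at its default of false unless a trusted proxy really sits in front of the controller; enabling it makes nginx believe whatever X-Forwarded-For the sender supplies, which defeats the allowlist.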

What is the role of an external Load Balancer if we are using nginx ingress controller?

I have deployed my application in a cluster of 3 nodes. Now to make this application externally accessible, I have followed this documentation and integrated nginx ingress controller.
Now when I checked my Google's Load Balancer console, I can see a new load balancer created and everything works fine. But the strange thing is I found two of my nodes are unhealthy and only one node is accepting connection. Then I found this discussion and understood that the only node running nginx ingress controller pod will be healthy for load balancer.
Now I find it hard to understand this data flow and the use of the external load balancer here. We use an external load balancer to balance the load across multiple machines. But with this configuration the external load balancer will always forward traffic to the node with the nginx ingress controller pod. If that is correct, what is the role of the external load balancer here?
You can have more than one replica of the nginx ingress controller pods deployed across more than one Kubernetes node for high availability purposes, to reduce the possibility of downtime in case one Kubernetes node is unavailable. The LoadBalancer will send the request to one of those nginx ingress controller pods. From the nginx ingress controller pods it will be forwarded to any of the backend pods. The role of the external load balancer is to expose the nginx ingress controller pods outside the cluster. Because NodePort is not recommended for usage in production and ClusterIP cannot be used to expose pods outside the cluster, LoadBalancer is the viable option.

Should I run nginx in every Kubernetes pod?

I have a kubernetes cluster with 20 worker nodes. My main application is a Flask API that serves thousands of android/ios requests per minute. The way my Kubernetes deployment is configured is that each pod has 2 containers - flask/python server and nginx. The flask app runs on-top of gunicorn with meinheld workers (20 workers per pod).
My question is: do I need to be running nginx in each of the pods alongside the flask app or can I just use a main nginx ingress controller as a proxy buffering layer?
NOTE:
I am using ELB to route external traffic to my internal k8s cluster.
It is not too strange to have a proxy on every pod; in fact, Istio injects one Envoy container per pod as a proxy to control the ingress and egress traffic and also to have more accurate metrics.
Check the documentation: https://istio.io/
But if you don't want to manage a service mesh for the moment, you can avoid the nginx and directly use the port mapping on the Services and an Ingress definition.
I don't see any reason to have an nginx container for every other Flask container. You can have one nginx container as an API gateway to your entire set of APIs.

NGINX loadbalancing on Kubernetes

I have some services running in Kubernetes. I need an NGINX in front of them, to redirect traffic according to the URLs, handle SSL encryption and load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and have a load-balancing Service expose NGINX? Do I understand it right that the gcloud LB would pass the configured ports (e.g. 80 and 443) to my NGINX Service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is already being fronted by a gcloud TCP load balancer: if for any reason one of your nginx pods is down, the gcloud load balancer will not forward traffic to it. Since you already have a gcloud load balancer, you will have to use a NodePort Service type, and you will have to point your gcloud load balancer to all the nodes in your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the Services internally in your K8s cluster. I recommend you set up an nginx ingress controller, which will basically manage the nginx.conf for you through an Ingress resource, and you can also expose it as a LoadBalancer Service type.
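The Deployment-plus-NodePort layout this answer recommends, and the "replicas across more than one node" point from the earlier answer about the load balancer's role, as an added example that is not part of either answer. Assumed here: the working nginx.conf mentioned in the question is stored in a ConfigMap named nginx-conf (contents not shown) and serves a 200 on /healthz for the readiness probe, the namespace is edge, and the nodePort numbers are placeholders inside the default 30000-32767 range.

```yaml
# Added example (not from the answers above): nginx as a Deployment spread over
# several nodes, exposed on fixed NodePorts that a manually created gcloud TCP
# load balancer targets on every node.
# Assumptions: ConfigMap nginx-conf exists and serves /healthz; ports and
# namespace are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-nginx
  namespace: edge
spec:
  replicas: 3
  selector:
    matchLabels:
      app: edge-nginx
  template:
    metadata:
      labels:
        app: edge-nginx
    spec:
      # Keep replicas on distinct nodes so one node outage does not remove every
      # healthy backend from the load balancer at once.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: edge-nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
            - containerPort: 443
          # A failing readiness probe removes the pod from the Service endpoints;
          # that is what makes "the LB will not forward traffic to it" hold.
          readinessProbe:
            httpGet:
              path: /healthz          # assumed to exist in nginx-conf
              port: 80
            periodSeconds: 5
          volumeMounts:
            - name: conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
              readOnly: true
      volumes:
        - name: conf
          configMap:
            name: nginx-conf
---
apiVersion: v1
kind: Service
metadata:
  name: edge-nginx
  namespace: edge
spec:
  type: NodePort
  selector:
    app: edge-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080               # placeholder: gcloud LB forwards to <node>:30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443               # placeholder: gcloud LB forwards to <node>:30443
```

Pinning the nodePort values is what lets the gcloud load balancer be configured once for every node; if they are left to be auto-assigned, the Service can come back with different ports after a re-create and the load balancer's forwarding rule silently points at nothing. Switching this to the ingress controller the answer also suggests replaces the ConfigMap-managed nginx.conf with Ingress resources and lets a `type: LoadBalancer` Service create the gcloud load balancer for you.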
