Should I run nginx in every Kubernetes pod?

I have a Kubernetes cluster with 20 worker nodes. My main application is a Flask API that serves thousands of Android/iOS requests per minute. My Kubernetes deployment is configured so that each pod has two containers: a Flask/Python server and nginx. The Flask app runs on top of gunicorn with meinheld workers (20 workers per pod).
My question is: do I need to run nginx in each pod alongside the Flask app, or can I just use a main nginx ingress controller as a proxy-buffering layer?
NOTE:
I am using an ELB to route external traffic to my internal k8s cluster.

It is not that strange to have a proxy in every pod; in fact, Istio injects one Envoy container per pod as a proxy to control ingress and egress traffic and to collect more accurate metrics.
Check the documentation: https://istio.io/
But if you don't want to manage a service mesh for the moment, you can drop the per-pod nginx and simply use the port mapping on your Services plus an Ingress definition.
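For reference, the Service-plus-Ingress setup described above might look like the following minimal sketch; the flask-api name, hostname, and port 8000 are hypothetical stand-ins for the asker's actual values.

```yaml
# Minimal sketch: route traffic to the Flask pods through a Service and
# an Ingress, with no nginx sidecar in the pod. Names/ports are assumed.
apiVersion: v1
kind: Service
metadata:
  name: flask-api
spec:
  selector:
    app: flask-api
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8000  # port gunicorn listens on inside the pod
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-api
spec:
  rules:
    - host: api.example.com  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-api
                port:
                  number: 80
```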

I don't see any reason to have an nginx container alongside every Flask container. You can have one nginx container acting as an API gateway to your entire set of APIs.
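Concretely, dropping the sidecar leaves one container per pod; a minimal sketch, assuming a hypothetical image name and the gunicorn/meinheld setup from the question:

```yaml
# Sketch of the slimmed-down Deployment: a single gunicorn container per
# pod, with proxy buffering left to the ingress layer in front of it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask
          image: registry.example.com/flask-api:latest  # hypothetical image
          command: ["gunicorn", "-w", "20",
                    "--worker-class", "meinheld.gmeinheld.MeinheldWorker",
                    "-b", "0.0.0.0:8000", "app:app"]
          ports:
            - containerPort: 8000
```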

Related

How to scale a web application in Kubernetes?

Let's consider a Python web application deployed under uWSGI behind Nginx.
HTTP client ↔ Nginx ↔ Socket/HTTP ↔ uWSGI (web server) ↔ webapp
Here nginx is used as a reverse proxy / load balancer.
How do you scale this kind of application in Kubernetes?
Several options come to mind:
1. Deploy nginx and uWSGI in a single pod. Simple approach.
2. Deploy nginx + uWSGI in a single container? This violates the "one process per container" principle.
3. Deploy only uWSGI (serving HTTP directly), omitting nginx.
Or is there another solution, involving nginx ingress/load-balancer services?
It depends.
I see two scenarios:
Ingress is used
In this case there's no need for an nginx server within the pod; ingress-nginx can balance traffic across the Kubernetes cluster instead. You can find a good example in this comment on a GitHub issue.
No ingress is used.
In this case I'd go with option 1: deploy nginx and uWSGI in a single pod, the simple approach. This way you can easily scale your application in and out, and you don't have any complicated or unnecessary dependencies.
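If you go with option 1, the sidecar layout might look like this minimal sketch; the images, ports, and the shared localhost:8080 upstream are assumptions, not details from the question.

```yaml
# Option 1 as a sketch: nginx and uWSGI as two containers in one pod.
# Containers in a pod share a network namespace, so nginx can proxy to
# uWSGI on 127.0.0.1:8080. Scaling the Deployment scales both together.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: nginx
          image: nginx:1.25  # expects a config proxying to 127.0.0.1:8080
          ports:
            - containerPort: 80
        - name: uwsgi
          image: registry.example.com/webapp:latest  # hypothetical app image
          ports:
            - containerPort: 8080
```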
In case you're not familiar with what an Ingress is, see the Kubernetes documentation on Ingress.

Kubernetes ingress communication with nginx inside pod (serving SPA)

I have a single-page application. It was developed for HTTP/2 (no bundles or other HTTP/1.1 optimizations), but now I need to move it to Kubernetes. If I understand correctly, I can use an ingress controller with TLS termination and HTTP/2, but how will the ingress communicate with the pod? Will it use HTTP/1.1?
I tried deploying this application and saw a drop in response performance on every downloaded file. The biggest drop was when downloading the first files.
What is the best way to serve static files from a webserver (nginx) inside a pod through an ingress with HTTP/2?
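For what it's worth, with the ingress-nginx controller the client-facing connection is HTTP/2 by default once TLS is configured, while the hop from the controller to the pod is plain HTTP/1.1. A minimal sketch of that TLS-terminating Ingress, with hypothetical host, Secret, and Service names:

```yaml
# Sketch: TLS termination at the ingress. ingress-nginx serves HTTP/2 to
# clients on the TLS listener by default, but proxies to the backend
# nginx pod over HTTP/1.1. All names below are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spa
spec:
  tls:
    - hosts:
        - spa.example.com
      secretName: spa-tls  # Secret holding the TLS cert and key
  rules:
    - host: spa.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: spa-nginx  # Service in front of the static-file pods
                port:
                  number: 80
```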

What is the role of an external Load Balancer if we are using nginx ingress controller?

I have deployed my application in a cluster of 3 nodes. To make this application externally accessible, I followed this documentation and integrated the nginx ingress controller.
Now when I check my Google Load Balancer console, I can see a new load balancer created, and everything works fine. But the strange thing is that two of my nodes are marked unhealthy and only one node is accepting connections. Then I found this discussion and understood that only the node running the nginx ingress controller pod will be reported healthy to the load balancer.
Now I find it hard to understand this data flow and the use of an external load balancer here. We use an external load balancer to spread load across multiple machines, but with this configuration it will always forward traffic to the node running the nginx ingress controller pod. If that is correct, what is the role of the external load balancer here?
You can have more than one replica of the nginx ingress controller pod, spread across multiple Kubernetes nodes, for high availability and to reduce the possibility of downtime if one node becomes unavailable. The load balancer sends each request to one of those nginx ingress controller pods, which then forwards it to one of the backend pods. The role of the external load balancer is to expose the nginx ingress controller pods outside the cluster: NodePort is not recommended for production use, and ClusterIP cannot expose pods outside the cluster, so LoadBalancer is the viable option.
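As a sketch of that setup, the controller pods sit behind a single LoadBalancer Service; the namespace and labels below follow common ingress-nginx conventions but may differ in your install.

```yaml
# Sketch: one LoadBalancer Service in front of several ingress controller
# replicas. The cloud load balancer targets this Service, which reaches
# whichever nodes are running healthy controller pods.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

Scaling the controller itself is then just, e.g., `kubectl scale deployment ingress-nginx-controller --replicas=3 -n ingress-nginx`.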

NGINX loadbalancing on Kubernetes

I have some services running in Kubernetes. I need an NGINX in front of them, to redirect traffic according to the URLs, handle SSL encryption and load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and have a load-balancing Service expose NGINX? Do I understand correctly that the gcloud LB would pass the configured ports (e.g. 80 and 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is already being fronted by a gcloud TCP load balancer. If one of your nginx pods is down for any reason, the gcloud load balancer will not forward traffic to it. Since you already have a gcloud load balancer, you will have to use a NodePort Service type and point the load balancer at all the nodes in your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the services internally in your K8s cluster. I recommend setting up an nginx ingress controller, which basically manages the nginx.conf for you through Ingress resources, and which you can also expose as a LoadBalancer Service type.
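A minimal sketch of the NodePort variant described above (the label and nodePort value are assumptions):

```yaml
# Sketch: expose the nginx Deployment on a fixed NodePort so an existing
# gcloud TCP load balancer can target every node on that port.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # hypothetical; must fall in the 30000-32767 range
```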

Nginx to load balance deployment inside a service kubernetes

I want to use Nginx to load balance a Kubernetes deployment.
The deployment is part of a service and contains pods which can be scaled. I want NGINX to be part of the service without being scaled itself.
I know that I can use NGINX as an external load balancer by configuring it with an external DNS resolver. That way it can get the IPs of the scaled pods and apply its own load-balancing rules.
Is it possible to make NGINX part of the service? If so, how would DNS resolution to the pods work? And in that case, which pods would the service name refer to?
I would like to avoid declaring two services, so as to keep a single definition of the setup that represents one microservice.
More generally, how can I declare in the same service:
a unit which is scaled
a backend, not scaled
a database, not scaled
Thanks all
You can't have NGINX as part of the Service. A Service doesn't contain any pods; a Deployment does. It sounds like you want an ingress, which would act as a load balancer for any and all services on the cluster.
EDIT:
An ingress controller is, in essence, a deployment of NGINX exposed publicly as a service, acting as a load balancer / fan-out. The deployment scans your cluster for Ingress resources and reconfigures NGINX to forward requests to the appropriate services.
Typically people deploy a single controller that acts as the load balancer for all of their microservices. You can fan out based on DNS, URI, other headers, and so on. You can also do TLS termination and add basic auth to specific services; it's even possible to splice NGINX config snippets directly into the Ingress resources.
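As a sketch of that fan-out, a single Ingress can route by URI to the scaled unit and the unscaled backend from the question, with basic auth applied via ingress-nginx annotations; all names are hypothetical.

```yaml
# Sketch: URI-based fan-out plus basic auth. The auth annotations apply
# to everything behind this Ingress and reference an htpasswd Secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservice
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth  # htpasswd Secret
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: app  # the scaled unit
                port:
                  number: 80
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: backend  # the unscaled backend
                port:
                  number: 80
```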
