How to scale a web application in Kubernetes? - nginx

Let's consider a Python web application deployed under uWSGI behind Nginx:
HTTP client ↔ Nginx ↔ socket/HTTP ↔ uWSGI (application server) ↔ webapp
where Nginx is used as a reverse proxy / load balancer.
How do you scale this kind of application in Kubernetes?
Several options come to mind:
1. Deploy nginx and uWSGI in a single pod. Simple approach.
2. Deploy nginx + uWSGI in a single container? This violates the "one process per container" principle.
3. Deploy only uWSGI (speaking HTTP) and omit nginx entirely.
Or is there another solution, involving nginx ingress / load balancer services?

It depends.
I see two scenarios:
1. Ingress is used.
In this case there's no need to have an nginx server within the pod; ingress-nginx can balance traffic across the Kubernetes cluster instead. You can find a good example in this comment on a GitHub issue.
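A minimal sketch of that scenario, assuming an ingress-nginx controller is installed in the cluster and the uWSGI pods sit behind a Service named webapp (the hostname, Service name, and port are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: webapp
    spec:
      ingressClassName: nginx          # route through the ingress-nginx controller
      rules:
        - host: webapp.example.com     # hypothetical hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: webapp       # Service fronting the uWSGI pods
                    port:
                      number: 8000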
2. No ingress is used.
In this case I'd go with option 1: deploy nginx and uWSGI in a single pod. This way you can easily scale your application in and out without any complicated or unnecessary dependencies; see the sketch below.
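A minimal sketch of option 1, as a Deployment that runs an nginx sidecar proxying to uWSGI over localhost (the names, images, and ports are assumptions, not taken from the question):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp
    spec:
      replicas: 3                        # scale in/out by changing the replica count
      selector:
        matchLabels:
          app: webapp
      template:
        metadata:
          labels:
            app: webapp
        spec:
          containers:
            - name: nginx                # reverse proxy in front of uWSGI
              image: nginx:1.25
              ports:
                - containerPort: 80
              # the nginx.conf (e.g. mounted from a ConfigMap, omitted here)
              # would uwsgi_pass/proxy_pass to 127.0.0.1:8000
            - name: uwsgi                # the Python application server
              image: myorg/webapp:latest # hypothetical application image
              ports:
                - containerPort: 8000

Both containers share the pod's network namespace, so nginx reaches uWSGI on 127.0.0.1 and the pair scales as a single unit.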
If you're not familiar with what an Ingress is, see the Kubernetes documentation on Ingress.

Related

Is it possible to reverse proxy to natively running applications from a containerized Nginx in K3s?

On my server I run some applications directly on the host. In parallel I have a single-node K3s cluster that also contains a few applications. To manage traffic routing and HTTPS certificates for the individual services in one central place, I want to use Nginx. A Traefik ingress controller runs in the cluster, which I use for routing in that context.
To be able to reverse proxy to each application, no matter whether it runs directly on the host or in a container in K3s, Nginx must be able to reach every application locally, without the traffic leaving the server. E.g. proxying myservice.mydomain.com to localhost:8080 from Nginx should end up at the web server of a natively running application, and myservice2.mydomain.com at the web server of a container in K3s.
Now, is this possible if Nginx runs in the K3s cluster, or do I have to install it directly on the host machine?
Yes, you can use Nginx that way, keeping a single Nginx in front of both the host and K3s.
You can expose your in-cluster service as a NodePort from K3s, while the local service you run on the host machine listens on its own port.
In this setup Nginx forwards the traffic like:

Nginx -> MachineIP:8080 -> application on K3s (NodePort)
      -> MachineIP:3000 -> application running on the host

Example: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
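A minimal sketch of the NodePort side, assuming the in-cluster application sits behind a Service (name, labels, and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: myservice
    spec:
      type: NodePort
      selector:
        app: myservice           # matches the pods of the in-cluster app
      ports:
        - port: 80               # cluster-internal port
          targetPort: 8080       # container port
          nodePort: 30080        # reachable as MachineIP:30080 from the host Nginx

Note that NodePorts default to the 30000-32767 range, so the host Nginx would proxy to MachineIP:30080 here rather than MachineIP:8080 unless the cluster's NodePort range is changed.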

Difference between using an nginx pod as reverse proxy vs nginx ingress

Say I have 10 services that I need to expose to the outside world, using path-based routing to reach the different services. I can create an Nginx pod and a Service of type LoadBalancer,
then write an Nginx configuration that redirects to the different services depending on the URL path. After exploring more, I came to know about the Nginx ingress controller, which can do the same. What is the difference between the two approaches, and which one is better?
In both cases, you are running an Nginx reverse proxy in a Kubernetes pod inside the cluster. There is not much technical difference between them.
If you run the proxy yourself, you have complete control over the Nginx version and the configuration, which could be desirable if you have very specific needs and you are already an Nginx expert. The most significant downside is that you need to manually reconfigure and redeploy the central proxy if you add or change a service.
If you use Kubernetes ingress, the cluster maintains the Nginx configuration for you, and it regenerates it based on Ingress objects that can be deployed per-service. This is easier to maintain if you are not an Nginx expert, and you can add and remove services without touching the centralized configuration.
The Kubernetes ingress system in principle can also plug in alternate proxies; it is not limited to Nginx. However, its available configuration is somewhat limited, and some of the other proxies I've looked at recommend using their own custom Kubernetes resources instead of the standard Ingress objects.
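A minimal sketch of the ingress approach, where each service ships its own Ingress object with a path rule and the controller merges them all into one generated Nginx configuration (service names and paths are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: service-a
    spec:
      ingressClassName: nginx
      rules:
        - http:
            paths:
              - path: /service-a       # path-based routing handled by the controller
                pathType: Prefix
                backend:
                  service:
                    name: service-a    # hypothetical backend Service
                    port:
                      number: 80

Adding an eleventh service then means deploying one more Ingress object rather than editing a central nginx.conf.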

Kubernetes ingress communication with nginx inside pod (serving SPA)

I have a single-page application. It was developed for HTTP/2 (no bundles or other HTTP/1.1 optimizations), but now I need to move it to Kubernetes. If I understand correctly, I can use an ingress controller with TLS termination and HTTP/2, but how will the ingress communicate with the pod? Will it use HTTP/1.1?
I tried deploying this application and saw a decrease in response performance on every downloaded file; the biggest decrease was on the first files downloaded.
What is the best way to serve static files from a web server (nginx) inside a pod through an ingress with HTTP/2?
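For context: with ingress-nginx, the controller terminates TLS and can speak HTTP/2 to clients, but by default it proxies to the backend pod over plain HTTP/1.1, so the HTTP/2 benefits only apply on the client side of the controller. A minimal sketch of such a TLS-terminating Ingress (hostname, secret, and Service name are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: spa
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - spa.example.com      # hypothetical hostname
          secretName: spa-tls      # TLS cert/key stored in a Secret
      rules:
        - host: spa.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: spa      # Service fronting the nginx pod serving the SPA
                    port:
                      number: 80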

Should I run nginx in every Kubernetes pod?

I have a Kubernetes cluster with 20 worker nodes. My main application is a Flask API that serves thousands of Android/iOS requests per minute. My Kubernetes deployment is configured so that each pod has two containers: the Flask/Python server and nginx. The Flask app runs on top of gunicorn with meinheld workers (20 workers per pod).
My question is: do I need to be running nginx in each pod alongside the Flask app, or can I just use a main nginx ingress controller as a proxy buffering layer?
NOTE:
I am using an ELB to route external traffic to my internal K8s cluster.
It's not too strange to have a proxy in every pod; in fact, Istio injects one Envoy container per pod as a proxy to control ingress and egress traffic, and also to get more accurate metrics.
Check the documentation: https://istio.io/
But if you don't want to manage a service mesh for the moment, you can drop the per-pod nginx and rely directly on the port mapping in your Services plus an Ingress definition, as sketched below.
I don't see any reason to have an nginx container next to every Flask container. You can have one nginx container as an API gateway to your entire set of APIs.
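A minimal sketch of the ingress-only layout both answers point at: the pods run only gunicorn, and a Service exposes that port directly for the ingress controller to proxy to (names, image, and ports are assumptions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: flask-api
    spec:
      replicas: 20
      selector:
        matchLabels:
          app: flask-api
      template:
        metadata:
          labels:
            app: flask-api
        spec:
          containers:
            - name: flask                    # gunicorn/meinheld only, no nginx sidecar
              image: myorg/flask-api:latest  # hypothetical application image
              ports:
                - containerPort: 8000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: flask-api
    spec:
      selector:
        app: flask-api
      ports:
        - port: 80
          targetPort: 8000       # the ingress controller proxies straight to gunicorn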

NGINX load balancing on Kubernetes

I have some services running in Kubernetes. I need an NGINX in front of them to route traffic according to the URLs, handle SSL termination, and do load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and have a LoadBalancer Service expose NGINX? Do I understand correctly that the gcloud LB would pass the configured ports (e.g. 80 + 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is already fronted by a gcloud TCP load balancer, and if one of your nginx pods goes down for any reason, the gcloud load balancer simply stops forwarding traffic to it. Since you already have a gcloud load balancer, you will have to use a NodePort Service type and point your gcloud load balancer at all the nodes of your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the services internally in your K8s cluster. I recommend you set up an nginx ingress controller, which will basically manage the nginx.conf for you through an Ingress resource and you can also expose it as a LoadBalancer Service type.
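A minimal sketch of exposing such an nginx layer, whether a hand-configured Deployment or an ingress controller, through a LoadBalancer Service (the selector and labels are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: LoadBalancer       # gcloud provisions the external load balancer
      selector:
        app: nginx             # matches the pods of the nginx Deployment
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443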
