Kubernetes ingress communication with nginx inside pod (serving SPA)

I have a single-page application. It was developed to be served over HTTP/2 (no bundling or other HTTP/1.1 optimizations), but now I need to move it to Kubernetes. If I understand correctly, I can use an ingress controller with TLS termination and HTTP/2, but how will the ingress communicate with the pod? Will it use HTTP/1.1?
I tried deploying the application and saw a drop in response performance for every downloaded file; the biggest drop was when downloading the first files.
What is the best way to serve static files from a webserver (nginx) inside a pod through an ingress with HTTP/2?
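For context, a minimal sketch of the setup described above, assuming the ingress-nginx controller: TLS (and with it HTTP/2) is terminated at the controller, and the hop from the controller to the pod's nginx is typically plain HTTP/1.1. The hostname, TLS secret and service names are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spa
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - spa.example.com
    secretName: spa-example-com-tls   # clients negotiate HTTP/2 on this TLS listener
  rules:
  - host: spa.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: spa-nginx           # Service for the nginx pod serving the SPA (HTTP/1.1 upstream)
            port:
              number: 80
```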

Related

How to scale a web application in Kubernetes?

Let's consider a Python web application deployed under uWSGI behind Nginx:
HTTP client ↔ Nginx ↔ Socket/HTTP ↔ uWSGI (web server) ↔ webapp
Here nginx is used as a reverse proxy / load balancer.
How do you scale this kind of application in Kubernetes?
Several options come to my mind:
1. Deploy nginx and uWSGI in a single pod. Simple approach.
2. Deploy nginx + uWSGI in a single container? This violates the "one process per container" principle.
3. Deploy only uWSGI (speaking HTTP directly), omitting nginx.
Or is there another solution involving nginx ingress/load balancer services?
It depends.
I see two scenarios:
Ingress is used
In this case there's no need to have an nginx server within the pod; ingress-nginx can balance traffic across the Kubernetes cluster instead. You can find a good example in this comment on a GitHub issue.
No ingress is used.
In this case I'd go with option 1 (Deploy nginx and uWSGI in a single pod. Simple approach.). This way you can easily scale your application in/out and don't have any complicated/unnecessary dependencies. A sketch of this option follows below.
In case you're not familiar with what an Ingress is, please see the Kubernetes documentation on Ingress.
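A minimal sketch of option 1, assuming the uWSGI container listens on 127.0.0.1:8000 and the nginx container's configuration (mounted from a ConfigMap, omitted here) proxies to it; all names and images are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-webapp
  template:
    metadata:
      labels:
        app: my-webapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.25                          # reverse proxy to 127.0.0.1:8000
        ports:
        - containerPort: 80
      - name: uwsgi
        image: registry.example.com/webapp:latest  # uWSGI listening on 127.0.0.1:8000
---
apiVersion: v1
kind: Service
metadata:
  name: my-webapp
spec:
  selector:
    app: my-webapp
  ports:
  - port: 80
    targetPort: 80
```

Scaling is then just a matter of changing replicas (or attaching a HorizontalPodAutoscaler), and nginx scales together with the application.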

Why should we deploy static web apps in a separate nginx container?

All the tutorials I've found on the Internet demonstrate how to deploy a web site/app in an nginx container and set up an Ingress to proxy related requests to that container.
That seems kind of redundant to me. Why not place the content on a volume, mount it into the ingress-nginx controller, and serve the static assets from there? That seems better from a performance perspective: no additional proxy hop and no additional nginx container consuming compute resources.
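The pattern those tutorials describe looks roughly like the sketch below: a dedicated nginx Deployment serves the static assets, a Service sits in front of it, and an Ingress routes requests to that Service. All names are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: static-site
spec:
  ingressClassName: nginx
  rules:
  - host: static.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: static-site   # Service in front of the nginx Deployment holding the assets
            port:
              number: 80
```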
An Ingress resource is an abstract concept that was designed to route HTTP(S) requests to Kubernetes services. It defines only a limited subset of that functionality so it can remain generic and be implemented by many types of ingress controller.
ingress-nginx is only one implementation of an ingress controller, one that happens to also be good at serving static content. HAProxy, Traefik or the AWS load balancer controller are at the other end of the scale.
An nginx-based ingress controller could add custom annotations to support something like what you propose, but I imagine you would have a hard time arguing that into the project. Internally, they already add a second nginx instance for the default backend.
If one proxy hop is a critical differentiator in performance, then you would probably get more benefit from hosting the static content on an edge CDN/device. Another alternative is to not rely on an Ingress at all and publish the service directly to the outside world, as sketched below.
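For the "publish the service directly" alternative, a minimal sketch: a Service of type LoadBalancer exposes the static-content nginx pods without any Ingress in the path (name and selector are placeholders).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: static-site
spec:
  type: LoadBalancer        # provisions an external load balancer on supported clouds
  selector:
    app: static-site
  ports:
  - port: 80
    targetPort: 80
```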
You need a container to run the application; the Ingress is only used to access that application from outside the Kubernetes cluster. You cannot just keep the content in a volume and mount it into the ingress controller: one ingress controller is used to route requests for multiple applications.

Should I run nginx in every Kubernetes pod?

I have a Kubernetes cluster with 20 worker nodes. My main application is a Flask API that serves thousands of Android/iOS requests per minute. The way my Kubernetes deployment is configured, each pod has 2 containers: a Flask/Python server and nginx. The Flask app runs on top of gunicorn with meinheld workers (20 workers per pod).
My question is: do I need to be running nginx in each of the pods alongside the flask app or can I just use a main nginx ingress controller as a proxy buffering layer?
NOTE:
I am using ELB to route external traffic to my internal k8s cluster.
It is not too strange to have a proxy in every pod; in fact, Istio injects one Envoy container per pod as a proxy to control the ingress and egress traffic and also to obtain more accurate metrics.
Check the documentation: https://istio.io/
But if you don't want to manage a service mesh for the moment, you can drop the nginx container and rely directly on the port mapping in your Services and an Ingress definition.
I don't see any reason to have an nginx container alongside every Flask container. You can have one nginx instance acting as an API gateway to your entire set of APIs.
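If the main reason for the sidecar is buffering slow clients, a hedged sketch of pushing that to the controller: ingress-nginx exposes proxy-buffering (and related) annotations, so a single Ingress in front of the Flask Service can take over that role. Host, names and values are placeholders, and annotation availability depends on your ingress-nginx version.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-api
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "on"   # buffer responses at the controller
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"  # cap request bodies before they reach gunicorn
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-api     # Service in front of the gunicorn/meinheld pods
            port:
              number: 8000
```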

Nginx to load balance a deployment inside a service in Kubernetes

I want to use Nginx to load balance a kubernetes deployment.
The deployment is part of a service. It contains pods which can be scaled. I want NGINX to be part of the service without being scaled.
I know that I can use NGINX as an external load balancer by configuring it with an external DNS resolver. With that it can get the IPs of the scaled pods and apply its own load-balancing rules.
Is it possible to make NGINX part of the service? If so, how would the DNS resolution to the pods work? And in that case, which pods does the service name refer to?
I would like to avoid declaring two services, so that the setup representing a microservice is kept as a single definition.
More generally, how can I declare in the same service:
a unit which is scaled
a backend, not scaled
a database, not scaled
Thanks all
You can't have NGINX as part of the service. A Service doesn't contain any pods; a Deployment does. It sounds like you want an ingress, which would act as a load balancer for any and all services on the cluster.
EDIT:
An ingress controller is, in essence, a Deployment of NGINX exposed publicly as a Service, acting as a load balancer/fan-out. The deployment scans your cluster for Ingress resources and reconfigures NGINX to forward requests to the appropriate services.
Typically people deploy a single controller that acts as the load balancer for all of your microservices. You can fan out based on DNS, URI, other headers and so on. You can also do TLS termination and add basic auth to specific services, and it's even possible to splice NGINX config snippets directly into the ingress resources.
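A hedged sketch of that fan-out: a single Ingress terminating TLS and routing by path to two backend Services. Hostname, secret and service names are placeholders; with ingress-nginx, basic auth and config snippets would be added via annotations such as nginx.ingress.kubernetes.io/auth-type and nginx.ingress.kubernetes.io/configuration-snippet.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com
    secretName: api-example-com-tls   # TLS terminated at the controller
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /orders                 # fan out by URI to the orders Service
        pathType: Prefix
        backend:
          service:
            name: orders
            port:
              number: 80
      - path: /users                  # ...and to the users Service
        pathType: Prefix
        backend:
          service:
            name: users
            port:
              number: 80
```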

How to use nginx with Kubernetes (GKE) and Google HTTPS load balancer

We make use of Ingress to create HTTPS load balancers that forward directly to our (typically nodejs) services. However, recently we have wanted more control over the traffic in front of nodejs than the Google load balancer provides:
Standardised, custom error pages
Standard rewrite rules (e.g. redirecting HTTP to HTTPS)
Decouple pod readinessProbes from load balancer health checks (so we can still serve custom error pages when there are no healthy pods).
We use nginx in other parts of our stack so this seems like a good choice, and I have seen several examples of nginx being used to front services in Kubernetes, typically in one of two configurations.
An nginx container in every pod forwarding traffic directly to the application on localhost.
A separate nginx Deployment & Service, scaled independently and forwarding traffic to the appropriate Kubernetes Service.
What are the pros/cons of each method and how should I determine which one is most appropriate for our use case?
Following on from Vincent H, I'd suggest pipelining the Google HTTPS Load Balancer to an nginx ingress controller.
As you've mentioned, this can scale independently, have its own health checks, and standardise your error pages.
We've achieved this by having a single kubernetes.io/ingress.class: "gce" ingress object, which has a default backend of our nginx ingress controller. All our other ingress objects are annotated with kubernetes.io/ingress.class: "nginx".
We're using the controller documented here: https://github.com/kubernetes/ingress/tree/master/controllers/nginx, with a custom /etc/nginx/template/nginx.tmpl allowing complete control over the ingress.
For complete transparency, we haven't (yet) set up custom error pages in the nginx controller, but the documentation appears straightforward.
One of the requirements listed is that you want to decouple the pod readinessProbes so you can serve custom error pages. If you add an nginx container to every pod, this isn't possible: the pod will be restarted when one of the containers in the pod fails its liveness/readiness probes. Personally, I also prefer to decouple as much as possible, so you can scale pods independently, assign custom machine types if needed, and save some resources by starting only the number of nginx instances you really need (mostly memory).
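A hedged sketch of the two-tier setup described above: one GCE-class Ingress whose default backend is the nginx ingress controller's Service, and application Ingresses annotated for the nginx class. Names are placeholders, and the kubernetes.io/ingress.class annotation matches the older style used in the answer (newer clusters would typically use spec.ingressClassName instead).

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gce-entrypoint
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: nginx-ingress-controller   # Service fronting the nginx controller pods
      port:
        number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app               # your application's Service
            port:
              number: 80
```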
