Best practice for deploying a Flask API on a production Google Kubernetes cluster - nginx

A Flask API (served by Gunicorn) is used as the inference API for a deep learning model.
This specific inference process is very CPU-intensive (not using a GPU yet).
What is the best practice for deploying it to a Kubernetes cluster, considering these aspects:
should I create many pods that each handle requests with a single Gunicorn worker, or fewer pods that each run multiple Gunicorn workers? (node memory footprint)
since Google lets you expose your deployment as a Service using an external load balancer,
do I need an Nginx web server in front of my Flask/Gunicorn stack?
is creating multiple identical pods on the same node more memory-intensive than handling all of these requests with multithreading in a single pod?

More, smaller pods are generally better, provided you stay under "thousands". It is easier for the cluster to place a pod that requires 1 CPU and 1 GB of RAM sixteen times than it is to place a single pod that requires 16 CPU and 16 GB of RAM once. You usually want multiple replicas anyway, for redundancy, to tolerate node failure, and for zero-downtime upgrades.
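As a rough illustration of the "many small replicas" idea, here is a minimal Deployment sketch; the name, image, and resource numbers are placeholders you would tune to your own model:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-inference            # hypothetical name
spec:
  replicas: 4                      # several small replicas rather than one big pod
  selector:
    matchLabels:
      app: flask-inference
  template:
    metadata:
      labels:
        app: flask-inference
    spec:
      containers:
      - name: api
        image: gcr.io/my-project/flask-inference:latest   # placeholder image
        ports:
        - containerPort: 8000
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
          limits:
            cpu: "1"
            memory: 1Gi
```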
If the Istio Ingress system works for you, you may not need a separate URL-routing layer (Nginx) inside your cluster. If you're okay with direct access to your Gunicorn servers, with no routing or filtering in front of them, pointing a LoadBalancer Service directly at them is a valid choice.
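If you do point a LoadBalancer Service directly at the Gunicorn pods, a minimal sketch (assuming the labels and container port from the Deployment above) would look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-inference
spec:
  type: LoadBalancer         # GKE provisions an external load balancer for this
  selector:
    app: flask-inference     # must match the pod labels
  ports:
  - port: 80                 # external port
    targetPort: 8000         # Gunicorn's port inside the pod
```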
Running 16 copies of 1 application will generally need more memory than 1 copy with 16 threads; how much more depends on the application.
In particular, if you load your model into memory and the model itself is large, but your multi-threaded setup can share a single copy of it, 1 large pod could use significantly less memory than 16 small pods. If the model is COPYed directly into the Docker image and the application code mmap()s it then you'd probably get to share memory at the kernel layer.
If the model itself is small and most of the memory is used in the processing, it will still use "more" memory to have multiple pods, but it would just be the cost of your runtime system and HTTP service; it shouldn't substantially change the memory required per thread/task/pod if that isn't otherwise shared.
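To make the shared-model scenario concrete, here is a sketch of a gunicorn.conf.py, assuming the model is loaded at import time in your Flask module; the port and worker/thread counts are placeholders to tune:

```python
# gunicorn.conf.py -- a sketch; tune workers/threads to your CPU budget.
bind = "0.0.0.0:8000"
worker_class = "gthread"   # threaded workers: all threads share the in-process model
workers = 1                # one process per pod keeps a single copy of the model in RAM
threads = 8
preload_app = True         # if you do run several workers, load the app (and model)
                           # before forking so read-only pages can be shared
                           # copy-on-write at the kernel layer
```

Whether copy-on-write sharing actually holds up for a model living on the Python heap depends on how the objects are touched after the fork (reference counting can dirty the pages), so it is worth measuring rather than assuming.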

Related

Typical resource request required for an nginx file explorer deployed on kubernetes

I have two NFS mounts of 100 TB each, i.e. 200 TB in total, and I have mounted both of them into a Kubernetes container. My file server is a typical log server that holds a mix of data types like JSON, HTML, images, logs, text files, etc. The file sizes also vary a lot. I am essentially guessing what the ideal resource request for this Kubernetes container should be. My assumptions:
As file reads are I/O-intensive operations, the CPU request should be high.
Since large files may be transferred over, the memory request should also be high.
I just wanted to check whether my assumptions are right.
Posting this community wiki answer to set a baseline and to show one possible set of actions that should lead to a solution.
Feel free to edit and expand.
As I stated previously, this setup will depend heavily on the specific case, and giving an approximate figure could be misleading. In my opinion the best course of action would be to:
Install monitoring tools
Deploy the application for testing
Simulate the load
Install monitoring tools
There are a lot of monitoring tools that can retrieve the data about the CPU and Memory usage of your Pods. You will need to choose the one that suits your workloads and infrastructure best.
Some of them are:
Prometheus.io
Elastic.co
Datadoghq.com
Deploy the application for testing
This is also quite a broad topic, considering that the exact requirements and infrastructure are not known. One of many questions is whether the Deployment should have a fixed replica count or use some kind of Horizontal Pod Autoscaling (based on CPU and/or memory); a sketch of such an autoscaler is shown after the links below. The access modes on the storage shouldn't matter, as NFS supports RWX.
The basic implementation of the Deployment that could be used can be found in the official Kubernetes documentation:
Kubernetes.io: Docs: Concepts: Workloads: Controllers: Deployment: Creating a deployment
Kubernetes.io: Docs: Concepts: Storage: Volumes: NFS
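If you go the autoscaling route, a minimal HorizontalPodAutoscaler sketch could look like the following; the Deployment name, replica bounds, and CPU threshold are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: file-explorer-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: file-explorer            # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # scale out above ~70% average CPU usage
```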
Simulate the load
The simulation part could be done either with real-life usage or by using a tool to simulate the load. In this part you would need to choose the option/tool that best suits your requirements. This step will show you the approximate resources that should be allocated to your nginx file explorer.
A side note!
In my testing I've used ab to check whether the load was divided equally across X replicas.
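For reference, a run like the following (the URL and the numbers are placeholders) is enough to generate a steady stream of requests and then compare per-pod metrics in your monitoring tool:

```sh
# 10000 requests, 50 concurrent, against the service's external IP (placeholder)
ab -n 10000 -c 50 http://<service-ip>/
```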
Additional resources
I recommend checking the guide in the official Kubernetes documentation on managing resources:
Kubernetes.io: Docs: Concepts: Configuration: Manage resources containers
I also think that the VPA could help you in the whole process as:
Vertical Pod Autoscaler (VPA) frees the users from necessity of setting up-to-date resource limits and requests for the containers in their pods. When configured, it will set the requests automatically based on usage and thus allow proper scheduling onto nodes so that appropriate resource amount is available for each pod. It will also maintain ratios between limits and requests that were specified in initial containers configuration.
It can both down-scale pods that are over-requesting resources, and also up-scale pods that are under-requesting resources based on their usage over time.
-- Github.com: Kubernetes: Autoscaler: Vertical Pod Autoscaler
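A minimal VerticalPodAutoscaler sketch (the target name is a placeholder, and the VPA components must be installed in the cluster) could be:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: file-explorer-vpa          # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: file-explorer            # hypothetical Deployment name
  updatePolicy:
    updateMode: "Off"              # "Off" only produces recommendations;
                                   # switch to "Auto" to have them applied
```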
I reckon you could also look at this answer:
Stackoverflow.com: Answers: PromQL query to find CPU and memory used for the last week

Load balancing on same server

I've been researching Kubernetes and saw that it does load balancing on the same node. If I'm not wrong, one node means one server machine, so what good is load balancing on the same server machine, since it will use the same CPU and RAM to handle the requests? At first I thought that load balancing would be done on a separate machine to share CPU and RAM resources. So I want to know the point of doing load balancing on the same server.
Just because you can do it on one node doesn't mean that you should, especially in a production environment.
A production cluster will have at least 3 or 5 nodes.
Kubernetes will spread the replicas across the cluster nodes to balance node workload, so the pods end up on different nodes.
You can also configure on which nodes your pods land:
use advanced scheduling with pod affinity and anti-affinity (see the sketch below),
or plug in your own scheduler that will not allow placing replica pods of the same app on the same node.
You then define a Service to load-balance across the pods on the different nodes,
and kube-proxy will do the rest.
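As a sketch of the anti-affinity approach, the pod template of your Deployment could include a fragment like this; the `app: my-app` label is hypothetical and should match your own pod labels:

```yaml
# Fragment of spec.template.spec: keep replicas of the same app off the same node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app              # hypothetical label
      topologyKey: kubernetes.io/hostname
```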
here is a useful read:
https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7
So you generally need to choose a level of availability you are comfortable with. For example, if you are running three nodes in three separate availability zones, you may choose to be resilient to a single node failure. Losing two nodes might bring your application down, but the odds of losing two data centres in separate availability zones are low.
The bottom line is that there is no universal approach; only you can know what works for your business and the level of risk you deem acceptable.
I guess you mean how Services do automatic load balancing. Imagine you have a Deployment with 2 replicas on your one node and a Service. Traffic to the Pods goes through the Service, so if it were not load balancing then everything would go to just one Pod and the other Pod would get nothing. By spreading the load evenly you can handle more of it, and you can still be confident that traffic will be served if one Pod dies.
You can also load-balance traffic coming into the cluster from outside, so that the entrypoint to the cluster isn't always the same node. But that is a different level of load balancing. Even with one node you may still want load balancing for the Services within the cluster. See Clarify Ingress load balancer on load balancing of the external entrypoint.

Custom TCP proxy for high availability cluster

I'm in a high-availability project which includes deployment of a 2-node high-availability cluster for hot replacement of services (applications) running on the cluster nodes. The applications have inbound and outbound TCP connections and also process UDP traffic (mainly for communicating with an NTP server).
The problem is pretty standard until one needs to provide hot migration of services to the backup node with all the data stored in RAM. The applications are agnostic of backup mechanisms, and it is highly undesirable to modify them.
As the only approach to this problem, I've come up with a duplication approach, assuming that both cluster nodes will run the same applications, repeating each other's calculations. In case of failure of the primary server, the backup server becomes the primary.
However, I have not found any ready-made proxy solution that offers synchronous port mirroring. No existing proxy servers (haproxy, dante, 3proxy etc.) support such a feature as far as I know. Have I missed something, or should I write a new one from scratch?
A rough sketch of the functionality can be found here:
p.s. I assume that it is possible to compare traffic from the two clones of the same application...

How many Tornado Instances and How many Nginx Worker Processes

Suppose I am running a web application using Tornado, behind Nginx as a load balancer. Please tell me the best practices for a few things.
1. If I am running the service on an AWS EC2 instance, how many Nginx worker processes should I run for a given number x of vCPUs on that instance? Let's say I am running on an EC2 instance with 2 vCPUs; how many worker processes should I run? It would be better if I knew the general rule for it. Also, under what conditions should I increase the number of workers beyond the general rule?
2. After I set up Nginx as the load balancer, it boils down to my Tornado application. How many Tornado instances should I run for x vCPUs on an EC2 instance? As mentioned in the docs, it's good to have 1 instance per processor, but is that the best setup? If yes, then in what scenario should I look at increasing the number of instances per processor? If not, what is the best rule?
NOTE: I am running the instances via Supervisord as my process management program.
3. Now, if my application makes a lot of async calls to a MySQL database and a MongooseIM server, all running on the same host, should the number of Tornado instances per processor be changed? If yes, what is the rule? If not, what is the best practice?
If you are running nginx on a machine by itself, then you should give it as many worker processes as you have CPUs. If you're running it on the same machine as Tornado then you probably want to give it fewer (maybe just one). But it's better to be too high than too low here, so if you're unsure it's fine to use the number of CPUs. You'll want more nginx workers if you're using TLS (especially with stronger security settings) or serving a lot of static files, and fewer if it's just a proxy to Tornado.
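As a sketch, the relevant knob is just the worker count in nginx.conf; `auto` matches the number of CPUs, or you can pin it down when nginx shares the box with Tornado:

```nginx
# nginx.conf (fragment): worker count is the main setting discussed above
worker_processes auto;   # = number of CPUs; consider 1 if co-located with Tornado
events {
    worker_connections 1024;
}
```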
One Tornado instance per CPU is the best starting point. You might decrease this number if your application does a lot with threads or if there are other things running on the same machine, and you might increase it if you do any synchronous database/network calls without threads.
As long as your database calls are asynchronous, they do not affect how many Tornado processes you should run.
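A minimal sketch of the one-process-per-CPU pattern using Tornado's built-in forking is shown below; the handler and port are placeholders. You can equally run separate single-process instances on different ports under Supervisord and list them in an nginx upstream.

```python
import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("ok")            # placeholder handler

def make_app():
    return tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    server = tornado.httpserver.HTTPServer(make_app())
    server.bind(8000)
    server.start(0)                 # 0 = fork one worker process per CPU
    tornado.ioloop.IOLoop.current().start()
```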

Load balancing using nginx

I have a machine with only nginx installed (without Passenger) that acts as a load balancer, with the IPs of several machines in its upstream list. All the app machines run nginx with Phusion Passenger and serve the main application. Some of the application machines are medium instances while others are large. As far as I know, the default nginx load-balancing scheme is round robin. Since the load is distributed equally among the large and medium machines, the medium machines get overloaded when the traffic is heavy, and the large machines' resources are wasted when it is light. I use New Relic to monitor the CPU and memory on these machines, and a script to fetch the data from New Relic, so is there any way to use this data to decide how the load balancer routes traffic?
One way I know of is to monitor, mark the upstream machines as good or bad, replace the upstream list with the good ones, and reload nginx.conf each time without a complete restart. So my second question: is this approach correct? In other words, does it have any drawbacks or will it cause any issues?
Third, and more generally: is there a better way to tackle this load-balancing issue?
You can use another load-balancing algorithm that will distribute the load more fairly, http://nginx.org/r/least_conn, and/or configure weights.
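For example (the addresses are placeholders), a weighted least_conn upstream sends proportionally more traffic to the large machines and avoids piling requests onto an already busy medium one:

```nginx
# Fragment for the http {} context of the load-balancer's nginx.conf
upstream app_servers {
    least_conn;                     # pick the backend with the fewest active connections
    server 10.0.0.11:80 weight=3;   # large machine (placeholder address)
    server 10.0.0.12:80 weight=1;   # medium machine (placeholder address)
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```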
Making decisions based on current CPU/memory usage isn't a good idea if your goal is faster request processing rather than prettier but meaningless numbers.
