How do Uvicorn workers work, and how many do I need for a slim machine? - fastapi

The application I deploy is FastAPI with Uvicorn under K8s.
While figuring out how to Dockerize the application, I decided I want to run Uvicorn without Gunicorn and to add a mechanism that scales up/down based on the request load the application is getting.
I did a lot of load testing and discovered that with the default of 1 Uvicorn worker I'm getting 3.5 RPS, while with 8 workers I can easily get 22 RPS (I didn't check for more since these are great results for me).
Now, regarding resources, I was expecting that I would have to provide a CPU limit of 8 (I assume every worker runs as one process with one thread), but I only saw an increase in memory usage and barely any in CPU. Maybe that's because the app doesn't use much CPU, but is it even possible for it to use more than 1 CPU? So far it hasn't used more than one.
How do Uvicorn workers work, and how should I calculate the number of workers I need for the app? I didn't find useful information.
Again, my goal is to keep a slim machine of 1 CPU, with an autoscaling system.
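For reference, here is a minimal sketch of how a worker count could be wired up and tied to the CPU count, assuming the FastAPI app is importable as "main:app" (a placeholder module path). The (2 x cores) + 1 heuristic below is just the common Gunicorn-style starting point, not a rule; validate it with load tests like the ones described above.

# run.py - sketch only; "main:app" and the port are placeholders
import multiprocessing

import uvicorn

if __name__ == "__main__":
    cores = multiprocessing.cpu_count()
    # Common starting heuristic: (2 x cores) + 1 workers.
    # On a 1-CPU pod this gives 3 workers; adjust based on load testing.
    workers = (2 * cores) + 1
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=workers)

Each worker is a separate OS process with its own copy of the application (which is why memory grows roughly linearly with the worker count), and the kernel schedules those processes across whatever CPUs are available; if the app is mostly waiting on I/O, total CPU usage can stay well below one core even with 8 workers.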

Related

Gunicorn CPU usage increasing to a very high value

We are using Gunicorn with Nginx. Every time we restart Gunicorn, the CPU usage taken by Gunicorn keeps increasing gradually, climbing from 0.5% to around 85% over 3-4 days. On restarting Gunicorn, it comes back down to 0.5%.
Please suggest what can cause this issue and how to debug and fix it.
Check your workers configuration. Try the following: cores * 2 - 1 (note that the Gunicorn docs suggest (2 x cores) + 1 as a starting point); see the sketch below.
Check your application; it seems that it is blocking/freezing threads. Add timeouts to all API calls, database queries, etc.
You can add APM software to analyze your application, for example Datadog.
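As a rough sketch, that heuristic could be expressed in a gunicorn.conf.py like the one below; the (2 x cores) + 1 variant is the starting point suggested by the Gunicorn docs, and the timeout/max_requests values are illustrative placeholders to tune, not recommendations.

# gunicorn.conf.py - sketch; values are illustrative placeholders
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1
timeout = 30              # restart workers stuck for more than 30s
max_requests = 1000       # recycle workers periodically to contain slow leaks
max_requests_jitter = 50  # stagger recycling so workers don't restart together

Periodic worker recycling (max_requests) is a common way to keep a slow leak like the one described above from accumulating between restarts.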

IIS holding up requests in queue instead of processing those

I'm executing a load test against an application hosted in Azure. It's a cloud service with 3 instances behind an internal load balancer (Hash based load balancing mode).
When I execute the load test, it queues requests even though the req/sec and total current requests to IIS are quite low. I'm not sure what the problem could be.
Any suggestions?
Adding a few screenshots of performance counters which might help you decide.
Edit-1: Per request from Rohit Rajan,
the cloud service has 2 instances (meaning 2 VMs), each of them having 14 GB of RAM and 8 cores.
I'm executing a step load pattern starting with 100 users and adding 100-150 users every 5 minutes, over 4-5 hours, until the load reaches 10,000 VUs.
Any calls to external systems are written async. Database calls are synchronous.
There is no straightforward answer to your question. One possible way forward is to explore additional investigation options.
Based on your explanation, there seems to be a bottleneck within the application which is causing the requests to queue up.
In order to investigate this, collect a memory dump when you see the requests queuing up and then use DebugDiag to run a hang analysis on it.
There are several ways to gather the memory dump.
Task Manager
Procdump.exe
Debug Diagnostics
Process Explorer
Once you have the memory dump, you can install Debug Diagnostics and then run the analysis on it. It will generate a report which can help you get started.
Debug Diagnostics download: https://www.microsoft.com/en-us/download/details.aspx?id=49924

Puma Stops Running for Rails App on EC2 Instance with Nginx (using Capistrano/Capistrano Puma)

My top-level question is, how can I get Puma to stop failing. But that is really made up of lots of smaller questions. I will number and bold each of them, to try to make this question answerable.
I am hosting a Rails application on an EC2 instance that is a t2.nano. This is admittedly, a very small box--but I don't expect my website to receive any traffic. I configured everything successfully with Nginx and Puma using Capistrano and Capistrano Puma. Everything was great, until one day I went to my website and saw the Nginx 504 message.
I opened the Nginx error log and saw that it could not connect to Puma:
connect() to unix:/home/deploy/myapp/shared/tmp/sockets/puma.sock failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: localhost, request: "GET / HTTP/1.0", upstream: "http://unix:/home/deploy/myapp/shared/tmp/sockets/puma.sock:/500.html", host: "myapp.com"
Debugging this, I learned that Puma had stopped running. That is why Nginx could not connect to it. I think there are two problems here: the first is that Puma should not stop running (the server is tiny, but there is no traffic); the second is that when Puma does fail, it should restart gracefully. However, I am just focusing on the first issue for now, because if Puma is constantly restarting, it seems reasonable that sometimes the process gets killed in a harsh way.
To debug this, I opened htop. Sure enough, the machine was running without any memory to spare. This makes sense--I am running a database, rails app, webserver, and memcache on one tiny machine. It keeps running out of memory and killing Puma.
I looked into the Puma configuration I had set up with Capistrano. In config/deploy.rb I had these lines--
set :puma_threads, [0, 8]
set :puma_workers, 0
I read all about puma_workers and puma_threads. I also learned that Nginx has its own workers. Puma processes are very expensive. What makes Puma cool is that it is properly multi-threaded--so the independent processes are awesome. It sounds like each worker has its own set of threads--so if there are 4 workers with 8 threads each, there will be 4 processes running up to 32 threads in total. But in my case, I want to use very little memory. 2 processes sound good to me. 1. Is my understanding of workers and threads correct?
I updated my config/deploy.rb file and deployed, with 0 puma_workers and min=0, max=2 threads.
It appears the configuration for Nginx lives here: /etc/nginx/nginx.conf. And the configuration for Puma lives here: /home/deploy/myapp/shared/puma.rb. I would have expected my updates in config/deploy.rb to have had Capistrano edit the config files. No luck--my min, max threads were still set to 0,8. 2. Is it correct to try and update these values through config/deploy.rb when using Capistrano?
Also--I opened the nginx.conf and saw worker_processes 4;. 3. Was this set to four when I installed Nginx, or did Capistrano set this default?
I opened htop and sure enough I had lots of Puma processes. Therefore, I edited my config files manually and restarted Puma and Nginx.
I changed the number of Nginx workers from 4 to 1. Looking in htop, this worked. I now only had 1 Nginx worker. However, the Nginx workers were never very expensive (compared to the Puma threads). So I don't think this matters much.
However, there were still more than 2 Puma threads--there were 6. On a lark, I changed the minimum number of threads from 0 to 1--thinking 0 isn't a possible number so maybe it's setting a default. This increased the number of Puma processes to 9. I also tried changing the number of puma_workers to 1, for the same reason, and the number of processes increased. 4. What does it mean to have 0 threads and/or workers?
I then tried to kill one of the puma processes manually (sudo kill xxxxx), and then all of the Puma processes died.
5. What do I have to do to have just 2 puma processes?
As you can see, my understanding of Puma is not great, and the lines between what Puma vs Nginx vs Capistrano touches are not clear. Any help is greatly appreciated. I haven't been able to find great resources regarding this issue.
This is what I've learned--
If Puma stops working, make sure you have enough memory to handle the number of workers and threads that you specified. Each Puma process is pretty expensive.
If you set 0 workers, Puma will not run in cluster mode. It is recommended to run MRI using cluster mode.
Threads are set per worker. If you have 2 workers and 0,8 threads, that means you will have two workers and each will have between 1 and 8 threads.
Puma uses processes in addition to the threads. Puma has a PID for the parent process. If you are using cluster mode, it has a PID to manage the clusters. If you are using cluster mode, it also has a PID for each cluster. Then, there are a fixed number of PIDs to run other tasks (per cluster). Without cluster mode, there are 5 fixed PIDs. With cluster mode, there are 7 fixed PIDs.
This is all to say--if you see more processes than you expect, this is why. Also--when you add a new worker, you add a significant number of expensive processes. Make sure you have the space.
I have a small app, and things seem to be working nicely with 1 worker and min=1, max=4 threads. Having a max of 8 threads looks to be what kept killing Puma for me.
To answer my original questions--
Yes, the explanation above of workers and threads is correct.
capistrano-puma appears to only set puma config with the first deploy.
I think the nginx config is created when nginx is installed.
0 workers means you are running puma without cluster mode. It is impossible to have 0 threads. I believe 0,8 is the same as 1,8.
Puma needs to run processes in addition to the threads you request. It is impossible to have Puma running with only 2 or 3 PIDs. These extra processes run additional tasks.
A suspect for Puma hangs
The thing with Puma is that it's the only mainstream project that encourages the use of threading in MRI Ruby (well, Heroku encourages it, anyway).
This is why we sometimes see statements from people working on Puma about how people think that Puma has various kinds of issues, while the problem is actually elsewhere--it just happens to affect only Puma :P
"We" have discovered and fixed in the past some very freaky and nasty Ruby GC issues under heavy use of threads in Ruby MRI, with some freaky corner cases (remember http://blog.skylight.io/hunting-for-leaks-in-ruby/), and who is to say this is not the last of such freaky issues that people attribute to Puma?
Try disabling threading for a while, see how it goes, and let us know--maybe the rabbit lies there, again.
Docs explaining threads vs clustered mode vs workers
Thread pool docs: https://github.com/puma/puma#thread-pool
Clustered mode docs: https://github.com/puma/puma#clustered-mode
puma.rb options: https://github.com/puma/puma/blob/master/examples/config.rb
Under Thread pool the docs explain how to set up the number of worker threads. Remember, Puma is/was primarily a JRuby thing, and MRI support & forking were added only later as an afterthought; the ordering of configuration entries in the docs (how to set up threading before how to set up forking) is a consequence of this.
The docs state:
Puma utilizes a dynamic thread pool which you can modify. You can set the minimum and maximum number of threads that are available in the pool with the -t (or --threads) flag:
Puma 2 offers clustered mode, allowing you to use forked processes to handle multiple incoming requests concurrently, in addition to threads already provided.
Meaning, Puma will always thread; it's what it does. If you tell it to use 0/1 threads, it will still run 1 thread so it can serve requests.
Additionally, if you set the number of workers (processes) to > 1, Puma will run in "Clustered mode", which means it will fork and each fork will thread,
i.e. -w 3 -t4:4 will result in 3 processes running 4 threads each, allowing you to concurrently serve 12 requests.
Puma docs don't specify which and how many processes Puma will use for its internals, but an educated guess is that at the very minimum it needs to run all of the workers + 1 master process to manage them, deliver data to them, start them, stop them, channel their logs, etc.

How many Tornado Instances and How many Nginx Worker Processes

Suppose I am running a web application using Tornado and running them behind Nginx as a Load Balancer. Please tell me the best practices for certain things.
1. If I am running the service on an AWS EC2 instance, how many Nginx worker processes should I run for a given x number of vCPUs on any particular instance? Let's say I am running on an EC2 instance with 2 vCPUs, then how many worker processes should I run? It would be better if I knew the general rule for it. Also, under what conditions should I increase the number of workers beyond the general rule?
2. Now after I set up Nginx as the load balancer, it boils down to my Tornado application. So, how many Tornado instances should I run given x number of vCPUs in an EC2 instance? As mentioned in the docs, it's good to have 1 instance per processor, but is that the best configuration? If yes, then in what scenario should I look at increasing the number of instances per processor? If not, then what is the best rule?
NOTE: I am running the instances via Supervisord as my process management program.
3. Now if my application does a lot of async calls to a MySQL database and a MongooseIM server, all running on the same host, should the number of Tornado instances per processor be changed? If yes, then what is the rule? If not, then what is the best practice?
If you are running nginx on a machine by itself, then you should give it as many worker processes as you have CPUs. If you're running it on the same machine as Tornado then you probably want to give it fewer (maybe just one). But it's better to be too high than too low here, so if you're unsure it's fine to use the number of CPUs. You'll want more nginx workers if you're using TLS (especially with stronger security settings) or serving a lot of static files, and fewer if it's just a proxy to Tornado.
One Tornado instance per CPU is the best starting point. You might decrease this number if your application does a lot with threads or if there are other things running on the same machine, and you might increase it if you do any synchronous database/network calls without threads.
As long as your database calls are asynchronous, they do not affect how many Tornado processes you should run.
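As a rough sketch of the "one Tornado process per CPU" advice, here is one way to do it with Tornado's own fork helper (the handler and port are placeholders; running separate Supervisord-managed processes on different ports behind Nginx achieves the same effect):

# serve.py - sketch of one Tornado process per CPU behind an Nginx proxy
import tornado.httpserver
import tornado.ioloop
import tornado.netutil
import tornado.process
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello")

def make_app():
    return tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    sockets = tornado.netutil.bind_sockets(8000)  # bind the port before forking
    tornado.process.fork_processes(0)             # 0 = fork one child per CPU
    server = tornado.httpserver.HTTPServer(make_app())
    server.add_sockets(sockets)
    tornado.ioloop.IOLoop.current().start()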

How come my app dies on AppHarbor?

I have a web app that will run forever (at least for a few days) on my local machine using the technique (hack?) described in Jeff Atwood's post: https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
However, when I run it on AppHarbor, my app doesn't run for more than an hour or so (I'm not sure exactly when it dies). As long as I hit the site it stays up, so I'm assuming it is being killed after an idle period, but I'm not sure why.
My app doesn't save any state or persist anything. It makes web service calls and survives errors in any calls.
I turned on a ping service to keep my app alive, but I'm curious why this works on my local machine but not on AppHarbor.
The guys behind AppHarbor pay for EC2 instances for all running apps, so they naturally want to limit CPU usage as much as possible. One way to achieve this is to shut down unused applications very quickly and only restart them when someone actually tries to access them. Paid hosting should not be limited in this way.
(As far as I have been informed they are able to host around 100k sites on less than twenty medium instances which is certainly quite impressive and calls for a very economic use of resources.)
To overcome the limitation you would need a cron job to ping your AppHarbor site. But this is of course a somewhat recursive problem, since you need AppHarbor to act as a cron job ;)
AppHarbor recycles the application pool frequently to keep sleeping websites from using idle CPU time. This is simply the price you pay for using a shared website hosting plan.
If you really want to run a background job then you should be using AppHarbor's background workers, since this is exactly the type of task they were built to run.
http://support.appharbor.com/kb/getting-started/background-workers
Simply build a new console application that runs your logic and include it in your solution. When you push the code, the workers will be started automatically. If you already have other exes in your solution, make sure to edit the app.config and set the 'deploy background worker' value to false.
