How to make a UNIX socket faster? - nginx

I'm running a Google Cloud Compute VM as my application server for an app that's available on iOS and Android. The server runs Django within uWSGI, fronted with nginx. The communication between uWSGI and nginx happens through a unix file socket.
Recently I started noticing timeouts at the client end. After some experimentation, I found that uWSGI sometimes errors out while writing data to the file socket. For example, a sample request that returns about 200KB of JSON data takes about 1 second for Django to compute, but the UNIX socket seems to take another 1-2 seconds, which seems too high for a 200KB response. If the client expects a response within 2 seconds, this often leads to a write error at uWSGI (as shown in the screenshot below). When I increase the 'max-time' parameter at the client end, the request goes through smoothly.
I want to know whether there are configuration changes that can make reading and writing on a UNIX socket faster. 200KB is a fairly small JSON response for my server, so I won't be able to bring it down, and for business reasons I can't set a timeout of more than 2 seconds on the client (iOS or Android).

Several UNIX entities, such as pipes and sockets, are represented by files but are not files at all.
Writing to and reading from a UNIX socket is therefore not bound to filesystem I/O and does not share filesystem response times. In fact, a UNIX socket is one of the fastest IPC mechanisms, more efficient than a TCP socket, since it does not go through the network stack at all.
That said, here are some hints on how to solve your particular problem:
Evaluate your app for performance issues. Profile it and check where it spends most of its time; I/O is usually the main culprit, and bad algorithms such as linear searches over long lists are also common offenders (see the profiling sketch after these hints).
Check the configuration of both your web server and your application gateway.
Check process scheduling. If everything runs on the same box, process concurrency may become an issue under heavy load. Make sure all processes run with appropriate priorities.
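For the first hint, here is a minimal profiling sketch (my own illustration, not from the original answer): wrapping a suspect Django view with cProfile to see where the 1 second of compute time goes. The view name my_view is just a placeholder.

import cProfile
import io
import pstats

def profile_view(view_func):
    """Decorator that profiles a single view and prints the hottest call paths."""
    def wrapper(request, *args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        response = view_func(request, *args, **kwargs)
        profiler.disable()
        buffer = io.StringIO()
        stats = pstats.Stats(profiler, stream=buffer)
        stats.sort_stats("cumulative").print_stats(20)  # top 20 by cumulative time
        print(buffer.getvalue())
        return response
    return wrapper

# Usage (hypothetical view name):
# @profile_view
# def my_view(request):
#     ...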
Good luck!

Related

How to send 50,000 HTTP requests in a few seconds?

I want to create a load test for a feature of my app. It uses Google App Engine and a VM. Users send HTTP requests to the App Engine, and it's realistic that it gets thousands of requests in a few seconds. So I want to create a load test where I send 20,000-50,000 requests in a timeframe of 1-10 seconds.
How would you solve this problem?
I started by trying Google Cloud Tasks, because it seems perfect for this: you schedule HTTP requests for a specific point in time. The docs say there is a limit of 500 tasks per second per queue; if you need more tasks per second, you can split the tasks across multiple queues. I did this, but Google Cloud Tasks does not execute all the scheduled tasks at the given time - one queue needs 2-5 minutes to execute 500 requests that are all scheduled for the same second :thinking_face:
I also tried a TypeScript script running asynchronous node-fetch requests, but 5,000 requests take 77 seconds on my MacBook.
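For reference, here is roughly what that kind of client-side burst looks like in Python with asyncio (my own sketch, not from the thread; it assumes the third-party aiohttp package, the URL is a placeholder and the connection cap is arbitrary):

import asyncio
import time
import aiohttp

TARGET_URL = "https://example.com/endpoint"   # placeholder
TOTAL_REQUESTS = 5000
MAX_CONCURRENCY = 500   # a single machine hits OS/socket limits well before 50k

async def fire(session):
    try:
        async with session.get(TARGET_URL) as resp:
            await resp.read()
            return resp.status
    except aiohttp.ClientError:
        return None

async def main():
    connector = aiohttp.TCPConnector(limit=MAX_CONCURRENCY)  # cap open connections
    start = time.monotonic()
    async with aiohttp.ClientSession(connector=connector) as session:
        results = await asyncio.gather(*(fire(session) for _ in range(TOTAL_REQUESTS)))
    elapsed = time.monotonic() - start
    ok = sum(1 for status in results if status == 200)
    print(f"{ok}/{TOTAL_REQUESTS} succeeded in {elapsed:.1f}s")

asyncio.run(main())

Even so, a single laptop runs into OS socket and bandwidth limits long before 50,000 requests in a few seconds, which is why the answers below point to dedicated load-testing tooling run inside GCP.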
I don't think you can get 50,000 HTTP requests "in a few seconds" out of "your macbook"; it's better to consider a dedicated load-testing tool (which can be deployed onto a GCP virtual machine in order to minimize network latency and traffic costs).
The choice of tool is up to you: either you need a machine type powerful enough to send 50k requests "in a few seconds" from a single virtual machine, or the tool needs to support a clustered mode so you can spin up several machines and have them send the requests together at the same moment.
Given you mention TypeScript, you might want to try the k6 tool (it doesn't scale, though), or check out Open Source Load Testing Tools: Which One Should You Use? to see what the other options are; none of them provides a JavaScript API, although several don't require programming knowledge at all.
A tool you could consider using is siege.
It is Linux-based, and running it from inside GCP avoids the additional cost of testing from an outside system.
You could deploy siege on one relatively large machine, or on a few machines, inside GCP.
It is fairly simple to set up, but since you mention that you need 20-50k requests in a span of a few seconds: siege by default only allows 255 concurrent users. You can raise this limit, though, so it fits your needs.
You would need to play around with how many connections a machine can establish, since each machine will have a certain limit based on CPU, memory and the number of network sockets. You could just increase the -c number until the machine gives an "Error: system resources exhausted" or something similar. Experiment with what your virtual machine on GCP can handle.

uWSGI and Flask: keep objects in memory between requests

My stack is currently uWSGI, Flask and nginx. I need to store data between requests (basically, the server receives push notifications about events from another service, and I want to keep those events in server memory so the client can just poll the server every n milliseconds to receive the latest update).
Normally this would not work, for many reasons. One is that a good deployment requires running several uWSGI processes in production (and maybe even several machines to scale out). But my case is very specific: I'm building a web app for a piece of hardware (think of your home router's configuration page), so there is no need to scale. I also do not have a database (at least not a traditional one) and normally only 1-2 simultaneous clients.
If I specify --processes 1 --threads 4 in uWSGI, is that enough to ensure the data is kept in memory as a single instance, or do I also need to use --threads 1?
I'm also aware that some web servers clear memory from time to time and restart the hosted app. Does nginx/uWSGI do that, and where can I read about the rules?
I'd also welcome advice on how to design all of this, if there are better ways to handle it. Please note that I'm not considering any persistent storage for this - it isn't worth the effort and may even be impossible due to hardware limitations.
Just to clarify: when I talk about one instance of the data, I mean app.py executing exactly once and keeping the objects defined there alive for as long as the server lives.
If you don't need the data to persist past a server restart, why not just build a cache object into your application that can do push and pop operations?
A simple array of objects should suffice: one Flask route pushes new data onto the array and another pops the data off it.
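A minimal sketch of that suggestion (my own code, not from the answer; it assumes a single uWSGI process, the route names are placeholders, and the lock is only there in case you keep --threads 4):

import threading
from flask import Flask, jsonify, request

app = Flask(__name__)

# Module-level state: with --processes 1 this exists exactly once,
# for as long as the uWSGI worker lives.
events = []
events_lock = threading.Lock()  # protects the list if --threads 4 is used

@app.route("/events", methods=["POST"])
def push_event():
    with events_lock:
        events.append(request.get_json())
    return "", 204

@app.route("/events", methods=["GET"])
def pop_events():
    with events_lock:
        latest = list(events)
        events.clear()
    return jsonify(latest)

With --processes 1 there is exactly one copy of the events list for the lifetime of the worker; with more than one process each worker would hold its own copy, which is why the single-process constraint matters here.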

What does it mean when a new process is started for every request sent to CGI?

I'm trying to understand servlets and the advantages they offer over CGI. It was mentioned that in CGI a new process is started for every request, and that this is slow compared to a servlet. Can someone explain what exactly a process is here and how a servlet is beneficial over CGI?
A CGI program can be thought of as a normal executable: it is run, does something, then ends, like a DOS or shell command. The issue is that there is some overhead in starting such an executable, with the operating system allocating memory, loading the program into memory, running it, then deallocating everything. If you're running a website with many hundreds of requests per second, this overhead can become significant, with potentially many copies of the CGI program ending up in memory should many concurrent HTTP requests hit the server.
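To make the "normal executable" point concrete, here is a minimal CGI script sketched in Python (my own illustration, not from the answer); the web server spawns a fresh copy of this process for every single request and tears it down afterwards:

#!/usr/bin/env python3
# A complete CGI "application": started by the web server per request,
# it prints an HTTP response to stdout, then exits.
import os

print("Content-Type: text/plain")
print()  # blank line separates headers from the body
print("Handled by process", os.getpid())
print("Query string:", os.environ.get("QUERY_STRING", ""))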
Servlets, on the other hand, have their resources allocated just once, for a single instance held in memory. That single instance can process many HTTP requests concurrently, sharing its allocated resources between all requests. This can be an issue - instance and static variables may be corrupted if two requests access them at the same time - but the advantages in efficiency and speed far outweigh it.

Can uploads be too fast?

I'm not sure if this is the right place to ask this, but I'll do it anyway.
I have developed an uploader for the Umbraco CMS that lets people upload a queue of files in one go. This uses a simple Flash app that calls a .NET .ashx handler to upload the files one at a time; when one is done, the next one starts.
Recently I've had a user hit a problem where 1 or 2 uploads go up fine, but then the rest fail. This happens both for him and for a client of his. After some debugging, he thinks he's found the problem, but it seems weird, so I was wondering if anyone else has had this issue.
Both he and his client are on fibre-optic broadband connections, so they have really fast upload speeds. When it was tested on a slower broadband connection, all the files uploaded without a problem. According to one of his developer friends, they had come across this before and had to put a slight delay in the upload script to make it work.
Does this sound possible? Has anyone else hit this problem? Is there a known workaround to prevent the uploads from failing?
I have not hit this precise problem before, but I have done a lot of DSL and broadband troubleshooting, so I will do my best to answer.
There are 2 possible causes for this particular symptom, both generally outside of your network control (I would have thought).
1) Packet loss
Where some links receive a very high volume of traffic, they can choose to simply drop data (e.g. anything over the link's configured maximum), but TCP/IP should be handling that, and it expects that sort of loss from time to time, so this seems less likely.
2) The receiving server
There may be HTTP bottlenecks into that server, or the receiving server's CPU, RAM, etc. may be at capacity.
From a troubleshooting perspective, even if these symptoms shouldn't (in theory) exist, the fact that they do - and that you have a specific, reproducible case - means it is worth digging into what is actually happening on the wire.
If you really need to understand how it is all working, the next step might be to use a packet sniffer (like Wireshark) to work out at the packet level what exactly is happening.
You can also program directly against the TCP/IP sockets, so you would be working at the lower network layers and seeing the responses, timeouts, etc. yourself.
Also if you control the receiving server, then you can do the same from that end, or at least review the error logs to see what is getting thrown up as a problem.
A really basic method could be to run pathping against the receiving server, if that is possible; it might highlight slow hops on the way to the server, or packet loss between your local machine and the end server.
The upshot? Put a slow-down delay into the upload code, and that should at least make it work.
Get in touch if you need any analysis of the WireShark stuff.
I have run into a similar problem with an MVC2 website using Flash uploader and Firefox. The servers were load balanced with a Big-IP load balancer. What we found out in debugging this is that Flash, in Firefox, did not send the session ID on continuation requests and the load balancer would send continuation requests off to another server. Because the user had no session on the new server, the request failed.
If a file could be sent in one chunk, it would upload fine. If it required a second chunk, it failed. Because of this the upload would fail after an undetermined number of files being uploaded.
To fix it, I wrote a Silverlight uploader.

Long-running PHP script causes NGINX server to get very busy

I'll try to be very specific on this - it won't be easy, so please try to follow.
We have a PHP script running on NGINX with PHP-FPM (FastCGI).
This script takes information from the user accessing it and runs some algorithm in real time. It cannot be a scheduled background process.
Sometimes the page even takes 5-12 seconds to load - and that's OK.
Generally, we collect data from the user, make several outgoing requests to third-party servers, collect the data, analyse it and return a response to the user.
The problem is,
There are many users running this script, and the server gets very busy, since they are all active connections on the server waiting for a response.
We have 2 servers running under 1 load balancer, and that's not enough.
Sometimes the servers have more than 1,500 active connections at a time. You can imagine how these servers respond under that load.
I'm looking for a solution.
We can add more and more servers to the LB, but it just sounds absurd that it's the only solution there is.
We went over that script and optimized it as much as possible, I can promise you that -
There is no real way around the script's long run time, since it depends on third-party servers that take time to respond on live traffic.
Is there a solution you can think of to keep this script as it is, but somehow lower the impact of these active connections on the servers' overall functioning?
Sometimes, they simply stop responding.
Thank you very much for reading!
A 3-month-old question, I know, but I can't help thinking that:
If you're sure that the total network work for all requests to the third-party servers, plus the corresponding processing of the responses inside your PHP script, is well below the limits of your hardware, then your PHP script is likely busy-looping inefficiently until all the responses come back from the third-party servers.
If I were dealing with such an issue, I would:
Stop using your custom external C++ curl thing, as the PHP script is busy-waiting for it anyway.
Google and read up on non-busy-looping usage of PHP's curl_multi implementation.
Hope this makes sense.
My advice is to set strict timeouts on the requests and to send each third-party request asynchronously.
For example, say your page has to display the results of 5 third-party requests. That means the script calls cURL or file_get_contents 5 times and blocks on each third-party response in turn, step by step. If you have to wait 10 seconds for each response, you'll wait 50 seconds in total.
User calls the script -> script waits for everything to finish -> server is loaded for 50 seconds
Now, if each third-party request is sent asynchronously, the script's load time drops to the longest single request delay. You'll have several smaller scripts with shorter lifetimes, which decreases the load on the server.
User calls the script -> script loads -> requests are sent concurrently -> no scripts sit waiting for responses and consuming your server's resources
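The answer is about PHP (where curl_multi covers this), but the idea is language-agnostic. Here is a rough Python sketch of the same shape, with placeholder URLs, using a thread pool and a hard per-request timeout:

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Placeholder third-party endpoints
URLS = [
    "https://api.example-one.com/data",
    "https://api.example-two.com/data",
    "https://api.example-three.com/data",
    "https://api.example-four.com/data",
    "https://api.example-five.com/data",
]

def fetch(url):
    try:
        with urlopen(url, timeout=10) as resp:  # hard per-request timeout
            return resp.read()
    except Exception:
        return None  # a slow or dead third party no longer stalls the others

# All 5 requests are in flight at once, so the total wait is roughly the
# slowest single response (at most the 10s timeout), not the sum of all 5.
with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
    results = list(pool.map(fetch, URLS))

In PHP the same shape is achieved with curl_multi_init / curl_multi_exec, or the waiting can be pushed to the client with AJAX.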
May the AJAX be with you! ;)
This is a very old question, but since I had a similar problem I can share my solution. Long-running scripts stress various parts of the system: the web server (active connections), PHP-FPM, and MySQL or other databases. That tends to cause knock-on effects, such as other requests starting to fail.
Firstly make sure you have netdata (https://github.com/netdata/netdata) installed on the server. If you are running many instances you might find having a Grafana/Prometheus setup is worth it too.
Next, make sure it can see the PHP-FPM processes, MySQL and Nginx. Netdata shows many, many things, but for this problem my key metrics were:
Connections (mysql_local.connections) - is the database full of connections
PHP-FPM Active Connections (phpfpm_local.connections) - is PHP failing to keep up
PHP-FPM Request Duration (phpfpm_local.request_duration) - is the time to process going through the roof?
Disk Utilization Time (disk_util.sda) - this shows if the disk cannot keep up (100% = bad under load)
Users Open Files (users.files)
Make sure that you have sufficient file handles (https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/) and that the disk is not fully utilized. Either of these will stop things from working, so raise the limits generously on the troublesome server.
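As a quick, Linux-only way to check whether a particular worker process is close to its file-handle limit, here is a small helper of my own (the PID argument is whatever php-fpm or nginx worker you want to inspect):

import os
import sys

def fd_usage(pid):
    """Return (open_fds, soft_limit) for a process, read from /proc (Linux)."""
    open_fds = len(os.listdir(f"/proc/{pid}/fd"))
    soft_limit = None
    with open(f"/proc/{pid}/limits") as f:
        for line in f:
            if line.startswith("Max open files"):
                soft_limit = int(line.split()[3])
                break
    return open_fds, soft_limit

if __name__ == "__main__":
    pid = int(sys.argv[1])  # e.g. the PID of a php-fpm or nginx worker
    used, limit = fd_usage(pid)
    print(f"PID {pid}: {used} open fds of {limit} allowed")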
Next check Nginx has enough resources in nginx.conf:
worker_processes auto;
worker_rlimit_nofile 30000;
events {
    worker_connections 768;
}
This will give you time to work out what is going wrong.
Next look at php-fpm (/etc/php/7.2/fpm/pool.d/www.conf):
set pm.max_spare_servers to a high value, such as 100
set pm.max_requests = 500 - just in case you have a script that doesn't free its memory properly.
Then watch. The problem for me was that every request blocks an incoming connection, and more requests to the same script block more connections. The machine can be operating fine, but a single slow script doing a curl hit or a slow SQL statement holds its connection for the entire duration, so 30 seconds = 1 less PHP process available to handle incoming requests. Eventually you hit 500 and run out. If you can, increase the number of FPM processes to match the frequency of slow-script requests multiplied by the number of seconds they run for. So if the script takes 2 seconds and gets hit 2 times a second, you will need a constant 4 additional FPM workers tied up doing nothing.
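The sizing rule from that paragraph, written out with the example's numbers (not real measurements):

# Workers permanently tied up by a slow endpoint =
#   (requests per second to it) x (seconds each request runs)
slow_script_duration_s = 2.0   # from the example above
slow_script_rate_per_s = 2.0   # from the example above

extra_fpm_workers = slow_script_rate_per_s * slow_script_duration_s
print(f"extra FPM workers needed: {extra_fpm_workers:.0f}")   # prints 4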
If you can do that, stop there - the extra effort beyond this point is probably not worth it. If it still feels like it will be an issue, create a second PHP-FPM instance on the box and send all requests for the slow script to that new instance. This lets you fail those requests in isolation when they run too long, and gives you the power to do two important things:
Control the amount of resources devoted to the slow script
Ensure that all other scripts are never blocked by the slow script and (assuming the OS limits are high enough) are never affected by its resource limits.
Hope that helps someone struggling under load!
