Nginx + PHP-FPM slow with many concurrent AJAX requests

I switched from the common LAMP stack (Linux + Apache + MySQL + PHP) to nginx + PHP-FPM, mostly for the speed. The increase is incredible: I haven't measured it precisely, but for a project using both Zend Framework 1 (old libraries) and Zend Framework 2 (new apps) on the backend, with Bootstrap + CoffeeScript + Backbone.js on the frontend, the site renders 2 to 3 times faster!
The only drawback shows up on pages that issue many concurrent AJAX requests. Most of the time a page makes up to 5 different AJAX requests to load data on render, but a few pages need 10 to 20 concurrent requests. In those cases rendering is 2 to 4 times slower than on Apache (the comparison can currently only be made between two different servers, and the one running Apache is older and slower overall, yet it renders pages with many concurrent AJAX requests much faster).
This is my PHP-FPM configuration (the relevant process manager settings):
pm = dynamic
pm.max_children = 20
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
Increasing pm.max_children to 40 doesn't seem to have any influence on the speed, although raising it from the default of 5 to the current 20 did give a noticeable improvement.
I have also increased nginx's worker_processes to 4 (the number of cores) while keeping worker_connections at the default value of 1024.
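For reference, this is roughly how those settings look in nginx.conf; the listen port, document root, and PHP-FPM socket path below are placeholders and should match your actual setup:

worker_processes 4;            # one worker per CPU core

events {
    worker_connections 1024;   # default per-worker connection limit
}

http {
    server {
        listen 80;                       # placeholder
        root /var/www/project/public;    # placeholder

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # socket path is an assumption; match the listen = ... line in your pool config
            fastcgi_pass unix:/var/run/php-fpm.sock;
        }
    }
}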
Is there anything else I should change to make pages with many concurrent AJAX requests render faster?

Related

Apache random slow image loading

I have a weird issue on my sites where some images load slowly. I have caching in place (Cloudflare caching, Brotli compression enabled); this question refers to the first "uncached" load. All of the images have been compressed as far as possible.
I'm wondering why some of the images have such a delay on the first load and if there's anything I can do to fix it.
Here's the network result from a site I didn't have cached.
As you can see, it doesn't seem to matter how large the images are. Some larger ones load faster, while some smaller ones are delayed.
The Apache Global Configuration settings are as follows (defaults; a sketch of the equivalent httpd.conf directives follows the list):
Start Servers 5
Minimum Spare Servers 5
Maximum Spare Servers 5
Server Limit 256
Max Request Workers 150
Max Connections Per Child 10000
Keep-Alive On
Keep-Alive Timeout 5
Timeout 300
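For readers who prefer raw Apache configuration, this is a sketch of how those cPanel labels map onto httpd.conf directives, assuming the prefork MPM that the "Spare Servers" wording suggests (cPanel manages these values itself; the mapping is shown only for reference):

# prefork MPM assumed
StartServers            5
MinSpareServers         5
MaxSpareServers         5
ServerLimit             256
MaxRequestWorkers       150
MaxConnectionsPerChild  10000
KeepAlive               On
KeepAliveTimeout        5
Timeout                 300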
Is there some configuration needed to make all of these images resolve quickly? The CPU usage when loading my sites (uncached) is minuscule; it never goes over 1%.
In total, I count 17 images (all under 5kb) loading on this particular site.
I understand that Nginx/LiteSpeed would probably speed up the loading, but this question is strictly about Apache 2.4+, without either of those installed.
DigitalOcean $20 droplet (2x Intel E5-2650 v4 CPUs, 4 GB RAM, 80 GB SSD).
Apache 2.4+/CentOS/cPanel 90.
Edit: Removing the Apache cache headers and relying only on Cloudflare solved the "random delay". But the question remains: why does the first "uncached" load take so long for such small images?

Throughput result in JMeter increases suddenly when using 100 threads

I am load testing my local website, which runs on Nginx with PHP-FPM, using Apache JMeter. I'm trying to do some simple concurrent load testing on it.
Here is my test plan configuration, with 3 thread groups:
Number of threads: 10, 50, 100
Ramp-up period: 0
Loop count: 1
In the test plan itself, I have 5 different pages representing 5 HTTP requests.
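For what it's worth, a plan like this can also be run from JMeter's non-GUI mode, which avoids GUI overhead that can itself distort throughput at higher thread counts (the file names below are placeholders):

# non-GUI run; test plan and results log names are placeholders
jmeter -n -t loadtest.jmx -l results.jtl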
But when I use 100 threads, the throughput (requests/sec) is higher than with 50 threads. I have run this many times and the result is still the same.
What is really happening here? I'm still an amateur with JMeter. Your help would be appreciated.
A common reason for throughput increasing as you increase the load is an increased error rate: failed requests complete quickly but are still counted, so more errors inflate the throughput figure. Check the error percentage in your results before reading anything into that number.
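A hypothetical illustration (the numbers are made up): 100 requests finishing in 4 s gives 100 / 4 = 25 requests/sec; if half of them fail almost immediately and the whole run finishes in 2.5 s, JMeter reports 100 / 2.5 = 40 requests/sec even though only 50 requests actually succeeded.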
Hope this helps.

How to investigate latency of a web server running WordPress

How do I investigate the 2.95 s latency on the very first response from my VPS?
My VPS (2 cores, 4 GB RAM, 100 GB HDD) is hosted with a reputable provider.
My server runs CentOS 6.5, nginx 1.4.4, PHP 5.4.23, MySQL 5.5.35, and WordPress 3.7 with W3 Total Cache. Caching seems to work, and gzip is enabled in the nginx config for all media.
When I look at the Network panel in Chrome DevTools, the very first GET request gets its response in around 2.9 seconds. In other words, HTML generation plus network travel takes 2.9 seconds.
From that first response, the rest of the site loads in the next 2.2 seconds, taking the total time to just over 5 seconds.
A test PHP page that queries the database and renders output shows under 70 milliseconds of latency in that first step.
What's the scope for improvement other than adding CPU cores? Can the server be tuned with some settings, or is this as good as it gets for a page of this complexity (theme, etc.), with nothing left to do but add hardware?
Disk I/O perf: dd reports 1.1 GB copied in 3.5-6 s, i.e. 180-300 MB/s.
PS: I am aware of other SO questions; most of them recommend a caching plugin, an Apache mod setting, etc. I am posting this after having spent plenty of time digging through them.
Xdebug's profiler will show you, per script, where your server spends its execution time: http://xdebug.org/docs/profiler
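A minimal php.ini sketch for enabling the profiler on demand, assuming the Xdebug 2.x branch that matches PHP 5.4 (setting names changed in Xdebug 3, and the extension path and output directory below are placeholders):

; Xdebug 2.x profiler settings; extension path may need to be absolute
zend_extension = xdebug.so
xdebug.profiler_enable_trigger = 1       ; profile only when XDEBUG_PROFILE is passed in the request
xdebug.profiler_output_dir = /tmp/xdebug ; placeholder; any directory writable by PHP-FPM works

The resulting cachegrind files can be opened in KCachegrind or Webgrind to see which calls dominate that first 2.9 seconds.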

php-fpm not scaling as well as php-fastcgi

I'm trying to optimize a PHP site to scale under high loads.
I'm currently using Nginx, APC and also Redis as a database cache.
All that works well and scales much better than stock.
My question is in regard to php-fpm:
I load tested php-fpm vs. php-fastcgi. In theory I should use php-fpm, as it has better process handling and should play better with APC, since php-fastcgi processes can't share the same APC cache and use more memory, if I understand it correctly.
The thing is, under a heavy load test php-fastcgi performed better: it isn't faster, but it "holds" longer, whereas php-fpm started giving timeouts and errors much sooner.
Does that make any sense?
Maybe I just haven't configured php-fpm optimally, but I tried a variety of settings and could not match php-fastcgi under that high-volume load test scenario.
Any recommendations / comments / best practices / settings to try would be appreciated.
Thanks.
I mostly messed with the number of servers:
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 100
pm.max_requests = 5000
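One note on that pool: with pm = dynamic, pm.max_children must be set as well, and it caps how many PHP workers can run at once. A fuller sketch of a dynamic pool is below; the pm.max_children value is only an assumption to show the shape of the config, and should really be sized as available RAM divided by the average memory of one PHP worker:

; sketch of a dynamic pool; pm.max_children value is an assumption, size it from RAM / avg worker size
pm = dynamic
pm.max_children = 100
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 5000

For a sustained benchmark, pm = static with a fixed pool size is also worth trying, since it removes process spawn/reap churn during the test.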

ASP.NET: high number of queued requests and high context switching

We have a fairly popular site with around 4 million users a month. It is hosted on a dedicated box with 16 GB of RAM and 2 processors with 24 cores in total.
At any given time the CPU stays under 40% and memory usage is under 12 GB, but at peak traffic we see very poor performance; the site becomes very slow. We have 2 app pools, one for our main site and one for our forum, and only the main site is slow. We don't have any CPU or memory restrictions per app pool.
I have looked at the performance counters and saw something very interesting: at peak time, for some reason, requests are being queued. Overall context-switching numbers are very high, around 30,000 to 110,000.
As I understand it, high context switching is caused by locks. Can anyone give me example code that would cause a high number of context switches?
I am not too concerned with the context switching, and I don't think the numbers are huge. You have a lot of threads running in IIS (since it's a 24-core machine), so higher context-switching numbers are expected. However, I am definitely concerned about the request queuing.
I would try several things and see how each affects your performance counters:
Your server's CPU is evidently under-utilized, since you run below 40% all the time. You can try setting a higher value for "Threads per processor limit" in IIS until you reach 50-60% utilization. The by-the-book optimum is 20 threads per core, but it depends on the scenario, and you can experiment with higher or lower values; I would recommend trying a value >= 30. Low CPU utilization can also be a sign of blocking IO operations.
Adjust the "Queue Length" settings in IIS properties. If you have configured the "Threads per processor limit" to be 20, then you should configure the Queue Length to be 20 x 24 cores = 480. Again, if the requests are getting Queued, that can be a sign that all your threads are blocked serving other requests or blocked waiting for an IO response.
Don't serve your static files from IIS. Move them to a CDN, Amazon S3 or similar. This will significantly improve your server's performance, because thousands of requests will go somewhere else! If you MUST serve the files from IIS, then configure IIS static compression. In addition, use expires headers for your static content so it gets cached on the client, which will save a lot of bandwidth (a web.config sketch for this is shown after this list).
Use async IO wherever possible (reading/writing from disk, DB, network, etc.) in your ASP.NET controllers, handlers, etc. to make sure you are using your threads optimally. Tying up the available threads with blocking IO (which is done in 95% of the ASP.NET apps I have seen in my life) can easily exhaust the thread pool under heavy load, and queuing will occur.
Do a general optimization pass to reduce the number of requests that hit your server and the processing time of individual requests. This can include minification and bundling of your CSS/JS files, refactoring your JavaScript to make fewer round trips to the server, refactoring your controller/handler methods to be faster, etc. I have added links to the Google and Yahoo recommendations below.
Disable ASP.NET debugging in IIS.
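A minimal web.config sketch for the static-content suggestions above; the 7-day max age is an assumption, adjust it to how often your assets change:

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- cache static files on the client; the max age is an assumption -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
    <!-- compress responses served by IIS -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>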
Google and Yahoo recommendations:
https://developers.google.com/speed/docs/insights/rules
https://developer.yahoo.com/performance/rules.html
If you follow all this advice, I am sure you will see some improvement!
