APC is quick - then bogs down server - nginx

I am tentatively serving up a crap site with a good amount of traffic until new development is finished.
The server is a 4GB cloud server with 2 CPU cores running NGINX, PHP-FPM, APC, Memcached and a cloud database server (Rackspace).
The site, to let you know how bad it is, gave me an uncached load of 1.2 with JUST me roaming around on it quickly. Terrible. 170 queries per page, some returning 2,000 records or more. Terrible.
So, being on Joomla, I enabled APC, which QUICKLY snapped the site up to more than livable while we develop.
Now the site is live and consistently has 30 - 60 live visitors according to GA.
Here's the weird part. Regardless of whether I use APC or Memcached, the site runs quickly at first after restarting php-fpm, then over time the load gradually creeps up to 1.x, 2.x and beyond, and it never comes back down even after traffic subsides a bit.
Why is this happening? I've scoured the internet looking for consistent direction on php-fpm settings, APC settings, etc. It's such a mish-mash out there, so I'm hoping for some sound advice on how to calculate what these settings should be as demand changes.
Below are my settings - at this point the only thing I can think of would be to CRON "service php-fpm restart" every 30 minutes or so.
[APC]
apc.stat = 1
apc.max_file_size = 2M
apc.localcache = 1
apc.localcache.size = 128M
apc.shm_segments = 1
apc.ttl = 3600
apc.user_ttl = 600
apc.gc_ttl = 3600
apc.cache_by_default = 1
apc.filters =
apc.write_lock = 1
apc.num_files_hint = 7000
apc.user_entries_hint = 5000
apc.shm_size = 64M
apc.mmap_file_mask = /tmp/apc.XXXXXX
apc.include_once_override = 0
apc.file_update_protection = 2
apc.canonicalize = 1
apc.report_autofilter = 0
apc.stat_ctime = 0
apc.stat = 0
(this also ends up fragmenting pretty hard - I have apc.php available if anyone needs more information)
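If it helps anyone answering, the relevant numbers from apc.php can also be pulled directly with the stock APC calls; a minimal sketch (drop it behind some kind of access control):
<?php
// Quick APC health check: the same totals apc.php reports.
$cache = apc_cache_info('', true);   // limited: summary only, no per-entry list
$sma   = apc_sma_info(true);         // limited: totals only, no free-block list
$hits   = $cache['num_hits'];
$misses = $cache['num_misses'];
$rate   = ($hits + $misses) ? 100 * $hits / ($hits + $misses) : 0;
printf("opcode cache: %d entries, %.1f%% hit rate\n", $cache['num_entries'], $rate);
printf("shared memory: %d segment(s) x %.0f MB, %.1f MB still free\n",
    $sma['num_seg'], $sma['seg_size'] / 1048576, $sma['avail_mem'] / 1048576);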
pm = dynamic
pm.max_children = 80
pm.start_servers = 32
pm.min_spare_servers = 16
pm.max_spare_servers = 56
pm.max_requests = 1000
(I've played with these; it never seems to make much difference, but I don't think I've found any sound advice on them either.)
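For what it's worth, the sanest sizing rule I've come across is to measure what a php-fpm worker actually uses and divide the RAM you can spare by that. A rough sketch (the 3 GB of memory left over after nginx, Memcached and the OS is just my assumption, substitute your own numbers):
<?php
// Rough pm.max_children estimate: RAM available to PHP / average worker RSS.
// Run from the CLI while the site is under normal load.
$out = shell_exec('ps -C php-fpm -o rss=');   // per-process resident memory, in KB
$rss = array_filter(array_map('intval', explode("\n", trim($out))));
if (!$rss) {
    exit("no php-fpm processes found\n");
}
$avgMb = array_sum($rss) / count($rss) / 1024;
$availableMb = 3 * 1024;   // assumption: ~3 GB of the 4 GB box is left for PHP-FPM
printf("avg worker: %.1f MB, suggested pm.max_children ~ %d\n",
    $avgMb, floor($availableMb / $avgMb));
If the average worker turns out heavier than expected, pm.max_children = 80 may simply be more than this box can hold without swapping, which would fit the gradual slowdown.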
Any help or pointers would be greatly appreciated :-/

Related

Nginx + PHP-FPM slow with more concurrent AJAX requests

I switched from the common LAMP stack (Linux + Apache + MySQL + PHP) to nginx + PHP-FPM, mostly for the speed. The increase is incredible - I haven't measured it precisely, but for a project using both Zend (old libraries) and Zend 2 (new apps) on the backend and Bootstrap + CoffeeScript + Backbone.js on the frontend, the site renders 2 to 3 times faster!
The only drawback is on pages that fire many concurrent AJAX requests. Most of the time a page makes up to 5 AJAX requests to load data on render, but a few pages need 10 to 20 concurrent requests. In those cases rendering is 2 to 4 times slower than on Apache (the comparison can only be made across two different servers, and the one running Apache is older and slower overall - yet it renders pages with many concurrent AJAX requests much quicker).
This is my PHP-FPM configuration (regarding the pool manager):
pm = dynamic
pm.max_children = 20
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
Increasing pm.max_children to 40 doesn't seem to have any influence on the speed, though going from the default of 5 to the current 20 did give some improvement.
I have also increased nginx's worker_processes to 4 (the number of cores) while keeping worker_connections at the default of 1024.
Is there anything else I should change to make the pages with many concurrent AJAX requests run faster?
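One thing worth ruling out (an assumption on my part, nothing above confirms it): with PHP's default file-based session handler, requests from the same visitor hold an exclusive lock on the session file, so a page's 10-20 AJAX calls can end up serialized no matter how many FPM workers are idle. A minimal sketch of an endpoint that releases the lock before doing its slow work:
<?php
// AJAX endpoint that reads the session, then releases the session file lock
// immediately so parallel requests from the same visitor are not serialized.
session_start();
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;   // 'user_id' is a placeholder key
session_write_close();   // lock released here; other requests from this visitor can proceed
// ...the slow part (DB queries, backend calls) happens after the lock is gone...
header('Content-Type: application/json');
echo json_encode(array('user' => $userId));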

Throughput in JMeter increases suddenly when using 100 threads

I load tested my local website (nginx with PHP-FPM) using Apache JMeter, trying some simple concurrent load testing on it.
Here is my test plan configuration, with 3 thread groups:
Number of threads: 10, 50, 100
Ramp-up period: 0
Loop count: 1
In the test plan itself, I have 5 different pages represent 5 HTTP requests.
But when I use 100 threads, the throughput (requests/sec) is higher than with 50 threads. I have run this many times and the result is still the same.
What is really happening here? I'm still an amateur with JMeter. Your help would be appreciated.
An increase in throughput as you increase the load is generally down to an increased error rate: failed requests come back faster than successful ones, so more errors push the throughput figure up.
Hope this helps.
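To make that concrete (a toy calculation, all numbers invented): throughput is just samples divided by elapsed time, so anything that shortens responses, including errors, inflates it.
<?php
// Throughput = samples / elapsed seconds, so fast failures inflate the number.
$samples = 100 * 5;          // 100 threads x 5 HTTP requests each
$allOk   = $samples / 20.0;  // every request succeeds, run takes 20 s
$halfBad = $samples / 8.0;   // half the requests error out quickly, run takes 8 s
printf("all OK: %.1f req/s, half erroring: %.1f req/s\n", $allOk, $halfBad);
Which is why the error % column matters as much as the throughput column.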

MySQL 5.0, InnoDB table: slow inserts under heavy traffic

I have an InnoDB table that stores user navigation details once a user logs in.
I use a simple INSERT statement for this.
But under heavy traffic this INSERT sometimes takes 15-24 seconds, while for a single user it completes in microseconds.
Server has 2GB RAM.
Below is MySQL configuration details:
max_connections=500
# You can set .._buffer_pool_size up to 50 - 80 % of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 800M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 200M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
table_cache = 90
query_cache_size = 256M
query_cache_limit = 256M
thread_cache_size = 16
sort_buffer_size = 64M
innodb_thread_concurrency=8
innodb_flush_method=O_DIRECT
innodb_buffer_pool_instances=8
Thanks.
As a first measure, have you considered upgrading? 5.0 is old; it has reached the end of its product lifecycle and has not seen any changes in two years. Serious improvements were made to many aspects of the DBMS in versions 5.1 and 5.5, so you should seriously consider upgrading.
You might also want to run the tuning primer as another angle on which options to change.
You can also check with SHOW FULL PROCESSLIST to see what state individual MySQL threads are hanging in. Maybe you'll spot something relevant.
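Catching the bad moments by hand is fiddly, so the PROCESSLIST check can be scripted; a minimal sketch with mysqli (host, user and password are placeholders, run it from the CLI during a traffic spike):
<?php
// Poll SHOW FULL PROCESSLIST and print anything running longer than 5 s,
// to see what the slow INSERTs are actually waiting on. Ctrl+C to stop.
$db = new mysqli('localhost', 'user', 'password');   // placeholder credentials
while (true) {
    $res = $db->query('SHOW FULL PROCESSLIST');
    while ($row = $res->fetch_assoc()) {
        if ($row['Time'] > 5 && $row['Command'] !== 'Sleep') {
            printf("[%s] %ss %s: %s\n", date('H:i:s'), $row['Time'], $row['State'], $row['Info']);
        }
    }
    sleep(1);
}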

How to investigate latency of a web server running WordPress

How can I investigate the 2.95 s latency on the very first response from the VPS?
My VPS (2 cores, 4 GB RAM, 100 GB HDD) is hosted with a reputable service.
The server runs CentOS 6.5, nginx 1.4.4, PHP 5.4.23, MySQL 5.5.35 and WordPress 3.7 with W3 Total Cache. Caching seems to work, and gzip is enabled in the nginx config for all media.
Looking at the network panel in Chrome dev tools, the very first GET request gets its response in around 2.9 seconds. In other words, HTML generation plus network travel takes 2.9 seconds.
From that first response onward, the rest of the site loads in the next 2.2 seconds, taking the total to 5.x seconds.
A test PHP page that queries the DB and renders output shows under 70 milliseconds of latency in that first step.
What's the scope for improvement other than adding CPU cores? Can the server be tuned with settings, or is this as good as it gets for the given page complexity (theme, etc.), with nothing left to do but add hardware?
Disk IO perf: dd reports 1.1 GB copied in 3.5 - 6 s, i.e. 180 - 300 MB/s.
PS: I am aware of the other SO questions; most of them recommend some cache plugin, Apache mod setting, etc. I am posting this after spending plenty of time digging through them.
Xdebug's profiler will show you, per script, how much time your server spends executing it: http://xdebug.org/docs/profiler
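Short of a full profile, it also helps to pin down how much of the 2.9 s is PHP at all. A minimal sketch that can be loaded via auto_prepend_file (the log path is an assumption, point it anywhere writable):
<?php
// Logs how long PHP spends generating each page, so the 2.9 s first response
// can be split into "server-side generation" vs "network / proxy / queueing".
$__t0 = microtime(true);
register_shutdown_function(function () use ($__t0) {
    $ms = (microtime(true) - $__t0) * 1000;
    error_log(sprintf("%s %.1f ms\n", $_SERVER['REQUEST_URI'], $ms), 3, '/var/log/php-timing.log');
});
If the logged times stay far below 2.9 s, the delay is in front of PHP (DNS, TLS, nginx buffering, FPM queueing) rather than in WordPress itself.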

php-fpm not scaling as well as php-fastcgi

I'm trying to optimize a PHP site to scale under high loads.
I'm currently using Nginx, APC and also Redis as a database cache.
All that works well and scales much better than stock.
My question is in regard to php-fpm:
I load tested php-fpm vs. php-fastcgi. In theory I should use php-fpm, as it has better process handling and should also play better with APC, since php-fastcgi processes can't share the same APC cache and use more memory, if I understand it right.
The thing is, under a heavy load test php-fastcgi performed better: it's not faster, but it "holds" longer, whereas php-fpm started giving timeouts and errors much sooner.
Does that make any sense?
Maybe I just haven't configured php-fpm optimally, but I tried a variety of settings and could not match php-fastcgi under that high-volume load test scenario.
Any recommendations / comments / best practices / settings to try would be appreciated.
Thanks.
I mostly messed with the number of servers:
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 100
pm.max_requests = 5000
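When re-running that comparison, it may help to watch php-fpm's own status page during the load test, so it's obvious whether requests are piling up in the listen queue or workers are being killed and respawned. A sketch of a simple poller (it assumes pm.status_path = /status is enabled in the pool config and exposed by nginx on localhost):
<?php
// Poll the php-fpm status page during a load test; run from the CLI, Ctrl+C to stop.
// Assumes pm.status_path = /status is enabled and reachable on 127.0.0.1.
while (true) {
    $status = json_decode(file_get_contents('http://127.0.0.1/status?json'), true);
    printf("[%s] active: %d, idle: %d, listen queue: %d, max children reached: %d\n",
        date('H:i:s'),
        $status['active processes'],
        $status['idle processes'],
        $status['listen queue'],
        $status['max children reached']);
    sleep(2);
}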
