Optimize APC Caching - WordPress

Here is a link to how my APC is running: [removed]
As you can see, it fills up pretty quickly, and my Cache Full Count sometimes goes over 1000.
My website uses WordPress.
I notice that every time I make a new post or edit a post, two things happen:
1) APC memory "Used" resets
2) I get a whole lot of fragments
I've tried giving APC more memory (512 MB), but then it sometimes crashes; 384 MB seems to work best. I also have a cron job that restarts Apache every 4 hours, clearing APC of fragments and used memory. Again, Apache crashes if APC runs for a long period, I think due to the fragment buildup.
Should I use apc.filters to exclude some things that should not be cached?
I'm a real beginner at this sort of thing, so if someone could explain with full instructions, thank you very much!

I work as a Linux systems admin; the WordPress server in question runs 5 different WordPress installs. If you are running just one, see the comments below for the configuration values to consider.
APC/PHP versions: 3.1.9 / 5.3.7
Here is my complete apc.conf:
apc.enabled=1
apc.shm_segments=1
; I would try 32M per WP install, go from there
apc.shm_size=128M
; Relative to the approximate number of cached PHP files
apc.num_files_hint=512
; Relative to the approximate WP size with the APC Object Cache Backend
apc.user_entries_hint=4096
apc.ttl=7200
apc.use_request_time=1
apc.user_ttl=7200
apc.gc_ttl=3600
apc.cache_by_default=1
apc.filters
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.file_update_protection=2
apc.enable_cli=0
apc.max_file_size=2M
;Set this to 0 only when you are finished making PHP file changes,
;since you must clear the APC cache for changed files to be recompiled.
;If you are still developing, set this to 1.
apc.stat=0
apc.stat_ctime=0
apc.canonicalize=1
apc.write_lock=1
apc.report_autofilter=0
apc.rfc1867=0
apc.rfc1867_prefix=upload_
apc.rfc1867_name=APC_UPLOAD_PROGRESS
apc.rfc1867_freq=0
apc.rfc1867_ttl=3600
;This MUST be 0, WP can have errors otherwise!
apc.include_once_override=0
apc.lazy_classes=0
apc.lazy_functions=0
apc.coredump_unmap=0
apc.file_md5=0
apc.preload_path
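As for the apc.filters question: it takes a comma-separated list of POSIX regular expressions matched against the path passed to include/require, and a leading "-" marks a pattern as "do not cache" (the default behaviour). A minimal sketch, with made-up WordPress paths chosen only to illustrate the syntax:
; example patterns - skip rarely reused update/upgrade code; tune to your installs
apc.filters="-.*wp-admin/(update|upgrade).*,-.*wp-content/upgrade/.*"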
@Chris_O, your configuration is not optimal in a few respects.
1. apc.shm_segments=3
If you run a modern Linux distro, your shared memory segment limit should already be large enough for a single segment.
If it is too small, look up how to set the relevant sysctl.conf entries. You can check it like this:
#Check max segment size
cat /proc/sys/kernel/shmmax
Exceptions apply when running on certain BSDs, other Unixes, or managed hosts you don't control. There are disadvantages to not having one contiguous segment; read the APC documentation for the details.
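If shmmax really is too small, you can raise it; a sketch, run as root (the 256 MB value is only an example, size it to your apc.shm_size):
#Raise the max segment size to 256 MB and apply it
echo "kernel.shmmax = 268435456" >> /etc/sysctl.conf
sysctl -p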
2. apc.enable_cli=1
BAD BAD BAD, this is for debugging only! Every time you run php-cli, it clears the APC cache.
3. apc.max_file_size=10M
Unnecessary and excessive! A file that big would eat a third of that small 32M of shared memory. Even though you specify 3 segments, they don't simply act like one big segment in three pieces. Regardless, WordPress doesn't have any single PHP file anywhere near that size.
Hope this helps people with their apc.conf.

The APC TTL should take care of fragment buildup. I usually set it to 7200. I am running APC on a small VPS with WordPress, and my settings are:
apc.enabled=1
apc.shm_segments=3
apc.shm_size=32
apc.ttl=7200
apc.user_ttl=7200
apc.num_files_hint=2048
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.enable_cli=1
apc.max_file_size=10M
You will also get a lot more benefit from APC by using WordPress's built-in object cache. Mark Jaquith wrote a really good drop-in plugin (an APC object cache backend) that should also help with some of your fragmentation issues when saving or editing a post.
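The drop-in is a single file; a minimal sketch of installing it for one site (the paths are examples, and there is no plugin to activate):
# copy the drop-in into each WordPress install you want it active on
cp apc-object-cache/object-cache.php /var/www/example.com/wp-content/object-cache.php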

You really should set apc.stat=0 on your production server; it prevents APC from hitting the disk on every request to check whether a file has changed.
Check out the documentation first: http://php.net/manual/en/apc.configuration.php

Related

Uploading larger files with User-Agent python-requests/2.2.1 results in RemoteDisconnected

When uploading larger files with the Python requests library, I get the error RemoteDisconnected('Remote end closed connection without response').
However, it works if I change the library's default User-Agent to something like "Mozilla/5.0".
Does anybody know the reason for this behaviour?
Edit: it only happens with the header X-Explode-Archive: true.
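For reference, a minimal sketch of the upload with the workaround applied (the endpoint and file name are placeholders):
import requests

# overriding the default python-requests User-Agent is the workaround;
# X-Explode-Archive is the header that triggers the disconnect here
headers = {"User-Agent": "Mozilla/5.0", "X-Explode-Archive": "true"}
with open("archive.zip", "rb") as f:
    response = requests.put("https://artifactory.example.com/repo/archive.zip",
                            data=f, headers=headers)
response.raise_for_status()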
Is there any specific timeout pattern you could highlight in this case? For example, does it time out after 60 seconds every time (that sort of thing)?
I would suggest checking the logs from every component configured with the Artifactory instance, such as the reverse proxy and the embedded Tomcat. Since the issue is specific to large files, correlate the timeout pattern with the timeouts configured on each of those entities; that should give a hint towards the cause.

Why isn't Carbon writing Whisper data points as per updated storage-schema retention?

My original carbon storage-schema config was set to 10s:1w, 60s:1y and was working fine for months. I've recently updated it to 1s:7d, 10s:30d, 60s:1y. I've resized all my whisper files to reflect the new retention schema using the following bit of bash:
collectd_dir="/opt/graphite/storage/whisper/collectd/"
retention="1s:7d 1m:30d 15m:1y"
# $retention is deliberately unquoted so it expands into three arguments
find "$collectd_dir" -type f -name '*.wsp' | \
    parallel whisper-resize.py --nobackup {} $retention
I've confirmed that they've been updated using whisper-info.py with the correct retention and data points. I've also confirmed that the storage-schema is valid using a storage-schema validation script.
The carbon-cache{1..8}, carbon-relay, carbon-aggregator, and collectd services have been stopped before the whisper resizing, then started once the resizing was complete.
However, when checking a Grafana dashboard, I see empty graphs with correct data points (per-second, but no data) on the collectd plugin charts; and on the graphs that do show data, points appear every 10s (the old retention) instead of every 1s.
The /var/log/carbon/console.log is looking good, and the collectd whisper files all have carbon user access, so no permission denied issues when writing.
When running an ngrep on port 2003 on the graphite host, I'm seeing connections to the relay, along with metrics being sent. Those metrics are then getting relayed to a pool of 8 caches to their pickle port.
Has anyone else experienced similar issues, or can possibly help me diagnose the issue further? Have I missed something here?
It took me a little while to figure this out. It had nothing to do with the local_settings.py file, as some of the older responses suggested; it was the Interval setting in collectd.conf.
A lot of the older responses mentioned that you needed to include 'Interval 1' inside each plugin block. That would have been nice for per-metric control; however, it created config errors in my logs and broke the metrics. Setting 'Interval 1' at the top level of the config resolved my issues.
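For reference, the shape of the fix (a sketch; the config path varies by distro):
# /etc/collectd/collectd.conf
# one global Interval, matching the new 1s retention;
# no Interval lines inside individual <Plugin> blocks
Interval 1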

Why is Symfony3 so slow?

I installed the Symfony3 framework-standard-edition. When I open the home page (app.php, prod), it takes 300-400 ms to load.
This is my profiler information: [screenshot removed]
I also use PHP 7.
Why does it take so long?
You can try to optimize Zend OPcache.
Here are some recommended settings:
opcache.revalidate_freq
Simply put: how often (in seconds) the code cache should expire and check whether your code has changed. 0 means it checks your PHP code on every single request (which adds lots of stat syscalls). Set it to 0 in your development environment. In production it doesn't matter, because of the next setting.
opcache.validate_timestamps
When this is enabled, PHP will check file timestamps per your opcache.revalidate_freq value.
When it's disabled, opcache.revalidate_freq is ignored and PHP files are NEVER checked for updated code. So if you modify your code, the changes won't actually run until you restart or reload PHP (you can force a reload with kill -SIGUSR2).
Yes, this is a pain in the ass, but you should use it. Why? While you're updating or deploying code, new code files can get mixed with old ones and the results are unknown. It's unsafe as hell.
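A sketch of the deploy step this implies (the service name and PID file path vary by distro and PHP version):
# reload the PHP-FPM master so workers restart with an empty opcache
sudo systemctl reload php-fpm
# or signal the master directly, as mentioned above
sudo kill -SIGUSR2 "$(cat /run/php-fpm.pid)"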
opcache.max_accelerated_files
Controls how many PHP files, at most, can be held in memory at once. It's important that your project has FEWER files than whatever you set this to. For a codebase of ~6000 files, I use 8000 for max_accelerated_files (PHP rounds the value up to the next prime in its internal set).
You can run find . -type f -name '*.php' | wc -l to quickly count the PHP files in your codebase.
opcache.memory_consumption
The default is 64MB. You can use the function opcache_get_status() to tell how much memory opcache is consuming and whether you need to increase the amount.
opcache.interned_strings_buffer
A pretty neat setting with almost zero documentation. PHP uses a technique called string interning to improve performance: for example, if you have the string "foobar" 1000 times in your code, PHP internally stores one immutable copy of it and uses a pointer to it for the other 999 occurrences. Cool.
This setting takes it to the next level: instead of each php-fpm process having its own pool of these immutable strings, it shares the pool across ALL of your php-fpm processes. This saves memory and improves performance, especially in big applications.
The value is set in megabytes, so set it to "16" for 16MB. The default is a low 4MB.
opcache.fast_shutdown
Another interesting setting with no useful documentation. "Allows for faster shutdown".
Oh okay. Like that helps me. What this actually does is provide a faster mechanism for calling the destructors in your code at the end of a single request to speed up the response and recycle php workers so they're ready for the next incoming request faster.
Set it to 1 and turn it on.
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=8000
opcache.validate_timestamps=0
opcache.revalidate_freq=0
opcache.fast_shutdown=1
I hope this helps improve your performance.
[EDIT]
You might also want to look at this answer:
Are Doctrine relations affecting application performance?
@TheMrbikus, try optimizing with the following elements:
Use APC
Use bootstrap files
Reference: http://symfony.com/doc/current/performance.html
Use PHP 7's OPcache
Use Apache with PHP-FPM.
The e-mail sending process may also slow down form rendering operations. Create a blank test controller to compare against.

Symfony2 - Random "Failed to start the session"

We have a Symfony2 site with some traffic.
Every day the site starts failing with this error for 1 or 2 minutes (15-20 errors). It occurs at random hours; I could not find a pattern, and it does not even line up with peak hours.
2015-10-09 02:23:57.635 [2015-10-09 06:23:38] request.CRITICAL: Uncaught PHP Exception RuntimeException: "Failed to start the session" at /var/www/thing.com/httpdocs/app/cache/prod/classes.php line 121 {"exception":"[object] (RuntimeException(code: 0): Failed to start the session at /var/www/thing.com/httpdocs/app/cache/prod/classes.php:121)"} []
It doesn't seem to be a double-header or double-start problem.
The site does not interact with any legacy PHP code that could be messing with the sessions.
Sessions are stored in the database, so a file problem is ruled out.
I lowered the session duration so the session table does not get too big, but the problem persists.
I think it could be a problem with HWIOAuthBundle and its Facebook login, but I cannot find where the conflict is.
The site also uses a lot of render_esi for caching with Symfony2's internal cache system.
Update -------------------------------------------------
I emptied the /var/lib/php/sessions folder of old session files that were no longer being used.
I lowered the session lifespan; SQL entries in the sessions table went from ~3 million to ~1.3 million.
The problem seems to be gone, but this is not a real solution.
My guess is that the pdo_handler in Symfony2 has a performance problem.
Maybe someone with more knowledge in this area (pdo_handler, table optimization) can point to a real solution for high traffic.
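In the meantime, if you prune the table by hand, a hedged sketch (the database name is a placeholder, and the table/column names are the Symfony PdoSessionHandler defaults; verify yours before running it):
# remove session rows whose lifetime has already expired
mysql yourdb -e "DELETE FROM sessions WHERE sess_time + sess_lifetime < UNIX_TIMESTAMP();"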
Where does your PHP installation save sessions to?
[You can find this in your php.ini file in the session.save_path setting, assuming you have CLI access.]
It is very likely PHP uses your server's /tmp folder. If this folder is full at any point, PHP can't create new sessions.
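With CLI access, a quick way to confirm the effective value (a generic check, not specific to this setup):
php -i | grep -i session.save_path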
You can see the current size of your /tmp folder with:
du -ch /tmp/ | grep total
If, as is common, the /tmp folder is on its own partition, you can see its maximum size with:
df -h
Some programs can suddenly guzzle GBs of this folder for their own purposes.

What is the best CSF SYNFLOOD config that still keeps web response fast?

My server goes down randomly 4-5 times every day because the load spikes very quickly.
I have installed CSF, and with some configuration the server is now stable, with load around 5.
BUT the big issue is: real users have a very hard time accessing the website, especially from the IE browser (you can test at xaluan.com); it also times out sometimes.
The following is the config used in CSF:
SYNFLOOD = "1"
SYNFLOOD_RATE = "100/s"
SYNFLOOD_BURST = "10"
CONNLIMIT = "80;30"
PORTFLOOD = "80;tcp;70;5"
CT_LIMIT = "29"
Other settings are mostly the defaults.
I've been playing around with this config for a week, but it's still not good.
If I increase the rate to SYNFLOOD_RATE = "140/s" or more, the website responds very fast; the only bad side effect is that the server load increases very quickly, normally around 20 and up to a few hundred at peak times.
What I need is a fast response time while the load stays low. Please help.
Thanks
PS: the server runs an nginx frontend, Apache, MySQL, and PHP. The home page has around 70 elements, which are cached in the browser on first access.
"My server goes down randomly 4-5 times every day because the load spikes very quickly."
There can be many reasons for this. Try nice top -c -d 2 from the command line and check which process is causing the load. You can't simply blame CSF for it.
Load may also get high if the DB disk I/O is high. It's better to install the mytop DB monitoring tool on the server and check whether that is the reason.
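Before installing anything, a quick way to watch for disk I/O pressure (iostat ships with the sysstat package):
# extended device stats, 5 reports at 2-second intervals
iostat -x 2 5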
For installing mytop, use this link: http://bloke.org/linux/installing-mytop-on-cpanel/
I hope this helps you monitor the DB load usage.
