WordPress W3 Total Cache: Disk, OPcache or Memcached?

I have a WordPress site that gets about 200,000 page views every day. I run it on a VPS with 6 GB of RAM. I have W3 Total Cache installed, but pages still load slowly.
What I am wondering is which cache I should use for my site: Disk, OPcache or Memcached? Should I use one for everything, or how should I set it up? Currently I am using basic disk caching only.
My question is basically: should I change anything, and if so, to what? Or should I just stick with basic disk caching as is?
I have not tried changing anything yet, as I am quite unsure and don't want to make things worse.

The existing answer to this question is effectively seven years old, so I'll post an updated one here. Please note that I am assuming you are using W3TC for your website.
Disk Cache
Disk cache is generally suitable for websites with low to medium traffic. One problem on high-traffic websites is that the disk cache tends to get corrupted if you also use it for the Object Cache and Database Cache.
On a medium/high-traffic website running on a single server I would recommend the following:
Page Cache : Use Disk Cache Enhanced
Object Cache : Use Memcached
Database Cache : Use Memcached
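Before pointing the Object and Database caches at Memcached, it's worth confirming PHP can actually reach the daemon. A minimal sketch, assuming the PHP memcached extension is installed and the server is on the default host/port (the key name is also an assumption):

// Quick connectivity check for a local Memcached server before
// enabling it in W3TC.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$mc->set('w3tc_test', 'ok', 60);
echo $mc->get('w3tc_test') === 'ok'
    ? "Memcached is reachable\n"
    : "Memcached is not reachable\n";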
OPcache
OPcache stores precompiled PHP bytecode in shared memory and definitely increases code execution speed.
It is important to understand that OPcache is a PHP code cache: it does not store pages, database queries or objects.
You can read more about OPcache here:
https://www.sitepoint.com/understanding-opcache/
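If you want to confirm OPcache is actually on, here is a minimal sketch using PHP's built-in status function; it assumes the extension is enabled via opcache.enable=1 in php.ini:

// Report whether OPcache is active and roughly how much memory it uses.
$status = opcache_get_status(false); // false = skip per-script details
if ($status === false) {
    echo "OPcache is not enabled\n";
} else {
    printf("OPcache hits: %d, memory used: %.1f MB\n",
        $status['opcache_statistics']['hits'],
        $status['memory_usage']['used_memory'] / 1048576);
}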
Memcached
Memcached stores data in RAM as key-value pairs and definitely improves the speed of the website, since accessing memory is much faster than accessing the hard disk.
Please note that Memcached will increase your memory usage significantly, so it's definitely worth trying different configurations in a test environment.
You can read more information on how to install Memcached on a Linux Server here: https://easyengine.io/tutorials/php/memcache/
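To make the key-value idea concrete, here is a minimal sketch of the usual read-through pattern; the key name and the query helper are hypothetical:

// Try the cache first; only hit the database on a miss.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$posts = $mc->get('recent_posts');
if ($mc->getResultCode() === Memcached::RES_NOTFOUND) {
    $posts = fetch_recent_posts_from_db(); // hypothetical helper
    $mc->set('recent_posts', $posts, 300); // cache for 5 minutes
}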
Hope this helps :)

Related

Gradual increase in I/O usage on website

So I've attached the resource usage of my wordpress website over the past 30 days.
You can see the I/O usage has been getting higher and more frequent. I think this is a problem that has caused a massive drop in visits to my site.
I asked my host why this is, and he said backups usually contribute largely to it. The only thing is, I back up once a month, not every day.
I've tried optimising my database and disabling plugins, but I don't understand why it keeps getting higher.
I have an analytics plugin that refreshes every hour, but I've had that all year and the I/O usage only started getting high recently.
The only thing I can think of is WP Super Cache and CloudFlare not working well together. I've tried different configurations but it hasn't helped.
Any help would be appreciated.
I think this is a pretty standard I/O log. Over time your DB gets a lot bigger, and so does your user base, which ends up using a lot of I/O. I don't think there is anything to panic about right away, but obviously if this is a very big difference from what you are used to seeing normally, then you should look into it seriously. I take caching very seriously, and I usually use W3 Total Cache for this kind of performance optimization. It's a bit tricky in the beginning, but once you are used to it, it's very easy.
I know you might just want to improve the I/O, for which you mostly just need caching, but here are some things I would do to get the most performance out of a site.
1) If you are using a VPS or dedicated server, install Memcached or something like Redis, and then configure your plugin accordingly. You might have to enable the extension in your php.ini file, but once it is installed you will see the difference. It saves results in RAM, so on the next request, instead of executing the PHP code again, it just hands back the same result. Whether you want to cache something depends on your website; you can set up individual pages to use caching as well.
2) If your plugin has options to automatically minify and combine HTML/CSS/JS files, use them; if not, you should minify and combine them into a single file, or as few files as possible, and manually upload them to your server. This reduces the time spent requesting files and waiting for responses. It's usually milliseconds, but if you have a lot of files it adds up to seconds, plus unnecessary load on the server.
3) If your plugin has a gzip feature, enable it. It lets your users download gzipped CSS and JS files instead of the original large files, which enormously reduces the number of bytes a browser has to download on every visit.
4) Enable caching of files in the browser. Your plugin might already do this, but if not, you will have to set some headers that tell the browser to cache the CSS and JS files locally (see the sketch after this list). The next time the user goes to another page on your website, instead of fetching the CSS/JS files from the server, the browser loads them directly from its cache.
5) Upload your CSS/JS/image files to a CDN; that way, whenever someone requests a file, it is served over the shortest route to the user's browser.
6) If your site is not just a personal blog and is making serious money, or you just want to please a large and growing number of users, then I would suggest you look into auto-scaling server platforms, where you set some triggers and the number of servers automatically increases when facing a lot of users/IO, then scales back down once traffic returns to normal. The big players for this sort of service are AWS Elastic Beanstalk and Microsoft Azure, or you can use beanstalkd with DigitalOcean as a cheap alternative.
7) WordPress is quite compatible with Facebook's HHVM, an open-source virtual machine that runs PHP with just-in-time (JIT) compilation. PHP is an interpreted language whose reference implementation is written in C (you can check out the code on GitHub), so whenever you refresh a page, hundreds of lines of PHP code are parsed and executed from scratch. HHVM compiles the code and keeps it in memory, so when someone else requests the same page it already has a compiled version and just executes and serves it. That removes 30-40% of the compilation time from every request, which in turn makes your site noticeably faster. PHP 7 is also out now and has a lot of performance upgrades, so if you are still not sure about HHVM, you should definitely try upgrading to PHP 7.
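For item 4, here is a minimal sketch of the headers involved, for a static asset served through PHP; the one-week lifetime and the file name are assumptions:

// Tell browsers they may cache this stylesheet for a week, so repeat
// page views skip the request entirely.
header('Content-Type: text/css');
header('Cache-Control: public, max-age=604800');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 604800) . ' GMT');
readfile('styles.css'); // hypothetical asset

In practice a plugin or the web server config sets these headers for you; the sketch just shows what they look like on the wire.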

Are Drupal sites naturally slow?

I know next to nothing about Drupal, but I do have a question. We had a site, written in straight HTML and PHP, that loaded the main page in 1-2 seconds and made 25 requests to the server to get the data it needed. A new Drupal version of the site takes 5-6 seconds to load the main page, which is no more complicated than the old page, and makes 127 requests (I'm watching the Firebug Net panel) to the server to get the data it needs.
Is this typical?
Thanks.
Yep, a 3x performance hit is natural to Drupal, or to most large-scale PHP application frameworks. Bootstrapping Drupal is a costly operation, as it requires loading a lot of files. Drupal is also known to perform too many DB queries to produce a single page.
The first step is to enable page caching and JS/CSS aggregation. This can be done from the administration page at Administration >> Configuration >> Performance (in Drupal 7).
But a 1-2 second load time on a lightweight PHP site is a sign of either overloaded or badly tuned hosting. You should ensure your site is running on a recent PHP version (PHP gets faster with each release). Also enable APC (or any other opcode cache); even with the default settings it can greatly improve Drupal's performance. With APC, try increasing the shared memory size (e.g. apc.shm_size = 64 in php.ini).
You should also try profiling your site to identify the actual bottlenecks. With Drupal making several DB queries per page, the DB quickly becomes the bottleneck. Drupal supports using multiple slave servers for read queries.
About the database: Drupal uses an internal cache which by default is stored in the database, so this cache does not cope well with an overloaded database. Drupal's cache is pluggable; it can be configured to use Memcached, Redis or MongoDB for its storage, which could greatly reduce the load on the database.
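As a concrete illustration, here is a minimal sketch of pointing Drupal 7's pluggable cache at Memcached via the contrib memcache module, in sites/default/settings.php; the module path assumes the usual install location:

// Route Drupal's cache through the memcache module's backend.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// The form cache must stay persistent, so keep it in the database.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';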
Yes, Drupal is slow.
That's why we use caching mechanisms if your page is making too many requests:
See if you can aggregate your CSS and JS (this reduces the number of HTTP calls; you can do it from the admin)
Use a CDN
Use Memcached or Varnish
Use page caching in Apache
Note: please provide some actual data, split up with some load-testing tools.
How many requests are sent to the server also matters, but Drupal has solutions for it: it can combine all CSS files into a single file to keep server calls low, and similarly for JS files.
But speed also depends on server-side code and database operations. Drupal is a powerful system that makes complex things easy (and, yes, easy things complex) and provides capabilities that let a user build a complete portal without a line of code. But all these features come at the cost of performance: internally Drupal does lots of operations, and that makes it slow.
Those operations include views and block operations, and the more complex the view/block/form is, the more operations there will be, and hence the more time it will take.
Also, as the site's content grows it will become slower, because Drupal treats every piece of content as a node, and for all of your content types (for example news, CMS pages, testimonials and so on) the data is stored in a single node table (some other tables are used too, but your main content lives in the node table). So as content grows, the load on that single table grows, which slows down database operations: the bigger the table, the longer operations take.
I may be wrong, but Drupal is slow :P

Is memcached worth running on only one server?

I'm running a dv server at MediaTemple with 4 GB of RAM, and I'm just getting into looking at using memcached for my large WordPress install. I understand that even though memcached is primarily designed for multiple-server setups, it can be used on only one machine - i.e., one server running both the cached website and memcached.
But my question is: is it worth the trouble on a single server? If I just configure database caching in the W3 Total Cache WordPress plugin, would that have pretty much the same effect as running memcached on the same server as my WP install? My thanks in advance for any insight you can share --
Yes, Memcached is great to run on a single server, but it especially excels when sharing cache results between servers in a cluster, speeding up each host as well as the cluster as a whole by eliminating much of the processing and look-ups cluster-wide.
Running Memcached with W3 Total Cache works brilliantly; it caches tons of stuff automatically, as well as giving you the ability to store the WP database cache and object cache there too.
However, with WordPress on a single-server install, I would maybe suggest you use a PHP accelerator instead, something like APC.
APC is primarily an opcode cache designed to speed up PHP execution by pre-compiling it and serving the bytecode from a shared memory cache. It also gives you the ability to store keyed data (the database cache and object cache via W3 Total Cache) just the same as Memcached.
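Here is a minimal sketch of the keyed-data use this answer describes, using the classic APC user-cache functions (on modern PHP the equivalent extension is APCu, with apcu_fetch/apcu_store); the key name and helper are hypothetical:

// Fetch from APC's user cache; recompute and store on a miss.
$value = apc_fetch('expensive_result', $hit);
if (!$hit) {
    $value = expensive_computation(); // hypothetical helper
    apc_store('expensive_result', $value, 300); // TTL: 5 minutes
}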

DRUPAL & Memory limit

Some pages in Drupal use more memory than others. I think it's a waste of server resources to reserve 64M or more for every page in Drupal only because the modules page (or a section with graphics) reaches that peak, and I want to avoid white pages when making changes.
So my question is: is it good practice to manage different memory limits programmatically, depending on the section or page? Some pages use 32M or less, so I think it's better to optimize specific sections of a web app with specific limits.
I've read a lot about optimization practices, but I haven't found anything on handling memory limits dynamically, or a Drupal module dealing with this kind of matter or applying this approach.
The memory limit really just tells Drupal how much memory it's allowed to use. Drupal isn't going to "reserve" memory; it doesn't really "manage" memory at all the way you think. It'll use whatever it needs (up to the limit) when it needs it, and if it needs more, you'll get an error. If it needs less than the memory limit, it'll use less.
The minimum required available memory for Drupal 7 to run is 32MB, but a recommended number would be closer to 128MB.
http://drupal.org/requirements#php
PHP memory requirements can vary significantly depending on the modules in use on your site. Drupal 6 core requires PHP's memory_limit to be at least 16MB. Drupal 7 core requires 32MB. Warning messages will be shown if the PHP configuration does not meet these requirements. However, while these values may be sufficient for a default Drupal installation, a production site with a number of commonly used modules enabled (CCK, Views etc.) could require 64 MB or more. Some installations may require much more, especially with media-rich implementations. If you are using a hosting service it is important to verify that your host can provide sufficient memory for the set of modules you are deploying or may deploy in the future. (See the Increase PHP memory limit page in the Troubleshooting FAQ for additional information on modifying the PHP memory limit.)
Drupal is very memory-heavy. When warming up an instance of Drupal for the very first time, it tries to allocate memory for views, caches, etc.
Be sure to place this inside sites/default/settings.php:
ini_set('memory_limit', '128M');
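If you do want the per-section behaviour the question asks about, here is a minimal sketch for sites/default/settings.php; the path check and both limits are assumptions (with clean URLs, Drupal 7 exposes the internal path in $_GET['q']):

// Give the admin modules page more headroom than ordinary pages.
if (isset($_GET['q']) && strpos($_GET['q'], 'admin/modules') === 0) {
    ini_set('memory_limit', '128M');
} else {
    ini_set('memory_limit', '64M');
}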

How to set up memcache on nginx+fastcgi

On an Ubuntu server I have a Drupal site which uses nginx+FastCGI as the web server and uses XCache. I am quite happy with the configuration, but I'm trying to set up memcache, hoping to boost the site's speed, and I am not sure how to do so.
After installing memcached, I added extension=memcache.so to /etc/php5/cgi/php.ini, and I can see that the memcached process is running.
However, after a few hours, instead of better performance I just see higher server load (average 5 instead of the usual 2). So I'd appreciate your hints on setting up memcache. (I know I could use nginx as a reverse proxy to Apache and configure memcache on Apache, but I am particularly keen to avoid Apache by any means.)
Memcache is just key-value storage. It's useless if your application doesn't know how to use it.
By adding extension=memcache.so to php.ini, you only enable the memcache API in PHP.
After that, you must teach Drupal how to use memcache to store some data in it.
I don't really know how to configure Drupal to use memcache, but I think it's very possible, and may be very easy. Just look at some of Drupal's configuration files.
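A minimal sketch to confirm that adding extension=memcache.so actually took effect and that the daemon answers; the host and port are the defaults and therefore assumptions:

// Check the old pecl "memcache" extension is loaded and can connect.
if (!extension_loaded('memcache')) {
    die("memcache extension not loaded; check php.ini\n");
}
$m = new Memcache();
echo $m->connect('127.0.0.1', 11211) ? "connected\n" : "no daemon listening\n";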
Pretty late to the game here, but if you're only on one server, memcached is just going to slow you down. Look into caching locally with APC (or, in your case, XCache's local caching). I'm sure Drupal will have modules for these. My guess is you're using XCache as an opcode cache but not using its in-memory variable-cache abilities.
No type of caching is a silver bullet. Like CyberDem0n mentioned, your application has to be smart enough to use it: "cache this, don't cache that, pull this from cache", etc.
Memcached is great only if you are dealing with multiple servers and need a shared cache. If you have one server, you are wasting time on the overhead of a network call when you could just get the object out of memory (or even the filesystem, which is faster than the network in most cases).
