On an Ubuntu server, I have a Drupal site that uses nginx+FastCGI as the web server and uses XCache. I am quite happy with the configuration, but I am trying to set up memcache in the hope of boosting the site's speed, and I am not sure how to do so.
After installing memcached, I added extension=memcache.so to /etc/php5/cgi/php.ini, and I can see that the memcached process is running.
However, after a few hours, instead of better performance I just see higher server load (a load average of 5 instead of the usual 2), so I would appreciate any hints on setting up memcache properly. (I know that I could use nginx as a reverse proxy to Apache and configure memcache on Apache, but I am particularly keen to avoid Apache at all costs.)
Memcache is just key-value storage. It's useless if your application doesn't know how to use it.
By adding extension=memcache.so to php.ini, you have only enabled the memcache API in PHP.
After that, you must teach Drupal how to use memcache to store some of its data there.
I don't really know how to configure Drupal to use memcache, but I think it's very possible, and it may even be very easy. Just look at Drupal's configuration files.
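For reference, a minimal sketch of what that configuration typically looks like in Drupal 7 with the contributed Memcache module (the module path and the 127.0.0.1:11211 server are assumptions; adjust them to your install and Drupal version):

    // sites/default/settings.php -- assumes the Memcache contrib module is installed.
    $conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
    $conf['cache_default_class'] = 'MemCacheDrupal';
    // Forms must survive cache clears, so keep them in the database cache.
    $conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
    $conf['memcache_servers'] = array('127.0.0.1:11211' => 'default');

With something like this in place, Drupal routes its cache_get()/cache_set() calls through memcache instead of the database cache tables.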
Pretty late to the game here, but if you're only on one server, memcached is just going to slow you down. Look into caching locally with APC (or, in your case, XCache's local caching). I'm sure Drupal will have plugins for these. My guess is you're using XCache as an opcode cache but not using its variable (memory) cache abilities.
No type of caching is a silver bullet. As CyberDem0n mentioned, your application has to be smart enough to use it: "cache this, don't cache that, pull this from cache, etc."
Memcached is great only if you are dealing with multiple servers and need a shared cache. If you have one server, you are wasting time on the overhead of a network call when you can just get the object out of memory (or even the filesystem, which is faster than the network in most cases).
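To illustrate the "local memory instead of a network call" point, here is a minimal sketch using XCache's variable cache (this assumes XCache's variable cache is enabled via a non-zero xcache.var_size in php.ini; the key name, TTL and compute_expensive() are purely illustrative):

    // Cache the result of an expensive operation in XCache's local shared memory.
    $key = 'expensive_result';           // example key name
    if (xcache_isset($key)) {
        $result = xcache_get($key);      // served from local memory, no network hop
    } else {
        $result = compute_expensive();   // hypothetical expensive function
        xcache_set($key, $result, 300);  // keep it for 5 minutes
    }

Because the data lives in the PHP process's shared memory, there is no TCP round trip, which is exactly the saving described above.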
My company uses SilverStripe v3.1.21, along with the Subsite module, to display and administer a number of clients' websites that sell products. This results in close to 200 subsites and a page count in the tens of thousands. The websites are very slow to load, and tools such as Google's PageSpeed tell us page speeds are poor. We've already done steps like combining and minimising the JS and compressing resources such as images, which gave some improvements; however, the pages remain slow. The system was handed to us in this state, and further hardware upgrades are not on the table as an option, nor are additional resources for redevelopment.
We've taken a look at the static publish module (https://github.com/silverstripe/silverstripe-staticpublisher) and found that when we generate static pages, the pages become fast and get a good score on the various tools; however, the process to regenerate all of these pages takes over 14 hours, which is unacceptable given these products are updated from an external source daily. We also find that the regeneration process is a memory hog, as the module builds all of the pages in memory before dumping them to file, causing the process to crash. We've had to alter the process to go subsite-by-subsite just to make it run.
We then took a look at the static publishing queue module (https://github.com/silverstripe/silverstripe-staticpublishqueue), which seemed to address our issues by having it queue pages as needed for regeneration, making it much more responsive to changes. However, the module seems to be very buggy and often crashes when generating pages.
Has anyone had experience using these modules (or similar) with larger sites and may be able to provide any pointers or ideas on how to implement static publishing successfully?
We are currently using staticpublishqueue on several sites. The only problem we've had with it is crashing due to long builds and poor locking. Or, to be precise, it doesn't actually crash but keeps spawning more and more instances until the server becomes unresponsive.
I think we have a fix for this in our fork. At least we haven't had any problems after using the modified locking. You could try installing the fork instead of the official version. If this fixes things for you maybe we should make a pull request :)
First off: we only use staticpublishqueue; I don't have any experience with the Subsite module, so I can't speak for your exact combination.
We are using staticpublishqueue on a huge site. Setup: we have multiple servers running the SilverStripe website. They share a MySQL database and use Redis as a session store.
One great thing about staticpublishqueue: you can run it in parallel. So the servers all run an instance of staticpublishqueue and publish into a shared folder, which is then synced to an nginx load balancer in front of the actual webservers. It works quite nicely, but it does not scale indefinitely. At some point the staticpublishqueue instances start to pick the same record to render and waste resources. I think about 6 is the max for us.
Couple of things we learned regarding staticpublishqueue:
do not run too many instances at the same time (see above)
make sure it has enough ram
make sure it runs as the same user as the website
the record lock it uses is not compatible with a MariaDB Galera Cluster
If possible switch to SilverStripe 3.6.x and PHP7. The performance gain is huge.
We are migrating away from staticpublishqueue to Cloudflare (or maybe another CDN). Why? Because if a requested page has not been rendered yet, the server will render it for each request individually and then throw it away, until the queue does a separate render for the cache. That is a total waste of resources, especially if you purge your cache after a site-wide layout change or something.
I am wondering if it is feasible to deploy WordPress as a series of Lambda functions behind AWS API Gateway. Any pointers on the feasibility/gotchas would be greatly appreciated!
Thanks in advance,
PKK
You'll have a lot of things to consider with persistence and even before that, Lambda doesn't support PHP. I'd probably look at Microsoft Azure Functions instead that do support PHP and do have persistent storage.
While other languages (such as Go, Rust, Swift etc.) can be "wrapped" to run in AWS Lambda with relative ease, compiling PHP targeting the same platform and running it is a bit different (and certainly more painstaking). Think about all the various PHP modules you'd need for starters. Moreover, I can't imagine performance will be as good as something like a Go binary.
If you can do something clever with the Phalcon framework and come up with an easy build and deploy process, then maayyyybee.
Though, you'd probably need to really overhaul something like WordPress which was not designed for this at all. It still uses some pretty old conventions due to the age of the project and while that is all well and good for your typical PHP server, it's a different ball game in the sense of this "portable" PHP installation.
Keep in mind that PHP sessions are relied upon as well and so you're going to need to move those elsewhere due to the lack of persistence with AWS Lambda. You can probably find some sort of plugin for WordPress that works with Redis?? I have to imagine something like that has been built by now... But there will be many complications.
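As a rough sketch of what moving sessions out of the local filesystem can look like, assuming the phpredis extension is installed (it registers the 'redis' session save handler) and a reachable Redis endpoint (the hostname below is a placeholder); in a real WordPress setup you would more likely configure this in php.ini or via a plugin:

    // Store PHP sessions in Redis instead of on Lambda's non-persistent disk.
    ini_set('session.save_handler', 'redis');
    ini_set('session.save_path', 'tcp://redis.example.internal:6379'); // placeholder host
    session_start();
    // Sessions now survive across whichever instance handles the next request.
    $_SESSION['visits'] = isset($_SESSION['visits']) ? $_SESSION['visits'] + 1 : 1;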
I would seriously consider using Azure Functions to begin with OR using Docker and forgoing the pricing model that cloud functions offers. You can still find some pretty cheap and scalable hosting out there.
What I've done previously was use AWS ECS (Docker) with EFS (network storage) for persistence and RDS for the database. While this doesn't carry the same pricing model as Lambda, it is still cost efficient. You can set up your ECS Service to autoscale up and down. So that way you're running the bare minimum until you need more.
I've written a more in depth article about it here: https://serifandsemaphore.io/how-to-host-wordpress-like-a-boss-b5993fcfbd8e#.n6fbnf8ii ... but it's basically just the idea of running WordPress in Docker and using EFS to offload the persistent storage issues. You can swap many of the pieces of the puzzle out if you like. Use a database hosted in some other Docker service or Compose or where ever. That part need not be RDS for example. Even your storage could be handled in a different way, though EFS worked pretty well! The only major thing to note about EFS is the write speed. Most WordPress sites are read heavy though. Your mileage will vary depending on your needs.
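To give a flavour of how those pieces wire together, here is a minimal wp-config.php sketch for that kind of Docker/ECS setup; the WORDPRESS_DB_* environment variable names follow the official WordPress Docker image and are assumptions you should adjust, and wp-content (including uploads) is assumed to sit on the EFS mount defined in the task definition:

    // wp-config.php (excerpt) -- database settings come from the container environment,
    // e.g. injected by the ECS task definition pointing at the RDS endpoint.
    define('DB_NAME',     getenv('WORDPRESS_DB_NAME'));
    define('DB_USER',     getenv('WORDPRESS_DB_USER'));
    define('DB_PASSWORD', getenv('WORDPRESS_DB_PASSWORD'));
    define('DB_HOST',     getenv('WORDPRESS_DB_HOST'));
    define('DB_CHARSET',  'utf8');

Swapping RDS for another database host is then just a matter of changing the environment variables, which is the flexibility described above.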
Is it possible? Yes, anything is possible with enough time and effort. Is it worth it? That is a question best to ask yourself.
PHP can be run on Lambda as per the documentation located here: https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/ .
The bigger initial problem, as stated in other comments, is a persistent file system. S3 for media storage is doable via a WordPress plugin (again, from the comments), but any other persistent storage for the request/script execution is the biggest hurdle. Tackle one problem at a time till you get to the end!
I have a WordPress site that gets about 200,000 page views every day. I run it on a VPS with 6 GB of RAM. I have W3 Total Cache installed right now, but the pages are still loading slowly.
What I am wondering now is which cache I should use for my site: disk, OPcache or memcache. Should I use just one, or all of them, and how should I set it up? Currently I am using basic disk caching only.
My question is basically: should I change anything, and if yes, to what? Or should I just stick with basic disk caching as is?
I have not yet tried to change anything, so that I don't make it worse, as I am quite unsure.
The answer to this question is effectively 7 years old, so I'll update the answer here. Please note that I am assuming that you are using W3TC for your website.
Disk Cache
Disk cache can generally be used for websites with low/medium traffic. One of the problems on a high-traffic website is that the disk cache gets corrupted if you use it for the Object Cache and Database Cache.
On a medium/high traffic website running on a single server I would recommend the following:
Page Cache : Use Disk Cache Enhanced
Object Cache : Use Memcached
Database Cache : Use Memcached
OPcache
OPcache stores compiled PHP code in memory and definitely increases code execution speed.
It is important to understand that the OPcache is a PHP Code Cache and does not store any pages, database queries or objects.
You can read more about OPcache here:
https://www.sitepoint.com/understanding-opcache/
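If you want to confirm that OPcache is actually active on your server, a quick check from a PHP script or from the command line with php -r might look like this (a minimal sketch; note that opcache_get_status() can be blocked by opcache.restrict_api, in which case it returns false):

    // Report whether OPcache is loaded and, if so, its basic hit statistics.
    if (function_exists('opcache_get_status') && ($status = opcache_get_status(false))) {
        echo 'OPcache enabled, hits so far: ' . $status['opcache_statistics']['hits'] . PHP_EOL;
    } else {
        echo 'OPcache is not enabled.' . PHP_EOL;
    }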
MemCached
Memcached stores data in RAM as key-value pairs and definitely improves the speed of the website, as accessing memory is much faster than accessing the hard disk.
Please note that Memcached will increase your memory usage significantly, so it's definitely worth trying different configurations in a test environment.
You can read more information on how to install Memcached on a Linux Server here: https://easyengine.io/tutorials/php/memcache/
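Once Memcached is installed and W3TC is pointed at it, a quick sanity check from PHP is worth doing before relying on it. A minimal sketch, assuming the php-memcached extension and the default 127.0.0.1:11211 endpoint (the key name is just an example):

    // Verify PHP can actually reach the memcached daemon.
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);
    $mc->set('w3tc_test_key', 'hello', 60);   // example key with a 60 second TTL
    var_dump($mc->get('w3tc_test_key'));      // should print string(5) "hello"

If the get() returns false, fix connectivity before enabling Memcached in W3TC.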
Hope this will help :)
I'm running a dv server at MediaTemple with 4 GB of RAM, and I'm just getting into looking at using memcached for my large WordPress install. I understand that even though memcached is primarily designed to be used with a multiple-server setup, it can be used on only one machine - i.e., one server that is running both the cached website and memcached.
But my question is: Is it worth the trouble to run it on a single server? If I just configure the database caching on the W3 Total Cache Wordpress plugin, would that pretty much have the same effect as configuring memcached to run on the same server as my WP install? My thanks in advance for any insight you can share --
Yes, Memcached is great to run on single servers - but it especially excels when sharing cached results across a cluster, speeding up each host as well as the cluster as a whole by eliminating much of the processing and look-ups cluster-wide.
Running Memcached with W3 Total Cache works brilliantly, it caches tons of stuff automatically as well as giving you the ability to directly store the WP database cache and object cache there too.
However, with WordPress and single-server installs I would maybe suggest you use a PHP accelerator instead, something like APC.
APC is primarily an opcode cache designed to speed up PHP execution by pre-compiling it and serving up the bytecode from a shared memory cache. It also gives you the ability to store keyed data (the database cache and object cache when using W3 Total Cache) just the same as memcached.
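To make that last point concrete, here is a minimal sketch of storing the same keyed value in APC versus memcached on the same box (key names, TTLs and $recentPosts are purely illustrative; W3 Total Cache does the equivalent for you once you select either backend):

    // Local shared memory via APC -- no network hop involved.
    apc_store('recent_posts', $recentPosts, 300);    // cache for 5 minutes
    $fromApc = apc_fetch('recent_posts');

    // The same data via memcached -- a TCP round trip, even on localhost.
    $memcache = new Memcache();
    $memcache->connect('127.0.0.1', 11211);
    $memcache->set('recent_posts', $recentPosts, 0, 300);
    $fromMemcache = $memcache->get('recent_posts');

On a single server the APC path avoids the socket overhead entirely, which is why it tends to win for the object and database caches.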
Our company currently runs two Windows 2003 servers (a web server & a MSSQL 8 database server). We're planning to add another couple of servers for redundancy / availability purposes in a web farm setup. Our web sites are predominately ASP.NET, we do have a few PHP sites, but these are mainly static with no DB.
Does anyone who has been through this process have any gotchas or other points I should be aware of? And would using Windows Server 2008 offer any additional advantages for this situation (so I can convince my boss to upgrade :) ?
Thanks.
If you have dynamic load balancing (i.e. my first request goes to server X, but my next request may go to server Y or Z), you will find out that in-proc sessions do not work. So you will either need sticky sessions (your load balancer will ALWAYS send me (= my session) to server X) or out-of-process sessions (e.g. stored in SQL Server).
Like Michael says, you'll need to take care of your sessions. Ideally make them lean and store them out of process. You'll have a similar challenge with caching, depending on how you use it, and you might be interested in looking towards a more robust caching technology if you only use ASP.NET caching.
Don't forget things like machine keys and validation in your web.config. The machineKey values need to be consistent across your servers.
Read up on IIS7 and you should be able to pick out several good examples to show off to your boss.
A web farm can give you opportunities and challenges with deployment that should not be overlooked.
I don't have specific experience with the setup above, but speaking to general moves of this kind, I would recommend phasing the approach: move to Windows 2008 first, and then build out the farm.
One additional thing to look at is your deployment plan. Deployment plans seem to be sadly overlooked and/or undervalued. Remember that you are deploying to multiple nodes and you want to take into account how you want to deploy and test in a logical fashion.
For example, assume you have four nodes in your farm. Do you pull two out of the cluster, update and test them, then swap out the other two and repeat? Determine whether your current deployment process fits in with the answer you provide. Just because you have X times the number of servers does not mean that you want or need to do X times the amount of work.
Just revisiting the caching part of the conversation for a moment. You should definitely take a look at a distributed caching solution. If you are pre-caching data and using callbacks with cache removals, you can really put a pounding on the database if you are not careful. Also, a lot of the distributed caching solutions offer some level of session state management, as well. I have been very much enjoying Microsoft's Velocity project, although it is just a second CTP release and not ready for production.
In addition to what others have said, you might want to consider looking into Richard Campbell's (of .NET Rocks!) product:
http://www.strangeloopnetworks.com/
We use the ASP.NET State Server for handling our sessions. This comes free with Windows Server 2003/2008.
We then have to make sure the machine keys are the same (a setting in your web.config files).
I then manually take each site offline (using app_offline.htm). Alternatively, you can use IIS and just turn the site off and the offline site 'on'.
That's about it. You could worry about distributed caching, but that's pretty hard-core stuff. You can get a lot of good mileage out of the default output caching in ASP.NET. I'd start there before you delve into the complexity (and cost, for some products) of distributed caching.
Oh, we're using an F5 load balancer that does NOT do sticky sessions, so we need to maintain our sessions ... which is why we're using the ASP.NET State Server.
One other gotcha aside from the Session issues described by the other posters is if the apps are writing to the local file system. Scaling out to a web farm would break the apps if they assume the files are on the local PC. For example, uploaded files might be available or not depending on which server is hit. Changing the paths to point to a shared drive should fix this.