NGINX and memcached - full page caching and TTL - wordpress

I'm using nginx, memcached and APC for all my sites. What I host is a WordPress site, a vBulletin forum and some other sites.
I've set up nginx and memcached so that nginx first checks the memcached server to see if it has an entry for the full page; if it doesn't, the request is passed along to PHP, the full page is cached and then displayed to the user. See this link for the configuration: http://pastebin.com/ZFSrA9e5
Currently the vBulletin forum is using the "$config['Datastore']['class'] = 'vB_Datastore_Memcached';" and the WP blog is using the Memcached Object Cache (http://wordpress.org/extend/plugins/memcached/)
I am only caching WP as the full page in memcached (as explained above) at the moment to see if I run into any issues - so far so good.
What I want to achieve is good loading times and low load. The issues/questions I've run into are these:
What happens if, for example, a user logs in for the first time and memcached caches the page generated for that user, and the next user then gets served that first user's cached page - does anything take this into account/prevent this?
How/when will memcached/nginx flush the full-site cache in order to update the cache?
Is it recommended to run both APC and memcached? As far as I'm aware, memcached caches small values and APC caches the compiled PHP code, correct?
Would be awesome if someone could enlighten me on these questions.

1) Your cached response depends solely on this:
set $memcached_key "wordpress:$request_uri";
So each cached entry depends only on the URI; user auth information is not taken into account. The second request will get the same page as the first one because it resolves to the same memcached key. If you want to store separate cache entries for each logged-in user, you'll need to set a more distinct key, something like this:
set $memcached_key "wordpress:$request_uri$scheme$host$cookie_PHPSESSID";
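For reference, a minimal sketch of the PHP side of such a setup, assuming the PHP Memcached extension and the simple key format from the original config (wordpress:$request_uri); whatever key nginx is configured to look up must be exactly the key PHP writes, and the actual plugin used in the pastebin configuration may differ:

<?php
// Hypothetical sketch: store the fully rendered page under the same key nginx looks up.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

ob_start();
// ... let WordPress render the page here ...
$html = ob_get_clean();

$key = 'wordpress:' . $_SERVER['REQUEST_URI'];
$memcached->set($key, $html, 300); // TTL of 300 seconds is only an example
echo $html;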
2) This depends on the WP plugin. Nginx never flushes the cache; to force a flush you'll need to restart memcached (or send it a flush_all command).
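If a full restart/flush is too heavy-handed, a rough alternative sketch (the hook usage here is illustrative and not part of any existing plugin; it assumes the 'wordpress:' key prefix from the config above) would be to delete just the affected entry whenever a post is saved:

<?php
// Hypothetical sketch: drop the cached page of a post whenever that post is saved.
add_action('save_post', function ($post_id) {
    $memcached = new Memcached();
    $memcached->addServer('127.0.0.1', 11211);

    $path = parse_url(get_permalink($post_id), PHP_URL_PATH);
    $memcached->delete('wordpress:' . $path);
});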
3) Yes; they do different things. APC caches compiled PHP code, so it doesn't have to be compiled on every request (it only recompiles after a server restart or when a PHP file changes). Memcached stores portions of a page or the whole page (your scenario) in memory, and when the key provided by nginx is found in memcached, PHP is not even involved - the whole page is served directly from memcached's memory.
hope this helps)

Related

Cloudflare optimization techniques (free plan)

OK, so I'm trying to benefit from CF's free plan and squeeze as much as I can out of it. The main goal is to get the site served from the CF cache so it will load faster in the browser, if only for first-time visitors and search engines. It is a WordPress site, so it can be a little slower than other sites.
So, to have CF cache properly I have set the following rules. You probably know that under the free plan 3 is the maximum:
https://example.com/wp-content/*
Browser Cache TTL: a year, Cache Level: Cache Everything, Edge Cache TTL: a month
https://example.com/wp-admin/*
Security Level: High, Cache Level: Bypass, Disable Apps, Disable Performance
https://example.com/*
Auto Minify: HTML, CSS & JS, Browser Cache TTL: 30 minutes, Cache Level: No Query String, Edge Cache TTL: 2 hours, Email Obfuscation: On, Automatic HTTPS Rewrites: On
Exactly in this order. These should allow CF to cache the files stored in the wp-content (uploads etc) for the maximum amount of time, then ignore and bypass the wp-admin and finally serve all the others (products in my case, blog articles, pages and so on) from its cache, although these should have a shorter time. I've also set the caching level in the Cloudflare dashboard to 'No query string'.
So far CF caches all the above and first time visitors or search engines should get a super fast page.
Next, I've added the following in the site's footer:
<script>
jQuery(document).ready(function () {
    // Append the current timestamp as a query string to every link on the page
    var e = "?" + (new Date).getTime();
    jQuery("a").each(function () {
        jQuery(this).attr("href", jQuery(this).attr("href") + e);
    });
});
</script>
This script appends the current timestamp to all links on the page. By doing this I want the visitor to get the latest version of the page (i.e. from my server), not the one stored by CF, because CF should not cache URLs such as https://example.com/samplepage?234523445345, as it was instructed previously in both the cache settings and the page rules.
Now, what I'm worried about is CF caching pages belonging to logged-in members, such as account details. While the query-string JavaScript does work, and members would click a link such as /account?23456456 so the page should not get cached, I have to wonder 'what if?'.
So, is there any better way to achieve what I am trying to (fast loading without caching members pages and sensitive details, such as shopping cart)? Or is this the maximum I can get out of the free plan?
In your case, since it's a completely WordPress site, it is much simpler to optimise than other platforms. Cloudflare has a newer service called Automatic Platform Optimization (APO). Enable it in your Cloudflare dashboard, install the Cloudflare plugin in WordPress, and connect Cloudflare to WordPress through APO. Then try to cache everything from your origin server. This will reduce TTFB and RTT, and those two will definitely improve your site's performance and speed.
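As an extra safeguard for member pages (a sketch only, separate from the APO suggestion above; the page slugs are assumptions, and a "Cache Everything" page rule may still need to be configured to respect origin headers), you could have WordPress mark sensitive responses as uncacheable:

<?php
// Hypothetical sketch: send no-cache headers for logged-in users and sensitive pages
// so that caches which honour origin Cache-Control headers will not store them.
add_action('send_headers', function () {
    if (is_user_logged_in() || is_page(array('account', 'cart'))) {
        header('Cache-Control: private, no-store, max-age=0');
    }
});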

Wordpress logging requests into a database

I am trying to create a plugin which logs HTTP requests from users into a database. So far I've logged the requests for PHP files by hooking my function to the init hook. But now I want to know if I can also log requests for files such as images, documents, etc. Is there any PHP code executed when someone requests those files? Thank you.
Not by default, no. The normal mod_rewrite rules WordPress uses (not to be confused with WP's own rewrite rules) specifically exclude any existing files such as images, CSS or JavaScript files. Those will be handled directly by Apache.
You obviously could add a custom script that runs on each request, logs the access to the database, reads those files and prints their content to the client, but it would come at a considerable cost, I'm afraid.
Apache, albeit not the fastest webserver around, is much, much faster at delivering a file to a client than running a PHP script, setting up a database connection, logging and so on would be.
You'd get much higher server load, and probably noticeably slower page loads.
Instead, I recommend that you parse the access logs. They'll most likely contain all of the data you're looking for, and if you have access to the configuration, you can add specific headers sent by the client. You can easily do this with a cronjob that runs once a day, and it doesn't even have to run on the same server.
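A rough sketch of such a cronjob, assuming the Apache Common Log Format and a hypothetical request_log table (the log path, credentials and schema are all illustrative):

<?php
// Hypothetical cron script: parse the rotated Apache access log and store one row per request.
$pdo = new PDO('mysql:host=localhost;dbname=wp_logs', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO request_log (ip, time, request, status, bytes) VALUES (?, ?, ?, ?, ?)');

$handle = fopen('/var/log/apache2/access.log.1', 'r');
while (($line = fgets($handle)) !== false) {
    // Common Log Format: host ident authuser [date] "request" status bytes
    if (preg_match('/^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)/', $line, $m)) {
        $stmt->execute(array($m[1], $m[2], $m[3], (int) $m[4], $m[5] === '-' ? 0 : (int) $m[5]));
    }
}
fclose($handle);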

Is the ASP.NET Cache independent for each host header set in IIS7

I have a site that dynamically loads website contents based on domain host name, served from IIS7. All of the domains share a cached collection of settings. The settings are being flushed from the cache on almost every page request, it seems. This is verified by logging the times at which the Cache value is null and reloaded from SQL. This code works as expected on other servers and sites. Is it possible that ASP.NET Cache is being stored separately for each domain host name?
Having different host headers for your site will not affect the cache.
There are a few reasons why your Cache might be getting flushed. Off the top of my head I would say either your AppDomain is getting dumped, your web.config file is getting updated, or some piece of code is explicitly expiring/clearing out your cache.
The cache is per application; I would look at a few other items.
Is your application pool recycling (timeout, memory limit, file changes, other)?
Do you have web gardening enabled? This would create a separate cache for each worker process in the garden.
One other thing to check -- how much memory is available? The ASP.NET cache will start ejecting stuff left and right once it senses a memory crunch. Remember, RAM is expensive and valuable storage . . .

akamai refresh cache before deployment and do cutover at specified time

My objective is to achieve zero downtime during deployment. My site uses akamai as CDN. Let's say I have a primary and a secondary cluster of IIS servers. During deployment, the updates are made to the secondary cluster. Before switching over from primary to secondary, can I request akamai to cache the content and do a cutover at a specified time?
The problem you are going to have is to guarantee that your content is cached on ALL akamai servers. Is the issue that you want to force content to be refreshed as soon as you cutover?
There are a few options here.
1 - Use a version in the requests, e.g. "?v=1". This version would ALWAYS be requested from origin and would be appended to every request. As soon as you update your site, update the version on origin so that the next request appends "?v=2", thus "busting" the cache and forcing an origin hit for all requests (see the sketch after this list).
2 - Change your akamai config to "honor webserver TTLs". You can then set very low or almost 0 TTLs right before you cut over and then increase them gradually after the cutover.
3 - Configure akamai to use If-Modified-Since. This will force akamai to "validate" whether any requests have changed.
4 - Use ECCU, which can purge a whole directory; this can take up to 40 minutes, but should be manageable during a maintenance window.
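A minimal sketch of option 1 above (the helper name is an assumption; in practice the version usually comes from a build number or deployment timestamp rather than being hard-coded):

<?php
// Hypothetical helper: append a deployment version to asset URLs so that bumping
// the version on origin "busts" the CDN cache for every asset at cutover.
define('ASSET_VERSION', '2'); // bump this with each deployment

function versioned_url($path) {
    return $path . '?v=' . ASSET_VERSION;
}

// Usage in a template:
echo '<link rel="stylesheet" href="' . versioned_url('/css/site.css') . '">';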
I don't think this would be possible, based on my experience with Akamai (but things change faster than I can keep up with). You can flush the content manually (at a cost), so you could flush /*. We used to do this for particular files during deployments (never /*, because we had over 1.2M URLs), but I can't see how Akamai could cache a non-visible version of your site for an instant cutover without having some secondary domain and origin.
However I have also found that Akamai are pretty good to deal with and it would definitely be worth contacting them in relation to a solution.

drupal persistent login, why use?

Why do I have to use some persistent-login module to keep my users logged into Drupal 6
for a longer period of time?
Why does changing php.ini or settings.php not work?
From the browser's web tools I can see my cookies are set to expire the next day, but after a few hours it logs me out anyway.
This seems to happen spontaneously, with no pattern to follow or predict.
Why does this "keep-alive" login module exist in Drupal?
You do not have to use the persistent-login module to achieve longer login periods. You can simply adjust the ini_set() calls for the session.* PHP settings in your Drupal instance's settings.php file (especially session.cookie_lifetime and session.gc_maxlifetime).
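For example, in sites/default/settings.php (values are illustrative, roughly two weeks; Drupal 6's default settings.php already ships with similar ini_set() lines you can adjust):

ini_set('session.gc_maxlifetime', 1209600);   // server-side session lifetime in seconds
ini_set('session.cookie_lifetime', 1209600);  // lifetime of the session cookie in the browser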
If adjusting those does not work for you, you should ensure that your server configuration allows overriding them from PHP.
Also, Drupal uses the standard PHP session storage mechanisms - if there are other PHP apps running on the same server, they might interfere with your session lifetime settings, depending on storage path configurations. See point 2 in this answer for information on that.
The persistent login module aims to make the configuration easier, but in particular adds more features, e.g. allowing a general 'remember me' option while still requiring reauthentication for sensitive operations (like changing the password) to minimize the risks associated with long login periods.
Check this article linked from the module's project page, as well as this article linked from there, for some in-depth explanations concerning the handling of persistent logins.
Drupal overrides the internal PHP session save handler in includes/bootstrap.inc and has some non-standard session code there. I haven't followed it through, though.
Beyond that Drupal's settings.php will override php.ini.

Resources