How to find concurrent requests per second made to a server using Google Analytics?

Our website has been running for the past 6 months.
I would like to know how to measure the concurrent users we are serving, or the requests per second we are getting, so that we can do some performance tuning.
We use Apache, PHP (TYPO3 CMS), Google Analytics and AWStats.
Thank you.

The new Google Analytics Interface has an option to view the users in real time.
This will only show you the views of your HTML pages, though (or any other call to GA, like file downloads, if configured).
It will not show you people accessing assets such as images, CSS or JavaScript files.
To increase the performance of TYPO3, there are a couple of things to consider:
- use USER instead of USER_INT plugins
- if a user is logged in, switch the caching of the extension with a condition (see the code at the end; SO doesn't like code in bullet lists)
- use a PHP cache such as APC, see this discussion: apc vs eaccelerator vs xcache
- use a reverse proxy such as Varnish in combination with the TYPO3 extension moc_varnish
# cache the plugin's output by default
plugin.tx_myplugin = USER
# for logged-in users, switch to the uncached variant
[loginUser = *]
plugin.tx_myplugin = USER_INT
[global]

Related

Cloudflare optimization techniques (free plan)

OK, so I'm trying to benefit from CF's free plan and squeeze as much as I can out of it. The main goal is to get the site served from the CF cache so it will load faster in the browser, if only for the first visit and search engines. It is a WordPress site, so it can be a little slower than other sites.
So, to have CF cache properly, I have set the following rules. You probably know that under the free plan three page rules is the maximum:
https://example.com/wp-content/*
Browser Cache TTL: a year, Cache Level: Cache Everything, Edge Cache TTL: a month
https://example.com/wp-admin/*
Security Level: High, Cache Level: Bypass, Disable Apps, Disable Performance
https://example.com/*
Auto Minify: HTML, CSS & JS, Browser Cache TTL: 30 minutes, Cache Level: No Query String, Edge Cache TTL: 2 hours, Email Obfuscation: On, Automatic HTTPS Rewrites: On
Exactly in this order. These should allow CF to cache the files stored in wp-content (uploads etc.) for the maximum amount of time, then ignore and bypass wp-admin, and finally serve all the others (products in my case, blog articles, pages and so on) from its cache, although these have a shorter TTL. I've also set the caching level in the Cloudflare dashboard to 'No query string'.
So far CF caches all the above and first time visitors or search engines should get a super fast page.
Next, I've added the following in the site's footer:
<script>
jQuery(document).ready(function () {
    // cache-buster: append the current timestamp as a query string to every link
    var buster = "?" + (new Date).getTime();
    jQuery("a").each(function () {
        jQuery(this).attr("href", jQuery(this).attr("href") + buster);
    });
});
</script>
This script appends the current timestamp to all links on the page. By doing this I want the visitor to get the latest version of the page (i.e. from my server), not the one stored by CF, because CF should not cache URLs such as https://example.com/samplepage?234523445345, as it was instructed previously in both the cache settings and the page rules.
Now, what I'm worried about is CF caching pages belonging to logged-in members, such as account details. While the query-string JavaScript does work, and members would click a link such as /account?23456456 so the page should not get cached, I have to wonder 'what if?'.
So, is there any better way to achieve what I am trying to (fast loading without caching members pages and sensitive details, such as shopping cart)? Or is this the maximum I can get out of the free plan?
In your case, a site that is completely WordPress is really much simpler to optimise than other platforms. Cloudflare has a service called Automatic Platform Optimization (APO). Enable it in your Cloudflare dashboard, install the Cloudflare plugin in WordPress, and then connect Cloudflare to WordPress through APO. Then try to cache everything from your origin server. This will reduce the TTFB and RTT, which will definitely improve your site's performance and speed.

WordPress logging requests into a database

I am trying to create a plugin which logs HTTP requests from users into a database. So far I've logged the requests for PHP files by hooking my function to the init hook. But now I want to know if I can also log requests for files such as images, documents, etc. Is there any PHP code executed when someone requests those files? Thank you.
Not by default, no. The normal mod_rewrite rules WordPress uses (not to be confused with WP's own rewrite rules) specifically exclude any existing files such as images, CSS or JavaScript files. Those are handled directly by Apache.
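For reference, the standard WordPress block in .htaccess looks like this; the two RewriteCond lines are what send requests for existing files and directories straight to Apache instead of index.php:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress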
You obviously could add a custom script that runs on each request, logs the access to the database, reads those files and prints their content to the client, but it would come at a considerable cost, I'm afraid.
Apache, albeit not the fastest webserver around, is much, much faster at delivering a file to a client than running a PHP script, setting up a database connection, logging and so on would be.
You'd get much higher server load, and probably noticeably slower page loads.
Instead, I recommend that you parse the access logs. They most likely contain all of the data you're looking for, and if you have access to the server configuration, you can additionally log specific headers sent by the client. You can easily do this with a cronjob that runs once a day, and it doesn't even have to run on the same server.
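As a rough sketch of that approach (assuming the default Apache combined log format; the log path, DSN and the requests table are hypothetical placeholders):
<?php
// Parse an Apache combined-format access log and store each request
// in a database table. Adjust the path, credentials and table to your setup.
$pdo  = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
$stmt = $pdo->prepare(
    'INSERT INTO requests (ip, time, method, path, status) VALUES (?, ?, ?, ?, ?)'
);

// Combined format: IP - user [time] "METHOD path HTTP/x.x" status size "referer" "agent"
$pattern = '/^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3})/';

$handle = fopen('/var/log/apache2/access.log', 'r');
while (($line = fgets($handle)) !== false) {
    if (preg_match($pattern, $line, $m)) {
        $stmt->execute(array($m[1], $m[2], $m[3], $m[4], $m[5]));
    }
}
fclose($handle);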

accessing WordPress DB from remote server

Need some advice before starting to develop something. I have 15 WordPress websites on different installs, and a remote server which gets data 24/7 from those websites.
I've reached a point where I want the server to modify the websites based on its calculated data.
The question is this:
Should I allow the server to access the WP DB remotely and modify things without WordPress in the loop?
Or use the WP REST API and supply some secured routes which provide data, accept data and make those changes?
My instinct is to use the WP API, but after all it's PHP (nginx+apache), which has its limits (timeouts, for example), and I find it hard to run heavy, long processes on WP itself.
I can divide the tasks into different steps, for example:
- fetch data (a simple GET)
- do some processing on the remote server
- loop and push the modifications in small batches to another route
My concern is that this cycle requires a perfect match between the remote server and the WP API, and any change or fix on the WP side means plugin updates on all the websites, which is not much fun.
Hoping for any ideas and suggestions to move this forward.
"use WP REST API and supply some secured routes which provide data and accept data and make those changes", indeed.
I don't know why timeouts or other limits would cause a problem - using the API is the best way for this kind of case. You can avoid timeout problems with some adjustments on the web server's side.
Or you can increase the memory and timeout limits exclusively for the requesting server,
e.g.
// raise the limits only for requests coming from your main server
if ($_SERVER['REMOTE_ADDR'] == 'YOUR_MAIN_SERVER_IP') {
    ini_set('max_execution_time', 1000);
    ini_set('memory_limit', '1024M');
}
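On the WP side, a secured route boils down to a permission_callback. A minimal sketch (the mysite/v1 namespace, route and callback names are hypothetical placeholders, not anything WordPress ships with):
<?php
// Register a custom REST route that only privileged callers may use;
// permission_callback does the gatekeeping.
add_action('rest_api_init', function () {
    register_rest_route('mysite/v1', '/sync', array(
        'methods'             => 'POST',
        'callback'            => 'mysite_handle_sync',
        'permission_callback' => function () {
            // example check: require an authenticated user with editing rights
            return current_user_can('edit_posts');
        },
    ));
});

function mysite_handle_sync(WP_REST_Request $request) {
    // one small batch of changes sent by the remote server
    $items = $request->get_param('items');
    // ... apply the changes here ...
    return rest_ensure_response(array(
        'processed' => is_array($items) ? count($items) : 0,
    ));
}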

NGINX and memcached - full page caching and TTL

I'm using nginx, memcached and APC for all my sites. What I host is a WordPress site, a vBulletin forum and some other sites.
I've set up nginx and memcached so that nginx first checks the memcached server to see if it has an entry for the full page; if it doesn't, it passes the request along to PHP, caches the full page and then displays it to the user. See this link for the configuration: http://pastebin.com/ZFSrA9e5
Currently the vBulletin forum is using the "$config['Datastore']['class'] = 'vB_Datastore_Memcached';" and the WP blog is using the Memcached Object Cache (http://wordpress.org/extend/plugins/memcached/)
I am only caching WP as the full page in memcached (as explained above) at the moment to see if I run into any issues - so far so good.
What I want to achieve is good loading times and low load. The issues I've run into/questions I have are these:
Say a user logs in for the first time and memcached caches that first user's page. Then the next user comes along and memcached serves them the page cached for the first user - does anything take this into account/prevent it?
How/when will memcached/nginx flush the full-site cache in order to update the cache?
Is it recommended to run both APC and memcached? As far as I'm aware, memcached caches small values and APC caches the compiled PHP code, correct?
Would be awesome if someone could enlighten me on these questions.
1) Your cache response depends solely on this:
set $memcached_key "wordpress:$request_uri";
So each cached entry depends only on the URI, and user auth information is not taken into account. The second request will be served the same as the first one because it will have the same memcached key. If you want to store a separate cache entry for each logged-in user, you'll need to set a more distinct key, something like this:
set $memcached_key "wordpress:$request_uri$scheme$host$cookie_PHPSESSID";
2) This depends on the WP plugin. Nginx never flushes the cache; to force a flush you'll need to restart memcached.
3) Yes, both of them do different things. APC caches compiled PHP code, so it doesn't have to be compiled on each request (it only recompiles on a server restart or when a PHP file is changed). Memcached stores portions of a page, or the whole page (your scenario), in memory, and when the KEY provided by nginx is found in memcached, PHP is not even involved - the whole page is served directly from memcached's memory.
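Keep in mind that nginx only ever reads from memcached; the PHP side has to populate the cache under the exact same key. A rough sketch of that write path (key scheme taken from the nginx config above; the 5-minute TTL is an arbitrary example):
<?php
// Render the page, then store the HTML under the key nginx will look up.
// The key must match the nginx config exactly: "wordpress:" . $request_uri
$memcache = new Memcached();
$memcache->addServer('127.0.0.1', 11211);

ob_start();
// ... normal PHP/WordPress page rendering happens here ...
$html = ob_get_contents();
ob_end_flush();

$memcache->set('wordpress:' . $_SERVER['REQUEST_URI'], $html, 300);
As a side note, calling Memcached::flush() from a deploy or purge script will also empty the cache without restarting memcached.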
hope this helps)

drupal persistent login, why use?

Why do I have to use some persistent-login module to keep my users logged into Drupal 6
for a longer period of time?
Why does changing php.ini or settings.php not work?
From a 'webtools' inspector I see my cookies being set to expire the next day, but after a few hours it still manages to log me out.
This seems like a spontaneous action, with no pattern to follow/predict.
Why does this "keep-alive" login exist in Drupal?
You do not have to use the persistent login module to achieve longer login periods. You can simply adjust the ini_set() calls for the session.* PHP settings in your Drupal instance's settings.php file (especially session.cookie_lifetime and session.gc_maxlifetime).
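Drupal 6's default settings.php already contains such calls; a sketch with example values for a two-week login (lifetimes in seconds, pick your own):
<?php
// In sites/default/settings.php - both lifetimes in seconds (example values)
ini_set('session.cookie_lifetime', 1209600); // keep the session cookie for 14 days
ini_set('session.gc_maxlifetime', 1209600);  // keep the server-side session data as long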
If adjusting those does not work for you, you should ensure that your server configuration allows overriding them from PHP.
Also, Drupal uses the standard PHP session storage mechanisms - if there are other PHP apps running on the same server, they might interfere with your session lifetime settings, depending on storage path configurations. See point 2 in this answer for information on that.
The persistent login module aims to make the configuration easier, but more importantly adds extra features, e.g. allowing a general 'remember me' option while still requiring reauthentication for sensitive operations (like changing the password) to minimize the risks associated with long login periods.
Check this article linked from the module's project page, as well as this article linked from there, for some in-depth explanations concerning the handling of persistent logins.
Drupal overrides the internal PHP session save handler in includes/bootstrap.inc and has some non-standard session code there. I haven't followed it all the way through, though.
Beyond that, Drupal's settings.php will override php.ini.
