We have a very active web page with lots of Ajax and regular updates via jQuery. It can load a large amount of data (up to 100 KB per minute per user) in peak situations, and we had 2,000 people online during the last peak.
What we would like to do is count the number of concurrent users. If we're over 500 (and you're not a registered user), then bad luck, hit the road!
Has anyone got a class or some other process? Our server recycles every hour, so I am thinking of an application-level variable that increments the current count on each successful entry (gold users are exempt from the test but still count toward the total, so we may end up with 600 users).
Has anyone else played with this idea?
TIA
Just some ideas...
' Increment the shared visitor count; Lock/Unlock prevents lost updates
Application.Lock
Application("visitors") = Application("visitors") + 1
Application.Unlock
You should stress-test this solution up to the numbers you want to allow. My fair guess is that it will probably work.
Consider counting requests to the Ajax URL instead; that gives a more accurate estimate of the load. If you go by sessions, you will not know when I've left. Counting via the Ajax requests gives a more accurate number of visitors.
Just a suggestion: in GLOBAL.ASA, in Session_OnStart, you could increment a count of running sessions in some global (Application) variable.
Do not forget to decrement it in Session_OnEnd in GLOBAL.ASA.
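If the site ever moves to ASP.NET, the same idea translates to Global.asax. A minimal sketch (assuming in-process session state, since Session_End never fires with out-of-process state):

using System;
using System.Web;

public class Global : HttpApplication
{
    // Keep a shared counter of live sessions in application state.
    protected void Session_Start(object sender, EventArgs e)
    {
        Application.Lock();
        Application["Visitors"] = (int)(Application["Visitors"] ?? 0) + 1;
        Application.UnLock();
    }

    protected void Session_End(object sender, EventArgs e)
    {
        Application.Lock();
        Application["Visitors"] = (int)(Application["Visitors"] ?? 0) - 1;
        Application.UnLock();
    }
}

The 500-user gate would then just compare this count on each new session and turn non-gold users away. Keep in mind the count trails reality: a session lingers until its timeout expires (20 minutes by default), so departed users are counted for a while.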
I am new to GA and need help interpreting the results. I built a simple report that shows username, # of sessions, # of unique page views, and average time on page. I am seeing some odd results that I can't explain.
I have users that have 0 sessions but positive unique page views. How can this happen? Doesn't visiting a page automatically initiate a session?
I have other users with 0 sessions, 0 page views, but a positive avgTimeOnPage. I am at a loss to explain this. Any ideas?
Thanks!
This can be tricky; it can happen for two reasons:
1. If you mix dimensions and metrics of different scopes, the report can behave unexpectedly. For example, combining a hit-scoped dimension such as Page Path with a session-scoped metric such as Sessions can show pages with no sessions, because the query against the database attributes each session only to its first page. Done that way, the session totals match all the other reports; attributed any other way, the report would become incomparable. There is an accessible post explaining what happens:
https://help.analyticsedge.com/googleanalytics/misunderstood-metrics-sessions-for-pages/
2. A second, much less common cause: only a non-interaction hit with no pageview was sent, which can create sessions that have no page views attached.
This question relates to WordPress's wp-cron function but is general enough to apply to any DB-intensive calculation.
I'm creating a site theme that needs to calculate a time-decaying rating for all content in the system at regular intervals. This rating determines the order of posts on the homepage, which is paged to allow visitors to potentially view all content. This rating value needs to be calculated frequently to make sure the site has fresh content listed in the proper order.
The rating calculation is not heavy but the rating needs to be calculated for, potentially, 1,000s of items and doing that hourly via wp-cron will start to cause problems for sites with lots of content. Ignoring the impact on page load (wp-cron processes requests on page loads once a certain interval has been reached), at some point the script will reach a time limit. Setting up the site to use "plain ol' cron" will solve the page loading issue but not the timeout one.
Assuming that I have no control over the sites that this will run on, what's the best way to handle this rating calculation on a regular basis? A few things that came to mind:
Only calculate the rating for the most recent 1,000 posts, assuming that the rest won't be seen much. I don't like the idea of ignoring all old content, though.
Calculate the first, say, 100 or so, then only calculate the rating for older groups if those pages are loaded. This might be hard to get right, though, and could lead to incorrect listings and ratings (which isn't a huge problem for older content but is something I'd like to avoid).
Batch process 100 or so at regular intervals, keeping track of the last one processed. This would cycle through the whole body of content eventually.
Any other ideas? Thanks in advance!
Depending on the host, you're in for a potentially sticky situation. Let me outline a couple of ideal cases and you can pick/choose where you need to.
Option 1
Mirror the database first and use a secondary app (WordPress or otherwise) to do the calculations asynchronously against that DB mirror. When they're done, they can update a static file in the project root, write data to a shared Memcached instance, trigger a POST to WordPress' admin_post endpoint to write some internal state, whatever.
The idea here is that you're removing your active site from the equation. The last thing you want to do is have a costly cron job lock the live site's database or cause queries to slow down as it does its indexing.
Option 2
Offload the calculation entirely to a separate application. Tracking ratings in real time with WordPress is a poor idea as it bypasses page caching and triggers an uncachable request every time a new rating comes in. Pushing this off to a second server means your WordPress site is super fast, and it also means you can have the second server do the calculations for you in the first place.
If you're already using something like Elasticsearch on the site, you can add ratings as an additional indexing facet. Then just update posts as ratings change, and use the ES API to query the most popular posts later.
Alternatively, you can use a hosted service like Keen IO to record and aggregate ratings.
Option 3
Still use cron, but don't schedule it as a cron job in WordPress. Instead, write a WP-CLI routine that does the reindexing for you. Then schedule a real cron job to run that routine.
This has the advantage of using PHP's command line version, which can be configured to skip the timeouts and memory limits imposed on the FPM/CGI/whatever version used to serve the site. It also means you don't have to wait for site traffic to trigger the job - and a long-running job won't block other cron events within WordPress from firing.
If using this approach, I would set the job to run hourly and, each hour, process a batch of 1/24th of the total posts in the database. You can keep track of offsets or even processed post IDs in the database; the point is just that you're silently re-indexing posts throughout the day, as in the sketch below.
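A minimal sketch of that wrap-around bookkeeping, shown in C# for concreteness (a WP-CLI command would do the same in PHP against wp_options and wp_posts; the names here are hypothetical):

using System;
using System.Collections.Generic;
using System.Linq;

// Hourly batch pass: process 1/24th of all posts, remember where we stopped,
// and wrap around so the whole corpus is re-rated once per day.
class RatingReindexer
{
    static int _offset;                                  // stands in for a persisted option
    static readonly List<int> PostIds = Enumerable.Range(1, 2400).ToList();

    static void Main()
    {
        int batchSize = Math.Max(1, PostIds.Count / 24); // 1/24th per hourly run
        foreach (int id in PostIds.Skip(_offset).Take(batchSize))
            RecalculateRating(id);
        _offset = (_offset + batchSize) % PostIds.Count; // wrap for the next run
    }

    static void RecalculateRating(int postId)
    {
        Console.WriteLine("re-rated post " + postId);    // placeholder for the decay formula
    }
}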
I wanted to know if there is a way in IIS to limit calls per user. Say a user can only make 100 calls per minute: if a user "foo" makes 100 calls in less than one minute, they are stopped from making calls from then on,
but if user "bar" is making fewer than 100 calls per minute, he should not be blocked.
Any way to set that up in IIS?
It is the web application itself that should set such limits; for example:
https://www.nuget.org/packages/MvcThrottle/
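If you want to see the mechanics, here is a minimal sketch of the kind of per-user fixed-window counter such a filter implements (this is not MvcThrottle's actual API, just the underlying idea):

using System;
using System.Collections.Concurrent;

// Allow at most `limit` calls per user per one-minute window.
public class RateLimiter
{
    private readonly ConcurrentDictionary<string, (long Window, int Count)> _counters =
        new ConcurrentDictionary<string, (long Window, int Count)>();
    private readonly int _limit;

    public RateLimiter(int limit) { _limit = limit; }

    public bool Allow(string userId)
    {
        long window = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / 60; // current minute
        var entry = _counters.AddOrUpdate(
            userId,
            _ => (window, 1),
            (_, old) => old.Window == window ? (window, old.Count + 1) : (window, 1));
        return entry.Count <= _limit;
    }
}

Call Allow(userId) at the top of each request and answer with HTTP 429 when it returns false; "foo" gets cut off after 100 calls in a minute while "bar" sails through.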
As far as I know, no. But you can do bit-rate throttling:
http://www.iis.net/downloads/microsoft/bit-rate-throttling
Maybe that will help.
I need to display how many users are browsing my site. The site is running on IIS 7, and we are using ASP.NET 3.5.
Is the number of active sessions a good way to go?
The number does not need to be very accurate.
No history is needed, I just want to know how many users are "online" right now, and display that on the page itself.
You can use Windows Performance counters for this (perfmon)
ASP.NET Applications > Sessions Active counter.
You can access these performance counters using the System.Diagnostics namespace.
This code worked for me:
using System;
using System.Diagnostics;

// Poll the aggregate "Sessions Active" counter once a second.
PerformanceCounter pc = new PerformanceCounter("ASP.NET Applications",
    "Sessions Active", "__Total__", "MYSERVERHOSTNAME.domain");

while (true)
{
    Console.WriteLine(pc.NextValue());
    System.Threading.Thread.Sleep(1000);
}
I had this problem so take a look here if the counter seems too high: http://support.microsoft.com/kb/969722
As a general principle, you have to define what you mean by the number of users online.
For example, Sessions usually last for a predefined duration, such as 30 minutes. Depending on how long you expect users to be on your site, the duration of a session could be largely attributed to idle time when the user is not on your site.
In general you want the people that have been active in the last n minutes. Sessions give you this for one fixed period (the configured session expiry), but other time windows could well be more relevant.
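A minimal sketch of that more general measure (the names are hypothetical, and ConcurrentDictionary needs .NET 4+; on 3.5, use a lock around a plain Dictionary instead):

using System;
using System.Collections.Concurrent;
using System.Linq;

// Track last activity per user; count those seen within the last n minutes.
public static class OnlineUsers
{
    private static readonly ConcurrentDictionary<string, DateTime> LastSeen =
        new ConcurrentDictionary<string, DateTime>();

    // Call from every request, e.g. in Application_BeginRequest.
    public static void Touch(string userId)
    {
        LastSeen[userId] = DateTime.UtcNow;
    }

    public static int CountOnline(TimeSpan window)
    {
        DateTime cutoff = DateTime.UtcNow - window;
        return LastSeen.Count(kv => kv.Value >= cutoff);
    }
}

Then OnlineUsers.CountOnline(TimeSpan.FromMinutes(5)) gives you "online in the last five minutes", independent of the session timeout.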
One way to accomplish this is simply to have IIS write its log data to a table in your database instead of the local file system. This is pretty easy to configure at the web server level.
Then you could easily graph that and show usage throughout the day, current, weekly, etc.
Of course, if you have a highly trafficked site, then this would result in a tremendous amount of data collected.
For EPiServer websites you may want to have a look at Live Monitor: http://www.episerver.com/add-on-store/episerver-live-monitor/
I have a page that shows statistics for users. It cannot be cached as a whole because each user has different statistics and there are many users, so the query must be made in real time.
What is the way to avoid overloading the database server when users hit F5 to refresh, or issue different queries at short intervals?
I think @Jens A. is halfway there: this is a perfect case for caching. Calculate the stats, stick them into the cache with a fixed expiry time, and only recalculate them when they're not in the cache. With the expiry set to an appropriate value (5 minutes? less?), the stats will still be reasonably up to date and will change at a reasonable rate without being recalculated every time the pages are refreshed continuously.
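A minimal cache-aside sketch using ASP.NET's built-in cache (UserStats and the query method are placeholders for whatever you actually compute):

using System;
using System.Web;
using System.Web.Caching;

// Hypothetical stats type; only the caching pattern matters here.
public class UserStats { public int PageViews; }

public static class StatsCache
{
    public static UserStats GetStats(string userId)
    {
        string key = "stats:" + userId;
        var stats = HttpRuntime.Cache[key] as UserStats;
        if (stats == null)
        {
            stats = QueryStatsFromDatabase(userId);  // the expensive real-time query
            HttpRuntime.Cache.Insert(key, stats, null,
                DateTime.UtcNow.AddMinutes(5),       // absolute expiry: at most 5 minutes stale
                Cache.NoSlidingExpiration);
        }
        return stats;
    }

    private static UserStats QueryStatsFromDatabase(string userId)
    {
        return new UserStats();                      // placeholder for the real query
    }
}

Each user's F5 storm then hits the cache after the first request, and the database sees at most one query per user per five minutes.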
You could store the generated statistics in your database for some time, and just show the old values if the statistics are requested again.