Profiling ASP.net applications over the long term?

What is the accepted way to instrument a web-site to record execution statistics?
How long it takes to X
For example, I want to know how long it takes to perform some operation, e.g. validating the user's credentials with the Active Directory server:
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
A lot of people will suggest using tracing of various kinds to output, log, or record the interesting performance metrics:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//write a number to a log
WriteToLog("TimeToCheckCredentials", sw.ElapsedTicks);
Not an X; all X
The problem with this is that I'm not interested in how long it took to validate one user's credentials against Active Directory. I'm interested in how long it took to validate thousands of users' credentials against Active Directory:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();

timeToCheckCredentialsSum += sw.ElapsedTicks;
timeToCheckCredentialsCount += 1;

if ((sw.ElapsedTicks < timeToCheckCredentialsMin) || (timeToCheckCredentialsMin == 0))
    timeToCheckCredentialsMin = sw.ElapsedTicks;
if ((sw.ElapsedTicks > timeToCheckCredentialsMax) || (timeToCheckCredentialsMax == 0))
    timeToCheckCredentialsMax = sw.ElapsedTicks;

oldMean = timeToCheckCredentialsAverage;
newMean = timeToCheckCredentialsSum / timeToCheckCredentialsCount;
timeToCheckCredentialsAverage = newMean;

if (timeToCheckCredentialsCount > 1)
{
    timeToCheckCredentialsVariance =
        ((timeToCheckCredentialsCount - 2) * timeToCheckCredentialsVariance
         + (sw.ElapsedTicks - oldMean) * (sw.ElapsedTicks - newMean))
        / (timeToCheckCredentialsCount - 1);
}
else
{
    timeToCheckCredentialsVariance = 0;
}
Which is a lot of boilerplate code that can easily be abstracted away into:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//record the sample
Profiler.AddSample("TimeToCheckCredentials", sw.ElapsedTicks);
Which is still a lot of boilerplate code that can be further abstracted into:
Profiler.Start("TimeToCheckCredentials");
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
Profiler.Stop("TimeToCheckCredentials");
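A minimal sketch of what such a Profiler class could look like (hypothetical; the names Profiler, Start, Stop, and AddSample are taken from the snippets above, everything else is an assumption). It keeps count, min, max, mean, and variance per sample name using Welford's online algorithm, so no individual samples need to be stored:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;

// Hypothetical in-memory profiler: per-name running statistics,
// updated in one pass via Welford's online algorithm.
public static class Profiler
{
    private class Stats
    {
        public long Count, Min = long.MaxValue, Max = long.MinValue;
        public double Mean, M2; // M2 = sum of squared deviations from the mean
    }

    private static readonly ConcurrentDictionary<string, Stats> stats =
        new ConcurrentDictionary<string, Stats>();
    private static readonly ConcurrentDictionary<string, Stopwatch> running =
        new ConcurrentDictionary<string, Stopwatch>();

    public static void Start(string name)
    {
        running[name] = Stopwatch.StartNew();
    }

    public static void Stop(string name)
    {
        Stopwatch sw;
        if (running.TryRemove(name, out sw))
        {
            sw.Stop();
            AddSample(name, sw.ElapsedTicks);
        }
    }

    public static void AddSample(string name, long elapsedTicks)
    {
        var s = stats.GetOrAdd(name, _ => new Stats());
        lock (s)
        {
            s.Count++;
            if (elapsedTicks < s.Min) s.Min = elapsedTicks;
            if (elapsedTicks > s.Max) s.Max = elapsedTicks;

            // Welford's update: mean and M2 in a single pass
            double delta = elapsedTicks - s.Mean;
            s.Mean += delta / s.Count;
            s.M2 += delta * (elapsedTicks - s.Mean);
        }
    }

    public static double Variance(string name)
    {
        var s = stats[name];
        lock (s) return s.Count > 1 ? s.M2 / (s.Count - 1) : 0;
    }
}
```

One caveat: keying Start/Stop by name alone is not safe across concurrent requests measuring the same operation; a real implementation would hand back a token per Start call, or callers would stick with the AddSample overload and their own Stopwatch.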
Now I have some statistics sitting in memory. I can let the web-site run for a few months, and at any time I can connect to the server and look at the profiling statistics. This is very much like SQL Server's ability to present its own running history in various reports.
But ASP.net kills apps without warning
The problem is that this is an ASP.net web-site/application. Randomly throughout the course of a year, the web-server will decide to shut down the application, by recycling the application pool:
perhaps it has been idle for 3 weeks
perhaps it reached the maximum recycle time limit (e.g. 24 hours)
perhaps a date on a file changed, and the web-server has to recompile the application
When the web-server decides to shut down, all my statistics are lost.
Are there any ASP.net performance/instrumentation frameworks that solve this problem?
Try persisting to SQL Server
I thought about storing my statistics in SQL Server. Much like ASP.net session state can be stored in SQL Server after every request completes, I could store my values in SQL Server every time:
void AddSample(String sampleName, long elapsedTicks)
{
    using (IDbConnection conn = CreateDatabaseConnection())
    {
        ExecuteAddSampleStoredProcedure(conn, sampleName, elapsedTicks);
    }
}
Except now I've introduced huge latency into my application. This profiling code is called many thousands of times a second. When the math is performed only in memory, it takes a few microseconds. Now it takes a few dozen milliseconds. (A factor of 1,000; a noticeable delay.) That's not going to work.
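A common middle ground (sketched here as an assumption, not something from the question) is to keep the math in memory and only flush the aggregates to the database periodically on a background timer, so the per-sample cost stays in microseconds and the database sees one batched write per interval. FlushStatisticsToDatabase is a hypothetical name:

```csharp
using System;
using System.Threading;

// Sketch: flush in-memory aggregates on a background timer instead of
// per sample. The database write itself is left hypothetical.
public static class ProfilerFlusher
{
    public static int FlushCount; // exposed only to make the sketch observable

    private static Timer flushTimer;

    public static void StartPeriodicFlush(TimeSpan interval)
    {
        flushTimer = new Timer(_ => FlushStatisticsToDatabase(), null, interval, interval);
    }

    static void FlushStatisticsToDatabase()
    {
        // hypothetical: write the current sum/count/min/max/variance rows
        // in one batch, instead of one round-trip per sample
        Interlocked.Increment(ref FlushCount);
    }
}
```

This trades durability for speed: a crash loses at most one interval's worth of updates rather than everything since startup.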
Save only on application shutdown
I have considered registering my static helper class with the ASP.net hosting environment by implementing IRegisteredObject:
public class ShutdownNotification : IRegisteredObject
{
    public void Stop(Boolean immediate)
    {
        Profiler.SaveStatisticsToDatabase();
    }
}
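For completeness, a registered object has to be handed to the hosting environment once at startup, typically in Global.asax (a sketch; ShutdownNotification and Profiler are the hypothetical types above). Note that the IRegisteredObject contract expects Stop to call HostingEnvironment.UnregisterObject when it is done, otherwise ASP.NET will call Stop a second time with immediate = true:

```csharp
// Global.asax.cs (sketch): register the object once at startup so ASP.NET
// calls Stop() before recycling the application.
using System;
using System.Web.Hosting;

public class Global : System.Web.HttpApplication
{
    private static readonly ShutdownNotification shutdownNotification =
        new ShutdownNotification();

    void Application_Start(object sender, EventArgs e)
    {
        HostingEnvironment.RegisterObject(shutdownNotification);
    }
}
```

Inside Stop, after Profiler.SaveStatisticsToDatabase(), the handler would call HostingEnvironment.UnregisterObject(this) to signal that it has finished.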
But I'm curious what the right way to solve this problem is. Smarter people than I must have added profiling to ASP.net before.

We use Microsoft's Application Performance Monitoring for this. It captures page load times, DB call times, API call times, etc. When a page load is unexpectedly slow, it also alerts us and provides the stack trace along with the timings of the various calls that affected the load time. It's somewhat rudimentary, but it does the trick, and it allowed us to verify that nothing was performing outside expectations.
Advance warning: the UI only works in IE.
http://technet.microsoft.com/en-us/library/hh457578.aspx

Related

Google Chrome restores session cookies after a crash, how to avoid?

On Google Chrome (I saw this with version 35 on Windows 8.1; so far I haven't tried other versions), when the browser crashes (or you simply unplug the power cable...) you'll be asked to recover the previous session when you open it again. A good feature, but it restores session cookies too.
I don't want to discuss here whether it's a bug or not; anyway, IMO it's a moderate security bug, because a user with physical access to the machine may "provoke" a crash to steal unclosed sessions with all their content (you won't be asked to log in again).
Finally, my question is: how can a web-site avoid this? If I'm using plain ASP.NET authentication with session cookies, I do not want them to survive a browser crash (even if the computer is restarted!).
There is nothing like a process ID in the User-Agent string, and JavaScript variables are all restored (so I can't store a random seed generated, for example, server-side). Is there anything else viable? The session timeout will handle this, but it's usually pretty long, so there will be an unsafe window I would like to eliminate.
I didn't find anything I can use as a process ID to be sure Chrome has not been restarted, but there is a dirty workaround: if I set up a timer (let's say with an interval of five seconds), I can check how much time has elapsed since the last tick. If the elapsed time is too long, then the session has been recovered and a logout is performed. Roughly something like this (on each page):
var lastTickTime = new Date();
setInterval(function () {
    var currentTickTime = new Date();
    // Difference is arbitrary and shouldn't be too small; here I assume
    // a 5 second timer with a maximum delay of 10 seconds.
    if ((currentTickTime - lastTickTime) / 1000 > 10) {
        // Perform logout
    }
    lastTickTime = currentTickTime;
}, 5000);
Of course it's not a perfect solution (a malicious attacker may work around this and/or disable JavaScript), but so far it's better than nothing.
New answers with a better solution are more than welcome.
Adriano's suggestion is a good idea, but the implementation is flawed. We need to remember the time from before the crash so we can compare it to the time after the crash. The easiest way to do that is to use sessionStorage.
const CRASH_DETECT_THRESHOLD_IN_MILLISECONDS = 10000;

const marker = parseInt(sessionStorage.getItem('crashDetectMarker') || new Date().valueOf());
const diff = new Date().valueOf() - marker;
console.log('diff', diff);

if (diff > CRASH_DETECT_THRESHOLD_IN_MILLISECONDS) {
    alert('log out');
} else {
    alert('ok');
}

setInterval(() => {
    sessionStorage.setItem('crashDetectMarker', new Date().valueOf());
}, 1000);
To test, you can simulate a Chrome crash by entering chrome://crash in the location bar.
Don't forget to clear out the crashDetectMarker when the user logs out.

Understanding the JIT; slow website

First off, this question has been covered a few times (I've done my research), and, for example, the right side of the SO page shows a list of related items... I have been through them all (or as many as I could find).
When I publish my pre-compiled .NET web application, it is very slow to load the first time.
I've read up on this, it's the JIT which I understand (sort of).
The problem is, after the home page loads (up to 20 seconds), many other pages load very fast.
However, it would appear that the only reason they load quickly is that the resources have already been loaded (or that they share the same compiled DLLs). Even so, some pages still take a long time.
This suggests that the JIT needs to compile different pages separately? If so, then, using a contact form as an example (where the Thank You page still needs to be JIT-compiled and so its first load is slow), the user may hit the send button multiple times while waiting for the page to be shown.
After I load all these pages which use different models or different shared HTML content, the site loads quickly as expected. I assume this issue is a common problem?
Please note, I'm using .NET 4.0, but there is no database, XML files, etc. The only IO is when an email fails to send and the error is written to a log.
So, assuming my understanding is correct, what is the approach to not have to manually go through the website and load every page?
If the above is a little too broad, then can this be resolved in the settings/configuration in Visual Studio (2012) or the web.config file (excluding adding compilation debug=false)?
In this case, there were two problems.
As per rene's comments, review http://msdn.microsoft.com/en-us/library/ms972959.aspx. The helpful part was adding the following code to the Global.asax file:
const string sourceName = ".NET Runtime";
const string serverName = ".";
const string logName = "Application";
const string uriFormat = "\r\n\r\nURI: {0}\r\n\r\n";
const string exceptionFormat = "{0}: \"{1}\"\r\n{2}\r\n\r\n";

void Application_Error(Object sender, EventArgs ea)
{
    StringBuilder message = new StringBuilder();
    if (Request != null)
    {
        message.AppendFormat(uriFormat, Request.Path);
    }
    if (Server != null)
    {
        Exception e;
        for (e = Server.GetLastError(); e != null; e = e.InnerException)
        {
            message.AppendFormat(exceptionFormat,
                e.GetType().Name,
                e.Message,
                e.StackTrace);
        }
    }
    if (!EventLog.SourceExists(sourceName))
    {
        EventLog.CreateEventSource(sourceName, logName);
    }
    EventLog log = new EventLog(logName, serverName, sourceName);
    log.WriteEntry(message.ToString(), EventLogEntryType.Error);
    //Server.ClearError(); // uncomment this to cancel the error
}
The server was maxing out during the sending of the email! My code was fine, but Task Manager showed the process hitting 100% memory...
The solution was to monitor the errors surfaced by point 1 and fix them. Then, find out why the server was being throttled when sending an email!

Memory leak while sending response from rebus handler

I saw some very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.Send method, the memory consumed by the process grows. I looked at the object graph using a memory profiler and found that Rebus is holding the response message in serialized format somewhere.
The object graph showed the hierarchy below down to the root.
System.Message --> CachedBodyMessage --> stream
I'd appreciate some pointers if anybody is aware of this behavior.
I understand that a memory leak is a grave concern, but my belief is that it is unlikely that Rebus should contain a memory leak.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You're mentioning "CachedBodyMessage" - judging by the names of the fields inside System.Messaging.Message, it sounds like something within MSMQ. To try to reproduce your issue, I wrote the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
    // arrange
    const string inputQueueName = "test.leak.input";
    var queue = new MsmqMessageQueue(inputQueueName);
    disposables.Add(queue);

    var body = Encoding.UTF8.GetBytes(new string('*', 32768));
    var message = new TransportMessageToSend
    {
        Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
        Body = body
    };

    var weakMessageRef = new WeakReference(message);
    var weakBodyRef = new WeakReference(body);

    // act
    queue.Send(inputQueueName, message, new NoTransaction());
    message = null;
    body = null;

    GC.Collect();
    GC.WaitForPendingFinalizers();

    // assert
    Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
    Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
which verifies that the sent transport message is collected as it should be (it will only pass in RELEASE mode though, because of the way DEBUG mode holds on to object references within scope).
I'll try and run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information about e.g. exactly which objects are leaking, it would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have seen something rising on the graph.

Web Server Monitoring via asp.net web page

I would like to monitor the following on a web page:
Total response time
Total bytes
Throughput (requests/sec)
RAM used
Hard drive space and IO issues
Server CPU overhead
Errors (by error code)
MSSQL load
IIS errors
I host a small cluster of servers for web hosting. I need to create a hardware view within ASP.NET to get as close to a real-time snapshot as possible of what's going on.
I have heard of Spiceworks and other tools for accomplishing this task. I agree that these are great tools, but I would like to code this myself and keep it simple.
Here is some existing code I have come up with/found:
using System;
using System.Globalization; // needed for CultureInfo

namespace WebApplication1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string[] logicalDrives = System.Environment.GetLogicalDrives();
            //do stuff to put it in the view.
        }

        protected static string ToSizeString(double bytes)
        {
            var culture = CultureInfo.CurrentUICulture;
            const string format = "#,0.0";
            if (bytes < 1024)
                return bytes.ToString("#,0", culture);
            bytes /= 1024;
            if (bytes < 1024)
                return bytes.ToString(format, culture) + " KB";
            bytes /= 1024;
            if (bytes < 1024)
                return bytes.ToString(format, culture) + " MB";
            bytes /= 1024;
            if (bytes < 1024)
                return bytes.ToString(format, culture) + " GB";
            bytes /= 1024;
            return bytes.ToString(format, culture) + " TB";
        }

        // Note: an extension method ("this TimeSpan time") must live in a
        // static class, so this is declared as a plain static helper here.
        public static string ToApproximateString(TimeSpan time)
        {
            if (time.TotalDays > 14)
                return ((int)(time.TotalDays / 7)).ToString("#,0") + " weeks";
            if (14 - time.TotalDays < .75)
                return "two weeks";
            if (time.TotalDays > 1)
                return time.TotalDays.ToString("#,0.0") + " days";
            if (time.TotalHours > 1)
                return time.TotalHours.ToString("#,0.0") + " hours";
            if (time.TotalMinutes > 1)
                return time.TotalMinutes.ToString("#,0.0") + " minutes";
            return time.TotalSeconds.ToString("#,0.0") + " seconds";
        }
    }
}
Performance counters are exposed via the System.Diagnostics.PerformanceCounter class. Here are some Performance Counters for ASP.NET. And, another how-to.
Similar to what @Sumo said, you need to use Windows Performance Counters (PCs), from the System.Diagnostics namespace.
Part of the problem with your question is that you are a little vague about what you want to measure from the perspective of PCs. PCs are very specific and very narrow; they measure one highly detailed metric. You will have to translate your requirements to the specific Windows PC that you want.
You said you want to measure:
Total response time
Total bytes
Throughput (reqs/sec)
Ram used
Hard Drives space
IO issues
Server CPU overhead
Errors (by error code)
MSSQL load
You should also consult the Windows Technet reference at http://technet.microsoft.com/en-us/library/cc776490(WS.10).aspx (it's W2K3, but it still applies to W2K8/R2). This will provide you with a wide overview and explanation of all the performance counters that you are looking for.
Running down each one:
Total response time
To my knowledge, there are no ASP.NET PCs that list this. And, it probably wouldn't be meaningful to you, anyway, as ASP.NET will also be responding to a wide variety of requests that you probably don't care how long it takes (i.e. anything ending with .axd). What I do in my projects is create a custom PC, but there are other techniques available (like using a custom trace listener).
Total bytes
Throughput (reqs/sec)
I believe there are PCs for both of these, although I think Total bytes might be listed under the Web Service category, whereas Throughput is probably an ASP.NET category.
RAM used
There is a Memory category, but you need to decide whether you are looking for working set size, physical RAM used, etc.
Hard drive free space
Check the LogicalDisk category
IO issues
What does this mean? Again, review the available PCs to see what seems most relevant.
Server CPU overhead
You will find this under the Processor category
Errors (by error code)
You can get the total number of errors thrown, or the rate at which exceptions get thrown, but if you want to collect the entries in the EventLog, you will need to use the EventLog classes in the System.Diagnostics namespace.
MSSQL load
I didn't find the reference overview of SQL Server PCs, but Brent Ozar is an expert, and he has a list of PCs to check here: http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/. This list is not likely to have changed much for SQL Server 2008/R2.
NOTES:
You may need to make sure that the identity of the application pool running your web application has been added to the computer's "Performance Monitor Users" group.
You only need to open your counters for read-only access.
Performance Counters are components, and therefore implement IDisposable. Be sure you .Dispose() them (or, better still, use using() statements).
Use the .NextValue() method to get your values; there is almost never any need to use .RawValue or .NextSample().
I'm not giving you exact names for each counter, because it's very important that you really understand what each one measures and how useful it is to you, and only you can answer that. Experiment.
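As a concrete illustration of the notes above, reading a counter looks roughly like the sketch below ("Processor" / "% Processor Time" / "_Total" is a standard Windows counter; verify any other category and counter names in perfmon before relying on them). This only runs on Windows under the .NET Framework:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterSample
{
    static void Main()
    {
        // PerformanceCounter implements IDisposable, hence the using block.
        using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            // The first NextValue() of a rate counter is always 0: it only
            // establishes a baseline sample. Wait an interval, then read again.
            cpu.NextValue();
            Thread.Sleep(1000);
            Console.WriteLine("CPU: " + cpu.NextValue().ToString("0.0") + " %");
        }
    }
}
```

The same pattern (construct, prime, sleep, read) applies to the Memory, LogicalDisk, ASP.NET, and Web Service categories mentioned above.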
I would suggest using an analytic service such as New Relic. Page for .Net usage is here New Relic for .Net.

Session is timing out on the drop down selected index change

Session is timing out on the drop down selected index change
Hello Everyone,
I am facing a weird problem here. I have a report page on which I am using a drop-down list that contains different years. When the user selects year 2009, I display the report for 2009 data. The code is given below. The website is live on our web server now. The page accesses heavy data, so sometimes it takes a minute or more to load the report for the selected year, and when that happens my session expires and the user gets redirected to the default page. But the same thing works fine in the solution on my machine and on one of our local servers. It is just not working on our live server. Please help by posting solutions if you know any.
I have also placed this line in my web.config but it is not helping:
Code:
protected void ddlYear_SelectedIndexChanged(object sender, EventArgs e)
{
    if (Session["UserId"] != null)
    {
        Session["IsDetailedReportLoaded"] = false;
        Session["IsScoreCardLoaded"] = false;
        Session["IsChartLoaded"] = false;
        Session["IsReportLoaded"] = false;

        string strYear = ddlYear.SelectedValue;
        LoadReport(Convert.ToInt16(strYear));

        lblYear.Text = strYear;
        lblAsOf.Text = strYear;
        ddlYearDetail.SelectedValue = ddlYear.SelectedValue;
        ddlYearScorecard.SelectedValue = ddlYear.SelectedValue;
        ddlYearGraph.SelectedValue = ddlYear.SelectedValue;

        mpeLoading.Hide();
    }
    else
    {
        Response.Redirect("Default.aspx");
    }
}
Thanks,
Satish k.
One possible problem could be that the web server is running out of memory and forcing the app pool to recycle, which flushes the InProc session memory. You could try using SQL Server session state instead and see if that resolves the problem. Also try monitoring the web server processes to see if they're recycling frequently.
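Switching to SQL Server session state is a web.config change along these lines (a sketch: the connection string is a placeholder, and the session state database must first be created with aspnet_regsql.exe):

```xml
<!-- web.config sketch: store session state in SQL Server so it survives
     app-pool recycles. sqlConnectionString is a placeholder. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=MyServer;Integrated Security=SSPI;"
                timeout="20" />
</system.web>
```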
You can place a
if(Session.IsNew)
check in your code and redirect/stop code execution appropriately.
I would check the Performance tab in IIS to see whether a bandwidth threshold is set.
Right click on website in IIS
Performance tab
Check "Bandwidth throttling" limit
If a threshold is set, you might be hitting the maximum bandwidth (KB per second) limit. Either disable bandwidth throttling or increase the limit.
