Web Server Monitoring via an ASP.NET web page - asp.net

I would like to monitor the following on a web page:
Total response time
Total bytes
Throughput (requests/sec)
RAM used
Hard drive space and IO issues
Server CPU overhead
Errors (by error code)
MSSQL load
IIS errors
I host a small cluster of servers for web hosting. I need to create a hardware view within ASP.NET to get as close to a real-time snapshot as possible of what's going on.
I have heard of Spiceworks and other tools for accomplishing this task. I agree that these are great tools, but I would like to code this myself and keep it simple.
Here is some existing code I have come up with/found:
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace WebApplication1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Enumerate the server's drives; the results still need to be bound to the view.
            string[] logicalDrives = System.Environment.GetLogicalDrives();
            //do stuff to put it in the view.
        }

        // Formats a byte count as a human-readable size (KB/MB/GB/TB).
        protected static string ToSizeString(double bytes)
        {
            var culture = CultureInfo.CurrentUICulture;
            const string format = "#,0.0";
            if (bytes < 1024)
                return bytes.ToString("#,0", culture);
            bytes /= 1024;
            if (bytes < 1024)
                return bytes.ToString(format, culture) + " KB";
            bytes /= 1024;
            if (bytes < 1024)
                return bytes.ToString(format, culture) + " MB";
            bytes /= 1024;
            if (bytes < 1024)
                return bytes.ToString(format, culture) + " GB";
            bytes /= 1024;
            return bytes.ToString(format, culture) + " TB";
        }

        // Formats a TimeSpan as an approximate duration ("3.5 days", "two weeks", ...).
        // Note: this cannot be declared as an extension method ("this TimeSpan time") here,
        // because extension methods must live in a non-nested static class.
        public static string ToApproximateString(TimeSpan time)
        {
            if (time.TotalDays > 14)
                return ((int)(time.TotalDays / 7)).ToString("#,0") + " weeks";
            if (14 - time.TotalDays < .75)
                return "two weeks";
            if (time.TotalDays > 1)
                return time.TotalDays.ToString("#,0.0") + " days";
            else if (time.TotalHours > 1)
                return time.TotalHours.ToString("#,0.0") + " hours";
            else if (time.TotalMinutes > 1)
                return time.TotalMinutes.ToString("#,0.0") + " minutes";
            else
                return time.TotalSeconds.ToString("#,0.0") + " seconds";
        }
    }
}

Performance counters are exposed via the System.Diagnostics.PerformanceCounter class. There is a dedicated set of performance counters for ASP.NET, and MSDN has how-to articles on reading them from code.

Similar to what @Sumo said, you need to use Windows Performance Counters (PCs), exposed through the System.Diagnostics namespace.
Part of the problem with your question is that you are a little vague about what you want to measure from the perspective of PCs. PCs are very specific and very narrow; they measure one highly detailed metric. You will have to translate your requirements to the specific Windows PC that you want.
You said you want to measure:
Total response time
Total bytes
Throughput (reqs/sec)
RAM used
Hard drive space
IO issues
Server CPU overhead
Errors (by error code)
MSSQL load
You should also consult the Windows Technet reference at http://technet.microsoft.com/en-us/library/cc776490(WS.10).aspx (it's W2K3, but it still applies to W2K8/R2). This will provide you with a wide overview and explanation of all the performance counters that you are looking for.
Running down each one:
Total response time
To my knowledge, there are no ASP.NET PCs that list this. And, it probably wouldn't be meaningful to you, anyway, as ASP.NET will also be responding to a wide variety of requests that you probably don't care how long it takes (i.e. anything ending with .axd). What I do in my projects is create a custom PC, but there are other techniques available (like using a custom trace listener).
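As a purely illustrative sketch of the custom-counter approach (the category and counter names "MyWebApp", "Request Time", and "Request Time Base" are made up for this example, and creating the category requires administrative rights, so it is usually done once by an installer rather than by the web application itself):
// Requires using System.Diagnostics;
// One-time setup, typically performed by an installer.
if (!PerformanceCounterCategory.Exists("MyWebApp"))
{
    var counters = new CounterCreationDataCollection
    {
        new CounterCreationData("Request Time", "Average request time", PerformanceCounterType.AverageTimer32),
        new CounterCreationData("Request Time Base", "Base counter for Request Time", PerformanceCounterType.AverageBase)
    };
    PerformanceCounterCategory.Create("MyWebApp", "Custom web app counters",
        PerformanceCounterCategoryType.SingleInstance, counters);
}

// Per request: time the work and feed the counter pair.
var requestTime = new PerformanceCounter("MyWebApp", "Request Time", readOnly: false);
var requestTimeBase = new PerformanceCounter("MyWebApp", "Request Time Base", readOnly: false);
var sw = Stopwatch.StartNew();
// ... handle the request ...
sw.Stop();
requestTime.IncrementBy(sw.ElapsedTicks);   // AverageTimer32 expects Stopwatch ticks
requestTimeBase.Increment();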
Total bytes
Throughput (reqs/sec)
I believe there are PCs for both of these, although I think Total bytes might be listed under the Web Service category, whereas Throughput is probably an ASP.NET category.
RAM used
There is a Memory category, but you need to decide whether you are looking for working set size, physical RAM used, etc.
Hard drive free space
Check the LogicalDisk category.
IO issues
What does this mean? Again, review the available PCs to see what seems most relevant.
Server CPU overhead
You will find this under the Processor category.
Errors (by error code)
You can get the total number of errors thrown, or the rate at which exceptions get thrown, but if you want to collect the entries in the EventLog, you will need to use the EventLog classes in the System.Diagnostics namespace.
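If you do go the EventLog route, a minimal sketch of pulling recent error entries out of the Application log could look like this (the filtering and what you do with each entry are illustrative):
// Requires using System.Diagnostics; EventLog is IDisposable, so wrap it in using().
using (var log = new EventLog("Application"))
{
    // For a large log you would only walk the most recent entries.
    foreach (EventLogEntry entry in log.Entries)
    {
        if (entry.EntryType == EventLogEntryType.Error)
        {
            // e.g. add entry.TimeGenerated, entry.Source and entry.Message to your view model
        }
    }
}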
MSSQL load
I didn't find the reference overview of SQL Server PCs, but Brent Ozar is an expert, and he has a list of PCs to check here: http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/. This list is not likely to have changed much for SQL Server 2008/R2.
NOTES:
You may need to make sure that the identity for the application pool running your web application has been added to the computer's local Performance Monitor Users group.
You only need to open your counters for read-only access.
Performance Counters are components, and therefore implement IDisposable. Be sure you .Dispose() them (or, better still, use using() statements).
Use the .NextValue() method to get your values; there is almost never any need to use .RawValue or .NextSample().
I'm not giving you exact names for each counter, because it's very important that you really understand what each one measures and how useful it is to you, and only you can answer that. Experiment.
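That said, purely to illustrate the mechanics above (using statements, NextValue, and the fact that rate-style counters need two samples before they return anything meaningful), a sketch with two common counters might look like this; "% Processor Time"/"_Total" and "Available MBytes" are standard Windows counters, but verify in Performance Monitor that they exist on your own servers:
// Requires using System.Diagnostics; and using System.Threading;
using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
using (var freeRam = new PerformanceCounter("Memory", "Available MBytes"))
{
    cpu.NextValue();              // the first sample of a rate counter is always 0
    Thread.Sleep(1000);           // give the counter an interval to measure over
    float cpuPercent = cpu.NextValue();
    float availableMb = freeRam.NextValue();
    // bind cpuPercent / availableMb to the page
}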

I would suggest using an analytics service such as New Relic. Their page for .NET usage is here: New Relic for .Net.

Related

System.IO.Directory::GetFiles() Polling from AX 2009, Only Seeing New Files Every 10s

I wrote code in AX 2009 to poll a directory on a network drive, every 1 second, waiting for a response file from another system. I noticed that using a file explorer window, I could see the file appear, yet my code was not seeing and processing the file for several seconds - up to 9 seconds (and 9 polls) after the file appeared!
The AX code calls System.IO.Directory::GetFiles() using ClrInterop:
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
files = System.IO.Directory::GetFiles(#POLLDIR,'*.csv');
// etc...
CodeAccessPermission::revertAssert();
After much experimentation, it emerges that the first time in my program's lifetime, that I call ::GetFiles(), it starts a notional "ticking clock" with a period of 10 seconds. Only calls every 10 seconds find any new files that may have appeared, though they do still report files that were found on an earlier 10s "tick" since the first call to ::GetFiles().
If, when I start the program, the file is not there, then all the other calls to ::GetFiles(), 1 second after the first call, 2 seconds after, etc., up to 9 seconds after, simply do not see the file, even though it may have been sitting there since 0.5s after the first call!
Then, reliably, and repeatably, the call 10s after the first call, will find the file. Then no calls from 11s to 19s will see any new file that might have appeared, yet the call 20s after the first call, will reliably see any new files. And so on, every 10 seconds.
Further investigation revealed that if the polled directory is on the AX AOS machine, this does not happen, and the file is found immediately, as one would expect, on the call after the file appears in the directory.
But this figure of 10s is reliable and repeatable, no matter what network drive I poll, no matter what server it's on.
Our network certainly doesn't have 10s of latency to see files; as I said, a file explorer window on the polled directory sees the file immediately.
What is going on?
Sounds like your issue is due to SMB caching - from this technet page:
Name, type, and ID: Directory Cache [DWORD] DirectoryCacheLifetime
Registry key the cache setting is controlled by: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters
This is a cache of recent directory enumerations performed by the client. Subsequent enumeration requests made by client applications as well as metadata queries for files in the directory can be satisfied from the cache. The client also uses the directory cache to determine the presence or absence of a file in the directory and uses that information to prevent clients from repeatedly attempting to open files which are known not to exist on the server. This cache is likely to affect distributed applications running on multiple computers accessing a set of files on a server – where the applications use an out of band mechanism to signal each other about modification/addition/deletion of files on the server.
In short, try setting the registry key
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters\DirectoryCacheLifetime
to 0.
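If you would rather script the change than edit the registry by hand (regedit or reg.exe work just as well), a small C# sketch would be something like the following; it needs to run elevated, and setting the value to 0 disables the client-side directory cache:
// Microsoft.Win32.Registry; the process must be running with administrative rights.
Microsoft.Win32.Registry.SetValue(
    @"HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters",
    "DirectoryCacheLifetime",
    0,
    Microsoft.Win32.RegistryValueKind.DWord);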
Thanks to @Jan B. Kjeldsen, I have been able to solve my problem using FileSystemWatcher. Here is my implementation in X++:
class SelTestThreadDirPolling
{
}

public server static Container SetStaticFileWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    InteropPermission              interopPerm;
    System.IO.FileSystemWatcher    fw;
    System.IO.WatcherChangeTypes   watcherChangeType;
    System.IO.WaitForChangedResult res;
    Container                      cont;
    str                            fileName;
    str                            oldFileName;
    str                            changeType;
    ;
    interopPerm = new InteropPermission(InteropKind::ClrInterop);
    interopPerm.assert();

    fw = new System.IO.FileSystemWatcher();
    fw.set_Path(_dirPath);
    fw.set_IncludeSubdirectories(false);
    fw.set_Filter(_filenamePattern);

    watcherChangeType = ClrInterop::parseClrEnum('System.IO.WatcherChangeTypes', 'Created');
    res = fw.WaitForChanged(watcherChangeType, _timeoutMs);

    if (res.get_TimedOut())
        return conNull();

    fileName = res.get_Name();
    //ChangeTypeName can be: Created, Deleted, Renamed and Changed
    changeType = System.Enum::GetName(watcherChangeType.GetType(), res.get_ChangeType());
    fw.Dispose();
    CodeAccessPermission::revertAssert();

    if (changeType == 'Renamed')
        oldFileName = res.get_OldName();

    cont += fileName;
    cont += changeType;
    cont += oldFileName;

    return cont;
}

void waitFileSystemWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    container cResult;
    str       filename, changeType, oldFilename;
    ;
    cResult = SelTestThreadDirPolling::SetStaticFileWatcher(_dirPath, _filenamePattern, _timeoutMs);
    if (cResult)
    {
        [filename, changeType, oldFilename] = cResult;
        info(strfmt("filename=%1, changeType=%2, oldFilename=%3", filename, changeType, oldFilename));
    }
    else
    {
        info("TIMED OUT");
    }
}

void run()
{;
    this.waitFileSystemWatcher(#'\\myserver\mydir', 'filepattern*.csv', 10000);
}
I should acknowledge the following for forming the basis of my X++ implementation:
https://blogs.msdn.microsoft.com/floditt/2008/09/01/how-to-implement-filesystemwatcher-with-x/
I would guess DAXaholic's answer is correct, but you could try other solutions like EnumerateFiles.
In your case I would rather wait for the files than poll for them.
With FileSystemWatcher there is minimal delay between a file being created and your process waking up. It is trickier to use, but avoiding polling is a good thing. I have never used it over a network, though.
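For comparison, the event-driven pattern in C# (rather than the blocking WaitForChanged call used in the X++ code above) looks roughly like this; the path, filter, and ProcessFile handler are placeholders:
// Requires using System.IO; keep the watcher referenced for as long as you want events.
var watcher = new FileSystemWatcher(@"\\myserver\mydir", "filepattern*.csv");
watcher.IncludeSubdirectories = false;
watcher.Created += (sender, e) =>
{
    // Fires shortly after a matching file appears; e.FullPath is the new file.
    ProcessFile(e.FullPath);   // ProcessFile is a placeholder for your own handler
};
watcher.EnableRaisingEvents = true;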

Understanding the JIT; slow website

First off, this question has been covered a few times (I've done my research), and, for example, on the right side of the SO webpage is a list of related items... I have been through them all (or as many as I could find).
When I publish my pre-compiled .NET web application, it is very slow to load the first time.
I've read up on this, it's the JIT which I understand (sort of).
The problem is that after the home page loads (which can take up to 20 seconds), many other pages load very fast.
However, it would appear that the only reason they load quickly is that the resources have already been loaded (or that they share the same compiled DLLs). Some pages still take a long time.
This suggests that the JIT may need to compile different pages separately. If so, and using a contact form as an example (where the Thank You page needs to be compiled by the JIT and the first hit is slow), the user may hit the send button multiple times while waiting for the page to be shown.
After I load all these pages which use different models or different shared HTML content, the site loads quickly as expected. I assume this issue is a common problem?
Please note, I'm using .NET 4.0, but there is no database, XML files, etc. The only I/O is when an email fails to send and the error is written to a log.
So, assuming my understanding is correct, what is the approach to not have to manually go through the website and load every page?
If the above is a little too broad, then can this be resolved in the settings/configuration in Visual Studio (2012) or the web.config file (excluding adding compilation debug=false)?
In this case, there were two problems:
1. As per rene's comments, review http://msdn.microsoft.com/en-us/library/ms972959.aspx. The helpful part was adding the following code to the global.asax file:
const string sourceName = ".NET Runtime";
const string serverName = ".";
const string logName = "Application";
const string uriFormat = "\r\n\r\nURI: {0}\r\n\r\n";
const string exceptionFormat = "{0}: \"{1}\"\r\n{2}\r\n\r\n";

void Application_Error(Object sender, EventArgs ea) {
    StringBuilder message = new StringBuilder();

    if (Request != null) {
        message.AppendFormat(uriFormat, Request.Path);
    }

    if (Server != null) {
        Exception e;
        for (e = Server.GetLastError(); e != null; e = e.InnerException) {
            message.AppendFormat(exceptionFormat,
                                 e.GetType().Name,
                                 e.Message,
                                 e.StackTrace);
        }
    }

    if (!EventLog.SourceExists(sourceName)) {
        EventLog.CreateEventSource(sourceName, logName);
    }

    EventLog Log = new EventLog(logName, serverName, sourceName);
    Log.WriteEntry(message.ToString(), EventLogEntryType.Error);

    //Server.ClearError(); // uncomment this to cancel the error
}
2. The server was maxing out while sending the email! My code was fine, but viewing Task Manager showed it was hitting 100% memory...
The solution was to monitor the errors surfaced by point 1 and fix them, then find out why the server was being throttled when sending an email!

Memory leak while sending response from rebus handler

I saw very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.Send method, the memory consumed by the process increases. I looked at the object graph using a memory profiler and found that Rebus is holding the response message in serialized format somewhere.
The object graph showed the following hierarchy up to the root:
System.Message --> CachedBodyMessage --> stream
Please give me some pointers if anybody is aware of this.
I understand that a memory leak is a grave concern, but my belief is that it is unlikely that Rebus should contain a memory leak.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You're mentioning "CachedBodyMessage" - judging by the names of fields inside System.Messaging.Message, it sounds like it's something within MSMQ. To try to reproduce your issue, I coded the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
    // arrange
    const string inputQueueName = "test.leak.input";
    var queue = new MsmqMessageQueue(inputQueueName);
    disposables.Add(queue);

    var body = Encoding.UTF8.GetBytes(new string('*', 32768));
    var message = new TransportMessageToSend
                      {
                          Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
                          Body = body
                      };

    var weakMessageRef = new WeakReference(message);
    var weakBodyRef = new WeakReference(body);

    // act
    queue.Send(inputQueueName, message, new NoTransaction());
    message = null;
    body = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();

    // assert
    Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
    Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
which verifies that the sent transport message is collected as it should be (it will only do so in RELEASE mode though, because DEBUG builds hold on to object references for the duration of the method)
I'll try and run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information about e.g. exactly which objects are leaking, it would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have been able to see something rising on the graph.

Profiling ASP.net applications over the long term?

What is the accepted way to instrument a web-site to record execution statistics?
How long it takes to X
For example, I want to know how long it takes to perform some operation, e.g. validating the user's credentials with the Active Directory server:
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
A lot of people will suggest using tracing of various kinds to output, log, or record the interesting performance metrics:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//write a number to a log
WriteToLog("TimeToCheckCredentials", sw.ElapsedTicks);
Not an X; all X
The problem with this is that I'm not interested in how long it took to validate one user's credentials against Active Directory. I'm interested in how long it takes to validate thousands of users' credentials in Active Directory:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();

timeToCheckCredentialsSum = timeToCheckCredentialsSum + sw.ElapsedTicks;
timeToCheckCredentialsCount = timeToCheckCredentialsCount + 1;

if ((sw.ElapsedTicks < timeToCheckCredentialsMin) || (timeToCheckCredentialsMin == 0))
    timeToCheckCredentialsMin = sw.ElapsedTicks;
if ((sw.ElapsedTicks > timeToCheckCredentialsMax) || (timeToCheckCredentialsMax == 0))
    timeToCheckCredentialsMax = sw.ElapsedTicks;

oldMean = timeToCheckCredentialsAverage;
newMean = timeToCheckCredentialsSum / timeToCheckCredentialsCount;
timeToCheckCredentialsAverage = newMean;

// running variance update
if (timeToCheckCredentialsCount > 2)
{
    timeToCheckCredentialsVariance =
        ((timeToCheckCredentialsCount - 2) * timeToCheckCredentialsVariance
         + (sw.ElapsedTicks - oldMean) * (sw.ElapsedTicks - newMean))
        / (timeToCheckCredentialsCount - 1);
}
else
{
    timeToCheckCredentialsVariance = 0;
}
Which is a lot of boilerplate code that can easily be abstracted away into:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//record the sample
Profiler.AddSample("TimeToCheckCredentials", sw.ElapsedTicks);
Which is still a lot of boilerplate code, that can be abstracted into:
Profiler.Start("TimeToCheckCredentials");
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
Profiler.Stop("TimeToCheckCredentials");
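A minimal sketch of what such a Profiler helper could look like, assuming an in-memory, thread-safe store keyed by sample name (the class and its members are hypothetical, not an existing framework):
// Rough in-memory aggregator. Statistics still live only in memory, so they are lost on an
// app-pool recycle, and Start/Stop must run on the same thread (ASP.NET can switch threads).
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;

public static class Profiler
{
    private class Stats { public long Count; public long Sum; public long Min = long.MaxValue; public long Max; }

    private static readonly ConcurrentDictionary<string, Stats> samples =
        new ConcurrentDictionary<string, Stats>();

    [ThreadStatic]
    private static Dictionary<string, Stopwatch> running;

    public static void Start(string name)
    {
        if (running == null) running = new Dictionary<string, Stopwatch>();
        running[name] = Stopwatch.StartNew();
    }

    public static void Stop(string name)
    {
        Stopwatch sw;
        if (running == null || !running.TryGetValue(name, out sw)) return;
        sw.Stop();
        running.Remove(name);
        AddSample(name, sw.ElapsedTicks);
    }

    public static void AddSample(string name, long elapsedTicks)
    {
        var s = samples.GetOrAdd(name, _ => new Stats());
        lock (s)
        {
            s.Count++;
            s.Sum += elapsedTicks;                       // average = Sum / Count
            if (elapsedTicks < s.Min) s.Min = elapsedTicks;
            if (elapsedTicks > s.Max) s.Max = elapsedTicks;
            // a running variance could be maintained here, as in the earlier snippet
        }
    }
}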
Now I have some statistics sitting in memory. I can let the web site run for a few months, and at any time I can connect to the server and look at the profiling statistics. This is very much like SQL Server's ability to present its own running history in various reports:
But ASP kills apps without warning
The problem is that this is an ASP.NET web site/application. Randomly throughout the course of a year, the web server will decide to shut down the application by recycling the application pool:
perhaps it has been idle for 3 weeks
perhaps it reached the maximum recycle time limit (e.g. 24 hours)
perhaps a date on a file changed, and the web-server has to recompile the application
When the web-server decides to shut down, all my statistics are lost.
Are there any ASP.net performance/instrumentation frameworks that solve this problem?
Try persisting to SQL Server
I thought about storing my statistics in SQL Server. Much like ASP.NET session state can be stored in SQL Server after every request completes, I could store my values in SQL Server every time:
void AddSample(String sampleName, long elapsedTicks)
{
    using (IDbConnection conn = CreateDatabaseConnection())
    {
        ExecuteAddSampleStoredProcedure(conn, sampleName, elapsedTicks);
    }
}
Except now I've introduced huge latency into my application. This profiling code is called many thousands of times a second. When the math is performed only in memory it takes a few microseconds; now it takes a few dozen milliseconds (a factor of 1,000, a noticeable delay). That's not going to work.
Save only on application shutdown
I have considered registering my static helper class with the ASP.NET hosting environment by implementing IRegisteredObject:
public class ShutdownNotification : IRegisteredObject
{
    // Register this object (e.g. in Application_Start) with
    // System.Web.Hosting.HostingEnvironment.RegisterObject(new ShutdownNotification())
    // so that Stop is called before the application domain is torn down.
    public void Stop(Boolean immediate)
    {
        Profiler.SaveStatisticsToDatabase();
        System.Web.Hosting.HostingEnvironment.UnregisterObject(this);
    }
}
But I'm curious what the right way to solve this problem is. Smarter people than me must have added profiling to ASP.NET before.
We use Microsoft's Application Performance Monitoring for this. It captures page load times, DB call times, API call times, etc. When a page load is unexpectedly slow, it also alerts us and provides the stack trace along with the timings of various calls that impacted the load time. It's somewhat rudimentary but it does the trick and allowed us to verify that we didn't have any variations that were not performing as expected.
Advance warning: the UI only works in IE.
http://technet.microsoft.com/en-us/library/hh457578.aspx

Converting a Windows shell IStream to std::ifstream/std::getline

We have a lot of code written that makes use of the standard template library. I would like to integrate some of our apps into the windows shell, which should provide a better experience for our users.
One piece of integration involves a shell preview provider. The code is very straightforward; however, I'm stuck on the best way to implement something.
The shell is giving me, via my preview handler, an IStream object and I need to convert/adapt it to an std::ifstream object, primarily so that std::getline can get called further down the callstack.
I was wondering if there was a “standard” way of doing the adapting or do I need to roll up my sleeves and code?
TIA.
Faffed around with this for a while:
std::stringstream buff;
BYTE ib[2048];
ULONG totread = 0, read = 0, sbuff = 2048;
HRESULT hr;

do {
    hr = WinInputStream->Read(ib, sbuff, &read);
    buff.write(reinterpret_cast<const char*>(ib), read);  // the stream expects char*, not BYTE*
    totread += read;
} while ((sbuff == read) && SUCCEEDED(hr));

if (totread == 0) return false;

// Keep the buffered data in a named variable: buff.str() returns a temporary,
// so taking c_str() of it directly would leave a dangling pointer.
std::string data = buff.str();

std::ifstream i;
i.rdbuf()->pubsetbuf(&data[0], data.length());
But I didn't like having to read it all into memory just for it to be processed again,
so I implemented my preview handler using IInitializeWithFile instead.
