How do you measure the number of open database connections - asp.net

I am trying to determine if I have a database connection leak.
So I need to see the number of open connections.
I have some simple test code that creates a leak:
protected void Page_Load(object sender, EventArgs e)
{
    for (int i = 0; i < 100; i++)
    {
        // Deliberately never closed or disposed, so every iteration leaks a pooled connection
        SqlConnection sql = new SqlConnection(@"Data Source=.\SQLExpress;UID=sa;PWD=fjg^%kls;Initial Catalog=ABC");
        sql.Open();
    }
}
Note there is no .Close or .Dispose, and this does in fact crash after being run 3 times in quick succession.
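For contrast, the non-leaking pattern simply wraps the connection in a using block so it is returned to the pool even if an exception is thrown. A minimal sketch, assuming System.Data.SqlClient and the same connection string:
protected void Page_Load(object sender, EventArgs e)
{
    for (int i = 0; i < 100; i++)
    {
        // Dispose (and therefore Close) is guaranteed, so the pool is not exhausted
        using (var sql = new SqlConnection(@"Data Source=.\SQLExpress;UID=sa;PWD=fjg^%kls;Initial Catalog=ABC"))
        {
            sql.Open();
        }
    }
}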
In order to measure the leak I am running Performance Monitor and watching SQLServer: General Statistics / User Connections.
However, this counter stays at zero when I run my code.
What should I change to actually see the connections?
ANSWER
I have accepted an answer below. Even though it doesn't use the performance tools, it's good enough for my use. The bottom line is I wanted to see how many connections remain open after opening a web page, and this did the trick.

You can try running a query against the master db like this:
SELECT SPID,
       STATUS,
       PROGRAM_NAME,
       LOGINAME = RTRIM(LOGINAME),
       HOSTNAME,
       CMD
FROM MASTER.DBO.SYSPROCESSES
WHERE DB_NAME(DBID) = 'TEST' AND DBID != 0  -- 'TEST' is a placeholder; use your database name (ABC in the question)

Have you tried running the sp_who stored proc? If there are stale open connections they should show up there.
To show just the sa user's processes, run:
EXEC sp_who 'sa'
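If you want to check the count from the page itself rather than from Management Studio, a minimal sketch along the same lines (assuming System.Data.SqlClient, the ABC catalog from the question, and a login permitted to read sysprocesses) would be:
using (var conn = new SqlConnection(@"Data Source=.\SQLExpress;UID=sa;PWD=fjg^%kls;Initial Catalog=master"))
using (var cmd = new SqlCommand(
    "SELECT COUNT(*) FROM master.dbo.sysprocesses WHERE DB_NAME(dbid) = 'ABC'", conn))
{
    conn.Open();
    // Number of connections currently open against the ABC database
    int openConnections = (int)cmd.ExecuteScalar();
    Response.Write("Open connections to ABC: " + openConnections);
}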

Related

System.IO.Directory::GetFiles() Polling from AX 2009, Only Seeing New Files Every 10s

I wrote code in AX 2009 to poll a directory on a network drive, every 1 second, waiting for a response file from another system. I noticed that using a file explorer window, I could see the file appear, yet my code was not seeing and processing the file for several seconds - up to 9 seconds (and 9 polls) after the file appeared!
The AX code calls System.IO.Directory::GetFiles() using ClrInterop:
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
files = System.IO.Directory::GetFiles(#POLLDIR,'*.csv');
// etc...
CodeAccessPermission::revertAssert();
After much experimentation, it emerges that the first time in my program's lifetime, that I call ::GetFiles(), it starts a notional "ticking clock" with a period of 10 seconds. Only calls every 10 seconds find any new files that may have appeared, though they do still report files that were found on an earlier 10s "tick" since the first call to ::GetFiles().
If, when I start the program, the file is not there, then all the other calls to ::GetFiles(), 1 second after the first call, 2 seconds after, etc., up to 9 seconds after, simply do not see the file, even though it may have been sitting there since 0.5s after the first call!
Then, reliably, and repeatably, the call 10s after the first call, will find the file. Then no calls from 11s to 19s will see any new file that might have appeared, yet the call 20s after the first call, will reliably see any new files. And so on, every 10 seconds.
Further investigation revealed that if the polled directory is on the AX AOS machine, this does not happen, and the file is found immediately, as one would expect, on the call after the file appears in the directory.
But this figure of 10s is reliable and repeatable, no matter what network drive I poll, no matter what server it's on.
Our network certainly doesn't have 10s of latency to see files; as I said, a file explorer window on the polled directory sees the file immediately.
What is going on?
Sounds like your issue is due to SMB caching - from this technet page:
Name, type, and ID: Directory Cache [DWORD] DirectoryCacheLifetime
Registry key the cache setting is controlled by: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters
Description: This is a cache of recent directory enumerations performed by the client. Subsequent enumeration requests made by client applications as well as metadata queries for files in the directory can be satisfied from the cache. The client also uses the directory cache to determine the presence or absence of a file in the directory and uses that information to prevent clients from repeatedly attempting to open files which are known not to exist on the server. This cache is likely to affect distributed applications running on multiple computers accessing a set of files on a server – where the applications use an out-of-band mechanism to signal each other about modification/addition/deletion of files on the server.
In short, try setting the registry value
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters\DirectoryCacheLifetime
to 0 on the machine doing the polling (it is a client-side SMB setting under Lanmanworkstation).
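If you prefer to apply that setting from code rather than regedit, a sketch like the following should work (it must run elevated, and the Workstation service may need a restart before it takes effect):
using Microsoft.Win32;

class DisableSmbDirectoryCache
{
    static void Main()
    {
        // A DWORD value of 0 disables the SMB client's directory enumeration cache
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters",
            "DirectoryCacheLifetime",
            0,
            RegistryValueKind.DWord);
    }
}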
Thanks to @Jan B. Kjeldsen, I have been able to solve my problem using FileSystemWatcher. Here is my implementation in X++:
class SelTestThreadDirPolling
{
}
public server static Container SetStaticFileWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    InteropPermission               interopPerm;
    System.IO.FileSystemWatcher     fw;
    System.IO.WatcherChangeTypes    watcherChangeType;
    System.IO.WaitForChangedResult  res;
    Container                       cont;
    str                             fileName;
    str                             oldFileName;
    str                             changeType;
    ;
    interopPerm = new InteropPermission(InteropKind::ClrInterop);
    interopPerm.assert();

    fw = new System.IO.FileSystemWatcher();
    fw.set_Path(_dirPath);
    fw.set_IncludeSubdirectories(false);
    fw.set_Filter(_filenamePattern);

    watcherChangeType = ClrInterop::parseClrEnum('System.IO.WatcherChangeTypes', 'Created');
    res = fw.WaitForChanged(watcherChangeType, _timeoutMs);

    if (res.get_TimedOut())
        return conNull();

    fileName = res.get_Name();
    // ChangeType can be: Created, Deleted, Renamed or Changed
    changeType = System.Enum::GetName(watcherChangeType.GetType(), res.get_ChangeType());
    fw.Dispose();
    CodeAccessPermission::revertAssert();

    if (changeType == 'Renamed')
        oldFileName = res.get_OldName();

    cont += fileName;
    cont += changeType;
    cont += oldFileName;
    return cont;
}
void waitFileSystemWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    container cResult;
    str filename, changeType, oldFilename;
    ;
    cResult = SelTestThreadDirPolling::SetStaticFileWatcher(_dirPath, _filenamePattern, _timeoutMs);
    if (cResult)
    {
        [filename, changeType, oldFilename] = cResult;
        info(strfmt("filename=%1, changeType=%2, oldFilename=%3", filename, changeType, oldFilename));
    }
    else
    {
        info("TIMED OUT");
    }
}
void run()
{
    ;
    this.waitFileSystemWatcher(@'\\myserver\mydir', 'filepattern*.csv', 10000);
}
I should acknowledge the following for forming the basis of my X++ implementation:
https://blogs.msdn.microsoft.com/floditt/2008/09/01/how-to-implement-filesystemwatcher-with-x/
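For comparison, the synchronous wait the X++ code wraps is only a few lines in plain C#. A sketch, reusing the placeholder path and pattern from above:
using System;
using System.IO;

class WaitForCsv
{
    static void Main()
    {
        using (var watcher = new FileSystemWatcher(@"\\myserver\mydir", "filepattern*.csv"))
        {
            // Blocks until a matching file is created, or the 10 s timeout elapses
            WaitForChangedResult result = watcher.WaitForChanged(WatcherChangeTypes.Created, 10000);
            Console.WriteLine(result.TimedOut ? "TIMED OUT" : "Created: " + result.Name);
        }
    }
}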
I would guess DAXaholic's answer is correct, but you could try other solutions like EnumerateFiles.
In your case I would rather wait for the files than poll for them.
Using FileSystemWatcher there will be minimal delay from file creation until your process wakes up. It is trickier to use, but avoiding polling is a good thing. I have never used it over a network.

Understanding the JIT; slow website

First off, this question has been covered a few times (I've done my research), and, for example, on the right side of the SO webpage is a list of related items... I have been through them all (or as many as I could find).
When I publish my pre-compiled .NET web application, it is very slow to load the first time.
I've read up on this, it's the JIT which I understand (sort of).
The problem is, after the home page loads (up to 20 seconds), many other pages load very fast.
It would appear that the only reason they load quickly is that the resources have already been loaded (or that they share the same compiled DLLs). However, some pages still take a long time.
Does this indicate that the JIT needs to compile different pages separately? If so, taking a contact form as an example (where the Thank You page still needs to be JIT-compiled and the first hit is slow), the user may hit the send button multiple times whilst waiting for the page to be shown.
After I load all these pages which use different models or different shared HTML content, the site loads quickly as expected. I assume this issue is a common problem?
Please note, I'm using .NET 4.0, but there is no database, XML files, etc. The only IO is when an email fails to send and the error is written to a log.
So, assuming my understanding is correct, what is the approach to not have to manually go through the website and load every page?
If the above is a little too broad, then can this be resolved in the settings/configuration in Visual Studio (2012) or the web.config file (excluding adding compilation debug=false)?
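One way to avoid clicking through the site by hand is a small warm-up client that requests each page once after a deploy, so the JIT cost is paid before real users arrive. A sketch only; the host name and page list are hypothetical:
using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        // Hypothetical list of pages to touch once after each deployment
        string[] pages = { "/", "/Contact.aspx", "/Contact-ThankYou.aspx" };
        using (var client = new WebClient())
        {
            foreach (string page in pages)
            {
                try { client.DownloadString("http://www.example.com" + page); }
                catch (WebException ex) { Console.WriteLine(page + ": " + ex.Message); }
            }
        }
    }
}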
In my case, there were two problems.
1. As per rene's comments, review http://msdn.microsoft.com/en-us/library/ms972959.aspx. The helpful part was adding the following code to the global.asax file:
// Requires System.Diagnostics (EventLog) and System.Text (StringBuilder) to be imported in global.asax
const string sourceName = ".NET Runtime";
const string serverName = ".";
const string logName = "Application";
const string uriFormat = "\r\n\r\nURI: {0}\r\n\r\n";
const string exceptionFormat = "{0}: \"{1}\"\r\n{2}\r\n\r\n";

void Application_Error(Object sender, EventArgs ea) {
    StringBuilder message = new StringBuilder();
    if (Request != null) {
        message.AppendFormat(uriFormat, Request.Path);
    }
    if (Server != null) {
        Exception e;
        for (e = Server.GetLastError(); e != null; e = e.InnerException) {
            message.AppendFormat(exceptionFormat,
                                 e.GetType().Name,
                                 e.Message,
                                 e.StackTrace);
        }
    }
    if (!EventLog.SourceExists(sourceName)) {
        EventLog.CreateEventSource(sourceName, logName);
    }
    EventLog Log = new EventLog(logName, serverName, sourceName);
    Log.WriteEntry(message.ToString(), EventLogEntryType.Error);
    //Server.ClearError(); // uncomment this to cancel the error
}
2. The server was maxing out while sending the email. My code was fine, but Task Manager showed memory usage hitting 100%.
The solution was to monitor the errors surfaced by point 1 and fix them, then work out why the server was being throttled when sending an email.

Profiling ASP.net applications over the long term?

What is the accepted way to instrument a web-site to record execution statistics?
How long it takes to X
For example, I want to know how long it takes to perform some operation, e.g. validating the user's credentials with the Active Directory server:
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
A lot of people will suggest using tracing of various kinds to output, log, or record the interesting performance metrics:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//write a number to a log
WriteToLog("TimeToCheckCredentials", sw.ElapsedTicks);
Not an X; all X
The problem with this is that I'm not interested in how long it took to validate one user's credentials against Active Directory. I'm interested in how long it takes to validate thousands of users' credentials in Active Directory:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();

timeToCheckCredentialsSum += sw.ElapsedTicks;
timeToCheckCredentialsCount += 1;
if ((sw.ElapsedTicks < timeToCheckCredentialsMin) || (timeToCheckCredentialsMin == 0))
    timeToCheckCredentialsMin = sw.ElapsedTicks;
if ((sw.ElapsedTicks > timeToCheckCredentialsMax) || (timeToCheckCredentialsMax == 0))
    timeToCheckCredentialsMax = sw.ElapsedTicks;
oldMean = timeToCheckCredentialsAverage;
newMean = timeToCheckCredentialsSum / timeToCheckCredentialsCount;
timeToCheckCredentialsAverage = newMean;
if (timeToCheckCredentialsCount > 2)
{
    // running (online) variance update
    timeToCheckCredentialsVariance =
        ((timeToCheckCredentialsCount - 2) * timeToCheckCredentialsVariance
            + (sw.ElapsedTicks - oldMean) * (sw.ElapsedTicks - newMean))
        / (timeToCheckCredentialsCount - 1);
}
else
{
    timeToCheckCredentialsVariance = 0;
}
Which is a lot of boilerplate code that can easily be abstracted away into:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//record the sample
Profiler.AddSample("TimeToCheckCredentials", sw.ElapsedTicks);
Which is still a lot of boilerplate code that can be abstracted into:
Profiler.Start("TimeToCheckCredentials");
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
Profiler.Stop("TimeToCheckCredentials");
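Such a Profiler helper is not hard to sketch. A minimal in-memory version follows; the names are hypothetical, it omits the variance bookkeeping, and Start/Stop keyed only by name will interfere when overlapping requests time the same operation:
using System;
using System.Collections.Concurrent;
using System.Diagnostics;

public static class Profiler
{
    private class Sample { public long Count; public long Sum; public long Min; public long Max; }

    private static readonly ConcurrentDictionary<string, Sample> samples =
        new ConcurrentDictionary<string, Sample>();
    private static readonly ConcurrentDictionary<string, Stopwatch> running =
        new ConcurrentDictionary<string, Stopwatch>();

    public static void Start(string name)
    {
        running[name] = Stopwatch.StartNew();
    }

    public static void Stop(string name)
    {
        Stopwatch sw;
        if (running.TryRemove(name, out sw))
        {
            sw.Stop();
            AddSample(name, sw.ElapsedTicks);
        }
    }

    public static void AddSample(string name, long elapsedTicks)
    {
        Sample s = samples.GetOrAdd(name, _ => new Sample { Min = long.MaxValue });
        lock (s)
        {
            s.Count++;
            s.Sum += elapsedTicks;
            if (elapsedTicks < s.Min) s.Min = elapsedTicks;
            if (elapsedTicks > s.Max) s.Max = elapsedTicks;
        }
    }
}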
Now I have some statistics sitting in memory. I can let the web-site run for a few months, and at any time I can connect to the server and look at the profiling statistics. This is very much like SQL Server's ability to present its own running history in various reports.
But ASP kills apps without warning
The problem is that this is an ASP.net web-site/application. Randomly throughout the course of a year, the web-server will decide to shut down the application, by recycling the application pool:
perhaps it has been idle for 3 weeks
perhaps it reached the maximum recycle time limit (e.g. 24 hours)
perhaps a date on a file changed, and the web-server has to recompile the application
When the web-server decides to shut down, all my statistics are lost.
Are there any ASP.net performance/instrumentation frameworks that solve this problem?
Try persisting to SQL Server
I thought about storing my statistics in SQL Server. Much like ASP.net session state can be stored in SQL Server after every request is complete, I could store my values in SQL Server every time:
void AddSample(String sampleName, long elapsedTicks)
{
using (IDbConnection conn = CreateDatabaseConnection())
{
ExecuteAddSampleStoredProcedure(conn, sampleName, elapsedTicks);
}
}
Except now I've introduced huge latency into my application. This profiling code is called many thousands of times a second. When the math is performed only in memory it takes a few microseconds; now it takes a few dozen milliseconds (a factor of 1,000, and a noticeable delay). That's not going to work.
Save only on application shutdown
I have considered registering my static helper class with the ASP.net hosting environment by implementing IRegisteredObject:
public class ShutdownNotification : IRegisteredObject
{
    public void Stop(Boolean immediate)
    {
        Profiler.SaveStatisticsToDatabase();
        // The host expects registered objects to unregister themselves once stopped
        System.Web.Hosting.HostingEnvironment.UnregisterObject(this);
    }
}
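Registering the object happens once at startup, e.g. in Application_Start in global.asax (HostingEnvironment lives in System.Web.Hosting); a sketch:
protected void Application_Start(object sender, EventArgs e)
{
    // Ask ASP.NET to call ShutdownNotification.Stop() before the AppDomain is torn down
    System.Web.Hosting.HostingEnvironment.RegisterObject(new ShutdownNotification());
}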
But I'm curious what the right way to solve this problem is. Smarter people than me must have added profiling to ASP.net before.
We use Microsoft's Application Performance Monitoring for this. It captures page load times, DB call times, API call times, etc. When a page load is unexpectedly slow, it also alerts us and provides the stack trace along with the timings of the various calls that affected the load time. It's somewhat rudimentary, but it does the trick and has let us verify that nothing is performing outside what we expect.
Advance warning: the UI only works in IE.
http://technet.microsoft.com/en-us/library/hh457578.aspx

Web Server Monitoring via asp.net web page

I would like to monitor the following on a web page:
Total response time
Total bytes
Throughput (requests/sec)
RAM used
Hard drive space and IO issues
Server CPU overhead
Errors (by error code)
MSSQL load
IIS errors
I host a small cluster of servers for web hosting. I need to create a hardware view within ASP.NET to get as close to a real-time snapshot as possible of what's going on.
I have heard of Spiceworks or other means for accomplishing this task. I agree that these are great tools, but I would like to code this and just keep it simple.
Here is some existing code I have come up with/found:
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
namespace WebApplication1
{
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
string[] logicalDrives = System.Environment.GetLogicalDrives();
//do stuff to put it in the view.
}
protected static string ToSizeString(double bytes)
{
var culture = CultureInfo.CurrentUICulture;
const string format = "#,0.0";
if (bytes < 1024)
return bytes.ToString("#,0", culture);
bytes /= 1024;
if (bytes < 1024)
return bytes.ToString(format, culture) + " KB";
bytes /= 1024;
if (bytes < 1024)
return bytes.ToString(format, culture) + " MB";
bytes /= 1024;
if (bytes < 1024)
return bytes.ToString(format, culture) + " GB";
bytes /= 1024;
return bytes.ToString(format, culture) + " TB";
}
// An extension method ("this TimeSpan") cannot live inside a page class,
// so it is declared here as a plain static helper instead.
public static string ToApproximateString(TimeSpan time)
{
if (time.TotalDays > 14)
return ((int)(time.TotalDays / 7)).ToString("#,0.0") + " weeks";
if (14 - time.TotalDays < .75)
return "two weeks";
if (time.TotalDays > 1)
return time.TotalDays.ToString("#,0.0") + " days";
else if (time.TotalHours > 1)
return time.TotalHours.ToString("#,0.0") + " hours";
else if (time.TotalMinutes > 1)
return time.TotalMinutes.ToString("#,0.0") + " minutes";
else
return time.TotalSeconds.ToString("#,0.0") + " seconds";
}
}
}
Performance counters are exposed via the System.Diagnostics.PerformanceCounter class; MSDN documents the performance counters available for ASP.NET along with a how-to.
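For example, reading a counter from code is just a few lines. A sketch; counter and category names vary by OS version and installed features:
using System;
using System.Diagnostics;
using System.Threading;

class CounterDemo
{
    static void Main()
    {
        // Total CPU across all cores; the first NextValue() call always returns 0,
        // so sample once, wait, then sample again.
        using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            cpu.NextValue();
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0:0.0}%", cpu.NextValue());
        }
    }
}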
Similar to what @Sumo said, you need to use Windows Performance Counters (PCs), from the System.Diagnostics namespace.
Part of the problem with your question is that you are a little vague about what you want to measure from the perspective of PCs. PCs are very specific and very narrow; they measure one highly detailed metric. You will have to translate your requirements to the specific Windows PC that you want.
You said you want to measure:
Total response time
Total bytes
Throughput (reqs/sec)
Ram used
Hard Drives space
IO issues
Server CPU overhead
Errors (by error code)
MSSQL load
You should also consult the Windows Technet reference at http://technet.microsoft.com/en-us/library/cc776490(WS.10).aspx (it's W2K3, but it still applies to W2K8/R2). This will provide you with a wide overview and explanation of all the performance counters that you are looking for.
Running down each one:
Total response time
To my knowledge, there are no ASP.NET PCs that list this. It probably wouldn't be meaningful to you anyway, as ASP.NET also responds to a wide variety of requests whose duration you probably don't care about (e.g. anything ending in .axd). What I do in my projects is create a custom PC, but there are other techniques available (like using a custom trace listener).
Total bytes
Throughput (reqs/sec)
I believe there are PCs for both of these, although I think Total bytes might be listed under the Web Service category, whereas Throughput is probably an ASP.NET category.
RAM used
There is a Memory category, but you need to decide whether you are looking for working set size, physical RAM used, etc.
Hard drive free space
Check the LogicalDisk category
IO issues
What does this mean? Again, review the available PCs to see what seems most relevant.
Server CPU overhead
You will find this under the Processor category
Errors (by error code)
You can get the total number of errors thrown, or the rate at which exceptions get thrown, but if you want to collect the entries in the EventLog, you will need to use the EventLog classes in the System.Diagnostics namespace.
MSSQL load
I didn't find the reference overview of SQL Server PCs, but Brent Ozar is an expert, and he has a list of PCs to check here: http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/. This list is not likely to have changed much for SQL Server 2008/R2.
NOTES:
You may need to make sure that the identity for the application pool running your web application has been added to the computer's local group called Performance Monitor Users.
You only need to open your counters for read-only access.
Performance Counters are components, and therefore implement IDisposable. Be sure you .Dispose() them (or, better still, use using() statements).
Use the .NextValue() method to get your values; there is almost never any need to use .RawValue or .NextSample().
I'm not giving you exact names for each counter, because it's very important that you really understand what each one measures and how useful it is to you, and only you can answer that. Experiment.
I would suggest using an analytics service such as New Relic; they have a dedicated New Relic for .NET product.

Session is timing out on the drop down selected index change

Hello Everyone,
I am facing a weird problem here. I have a report page that uses a drop-down list of years. When the user selects, say, 2009, I display the report for the 2009 data. The code is given below. The website is now live on our web server. The page accesses heavy data, so it sometimes takes a minute or more to load the report for the selected year, and when that happens my session expires and the user is redirected to the default page. The same thing works fine in the solution on my machine and on one of our local servers; it just doesn't work on our live server. Please help by posting any solutions you know of.
I have also placed this line in my web.config but it is not helping:
Code:
protected void ddlYear_SelectedIndexChanged(object sender, EventArgs e)
{
    if (Session["UserId"] != null)
    {
        Session["IsDetailedReportLoaded"] = false;
        Session["IsScoreCardLoaded"] = false;
        Session["IsChartLoaded"] = false;
        Session["IsReportLoaded"] = false;
        string strYear = ddlYear.SelectedValue;
        LoadReport(Convert.ToInt16(strYear));
        lblYear.Text = strYear;
        lblAsOf.Text = strYear;
        lblYear.Text = ddlYear.SelectedValue.ToString();
        lblAsOf.Text = ddlYear.SelectedValue.ToString();
        ddlYearDetail.SelectedValue = ddlYear.SelectedValue;
        ddlYearScorecard.SelectedValue = ddlYear.SelectedValue;
        ddlYearGraph.SelectedValue = ddlYear.SelectedValue;
        mpeLoading.Hide();
    }
    else
    {
        Response.Redirect("Default.aspx");
    }
}
Thanks,
Satish k.
One possible problem could be that the web server is running out of memory and forcing the app pool to recycle. This would flush the InProc Session memory. You could try using Sql Session State instead and see if that resolves the problem. Try monitoring the web server processes and see if they're recycling quickly.
You can place an
if (Session.IsNew)
check in your code and redirect or stop execution appropriately.
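For instance, at the top of the event handler (a sketch; the redirect target mirrors the code above):
protected void ddlYear_SelectedIndexChanged(object sender, EventArgs e)
{
    // A brand-new session here means the old one expired while the report was loading
    if (Session.IsNew || Session["UserId"] == null)
    {
        Response.Redirect("Default.aspx");
        return;
    }
    // ... continue loading the report ...
}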
I would check the Performance tab in IIS to see whether a bandwidth threshold is set.
Right click on website in IIS
Performance tab
Check "Bandwidth throttling" limit
If a threshold is set you might be hitting the maximum bandwidth (KB per second) limit. Either disable bandwidth throttling, or increase the limit.
