Quickest method of finding unclosed SqlConnections - asp.net

I have an ASP.NET Web Application which is occasionally returning the following irritating error:
Timeout expired. The timeout period elapsed prior to obtaining a
connection from the pool. This may have occurred because all pooled
connections were in use and max pool size was reached.
My manager suspects this is because of an unclosed SqlConnection within the application. Therefore I'm currently manually checking through every single code file in the application to see if any connections were left open.
Is there a quicker solution to finding the root of this error?

Bunch of different options:
Use Resharper and its "search with pattern" (also called "Structural Search"): http://blogs.jetbrains.com/dotnet/2010/04/introducing-resharper-50-structural-search-and-replace/
Write a miniature syntax tree walker for FxCop or Roslyn to perform the same search (basically, a new SqlConnection not disposed)
Write a wrapper class for SqlConnection (more common than you might think), and track open/closed connections in there (a sketch follows this list).
Attach WinDbg to a "live" or ideally erroring application, and walk your heap objects back to where they are rooted from.
Grab a proper profiler (SciTech is a neat one) that will give you call stacks for object creation.
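For the wrapper option, here is a minimal sketch of the idea, assuming you can route your construction sites through it (TrackedSqlConnection and all other names here are invented): capture a stack trace when a connection is created, and report that trace if the finalizer runs before Dispose was ever called.

using System;
using System.Collections.Concurrent;
using System.Data.SqlClient;
using System.Diagnostics;

public sealed class TrackedSqlConnection : IDisposable
{
    // Creation stack traces for every connection not yet disposed.
    private static readonly ConcurrentDictionary<TrackedSqlConnection, string> Live =
        new ConcurrentDictionary<TrackedSqlConnection, string>();

    public SqlConnection Connection { get; private set; }

    public TrackedSqlConnection(string connectionString)
    {
        Connection = new SqlConnection(connectionString);
        Live[this] = Environment.StackTrace; // remember who created it
    }

    public void Dispose()
    {
        string ignored;
        Live.TryRemove(this, out ignored);
        Connection.Dispose();
        GC.SuppressFinalize(this);
    }

    // If the finalizer runs, Dispose was never called: log the creation site.
    ~TrackedSqlConnection()
    {
        string createdAt;
        if (Live.TryRemove(this, out createdAt))
            Trace.WriteLine("Leaked SqlConnection, created at: " + createdAt);
    }
}

Search-and-replace new SqlConnection with new TrackedSqlConnection (or go through a factory) and the leaked creation sites show up in your trace output.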

Related

Can multiple requests affect users in one single IIS instance?

I'm having a problem on my application. It's an ASP.NET application set up on IIS 10.
Let's say one system page is accessible by 20 users. The page works perfectly (no logical errors in the code): every action works and delivers the expected values to users.
The problem is, whenever someone requests, let's say, the same method as another user at the same time (with different values), the application randomly throws an error to one of those users. We've checked the error logs and all of the entries are "index out of range" errors, which never happened on our QA server.
On a hunch, I tested that exact scenario (submitting different values as another user at the same time) and I saw it happen for the first time on the QA server. We've since managed to reproduce the error multiple times.
While we don't rule out the possibility that this could be some other issue, has anyone else experienced something like this?
The question is: can IIS handle the same request, made multiple times at the same time, within the same instance without any trouble? Does it run on multiple threads or something like that?
Thanks for taking the time to answer this. Let me know if you need any more info.
Sticking to your question: yes, IIS can handle concurrent requests to the same method very easily (and efficiently, too).
As for your application, I can't point to the cause without seeing code, but you may consider a few points (illustrated in the sketch after this list):
Is it happening for just one method or for all of them? If it's just one, that suggests the method uses code or state that another user's request can also touch.
You may be using an array or list which is null or empty for the other user. For example, one user has a first name followed by a last name, but another user didn't fill in a last name, and you are reading that last-name property.
Maybe you are using HttpContext and sharing the same instance across different users.
Maybe you are using types which are not thread safe.
These are the likely cases, but without code we can't assume.
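To illustrate the shared-state causes above, here is a hypothetical repro (all names invented). A static list is shared by every request in the worker process, so two simultaneous requests can interleave, and one of them gets an out-of-range error even though each request is logically correct on its own:

using System.Collections.Generic;

public static class ReportCache
{
    // Shared by ALL requests in the worker process: this is the bug.
    private static readonly List<string> Rows = new List<string>();

    public static string FirstRow(IEnumerable<string> requestRows)
    {
        Rows.Clear();      // request B can clear while A is still reading
        Rows.AddRange(requestRows);
        return Rows[0];    // out of range if another request cleared it first
    }
}

The fix is to keep per-request data in local variables, or to lock around anything that genuinely has to be shared.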
About your problem: for multiple requests from different users, IIS will service each request on its own thread from the application pool. Multiple requests from the same user run on one thread and affect only that user's page instance, unless the instance or resource is shared and your code does not perform any locking.
IIS, like most web servers, uses threads to process requests, so multiple requests will execute in parallel unless you take a lock. A web server usually has a minimum and a maximum number of worker threads, tuned according to the CPU and memory of the current hardware. If those resources are exhausted, new requests are queued until resources become available again.
So what you may need to do is modify the application code to take multi-threading and synchronization into consideration.

Does asp.net lifecycle continue if I close the browser in the middle of processing?

I have an ASP.NET web page that connects to a number of databases and uses a number of files. I am not clear on what happens if the end user closes the web page before it has finished loading, i.e. does the ASP.NET life cycle end, or will the server still try to generate the page and return it to the client? I have reasonable knowledge of the life cycle, but I cannot find any documentation on this.
I am trying to locate a potential memory leak. I am trying to establish whether all of the code will run i.e. whether the connection will be disposed etc.
The code would still run. There is an IsClientConnected property on the HttpResponse object that indicates whether the client is still connected, which matters if you are doing operations like streaming output in a loop, as in the sketch below.
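Here is a sketch of the streaming case, with a placeholder chunk source; the point is that nothing stops the loop for you, you have to check:

using System.Collections.Generic;
using System.Web;

public static class Streamer
{
    // 'chunks' stands in for whatever actually produces your output.
    public static void StreamAll(HttpResponse response, IEnumerable<byte[]> chunks)
    {
        foreach (byte[] chunk in chunks)
        {
            if (!response.IsClientConnected)   // browser closed mid-download?
                break;                         // stop doing wasted work
            response.OutputStream.Write(chunk, 0, chunk.Length);
            response.Flush();
        }
    }
}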
Once the request for the page has been made, it will go all the way through to the unload stage of the life cycle. The server has no idea the client isn't there until it sends the response at the end.
A unique aspect of this is the Dynamic Compilation portion. You can read up on it here: http://msdn.microsoft.com/en-us/library/ms366723
For more information on the ASP.NET life cycle, look here:
http://msdn.microsoft.com/en-us/library/ms178472.aspx#general_page_lifecycle_stages
So basically, a page is requested, ASP.NET uses dynamic compilation to build the page, and then it attempts to send the page to the client. All of the code you have specified will run, whether or not the client is there to receive the result.
This is a very simplified answer, but that is the basics. Your code is compiled, the request generates the response, then the response is sent. It isn't sent in pieces unless you explicitly tell it to.
Edit: Thanks to Chris Lively for the recommendation on changing the wording.
You mention tracking down a potential memory leak and the word "connection". I'm going to guess you mean a database connection.
You should ALWAYS wrap your connections and commands in using blocks. This guarantees the connection/command is properly disposed regardless of whether an error occurs, the client disconnects, etc.
There are plenty of examples here, but it boils down to something like:
using (SqlConnection conn = new SqlConnection(connStr)) {
    conn.Open();
    using (SqlCommand cmd = new SqlCommand(query, conn)) {  // query = your SQL text
        // do something here.
    }
}
If, for some reason, your code doesn't allow you to do it this way, then I'd suggest the next thing you do is restructure it, because you've done it wrong. A common problem is that some people create a connection object at the top of the page execution and then re-use it for the life of the page. This is guaranteed to lead to problems, including but not limited to: errors with the connection pool, memory leaks, random query issues, complete hosing of the app...
Don't worry about the performance of establishing (and discarding) connections at the point you need them in code. ADO.NET maintains a connection pool that is lightning fast and will keep physical connections alive for as long as needed, even after your code signals that it's done.
Also note: you should use this pattern EVERY TIME you are using a class that wraps unmanaged resources. Such classes implement IDisposable.

asp.net interop memory limitation

I have an asp.net app which uses a legacy COM interop library. It works fine until memory reaches somewhere around 500MB, and then it is no longer able to create new COM objects (we get various exceptions, e.g. Creating an instance of the COM component with CLSID {FFFF-FFFF-FFFF-FFF-FFFFFF} from the IClassFactory failed due to the following error: 80070008.). It almost looks like it is hitting some kind of memory limit, but what is it? Can it be changed?
Solved! Turns out the object was creating a Window handle and we were hitting the 10K Window handles limit (except it was happening at 2K instances for some reason when inside IIS)
What OS, and is it 32-bit or 64-bit? What are you using to determine memory usage?
When you say you're explicitly releasing the objects, do you mean you're using Marshal.ReleaseComObject()?
I'm assuming you have AspCompat="true" in your <%@ Page %> directive... I wouldn't expect it to run at all if you didn't.
Can you give us some details on your COM object; what does it do, and can you post some code where you're calling it, including COM object signatures? How much memory would you expect a single object to take?
My first suspect, based only on the information that I've read so far, is that 500Mb is not truly the total memory in use, and/or that you're having a memory fragmentation issue. I've seen this occur with IIS processes when less than half of the memory is in use, and the errors tend to be random, depending on what object is being created at the time. BTW, 80070008 is 'not enough storage space'.
Process limits are 2GB on a 32-bit machine, of course, but even if a process isn't using the full 2GB, if there's not a contiguous block of memory of the size needed when creating an object, you'll get an out-of-memory error when you try to allocate. Lots of concurrent users implies lots of COM objects (and other objects) being allocated and released in a short period of time... which points to fragmentation as a suspect.
Coming up with an attack plan requires more info about the COM object and how it's being used.
Use a command pattern for queueing and executing the COM interop calls on an asynchronous thread. This frees up the threads being used by IIS and lets you control the number of calls/instances of the COM app.
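A minimal sketch of that queue, assuming the legacy objects want a single STA thread (all names here are invented): page code enqueues delegates, and one dedicated worker thread owns every COM call.

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class ComWorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly Thread _worker;

    public ComWorkQueue()
    {
        _worker = new Thread(() =>
        {
            // All COM work happens on this one thread, so the number of
            // live COM instances stays bounded regardless of IIS load.
            foreach (Action command in _queue.GetConsumingEnumerable())
                command();
        });
        _worker.SetApartmentState(ApartmentState.STA); // many legacy COM objects require STA
        _worker.IsBackground = true;
        _worker.Start();
    }

    public void Enqueue(Action command)
    {
        _queue.Add(command);
    }

    public void Dispose()
    {
        _queue.CompleteAdding(); // let the worker drain and exit
        _worker.Join();
        _queue.Dispose();
    }
}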
You might also consider object pooling rather than creating a new object every time.

sporadic ASP.NET data error: "Cannot find table 0"

Having deployed a new build of an ASP.NET site in a production environment, I am logging dozens of data errors every second, almost always with the error "Cannot find table 0." We use datasets and frequently refer to Table[0], and while I understand the defensive coding practice of checking the dataset for tables before accessing Table[0], it's never been a problem in the past. A certain page will load fine one second, and then be missing one of its data-driven components the next. Just seeing if this rings a bell for anyone.
More detail: I used a different build server this time, and while I imagine the compiler settings are the same on both, I have a hard time believing there's a switch that makes 50% of my database calls come back with no tables. I also switched the project to VS 2008, but then reverted all of those changes when I switched back to VS 2005. I notice that the built assembly has a new MyLibrary.XmlSerializers.dll where it didn't before, but I also can't imagine that that's causing all the trouble. (It also doesn't fall down on calls to MyLibrary, or at least no more than at any other time.)
Updated to add: I've discovered that the troublesome build is a "Release" build, where the working build was compiled as "Debug". Could that explain it?
Rolling back to the build before these changes fixed it. (Rebooting the SQL Server, the step we tried before that, did not.)
The trouble also seems to be load-based - this cruised through our integration and QA environments without a problem, and even our smoke test environment - the one that points to production data - is fine under light load.
Does this have the distinguishing characteristics of anything you might have seen in the past?
Bumping this old question because we encountered the same issue, and perhaps our solution will give more insight into what causes it.
Essentially, this problem occurs in a production environment under very heavy load, in a Windows service that uses multiple threads to process several jobs simultaneously (100 users use the same DB via an ASP.NET web app, and there are about 60 transactions/second on older hardware with SQL Server 2000).
No variables are shared; that is, connections are opened anew, a transaction is started, operations are executed, the transaction is committed, and the connection is closed.
Under heavy load sometimes one of the following exceptions occurs:
NullReferenceException: Object reference not set to an instance of an
object.
at System.Data.SqlClient.SqlInternalConnectionTds.get_IsLockedForBulkCopy()
or
System.Data.SqlClient.SqlException:
The server failed to resume the transaction. Desc:3400000178
or
New request is not allowed to start because it should come with valid transaction descriptor
or
This SqlTransaction has completed; it is no longer usable
It seems that somehow a connection within the pool becomes corrupted and remains associated with previously used transactions. Furthermore, if such a connection is retrieved from the pool, then sqlAdapter.Fill(dataset) results in an empty dataset, causing "Cannot find table 0". Because our service would retry the operation (reading the job list) on failure, and it would always get the same corrupt connection from the pool, it would fail with this error until restarted.
We removed the issue by calling SqlConnection.ClearPool(connection) on exception, to make sure the connection is discarded from the pool, and by restructuring the application so fewer threads access the same resources simultaneously.
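In code, the workaround amounts to something like this sketch (the connection string and SQL text are placeholders):

using System.Data.SqlClient;

public static class Db
{
    public static void RunWithPoolEviction(string connStr, string sql)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            try
            {
                conn.Open();
                using (SqlCommand cmd = new SqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
            }
            catch (SqlException)
            {
                SqlConnection.ClearPool(conn); // evict this pooled connection
                throw;                         // let the caller's retry logic run
            }
        }
    }
}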
I have no clue what exactly caused the issue, so I am not sure we have really fixed it; maybe we just made it so rare that it hasn't occurred again yet.
I've fought precisely this error message before. The key is that an underlying data method is swallowing a timeout exception.
You're probably doing something like this:
var table = GetEmployeeDataSet().Tables[0];
GetEmployeeDataSet is swallowing an exception, probably a timeout exception, which is why it only happens sporadically - it happens under load. You need to do the following to fix it:
Modify the underlying code to not swallow the exception, but rather let it bubble up to the next level so you can identify it properly.
Identify the query(s) causing the problem, and then rewrite, reindex, denormalize or throw hardware at the problem. See this for more info: System.Data.SqlClient.SqlException: Timeout expired
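For reference, the swallowing anti-pattern usually looks something like this sketch (connStr and employeeQuery are placeholders):

using System.Data;
using System.Data.SqlClient;

public static class EmployeeData
{
    public static DataSet GetEmployeeDataSet(string connStr, string employeeQuery)
    {
        DataSet ds = new DataSet();
        try
        {
            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlDataAdapter adapter = new SqlDataAdapter(employeeQuery, conn))
            {
                adapter.Fill(ds);
            }
        }
        catch (SqlException)
        {
            // Swallowed! Under load, the timeout lands here and the caller
            // receives a DataSet with zero tables: "Cannot find table 0".
            // Remove this catch, or log and rethrow.
        }
        return ds;
    }
}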
I've seen something similar. I believe our problem had to do with failed sessions being re-used (once the session object failed it went into a poor state and could not recover.) We fixed it by increasing the memory for the session pool and increasing the frequency of the web application recycling.
It also was "caused" by a new version that at first blush did not seem to have any change to cause such an effect. However, eventually it became clear that the logic of the program was opening and closing a lot more connections (maybe 20% more) than it used to. This small change pushed the limit of our prior configuration.
You might check the SQL Server logs for errors. Or, the Web server event log. It sounds like your connection pool could be out of open connections or your db could be out.
Which database calls changed between versions?
The error is obviously telling you one of your database calls isn't returning any data on occasion; I can't think of any cases where a code/assembly issue would cause it.
I have seen something like this when doing something with nHibernate Sessions in a non-thread-safe manner. That would explain why you only see it under load. Would need to see your code to guess at what isn't thread-safe though.

ASP.NET Lifecycle and long process

I know we need a better solution, but we need to get this done this way for right now. We have a long import process that's fired when you click the start-import button on an aspx web page. It takes a long time... sometimes several hours. I changed the timeout and that's fine, but I keep getting a connection reset error from the server after about an hour. I'm thinking it's the ASP.NET lifecycle, and I'd like to know if there are settings in IIS I can change to make this lifecycle last longer.
You should almost certainly do the long-running work in a separate process (not just a separate thread).
Write a standalone program to do the import. Have it set a flag somewhere (a column in a database, for example) when it's done, and put lines into a logfile or database table to show progress.
That way your page just gets the job started. Afterwards, it can self-refresh once every few minutes, until the 'completed' flag is set. You can display the log table if you want to be sure it's still running and hasn't died.
This is pretty straightforward stuff, but if you need code examples they can be supplied.
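For example, here is a minimal sketch of the flag part, assuming an ImportJobs table with a Completed bit column (all names invented). The import EXE stamps the row when it finishes, and the page just reads that row on each self-refresh:

using System;
using System.Data.SqlClient;

public static class JobStatus
{
    // Called by the standalone import program when the work completes.
    public static void MarkImportComplete(string connStr, int jobId)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE ImportJobs SET Completed = 1, FinishedAt = GETDATE() WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Called by the page on every self-refresh.
    public static bool IsImportComplete(string connStr, int jobId)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Completed FROM ImportJobs WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            object result = cmd.ExecuteScalar();
            return result != null && Convert.ToBoolean(result);
        }
    }
}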
One other point to consider, which might explain the behaviour, is that aspnet_wp.exe recycles if too much memory is being consumed (do not confuse this with the page life cycle).
If your long process takes up too much memory, ASP.NET will launch a new worker process and reassign all existing requests. I would suggest checking for this: look at aspnet_wp in Task Manager and watch the memory size being used. If the size suddenly drops back down, it has recycled.
You can change the memory limit in machine.config:
<system.web>
  <processModel autoConfig="true"/>
</system.web>
Use memoryLimit to specify the maximum allowed memory size, as a percentage of total system memory, that the worker process can consume before ASP.NET launches a new process and reassigns existing requests (the default is 60):
<system.web>
  <processModel autoConfig="true" memoryLimit="10"/>
</system.web>
If this is what is causing the problem for you, the only solution might be to move your long operation into a separate process. You will need to set up IIS accordingly to give that EXE the relevant permissions.
You can try running the process in a new thread. This means the page will start the task and then finish its own processing, while the separate thread keeps running in the background. You won't be able to give any visual feedback, though, so you may want to log progress to a database and display that in a separate page instead.
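A minimal sketch of that, with RunImport standing in for your real import code:

using System;
using System.Threading;

public partial class ImportPage : System.Web.UI.Page
{
    protected void StartImport_Click(object sender, EventArgs e)
    {
        // Fire and forget: this request completes immediately while the work
        // continues on a pool thread. Note that an app-domain recycle will
        // still kill it, which is why a separate process is safer.
        ThreadPool.QueueUserWorkItem(state => RunImport());
    }

    private static void RunImport()
    {
        // Placeholder: do the real import here, writing progress rows to a
        // database table that a separate status page can poll and display.
    }
}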
You can also try running this as an ajax call instead of a postback, which has different limitations...
Since you recognize this is not the way to do it, I won't list alternatives. I'm sure you know what they are :)
Extending the timeout is definitely not the way to do it. Response times should be kept to an absolute minimum. If at all possible, I would try to shift this long-running task out of the ASP.NET application entirely and have it run as a separate process.
After that it's up to you how you want to proceed. You might want the process to dump its results into a file that the ASP application can poll for, either via AJAX or having the user hit F5.
If it's taking hours I would suggest a separate thread for this and perhaps email a notification when it is ready to download the result from the server (i.e. send a link to the finished result)
Or, if it is important to have a UI in the client's browser (if they are going to be hanging around for n hours), you could have a WebMethod which is called from the client (JavaScript) using setInterval to periodically check whether it's done.
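The server half of that might look like the following sketch (page methods need a ScriptManager with EnablePageMethods="true"; JobIsDone is a placeholder for your own status check). On the client, setInterval would poll PageMethods.IsDone(jobId, callback) until it returns true.

using System.Web.Services;

public partial class ImportStatus : System.Web.UI.Page
{
    [WebMethod]
    public static bool IsDone(int jobId)
    {
        return JobIsDone(jobId);
    }

    private static bool JobIsDone(int jobId)
    {
        // Placeholder: read the progress table / completed flag here.
        return false;
    }
}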
