In my IIS server I have several application pools (6 or 7), and many ASP.NET applications running in each of them (roughly 25 applications per pool). They all connect to an Oracle database using ADO.NET.
All applications work just fine, but sometimes we get an error like
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I know the likely causes, for example that we are not closing our database connections properly. So here is my headache... I don't want to go through each and every project to find where we forgot to close a connection; that is a very time-consuming task for us.
So is there any way to identify which application is leaving connections open? Can we see it from IIS itself? Can we build some kind of utility to track which project the open connections belong to?
I'm not sure this is a problem with the database connections. I think your applications are not disposing the context, so the garbage collector can't free the memory. You can try reducing the recycling interval of your application pools and then check whether your memory usage goes down or not.
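If the problem is connections that never get disposed, the first fix is to wrap every connection and command in a using block so they are closed and returned to the pool even when an exception is thrown. A minimal sketch, assuming the ODP.NET managed provider (Oracle.ManagedDataAccess.Client) and a made-up orders table:

using Oracle.ManagedDataAccess.Client; // assumption: ODP.NET managed driver

public static int CountOrders(string connectionString)
{
    // Both objects are disposed (and the connection returned to the pool) even if the query throws.
    using (var conn = new OracleConnection(connectionString))
    using (var cmd = new OracleCommand("SELECT COUNT(*) FROM orders", conn))
    {
        conn.Open();
        return Convert.ToInt32(cmd.ExecuteScalar());
    }
}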
Related
We are trying to diagnose an issue that occurred in our production environment last week. Long story short, the database connection pool seemed to be full of active connections from our ASP.NET 3.5 app that would not clear, even after restarting the application pool and IIS.
The senior DBA said that because the network connections occur at the operating system level, recycling the app and IIS did not sever the actual network connections, so SQL Server left the database connections to continue running, and our app was still unable to reach the database.
In looking up ways to force a database connection pool to reset, I found the static method SqlConnection.ClearAllPools(), with documentation explaining what it does, but little to nothing explaining when to call it. It seems like calling it at the beginning of Application_Start and the end of Application_End in my global.asax.cs is a good safety measure to protect the app from poisoned connection pools, though it would of course incur a performance hit on startup/shutdown times.
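For reference, a rough sketch of what I have in mind for global.asax.cs (just the relevant calls):

protected void Application_Start(object sender, EventArgs e)
{
    // Flush anything left over from a previous, possibly poisoned, pool.
    System.Data.SqlClient.SqlConnection.ClearAllPools();
}

protected void Application_End(object sender, EventArgs e)
{
    System.Data.SqlClient.SqlConnection.ClearAllPools();
}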
Is what I've described a good practice? Is there a better one? The goal is to allow a simple app restart to reset an app's mangled connection pool without having to restart the OS or the SQL Server service, which would affect many other apps.
Any guidance is much appreciated.
When a process dies, all network connections are always, always, always closed immediately. That happens at the TCP level. It has nothing to do with ADO.NET and applies to all applications. Kill the browser, and all downloads stop. Kill the FTP client, and all connections are closed immediately.
Also, the connection pool is per process. So clearing it when starting the app is useless because the pool is empty. Clearing it at shutdown is not necessary because all connections will be (gracefully) shut down a moment later anyway.
Probably, your app is not returning connections to the pool. You must dispose of all connections after use in all cases. If you fail to do that, dangling connections will accumulate for an indefinite amount of time.
Clearing the pool does not free up dangling connections because those appear to be in use. How could ADO.NET tell that you'll never use them again? It can't.
Look at sys.dm_exec_connections to see who is holding connections open. You might increase the ADO.NET pool size as a stop-gap measure. SQL Server can take over 30k connections per instance. You'll normally never saturate that.
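If it helps, here is a rough sketch of pulling that information from the DMVs with ADO.NET; the join to sys.dm_exec_sessions (which carries host_name and program_name) and the monitoring connection string are my additions, and the query needs VIEW SERVER STATE permission:

string sql = @"SELECT s.host_name, s.program_name, COUNT(*) AS open_connections
    FROM sys.dm_exec_connections c
    JOIN sys.dm_exec_sessions s ON s.session_id = c.session_id
    GROUP BY s.host_name, s.program_name
    ORDER BY open_connections DESC";

using (var conn = new System.Data.SqlClient.SqlConnection(monitoringConnectionString)) // placeholder
using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine("{0} / {1}: {2}",
                reader["host_name"], reader["program_name"], reader["open_connections"]);
    }
}

Note the using blocks: that is exactly the pattern your own data access code should follow. As a stop-gap you can raise Max Pool Size in the connection string (for example Max Pool Size=200; the default is 100).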
I have an IIS web server that hosts 400 web applications (distributed across 30 application pools). They are a mix of ASP.NET applications and WCF service endpoints. The server has 32GB of RAM and usually runs fast, although it sits at around 95% memory usage. Worker processes each take between 500MB and 1.5GB of RAM.
I also have another box running SQL Server. That one has plenty of free memory.
Sometimes, the web server starts throwing SQL timeout exceptions. A few per minute at first, rapidly increasing to hundreds per minute, effectively taking the server down. This problem affects applications in all pools. Some requests still complete, but most of them don't. While this happens, the CPU usage on the server is around 30% (which is the normal load on that box).
While this is happening, we can still use SQL Server Management Studio (from the IIS Server) to execute requests successfully (and fast).
The fix is to restart IIS. And then everything goes back to normal until the next time.
Because the server is running with very little free memory, I suspect this is the cause. But I cannot explain the relationship between low memory and sudden bursts of SQL timeout exceptions.
Any idea?
Memory pressure can trigger paging and garbage collection. Both introduce latency which would not be present otherwise.
GC'ing 32GB of data can take seconds. Why would all app processes GC at the same time? Because at about 95% memory utilization Windows raises a "low memory" notification that the CLR listens for. Each process will then try to release memory to help the others.
If the applications get into a paging frenzy that would also explain huge delays in normal execution.
This is just guessing, though. You can try to prove it by watching the hard page fault counter. There should also be a counter for full ("Gen 2") garbage collections.
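If you want to watch those from code rather than perfmon, a rough sketch; the counter and instance names are from memory, and the w3wp instance name is a guess (it may be w3wp#1, w3wp#2, ... when several worker processes run):

using System;
using System.Diagnostics;
using System.Threading;

class CounterWatch
{
    static void Main()
    {
        // Machine-wide rate of pages read from / written to disk (hard page fault activity).
        var pagesPerSec = new PerformanceCounter("Memory", "Pages/sec");
        // Cumulative Gen 2 collections for one worker process.
        var gen2 = new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", "w3wp");

        while (true)
        {
            Console.WriteLine("Pages/sec: {0:F1}   Gen 2 collections: {1}",
                pagesPerSec.NextValue(), gen2.NextValue());
            Thread.Sleep(1000);
        }
    }
}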
The fix would be to run with a bigger margin below the physical memory limit.
The first problem is to discover where the timeout is happening. Can you tell from the stack trace if the timeout is happening when executing a request against the database, or when connecting to the database? (Or even connecting to the web server?)
Timeouts executing database requests can have a variety of causes. The problem might be in the database itself: blocking processes, database maintenance (which also takes locks), deadlocks, etc. When the apps are running slowly, do you see a lot of entries in sys.dm_exec_requests, and if so, what are their wait_types?
Even if you can run SQL in the query window while the web server is timing out, that doesn't mean there isn't massive blocking or deadlocking going on.
If it is a timeout connecting to the database, then it is possible the ADO.NET connection pools are overwhelmed and not being cleaned up, or the database has a connection limit and the web services are timing out waiting for a connection.
One of the best ways to find out what is going on is to capture a memory dump of the w3wp.exe process and analyze it. Even if you aren't adept with a debugger like WinDbg, Microsoft's DebugDiag tool can produce some nice reports with helpful information.
SqlCommand.CommandTimeout
This property is the cumulative time-out for all network reads during command execution or processing of the results. A time-out can still occur after the first row is returned, and does not include user processing time, only network read time.
It is a client-side timeout. If work is getting queued up because of memory constraints, that could cause a timeout.
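For what it's worth, the default is 30 seconds, and it can be raised per command; a minimal sketch (sql and connection are placeholders):

using (var cmd = new System.Data.SqlClient.SqlCommand(sql, connection))
{
    cmd.CommandTimeout = 120; // seconds; default is 30, 0 means wait indefinitely
    // execute the command as usual...
}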
Are you retrieving a lot of data from these queries?
If some queries return a lot of data, consider breaking them up and giving the user next and previous buttons.
Have you considered async methods like BeginExecuteReader?
The advantage is that there is no command timeout (CommandTimeout is ignored by the asynchronous Begin methods).
It does not block the calling thread, although the result does not come back on that thread either.
// flag is cleared again in the AsyncCallback once EndExecuteNonQuery completes
isExecutingFTSindexWordOnce = true;
// requires "Asynchronous Processing=true" in the connection string on .NET Framework before 4.5
sqlCmdFTSindexWordOnce.BeginExecuteNonQuery(callbackFTSindexWordOnce, sqlCmdFTSindexWordOnce);
// isExecutingFTSindexWordOnce is set back to false in the callback
Debug.WriteLine("Calling thread active");  // proves the calling thread was not blocked
But I agree with your comment about how to respond to the request, since the answer does not come back on the calling thread.
Sorry, I am used to WPF, where I just update a public property in the callback.
In an ASP.NET website we are storing sessions in SQL Server. All is working fine except that sessions frequently get reset. I have the session timeout set to 30 minutes, but sometimes it expires within a few minutes. We have a dedicated server, and the website runs under a 'classic' application pool. I've searched a lot on this problem but haven't found a satisfactory answer. Any help will be greatly appreciated.
Note: it mostly happens on a page that makes heavy use of viewstate. I'm curious whether there is a link between viewstate and session recycling.
We've experienced this problem when we either have a web farm (more than one web server servicing clients) or a web garden (more than one worker process in an application pool).
If you have a web farm, then you need to ensure that all web servers have the same machine key and that all instances have the exact same application path.
If you have a web garden, try dropping the maximum worker processes back to 1 to see if that resolves the issue.
While you are checking the IIS settings, you should probably also ensure that the application pool is not recycling regularly. This can be due to any of the following settings on the app pool:
1) Private Memory Limit (the app pool is reset if the maximum amount of memory has been exceeded)
2) Regular Time Interval recycling (the app pool is automatically recycled after a specified number of minutes, defaulting to 1740, and/or at specific times).
3) Idle Time-out (the number of minutes of inactivity that can elapse before the application pool is automatically shut down).
You should also check the event logs for reports of the application pool crashing or otherwise being recycled.
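If the logs don't make it obvious, a quick way to see why the app domain is being torn down is to log HostingEnvironment.ShutdownReason; a rough sketch for Global.asax.cs:

protected void Application_End(object sender, EventArgs e)
{
    // Reports values such as IdleTimeout, ConfigurationChange, BinDirChangeOrDirectoryRename, ...
    var reason = System.Web.Hosting.HostingEnvironment.ShutdownReason;
    System.Diagnostics.Trace.WriteLine("App domain shut down: " + reason);
}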
Update:
An additional thought:
If you have an application, such as anti-virus or backup software, that monitors your application's bin directory and modifies or changes attributes (such as the backup flag or timestamp) of files in that directory OR your web.config file, this will cause application recycling as well.
Does the same connection string used on two different physical servers, hosting different web applications that talk to the same database, draw connections from the same connection pool? Or are pooled connections confined to the application level?
I ask because I inherited a 7-year-old .NET 1.1 web application which is riddled with inline SQL and unclosed, undisposed SqlConnection and DataReader objects. Recently, I was tasked with writing a small web app that is hosted on another server, talks to the same database, and therefore uses the same database connection string. I created a LINQ object to read and write the one table required by the app. Now the original .NET 1.1 app is throwing exceptions like
"Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
Maybe these are unrelated, but I wanted to get your opinions to make sure I cover all my bases.
Thanks!
There is no way connections can be pooled between two separate machines. Your SQL Server does, however, have a limit on the total number of connections.
This error is most likely occurring because the application is not returning connections to the connection pool, which happens when connections are not disposed of correctly. That can be due to poor code (does it use a using block, or a try/catch/finally?), and a SqlDataReader that is left open can also keep the connection open after the code that executed the SQL has exited.
Connection pools live inside the worker process of your app pool, so it shouldn't be possible for a separate machine to steal connections out of another box's pool. Have a look at the ADO.NET connection pooling documentation for more info. I'd also recommend turning on the ADO.NET performance counters to see what's going on in there a bit more.
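If it helps, the ADO.NET counters live under the ".NET Data Provider for SqlServer" category; a rough sketch that dumps the pooled-connection count per instance (the category and counter names are from memory, so double-check them in perfmon):

using System;
using System.Diagnostics;

class PoolCounters
{
    static void Main()
    {
        var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");
        foreach (var instance in category.GetInstanceNames())
        {
            using (var counter = new PerformanceCounter(category.CategoryName,
                "NumberOfPooledConnections", instance))
            {
                Console.WriteLine("{0}: {1} pooled connections", instance, counter.NextValue());
            }
        }
    }
}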
You might also want to check the maximum number of connections on SQL Server. In Management Studio:
Right click on the Server name --> Properties --> Connections
look for "Maximum number of concurrent connections (0 = unlimited)"
After deploying an ASP.NET application to a new server, the first user to hit the app gets a long pause, presumably because the app is performing its initial compilation. However, this pause also seems to occur after the application has timed out and unloaded itself from memory.
The first compilation would be tolerable, as it only happens once, but the second one seems unnecessary to me... is there a workaround to address this issue? It would be nice to be able to extend the application timeout, but I only see a way to extend the session timeout, which I would rather not do, as this would keep all user sessions in memory for a long period of time.
That's IIS shutting down the worker process - the default is a 20 min idle time. In IIS6, you can configure the app pool and turn off the Idle timeout (on Performance tab). In IIS7, it's in the Advanced Settings.
A common way (though it looks hackish to me) to keep the worker process alive is to have some program issue web requests to a URL in the application on a regular basis.
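For example, a small console app run from Task Scheduler every few minutes; a minimal sketch, with the URL as a placeholder:

using System;
using System.Net;

class KeepAlive
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Any cheap page in the app will do; hitting it keeps the worker process warm.
            client.DownloadString("http://localhost/MyApp/keepalive.aspx"); // placeholder URL
        }
    }
}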
You can reduce the initial startup time by precompiling the ASP.NET app with aspnet_compiler.exe. This won't reduce the initial request time to zero (it'll still need to create the worker process and do other housekeeping), but it will noticeably reduce it.
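For example, the in-place form is roughly (verify the switches against your setup):

aspnet_compiler -v /MyApp

where /MyApp is the application's virtual path on the local IIS site.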
I did find some recommended settings for the App Pools in IIS. The intended outcome of these settings is to reduce the number of times your application needs to go through the startup cycle. These settings should apply to IIS 6 as well as IIS 7:
Recycle Worker Process: Disable. Enable it only if you have multiple websites and are trying to isolate processes.
Shutdown worker processes after being idle for (time in minutes): Disable unless you are sharing resources on your server. With this setting disabled, your website should not unload, ensuring that startup time for your site is as minimal as possible.