How to find leaking db connection pool handle? - asp.net

I'm seeing the dreaded "The timeout period elapsed prior to obtaining a connection from the pool" error.
I've searched the code for any unclosed db connections, but couldn't find any.
What I want to do is this: the next time we get this error, have the system dump a list of which procs or http requests are holding all the handles, so I can figure out which code is causing the problem.
Even better would be to see how long those handles had been held, so I could spot used-but-unclosed connections.
Is there any way to do this?

If you are lucky enough that connection creation/opening is centralized, then the following class should make it easy to spot leaked connections. Enjoy :)
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading; // for System.Threading.Timer (not to be confused with System.Timers.Timer)

/// <summary>
/// This class can help identify db connection leaks (connections that are not closed after use).
/// Usage:
/// connection = new SqlConnection(..);
/// connection.Open();
/// #if DEBUG
/// new ConnectionLeakWatcher(connection);
/// #endif
/// That's it. Don't store a reference to the watcher. It will make itself available for garbage collection
/// once it has fulfilled its purpose. Watch the Visual Studio debug output for details on potentially leaked connections.
/// Note that a connection could possibly just be taking its time and may eventually be closed properly despite being flagged by this class,
/// so take the output with a pinch of salt.
/// </summary>
public class ConnectionLeakWatcher : IDisposable
{
    private readonly Timer _timer;

    // Store a reference to the connection so we can unsubscribe from state change events
    private SqlConnection _connection;

    private static int _idCounter = 0;
    // Interlocked, in case watchers are created from multiple threads
    private readonly int _connectionId = Interlocked.Increment(ref _idCounter);

    public ConnectionLeakWatcher(SqlConnection connection)
    {
        _connection = connection;
        StackTrace = Environment.StackTrace;

        connection.StateChange += ConnectionOnStateChange;
        System.Diagnostics.Debug.WriteLine("Connection opened " + _connectionId);

        _timer = new Timer(x =>
        {
            // The timeout expired without the connection being closed. Write the stack trace
            // captured at creation time to the debug output to assist in pinpointing the problem
            System.Diagnostics.Debug.WriteLine("Suspected connection leak with origin: {0}{1}{0}Connection id: {2}", Environment.NewLine, StackTrace, _connectionId);
            // That's it - we're done. Clean up by calling Dispose.
            Dispose();
        }, null, 10000, Timeout.Infinite);
    }

    private void ConnectionOnStateChange(object sender, StateChangeEventArgs stateChangeEventArgs)
    {
        // Connection state changed. Was it closed?
        if (stateChangeEventArgs.CurrentState == ConnectionState.Closed)
        {
            // The connection was closed within the timeout
            System.Diagnostics.Debug.WriteLine("Connection closed " + _connectionId);
            // That's it - we're done. Clean up by calling Dispose.
            Dispose();
        }
    }

    public string StackTrace { get; private set; }

    #region Dispose
    private bool _isDisposed = false;

    public void Dispose()
    {
        if (_isDisposed) return;

        _timer.Dispose();

        if (_connection != null)
        {
            _connection.StateChange -= ConnectionOnStateChange;
            _connection = null;
        }

        _isDisposed = true;
        GC.SuppressFinalize(this);
    }

    ~ConnectionLeakWatcher()
    {
        Dispose();
    }
    #endregion
}
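For example, a call site might look like this (a sketch; the connection string and query are placeholders, not part of the original answer):

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
#if DEBUG
    // Fire-and-forget: the watcher unhooks itself when the connection closes
    // or when its 10-second timeout fires.
    new ConnectionLeakWatcher(connection);
#endif
    // ... use the connection ...
}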

There are some good links for monitoring connection pools. Do a Google search for ".net connection pool monitoring".
One article I referred to a while back was Bill Vaughn's article (note this is old but still contains useful info). It has some info on monitoring connection pools, as well as some great insights into where leaks could be occurring.
For monitoring, he suggests:
"Monitoring the connection pool
Okay, so you opened a connection and closed it and want to know if the
connection is still in place—languishing in the connection pool on an
air mattress. Well, there are several ways to determine how many
connections are still in place (still connected) and even what they
are doing. I discuss several of these here and in my book:
· Use the SQL Profiler with the SQLProfiler TSQL_Replay
template for the trace. For those of you familiar with the Profiler,
this is easier than polling using SP_WHO.
· Run SP_WHO or SP_WHO2, which return information from the
sysprocesses table on all working processes showing the current status
of each process. Generally, there’s one SPID server process per
connection. If you named your connection, using the Application Name
argument in the connection string, it’ll be easy to find.
· Use the Performance Monitor (PerfMon) to monitor the pools
and connections. I discuss this in detail next.
· Monitor performance counters in code. This option permits
you to display or simply monitor the health of your connection pool
and the number of established connections. I discuss this in a
subsequent section in this paper."
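On that last point, a minimal sketch of reading the ADO.NET pool counters from code might look like the following (this assumes the ".NET Data Provider for SqlServer" performance counter category is available on the machine; counter and instance names can vary by provider and process):

using System;
using System.Diagnostics;

public static class PoolMonitor
{
    public static void DumpPooledConnectionCounts()
    {
        var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");
        foreach (string instance in category.GetInstanceNames())
        {
            using (var counter = new PerformanceCounter(
                category.CategoryName, "NumberOfPooledConnections", instance, readOnly: true))
            {
                // One instance per process that has opened a SqlConnection
                Console.WriteLine("{0}: {1} pooled connections", instance, counter.NextValue());
            }
        }
    }
}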
Edit:
As always, check out some of the other similar posts here on SO
Second Edit:
Once you've confirmed that connections aren't being reclaimed by the pool, another thing you could try is to use the StateChange event to confirm when connections are being opened and closed. If you find that there are many more state changes to open than to closed, that would indicate a leak somewhere. You could also log the data in the StateChange event along with a timestamp and, if you have any other logging in your application, parse the log files for instances where a connection changes from closed to open with no corresponding change from open to closed. See this link for more info on how to handle the StateChange event.
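A sketch of what that StateChange logging could look like (Debug output is used here; swap in whatever logging your application already has):

var connection = new SqlConnection(connectionString);
connection.StateChange += (sender, e) =>
    System.Diagnostics.Debug.WriteLine(
        "{0:O} connection state changed: {1} -> {2}",
        DateTime.UtcNow, e.OriginalState, e.CurrentState);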

I've used this
http://www.simple-talk.com/sql/performance/how-to-identify-slow-running-queries-with-sql-profiler/
to find long-running stored procedures before; I can then work back and find the method that called the SP.
Don't know if that'll help.

Related

When using @StreamListener, are customizations to the KafkaListenerContainerFactory reflected in the generated KafkaMessageListenerContainer?

I am using spring-cloud-stream with the Kafka binder to consume messages from Kafka. The application basically consumes messages from Kafka and updates a database.
There are scenarios when the DB is down (which might last for hours) or some other temporary technical issue occurs. Since in these scenarios there is no point in retrying a message for a limited amount of time and then moving it to the DLQ, I am trying to achieve an infinite number of retries when we get certain types of exceptions (e.g. DBHostNotAvaialableException).
To achieve this I tried two approaches (and am facing issues in both):
In the first approach, I tried setting an error handler on the container properties while configuring the ConcurrentKafkaListenerContainerFactory bean, but the error handler is not getting triggered at all. While debugging the flow I realized that the KafkaMessageListenerContainers that are created have a null errorHandler field, so they use the default LoggingErrorHandler. Below are my container factory bean configurations (the @StreamListener method for this approach is the same as in the second approach, except for the seek on the consumer):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
        ConsumerFactory<String, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(kafkaConsumerFactory);
    factory.getContainerProperties().setAckOnError(false);
    ContainerProperties containerProperties = factory.getContainerProperties();
    // even tried a custom implementation of RemainingRecordsErrorHandler, but the call never went into the implementation
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}
Am I missing something while configuring the factory bean, or is this bean only relevant for @KafkaListener and not @StreamListener?
The second alternative was trying to achieve it using manual acknowledgement and seek. Inside a @StreamListener method I get the Acknowledgment and Consumer from the headers; when a retryable exception is received, I do a certain number of retries using a RetryTemplate, and when those are exhausted I trigger a consumer.seek(). Example code below -
@StreamListener(MySink.INPUT)
public void processInput(Message<String> msg) {
    MessageHeaders msgHeaders = msg.getHeaders();
    Acknowledgment ack = msgHeaders.get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    Consumer<?, ?> consumer = msgHeaders.get(KafkaHeaders.CONSUMER, Consumer.class);
    Integer partition = msgHeaders.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class);
    String topicName = msgHeaders.get(KafkaHeaders.RECEIVED_TOPIC, String.class);
    Long offset = msgHeaders.get(KafkaHeaders.OFFSET, Long.class);
    try {
        retryTemplate.execute(
            context -> {
                // this is a sample service call to update the database, which might throw retryable exceptions like DBHostNotAvaialableException
                consumeMessage(msg.getPayload());
                return null;
            }
        );
    }
    catch (DBHostNotAvaialableException ex) {
        // once retries as per the retry template are exhausted, do a seek
        consumer.seek(new TopicPartition(topicName, partition), offset);
    }
    catch (Exception ex) {
        // for any other exception, just log and put in the DLQ based on the enableDlq property
        logger.warn("some other business exception hence putting in dlq ");
        throw ex;
    }
    if (ack != null) {
        ack.acknowledge();
    }
}
Problem with this approach - since I am doing consumer.seek(), there might be pending records from the last poll that get processed and committed if the DB comes up during that period (hence out of order). Is there a way to clear those records when a seek is performed?
PS - we are currently on version 2.0.3.RELEASE of Spring Boot and Finchley.RELEASE of the Spring Cloud dependencies (hence we cannot use features like negative acknowledgement either, and an upgrade is not possible at this moment).
Spring Cloud Stream does not use a container factory. I already explained that to you in this answer.
Version 2.1 introduced the ListenerContainerCustomizer and if you add a bean of that type it will be called after the container is created.
Spring Boot 2.0 went end-of-life over a year ago and is no longer supported.
The answer I referred you to shows how you can use reflection to add an error handler.
Doing the seek in the listener will only work if you have max.poll.records=1.

NHibernate: Get all opened sessions

I have an ASP.NET application with NHibernate. For some reason a few developers forgot to close the sessions in some pages (about 20, I think). I know that the best solution is to go through each page and make sure the sessions are closed properly, but I can't make that kind of change because the code is already in production. So I was trying to find a way to get all the open sessions from the session factory and then close them, either from the master page or in an additional process, but I can't find a way to do that.
So, is there a way to get all the open sessions? Or maybe set a session idle timeout or something? What do you suggest? Thanks in advance.
As far as I know, there is no support for getting a list of open sessions from the session factory. I have my own method of keeping an eye on open sessions, and I use this construction:
Create a class with a weak reference to an ISession. This way you won't prevent sessions from being garbage collected:
public class SessionInfo
{
    private readonly WeakReference _session;

    public SessionInfo(ISession session)
    {
        _session = new WeakReference(session);
    }

    public ISession Session
    {
        get { return (ISession)_session.Target; }
    }
}
Create a list for storing your open sessions:
List<SessionInfo> OpenSessions = new List<SessionInfo>();
And in the DAL (data access layer) I have this method:
public ISession GetNewSession()
{
    if (_sessionFactory == null)
        _sessionFactory = createSessionFactory();

    ISession session = _sessionFactory.OpenSession();
    OpenSessions.Add(new SessionInfo(session));
    return session;
}
This way I maintain a list of open sessions I can query when needed. Perhaps this meets your needs?
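If you need to act on that list, a sweep might look something like the sketch below. CloseLeakedSessions is a hypothetical helper, not part of the original answer; in a real multi-threaded web app you would also want to synchronize access to OpenSessions, and note that force-closing will break any request that is legitimately still using its session:

public void CloseLeakedSessions()
{
    // Iterate over a copy so we can remove entries while sweeping
    foreach (var info in OpenSessions.ToArray())
    {
        var session = info.Session; // null once the session has been garbage collected
        if (session == null)
        {
            OpenSessions.Remove(info); // drop dead weak references
        }
        else if (session.IsOpen)
        {
            // Careful: this also closes sessions that may still be in use
            session.Close();
            OpenSessions.Remove(info);
        }
    }
}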

Static variables and long running thread on IIS 7.5

Help me solve the following problem.
I have an ASP.NET MVC2 application running on IIS 7.5. On one page, the user clicks a button whose handler sends a request to the server (jquery.ajax). On the server, the controller action starts a new thread (which performs a long-running import):
var thread = new Thread(RefreshCitiesInDatabase);
thread.Start();
The state of the import is available in a static variable. The new thread changes the value of this variable at the beginning of its work.
The user can also check the state of the import with the help of this variable, which is used in a view. So the user sees the import's state.
For the first few minutes after I start this function, everything is okay. On the page I see the right state of the import, the quantity of imported records changes, and I see changes in the logs. But after a few minutes the trouble begins.
When I refresh the page with the import state, sometimes I see that the import is okay, but sometimes I see a page with default values about the import (as if the application had just started); after that I can again see the page with the normal import state.
I tried to attach Visual Studio to the IIS process and debug the application. But when a request comes to the controller, sometimes the static variables have the right values and sometimes they have default values (a static int has 0, a static string has "" etc.).
Tell me what I'm doing wrong. Maybe I should start the additional thread in another way?
Thanks in advance,
Dmitry
I've added parts of the code:
Controller:
public class ImportCitiesController : Controller
{
    [Dependency]
    public SaveCities SaveCities { get; set; }

    // Start import
    public JsonResult StartCitiesImport()
    {
        // Method in the core dll which performs the import
        SaveCities.StartCitiesSaving();
        return Json("ok");
    }

    // Get information about the import
    public ActionResult GetImportState()
    {
        var model = new ImportCityStatusModel { NowImportProcessing = SaveCities.CitiesSaving };
        return View(model);
    }
}
Class in Core:
public class SaveCities
{
    // Field is true while the program is saving to the database
    public static bool CitiesSaving = false;

    public void StartCitiesSaving()
    {
        var thread = new Thread(RefreshCitiesInDatabase);
        thread.Start();
    }

    private static void RefreshCitiesInDatabase()
    {
        CitiesSaving = true;
        // Processing......
        CitiesSaving = false;
    }
}
UPDATE
I think I found the problem, but I still don't know how to solve it. My IIS uses an application pool with the parameter "Maximum Worker Processes" = 10, so requests to the application are handled by several processes. My requests to the controller for the import's state are handled by different processes each time, and each process has its own static variables. I guess this is the right track for a solution.
But I don't know how to share this state in one place across all the worker processes.
Without looking at the code, here is the obvious question: are you sure your access is thread safe (that is, do you properly use lock to update your value or even access it => C# thread safety with get/set)?
A code sample would be nice.
Thanks for the code. It seems that CitiesSaving is not locked properly before reads/writes; you should hide the field behind a property to handle all the locking. Marking this field as volatile could also help (see http://msdn.microsoft.com/en-us/library/aa645755(v=vs.71).aspx).
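A minimal sketch of that suggestion follows. Note this only helps within a single worker process; with Maximum Worker Processes > 1, as discovered in the update above, each process still gets its own copy, so truly shared state needs an external store such as the database:

public class SaveCities
{
    private static readonly object _stateLock = new object();
    private static bool _citiesSaving;

    // Every read and write goes through the same lock
    public static bool CitiesSaving
    {
        get { lock (_stateLock) { return _citiesSaving; } }
        private set { lock (_stateLock) { _citiesSaving = value; } }
    }

    // StartCitiesSaving/RefreshCitiesInDatabase stay as before;
    // RefreshCitiesInDatabase assigns CitiesSaving = true/false through the property
}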

ASP.NET/Static class Race Condition?

I have an ASP.NET application with a lot of dynamic content. The content is the same for all users belonging to a particular client. To reduce the number of database hits required per request, I decided to cache client-level data. I created a static class ("ClientCache") to hold the data.
The most-often used method of the class is by far "GetClientData", which brings back a ClientData object containing all stored data for a particular client. ClientData is loaded lazily, though: if the requested client data is already cached, the caller gets the cached data; otherwise, the data is fetched, added to the cache and then returned to the caller.
Eventually I started getting intermittent crashes in the GetClientData method on the line where the ClientData object is added to the cache. Here's the method body:
public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
        _clients = new Dictionary<Guid, ClientData>();

    ClientData client;
    if (_clients.ContainsKey(fk_client))
    {
        client = _clients[fk_client];
    }
    else
    {
        client = new ClientData(fk_client);
        _clients.Add(fk_client, client);
    }
    return client;
}
The exception text is always something like "An object with the same key already exists."
Of course, I tried to write the code so that it just wasn't possible to add a client to the cache if it already existed.
At this point, I'm suspecting that I've got a race condition and the method is being executed twice concurrently, which could explain how the code would crash. What I'm confused about, though, is how the method could be executed twice concurrently at all. As far as I know, any ASP.NET application only ever fields one request at a time (that's why we can use HttpContext.Current).
So, is this bug likely a race condition that will require putting locks in critical sections? Or am I missing a more obvious bug?
If ASP.NET applications only handled one request at a time, all ASP.NET sites would be in serious trouble. ASP.NET can process dozens at a time (typically 25 per CPU core).
You should use ASP.NET Cache instead of using your own dictionary to store your object. Operations on the cache are thread-safe.
Note you need to be sure that read operations on the object you store in the cache are thread-safe; unfortunately, most .NET classes simply state that instance members aren't thread-safe without pointing out which ones might be.
Edit:
A comment to this answer states:
"Only atomic operations on the cache are thread safe. If you do something like check if a key exists and then add it, that is NOT thread safe and can cause the item to be overwritten."
It's worth pointing out that if we feel we need to make such an operation atomic, then the cache is probably not the right place for the resource.
I have quite a bit of code that does exactly what the comment describes. However, the resource being stored will be the same in both places, so if an existing item gets overwritten on rare occasions, the only cost is that one thread unnecessarily generated a resource. The cost of this rare event is much less than the cost of trying to make the operation atomic on every access.
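For illustration, a minimal sketch of the Cache-based version of GetClientData (assuming the ClientData constructor from the question; the cache key and expiration policy are left to taste):

public static ClientData GetClientData(Guid fk_client)
{
    string key = "ClientData:" + fk_client;
    var client = HttpRuntime.Cache[key] as ClientData;
    if (client == null)
    {
        client = new ClientData(fk_client);
        // Insert silently overwrites if another thread beat us to it -
        // the rare duplicate load is the acceptable cost discussed above
        HttpRuntime.Cache.Insert(key, client);
    }
    return client;
}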
This is very easy to fix:
private static readonly object _clientsLock = new object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
        lock (_clientsLock)
            // Check again because another thread could have created a new
            // dictionary in between the null check and taking the lock
            if (_clients == null)
                _clients = new Dictionary<Guid, ClientData>();

    if (_clients.ContainsKey(fk_client))
    {
        // Don't need a lock here UNLESS there are also deletes. If there are
        // deletes, then a lock like the one below (in the else) is necessary
        return _clients[fk_client];
    }
    else
    {
        ClientData client = new ClientData(fk_client);
        lock (_clientsLock)
            // Again, check because another thread could have added this
            // ClientData between the ContainsKey check above and this add
            if (!_clients.ContainsKey(fk_client))
                _clients.Add(fk_client, client);
        return client;
    }
}
Keep in mind that whenever you work with static classes, you have the potential for thread synchronization problems. If there's a static class-level list of some kind (in this case _clients, the Dictionary object), there are DEFINITELY going to be thread synchronization issues to deal with.
Your code assumes only one thread is in the function at a time.
That simply won't be true in ASP.NET.
If you insist on doing it this way, use a static semaphore to lock the area around this class.
You need thread safety while minimizing locking.
See Double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
Write it simply with TryGetValue:
public static object lockClientsSingleton = new object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
    {
        lock (lockClientsSingleton)
        {
            if (_clients == null)
            {
                _clients = new Dictionary<Guid, ClientData>();
            }
        }
    }

    ClientData client;
    if (!_clients.TryGetValue(fk_client, out client))
    {
        lock (_clients)
        {
            if (!_clients.TryGetValue(fk_client, out client))
            {
                client = new ClientData(fk_client);
                _clients.Add(fk_client, client);
            }
        }
    }
    return client;
}

ASP.NET lock() doesn't work

I'm trying to put a lock on a static string object to access the cache. The lock() block executes on my local machine, but whenever I deploy it to the server, it locks forever. I write every single step to the event log to trace the process, and lock(object) just causes a deadlock on the server. The command right after lock() is never executed, as I don't see an entry in the event log.
Below is the code:
public static string CacheSyncObject = "CacheSync";

public static DataView GetUsers()
{
    DataTable dtUsers = null;

    if (HttpContext.Current.Cache["dtUsers"] != null)
    {
        dtUsers = HttpContext.Current.Cache["dtUsers"] as DataTable;
        Global.eventLogger.Write(String.Format("GetUsers() cache hit: {0}", dtUsers.Rows.Count));
        return dtUsers.Copy().DefaultView;
    }

    Global.eventLogger.Write("GetUsers() cache miss");

    lock (CacheSyncObject)
    {
        Global.eventLogger.Write("GetUsers() locked SyncObject");

        if (HttpContext.Current.Cache["dtUsers"] != null)
        {
            Global.eventLogger.Write("GetUsers() oops, another thread filled the cache, release lock");
            return (HttpContext.Current.Cache["dtUsers"] as DataTable).Copy().DefaultView;
        }

        Global.eventLogger.Write("GetUsers() locked SyncObject"); // ==> this is never written to the log, which tells me that lock() never executes
        // ... (the remainder of the method, which fills the cache, was not included in the post)
You're locking on a string, which is a generally bad idea in .NET due to interning: the .NET runtime stores all identical literal strings only once, so you have little control over who else is locking on "your" string.
I'm not sure how the ASP.NET runtime handles this, but the regular .NET runtime actually interns strings for the entire process, which means interned strings are shared even between different AppDomains. Thus you could be deadlocking between different instances of your method.
What happens if you use:
public static object CacheSyncObject = new object();
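A slightly safer variant of that fix (the private/readonly modifiers are my addition; readonly prevents the lock object from ever being reassigned):

private static readonly object CacheSyncObject = new object();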
