How to troubleshoot old SQL connections with open_tran > 0? - asp.net

We have an ASP.NET API web site which connects using NHibernate to a SQL Server.
The problem we are experiencing is that gradually throughout the day, the number of connections to the SQL server creeps up, and there are many connections that do not appear to be returned to the pool. By this, I mean that if I run the following query:
select * from master..sysprocesses s where datediff(minute, s.last_batch, getdate())>10
the number of rows returned just keeps climbing. Nothing in the API should be taking 10 minutes to complete. And there are connections in there from hours ago.
Here's another clue: the open_tran column of all these rows has a value of 1. So it seems to me that somewhere inside the API call, we're creating a transaction boundary, and that transaction is never being closed. Perhaps DTC may have a hand in this (we sometimes do connect to more than one database in a call).
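A variant of the query above (my addition, for illustration) narrows the list to sessions that still hold an open transaction and shows which client they came from:
select s.spid, s.open_tran, s.last_batch, s.loginame, s.hostname, s.program_name
from master..sysprocesses s
where s.open_tran > 0 and datediff(minute, s.last_batch, getdate()) > 10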
The thing is, I haven't a clue how to troubleshoot this further. I've tried running DBCC INPUTBUFFER on the rogue spids, and there's nothing consistent between them.
What are some of the anti-patterns/other possible causes that might lead to this behavior?
Update: here's how the DB connection is being created. We're using StructureMap for Dependency Injection. We create two DB connections on each unit of work: one "normal" connection for regular read/write access, and an "uncommitted" connection that runs in a transaction with "ReadUncommitted" access (we were having a problem with table locking when reading from large tables).
Here's the code from the DI Registry:
For<ISession>().Transient().Use(context => context.GetInstance<ISessionFactory>().OpenSession());
For<ISessionUncommittedWrapper>().Transient().Use(context => new SessionUncommittedWrapper { Session = context.GetInstance<ISessionFactory>().OpenSession() });
Then, inside the unit of work middleware, we create a UnitOfWork (with a using block, of course), which takes an ISession and an ISessionUncommittedWrapper in the constructor. In the Begin() method, we have:
_uncommittedTransaction = SessionUncommittedWrapper.Session.BeginTransaction(IsolationLevel.ReadUncommitted);
which gets disposed (along with the ISession and ISessionUncommittedWrapper) in the UnitOfWork's Dispose() method.
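For context, here is a minimal sketch of what that unit of work might look like, reconstructed from the description above (everything beyond ISession, the wrapper's Session property, Begin() and Dispose() is an assumption, not the actual code):
using System;
using System.Data;
using NHibernate;

public interface ISessionUncommittedWrapper
{
    ISession Session { get; set; }
}

public class UnitOfWork : IDisposable
{
    private readonly ISession _session;
    private readonly ISessionUncommittedWrapper _sessionUncommittedWrapper;
    private ITransaction _uncommittedTransaction;

    public UnitOfWork(ISession session, ISessionUncommittedWrapper sessionUncommittedWrapper)
    {
        _session = session;
        _sessionUncommittedWrapper = sessionUncommittedWrapper;
    }

    public void Begin()
    {
        _uncommittedTransaction =
            _sessionUncommittedWrapper.Session.BeginTransaction(IsolationLevel.ReadUncommitted);
    }

    public void Dispose()
    {
        _uncommittedTransaction?.Dispose(); // NHibernate rolls back a still-active transaction here
        _sessionUncommittedWrapper.Session.Dispose();
        _session.Dispose();
    }
}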

I eventually found the problem.
The way I found the problem was by creating a logging table that tracked the creation and disposal of Sessions, along with the URI of the endpoint called. By querying all the undisposed connections, I found that in every case where the connection was not disposed, the path began with "/signalr".
<facepalm>D'oh!</facepalm>
Since the OWIN middleware was proactively creating the SQL connections, it was also doing so for SignalR, which by its nature keeps the transaction open! So every client that logged in with SignalR was hogging two SQL connections.
I made the appropriate changes to exclude SignalR connections from the middleware, and now we have no more hanging SQL connections.
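For reference, the exclusion can be a simple path check at the top of the middleware. This is only a sketch: the "/signalr" prefix comes from the logging above, but the middleware shape and the CreateUnitOfWork() helper are assumptions:
using System.Threading.Tasks;
using Microsoft.Owin;

public class UnitOfWorkMiddleware : OwinMiddleware
{
    public UnitOfWorkMiddleware(OwinMiddleware next) : base(next) { }

    public override async Task Invoke(IOwinContext context)
    {
        // SignalR holds its request open for the life of the client connection,
        // so don't create a unit of work (and its two SQL connections) for it.
        if (context.Request.Path.StartsWithSegments(new PathString("/signalr")))
        {
            await Next.Invoke(context);
            return;
        }

        using (var unitOfWork = CreateUnitOfWork()) // hypothetical factory, e.g. via StructureMap
        {
            unitOfWork.Begin();
            await Next.Invoke(context);
        }
    }

    private UnitOfWork CreateUnitOfWork()
    {
        // Container resolution omitted; illustrative only.
        throw new System.NotImplementedException();
    }
}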

Related

sqlite3 + node: when to close db?

I'm using better-sqlite3 on Node, but I suspect my questions are applicable to node-sqlite3 as well.
I basically have 2 simple questions, relating to a server-rendered website:
Do I need to explicitly call .close() on the database? I seem to remember reading somewhere that it will automatically close when the current scope (like the current function) exits. What if I never call .close() in a web server scenario, taking a lot of requests?
If you have a bunch of different components (authentication, authorisation, localisation, payment, etc...) and each component may or may not need to access the database throughout the lifetime of a request (which are quite short-lived, except for payment), is it better to
have one db connection for the lifetime of the server and pass that around
have one db connection for the lifetime of the request and pass that around
open a new connection every time I need something, maybe 2-3 times per request (and close it either explicitly or implicitly when the function returns, if that's a thing)
Thank you
Joshua Wise's (better-sqlite3's creator) answer over on GitHub:
Database connections are automatically closed when they are garbage collected, which is non-deterministic. If you want to know that the connection is closed (rather than guessing), you should call .close().
You can just open one database connection for the entire thread (the entire process if you're not using worker threads), and share that connection between every request. Node.js is single-threaded, so you don't have to worry about simultaneous access, even if multiple requests are being handled concurrently. The one caveat is that you should never have a SQLite transaction open across multiple ticks of the event loop (i.e., don't use await between BEGIN and COMMIT), because then other requests could accidentally inject SQL into your transactions. Also, SQLite transactions are serialized (you can't have more than one at a time), so you should open and close them as quickly as possible; keeping them open across ticks of the event loop is bad for performance.

Maxmind Injecting new DatabaseReader as a singleton to avoid re-accessing the file again and again

In a .NET Core web app, I want to inject a new DatabaseReader as a singleton, so I use AddSingleton in my Startup class.
services.AddSingleton(x => new DatabaseReader(pathToFile));
Do you think it's a good idea to reuse DatabaseReader?
Thanks
A single connection is a bad idea - if access to the connection is properly locked, it means that the website/application can only serve one user at a time.
That leaves you extremely limited in application scalability, with no ability to support a large number of users.
There is also a problem when the connection is not locked properly: things can get weird.
For example, one thread might dispose the connection while another thread is trying to execute a command against it.
A better option is to use connection pooling, creating a new connection object whenever you need one. That way you can handle many requests at the same time, and your limitation should be the database.
Yes, you should reuse the DatabaseReader across concurrent requests. The reader is thread safe and does not rely on locks for that thread safety.
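To illustrate, a sketch of the shared-singleton shape (assuming the MaxMind.GeoIP2 package; the file path, route and controller are illustrative, not from the question):
using MaxMind.GeoIP2;
using Microsoft.AspNetCore.Mvc;

// In Startup.ConfigureServices / Program: open the .mmdb file once for the whole app.
// services.AddSingleton(_ => new DatabaseReader("GeoLite2-City.mmdb"));

public class GeoController : ControllerBase
{
    private readonly DatabaseReader _reader;

    public GeoController(DatabaseReader reader) => _reader = reader; // resolved from DI

    [HttpGet("geo/{ip}")]
    public IActionResult Lookup(string ip)
    {
        var city = _reader.City(ip); // thread safe; no external locking needed
        return Ok(new { Country = city.Country.IsoCode, City = city.City.Name });
    }
}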

Npgsql connection pool and performance counters

First, some background:
1. Understand how connection pooling is used by Npgsql in an ASP.NET REST API.
Environment:
- We have a REST API controller that first queries a list of items (from RDS), then for each item in the list we need to obtain some additional values, so we use a Parallel.ForEach statement
Every time we use a connection we dispose it properly
I've seen that every time this endpoint is called the number of connections increases, and then they are removed OK.
Process:
I've followed http://www.npgsql.org/doc/performance.html#performance-counters to check on how NPGSQL is handling connections, also added the following to the connectionstring:
"CommandTimeout=50000;TIMEOUT=1024;POOLING=True;MINPOOLSIZE=1;MAXPOOLSIZE=100;Use Perf Counters=true;"
but I found a strange outcome:
NumberOfNonPooledConnections and NumberOfPooledConnections are always the same in my case (56); we are using a Parallel.ForEach to query several items.
The value for NumberOfActiveConnectionPools is 1.
At first I couldn't really understand how this was working - was it really using the connection pool?
Then I stopped the process, removed ";POOLING=True;" from the connection string, and got the same result.
Finally I set ";POOLING=false;" and executed again; now NumberOfPooledConnections went through the roof - it reached 2378 - and then it started timing out opening new connections.
I also noted in the RDS performance metrics that the number of connections never exceeded 110.
So the questions would be:
What would be the criteria for setting the MaxPoolSize parameter? 100 seems to be the usual value.
In ASP.NET, is the connection pool handled per instance? That is, will all connections made from the same IIS application pool be reused, or is the pool per execution?
First, ASP.NET (the web side) has absolutely no effect on Npgsql's connection pooling or on ADO.NET in general, so it's better to reason about Npgsql and ADO.NET without thinking about web.
Second, you aren't saying which version of Npgsql you're using.
Beyond that, before looking at performance counters, what exactly is the problem you are seeing? Are you seeing too many connections at the PostgreSQL side? You can check this by querying pg_stat_activity.
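For example, a generic catalog query like this (my addition; 'mydb' is a placeholder) lists the open backends per database:
select pid, usename, state, backend_start, query
from pg_stat_activity
where datname = 'mydb'
order by backend_start;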
If Npgsql pooling is on (Pooling=true in the connection string, it's also the default), then when you call NpgsqlConnection.Open() a physical connection will be taken from the pool if one is available. When you close or dispose that NpgsqlConnection, it will be returned to the pool to be reused later. If you're seeing physical connections going up too much at the PostgreSQL side, that is a probable sign that you are forgetting to close/dispose a connection in your code and you have a leak.
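As a sketch of that lifecycle (the connection string values are placeholders):
using Npgsql;

var connString = "Host=localhost;Username=app;Password=secret;Database=mydb;Maximum Pool Size=100";

using (var conn = new NpgsqlConnection(connString))
{
    conn.Open(); // rents a physical connection from the pool, or opens a new one
    using (var cmd = new NpgsqlCommand("select 1", conn))
    {
        cmd.ExecuteScalar();
    }
} // Dispose() returns the physical connection to the pool for reuse
In the Parallel.ForEach scenario above, each iteration would open and dispose its own NpgsqlConnection this way; the pool then caps the physical connections at MaxPoolSize.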
The performance counters feature can be useful to understand what's happening, but unfortunately it isn't well-tested and may contain bugs. So please make sure there's an actual issue before starting to look at it (and at the very least report the Npgsql version you're using).

ASP.NET Web site spawning hundreds of connections to SQL Server Express instance - how can I identify culprit code?

This is quite a big, quite badly coded ASP.NET website that I am currently tasked with maintaining. One issue that I'm not sure how to solve is that at seemingly random times the live site will lock up completely; page access is OK, but anything that touches the database causes the application to hang indefinitely.
The cause seems to be many more open connections to the database than you would expect of a lowish level traffic web site. Activity monitor shows 150+ open connections, most with an Application value of '.NET SqlClient Data Provider', with the network service login.
A quick solution is to restart the SQL Server service (I've recycled the ASP.NET app pool also just to ensure that the application lets go of anything, and stops any code from reattempting to open connections if there was some sort of loop process that I'm unaware of). This doesn't however help me solve the problem.
The application uses a Microsoft SQLHelper class, which is a pretty common library, so I'm fairly confident that the code that uses this class will have connections closed where required.
I have, however, spotted a few DataReaders that are not closed properly. I think I'm right in saying that a DataReader can keep the underlying connection open, even if that connection is closed, because it is a connected class (correct me if I'm wrong).
Something that is peculiar: one of the admins restarted the server (not the database server, the actual web server) and immediately the site would hang again. The culprit was again 150+ open database connections.
Does anybody have any diagnostic technique that they can share with me for working out where this is happening?
Update: SQL Server Log files show many entries like this (30+)
2010-10-15 13:28:53.15 spid54 Starting up database 'test_db'.
I'm wondering if the server is getting hit by an attacker. That would explain the many connections right after boot, and at seemingly random times.
Update: Have changed the AutoClose property, though still hunting for a solution!
Update 2: See my answer to this question for the solution!
Update:
Lots and lots of "Starting up database": set the AutoClose property to false (REF).
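For example (T-SQL, using the database name from the log entries above):
ALTER DATABASE test_db SET AUTO_CLOSE OFF;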
You are correct about your DataReaders: make sure you close them. However, I have experienced many problems with connections spawning out of control even when connections were closed properly. Connection pooling didn't seem to be working as expected, since each post-back created a new SqlConnection. To avoid this seemingly unneeded re-creation of the connection, I adopted a Singleton approach to my DAL, so I create a single DataAdapter and send all my data requests through it. Although I've been told that this is unwise, I have not received any support for that claim (and am still eager to read any documentation/opinion to this effect: I want to get better, not be status quo). I have a DataAdapter class for you to consider if you like.
If you are in SQL 2005+, you should be able to use Activity Monitor to see the "Details" of each connection which sometimes gives you the last statement executed. Perhaps this will help you track the statement back to some place in code.
I would recommend downloading http://sites.google.com/site/sqlprofiler/ to see what queries are happening, and sort of work backwards from there. Good luck man!
Many types, such as DbConnection, DbCommand and DbDataReader, and their derived types (SqlConnection, SqlCommand and SqlDataReader), implement the IDisposable interface/pattern.
Without regurgitating what has already been written on the subject, you can take advantage of this mechanism via the using statement.
As a rule of thumb, you should always try to wrap your DbConnection, DbCommand and DbDataReader objects in a using statement, which generates MSIL to call IDisposable's Dispose method. Usually the Dispose method contains code to clean up unmanaged resources, such as closing database connections.
For example:
using (SqlConnection cn = new SqlConnection(connString))
{
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM MyTable", cn))
    {
        cmd.CommandType = CommandType.Text;
        cn.Open();
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // ... do stuff etc.
            }
        }
    }
}
This will ensure that the Dispose methods are called on all of the objects immediately after use. For example, the Dispose method of SqlConnection closes the underlying database connection right away, instead of leaving it around for the next garbage collection run.
Little changes like this improve your application's ability to scale under heavy load. As you already know: "Acquire expensive resources late and release them early." The using statement is a nice bit of syntactic sugar to help you out.
If you're using VB.NET 2.0 or later it has the same construct:
Using Statement (Visual Basic) (MSDN)
using Statement (C# Reference) (MSDN)
This issue came up again today and I managed to solve it quite easily, so I thought I'd post back here.
In this case, you can assume that the connection spamming is coming from the same code block, so to find the code block I opened Activity Monitor and checked the details for some of the many open connections (right-click > Details).
This showed me the offending SQL, which you can search for in your application code / stored procedures.
Once I'd found it, it was as I suspected, an unclosed data reader. Problem is now solved.
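As an aside (my addition, not part of the original fix): when a reader must outlive the method that created it, creating it with CommandBehavior.CloseConnection ties the connection's lifetime to the reader's, so that, given an existing SqlCommand cmd, disposing the reader also closes the connection:
using (SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.CloseConnection))
{
    while (rdr.Read())
    {
        // ... consume rows
    }
} // disposing the reader also closes the underlying connection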

Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction

Just curious if anyone else has got this particular error and know how to solve it?
The scenario is as follow...
We have an ASP.NET web application using Enterprise Library running on Windows Server 2008 IIS farm connecting to a SQL Server 2008 cluster back end.
MSDTC is turned on. DB connections are pooled.
My suspicion is that somewhere along the line there is a failed MSDTC transaction, the connection got returned to the pool, and the next query on a different page picked up the misbehaving connection and got this particular error. The funny thing is that we got this error on a query that has no need whatsoever for a distributed transaction (committing to two databases, etc.). We were only doing a select query (no transaction) when we got the error.
We did SQL profiling and the query ran on the SQL Server, but it never came back (since the MSDTC transaction had already been aborted on the connection).
Some other related errors to accompany this are:
New request is not allowed to start because it should come with valid transaction descriptor.
Internal .Net Framework Data Provider error 60.
MSDTC has a default 90-second timeout; if one query's execution exceeds this time limit, you will encounter this error when the transaction tries to commit.
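If the long-running work is legitimate, the timeout can also be raised per scope (a sketch; the five-minute value is illustrative, and machine.config caps the maximum):
using System;
using System.Transactions;

using (var scope = new TransactionScope(TransactionScopeOption.Required, TimeSpan.FromMinutes(5)))
{
    // ... long-running work against one or more connections
    scope.Complete(); // without this, the transaction rolls back on Dispose
}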
A bounty may help get the answer you seek, but you're probably going to get better answers if you give some code samples and give a better description of when the error occurs.
Does the error only intermittently occur? It sounds like it from your description.
Are you enclosing the code that you want treated as a transaction in a using TransactionScope block, as Microsoft recommends? This should help avoid weird transaction behavior. Recall that a using block makes sure that the object is always disposed, regardless of exceptions thrown. See here: http://msdn.microsoft.com/en-us/library/ms172152.aspx
If you're using TransactionScope, there is an argument, Transactions.TransactionScopeOption.RequiresNew, that tells the framework to always create a new transaction for this block of code:
Using ts As New Transactions.TransactionScope(Transactions.TransactionScopeOption.RequiresNew)
    ' Do Stuff
End Using
Also, if you're suspicious that a connection is getting faulted and then put back into the connection pool, the likely solution is to enclose the code that may fault the connection in a Try-Catch block and Dispose the connection in the catch block.
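A sketch of that suggestion (the ClearPool call is my addition; it evicts the possibly-faulted physical connections for that connection string instead of letting them be reused):
using System.Data.SqlClient;

using (var conn = new SqlConnection(connString))
{
    try
    {
        conn.Open();
        using (var cmd = new SqlCommand("select 1", conn))
        {
            cmd.ExecuteScalar();
        }
    }
    catch (SqlException)
    {
        SqlConnection.ClearPool(conn); // don't return a faulted connection to the pool
        throw;
    }
} // Dispose still runs either way, thanks to the using block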
Old question ... but I ran into this issue in the past few days.
Could not find a good answer until now. Just wanted to share what I found out.
My scenario involves multiple sessions being opened by multiple session factories. I had to roll back correctly, then wait and make sure the other transactions were no longer active; it seems that just rolling back one of them will roll back everything.
But after adding a Thread.Sleep() between the rollbacks, it no longer rolls back the other and the rollback completes fine. Subsequent hits that trigger the method don't result in the "New request is not allowed to start because it should come with valid transaction descriptor." error.
https://gist.github.com/josephvano/5766488
I have seen this before and the cause was exactly what you thought. As Rice suggested, make sure that you are correctly disposing of the db related objects to avoid this problem.
