SqlConnection pooling problem - ASP.NET

Hi, I've got an ASP.NET web application where we just started getting "System.InvalidOperationException: Timeout expired." when trying to get a new SQL connection.
So my guess is that somewhere in the code a connection is created but never disposed, but how would I go about finding where this happens? Sadly, most of the database communication does not use a data layer; it works directly with the SQL data types...
Is there anything I could enable in the web.config to trace which connections stay open for longer than x seconds and where they were opened?

Find everywhere your SqlConnection is used and ensure it is in a using() block so it is disposed automatically. There is nothing built into the web.config for this, unfortunately. You may be able to try ANTS Memory Profiler to help track this down. For example:
http://www.developerfusion.com/review/59466/ants-memory-profiler-51/
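To illustrate the pattern, here is a minimal sketch; the connection string name and the query are placeholders, not from the original code:

Imports System.Configuration
Imports System.Data.SqlClient

Dim connStr = ConfigurationManager.ConnectionStrings("MainDb").ConnectionString ' "MainDb" is hypothetical
Using conn As New SqlConnection(connStr)
    Using cmd As New SqlCommand("SELECT COUNT(*) FROM Users", conn) ' placeholder query
        conn.Open()
        Dim userCount = CInt(cmd.ExecuteScalar())
    End Using
End Using ' the connection is closed and returned to the pool here, even if an exception is thrown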

OK, I found a way to track them down by using:
EXEC SP_WHO2
DBCC INPUTBUFFER(SPID)
SP_WHO2 gives you information about the open connections, and by passing a connection's SPID to DBCC INPUTBUFFER you can find out the last command it ran.

The .NET Framework will not solve your architecture and logging issues, so you will have to find the problem yourself. What can help you:
Performance counters can show you the utilization of the connection pool.
SQL Server Management Studio can show you the connections in use, their activity and their last executed statement. In Object Explorer, select the server and in the context menu go to Reports > Standard Reports > Activity xxx.
Another approach is turning off the pool and using a profiler to collect information about your application and track connections that were not disposed.
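As a rough sketch, the ADO.NET pooling counters can also be read from code; the category and counter names below are the SqlClient ones, but the instance name is per process (something like "w3wp[1234]") and the one shown here is hypothetical - look up yours in perfmon:

Imports System.Diagnostics

' Read the pooled-connection count for a given process instance.
Dim counter As New PerformanceCounter(".NET Data Provider for SqlServer", "NumberOfPooledConnections", "w3wp[1234]", True)
Console.WriteLine("Pooled connections: " & counter.NextValue())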

Related

ORA-22337: the type of accessed object has been evolved - in application

Setting: an ASP.NET application with an Oracle backend; we utilize User Defined Types (UDTs) and use ODP.NET to communicate them between the front and back ends.
Problem: I had to alter the length of one of my UDTs' attributes. Once I did that and tested it in the backend it worked fine, but when I run my site I keep getting the ORA-22337 error (in the subject line)!
You will not find much if you research this problem online; other than the unhelpful Oracle error documentation you will not find anything useful. The Oracle documentation says to close and re-open the connection, but that does not apply to my scenario.
I already solved the problem by dropping and recreating the UDTs and NTs, but it is inefficient to have to do this every time I need to modify one of my core UDTs. Any ideas how to solve this without dropping and recreating everything?
If the error info says "Close and reopen the connection" as the solution and you are using an OracleConnection, which has a connection pool behind it, then simply calling Close() is not good enough. The connection will just go back to the pool still open, and when you "reconnect" you will get it back again. You'll need to close all open connections and then call ClearPool() to make sure that all the old connections in the pool are removed.
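A minimal sketch of that sequence, assuming the unmanaged ODP.NET provider (conn stands in for your open connection):

Imports Oracle.DataAccess.Client

conn.Close()                       ' returns the connection to the pool, still open
OracleConnection.ClearPool(conn)   ' discards all pooled connections for this pool
' OracleConnection.ClearAllPools() ' or, more drastically, flush every pool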

ASP.NET Connection loss handling

How would you go about handling data loss caused by a SQL connection drop in an ASP.NET application?
Let's say you're running an algorithm adding and removing certain roles. Midway through it, the connection to the SQL database is lost. And because the connection is gone, you won't even be able to backtrack the steps already done. The whole state is lost, leaving the database in an erroneous condition.
Would you set IIS Rapid-Fail Protection to shut the site down upon one exception and manually force the function to run again (after the connection has been fixed)?
Or what is the professional way of handling it? I am quite new to this; maybe there's something I do not know of (such as IIS retrying it or caching).
(Using Entity Framework)
This is not a coding problem in its own right; it is more a question of best practice for handling data loss with a SQL database on ASP.NET.
You need to do batch SQL operations inside a SQL transaction, so that whatever the error, a rollback happens. This is a built-in SQL feature and nothing special needs to be done.
Once you start a SQL transaction, a commit is issued only when all operations succeed. The default behavior is normally a rollback in all other, non-success scenarios.
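A minimal sketch of the idea with TransactionScope; the role operations and the context are placeholders for your own Entity Framework code:

Imports System.Transactions

Using scope As New TransactionScope()
    ' Hypothetical role changes - replace with your own EF operations.
    AddRoleToUser(userId, roleId)
    RemoveRoleFromUser(userId, oldRoleId)
    context.SaveChanges()

    scope.Complete() ' commit only if every step above succeeded
End Using ' if Complete() was never called (exception, lost connection), Dispose rolls back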
If you're encountering issues with any specific logic, post the code snippet and we'll be glad to help.

Occasional System.ArgumentNullException using TransactionScope and MS DTC

I'm occasionally getting this exception on our production server:
System.ArgumentNullException: Value cannot be null.
at System.Threading.Monitor.Enter(Object obj)
at System.Data.ProviderBase.DbConnectionPool.TransactedConnectionPool.TransactionEnded(Transaction transaction, DbConnectionInternal transactedObject)
at System.Data.SqlClient.SqlDelegatedTransaction.SinglePhaseCommit(SinglePhaseEnlistment enlistment)
at System.Transactions.TransactionStateDelegatedCommitting.EnterState(InternalTransaction tx)
at System.Transactions.CommittableTransaction.Commit()
at System.Transactions.TransactionScope.InternalDispose()
at System.Transactions.TransactionScope.Dispose()
//... continues here with references to my DAL code
What is the reason for this exception to happen?
I have already done some research on this but with no concrete success yet. I also read these questions here:
Intermittent System.ArgumentNullException using TransactionScope
TransactionScope automatically escalating to MSDTC on some machines?
And now I know that if I could avoid escalating my transactions to DTC I would get rid of this problem. But what if I can't? I have multiple databases to update or read from in one transaction, so I have to use DTC. I'm getting this error occasionally, on actions that usually work well.
Technical background
It is an ASP.NET MVC 2 and LINQ to SQL application on .NET 3.5.
We have three virtual machines with load balancing based on IP address, each running IIS7.
A single virtual machine with SQL Server 2008 - it is shared by the web servers.
I should point out that I was not able to reproduce this exception on my development machine (development server + SQL Express 2008) or on our testing machine (a virtual machine with a single IIS7 and SQL Server 2008 together).
I suspect our production servers' configuration, i.e. that there is some threading/processing issue (like two processes trying to use the same connection).
UPDATE
I have found another link. It states that the ADO.NET connection dispose bug is probably back. But it is a pity there is no resolution in the end, and I have found nobody else describing a similar issue.
http://social.msdn.microsoft.com/Forums/nl-BE/adodotnetdataproviders/thread/388a7965-9385-4f5c-a261-1894aa73c16e
According to http://support.microsoft.com/kb/960754, there is an issue with version 2.0.50727.4016 of System.Data.dll.
If your server has this older version, I would try to get the updated one from Microsoft.
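As a quick diagnostic sketch, you can print which System.Data.dll a server actually loads:

' Print the location and file version of the loaded System.Data.dll.
Dim asm = GetType(System.Data.SqlClient.SqlConnection).Assembly
Dim info = System.Diagnostics.FileVersionInfo.GetVersionInfo(asm.Location)
Console.WriteLine(asm.Location & " -> " & info.FileVersion)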
It looks like a bug, as it's .NET internal code, nothing to do with your own code.
If you take a look with Reflector (or any other IL tool) at the internal TransactedConnectionPool.TransactionEnded method, you will see its implementation changed between .NET 3 and .NET 4... I suppose it was not thread-safe back then. You could try to report it to Microsoft Connect.
According to the MSDN documentation for System.Transactions.TransactionScope, the method is synchronous and therefore uses the monitor. The documentation doesn't say that the method is thread-safe, so I think it's likely that you are somehow calling Dispose on the same TransactionScope object from more than one thread. You can use the static property System.Transactions.Transaction.Current to find out which transaction you are referring to. Perhaps a log message prior to disposing your transaction scope might expose where this is occurring...
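Something along these lines, where scope stands in for your TransactionScope (a logging sketch, not production code):

Imports System.Transactions

' Log which transaction and thread are about to dispose the scope.
Dim tx = Transaction.Current
If tx IsNot Nothing Then
    Diagnostics.Trace.WriteLine("Disposing scope for tx " & tx.TransactionInformation.LocalIdentifier & _
        " on thread " & Threading.Thread.CurrentThread.ManagedThreadId)
End If
scope.Dispose()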
If it's not a threading issue, then odds are you've found a corner case that trips a bug in .NET. I've found the behaviour of MSDTC when things go wrong to be unpleasant at best.

AS/400 Performance from the .NET iSeries Provider

First off, I am not an AS/400 guy - at all. So please forgive me for asking any noobish questions here.
Basically, I am working on a .NET application that needs to access the AS/400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the first request against a SPROC on the AS/400, it takes ~14 seconds to get the full data set. After that initial call, any subsequent calls usually take only ~1 second to return. This performance improvement remains for ~20 minutes or so before it takes 14 seconds again.
The interesting part is that if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time).
I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS/400, which is not always a match.
Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS/400 is acting this way when accessed through the iSeries Data Provider for .NET? Is there a better access method I should use?
Just in case, here's the code I am using to connect to the AS/400:
Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
Cmd.CommandType = CommandType.StoredProcedure
Using Conn
    Conn.Open()
    Dim Reader = Cmd.ExecuteReader()
    Using Reader
        While Reader.Read()
            'Do Something
        End While
        Reader.Close()
    End Using
    Conn.Close()
End Using
EDIT: after looking into this a bit more and reading some of the comments below, I am beginning to wonder if I am experiencing this due to the gains from connection pooling? Thoughts?
I've found the Redbook "Integrating DB2 Universal Database for iSeries with Microsoft ADO.NET" useful for diagnosing issues like these.
Specifically, look into the client- and server-side traces to help isolate the issue. And don't be afraid to call IBM software support; they can help you set up profiling to figure out the issue.
You may want to try a different driver to connect to the AS/400 DB2 system. I have used two options:
1) the standard jt400.jar driver, to create a simple Java web service to get my data
2) the drivers from the company HiT Software (www.hitsw.com)
Obviously the first option would be the slower of the two, but that's the free way of doing things.
Each connection to the iSeries is backed by a job. Upon the first connection, the iSeries driver needs to create the connection pool, start a job, and associate that job with the connection object. When you close a connection, the driver will return that object to the connection pool, but will not end the job on the server.
You can turn on tracing to determine what is happening on your first connection attempt. To do so, add "Trace=StartDebug" to your connection string, and enable trace logging on the box that is running your code. You can do this by using the 'cwbmptrc' tool in the Client Access program directory:
c:\Program Files (x86)\IBM\Client Access>cwbmptrc.exe +a
Error logging is on
Tracing is on
Error log file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Log.txt
Trace file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Trace.txt
The trace output will give you insight into what operations the driver is performing and how long each operation takes to complete. Just don't forget to turn tracing off once you are done (cwbmptrc.exe -a)
If you don't want to mess with the tracing, a quick test to determine whether connection pooling is behind the delay is to disable it by adding "Pooling=false" to your connection string. If connection pooling is the reason that your 2nd attempt is much faster, disabling it should make each request perform as slowly as the first.
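For example (a sketch; the server name and credentials below are placeholders):

' The same connection, with pooling disabled for the test.
Dim cs = "DataSource=MY400;UserID=MYUSER;Password=MYPWD;Pooling=false"
Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(cs)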
I have seen similar performance from iSeries SQL (ODBC) queries for several years. I think it's part of the nature of the beast: OS/400 dynamically moves data from disk to memory when it's accessed.
FWIW, I've also noticed that the iSeries is more like a tractor than a race car. It deals much better with big loads. In one case, I consolidated about a dozen short queries into a single monstrous one, and reduced the execution time from something like 20 seconds to about 2 seconds.
I have had to pull data from the AS/400 in the past; basically, a couple of things worked for me:
1) Dump the data into a SQL Server table nightly, where I could control the indexes; the native SqlClient beats the IBM DB2 .NET client every time.
2) Talk to one of your AS/400 programmers and make sure the command you are using hits a logical file as opposed to a physical one (logical vs. physical in their world is akin to our views vs. tables).
3) Create views using a linked server on SQL Server and query your views.
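As a sketch of option 3, once a linked server is set up on the SQL Server side you can query it from .NET like any other SQL Server object; the linked server name, library and file below are hypothetical:

Imports System.Data.SqlClient

Using conn As New SqlConnection(sqlServerConnectionString) ' your SQL Server, not the AS/400
    Dim sql = "SELECT * FROM OPENQUERY(AS400LINK, 'SELECT * FROM MYLIB.MYLOGICAL')"
    Using cmd As New SqlCommand(sql, conn)
        conn.Open()
        Using reader = cmd.ExecuteReader()
            While reader.Read()
                ' Do Something
            End While
        End Using
    End Using
End Using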
I have observed the same behavior when connecting to iSeries data from Java solutions hosted on WebSphere Application Server (WAS) as well as .NET solutions hosted on IIS. The first call of the day is always more "expensive" than the second.
The delay on the first call is caused by the "setup" time the iSeries needs to create the job that services the request (the job name is QZDASOINIT, in subsystem QUSRWRK). Subsequent calls reuse existing jobs that stay active waiting for more incoming requests.
The number of QZDASOINIT jobs and how long they stay active are configurable on the iSeries.
One document on how to tune your prestart job entries:
http://www.ibmsystemsmag.com/ibmi/tipstechniques/systemsmanagement/Tuning-Prestart-Job-Entries/
I guess it would be a reasonable assumption that there is also some overhead to the "first call of the day" on both WAS and IIS.
Try creating a stored procedure. This will create and cache your access plan with the stored procedure, so the optimizer doesn't have to look in the SQL cache or reoptimize.

Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction

Just curious if anyone else has gotten this particular error and knows how to solve it?
The scenario is as follow...
We have an ASP.NET web application using Enterprise Library, running on a Windows Server 2008 IIS farm and connecting to a SQL Server 2008 cluster back end.
MSDTC is turned on. DB connections are pooled.
My suspicion is that somewhere along the line there is a failed MSDTC transaction, the connection got returned to the pool, and the next query on a different page picked up the misbehaving connection and got this particular error. The funny thing is we got this error on a query that has no need whatsoever for a distributed transaction (committing to two databases, etc.). We were only doing a select query (no transaction) when we got the error.
We did SQL profiling and the query ran on the SQL Server, but never came back (since the MSDTC transaction had already been aborted on the connection).
Some other related errors to accompany this are:
New request is not allowed to start because it should come with valid transaction descriptor.
Internal .Net Framework Data Provider error 60.
MSDTC has a default 90-second timeout; if a query's execution exceeds this limit, you will encounter this error when the transaction tries to commit.
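If the longer-running work is legitimate, you can pass an explicit timeout when creating the scope (a sketch; note that the maxTimeout setting in machine.config still caps whatever you pass here):

Imports System.Transactions

' Allow this scope up to 5 minutes before the transaction aborts.
Using scope As New TransactionScope(TransactionScopeOption.Required, TimeSpan.FromMinutes(5))
    ' ... work ...
    scope.Complete()
End Using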
A bounty may help you get the answer you seek, but you're probably going to get better answers if you give some code samples and a better description of when the error occurs.
Does the error occur only intermittently? It sounds like it from your description.
Are you enclosing the code that you want done as a transaction in a using TransactionScope block, as Microsoft recommends? This should help avoid weird transaction behavior. Recall that a using block makes sure that the object is always disposed, regardless of exceptions thrown. See here: http://msdn.microsoft.com/en-us/library/ms172152.aspx
If you're using TransactionScope, there is an argument, System.Transactions.TransactionScopeOption.RequiresNew, that tells the framework to always create a new transaction for this block of code:
Using ts As New Transactions.TransactionScope(Transactions.TransactionScopeOption.RequiresNew)
    ' Do Stuff
End Using
Also, if you suspect that a connection is getting faulted and then put back into the connection pool, the likely solution is to enclose the code that may fault the connection in a Try-Catch block and dispose of the connection in the Catch block.
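A sketch of that pattern; here the Dispose sits in a Finally so it covers the Catch path too, and the ClearPool call is optional, for when you suspect the pool itself is holding bad connections:

Imports System.Data.SqlClient

Dim conn As New SqlConnection(connectionString)
Try
    conn.Open()
    ' ... work that may fault the connection ...
Catch ex As SqlException
    SqlConnection.ClearPool(conn) ' drop the pool's copies of this connection
    Throw
Finally
    conn.Dispose() ' always release the connection, even after a failure
End Try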
Old question... but I ran into this issue over the past few days.
I could not find a good answer until now, so I just wanted to share what I found out.
My scenario involves multiple sessions being opened by multiple session factories. I had to correctly roll back, wait, and make sure the other transactions were no longer active. It seems that just rolling back one of them rolls back everything.
But after adding a Thread.Sleep() between the rollbacks, it doesn't roll back the other and continues fine. Subsequent hits that trigger the method no longer result in the "New request is not allowed to start because it should come with valid transaction descriptor." error.
https://gist.github.com/josephvano/5766488
I have seen this before, and the cause was exactly what you thought. As Rice suggested, make sure that you are correctly disposing of the DB-related objects to avoid this problem.
