I inherited an ASP site whose ASP pages liberally include a large config file that, among many other things, opens two ADODB connections to MS SQL servers via:
SET Connection = Server.CreateObject("ADODB.Connection")
Connection.Open "Provider = sqloledb;" & _
It appears that on many pages these connections are never closed (though it's hard to be certain because the code is a maze of ASP #includes). In contrast, on some pages there is a very clear <% CONNECTION.CLOSE %>.
What is the consequence of failing to close these connections by the end of a GET?
I've got a really, really strange ASP.NET intermittent session-persistence problem. First, some facts:
Environment:
F5 load balancer with sticky sessions enabled
Web servers: 2 × Windows Server 2008 R2, .NET Framework 2.0.50727, IIS 7
Database server: SQL Server 2008 R2
Session protocol: ASP.NET InProc sessions, timeout set to 20 mins, cookieless=false
CMS: Ektron CMS400.NET
OK, on with the problem.
We have some custom VB.NET session values which get set in our website for a shopping cart. From time to time SOME of these variables seem to lose persistence. I say some because the main session remains persistent. I think the main culprit is this fellow:
Shared Sub New()
    ' If the cart is not in the session, create one and put it there.
    ' Otherwise, get it from the session.
    If HttpContext.Current.Session("ASPNETShoppingCart") Is Nothing Then
        Instance = New ShoppingCart()
        Instance.Items = New List(Of CartItem)
        HttpContext.Current.Session("ASPNETShoppingCart") = Instance
    Else
        Instance = CType(HttpContext.Current.Session("ASPNETShoppingCart"), ShoppingCart)
    End If
End Sub
Even when this session object remains persistent, it doesn't show up when I run a trace:
e.g.
my other session variables like Session("userId") show up in a trace, while I NEVER see a value for Session("ASPNETShoppingCart") - which I thought was strange - perhaps because it is an object as opposed to a string?
I've ruled out issues with the load balancer by bypassing it and calling the code directly on the server.
I know the full session is not being destroyed when the issue occurs because I'm still logged in and other session values appear on screen - but my shopping cart values do not.
Has anyone any idea as to what might cause something like this? Is there any way of protecting a session variable or at least being notified if it changes state so I can trace what's causing this?
Indeed, it is strange that "some" variables get lost and some don't.
Maybe you have some sort of "fallback" mechanism that recreates the "UserID" (from an authentication cookie, for example) and the same doesn't happen for other objects.
In general, I'd avoid using InProc session state at all costs, because whenever the ASP.NET worker process gets recycled - and this can happen at any time - everything in the Session will be lost. Since you use sticky sessions, my suggestion is that you switch to OutOfProc mode using a local StateServer. You can read more here. It's very simple to configure: just a change in the Web.config, and making sure that the ASP.NET State Service is started on the server.
The only caveat is that all your objects need to be serializable, but if you are not storing anything exotic in Session, all you need to do is decorate your classes with the [Serializable] attribute.
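For reference, a minimal sketch of that Web.config change (localhost and the State Service's default port 42424 are assumptions; adjust for your setup):

```xml
<configuration>
  <system.web>
    <!-- Store session state out-of-process in the ASP.NET State Service -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=localhost:42424"
                  timeout="20"
                  cookieless="false" />
  </system.web>
</configuration>
```

In VB.NET, marking the cart class serializable is just `<Serializable()> Public Class ShoppingCart`.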
I have an ASP.Net Application that uses a SQL Server database. I am also using ODBC to make the connection (see below). Then I load controls (many of them) with queries.
Is this the correct way to do this?
Also, I need to do most of these programmatically, not at design time.
Sub Session_Start(ByVal sender As Object, ByVal e As EventArgs)
    ' Fires when the session is started
    Session("ConnString") = "DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=True;DATABASE=mydatabase"
    Session("MyConnection") = New Odbc.OdbcConnection(Session("ConnString"))
End Sub
I don't think that saving connection objects to Session is good practice (see below why).
Can't you just save the connection string in Session and re-create the SQL Server connection on Page_Load?
SQL connections should normally live at most as long as your request lifetime, preferably shorter. You should close a SQL connection as soon as you don't need it anymore.
It's bad practice to keep one open during the entire session, as this will make your connection pool run out of available connections very fast.
Can you please explain your question a bit better?
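A minimal sketch of the re-create-on-Page_Load approach, assuming the connection string is still seeded in Session_Start (the table and query here are placeholders):

```vbnet
Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    ' Re-create the connection from the string stored in Session; the Using
    ' blocks guarantee everything is closed and disposed, even on exceptions.
    Using conn As New Odbc.OdbcConnection(CStr(Session("ConnString")))
        conn.Open()
        Using cmd As New Odbc.OdbcCommand("SELECT Name FROM Customers", conn)
            Using rdr As Odbc.OdbcDataReader = cmd.ExecuteReader()
                While rdr.Read()
                    ' ... load your controls here
                End While
            End Using
        End Using
    End Using ' the underlying connection goes back to the ODBC pool here
End Sub
```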
This is quite a big, quite badly coded ASP.NET website that I am currently tasked with maintaining. One issue that I'm not sure how to solve is that at seemingly random times the live site locks up completely; plain page access is OK, but anything that touches the database causes the application to hang indefinitely.
The cause seems to be many more open connections to the database than you would expect of a fairly low-traffic website. Activity Monitor shows 150+ open connections, most with an Application value of '.NET SqlClient Data Provider', under the network service login.
A quick fix is to restart the SQL Server service (I've also recycled the ASP.NET app pool, just to ensure that the application lets go of everything and to stop any code from re-attempting to open connections in case there is some looping process I'm unaware of). This doesn't, however, help me solve the problem.
The application uses a Microsoft SQLHelper class, which is a pretty common library, so I'm fairly confident that the code that uses this class will have connections closed where required.
I have, however, spotted a few DataReaders that are not closed properly. I think I'm right in saying that an unclosed DataReader can keep the underlying connection in use, because it is a connected class (correct me if I'm wrong).
Something that is peculiar is that one of the admins restarted the server (not the database server, the actual web server) and immediately the site would hang again. The culprit was again 150+ open database connections.
Does anybody have any diagnostic technique that they can share with me for working out where this is happening?
Update: SQL Server Log files show many entries like this (30+)
2010-10-15 13:28:53.15 spid54 Starting up database 'test_db'.
I'm wondering if the server is getting hit by an attacker. That would explain the many connections right after boot, and at seemingly random times.
Update: Have changed the AutoClose property, though still hunting for a solution!
Update 2: See my answer to this question for the solution!
Update: Lots and lots of "Starting up database" entries: set the AutoClose property to false (REF).
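In T-SQL that's a one-liner (using the database name from the log entries above):

```sql
-- With AUTO_CLOSE ON, SQL Server shuts the database down when the last
-- connection closes and has to "start it up" again on the next one,
-- which is what produces the repeated "Starting up database" log entries.
ALTER DATABASE test_db SET AUTO_CLOSE OFF;
```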
You are correct about your DataReaders: make sure you close them. However, I have experienced many problems with connections spawning out of control even when connections were closed properly. Connection pooling didn't seem to be working as expected, since each post-back created a new SqlConnection. To avoid this seemingly unneeded re-creation of the connection, I adopted a Singleton approach in my DAL. So I create a single DataAdapter and send all my data requests through it. Although I've been told that this is unwise, I have not received any support for that claim (and am still eager to read any documentation/opinion to this effect: I want to get better, not be status quo). I have a DataAdapter class for you to consider if you like.
If you are in SQL 2005+, you should be able to use Activity Monitor to see the "Details" of each connection which sometimes gives you the last statement executed. Perhaps this will help you track the statement back to some place in code.
I would recommend downloading http://sites.google.com/site/sqlprofiler/ to see what queries are happening, and sort of work backwards from there. Good luck man!
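If you prefer querying over clicking, roughly the same information Activity Monitor shows is available from the dynamic management views (SQL Server 2005+; the column choice here is just a suggestion):

```sql
-- One row per client connection, with the last statement each one ran,
-- so you can trace a leaked connection back to a place in code.
SELECT s.session_id,
       s.login_name,
       s.program_name,
       c.connect_time,
       t.text AS last_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.is_user_process = 1
ORDER BY c.connect_time;
```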
Many types such as DbConnection, DbCommand and DbDataReader and their derived types (SqlConnection, SqlCommand and SqlDataReader) implement the IDisposable interface/pattern.
Without regurgitating what has already been written on the subject, you can take advantage of this mechanism via the using statement.
As a rule of thumb you should always try to wrap your DbConnection, DbCommand and DbDataReader objects in a using statement, which will generate MSIL to call IDisposable's Dispose method. Usually the Dispose method contains code to clean up unmanaged resources, such as closing database connections.
For example:
using (SqlConnection cn = new SqlConnection(connString))
{
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM MyTable", cn))
    {
        cmd.CommandType = CommandType.Text;
        cn.Open();
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // ... do stuff etc.
            }
        }
    }
}
This will ensure that the Dispose methods are called on all of the objects immediately after use. For example, the Dispose method of SqlConnection closes the underlying database connection right away, instead of leaving it around until the next garbage collection run.
Little changes like this improve your application's ability to scale under heavy load. As you already know: "Acquire expensive resources late and release them early". The using statement is a nice bit of syntactic sugar to help you out.
If you're using VB.NET 2.0 or later it has the same construct:
Using Statement (Visual Basic) (MSDN)
using Statement (C# Reference) (MSDN)
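For completeness, the VB.NET equivalent of the C# example above (same placeholder query) looks roughly like this:

```vbnet
Using cn As New SqlConnection(connString)
    Using cmd As New SqlCommand("SELECT * FROM MyTable", cn)
        cmd.CommandType = CommandType.Text
        cn.Open()
        Using rdr As SqlDataReader = cmd.ExecuteReader()
            While rdr.Read()
                ' ... do stuff etc.
            End While
        End Using
    End Using
End Using ' Dispose (and therefore Close) is called here, even on exceptions
```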
This issue came up again today and I managed to solve it quite easily, so I thought I'd post back here.
In this case, you can assume that the connection spamming is coming from the same code block, so to find the code block, I opened activity monitor and checked the details for some of the many open connections (Right click > details).
This showed me the offending SQL, which you can search for in your application code / stored procedures.
Once I'd found it, it was as I suspected, an unclosed data reader. Problem is now solved.
First off, I am not an AS 400 guy - at all. So please forgive me for asking any noobish questions here.
Basically, I am working on a .NET application that needs to access the AS400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the 1st request against a SPROC on the AS400, it takes ~14 seconds to get the full data set. After that initial call, subsequent calls usually take only ~1 second to return. This performance improvement lasts for 20 mins or so before it takes 14 seconds again.
The interesting part with this is that, if the stored procedure is executed directly on the iSeries Navigator, it always returns within milliseconds (no change in response time).
I wonder if it is a caching / execution plan issue but I can only apply my SQL SERVER logic to the AS400, which is not always a match.
Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS400 is acting in this manner when I use the iSeries Data Provider for .NET? Is there a better access method that I should use?
Just in case, here's the code I am using to connect to the AS400
Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
Cmd.CommandType = CommandType.StoredProcedure

Using Conn
    Conn.Open()
    Dim Reader = Cmd.ExecuteReader()
    Using Reader
        While Reader.Read()
            'Do Something
        End While
        Reader.Close()
    End Using
    Conn.Close()
End Using
EDIT: after looking into this a bit, and using some of the comments below, I am beginning to wonder if the fast subsequent calls are due to gains from connection pooling. Thoughts?
I've found the Redbook Integrating DB2 Universal Database for iSeries with Microsoft ADO .NET useful for diagnosing issues like these.
Specifically look into the client and server side traces to help isolate the issue. And don't be afraid to call IBM software support. They can help you set up profiling to figure out the issue.
You may want to try a different driver to connect to the AS400/DB2 system. I have used 2 options:
1) the standard jt400.jar driver, to create a simple Java web service to get my data
2) the drivers from the company called HiT Software (www.hitsw.com)
Obviously the first option would be the slower of the two, but that's the free way of doing things.
Each connection to the iSeries is backed by a job. Upon the first connection, the iSeries driver needs to create the connection pool, start a job, and associate that job with the connection object. When you close a connection, the driver will return that object to the connection pool, but will not end the job on the server.
You can turn on tracing to determine what is happening on your first connection attempt. To do so, add "Trace=StartDebug" to your connection string, and enable trace logging on the box that is running your code. You can do this by using the 'cwbmptrc' tool in the Client Access program directory:
c:\Program Files (x86)\IBM\Client Access>cwbmptrc.exe +a
Error logging is on
Tracing is on
Error log file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Log.txt
Trace file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Trace.txt
The trace output will give you insight into what operations the driver is performing and how long each operation takes to complete. Just don't forget to turn tracing off once you are done (cwbmptrc.exe -a)
If you don't want to mess with tracing, a quick test to determine whether connection pooling is behind the delay is to disable it by adding "Pooling=false" to your connection string. If connection pooling is the reason your 2nd attempt is much faster, disabling it should make each request perform as slowly as the first.
I have seen similar performance from iSeries SQL (ODBC) queries for several years. I think it's part of the nature of the beast-- OS/400 dynamically moves data from disk to memory when it's accessed.
FWIW, I've also noticed that the iSeries is more like a tractor than a race car. It deals much better with big loads. In one case, I consolidated about a dozen short queries into a single monstrous one, and reduced the execution time from something like 20 seconds to about 2 seconds.
I have had to pull data from the AS/400 in the past; basically there were a few things that worked for me:
1) Dump data into a SQL Server table nightly where I could control the indexes, the native SqlClient beats the IBM DB2 .NET Client every time
2) Talk to one of your AS400 programmers and make sure the command you are using hits a logical file as opposed to a physical one (in their world a physical file is akin to one of our tables, and a logical file to a view or index)
3) Create Views using a Linked Server on SQL server and query your views.
I have observed the same behavior when connecting to iSeries data from Java solutions hosted on WebSphere Application Server (WAS) as well as .NET solutions hosted on IIS. The first call of the day is always more "expensive" than the second.
The delay on the first call is caused by the "setup" time the iSeries needs to create the job that services the request (the job name is QZDASOINIT, in subsystem QUSRWRK). Subsequent calls reuse the existing jobs, which stay active waiting for more incoming requests.
The number of QZDASOINIT jobs and how long they stay active is configurable on the iSeries.
One document on how to tune your prestart job entries:
http://www.ibmsystemsmag.com/ibmi/tipstechniques/systemsmanagement/Tuning-Prestart-Job-Entries/
I guess it would be a reasonable assumption that there is also some overhead to the "first call of the day" on both WAS and IIS.
Try creating a stored procedure. This will create and cache your access plan with the stored procedure, so the optimizer doesn't have to look in the SQL cache or re-optimize.
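A minimal sketch of such a procedure on DB2 for i (the library, procedure, table and column names here are all placeholders):

```sql
-- The access plan is built and cached with the procedure, so repeated
-- calls avoid a full re-optimization of the raw SQL.
CREATE PROCEDURE MYLIB.GET_ORDERS (IN P_CUSTID INTEGER)
    LANGUAGE SQL
    DYNAMIC RESULT SETS 1
BEGIN
    DECLARE C1 CURSOR WITH RETURN FOR
        SELECT ORDER_ID, ORDER_DATE
        FROM MYLIB.ORDERS
        WHERE CUST_ID = P_CUSTID;
    OPEN C1;
END;
```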
Suppose that I have an ASP.NET page. In the page load event handler, I open a database connection and do some processing. But after the processing is done, I don't close the connection explicitly by calling the CLOSE method of the connection object.
Now when the page processing at the server side is finished, the GC will eventually collect all the variables in my page, including the connection object. But when it is collected, is the connection that was opened previously automatically closed? I mean, when the GC finalizes the connection object, does it automatically close the connection that was established with the database server, or does it simply reclaim the object, leaving the connection at the database open until the connection timeout occurs and the database server closes it by itself?
The MSDN documentation is pretty clear about this:
"If the SqlConnection goes out of scope, it won't be closed. Therefore, you must explicitly close the connection by calling Close or Dispose. Close and Dispose are functionally equivalent."
Either use the using blocks to have it disposed automatically, or explicitly .Close() it. The using blocks are preferred.
By leaving connections open your application may eventually run out of connections when new requests are attempted, resulting in errors. I've faced such a problem in an application I was debugging. The original developers failed to close the connections explicitly on a few pages and traffic was high enough that users started getting errors. I wrapped the offending connections in a using block and the problem went away.
The connection remains open. If you have lots of page views, and lots of open connections, you can get 500 errors.
Your connections won't be closed until after (not when) your page object is finalized, and that might be a while. It would be very easy to max out the number of available connections and start getting errors.
You should use using blocks, then you won't have to ask the question:
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = new SqlCommand(commandText, conn))
    {
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* ... */ }
        }
    }
}
Your connection won't be closed until the object is collected by the GC, and if you have many visitors, resulting in lots of connections, this may be horrible. Also, if you try to open an already open connection it will throw an error, so you have to check for that. Better to either write it in a using block or close the connection yourself.