I have this .NET application where each client has their own database and thus their own connection string. The issue I run into is that when multiple clients are logged in to the application, the connection to the databases is lost, which gives an internal server error (or something similar, depending on the process).
I am not sure what is causing this, or whether I should increase the Max Pool Size or try something else.
I have put the connection.Open() inside a using block, but that did not solve the issue.
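For reference, the pattern looks roughly like this (a simplified sketch; GetConnectionStringForClient and the Orders table are made-up stand-ins for my actual code):

using System.Data.SqlClient;

// Each client's connection string is looked up per request and the
// connection lives inside a using block, so it is disposed after use.
string connString = GetConnectionStringForClient(clientId); // hypothetical lookup

using (var conn = new SqlConnection(connString))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
{
    conn.Open();
    int count = (int)cmd.ExecuteScalar();
} // Dispose() closes the connection / returns it to that client's pool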
I have tried to find a suitable solution but have had no luck, so I hope the braintrust can help me. Many thanks.
I've built an application that does a lot of processing of user data (they can load in their data, map the variables, run analyses, review a dashboard, and download the results/report). It's a pretty heavy application, and I'm running into an issue that I'm not sure how best to solve.
The problem is that sometimes the session will unexpectedly disconnect from the PostgreSQL database. This causes problems because just about every corner of the application depends on retrieving or sending information; the app basically doesn't work at all without the connection. What's even worse is that the UI doesn't really inform the user of the problem, it kind of just sits there all lazy-like.
The application exists on an EC2 instance within a Docker container, served through an HTTPS proxy (Caddy) to the public via a registered domain name. Each new session searches for a global pool connection, and if one does not exist, it creates one, then checks out a local connection and passes that into all the downstream modules.
I'm wondering how others have addressed this problem. Should I:
1. use a global pool, then check out a single connection and test for a severed connection at the start of each function? This is my current (unfinished) approach and seems not great.
2. search for a pool connection and check out a connection at the start of each function, then return it at the end (see the sketch after this list)? This would take a bit of time to implement (and test), but seems like a reasonable solution.
3. check for a connection every minute and, if one doesn't exist, create one? I'm guessing this would need to happen in each module independently.
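For what it's worth, here is roughly what I imagine option 2 looking like, sketched in C# with Npgsql just to make the shape concrete (my_table is a made-up name; the idea should port to any driver with a built-in pool):

using System;
using Npgsql;

// Every function checks a connection out of the pool on entry and returns
// it on exit via using/Dispose. Open() is cheap because the pool hands back
// a live physical connection; a connection severed while idle is the pool's
// problem to replace, not each function's.
int CountRows(string connString)
{
    using (var conn = new NpgsqlConnection(connString))
    using (var cmd = new NpgsqlCommand("SELECT count(*) FROM my_table", conn))
    {
        conn.Open(); // checkout from the pool
        return Convert.ToInt32(cmd.ExecuteScalar());
    } // Dispose returns the connection to the pool
}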
Any direction will be greatly appreciated.
Thanks,
Setting: ASP.NET application with an Oracle backend; we use User Defined Types (UDTs) and ODP.NET to communicate them between the front and back ends.
Problem: I had to alter the attribute length of one of my UDTs. Once I did that and tested it in the backend it worked fine, but when I run my site I keep getting the ORA-22337 error (in the subject line)!
You will not find much if you research this problem online; other than the unhelpful Oracle error documentation, there is nothing. The Oracle documentation says to close and reopen the connection, but that does not apply to my scenario.
I already solved the problem by dropping and recreating the UDTs and NTs, but it is inefficient to have to do that every time I need to modify one of my core UDTs. Any ideas how to solve this without dropping and recreating everything?
If the error text gives "Close and reopen the connection" as the solution and you are using an OracleConnection, which has a connection pool behind it, then simply Close()ing the connection is not good enough. It will just go back to the pool still open, and when you "reconnect" you will simply get it back again. You need to close all open connections and then call ClearPool() to make sure that all the old connections in the pool are removed.
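A minimal sketch of that sequence with ODP.NET (the connection string is a placeholder; this assumes the managed provider, Oracle.ManagedDataAccess):

using Oracle.ManagedDataAccess.Client;

// After altering a UDT, evict the stale pooled connections so the next
// Open() creates a fresh physical connection that sees the new metadata.
using (var conn = new OracleConnection(connString))
{
    conn.Open();
    conn.Close();                     // goes back to the pool still open...
    OracleConnection.ClearPool(conn); // ...so clear that pool explicitly
}

// Or, to be thorough across every pool in the application:
OracleConnection.ClearAllPools();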
Ok, so the background is this.
I have created a hardware controller for a fingerprint reader, and a web application that allows users who have scanned in to do things in the web application. The web application was created using Code First, and the communication is done through SignalR 2.0. The problem I am having is this: everything works beautifully for about a day. It used to be about half a day, but in IIS 7.0 I changed the idle time-out on the application pool to 200 minutes, so I have managed to extend the amount of time it stays running; I am still getting an error at random times on the web server, though. What confuses me, and why I cannot seem to get a handle on what is happening, is that when it does go down:
A) I do not know why (I am leaning towards a timeout somewhere).
B) The error message is the same one you get when you make a change to the database structure and forget to run Update-Database from the Package Manager Console, yet no one is changing the database.
C) If you leave it alone it will fix itself, and I do not know why or how.
Has anyone seen behavior like this? If so, what caused it and how did you fix it? Or can anyone suggest how I can debug this?
Thanks so much for any help!
Kelso
If the exception is "The model backing the 'YourContext' context has changed since the database was created. Consider using Code First Migrations to update the database", you could try to catch that exception, log the return value of the following method, and compare it to what the method returns in Application_Start (or whenever it last worked for you):
((System.Data.Entity.DbContext)(context)).InternalContext.QueryForModel(0)
The method gives you an XML representation of your database schema.
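Note that InternalContext is an internal member, so the line above only works in the debugger's immediate window; from compiled code you would have to go through reflection. A rough sketch under that assumption (the member names are taken from the call above and may differ between EF versions):

using System;
using System.Reflection;
using System.Data.Entity;

static object QueryModelXml(DbContext context)
{
    // DbContext.InternalContext is internal, so fetch it via reflection.
    object internalCtx = typeof(DbContext)
        .GetProperty("InternalContext", BindingFlags.NonPublic | BindingFlags.Instance)
        .GetValue(context, null);

    // Equivalent of InternalContext.QueryForModel(0): convert 0 to the
    // method's enum parameter type before invoking.
    MethodInfo method = internalCtx.GetType().GetMethod(
        "QueryForModel",
        BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
    object arg = Enum.ToObject(method.GetParameters()[0].ParameterType, 0);
    return method.Invoke(internalCtx, new[] { arg });
}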
Just to update on this issue: it turns out that the IIS server had been set to only a single CPU and a single thread (a VMware setting), and that thread was getting hung and could not create a new thread to continue processing. Once we updated the CPUs and increased the thread count to 5, everything works like a dream.
This is quite a big, quite badly coded ASP.NET website that I am currently tasked with maintaining. One issue that I'm not sure how to solve is that at seemingly random times the live website will lock up completely; page access is OK, but anything that touches the database causes the application to hang indefinitely.
The cause seems to be many more open connections to the database than you would expect of a lowish-traffic website. Activity Monitor shows 150+ open connections, most with an Application value of '.NET SqlClient Data Provider', under the network service login.
A quick fix is to restart the SQL Server service (I've recycled the ASP.NET app pool as well, just to ensure that the application lets go of everything and to stop any code from re-attempting to open connections, in case there is some sort of looping process I'm unaware of). This doesn't, however, help me solve the problem.
The application uses a Microsoft SQLHelper class, which is a pretty common library, so I'm fairly confident that the code that uses this class closes connections where required.
I have, however, spotted a few DataReaders that are not closed properly. I think I'm right in saying that a DataReader can keep the underlying connection open even after that connection has been closed, because it is a connected class (correct me if I'm wrong).
Something that is peculiar: one of the admins restarted the server (not the database server, the actual web server) and immediately the site would hang again. The culprit was again 150+ open database connections.
Does anybody have any diagnostic technique that they can share with me for working out where this is happening?
Update: the SQL Server log files show many entries (30+) like this:
2010-10-15 13:28:53.15 spid54 Starting up database 'test_db'.
I'm wondering if the server is getting hit by an attacker. That would explain the many connections right after boot, and at seemingly random times.
Update: I have changed the AutoClose property, though I am still hunting for a solution!
Update 2: See my answer to this question for the solution!
Update: lots and lots of "Starting up database" entries: set the AutoClose property to false (REF).
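If you'd rather script that change than click through SSMS, a one-off snippet along these lines would do it (the database name is taken from the log entry above; the connection string is a placeholder and needs ALTER DATABASE rights):

using System.Data.SqlClient;

// Turn off AUTO_CLOSE so SQL Server stops shutting the database down
// (and logging "Starting up database") whenever the last connection closes.
string adminConnString = "...";  // placeholder

using (var cn = new SqlConnection(adminConnString))
using (var cmd = new SqlCommand("ALTER DATABASE [test_db] SET AUTO_CLOSE OFF;", cn))
{
    cn.Open();
    cmd.ExecuteNonQuery();
}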
You are correct about your DataReaders: make sure you close them. However, I have experienced many problems with connections spawning out of control even when they were closed properly. Connection pooling didn't seem to be working as expected, since each post-back created a new SqlConnection. To avoid this seemingly unneeded re-creation of the connection, I adopted a singleton approach in my DAL: I create a single DataAdapter and send all my data requests through it. Although I've been told that this is unwise, I have not received any support for that claim (and am still eager to read any documentation/opinion to that effect: I want to get better, not maintain the status quo). I have a DataAdapter class for you to consider if you like.
If you are on SQL 2005+, you should be able to use Activity Monitor to see the "Details" of each connection, which sometimes gives you the last statement executed. Perhaps this will help you trace the statement back to some place in the code.
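The same information is also exposed through the DMVs if you'd rather query it than click through Activity Monitor. A rough sketch (the connection string is a placeholder, and the account needs VIEW SERVER STATE permission):

using System;
using System.Data.SqlClient;

// Lists each open connection and the most recent statement it executed,
// via the sys.dm_exec_connections / sys.dm_exec_sql_text DMVs (SQL 2005+).
string adminConnString = "...";  // placeholder
string sql = @"
    SELECT c.session_id, c.client_net_address, t.text
    FROM sys.dm_exec_connections c
    CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t;";

using (var cn = new SqlConnection(adminConnString))
using (var cmd = new SqlCommand(sql, cn))
{
    cn.Open();
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
            Console.WriteLine("{0}\t{1}\t{2}", rdr[0], rdr[1], rdr[2]);
    }
}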
I would recommend downloading http://sites.google.com/site/sqlprofiler/ to see what queries are happening, and sort of work backwards from there. Good luck man!
Many types, such as DbConnection, DbCommand and DbDataReader and their derived types (SqlConnection, SqlCommand and SqlDataReader), implement the IDisposable interface/pattern.
Without regurgitating what has already been written on the subject, you can take advantage of this mechanism via the using statement.
As a rule of thumb, you should always wrap your DbConnection, DbCommand and DbDataReader objects in a using statement, which generates MSIL to call IDisposable's Dispose method. Usually the Dispose method contains code to clean up unmanaged resources, such as closing database connections.
For example:
using (SqlConnection cn = new SqlConnection(connString))
{
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM MyTable", cn))
    {
        cmd.CommandType = CommandType.Text;
        cn.Open();

        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // ... do stuff etc.
            }
        }
    }
}
This will ensure that the Dispose methods are called on all of the objects immediately after use. For example, the Dispose method of the SqlConnection closes the underlying database connection (returning it to the pool) right away, instead of leaving it around until the next garbage collection run.
Little changes like this improve your application's ability to scale under heavy load. As you already know: "acquire expensive resources late and release them early". The using statement is a nice bit of syntactic sugar to help you out.
If you're using VB.NET 2.0 or later, it has the same construct:
Using Statement (Visual Basic) (MSDN)
using Statement (C# Reference) (MSDN)
This issue came up again today and I managed to solve it quite easily, so I thought I'd post back here.
In this case, you can assume that the connection spamming is coming from the same code block. To find the code block, I opened Activity Monitor and checked the details for some of the many open connections (right-click > Details).
This showed me the offending SQL, which you can search for in your application code / stored procedures.
Once I'd found it, it was as I suspected: an unclosed DataReader. The problem is now solved.
Is it possible for ASP.NET to mix up which user is associated with which session variable on the server? Or are session variables immutably tied to the original user that created them, across time, space & dimension?
To answer your original question: sessions are keyed to an ID that is placed in a cookie. This ID is generated using random-number crypto routines. It is not guaranteed to be unique, but it is highly unlikely ever to be duplicated within the lifespan of a session, even if your sessions run for full work days. It would probably take years for a really popular site to generate a duplicate key (no stats or facts to back that up).
Having said all that, it doesn't appear that your problem is session values getting mixed up. The first thing I would look at is connection pooling. ADO.NET pools connections by default, but if you request a connection with a username/password that is not in the pool, it should give you a new connection (hint: that may become a performance bottleneck in the future if your site grows very large). It has been a while since I worked with SQL Server, but in Oracle there is a call that can be made to switch the identity of the user, and I would be surprised if there were no equivalent in SQL Server. You might try connecting to your DB with a generic username/password and then executing that identity switch before you hand the connection back to the rest of your code.
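SQL Server's equivalent is EXECUTE AS (with REVERT to switch back). A rough sketch of the generic-credentials approach, purely for illustration (the connection string and user name are made up, EXECUTE AS LOGIN requires IMPERSONATE permission for the pooled login, and the target login must exist on the server):

using System.Data.SqlClient;

string appConnString = "...";  // one shared, poolable connection string
string currentUser = "jsmith"; // hypothetical: the user from your login page

using (var cn = new SqlConnection(appConnString))
{
    cn.Open();

    // Switch the security context so system_user reports the real end user.
    using (var cmd = new SqlCommand("EXECUTE AS LOGIN = @user;", cn))
    {
        cmd.Parameters.AddWithValue("@user", currentUser);
        cmd.ExecuteNonQuery();
    }

    // ... run the updates here; 'last updated by' now shows currentUser ...

    // Undo the impersonation before the connection goes back to the pool.
    using (var cmd = new SqlCommand("REVERT;", cn))
        cmd.ExecuteNonQuery();
}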
It depends on your session provider: if you have overridden the session-key generation in a way that is no longer unique, then multiple users may be accessing the same session.
What behavior are you seeing? And are you sure there's no static state in play with the variables you are talking about?
While anything is possible...
No, unless you are storing session state in SQL Server or some other out-of-process storage and then messing with it...
The session is bound to a user cookie; the chances of that getting mixed up in a normal scenario are very slim, though there could be issues if you are using distributed session state.
It's not possible. Sessions are tied to the creator.
Do you want them to mix up, or do you have a case where they appear to be mixed up?
More information:
I've got an app that takes the userid/password from the login page and stores it in a session variable. I plop it into my connection string for making calls to SQL Server.
When a table gets updated, we use 'system_user' in the database to populate the 'last updated by' field. We're seeing some odd behavior in which the user we expect to be listed is incorrect; it's showing someone else.
Can you pop into the debugger and see whether the correct value is indeed being passed on that connection string? That would quickly help you identify which side the problem is on.
Also make sure that none of the connection code has static properties for the connection or the user, or one user may have their connection replaced with that of the most recent user before the update fires off.
My guess is that you're re-using a static field on a class to hold the connection string. Static fields are shared across multiple IIS requests, so you're probably only ever seeing the most recently logged-in user in 'last updated by'.
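A hypothetical sketch of that anti-pattern (the class and member names are made up):

using System.Web;

// Anti-pattern: one static field serves every request in the AppDomain,
// so each login silently overwrites the connection string for all users.
public static class Dal
{
    public static string ConnectionString; // shared by ALL requests!
}

// On login:
Dal.ConnectionString = BuildConnString(userId, password); // clobbers everyone

// Safer: keep the per-user value in that user's own session and pass it down.
HttpContext.Current.Session["ConnString"] = BuildConnString(userId, password);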
BTW, unless you have a REALLY good reason for doing so, you shouldn't be connecting to the DB like this. You're preventing yourself from making effective use of connection pooling, which is going to hurt performance under high load.