Timeout Expired: Connection Pool - asp.net

Here is the error we are getting. We moved the app and DB servers from 32-bit to x64. .NET Framework 2.0 Service Pack 2 is installed on the servers.
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Here is some code from our DataAccess layer that returns a value from inside the try block:
public string GetSomething()
{
    string result = String.Empty;
    try
    {
        // loop through the data reader (a class-level field) and build up result
        return result;
    }
    finally
    {
        // the finally block still runs after the return in the try block
        reader.Close();
    }
}
And here is some code that opens and manages the connection:
public DBHelper(IDbCommand command)
{
    this.command = command;
    if (command.Connection.State == ConnectionState.Open)
    {
        // the caller owns an already-open connection; leave it alone
        shouldCloseConnection = false;
    }
    else
    {
        command.Connection.Open();
        shouldCloseConnection = true;
    }
}
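Presumably there is a matching cleanup path that honours shouldCloseConnection. A minimal sketch of what that counterpart might look like (this Dispose method is an assumption, not shown in the original):
// Hypothetical counterpart: close the connection only if this helper opened it.
public void Dispose()
{
    if (shouldCloseConnection && command.Connection.State == ConnectionState.Open)
    {
        command.Connection.Close(); // returns the connection to the pool
    }
}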

A) Make sure your min pool size is big enough; something like 20 or 30 (see the connection-string sketch after point B).
B) Be careful to dispose of objects properly. For instance, when you use an IDataReader to read from the database, you should do
using (IDataReader rdr = cmd.ExecuteReader())
{
    // read rows here
}
That way, no matter what happens (including an error), the rdr will be disposed. Pool issues are often caused by database connections and readers that are never disposed.
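Regarding point A, here is a minimal sketch of a connection string with explicit pool bounds (the server, database, and exact numbers are placeholders, not values from the original post):
// Hypothetical SqlClient connection string; Min/Max Pool Size are standard keywords.
string connStr = "Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;"
               + "Min Pool Size=20;Max Pool Size=200";
using (var conn = new SqlConnection(connStr))
{
    conn.Open(); // the pool is filled up to Min Pool Size when it is first created
}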

Try calling
SqlConnection.ClearAllPools();
and see whether this really is a connection pool problem.
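For example (a sketch; InvalidOperationException is what SqlClient throws for pool exhaustion, but the retry policy here is an assumption):
try
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open(); // throws InvalidOperationException when the pool is exhausted
        // ... run the query that normally times out ...
    }
}
catch (InvalidOperationException)
{
    // If a retry succeeds after this call, leaked connections are the likely cause.
    SqlConnection.ClearAllPools();
}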

Not a direct answer, but check Activity Monitor and look at the connections that are open for the application. Maybe the timeout setting in your previous install was shorter, causing SQL Server to force-close connections before the pool filled.


The underlying provider failed on Open

I made 3 Ajax processes run the code below at the same time, but one of the processes throws an exception whose message says "The underlying provider failed on Open."
try
{
    orderRepository orderRepo = new orderRepository(); // get context (MySQL)
    var result = (from x in orderRepo.orders
                  where x.orderid == orderno
                  select new { x.tracking, x.status, x.charged }).SingleOrDefault();
    charged = result.charged;
}
catch (Exception e)
{
    log.Error(e.Message); // The underlying provider failed on Open.
}
If I then re-run the one Ajax call that failed, it goes through. It happens to 1 of 3 (Ajax) processes, sometimes 2 of 5. I guess it's because all the processes try to use the database at the same time, but I couldn't find a solution.
This is my connection string:
<add name="EFMysqlContext" connectionString="server=10.0.0.10;User Id=root;pwd=xxxx;Persist Security Info=True;database=shop_db" providerName="Mysql.Data.MySqlClient" />
Does anybody know a solution, or something I can try? Please advise me.
Thanks
It sounds like a problem caused by concurrent connections to the database server using the same username. Have you tried destroying/disposing the repository (or connection) object after using it?
Give it a try:
try
{
    using (orderRepository orderRepo = new orderRepository()) // get context (MySQL)
    {
        var result = (from x in orderRepo.orders
                      where x.orderid == orderno
                      select new { x.tracking, x.status, x.charged }).SingleOrDefault();
        charged = result.charged;
    } // orderRepo is disposed here, releasing its connection
}
catch (Exception e)
{
    log.Error(e.Message); // The underlying provider failed on Open.
}
Not sure if it matters, but your provider name is Mysql.Data.MySqlClient rather than MySql.Data.MySqlClient; if the name is case-sensitive, that could be the cause.

ASP.NET session times out and all data is lost. Any ideas?

I created a web application that requires a lot of text to be entered, which takes a lot of time and thinking. Suppose the session times out after 30 minutes: I start writing, and while I'm thinking and typing, the session times out, the site redirects to the login page, and all the written data is lost.
Any ideas for this problem, other than extending the session timeout period?
Currently your session is created and managed in In-Process mode, and in this mode you cannot recover session state once it reaches the timeout. You can configure your application for SQL Server mode instead, so that session data is persisted to a SQL Server database.
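A minimal web.config sketch of SQL Server mode (the connection string is a placeholder; the session state database must first be created with aspnet_regsql.exe):
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=YourSqlServer;Integrated Security=True"
                timeout="30" />
</system.web>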
Profile Properties are an alternative way to persist the state.
You can use an Ajax function that regularly "calls home" (executes some dummy code on the server). This keeps the session alive as long as the user has the page open.
You might need to explicitly use the Session in that callback, such as
Session["LastAccess"] = DateTime.Now;
just to keep it alive.
If you execute this call every 15 minutes, the session will not time out, and the load on the server is minimal.
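A minimal sketch of the server-side end of such a keep-alive call (the handler name is an assumption; IRequiresSessionState is needed so the handler can touch the session):
// KeepAlive.ashx.cs: hypothetical endpoint polled by a client-side timer.
using System;
using System.Web;
using System.Web.SessionState;

public class KeepAlive : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Touching the session marks it active and resets the sliding timeout.
        context.Session["LastAccess"] = DateTime.Now;
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable { get { return true; } }
}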
Use the asynchronous programming model (APM), which allows portions of code to be executed on separate threads.
There are three styles of programming with APM:
Wait Until Done Model
Polling Model
Callback Model
Based on your requirements and the desired result, you can choose the model that is most appropriate.
For instance, say you want to read a file and wait until the read is done. Sample code:
byte[] buffer = new byte[100];
string filename = string.Concat(Environment.SystemDirectory, "\\mfc71.pdb");
FileStream strm = new FileStream(filename,
    FileMode.Open, FileAccess.Read, FileShare.Read, 1024,
    FileOptions.Asynchronous);
// Make the asynchronous call
IAsyncResult result = strm.BeginRead(buffer, 0, buffer.Length, null, null);
// Do some work here while you wait
// Calling EndRead will block until the async work is complete
int numBytes = strm.EndRead(result);
// Don't forget to close the stream
strm.Close();
Console.WriteLine("Read {0} Bytes", numBytes);
Console.WriteLine(BitConverter.ToString(buffer));
But creating your own threads is usually unnecessary and not recommended; .NET has a built-in thread pool that can be used in many situations where you might otherwise create your own threads. Sample code:
static void WorkWithParameter(object o)
{
    string info = (string)o;
    for (int x = 0; x < 10; ++x)
    {
        Console.WriteLine("{0}: {1}", info,
            Thread.CurrentThread.ManagedThreadId);
        // Slow down this thread and let other threads work
        Thread.Sleep(10);
    }
}
Instead of creating a new thread and controlling it ourselves, we hand the work to the ThreadPool using its QueueUserWorkItem method:
WaitCallback workItem = new WaitCallback(WorkWithParameter);
if (!ThreadPool.QueueUserWorkItem(workItem, "ThreadPooled"))
{
    Console.WriteLine("Could not queue item");
}

System.Data.SQLite Close() not releasing database file

I'm having a problem closing my database before an attempt to delete the file. The code is just
myconnection.Close();
File.Delete(filename);
And the Delete throws an exception that the file is still in use. I've re-tried the Delete() in the debugger after a few minutes, so it's not a timing issue.
I have transaction code, but it doesn't run at all before the Close() call, so I'm fairly sure it's not an open transaction. The SQL commands between open and close are just SELECTs.
ProcMon shows my program and my antivirus looking at the database file. It does not show my program releasing the db file after the Close().
Visual Studio 2010, C#, System.Data.SQLite version 1.0.77.0, Win7
I saw a two-year-old bug report just like this, but the changelog says it's fixed.
Is there anything else I can check? Is there a way to get a list of any open commands or transactions?
New, working code:
db.Close();
GC.Collect(); // yes, really release the db
bool worked = false;
int tries = 1;
while ((tries < 4) && (!worked))
{
    try
    {
        Thread.Sleep(tries * 100);
        File.Delete(filename);
        worked = true;
    }
    catch (IOException) // Delete only throws this on locking
    {
        tries++;
    }
}
if (!worked)
    throw new IOException("Unable to delete file " + filename);
Encountered the same problem a while ago while writing a DB abstraction layer for C# and I never actually got around to finding out what the issue was. I just ended up throwing an exception when you attempted to delete a SQLite DB using my library.
Anyway, this afternoon I was looking through it all again and figured I would try and find out why it was doing that once and for all, so here is what I've found so far.
What happens when you call SQLiteConnection.Close() is that (along with a number of checks and other things) the SQLiteConnectionHandle that points to the SQLite database instance is disposed. This is done through a call to SQLiteConnectionHandle.Dispose(); however, this doesn't actually release the pointer until the CLR's garbage collector performs a collection. Since SQLiteConnectionHandle overrides the CriticalHandle.ReleaseHandle() function to call sqlite3_close_interop() (through another function), the database is not actually closed until then.
From my point of view this is a very bad way to do things, since the programmer cannot be certain when the database actually gets closed, but that is the way it has been done, so I guess we have to live with it for now, or commit a few changes to System.Data.SQLite. Any volunteers are welcome to do so; unfortunately I am out of time to do it before next year.
TL;DR
The solution is to force a GC after your call to SQLiteConnection.Close() and before your call to File.Delete().
Here is the sample code:
string filename = "testFile.db";
SQLiteConnection connection = new SQLiteConnection("Data Source=" + filename + ";Version=3;");
connection.Close();
GC.Collect();
GC.WaitForPendingFinalizers();
File.Delete(filename);
Good luck with it, and I hope it helps
Just GC.Collect() didn't work for me.
I had to add GC.WaitForPendingFinalizers() after GC.Collect() in order to proceed with the file deletion.
Had a similar issue, though the garbage collector solution didn't fix it.
I found that disposing of the SQLiteCommand and SQLiteDataReader objects after use saved me from needing the garbage collector at all.
SQLiteCommand command = new SQLiteCommand(sql, db);
command.ExecuteNonQuery();
command.Dispose();
The following worked for me:
MySQLiteConnection.Close();
SQLite.SQLiteConnection.ClearAllPools();
More info:
Connections are pooled by System.Data.SQLite to improve performance. When you call the Close method on a connection object, the underlying connection to the database may stay alive in the background so that the next Open call is faster. When you know you don't want a new connection anymore, calling ClearAllPools closes all the connections that are alive in the background, and the file handle(s) to the db file are released. Then the db file can be removed, deleted, or used by another process.
In my case I was creating SQLiteCommand objects without explicitly disposing them.
var command = connection.CreateCommand();
command.CommandText = commandText;
value = command.ExecuteScalar();
I wrapped my command in a using statement and it fixed my issue.
public static class SqliteExtensions
{
    public static object ExecuteScalar(this SQLiteConnection connection, string commandText)
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = commandText;
            return command.ExecuteScalar();
        }
    }
}
The using statement ensures that Dispose is called even if an exception occurs.
Then it's a lot easier to execute commands as well.
value = connection.ExecuteScalar(commandText);
// command object created and disposed
I was having a similar problem. I tried the solution with GC.Collect, but, as noted, it can take a long time before the file becomes unlocked.
I found an alternative solution that involves disposing of the underlying SQLiteCommands in the TableAdapters; see this answer for additional information.
I've been having the same problem with EF and System.Data.SQLite.
For me, SQLiteConnection.ClearAllPools() and GC.Collect() reduced how often the file locking happened, but it would still occasionally happen (around 1% of the time).
I've been investigating, and it seems that some SQLiteCommands that EF creates aren't disposed and still have their Connection property set to the closed connection. I tried disposing of them, but Entity Framework would then throw an exception during the next DbContext read; it seems EF sometimes still uses them after the connection is closed.
My solution was to ensure the Connection property is set to null when the connection closes on these SQLiteCommands. This seems to be enough to release the file lock. I've been testing the code below and have not seen any file lock issues after a few thousand tests:
public static class ClearSQLiteCommandConnectionHelper
{
    private static readonly List<SQLiteCommand> OpenCommands = new List<SQLiteCommand>();

    public static void Initialise()
    {
        SQLiteConnection.Changed += SqLiteConnectionOnChanged;
    }

    private static void SqLiteConnectionOnChanged(object sender, ConnectionEventArgs connectionEventArgs)
    {
        if (connectionEventArgs.EventType == SQLiteConnectionEventType.NewCommand && connectionEventArgs.Command is SQLiteCommand)
        {
            OpenCommands.Add((SQLiteCommand)connectionEventArgs.Command);
        }
        else if (connectionEventArgs.EventType == SQLiteConnectionEventType.DisposingCommand && connectionEventArgs.Command is SQLiteCommand)
        {
            OpenCommands.Remove((SQLiteCommand)connectionEventArgs.Command);
        }

        if (connectionEventArgs.EventType == SQLiteConnectionEventType.Closed)
        {
            var commands = OpenCommands.ToList();
            foreach (var cmd in commands)
            {
                if (cmd.Connection == null)
                {
                    OpenCommands.Remove(cmd);
                }
                else if (cmd.Connection.State == ConnectionState.Closed)
                {
                    cmd.Connection = null;
                    OpenCommands.Remove(cmd);
                }
            }
        }
    }
}
To use it, just call ClearSQLiteCommandConnectionHelper.Initialise(); at the start of application load.
This keeps a list of active commands and sets their Connection to null when they point to a connection that has been closed.
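For example, in an ASP.NET application the call could go in Global.asax (the hook location is an assumption; any one-time startup point works):
protected void Application_Start()
{
    // Register the helper before the first SQLite connection is opened.
    ClearSQLiteCommandConnectionHelper.Initialise();
}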
Try this; it combines all of the suggestions above. It worked for me.
reader.Close();
connection.Close();
GC.Collect();
GC.WaitForPendingFinalizers();
command.Dispose();
SQLite.SQLiteConnection.ClearAllPools();
Hope that helps.
Use GC.WaitForPendingFinalizers()
Example:
Con.Close();
GC.Collect();
GC.WaitForPendingFinalizers();
File.Delete(Environment.CurrentDirectory + "\\DATABASENAME.DB");
I believe the call to SQLite.SQLiteConnection.ClearAllPools() is the cleanest solution. As far as I know, it is not proper to call GC.Collect() manually in a WPF environment. That said, I did not notice the problem until I upgraded to System.Data.SQLite 1.0.99.0 in March 2016.
Had a similar problem. Calling the garbage collector didn't help me. Later I found a way to solve the problem.
The author also wrote that he ran SELECT queries against the database before trying to delete it. I have the same situation.
I have the following code:
SQLiteConnection bc;   // the connection, opened elsewhere
string sql;            // the SELECT query
var cmd = new SQLiteCommand(sql, bc);
SQLiteDataReader reader = cmd.ExecuteReader();
reader.Read();
reader.Close(); // when I added this line, the problem was solved
Also, I didn't need to close the database connection or call the garbage collector. All I had to do was close the reader that was created while executing the SELECT query.
The best answer that worked for me:
dbConnection.Close();
System.Data.SQLite.SQLiteConnection.ClearAllPools();
GC.Collect();
GC.WaitForPendingFinalizers();
File.Delete(Environment.CurrentDirectory + "\\DATABASENAME.DB");
The reason for this seems to be a feature called pooling.
Appending "Pooling=false" to the connection string causes the DB file to be released by connection.Close().
See the FAQ on connection pooling here:
https://www.devart.com/dotconnect/sqlite/docs/FAQ.html#q54
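A minimal sketch of such a connection string (the file name is a placeholder):
// With Pooling=False the file handle is released as soon as Close() is called.
var connection = new SQLiteConnection("Data Source=myDatabase.db;Version=3;Pooling=False;");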
I was struggling with a similar problem. Shame on me... I finally realized that the reader was not closed. For some reason I thought the reader would be closed when the corresponding connection was closed. Obviously, GC.Collect() didn't work for me.
Wrapping the reader in a "using" statement is also a good idea. Here is a quick test:
static void Main(string[] args)
{
    try
    {
        var dbPath = "myTestDb.db";
        ExecuteTestCommand(dbPath);
        File.Delete(dbPath);
        Console.WriteLine("DB removed");
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
    Console.Read();
}

private static void ExecuteTestCommand(string dbPath)
{
    using (var connection = new SQLiteConnection("Data Source=" + dbPath + ";"))
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "PRAGMA integrity_check";
            connection.Open();
            var reader = command.ExecuteReader();
            if (reader.Read())
                Console.WriteLine(reader.GetString(0));
            // without the next line the database file will remain locked
            reader.Close();
        }
    }
}
Maybe you don't need to deal with the GC at all. Please check whether every sqlite3_prepare is finalized.
For each sqlite3_prepare, you need a corresponding sqlite3_finalize.
If you don't finalize correctly, sqlite3_close will not close the connection.
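In System.Data.SQLite terms, the prepared statement is finalized when the command and reader that own it are disposed. A minimal sketch (assuming conn is an open SQLiteConnection; the query is arbitrary):
using (var cmd = new SQLiteCommand("SELECT name FROM sqlite_master", conn))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        Console.WriteLine(reader.GetString(0));
    }
} // both Dispose calls run here, finalizing the underlying statement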
This works for me, but I noticed that sometimes the -wal and -shm journal files are not deleted when the process closes. If you want SQLite to remove the -wal and -shm files when all connections are closed, the last connection closed MUST be non-read-only. Hope this helps someone.
I was using SQLite 1.0.101.0 with EF6 and having trouble with the file being locked after all connections and entities had been disposed.
This got worse with updates from EF, which kept the database locked after they had completed.
GC.Collect() was the only workaround that helped, and I was beginning to despair.
In desperation, I tried Oliver Wickenden's ClearSQLiteCommandConnectionHelper (see his answer of 8 July). Fantastic. All locking problems gone!
Thanks Oliver.
Waiting for the garbage collector may not release the database every time, and that happened to me. When some kind of exception occurs in the SQLite database, for example trying to insert a row with an existing PrimaryKey value, it will hold the database file until you dispose of the offending command. The following code catches the SQLite exception and cancels the problematic command:
SQLiteCommand insertCommand = connection.CreateCommand();
try
{
    // ... set insert parameters ...
    insertCommand.ExecuteNonQuery();
}
catch (SQLiteException)
{
    insertCommand.Cancel();
    insertCommand.Dispose();
}
If you don't handle the exceptions of problematic commands, the garbage collector cannot do anything about them: there are unhandled exceptions still referencing these commands, so they are not garbage. This handling method worked well for me in combination with waiting for the garbage collector.

How often should I open/close my Booksleeve connection?

I'm using the Booksleeve library in a C#/ASP.NET 4 application. Currently the RedisConnection object is a static object in my MonoLink class. Should I be keeping this connection open, or should I open/close it after each query/transaction (as I'm doing now)? I'm slightly confused. Here's how I'm using it as of now:
public static MonoLink CreateMonolink(string URL)
{
    redis.Open();
    var transaction = redis.CreateTransaction();
    string Key = null;
    try
    {
        var IncrementTask = transaction.Strings.Increment(0, "nextmonolink");
        if (!IncrementTask.Wait(5000))
        {
            transaction.Discard();
            throw new System.TimeoutException("Monolink index increment timed out.");
        }
        // Increment complete
        Key = string.Format("monolink:{0}", IncrementTask.Result);
        var AddLinkTask = transaction.Strings.Set(0, Key, URL);
        if (!AddLinkTask.Wait(5000))
        {
            transaction.Discard();
            throw new System.TimeoutException("Add monolink creation timed out.");
        }
        // Run the transaction
        var ExecTransaction = transaction.Execute();
        if (!ExecTransaction.Wait(5000))
        {
            throw new System.TimeoutException("Add monolink transaction timed out.");
        }
    }
    catch (Exception)
    {
        transaction.Discard();
        throw; // rethrow without resetting the stack trace
    }
    finally
    {
        redis.Close(false);
    }
    // Link has been added to redis
    MonoLink ml = new MonoLink();
    ml.Key = Key;
    ml.URL = URL;
    return ml;
}
Thanks in advance for any responses/insight. Also, is there any sort of official documentation for this library? Thank you, S.O. ^_^
According to the author of Booksleeve,
The connection is thread safe and intended to be massively shared;
don't do a connection per operation.
Should I be keeping this connection open, or should I be open/closing
it after each query/transaction (as I'm doing now)?
There is probably a little overhead if you open a new connection each time you want to make a query/transaction, and although redis is designed for a high number of concurrently connected clients, there may be performance problems once that number reaches the tens of thousands. As far as I know, connection pooling has to be done by the client library (redis itself doesn't have this functionality), so you should check whether Booksleeve supports it. Otherwise, you should open the connection when your application starts and keep it open for the application's lifetime (assuming you don't need parallel clients connected to redis for some reason).
Also, is there any sort of official documentation for this library?
The only documentation I was able to find regarding how to use it is the tests folder in its source code.
For reference (continuing #bzlm's answer), I created a singleton that always provides the same Redis connection using BookSleeve (if the connection is closed, it is re-created; otherwise the existing connection is served).
Look at this: https://stackoverflow.com/a/8777999/290343
You consume it like this:
RedisConnection connection = Redis.RedisConnectionGateway.Current.GetConnection();
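A minimal sketch of what such a gateway can look like (the class shape and the State checks are assumptions reconstructed from BookSleeve's API, not the exact code behind the link):
// Hypothetical singleton that hands out one shared BookSleeve connection.
public static class RedisConnectionGateway
{
    private static readonly object SyncLock = new object();
    private static RedisConnection connection;

    public static RedisConnection GetConnection()
    {
        lock (SyncLock)
        {
            // Re-create the connection if it was never opened or has closed.
            if (connection == null
                || connection.State == RedisConnectionBase.ConnectionState.Closing
                || connection.State == RedisConnectionBase.ConnectionState.Closed)
            {
                connection = new RedisConnection("localhost"); // host is a placeholder
                connection.Open(); // Open() is asynchronous; callers may wait on the returned task
            }
            return connection;
        }
    }
}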

Small performance test on a web service

I'm trying to develop a small application that tests how many requests per second my service can support, but I think I'm doing something wrong. The service is at an early development stage, but I'd like to have this test handy so I can check from time to time that I'm not doing something that decreases performance. The problem is that I cannot get the web server or the database server to reach 100% CPU.
I'm using three different computers: one runs the web server (WinSrv Standard 2008 x64, IIS7), another the database (Win2K, SQL Server 2005), and the last is my computer (Win7 x64 Ultimate), where I run the test. The computers are connected through a 100 Mbit Ethernet switch. The POST request is 9 bytes and the response is 842 bytes.
The test launches several threads, and each thread has a while loop; in each iteration it creates a WebRequest object, performs a call, increments a common counter, waits between 1 and 5 milliseconds, and then does it again:
static int counter = 0;

static void Main(string[] args)
{
    ServicePointManager.DefaultConnectionLimit = 250;
    Console.WriteLine("Ready. Press any key...");
    Console.ReadKey();
    Console.WriteLine("Running...");

    string localhost = "localhost";     // alternative test targets
    string linuxmono = "192.168.1.74";
    string server = "192.168.1.5:8080";

    DateTime start = DateTime.Now;
    Random r = new Random(DateTime.Now.Millisecond);
    for (int i = 0; i < 50; i++)
    {
        new Thread(new ParameterizedThreadStart(Test)).Start(server);
        Thread.Sleep(r.Next(1, 3));
    }
    Thread.Sleep(2000);
    while (true)
    {
        Console.WriteLine("Requests per second: "
            + counter / DateTime.Now.Subtract(start).TotalSeconds);
        Thread.Sleep(3000);
    }
}

public static void Test(object ip)
{
    Guid guid = Guid.NewGuid();
    Random r = new Random(DateTime.Now.Millisecond);
    while (true)
    {
        String test = "<lalala/>";
        WebRequest req = WebRequest.Create("http://"
            + (string)ip + "/WebApp/" + guid.ToString()
            + "/Data/Tables=whatever");
        req.Method = "POST";
        req.ContentType = "application/xml";
        req.Credentials = new NetworkCredential("aaa", "aaa", "domain");
        byte[] array = Encoding.UTF8.GetBytes(test);
        req.ContentLength = array.Length;
        using (Stream reqStream = req.GetRequestStream())
        {
            reqStream.Write(array, 0, array.Length);
        }
        using (Stream responseStream = req.GetResponse().GetResponseStream())
        {
            String response = new StreamReader(responseStream).ReadToEnd();
            if (response.Length != 842) Console.Write(" EEEE ");
        }
        Interlocked.Increment(ref counter);
        Thread.Sleep(r.Next(1, 5));
    }
}
If I run the test, neither of the computers shows excessive CPU usage. Say I get X requests per second; if I run the console application twice at the same time, I get X/2 requests per second in each, but the web server is still at 30% CPU and the database server at 25%.
I've tried removing the Thread.Sleep in the loop, but it doesn't make a big difference.
I'd like to push the machines to the maximum, to check how many requests per second they can serve. I guessed I could do it this way, but apparently I'm missing something here. What is the problem?
Kind regards.
IMO, you're better off using SoapUI for the test. You can easily adjust the test case for the number of threads, number of iterations, etc., and it will graph the results. When you hit the plateau where you overwhelm the server, you'll see it on the graph. If one PC isn't enough, just run more instances on other PCs. You can do all of this with the free version.
There are a lot of limiting factors besides the CPU on a web server. There are many IIS settings that throttle the number of connections that can be served.
I would read this:
http://www.eggheadcafe.com/articles/20050613.asp
I know it is for IIS 6, but there are things that will still apply.
If you have access to MSDN and have VS 2010 Ultimate, I would check out its load testing tools. Purchasing the load testing program can be expensive, but if you need to test something specific, you can use the trial version to accomplish what you need. You can use it to monitor response time, server utilization, and so on. Well worth looking into.
I agree with Chris, and would go a step further and recommend JMeter, as it can also test the database and the web app, all within the same script.
