Let's say we are executing lots of different SQL commands, and SqlCommand.CommandTimeout was left at its default value of 30 seconds.
And let's assume some of those commands are long-running queries, so we might get a timeout exception.
Correct me if I'm wrong: this exception is raised just because .NET does not want to wait any longer. But if we are using connection pooling, the connection might remain open, so might that SQL statement still be running on the SQL Server side? Or is there some hidden communication between the two systems that stops it, whether we use connection pooling or not?
I just want to know what the mechanism is and whether it affects SQL Server performance. I mean, if the query is really long, say it takes 10 minutes to run, and it's still running, it might slow down the server unnecessarily, since no one can get the result.
UPDATE
So here I'm asking about the connection pool specifically. The code definitely closes the connection with exception handling, or we can just assume we are using the using pattern recommended by @dash here. The problem is that if I call the Close() or Dispose() method on that SqlConnection object, it is returned to the connection pool; it is not physically closed.
I'm asking: when the connection is returned to the pool, is that long query still running on the SQL Server side? And if so, how can we avoid that?
UPDATE again
Thanks to @dash for mentioning database transactions; yes, a rollback will make the connection wait, and we are not closing it and returning it to the pool yet. But what if it's just a long SELECT query, or a single UPDATE without any transaction involved? And specifically, I want to know: is there a way to tell SQL Server that I do not need the result now, please stop running it?
It all depends on how you are executing your queries, really.
Imagine the following query:
SqlConnection myConnection = new SqlConnection("connection_string");
SqlCommand myCommand = new SqlCommand();
myCommand.Connection = myConnection;
myCommand.CommandType = CommandType.StoredProcedure;
myCommand.CommandTimeout = some_long_time;
myCommand.CommandText = "database_killing_procedure_lol";
myConnection.Open(); // Connection's now open
myCommand.ExecuteNonQuery();
Two things will happen: first, this method will block until command.ExecuteNonQuery() finishes; second, we will tie up a connection from the connection pool for the duration of the method.
What happens if we time out? Well, an exception is thrown - a SqlException with its Number property equal to -2. However, remember that in the code above there is no exception management, so all that will happen is that the objects will go out of scope and we'll need to wait for them to be disposed. In particular, our connection won't be reusable until this happens.
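If you do add exception management, the timeout case can be recognised by that Number property. A minimal sketch (the handling itself is left as a placeholder):
try
{
    myCommand.ExecuteNonQuery();
}
catch (SqlException ex)
{
    if (ex.Number == -2) // -2 is the client-side timeout code
    {
        // The client gave up waiting; SQL Server may still be winding the query down.
    }
    else
    {
        throw; // not a timeout; let it bubble up
    }
}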
It's one of the reasons why the following pattern is preferred:
using(SqlConnection myConnection = new SqlConnection("connection_string"))
{
    using(SqlCommand myCommand = new SqlCommand())
    {
        myCommand.Connection = myConnection;
        myCommand.CommandType = CommandType.StoredProcedure;
        myCommand.CommandTimeout = some_long_time;
        myCommand.CommandText = "database_killing_procedure_lol";
        myConnection.Open(); // Connection's now open
        myCommand.ExecuteNonQuery();
    }
}
It means that, as soon as the query is finished, either naturally (it runs to completion) or through an exception (timeout or otherwise), the resources are given back immediately.
In your specific issue, having a large number of queries that take a long time to execute is bad for many reasons. In a web application, you potentially have many users contending for a limited number of resources: memory, database connections, CPU time and so on. Therefore, tying up any of these with expensive operations will reduce the responsiveness and performance of your web application, or limit the number of users you can serve simultaneously. Further, if the database operation is expensive, you can tie up your database, too, further limiting performance.
It's always worth attempting to bring down the execution time of your database queries for this reason alone. If you can't, then you will have to be careful about how many of these types of queries you run simultaneously.
EDIT:
So you are actually interested in what's happening on the SQL Server side... the answer is... it depends! The CommandTimeout is actually a client-side event: what you are saying is that if the query takes longer than n seconds, then you don't want to wait any more. SQL Server gets told that this is the case, but it still has to deal with what it's currently doing, so it can actually take some time before SQL Server finishes the query. It will attempt to prioritise this, but that's about it.
This is especially true with transactions; if you are running a query wrapped in a transaction, and you roll that back as part of your exception management, then you have to wait until the rollback is complete.
It's also very common to see people panic and start issuing KILL commands against the SQL process id that the query is running under. This is often a mistake if the command is running a transaction, but is often okay for long-running selects.
SQL Server has to manage its state so that it remains consistent. The fact that the client is no longer listening means you have wasted work, but SQL Server still has to clean up after itself.
So yes, the ASP.NET side of things will be fine, as it doesn't care, but SQL Server still has to finish the work it began, or reach a point where it can safely abandon that work, or roll back any changes in any transactions that were opened.
This obviously could have a performance impact on the database server depending on the query!
Even a long-running SELECT, UPDATE or INSERT outside of a transaction has to finish. SQL Server will try to abandon it as soon as it can, but only if it's safe to do so. For UPDATEs and INSERTs especially, it has to reach a point where the database is still consistent. For SELECTs, it will attempt to end as soon as it is able to.
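To address the "please stop running it" part directly: the closest thing to a cooperative stop from the client is SqlCommand.Cancel(), which sends the same attention signal to the server that a timeout does. A minimal sketch, assuming the command runs on a background thread (the connection string and procedure name are placeholders):
using System.Data;
using System.Data.SqlClient;
using System.Threading;

class CancelSketch
{
    static void Run()
    {
        using (var connection = new SqlConnection("connection_string"))
        using (var command = new SqlCommand("long_running_procedure", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 0; // no client timeout; we cancel by hand

            connection.Open();
            var worker = new Thread(() =>
            {
                try { command.ExecuteNonQuery(); }
                catch (SqlException) { /* cancellation surfaces as a SqlException */ }
            });
            worker.Start();

            // ...later, once we decide we no longer need the result:
            command.Cancel(); // sends the attention signal; the server abandons the batch
            worker.Join();
        }
    }
}
As with a timeout, the server still has to roll back whatever the cancelled statement was doing, so the stop is not instantaneous.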
Thanks to @dash for mentioning database transactions; yes, a rollback
will make the connection wait, and we are not closing it and returning
it to the pool yet. But what if it's just a long SELECT query, or a
single UPDATE without any transaction involved? And specifically, I
want to know: is there a way to tell SQL Server that I do not need the
result now, please stop running it?
I think these links will answer your need:
Link1
Link2
The SqlConnection.ClearPool() method may be what you're looking for. The following post touches on this.
How to force a SqlConnection to physically close, while using connection pooling?
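For completeness, a hedged sketch of how that looks. ClearPool discards the pooled connections associated with the connection string, so the next Close()/Dispose() physically closes the connection instead of returning it to the pool:
using (var conn = new SqlConnection("connection_string"))
{
    conn.Open();
    // ... execute your command ...
    SqlConnection.ClearPool(conn); // discard this connection string's pool entries
} // Dispose() now physically closes the connection rather than pooling it
Note that ClearPool only controls pooling; it does not stop a query that is already executing on the server.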
I created a Symfony 3 command that is expected to run for days (or even weeks). It uses Doctrine 2 for reading some initial data and for writing the execution status from time to time. The SQL queries are expected to take a few milliseconds.
My concern is that the whole process will eventually crash if the MySQL connection closes due to inactivity.
Question: is Doctrine keeping the database connection open between flush calls? Or, is it reconnecting every time flush is called?
AFAIK Symfony will open a connection to the database the first time Doctrine is used in your app and close it when the HTTP request has been sent (or when you specifically tell Doctrine to close it). Once connected, Doctrine will keep the connection active until you explicitly close it (it will be active before, during and after flush()).
In your case you should probably open and close the db connection explicitly when you need it. Something like the following code could solve your problem:
// When you need the DB
/**
 * @var \Doctrine\DBAL\Connection $connection
 */
$connection = $this->get('doctrine')->getConnection();

// Check if the connection is still active and, if not, connect to the DB.
if (!$connection->isConnected()) {
    $connection->connect();
}

// Your code to update the database goes here.

// Once you're done with the DB update, close the connection.
if ($connection->isConnected()) {
    $connection->close(); // close the DB connection
}
This will avoid DB connection timeouts and the like; however, you should be quite careful about memory leaks if this script will be running as long as you're saying. Using Symfony might not be the best approach to this problem.
You can simply ping the connection periodically, e.g. every 1000 seconds, which is below MySQL's connection timeout limit.
The best thing to do would be to run a supervising process (e.g. supervisord), which would restart the process as soon as your app stops. Then you can simply tell your script to exit before the connection is dropped (the limit is a configured value; in MySQL, for instance, it's the wait_timeout variable). The supervising process will notice your app is dead and will restart it.
We are using the SQLite.NET PCL in a Xamarin application.
When putting the database under pressure by doing inserts into multiple tables we are seeing BUSY exceptions being thrown.
Can anyone explain what the difference is between BUSY and LOCKED? And what causes the database to be BUSY?
Our code uses a single connection to the database created using the following code:
var connectionString = new SQLiteConnectionString(GetDefaultConnectionString(),
_databaseConfiguration.StoreTimeAsTicks);
var connectionWithLock = new SQLiteConnectionWithLock(new SQLitePlatformAndroid(), connectionString);
return new SQLiteAsyncConnection (() => { return connectionWithLock; });
So our problem turned out to be that, although we had ensured the class we'd written only created a single connection to the database, we hadn't ensured that this class was a singleton, so we were still creating multiple connections to the database. Once we ensured it was a singleton, the busy errors stopped. (A sketch of what that looks like is below.)
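A minimal sketch of enforcing that, using the same SQLite.NET PCL types as the question (the namespaces and database path are assumptions to adapt to your package version; Lazy&lt;T&gt; is thread-safe by default, so only one connection is ever built):
using System;
using SQLite.Net;
using SQLite.Net.Async;
using SQLite.Net.Platform.XamarinAndroid;

public static class Database
{
    private static readonly Lazy<SQLiteAsyncConnection> _connection =
        new Lazy<SQLiteAsyncConnection>(() =>
        {
            var connectionString = new SQLiteConnectionString("app.db3", true);
            var connectionWithLock = new SQLiteConnectionWithLock(
                new SQLitePlatformAndroid(), connectionString);
            return new SQLiteAsyncConnection(() => connectionWithLock);
        });

    // Every caller shares the same underlying connection.
    public static SQLiteAsyncConnection Connection
    {
        get { return _connection.Value; }
    }
}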
What I've taken from this is:
Locked means you have multiple threads trying to access the database; the code is inherently not thread-safe.
Busy means you have a thread waiting on another thread to complete; your code is thread-safe, but you are seeing contention in using the database.
...current operation cannot proceed because the required resources are locked...
I am assuming that you are using async-style inserts from different threads, and thus an insert is timing out while waiting for the lock taken by a different insert to be released. You can use synchronous inserts to avoid this condition. I personally avoid this, when needed, by creating a FIFO queue and consuming that queue synchronously on a dedicated thread (see the sketch below). You could also handle the condition by retrying your transaction X number of times before letting the exception ripple up.
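A minimal sketch of that dedicated-writer idea; BlockingCollection provides the FIFO behaviour, and the Insert call in the usage comment stands in for whatever write you need (availability of these types depends on your PCL profile):
using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class WriteQueue : IDisposable
{
    private readonly BlockingCollection<Action> _writes = new BlockingCollection<Action>();
    private readonly Thread _worker;

    public WriteQueue()
    {
        _worker = new Thread(() =>
        {
            // Consume writes one at a time, in FIFO order, on this single thread.
            foreach (var write in _writes.GetConsumingEnumerable())
                write();
        }) { IsBackground = true };
        _worker.Start();
    }

    public void Enqueue(Action write)
    {
        _writes.Add(write);
    }

    public void Dispose()
    {
        _writes.CompleteAdding();
        _worker.Join();
    }
}

// Usage: every thread enqueues instead of touching the connection directly:
// queue.Enqueue(() => connection.Insert(record));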
SQLiteBusyException is a special exception that is thrown whenever SQLite returns SQLITE_BUSY or SQLITE_IOERR_BLOCKED error code. These codes mean that the current operation cannot proceed because the required resources are locked.
When a timeout is set via SQLiteConnection.setBusyTimeout(long), SQLite will attempt to get the lock during the specified timeout before returning this error.
Ref: http://www.sqlite.org/lockingv3.html
Ref: http://sqlite.org/capi3ref.html#sqlite3_busy_timeout
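And a hedged sketch of the retry alternative mentioned above. The exception type here is an assumption (it varies between SQLite.NET versions; ideally you would also check that its result code is Busy before retrying):
// Requires: using System.Threading; using SQLite.Net;
void InsertWithRetry(SQLiteConnection connection, object record, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            connection.Insert(record); // any write call
            return;
        }
        catch (SQLiteException)
        {
            if (attempt >= maxAttempts) throw; // give up and let it ripple up
            Thread.Sleep(50 * attempt);        // small backoff before retrying
        }
    }
}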
I have applied the following solution, which works in my case (a mobile app):
Use the SQLitePCLRaw.bundle_green NuGet package with SQLitePCL.
Try to use a single connection throughout the app.
After creating the SQLiteConnection, apply a busy timeout using the following call:
var connection = new SQLiteConnection(databasePath: path);
SQLite3.BusyTimeout(connection.Handle, 5000); // 5000 milliseconds
I am facing a problem related to "Cannot find table 0". Initially I had no idea how to find the root cause of this exception. Then I came to know that the problem arose from the error "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." That is, the execution of the stored procedure in SQL Server 2005 takes longer than the query timeout (which I believe is 3 seconds here). So I executed the same stored procedure with the same parameters in SSMS (SQL Server Management Studio), but it took only 1 second.
In the meantime, we ran the same source and SQL code on another server (a backup server), and there was no error. That is, the whole process (a do-while loop which runs 40 times, containing 4 for loops which each run approximately 60 times; each iteration accesses 3 stored procedures) took 30 minutes on my system, but only 3 minutes on the backup server, which means there was no timeout there. But all the source is the same. So now I have concluded that there is a technical issue involved.
I tried the following things in my source:
• In ASP.NET, I set the SqlCommand timeout to 0, but it was a failure.
• I used SET ARITHABORT ON and WITH RECOMPILE; that was also a failure.
• Then I tackled parameter sniffing, i.e. I used local variables in the stored procedures, but that all went in the wrong direction too.
The Ado.net code for accessing the stored procedure in asp.net is like the following:
Public Function retds1(ByVal SPName As String, ByVal conn As SqlConnection, Optional ByVal ParameterValues() As Object = Nothing) As DataSet
    dconn = New SqlConnection(ConfigurationManager.ConnectionStrings("webriskpro").ConnectionString)
    Try
        sqlcmd = New SqlCommand
        ds = New DataSet
        If dconn.State = ConnectionState.Open Then dconn.Close()
        sqlcmd = New SqlCommand(SPName, dconn)
        sqlcmd.CommandType = CommandType.StoredProcedure
        sqlcmd.CommandTimeout = 0
        dconn.Open()
        SqlCommandBuilder.DeriveParameters(sqlcmd)
        If Not ParameterValues Is Nothing Then
            For i As Integer = 1 To ParameterValues.Length
                sqlcmd.Parameters(i).Value = ParameterValues(i - 1)
            Next
        End If
        da = New SqlDataAdapter(sqlcmd)
        da.SelectCommand.CommandTimeout = 0
        da.Fill(ds)
    Catch ex As Exception
        send_prj_err2mail(ex, SPName, "")
    Finally
        dconn.Close()
    End Try
    Return ds
End Function
The execution plan of the error-giving procedure was attached as an image (not reproduced here).
Hope you guys understand my problem. Please get back to me with some ideas!
You will need to debug the application to find out what the root cause is. In another thread you mention:
I have checked the process in SQL Server Profiler. Since the loop has no limit, we don't know how many times it iterates.
You need to write out to a text file how many times the code loops; it may be a small number on the backup server compared to a large number on PROD or your local PC. You should also log how long each part of the operation takes. Sample code using a Stopwatch:
Dim sw As New Stopwatch() ' requires System.Diagnostics
sw.Start()
'...long operation...
sw.Stop()
File.AppendAllText("timings.log", "Operation XYZ took " & sw.ElapsedMilliseconds / 1000 & " seconds") ' requires System.IO
I am suspicious of the code here; it doesn't look like it was very well written. For example, checking whether the connection is open right after instantiating it smells of poor resource management. It's as if the author of this code found the connection was left open and wasn't closed somewhere else:
dconn = New SqlConnection(ConfigurationManager.ConnectionStrings("webriskpro").ConnectionString)
...
If dconn.State = ConnectionState.Open Then dconn.Close()
I'd recommend that you use the Using statement, which guarantees the connection will be closed, e.g.:
Using cn As New SqlConnection(ConnectionString)
    cn.Open()
    Using cmd As New SqlCommand("GetCustomerByID", cn)
        Try
            With cmd
                .Connection = cn
                .CommandType = CommandType.StoredProcedure
                .Parameters.Add("@CustomerID", SqlDbType.Int, 4)
                .Parameters("@CustomerID").Value = CustomerID
            End With
            da = New SqlDataAdapter(cmd)
            da.Fill(ds, "Customer")
        Catch ex As Exception
            ' Log or rethrow here; don't swallow the exception silently.
        End Try
    End Using
End Using
Note that if the connection is closed before Fill is called, it is opened to retrieve data and then closed again; if the connection is already open before Fill is called, it remains open. So you don't even need to open it explicitly in your case.
Again, you will need to debug the application to find the root cause. Add heaps of logging to help diagnose the problem; it should shed light and give you some clues about this performance bottleneck.
Good luck!!
Edit: For any of the connection pooling ideas raised here to work, you will need to use connection pooling, so remove pooling=false from your connection string:
<add name="webriskpro"
connectionString="Data Source=TECH01\SQL2005;Initial Catalog=webriskpro1;User ID=sa;Password=#basix123; <strike>pooling=false</strike>;connection
timeout=600;"/>
Edit 2: Some research to help you work out the problem
Run SQLDiag, which comes as part of the product. You can refer to Books Online for more details. In brief, capture a server-side trace and the blocker script.
Once the trace is captured, look for the "Attention" event. That would be the SPID which received the error. If you filter by that SPID, you will see an RPC:Completed event before the "Attention". Check the duration there. Is it 30 seconds? If yes, then the client waited 30 seconds for a response from SQL Server and got "timed out". [This is a client-side setting; SQL Server itself would never stop the connection.]
Now, check whether the query that was running really should take 30 seconds.
If yes, then tune the query or increase the timeout setting on the client.
If no, then the query must have been waiting for some resource (blocked).
At that point, go back to the blocker script and check the time frame in which the "Attention" came.
If you can execute sp_who2 while the queries are timing out, you can use the BlkBy column to trace back to the SPID holding the lock that everyone else is waiting on.
sp_who3 is useful too as it includes the actual query.
When you see which queries/transactions are locking/blocking your database until they complete, you may need to rewrite them, or run them at another time, to avoid blocking other processes.
An extra point to dig into is the autogrowth increment of your transaction log and database files. Set them to a fixed size instead of a percentage of the current file size. If the files are large, and allocating enough space takes longer than your transaction timeout, your DB will come to a halt.
I think the exception is because of the connection opening time.
You need to specify a timeout for the connection, or delete pooling=false from the connection string.
Like this:
<add name="db1" connectionString="Data Source=your server;Initial Catalog=dbname;User ID=sa;Password=*****;connection timeout=600;"/>
The default pool size is 100; you can change it if you like.
Since no other applications will be running on the backup server, the system will be fast and there will be no interruptions.
But on your local system you might have been working in some other applications, so that could be one of the reasons. The next time you run the process, close all the open applications, then check in SQL Profiler.
Since the same code works fine on the backup server, I don't think the loops (60 × 40 iterations) are the reason for the slowdown.
Best of luck!!
A few ideas:
Check the primary server for resource starvation issues: memory, disk space, lots of other processes running and using resources, et cetera.
Check the primary server for configuration issues: database is configured to use only a small amount of memory, system swap file is too small, et cetera.
Check the primary server for hardware issues: memory errors, failing RAID, failing drive, et cetera.
Hope that helps!
If the error occurs only under high load, it is likely that connections aren't being returned to the pool and remain associated with the transaction for longer than they should. If transaction speed isn't a problem, try the following call:
SqlConnection.ClearPool(connection);
Note: for every operation in SQL we must open a connection, so make this call after opening the connection:
conn.Open();
cmd.ExecuteNonQuery();
SqlConnection.ClearPool(conn);
I am executing a submit routine in ASP.NET. The problem is that, while debugging the code in the try-catch block, if I or the user encounters an error, the SQL transaction never rolls back.
SQL Server 2008 hangs completely if I break this submit routine partway through. I am unable to do SELECT/INSERT operations even from SSMS. In the end, I have to restart SQL Server in order to roll back the transactions.
Code for submit:
SqlConnection conn = Db.getConn();
if (conn.State == ConnectionState.Closed) conn.Open();
SqlTransaction trn;
trn = conn.BeginTransaction();
SqlCommand sqlCmd = new SqlCommand("", conn);
sqlCmd.Transaction = trn;
try
{
string query = GetQuery(); // works fine
sqlCmd.CommandText = query;
sqlCmd.ExecuteNonQuery();
using (SqlBulkCopy bcp = new SqlBulkCopy(conn,SqlBulkCopyOptions.Default, trn))
{
bcp.ColumnMappings.Add("FaYear", "FaYear");
bcp.ColumnMappings.Add("CostCode", "CostCode");
bcp.ColumnMappings.Add("TokenNo", "TokenNo");
bcp.DestinationTableName = "ProcessTokenAddress";
bcp.WriteToServer(globaltblAddress.DefaultView.ToTable());
}
trn.Commit();
}
catch (SqlException ex)
{
trn.Rollback();
}
NOTE: Just while writing the code here, I realized I have caught SqlException and not Exception. Is that what is causing the error? Phew?
IMPORTANT: Do I need to roll back the transaction in Page_Unload or some other event handler which could handle unexpected situations (e.g. the user closes the browser while the transaction is in progress, the user hits the back button, etc.)?
First, in .NET you shouldn't be maintaining a single open connection which you reuse over and over. The result of this is exactly what you're experiencing: in a situation where a connection should be closed, it isn't.
Second, connections implement IDisposable. This means they should be created, and used, within a using statement, or a try-catch block with a finally that explicitly closes the connection. You can break this rule if you have a class that itself implements IDisposable and holds onto the connection for its own lifetime, then closes the connection when it is disposed.
You might be tempted to think that you're increasing efficiency by not opening and closing connections all the time. In fact, you'd be mistaken, because .NET handles connection pooling for you. The standard practice is to pass around connection strings, not open connection objects. You can wrap a connection string in a class that will return a new connection for you, but you shouldn't maintain open connections. Doing so can lead to errors just like the one you have experienced.
So instead, do these things:
Use a using statement. This will properly clean up your connections after creating and using them.
using (SqlConnection conn = Db.getConn()) {
conn.Open();
// your code here
}
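Since the question involves a transaction, note that SqlTransaction is also IDisposable, and disposing an uncommitted transaction rolls it back. A hedged sketch reusing the question's Db.getConn() and GetQuery():
using (SqlConnection conn = Db.getConn())
{
    conn.Open();
    using (SqlTransaction trn = conn.BeginTransaction())
    using (SqlCommand cmd = new SqlCommand(GetQuery(), conn, trn))
    {
        cmd.ExecuteNonQuery();
        trn.Commit(); // if an exception skips this line, Dispose() rolls the transaction back
    }
}
This way the rollback happens even if the request dies partway through, with no Page_Unload handler needed.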
The fact that you have to check whether the connection is open or not points to the problem. Don't do this. Instead, change the code in your Db class to hand out a newly-created connection every time. Then you can be certain that its state will be closed, and you can open it confidently. Alternatively, open the connection in your Db class, but name your method to indicate the connection will be open, such as GetNewOpenConnection; see the sketch below. (Try to avoid abbreviations in method and class names.)
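A minimal sketch of that shape (the class and method names are the hypothetical ones suggested above):
using System.Data.SqlClient;

static class Db
{
    private const string ConnectionString = "connection_string";

    // Hands out a brand-new connection every call; the pool makes this cheap.
    public static SqlConnection GetNewOpenConnection()
    {
        var conn = new SqlConnection(ConnectionString);
        conn.Open();
        return conn;
    }
}
Callers then wrap every use in a using block, so the connection goes back to the pool deterministically.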
I would recommend that you throw the error. While logging it and not throwing it is a possible option, in any context where a user is sitting at a computer expecting a result, simply swallowing the error will not be the correct action, because then how will your later code know that an error occurred and let the user know? Some means of passing the exception information on to the user is necessary. It's better not to handle the exception at all than to swallow it silently.
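A minimal sketch of the log-and-rethrow shape (SubmitRoutine and LogError are placeholders for your own code):
try
{
    SubmitRoutine(); // the submit logic from the question
}
catch (SqlException ex)
{
    LogError(ex); // record the details
    throw;        // rethrow, preserving the stack trace, so a page-level handler can inform the user
}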
Finally, a little style note is that getConn() does not follow the normal capitalization practices found in the C# community and recommended by Microsoft. Public methods in classes should start with a capital letter: GetConn().
I'm getting this error every few days. I won't see the error for a few days, then I'll get a flurry of 20 or so, all within a minute.
I've been very thorough in going through my code, so that I'm using this basic setup for my DB access:
try
{
myConnection.Open();
mySqlDataAdapter.Fill(myDataTable);
myConnection.Close();
}
catch (Exception err)
{
if (myConnection.State != ConnectionState.Closed) myConnection.Close();
throw err;
}
The way I understand it, this should execute my queries and immediately release the connection back to the pool; if something goes wrong with the query, I catch the exception, close my connection, then throw the error up, where it eventually gets trapped at the application level, which logs and emails me the error.
Even using this throughout my code, I'm still running across the issue. What can I do to diagnose the root cause?
The issue is the number of pooled connections available in the pool.
In your connection string, you can add the
"Max Pool Size=100"
attribute to increase the size of your pool (note that the default is already 100, so set it higher than that). However, it sounds like you are concurrently running a significant number of SQL queries, all of which are long-running. Perhaps you should look at ways to either shorten the queries or run them sequentially through a single connection.
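For reference, a hedged example of where the attribute lives (the server and database names are placeholders):
var connectionString =
    "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=SSPI;Max Pool Size=200;";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open(); // throws an InvalidOperationException ("Timeout expired...") if the pool is exhausted
}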
Changing the code to something like this makes it easier to read:
try
{
    myConnection.Open();
    mySqlDataAdapter.Fill(myDataTable);
}
catch (Exception)
{
    throw; // rethrow without resetting the stack trace
}
finally
{
    myConnection.Close(); // always runs, whether or not an exception was thrown
}
But it doesn't help your timeout.
It sounds like the Fill statement takes too long, or the problem is actually somewhere else, where you don't close the connection.
SQL profiling could help figure out whether the SELECT statement takes too long.
A quick question here: are you by chance on an Access DB? There is a limit on the number of concurrent connections you can have on it, which would result in your type of error. SQL Server shouldn't have the same problem.
If you are running SQL Server, then turn off connection pooling and see if it makes a difference to your app.
I found out with our system about 5 years ago, when our company was rapidly growing, that we basically broke Access when we started constantly hitting the user cap. We switched to SQL Server in about 24 hours and haven't had a problem since.
If you're using MSSQL, set up a profiler trace running for some time, a day or two.
Have the trace saved to a file or a table (a file is supposed to be faster).
Then, with a script reading that file into a table, you can easily query it to find the longest-running queries.