I created a Symfony 3 command that is expected to run for days (or even weeks). It uses Doctrine 2 to read some initial data and to write the execution status from time to time. Each SQL statement is expected to take a few milliseconds.
My concern is that the whole process will eventually crash if the MySQL connection closes due to inactivity.
Question: is Doctrine keeping the database connection open between flush calls? Or, is it reconnecting every time flush is called?
AFAIK Symfony will open a connection to the database the first time Doctrine is used in your app and close it when the HTTP request has been sent (or if you specifically tell Doctrine to close it). Once connected, Doctrine will keep the connection active until you explicitly close it (it will be active before, during, and after flush()).
In your case you should probably open and close the db connection explicitly when you need it. Something like the following code could solve your problem:
// When you need the DB
/** @var \Doctrine\DBAL\Connection $connection */
$connection = $this->get('doctrine')->getConnection();

// Check if the connection is still active and, if not, reconnect
if (!$connection->isConnected()) {
    $connection->connect();
}

// Your code to update the database goes here.

// Once you're done with the db update, close the connection.
if ($connection->isConnected()) {
    $connection->close();
}
This will avoid db connection timeouts and the like; however, you should be quite careful about memory leaks if this script will be running for as long as you're saying. Using Symfony might not be the best approach to this problem.
You can simply ping the connection at some interval, say every 1000 seconds, as long as it is less than MySQL's wait_timeout.
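As a rough sketch (assuming Doctrine DBAL 2.x, where Connection::ping() is available), the command's main loop could do something like this periodically:

$connection = $this->get('doctrine')->getConnection();

// ping() returns false when the server has gone away
if (!$connection->ping()) {
    $connection->close();   // discard the dead handle
    $connection->connect(); // and re-establish it
}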
Best thing to do would be to run a supervising process (e.g. supervisord), which restarts your app as soon as it stops. Then you can simply tell your script to exit before the connection is dropped (that's a configured value; in MySQL, for instance, it's the wait_timeout variable). The supervising process will notice your app has died and will restart it.
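For illustration, a minimal supervisord program entry might look like this (the program name and paths are placeholders, not from the original setup):

[program:my_long_command]
command=/usr/bin/php /var/www/app/bin/console app:my-long-command
autostart=true
autorestart=true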
Good morning,
I have an issue when selecting the next record with Doctrine when there is concurrency. I have installed supervisord inside a Docker container; it starts multiple processes of the same "dispatch" command. The dispatch command basically gets the next job in the queue from the db and sends it to the right executor. Right now I have two Docker containers that each run multiple processes through supervisord, and these 2 containers are on 2 different servers. I'm also using Doctrine optimistic locking. The Doctrine query to find the next job in the queue is the following:
$qb = $this->createQueryBuilder('job')
    ->andWhere('job.status = :todo')
    ->setMaxResults(1)
    ->orderBy('job.priority', 'DESC')
    ->addOrderBy('job.createdAt', 'ASC')
    ->setParameters(array('todo' => Job::STATUS_TO_DO));

return $qb->getQuery()->getOneOrNullResult();
So the issue is that when a worker tries to get the next job with the above query, I notice it frequently runs into the Optimistic Lock Exception, which is fine in itself: it means the record has already been taken by another worker. When an Optimistic Lock Exception occurs, it's caught, the worker stops, and another one starts. But I lose a lot of time because of this, since it takes workers multiple tries to finally get the next job instead of yet another Optimistic Lock Exception.
I thought about getting a random job id in the above Doctrine query instead of always taking the top one.
What's your take on this? Is there a better way to handle this?
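To illustrate the random-id idea, roughly (the window size of 10 is arbitrary): fetch a small set of candidates and pick one at random, so concurrent workers are less likely to race for the same row:

$candidates = $this->createQueryBuilder('job')
    ->andWhere('job.status = :todo')
    ->orderBy('job.priority', 'DESC')
    ->addOrderBy('job.createdAt', 'ASC')
    ->setMaxResults(10) // arbitrary window size
    ->setParameters(array('todo' => Job::STATUS_TO_DO))
    ->getQuery()
    ->getResult();

// Pick one candidate at random to reduce collisions between workers.
return $candidates ? $candidates[array_rand($candidates)] : null;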
I finally figured it out. There was a delay between one of the servers and the remote MySQL instance, so updates were not seen right away, and that triggered the Optimistic Lock Exceptions. I fixed it by moving the MySQL DB to Azure, which is way faster than the old server and causes no delays.
We are using the SQLite.NET PCL in a Xamarin application.
When putting the database under pressure by doing inserts into multiple tables we are seeing BUSY exceptions being thrown.
Can anyone explain what the difference is between BUSY and LOCKED? And what causes the database to be BUSY?
Our code uses a single connection to the database created using the following code:
var connectionString = new SQLiteConnectionString(GetDefaultConnectionString(),
    _databaseConfiguration.StoreTimeAsTicks);
var connectionWithLock = new SQLiteConnectionWithLock(new SQLitePlatformAndroid(), connectionString);
return new SQLiteAsyncConnection(() => { return connectionWithLock; });
So our problem turned out to be that, although we had ensured the class we'd written only created a single connection to the database, we hadn't ensured that this class was a singleton; we were therefore still creating multiple connections to the database. Once we ensured it was a singleton, the busy errors stopped.
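A minimal sketch of that fix (DatabaseService, GetDefaultConnectionString(), and the storeDateTimeAsTicks argument are placeholders for our own code, not the exact original):

public sealed class DatabaseService
{
    // Lazy<T> gives us a thread-safe, process-wide single instance.
    private static readonly Lazy<DatabaseService> _instance =
        new Lazy<DatabaseService>(() => new DatabaseService());

    public static DatabaseService Instance => _instance.Value;

    public SQLiteAsyncConnection Connection { get; }

    private DatabaseService()
    {
        // The one and only connection, created exactly once.
        var connectionString = new SQLiteConnectionString(GetDefaultConnectionString(),
            true /* storeDateTimeAsTicks, placeholder */);
        var connectionWithLock = new SQLiteConnectionWithLock(new SQLitePlatformAndroid(), connectionString);
        Connection = new SQLiteAsyncConnection(() => connectionWithLock);
    }
}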
What I've taken from this is:
Locked means you have multiple threads trying to access the database; the code is inherently not thread safe.
Busy means you have a thread waiting on another thread to complete; your code is thread safe, but you are seeing contention in using the database.
I am assuming that you are using async-style inserts and are on different threads and thus an insert is timing out waiting for the lock of a different insert to complete. You can use synchronous inserts to avoid this condition. I personally avoid this, when needed, by creating a FIFO queue and consuming that queue synchronously on a dedicated thread. You could also handle the condition by retrying your transaction X number of times before letting the Exception ripple up.
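As a sketch of the FIFO idea (my own illustration, not library code): writes are enqueued from any thread and drained synchronously by one dedicated worker thread, so the database only ever sees a single writer:

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class WriteQueue : IDisposable
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly Thread _worker;

    public WriteQueue()
    {
        _worker = new Thread(() =>
        {
            // GetConsumingEnumerable() blocks until items arrive and
            // exits once CompleteAdding() has been called.
            foreach (var writeAction in _queue.GetConsumingEnumerable())
                writeAction();
        });
        _worker.IsBackground = true;
        _worker.Start();
    }

    // Callers enqueue a delegate that performs the insert; the dedicated
    // thread executes it synchronously, in FIFO order.
    public void Enqueue(Action writeAction) => _queue.Add(writeAction);

    public void Dispose()
    {
        _queue.CompleteAdding();
        _worker.Join();
    }
}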
SQLiteBusyException is a special exception that is thrown whenever SQLite returns SQLITE_BUSY or SQLITE_IOERR_BLOCKED error code. These codes mean that the current operation cannot proceed because the required resources are locked.
When a timeout is set via SQLiteConnection.setBusyTimeout(long), SQLite will attempt to get the lock during the specified timeout before returning this error.
Ref: http://www.sqlite.org/lockingv3.html
Ref: http://sqlite.org/capi3ref.html#sqlite3_busy_timeout
I have applied the following solution, which works in my case (a mobile app):
Use the sqlitepclraw.bundle_green NuGet package with SqlitePCL.
Try to use a single connection throughout the app.
After creating the SQLiteConnection, apply a busy timeout using the following call:
var connection = new SQLiteConnection(databasePath: path);
SQLite3.BusyTimeout(connection.Handle, 5000); // 5000 milliseconds
ldap_unbind_ext is blocked until the previously initiated LDAP search is completed.
I initiate a search and unfortunately the server takes 3+ minutes to respond. Meanwhile, if I attempt to register to another server, the old connection should be torn down and a new connection established by my application.
But, as there is an active query on the old connection, ldap_unbind_ext gets blocked until the search is completed.
I tried calling ldap_abandon_ext before we call ldap_unbind_ext, but now it blocks in ldap_abandon_ext.
Could someone help me with this?
Thanks in advance!
Using ldap_pvt_tls_destroy will destroy the connection even though there is an active query on it.
We need this when we tear down a connection, to allow the global TLS settings to change.
This function is called in ldapsearch, in its tool_destroy() routine, and also in slapd.
After moving most of our BT applications from a BizTalk 2009 to a BizTalk 2010 environment, we began the work of removing old applications and unused hosts. In this process we ended up with a zombie host instance.
This has resulted in the bts_CleanupDeadProcesses job starting to fail with the error “Executed as user: RH\sqladmin. Could not find stored procedure 'dbo.int_ProcessCleanup_ProcessLabusHost'. [SQLSTATE 42000] (Error 2812). The step failed.”
After looking at the CleanupDeadProcesses process, I found the zombie host instance in the BTMsgBox.ProcessHeartBeats table, with dtNextHeartbeatTime set to the time when the host was removed.
(I'm assuming that the Host Instance Processes don't exist in your services any longer, and that the SQL Agent job fails)
From looking at the source of the [dbo].[bts_CleanupDeadProcesses] job, it loops through the dbo.ProcessHeartbeats table with a cursor (btsProcessCurse, lol) looking for 'dead' heartbeats.
Each process instance has its own cleanup sproc int_ProcessCleanup_[HostName] and a sproc for the heartbeat watchdog to call, viz. bts_ProcessHeartbeat_[HostName] (although FWIW the sproc calls it @ApplicationName), filtered by WHERE (s.dtNextHeartbeatTime < @dtCurrentTime).
It is thus tempting to just delete the record for your deleted / zombie host (or, if you aren't that brave, to simply update the dtNextHeartbeatTime on the heartbeat record for your dead host instance to sometime next century; see the sketch below). Either way, the SQL Agent job should skip the dead instance.
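A hedged sketch of the less brave option (verify the actual schema of dbo.ProcessHeartbeats yourself, and back up the MessageBox database first; the WHERE value is a placeholder):

-- Push the zombie's next heartbeat out a century so the cursor skips it.
-- Per the question, the zombie row's dtNextHeartbeatTime still holds the
-- time the host was removed, so that value identifies the row.
UPDATE dbo.ProcessHeartbeats
SET dtNextHeartbeatTime = '2100-01-01'
WHERE dtNextHeartbeatTime = '<time the host was removed>';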
An alternative could be to try to re-create the host and instances with the same names through the Admin Console, just to delete them (properly) again. This might however cause additional problems, as BizTalk won't be able to create the 2 sprocs above because of the undeleted objects.
Either way, I obviously wouldn't do this on your prod environment until you've confirmed it works with a trial run first.
It looks like someone else got stuck with a similar situation here
And there is also a good dive into the details of how the heartbeat mechanism works by XiaoDong Zhu here
Have you tried BTSTerminator? That works for one-off cleanups.
http://www.microsoft.com/en-us/download/details.aspx?id=2846
Let's say we are executing lots of different SQL commands, and SqlCommand.CommandTimeout was left at the default value of 30 seconds.
And let's just assume some of those SQL commands are long queries, so we might get the timeout exception.
Correct me if I'm wrong: this exception is raised just because .NET does not want to wait any longer. But if we are using connection pooling, the connection might remain open, so might that SQL statement still be running on the SQL Server side? Or is there some hidden communication between the systems that stops it, whether we are using connection pooling or not?
I just want to know what the mechanism is and whether it would affect SQL Server performance. I mean, if the query is really long, say taking 10 minutes to run, and it's still running, it might just slow down the server unnecessarily, as no one can get the result.
UPDATE
So here I'm asking about the connection pool specifically; assume the code definitely closes the connection in its exception handling, or that we are using the pattern @dash calls preferred below. The problem is that if I call the Close() or Dispose() method on that SqlConnection object, it is returned to the connection pool; it is not physically closed.
I'm asking whether, when it's returned to the pool, that long query is still running on the SQL Server side, and, if possible, how to avoid that.
UPDATE again
Thanks to @dash for mentioning database transactions: yes, a rollback will make it wait, and we are not closing the connection and returning it to the pool yet. But what if it's just a long select query, or a single individual update without any database transaction involved? And specifically I want to know: is there a way we can tell SQL Server "I do not need the result now, please stop running it"?
It all depends on how you are executing your queries, really.
Imagine the following query:
SqlConnection myConnection = new SqlConnection("connection_string");
SqlCommand myCommand = new SqlCommand();
myCommand.Connection = myConnection;
myCommand.CommandType = CommandType.StoredProcedure;
myCommand.CommandTimeout = some_long_time;
myCommand.CommandText = "database_killing_procedure_lol";

myConnection.Open(); // Connection's now open
myCommand.ExecuteNonQuery();
Two things will happen: one is that this method will block until command.ExecuteNonQuery() finishes. The second is that we will also tie up a connection from the connection pool for the duration of the method.
What happens if we time out? Well, an exception is thrown: a SqlException with a Number property equal to -2. However, remember that in the code above there is no exception management, so all that will happen is that the objects will go out of scope and we'll need to wait for them to be disposed. In particular, our connection won't be reusable until this happens.
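As a sketch of the missing exception management (error number -2 is ADO.NET's client-side timeout code, as noted above):

try
{
    myCommand.ExecuteNonQuery();
}
catch (SqlException ex) when (ex.Number == -2)
{
    // The client has stopped waiting; as discussed below, SQL Server may
    // still be winding the query down on its side.
}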
It's one of the reasons why the following pattern is preferred:
using (SqlConnection myConnection = new SqlConnection("connection_string"))
{
    using (SqlCommand myCommand = new SqlCommand())
    {
        myCommand.Connection = myConnection;
        myCommand.CommandType = CommandType.StoredProcedure;
        myCommand.CommandTimeout = some_long_time;
        myCommand.CommandText = "database_killing_procedure_lol";

        myConnection.Open(); // Connection's now open
        myCommand.ExecuteNonQuery();
    }
}
It means that, as soon as the query is finished, either naturally (it runs to completion) or through an exception (timeout or otherwise), the resources are given back immediately.
In your specific issue, having a large number of queries that take a long time to execute is bad for many reasons. In a web application, you potentially have many users contending for a limited number of resources: memory, database connections, CPU time, and so on. Tying up any of these with expensive operations will therefore reduce the responsiveness and performance of your web application, or limit the number of users you can serve simultaneously. Further, if the database operation is expensive, you can tie up your database too, limiting performance even more.
It's always worth attempting to bring the execution time for database queries down for this reason alone. If you can't, then you will have to be careful about how many of these types of queries you can run simultaneously.
EDIT:
So you are actually interested in what's happening on the SQL Server side... the answer is... it depends! The CommandTimeout is actually a client event - what you are saying is that if the query takes longer than n seconds, then I don't want to wait any more. SQL Server gets told that this is the case, but it still has to deal with what it's currently doing, so it can actually take some time before SQL Server finishes the query. It will attempt to prioritise this, but that's about it.
This is especially true with transactions; if you are running a query wrapped in a transaction, and you roll that back as part of your exception management, then you have to wait until the rollback is complete.
It's also very common to see people panic and start issuing KILL commands against the SQL Process id that the query is running under. This is often a mistake if the command is running a transaction, but is often okay for long running selects.
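For reference, a hedged sketch of what that looks like (the elapsed-time threshold and the session id 53 are example values, not from the original post):

-- Find long-running requests; total_elapsed_time is in milliseconds.
SELECT session_id, status, command, total_elapsed_time
FROM sys.dm_exec_requests
WHERE total_elapsed_time > 600000; -- longer than 10 minutes

KILL 53; -- the session_id found above; forces a rollback if in a transaction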
SQL Server has to manage its state so that it remains consistent. The fact that the client is no longer listening means you have wasted work, but SQL Server still has to clean up after itself.
So yes, the ASP.Net side of things will be all fine as it doesn't care, but SQL Server still has to finish the work it began, or reach a point where it can abandon that work safely, or rollback any changes in any transactions that were opened.
This obviously could have a performance impact on the database server depending on the query!
Even a long-running SELECT, UPDATE, or INSERT outside of a transaction has to finish. SQL Server will try to abandon it as soon as it can, but only if it's safe to do so. Obviously, for UPDATEs and INSERTs especially, it has to reach a point where the database is still consistent. For SELECTs, it will attempt to end as soon as it is able to.
Thanks to @dash for mentioning database transactions: yes, a rollback will make it wait, and we are not closing the connection and returning it to the pool yet. But what if it's just a long select query, or a single individual update without any database transaction involved? And specifically I want to know: is there a way we can tell SQL Server "I do not need the result now, please stop running it"?
I think these links will answer your need:
Link1
Link2
The static SqlConnection.ClearPool() method may be what you're looking for. The following post touches on this:
How to force a SqlConnection to physically close, while using connection pooling?
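A minimal sketch of using it (the connection string is a placeholder; ClearPool marks the pooled connections associated with this connection string so they are discarded instead of reused):

using System.Data.SqlClient;

var connection = new SqlConnection("connection_string");
connection.Open();
// ... command times out, exception is handled ...

// Remove this connection string's pooled connections from the pool;
// Close() then really releases the physical connection instead of pooling it.
SqlConnection.ClearPool(connection);
connection.Close();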