If I have a very long-running UPDATE query that takes hours and I cancel it in the middle of its run, I get the message below:
"User requested cancel of current operation"
Will Oracle automatically roll back the transaction?
Will the DB locks still be held if I cancel the query? If so, how do I release them?
How do I check which UPDATE query is holding the locks?
Thanks.
It depends.
Assuming that whatever client application you're using properly implemented a query timeout and that the error indicates that the timeout was exceeded, then Oracle will begin rolling back the transaction when the error is thrown. Once the transaction finishes rolling back, the locks will be released. Be aware, though, that it can easily take at least as long to roll back the query as it took to run. So it will likely be a while before the locks are released.
If, on the other hand, the client application hasn't implemented the cancellation properly, the client may not have notified Oracle to cancel the transaction so it will continue. Depending on the Oracle configuration and exactly what the client does, the database may detect some time later that the application was not responding and terminate the connection (going through the same rollback process discussed above). Or Oracle may end up continuing to process the query.
You can see what sessions are holding locks and which are waiting on locks by querying dba_waiters and dba_blockers.
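For example, a minimal sketch using the standard data-dictionary views (you need SELECT privileges on them; column names are per the Oracle data dictionary):

```sql
-- Sessions that are currently blocking someone else
SELECT holding_session FROM dba_blockers;

-- Who is waiting on whom, and what kind of lock is involved
SELECT waiting_session, holding_session, lock_type,
       mode_held, mode_requested
FROM   dba_waiters;

-- Join to v$session to see what the blocking session is running
SELECT s.sid, s.serial#, s.username, s.sql_id
FROM   v$session s
WHERE  s.sid IN (SELECT holding_session FROM dba_blockers);
```

If a session's locks never clear on their own, a DBA can terminate the session with ALTER SYSTEM KILL SESSION 'sid,serial#'; Oracle then performs the same rollback before releasing the locks.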
Related
I have a web method exposed in a web service in ASP.NET. On a request, I fetch a record from the database and start a transaction. Within the transaction I update the record, perform other operations, and commit. Now another request comes in through the web method and reads the same record while the first transaction is still in progress.
I am using a dirty read with (NOLOCK); if I remove NOLOCK, it causes a timeout. I am using ASP.NET with VB and SQL Server 2008 R2.
Try to lock the record only when you are ready to update, and keep the lock time to a minimum. If you need to detect whether the record was updated between the read and the write, grab a timestamp when reading the record and check whether it has changed when you are ready to write. If the timestamp is not the same, some other thread updated the record and your changes are no longer valid.
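A minimal sketch of that optimistic-concurrency check in SQL Server, using a ROWVERSION column (the table and column names here are hypothetical):

```sql
-- 1. Read the record along with its current version
SELECT Id, Amount, RowVer
FROM   dbo.Accounts
WHERE  Id = @Id;

-- 2. Later, update only if the version is unchanged
UPDATE dbo.Accounts
SET    Amount = @NewAmount
WHERE  Id = @Id
  AND  RowVer = @RowVerReadEarlier;

-- 3. If @@ROWCOUNT is 0, another session changed the row in the
--    meantime: re-read the record and retry or report a conflict.
```

This keeps no lock open between the read and the write; the version comparison in the WHERE clause does the conflict detection.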
When I am trying to execute the insert procedure, I am getting the following in the data access layer: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
Please give me a solution.
While it's possible that you have an extremely long INSERT statement that takes multiple seconds to execute, what seems more likely (especially as you tried increasing the timeout property) is that some other transaction has locked the table you are trying to insert into.
There are dozens of ways you could get into that situation. If this doesn't usually happen, investigate whether someone else could be locking that table. It could be as simple as another developer leaving open a debugging session stopped at a breakpoint while he or she went to lunch.
If this is happening over and over again, every time you run, and there's no lock on the table that you can detect, it's possible that you're conflicting with yourself at runtime: opening a transaction, changing something, and while the transaction is still open, executing a new command against the same database affecting one of the tables affected by the transaction.
If that's the case, you can do a couple of things: re-design how you're doing this two-step process so you only need one connection open at a time, or try doing both activities in the same transaction.
Also, be sure you are doing proper housekeeping with SqlConnection, SqlTransaction, and SqlDataReader objects: close or dispose of them as soon as you can. You might look up the C# using statement if you are not familiar with it.
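A minimal sketch of that housekeeping with using blocks (connection string and SQL are hypothetical); each object is disposed automatically even if an exception is thrown:

```csharp
using System.Data.SqlClient;

// Connection, transaction, and command are all released deterministically.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    using (var cmd = new SqlCommand(
        "INSERT INTO Orders (Name) VALUES (@n)", conn, tx))
    {
        cmd.Parameters.AddWithValue("@n", "example");
        cmd.ExecuteNonQuery();
        tx.Commit();
    }
} // connection closed here and returned to the pool
```

Disposing promptly means locks taken by the transaction are released as soon as possible rather than lingering until garbage collection.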
I am developing an application that has a big database, and a few queries take a while to return results. Sometimes users refresh the interface before execution has finished. There are ten users, and this scenario occurs many times. I think the scripts do not stop running on SQL Server when the web page is refreshed (because performance decreases so much after a few hours). If so, can I stop the execution of the script when the page is refreshed? In web.config, will changing the Connect Timeout attribute stop the execution and roll back operations on the database after the timeout period has passed? Or is there any other option in the web.config file?
No. The connection timeout property sets the timeout after which an attempt to connect to the SQL server fails. What you need is a command timeout for your stored procedure. If you share your code, you can expect more help.
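To illustrate the difference in ADO.NET (the procedure name and connection string here are hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

// "Connect Timeout" in the connection string governs only Open():
using (var conn = new SqlConnection(
    "Server=.;Database=App;Integrated Security=true;Connect Timeout=15"))
using (var cmd = new SqlCommand("dbo.LongRunningProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandTimeout = 300; // seconds the command may run; 0 = no limit
    conn.Open();
    cmd.ExecuteNonQuery();    // throws a timeout error after 300 s
}
```

Note that when a command timeout fires, ADO.NET sends an attention signal to SQL Server, which cancels the running batch and rolls back its work on that connection.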
The architecture of the application is straight forward. There is a web application which maintain account holder data. This data is processed and the status of account holders is updated based on number of business rules. This process is initiated using a button on the page and is a long running process (say 15 mins). A component is developed to do this data processing which internally calls stored procedures. Most of the business rules are kept in stored procedure.
To handle timeouts, the processing is done asynchronously (using the thread pool, a custom thread, or async callback delegates). The entire process runs under a transaction. I would like to know your view on what happens to the transaction if the app pool is recycled or the worker process is terminated forcefully.
I'm going to assume you're using a SQL database like SQL Server, MySQL or Oracle.
These database platforms have their own internal transactional model. When you communicate with them and initiate a transaction, the server manages the transaction for you.
For a transaction to commit, the database has to be told by the client to commit the changes. If the transaction never receives this instruction, it remains in a "pending" state. Eventually, after the transaction has been pending without any further instructions, the server will consider it dead and abandon it, performing a rollback on the transaction.
This is the worst-case scenario for transaction-handling, as a pending transaction may (depending on isolation level) cause resources in the database (rows, pages, entire tables) to be unavailable. Normally you see this when a network connection has failed mid-transaction (e.g. from power loss) and the client does not send the "close connection" command to the server.
If your application is terminated by having the Application Pool recycled whilst in the middle of working against the database during a transaction, the connection to the database will be closed. This act of closing the connection should cause the server to abandon any pending transactions associated with the connection.
The exact behaviour will depend on the specific database and configuration.
In either case, your database's data will remain intact.
If the worker process is terminated, I think the transaction rolls back.
But you have to test to be sure.
Just curious if anyone else has got this particular error and know how to solve it?
The scenario is as follow...
We have an ASP.NET web application using Enterprise Library running on Windows Server 2008 IIS farm connecting to a SQL Server 2008 cluster back end.
MSDTC is turned on. DB connections are pooled.
My suspicion is that somewhere along the line there is a failed MSDTC transaction, the connection got returned to the pool, and the next query on a different page picked up the misbehaving connection and got this particular error. The funny thing is that we got this error on a query that has no need whatsoever for a distributed transaction (committing to two databases, etc.). We were only doing a SELECT query (no transaction) when we got the error.
We did SQL profiling and the query was run on the SQL Server, but it never returned (since the MSDTC transaction had already been aborted on the connection).
Some other related errors to accompany this are:
New request is not allowed to start
because it should come with valid
transaction descriptor.
Internal .Net Framework Data Provider error 60.
MSDTC has a default 90-second timeout; if a query's execution exceeds this limit, you will encounter this error when the transaction tries to commit.
A bounty may help get the answer you seek, but you're probably going to get better answers if you give some code samples and give a better description of when the error occurs.
Does the error only intermittently occur? It sounds like it from your description.
Are you enclosing the code that you want to run as a transaction in a using TransactionScope block, as Microsoft recommends? This should help avoid weird transaction behavior. Recall that a using block makes sure that the object is always disposed regardless of exceptions thrown. See here: http://msdn.microsoft.com/en-us/library/ms172152.aspx
If you're using TransactionScope there is an argument System.TransactionScopeOption.RequiresNew that tells the framework to always create a new transaction for this block of code:
Using ts As New Transactions.TransactionScope(Transactions.TransactionScopeOption.RequiresNew)
' Do Stuff
End Using
Also, if you're suspicious that a connection is getting faulted and then put back into the connection pool, the likely solution is to enclose the code that may fault the connection in a Try-Catch block and Dispose the connection in the catch block.
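A sketch of that pattern (names are hypothetical); disposing in the catch block, optionally combined with SqlConnection.ClearPool, keeps a faulted connection from being handed to the next caller:

```csharp
using System.Data.SqlClient;

var conn = new SqlConnection(connectionString);
try
{
    conn.Open();
    // ... work that participates in the distributed transaction ...
}
catch (SqlException)
{
    // Evict this connection's pool entries so a connection carrying a
    // doomed MSDTC transaction is not reused by a later request.
    SqlConnection.ClearPool(conn);
    conn.Dispose();
    throw;
}
finally
{
    conn.Dispose(); // Dispose is safe to call more than once
}
```

ClearPool is heavier than strictly necessary (it clears the whole pool for that connection string), but it is a reliable way to flush a pool suspected of holding bad connections.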
Old question ... but ran into this issue past few days.
Could not find a good answer until now. Just wanted to share what I found out.
My scenario involved multiple sessions opened by multiple session factories. I had to roll back correctly and wait, making sure the other transactions were no longer active; it appears that rolling back just one of them rolls back everything. After adding a Thread.Sleep() between the rollbacks, the code skips the now-unnecessary second rollback and completes fine. Subsequent hits that trigger the method no longer produce the "New request is not allowed to start because it should come with valid transaction descriptor." error.
https://gist.github.com/josephvano/5766488
I have seen this before and the cause was exactly what you thought. As Rice suggested, make sure that you are correctly disposing of the db related objects to avoid this problem.