How to rollback DB transaction after http connection is lost - servlets

Recently at an interview, the interviewer asked me the following question:
Suppose a request is sent to a servlet and the servlet performs several DB transactions (first update and commit, then read and update and commit again), which takes around 3-4 minutes. During that period the user presses the cancel button and the connection is lost. How would you roll back the entire transaction?
My answer was: since the servlet throws an IOException, we can handle the exception and roll back the transaction.
But then he asked: what about the DB commits that are already done, how would you roll those back?
I was blank and replied that I had never come across that situation. But I would really like to know what could be done in such a situation.
Thanks.

But then he asked: what about the DB commits that are already done, how would you roll those back?

I think this was not a servlet-related question. If the transaction was already committed in the database, you cannot roll it back. A database transaction has several properties known as ACID (Atomicity, Consistency, Isolation, Durability). The one that applies in this case is Durability:
"Durability is the ACID property which guarantees that transactions
that have committed will survive permanently"
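To make the distinction concrete, here is a minimal sketch of the servlet scenario using plain JDBC, assuming an injected javax.sql.DataSource (the class name and helper methods are illustrative, not from the original question). Only the work since the last commit can be rolled back when the IOException surfaces; the earlier commit is already durable:

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class LongRunningServlet extends HttpServlet {

    private DataSource dataSource; // assumed to be injected or looked up via JNDI

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Connection conn = null;
        try {
            conn = dataSource.getConnection();
            conn.setAutoCommit(false);

            // ... first update ...
            conn.commit();                   // durable once it returns; cannot be undone here

            // ... read and second update ...
            resp.getWriter().write("done");  // may throw IOException if the client cancelled
            conn.commit();
        } catch (IOException | SQLException e) {
            rollbackQuietly(conn);           // undoes only the work since the last commit
            throw new ServletException(e);
        } finally {
            closeQuietly(conn);
        }
    }

    private static void rollbackQuietly(Connection conn) {
        if (conn != null) {
            try { conn.rollback(); } catch (SQLException ignored) { }
        }
    }

    private static void closeQuietly(Connection conn) {
        if (conn != null) {
            try { conn.close(); } catch (SQLException ignored) { }
        }
    }
}

Anything committed before the failure would have to be undone by the application itself (for example with a compensating update), since the database will not reverse a durable commit.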

Related

Understanding what operation types will cause a failed transaction in GAE datastore

After reading the documentation, there is one thing that is not completely clear to me; I am hoping someone can clarify.
Obviously, if you have two read-then-write transactions that occur concurrently, one of them will fail. However, if one read-then-write transaction is in progress and another read occurs that is not part of a transaction, will that non-transactional read cancel the transaction? In other words, a transaction that reads then writes a payment/transaction record should not be cancelled by the non-transactional "Get all payment data" report. Is that correct?
Yes, that's correct. Non-transactional reads and read-only transactions do not contend with read-then-write transactions.
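As an illustration, here is a rough sketch using the Cloud Datastore Java client (the entity kind, key name, and property names are made up for the example); the plain datastore.get() stands in for the non-transactional report and is not enlisted in, and cannot abort, the read-then-write transaction:

import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;
import com.google.cloud.datastore.Transaction;

public class PaymentContentionExample {
    public static void main(String[] args) {
        Datastore datastore = DatastoreOptions.getDefaultInstance().getService();
        Key paymentKey = datastore.newKeyFactory().setKind("Payment").newKey("payment-1");

        // Read-then-write inside a transaction.
        Transaction txn = datastore.newTransaction();
        try {
            Entity payment = txn.get(paymentKey);              // transactional read
            txn.put(Entity.newBuilder(payment)
                    .set("status", "PAID")
                    .build());

            // A non-transactional read (e.g. the "Get all payment data" report,
            // normally running in another request) does not contend with the
            // transaction above and will not cause it to fail.
            Entity reportSnapshot = datastore.get(paymentKey);
            System.out.println("report sees: " + reportSnapshot);

            txn.commit();                                      // only a conflicting read-then-write
                                                               // transaction can make this fail
        } finally {
            if (txn.isActive()) {
                txn.rollback();
            }
        }
    }
}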

Cloud Datastore transaction terminated without explicit rollback defined

From the following document: https://cloud.google.com/datastore/docs/concepts/transactions
What would happen if a transaction fails with no explicit rollback defined? For example, if we're performing a put() operation on value arguments.
The document states that transactions should be idempotent; what does this mean with respect to the put() operation? It is not clear how idempotency applies in this context.
How do we detect failure if, according to the documentation, errors from commit are not reliable?
We are seeing symptoms where put() against a value argument sometimes partially saves the data. Note that we do not have an explicit rollback defined.
As you may already know, Datastore transactions are guaranteed to be atomic, which means that it applies the all-or-nothing principle; either all operations succeed or they all fail. This ensures that the data in your database remains consistent over time.
Now, regardless of whether you execute put() or any other operation in your transaction, your code should always ensure that the transaction has either successfully committed or been rolled back. This means that if you aren't fully sure whether the commit succeeded, you should explicitly issue a rollback.
However, there may be some exceptions where a commit might fail, and this doesn't necessarily mean that no data was written to your database. The documentation even points out that "you can receive errors in cases where transactions have been committed."
The simple way to detect transaction failures is to add a try/catch block in your code for when an Exception (failed transactional operation) or DatastoreException (Datastore-related error, such as a failed commit) is thrown. I believe you may already have an answer in this Stack Overflow post about this particular question.
A good practice is to make your transactions idempotent whenever possible. In other words, if you're executing a transaction that includes a write operation put() to your database, if this operation were to fail and needed to be retried, the end result should ideally remain the same.
A real-world example: you're trying to transfer money to a friend; the transaction consists of withdrawing 20 USD from your bank account and depositing the same amount into your friend's bank account. If the transaction were to fail and had to be retried, the end result should still be a single 20 USD transfer, not a second one.
Keep in mind that the Datastore API doesn't retry transactions by default, but you can add your own retry logic to your code, as per the documentation.
In summary, if a transaction is interrupted and your logic doesn't handle the failure accordingly, you may eventually see inconsistencies in the data of your database.
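Putting these pieces together, here is a rough sketch of the pattern with the Cloud Datastore Java client; the Transfer kind, property names, and method signature are assumptions made for the example, not part of the documented API:

import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;
import com.google.cloud.datastore.Transaction;

public class TransferExample {

    // Debits an account and records the transfer in one atomic transaction.
    // The Transfer entity keyed by transferId is what makes a retry idempotent.
    // Assumes the account entity already exists.
    static void applyTransfer(Datastore datastore, String transferId, Key accountKey, long amount) {
        Key transferKey = datastore.newKeyFactory().setKind("Transfer").newKey(transferId);
        Transaction txn = datastore.newTransaction();
        try {
            // Idempotency guard: if this transfer was already committed, a retry is a no-op.
            if (txn.get(transferKey) != null) {
                return;
            }
            Entity account = txn.get(accountKey);
            long newBalance = account.getLong("balance") - amount;
            txn.put(Entity.newBuilder(account).set("balance", newBalance).build());
            txn.put(Entity.newBuilder(transferKey).set("amount", amount).build());
            // All-or-nothing: both puts are applied, or neither is. The commit can still
            // throw DatastoreException, or even succeed while the success response is lost.
            txn.commit();
        } finally {
            // Explicit rollback whenever we cannot be sure the commit went through.
            if (txn.isActive()) {
                txn.rollback();
            }
        }
    }
}

A DatastoreException from commit() then propagates to the caller, which can safely retry applyTransfer() because the guard at the top keeps the write idempotent; remember that the Datastore API does not retry for you.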

Doctrine: How to prevent transaction from becoming 'rollback only' through caught exception?

Deleting an entity fails because of an exception within a postRemove event handler. Even if the exception is caught, the deletion fails because the transaction can no longer be committed. How can this be solved?
The complete story:
I need to keep track of some deleted entities in a Symfony 3.4 based web service using Doctrine.
To do this I have created an EventSubscriber which handles the postRemove event to check whether the deleted entity needs to be logged. If so, the entity's UUID is stored in a DeleteLog table in the DB.
This works fine, but in rare cases persisting the DeleteLogEntry fails because a log entry already exists for the given UUID, which needs to be unique.
The source of this problem is some 3rd party code I cannot change myself. As a temporary solution I tried to catch the UniqueConstraintViolationException. This does not solve the problem, since now I get a ConnectionException:
Transaction commit failed because the transaction has been marked for
rollback only.
Is it possible to solve this dilemma?
Of course I could check whether a DeleteLogEntry with the given UUID exists before creating a new one. But since this problem occurs only in rare cases, the check would be negative most of the time. Running the check anyway would not have a catastrophic performance impact, but it simply does not seem to be the best solution.
Is there any way to catch the exception and keep the transaction from being marked as rollback only?
Nope, it's not possible to keep a transaction from being marked.
Doctrine starts a nested transaction for postRemove, and if it fails no other transactions should be committed. Marking the transaction for rollback only (and even closing the entity manager) is expected behavior in such a scenario, because there is no other way for Doctrine to ensure consistency, as there is no support for real nested transactions.
If performance is not an issue, then checking for DeleteLogEntry is a good option.
Other possible workarounds:
store the ID somewhere (Redis, Memcache, a file, etc.) temporarily and update the DeleteLogEntry later, after the initial delete is committed
use a separate entity manager/connection to update the DeleteLogEntry (see the sketch after this list)
remove the unique constraint and use a background task to watch for duplicates
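Doctrine itself is PHP, but the second workaround translates directly into JPA terms. The following is a hypothetical Java/JPA sketch (the DeleteLogEntry mapping and method names are assumptions) of writing the log through a dedicated EntityManager, so a unique-constraint violation there can never mark the main transaction rollback-only:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.PersistenceException;

@Entity
class DeleteLogEntry {                    // hypothetical mapped entity with a unique UUID key
    @Id
    private String uuid;

    protected DeleteLogEntry() { }        // required by JPA

    DeleteLogEntry(String uuid) {
        this.uuid = uuid;
    }
}

public class DeleteLogWriter {

    private final EntityManagerFactory emf;

    public DeleteLogWriter(EntityManagerFactory emf) {
        this.emf = emf;
    }

    // Called from the postRemove-style listener. The dedicated EntityManager owns its own
    // transaction, so a duplicate-key failure here cannot poison the transaction that
    // performed the delete.
    public void logDeletion(String uuid) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            if (em.find(DeleteLogEntry.class, uuid) == null) {   // optional existence check
                em.persist(new DeleteLogEntry(uuid));
            }
            em.getTransaction().commit();
        } catch (PersistenceException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            // A duplicate log entry is tolerable; log and move on instead of failing the delete.
        } finally {
            em.close();
        }
    }
}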

Oracle DB - Lock on cancellation of update query

If I have a very long-running UPDATE query that takes hours and I happen to cancel it in the middle of its run, I get the message below:
"User requested cancel of current operation"
Will Oracle automatically roll back the transaction?
Will the DB lock still be held if I cancel the query? If so, how do I release it?
How can I check which UPDATE query is locking the database?
Thanks.
It depends.
Assuming that whatever client application you're using properly implemented a query timeout and that the error indicates that the timeout was exceeded, then Oracle will begin rolling back the transaction when the error is thrown. Once the transaction finishes rolling back, the locks will be released. Be aware, though, that it can easily take at least as long to roll back the query as it took to run. So it will likely be a while before the locks are released.
If, on the other hand, the client application hasn't implemented the cancellation properly, the client may not have notified Oracle to cancel the transaction so it will continue. Depending on the Oracle configuration and exactly what the client does, the database may detect some time later that the application was not responding and terminate the connection (going through the same rollback process discussed above). Or Oracle may end up continuing to process the query.
You can see what sessions are holding locks and which are waiting on locks by querying dba_waiters and dba_blockers.
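For instance, a quick JDBC sketch of that check (connection details are placeholders; DBA_WAITERS and DBA_BLOCKERS are created by Oracle's catblock.sql script and require appropriate privileges, so they may not be available on every database):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LockMonitor {
    public static void main(String[] args) throws SQLException {
        // Connection URL and credentials are placeholders.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "admin_user", "password");
             Statement stmt = conn.createStatement();
             // Which session is blocked by which, and the lock modes involved.
             ResultSet rs = stmt.executeQuery(
                     "SELECT waiting_session, holding_session, lock_type, mode_held, mode_requested "
                     + "FROM dba_waiters")) {
            while (rs.next()) {
                System.out.printf("session %d is blocked by session %d (%s, held=%s, wanted=%s)%n",
                        rs.getLong("waiting_session"),
                        rs.getLong("holding_session"),
                        rs.getString("lock_type"),
                        rs.getString("mode_held"),
                        rs.getString("mode_requested"));
            }
        }
    }
}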

Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction

Just curious if anyone else has got this particular error and know how to solve it?
The scenario is as follow...
We have an ASP.NET web application using Enterprise Library running on Windows Server 2008 IIS farm connecting to a SQL Server 2008 cluster back end.
MSDTC is turned on. DB connections are pooled.
My suspicion is that somewhere along the line there was a failed MSDTC transaction, the connection got returned to the pool, and the next query on a different page picked up the misbehaving connection and got this particular error. The funny thing is that we got this error on a query that has no need whatsoever for a distributed transaction (committing to two databases, etc.). We were only doing a SELECT query (no transaction) when we got the error.
We did SQL profiling and the query got run on the SQL Server, but it never came back (since the MSDTC transaction had already been aborted on the connection).
Some other related errors to accompany this are:
New request is not allowed to start
because it should come with valid
transaction descriptor.
Internal .Net Framework Data Provider error 60.
MSDTC has a default 90-second timeout; if one query's execution exceeds this time limit, you will encounter this error when the transaction tries to commit.
A bounty may help get the answer you seek, but you're probably going to get better answers if you give some code samples and give a better description of when the error occurs.
Does the error only intermittently occur? It sounds like it from your description.
Are you enclosing the code that you want to be executed as a transaction in a using TransactionScope block, as Microsoft recommends? This should help avoid weird transaction behavior. Recall that a using block makes sure that the object is always disposed regardless of exceptions thrown. See here: http://msdn.microsoft.com/en-us/library/ms172152.aspx
If you're using TransactionScope, there is an option System.Transactions.TransactionScopeOption.RequiresNew that tells the framework to always create a new transaction for this block of code:
Using ts As New Transactions.TransactionScope(Transactions.TransactionScopeOption.RequiresNew)
' Do Stuff
End Using
Also, if you're suspicious that a connection is getting faulted and then put back into the connection pool, the likely solution is to enclose the code that may fault the connection in a Try-Catch block and Dispose the connection in the catch block.
Old question ... but I ran into this issue over the past few days.
I could not find a good answer until now, so I just wanted to share what I found out.
My scenario involves multiple sessions being opened by multiple session factories. I had to roll back correctly, wait, and make sure the other transactions were no longer active; it seems that just rolling back one of them rolls back everything.
After adding a Thread.Sleep() between the rollbacks, it no longer attempts the other rollback and continues fine. Subsequent hits that trigger the method no longer result in the "New request is not allowed to start because it should come with valid transaction descriptor." error.
https://gist.github.com/josephvano/5766488
I have seen this before and the cause was exactly what you thought. As Rice suggested, make sure that you are correctly disposing of the db related objects to avoid this problem.
