Determining when a new transaction has started in SQLite - sqlite

Is there any way in SQLite to determine when a new transaction has started, or which transaction is currently in progress?
The purpose for this is in a trigger which is logging certain changes to a database. As far as I can tell, a trigger has no indication as to whether the given operation is by itself or part of a set of other operations. Something like a transaction count would allow for a clear delineation of which changes occurred atomically at the same time (e.g. for the purpose of playback).

SQLite has commit and rollback notification callbacks (the sqlite3_commit_hook() and sqlite3_rollback_hook() C APIs), which are invoked at the end of any transaction.
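For the trigger use case in the question, another option is to maintain the transaction count yourself and let the trigger copy it into the log, so rows written in the same transaction share an id. The sketch below uses Python's built-in sqlite3 module (which does not expose the C-level hooks directly); the counter table, log table, and column names are made up for illustration.

import sqlite3

# Hypothetical schema: a one-row counter table plus a change log. The trigger
# copies the current counter value, so rows written in the same transaction
# share a transaction id.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.executescript("""
    CREATE TABLE txn_counter (id INTEGER PRIMARY KEY CHECK (id = 1), txn_id INTEGER);
    INSERT INTO txn_counter VALUES (1, 0);
    CREATE TABLE items (name TEXT, qty INTEGER);
    CREATE TABLE item_log (txn_id INTEGER, name TEXT, qty INTEGER);
    CREATE TRIGGER items_audit AFTER INSERT ON items BEGIN
        INSERT INTO item_log
        SELECT txn_id, NEW.name, NEW.qty FROM txn_counter;
    END;
""")

def run_transaction(statements):
    """Bump the counter, then run all statements atomically."""
    conn.execute("BEGIN")
    try:
        conn.execute("UPDATE txn_counter SET txn_id = txn_id + 1")
        for sql, params in statements:
            conn.execute(sql, params)
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise

run_transaction([("INSERT INTO items VALUES (?, ?)", ("apple", 3)),
                 ("INSERT INTO items VALUES (?, ?)", ("pear", 5))])
print(conn.execute("SELECT * FROM item_log").fetchall())
# -> [(1, 'apple', 3), (1, 'pear', 5)]  both rows share transaction id 1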

Related

Cloud Datastore transaction terminated without explicit rollback defined

From following document: https://cloud.google.com/datastore/docs/concepts/transactions
What happens if a transaction fails and no explicit rollback is defined? For example, suppose we're performing a put() operation on entity values inside the transaction.
The document states that transactions should be idempotent. What does this mean with respect to a put() operation? It is not clear how idempotency applies in this context.
How do we detect failure if, according to the documentation, an error from a commit does not reliably mean the commit failed?
We are seeing symptoms where put() sometimes partially saves the data. Note that we do not have an explicit rollback defined.
As you may already know, Datastore transactions are guaranteed to be atomic, which means that it applies the all-or-nothing principle; either all operations succeed or they all fail. This ensures that the data in your database remains consistent over time.
Now, regardless of whether you execute put() or any other operation in your transaction, your code should always ensure that the transaction has either successfully committed or been rolled back. This means that if you aren't fully sure whether the commit succeeded, you should explicitly issue a rollback.
However, there may be some exceptions where a commit might fail, and this doesn't necessarily mean that no data was written to your database. The documentation even points out that "you can receive errors in cases where transactions have been committed."
A simple way to detect transaction failures is to add a try/catch block in your code for when an Exception (a failed transactional operation) or a DatastoreException (a Datastore error, such as a failed commit) is thrown. I believe you may already have an answer in this Stack Overflow post about this particular question.
A good practice is to make your transactions idempotent whenever possible. In other words, if your transaction includes a write operation such as put() and that transaction has to be retried, the end result should remain the same.
A real-world example: you're transferring money to a friend; the transaction consists of withdrawing 20 USD from your bank account and depositing the same amount into your friend's account. If the transaction fails and has to be retried, the end result should still be a single 20 USD transfer, not a second one.
Keep in mind that the Datastore API doesn't retry transactions by default, but you can add your own retry logic to your code, as per the documentation.
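As a rough sketch of such retry logic with the Python google-cloud-datastore client: the Account kind, points property, and retry policy below are hypothetical, and the example assumes both entities already exist. Because the new values are derived from reads made inside the transaction, retrying the whole block is idempotent in the sense described above.

from google.cloud import datastore
from google.api_core import exceptions as gexc

client = datastore.Client()

def transfer_points(from_id, to_id, amount, retries=3):
    """Idempotent read-modify-write: balances are derived from values read
    inside the transaction, so retrying yields the same end state."""
    for attempt in range(retries):
        try:
            with client.transaction():  # commits on exit, rolls back on exception
                src = client.get(client.key("Account", from_id))
                dst = client.get(client.key("Account", to_id))
                src["points"] -= amount
                dst["points"] += amount
                client.put_multi([src, dst])
            return  # commit succeeded
        except (gexc.Aborted, gexc.Conflict):
            # Contention: another transaction touched the same entities.
            # Safe to retry because the whole read-modify-write is re-run.
            continue
        except gexc.GoogleAPICallError:
            # Commit outcome is unknown here; surface it to the caller, who may
            # need to inspect the data before retrying.
            raise
    raise RuntimeError("transaction did not commit after %d attempts" % retries)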
In summary, if a transaction is interrupted and your logic doesn't handle the failure accordingly, you may eventually see inconsistencies in the data of your database.

Server Side Locks in Cloud Firestore

I'm curious about the behavior of the locks that are performed when doing server side transactions on Cloud Firestore as mentioned in this video: https://www.youtube.com/watch?time_continue=750&v=dOVSr0OsAoU
My transaction will be reading multiple documents and placing locks on them. My question is do these locks restrict all access to the documents - including concurrent reads from client code that isn't part of a transaction? Or do they only restrict writes?
If they do restrict reads is there any way around this - it could lead to severe slowdown in the app I'm working on.
Also in the case that a transaction tries to lock documents that are already locked - what is the retry pattern - how often does it retry, and is there an exponential backoff?
Thanks!
My transaction will be reading multiple documents and placing locks on them.
A transaction first reads the value of a property within a document in order to perform the write operation, so it requires round-trip communication with the server to ensure that the code inside the transaction completes successfully.
My question is do these locks restrict all access to the documents - including concurrent reads from client code that isn't part of a transaction?
The answer is no, concurrent users can read the content of the document even if you perform a write operation using a transaction.
Also in the case that a transaction tries to lock documents that are already locked - what is the retry pattern - how often does it retry, and is there an exponential backoff?
According to the official documentation regarding Firestore transactions, a transaction can fail only in the following cases:
The transaction contains read operations after write operations. Read operations must always come before any write operations.
The transaction read a document that was modified outside of the transaction. In this case, the transaction automatically runs again. The transaction is retried a finite number of times.
The transaction exceeded the maximum request size of 10 MiB.
Transaction size depends on the sizes of documents and index entries modified by the transaction. For a delete operation, this includes the size of the target document and the sizes of the index entries deleted in response to the operation.
A failed transaction returns an error and does not write anything to the database. You do not need to roll back the transaction; Cloud Firestore does this automatically.
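As a small illustration of the read-before-write rule and the automatic retry behaviour, here is a sketch using the Python google-cloud-firestore client; the counters collection and count field are hypothetical.

from google.cloud import firestore

db = firestore.Client()

@firestore.transactional
def increment_counter(transaction, doc_ref):
    # Reads must come before any writes inside the transaction.
    snapshot = doc_ref.get(transaction=transaction)
    data = snapshot.to_dict() if snapshot.exists else {}
    transaction.set(doc_ref, {"count": data.get("count", 0) + 1})

doc_ref = db.collection("counters").document("page_views")
transaction = db.transaction()  # re-run automatically if the read document changes
increment_counter(transaction, doc_ref)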

How does SQLite prevent deadlocks with deferred transactions?

According to the documentation on deferred transactions:
The default transaction behavior is deferred. (...) The first read operation against a database creates a SHARED lock and the first write operation creates a RESERVED lock.
Also according to the documentation on locks:
Any number of processes can hold SHARED locks at the same time (...)
Only a single RESERVED lock may be active at one time, though multiple SHARED locks can coexist with a single RESERVED lock
This sounds like a multiple-readers/single-writer lock with an arbitrary reader-to-writer promotion mechanism, which is a known deadlock hazard:
A starts transaction
B starts transaction
A acquires SHARED lock and reads something
B acquires SHARED lock and reads something
A acquires RESERVED lock and prepares to write something. It can't write as long as there are other SHARED locks so it blocks.
B wishes to write so tries to take RESERVED lock. There is already another RESERVED lock so it blocks until it is released, still holding the SHARED lock.
Deadlock.
So how does SQLite get around this? Two possible solutions come to my mind, but both of them seem to break the whole idea of a transaction:
Would-be writers release the SHARED locks before acquiring RESERVED. This would break atomicity between reads and writes.
B doesn't block when trying to take a RESERVED lock, but errors-out. This would mean all the reads would need to be repeated and significantly complicates API usage.
Am I missing something? How does SQLite deal with this? Why would this seemingly dangerous type of transaction be the default?
By simple trial and error, I discovered that they took the error-out route.
In the given scenario, when B tries to take RESERVED, it will first wait for PRAGMA busy_timeout milliseconds. Then it will report Error: database is locked. The transaction will still be active, so an immediate retry is possible.
If A afterwards tries to COMMIT (or if it runs out of in-memory cache), it will take the PENDING lock (preventing additional SHARED locks) and then wait for EXCLUSIVE. If some SHARED locks remain after PRAGMA busy_timeout milliseconds, it will report Error: database is locked. The transaction will still be active, so an immediate retry is possible.
In other words, the deadlock prevention mechanism in use is timeout. However, it does require the API users to cooperate by rolling back and trying again.
As a guideline:
Use just BEGIN TRANSACTION (or explicitly BEGIN DEFERRED TRANSACTION) when you only expect to read. Writes could still fail, forcing you to roll back and retry the entire transaction.
Use BEGIN IMMEDIATE TRANSACTION when you expect to maybe write at some point. This will block all other writers and all other immediate maybe-writers.
BEGIN EXCLUSIVE TRANSACTION will immediately block until all other locks are released. I have no idea why anyone would want this. Possibly to prepare for some data which needs to be written to disk as quickly as possible once it arrives? EDIT: It seems to be the only way to prevent timeouts at arbitrary points after beginning a transaction.
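For completeness, here is a minimal Python sqlite3 sketch of the rollback-and-retry pattern described above, using BEGIN IMMEDIATE together with busy_timeout; the database path, counters table, and retry policy are arbitrary placeholders.

import sqlite3
import time

def write_with_retry(path, work, retries=5):
    """Run work(conn) inside BEGIN IMMEDIATE, retrying when SQLite reports
    'database is locked' after busy_timeout expires."""
    conn = sqlite3.connect(path, isolation_level=None)  # manual BEGIN/COMMIT
    conn.execute("PRAGMA busy_timeout = 2000")          # wait up to 2 s for locks
    try:
        for attempt in range(retries):
            try:
                # IMMEDIATE takes the RESERVED lock up front, so the transaction
                # cannot later fail mid-way when upgrading from SHARED.
                conn.execute("BEGIN IMMEDIATE")
                work(conn)
                conn.execute("COMMIT")
                return
            except sqlite3.OperationalError as exc:
                if conn.in_transaction:
                    conn.execute("ROLLBACK")
                if "locked" not in str(exc) or attempt == retries - 1:
                    raise
                time.sleep(0.1 * (attempt + 1))  # simple backoff before retrying
    finally:
        conn.close()

# Example usage: bump a counter row in a hypothetical counters table.
write_with_retry("app.db", lambda c: c.execute(
    "UPDATE counters SET value = value + 1 WHERE name = 'hits'"))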

Is OpenEdge Auditing safe from Dirty Reads

With a Read Committed ODBC connection I'm getting the odd record lock when reading from OpenEdge's Auditing tables (Database user table CRUD operations)...
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Failure getting record lock on a record from table PUB._aud-audit-data.
I understand isolation levels and know that this can be resolved by changing to read uncommitted.
Obviously the main danger in doing this is that dirty reads become a possibility. Dirty reads would definitely cause my solution issues so would be a no-go. But...
Does the very nature of OpenEdge Auditing prevent possible dirty reads of the source records being CRUDed?
here's my thinking behind the question...
With my configuration, OpenEdge audit records are created upon completion of CRUD operations against the user tables. I want to make sure that I only read audit records where the user table CRUD operation has committed to the database. I suspect that this is always the case and that the audit records are only created after the user table CRUD transaction has committed. If so, it would only be under exceptional circumstances that the audit record transactions would be rolled back, and therefore dirty reads of the audit records are not really a possibility...
Is that feasible?
Can anyone clarify the life-cycle of a user transaction followed by an audit transaction in OpenEdge?
I assume it is unlikely but possible for the audit transaction to fail, in this case what happens to the original, audited, CRUD operation?

Oracle DB - Lock on cancellation of update query

If I have a very long-running UPDATE query that takes hours and I happen to cancel it in the middle of the run, I get the message below:
"User requested cancel of current operation"
Will Oracle automatically roll back the transactions?
Will DB lock be still acquired if I cancel the query? If so, how to unlock?
How to check which Update query is locking the database?
Thanks.
It depends.
Assuming that whatever client application you're using properly implemented a query timeout and that the error indicates that the timeout was exceeded, then Oracle will begin rolling back the transaction when the error is thrown. Once the transaction finishes rolling back, the locks will be released. Be aware, though, that it can easily take at least as long to roll back the query as it took to run. So it will likely be a while before the locks are released.
If, on the other hand, the client application hasn't implemented the cancellation properly, the client may not have notified Oracle to cancel the transaction so it will continue. Depending on the Oracle configuration and exactly what the client does, the database may detect some time later that the application was not responding and terminate the connection (going through the same rollback process discussed above). Or Oracle may end up continuing to process the query.
You can see what sessions are holding locks and which are waiting on locks by querying dba_waiters and dba_blockers.
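As a rough sketch, a query like the following (issued here through the python-oracledb driver, assuming an account privileged to read the DBA views) joins dba_waiters with v$session to show which session is blocking which and what SQL the blocker is currently running; the connection details are placeholders.

import oracledb

# Placeholder credentials; a DBA-level account is needed to read these views.
conn = oracledb.connect(user="admin", password="secret", dsn="dbhost/ORCLPDB1")

sql = """
    SELECT w.waiting_session,
           w.holding_session,
           w.lock_type,
           w.mode_held,
           w.mode_requested,
           s.sql_id                -- SQL currently run by the blocking session
    FROM   dba_waiters w
    JOIN   v$session   s ON s.sid = w.holding_session
"""

with conn.cursor() as cur:
    for row in cur.execute(sql):
        print(row)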
