I am running an END TRANSACTION on my database and occasionally I get error #1, "cannot commit - no transaction is active".
Is there a way to determine if a transaction is active before trying a commit? I have been tracking my "BEGIN TRANSACTIONS" by hand but I feel there is a better way.
I am using the C API
You might want to check this:
http://www.sqlite.org/c3ref/get_autocommit.html
According to the page, if you are in a transaction, sqlite3_get_autocommit() will return 0.
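For example, a guard along these lines in the C API should avoid the error. This is only a minimal sketch; the helper name and the lack of real error handling are for illustration:

#include <sqlite3.h>

/* Commit only if an explicit transaction is actually open.
 * sqlite3_get_autocommit() returns 0 while a BEGIN ... COMMIT
 * transaction is active and non-zero in autocommit mode. */
static int commit_if_active(sqlite3 *db)
{
    if (sqlite3_get_autocommit(db) == 0) {
        return sqlite3_exec(db, "END TRANSACTION;", NULL, NULL, NULL);
    }
    return SQLITE_OK;  /* nothing to commit */
}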
The interface you're looking for is actually implemented in the next version of SQLite, as you can see in the notes: https://sqlite.org/draft/c3ref/txn_state.html
Determine the transaction state of a database
int sqlite3_txn_state(sqlite3*,const char *zSchema);
The sqlite3_txn_state(D,S) interface returns the current transaction state of schema S in database connection D. If S is NULL, then the highest transaction state of any schema on database connection D is returned. Transaction states are (in order of lowest to highest):
SQLITE_TXN_NONE
SQLITE_TXN_READ
SQLITE_TXN_WRITE
If the S argument to sqlite3_txn_state(D,S) is not the name of a valid schema, then -1 is returned.
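On a build new enough to provide it (the function was added around SQLite 3.34), a rough sketch of how it could be used from C:

#include <sqlite3.h>
#include <stdio.h>

/* Report the transaction state of the "main" schema.
 * Requires a SQLite version that provides sqlite3_txn_state(). */
static void report_txn_state(sqlite3 *db)
{
    switch (sqlite3_txn_state(db, "main")) {
        case SQLITE_TXN_NONE:  printf("no transaction active\n");    break;
        case SQLITE_TXN_READ:  printf("read transaction active\n");  break;
        case SQLITE_TXN_WRITE: printf("write transaction active\n"); break;
        default:               printf("unknown schema name\n");      break;
    }
}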
That's weird. I thought sqlite was always in a transaction, either explicitly created by you or implicitly created by sqlite:
http://www.sqlite.org/lang_transaction.html
So I suppose the error means that it's not in a transaction that you initiated ... and if that's what you need to know, it seems OK for sqlite to expect you to keep up with it. Not terribly convenient of course, but I guess that's the cost of a simple API. =/
In SQLite, transactions created using BEGIN TRANSACTION ... END TRANSACTION do not nest.
For nested transactions you need to use the SAVEPOINT and RELEASE commands.
See http://www.sqlite.org/lang_transaction.html
and http://www.sqlite.org/lang_savepoint.html
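As a hedged sketch of the savepoint pattern through the C API (the accounts table and the savepoint name are invented for the example):

#include <sqlite3.h>

/* Nested unit of work using a savepoint. "sp_inner" and the
 * accounts table are placeholders chosen for illustration. */
static int do_nested_work(sqlite3 *db)
{
    int rc = sqlite3_exec(db, "SAVEPOINT sp_inner;", NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_exec(db, "UPDATE accounts SET balance = balance - 10;",
                      NULL, NULL, NULL);
    if (rc != SQLITE_OK) {
        /* Undo only the inner work; any enclosing transaction survives. */
        sqlite3_exec(db, "ROLLBACK TO sp_inner;", NULL, NULL, NULL);
        sqlite3_exec(db, "RELEASE sp_inner;", NULL, NULL, NULL);
        return rc;
    }
    /* Fold the inner work into the enclosing transaction (or autocommit). */
    return sqlite3_exec(db, "RELEASE sp_inner;", NULL, NULL, NULL);
}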
I'm building a DB about the University for one of my courses, and I'm trying to create a trigger that doesn't allow a professor to be under 21 years old.
I have a Person class and then a Professor subclass.
What I want to happen is: you create a Person object, then a Professor object using that Person object's id, but if the Person is under 21, delete this Professor object and then delete the Person object.
Everything works fine up until the "delete the Person object" part where this doesn't happen and I'm not sure why. Any help?
This is the sqlite code I have:
AFTER INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth from Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
SELECT RAISE(ROLLBACK, 'Professor cant be under 21');
DELETE FROM Person WHERE (personId= new.personId);
END;
One common issue is that there may not be a current transaction scope to roll back to, which would result in this error:
Error: cannot rollback - no transaction is active
If that occurs, then the trigger execution will be aborted and the delete never executed.
If the ROLLBACK does succeed, it creates a paradox: having rolled back to a point before the trigger was executed, in a strictly ACID environment it would not be valid to continue executing the rest of the trigger, because the INSERT never actually occurred. To avoid this ambiguity, any call to RAISE() other than RAISE(IGNORE) will abort processing of the trigger.
CREATE TRIGGER - The RAISE()
When one of RAISE(ROLLBACK,...), RAISE(ABORT,...) or RAISE(FAIL,...) is called during trigger-program execution, the specified ON CONFLICT processing is performed and the current query terminates. An error code of SQLITE_CONSTRAINT is returned to the application, along with the specified error message.
NOTE: This behaviour differs from some other RDBMSs; for instance, see this explanation for MS SQL Server, where execution specifically continues inside the trigger.
As the OP does not provide the calling code that demonstrates the scenario, it is worth mentioning how SQLite resolves RAISE(ROLLBACK, ...) when there is no enclosing transaction:
If no transaction is active (other than the implied transaction that is created on every command) then the ROLLBACK resolution algorithm works the same as the ABORT algorithm.
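Since no calling code was posted, here is a hypothetical C sketch of what the caller would see when the INSERT runs outside an explicit transaction; the Professor column list is guessed from the trigger:

#include <sqlite3.h>
#include <stdio.h>

/* Insert a Professor row with no explicit BEGIN around it. If the
 * trigger fires RAISE(ROLLBACK, ...), SQLite falls back to ABORT
 * semantics: the statement fails with SQLITE_CONSTRAINT and the
 * implied transaction is rolled back. */
static void insert_professor(sqlite3 *db, int person_id)
{
    char sql[128];
    snprintf(sql, sizeof sql,
             "INSERT INTO Professor (personId) VALUES (%d);", person_id);

    char *errmsg = NULL;
    int rc = sqlite3_exec(db, sql, NULL, NULL, &errmsg);
    if (rc == SQLITE_CONSTRAINT) {
        fprintf(stderr, "insert rejected by trigger: %s\n", errmsg);
    }
    sqlite3_free(errmsg);
}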
Generally, if you wanted to create a Person and then a Professor as a single operation, you would create a stored procedure that validates the inputs first, preventing the original insert at the start.
To maintain referential integrity, even if an SP is used, you could still add a check constraint on the Professor record or raise an ABORT from a BEFORE trigger to prevent the INSERT from occurring in the first place:
CREATE TRIGGER ProfessorMinimumAge -- trigger name chosen for illustration
BEFORE INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth FROM Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
SELECT RAISE(ABORT, 'Professor can''t be under 21');
END;
This way it is up to the calling process to decide how to handle the error. The ABORT can be caught in the calling logic and would effectively result in rolling back the outer transaction, but the point is that the calling logic should handle negative side effects. As a general rule, triggers that cascade logic should only perform positive side effects; that is, they should only affect data if the inserted row succeeds. In this case we are rolling back the insert, so it becomes hard to identify why the Person would be deleted.
I'm new to SQLite.
Say a row in your table has a column status that persists a state machine. A typical operation on such a row would be to read the current state and set the next state depending on it, with the code having some side effects along the way.
In a transaction:
val state = db.execute("SELECT state FROM ...")
if (state == "READY_TO_SEND") {
.... do some side-effecting stuff, like sending an email ...
db.execute("UPDATE ... SET state = "MAIL_SENT")
}
In Postgres I could pessimistically lock the row using "SELECT ... FOR UPDATE" to synchronize in this case.
In the case of SQLite with WAL mode enabled, if I understood correctly, the read of the current state could happen concurrently in two transactions. Both threads would perform the side effects. Then both transactions would want to upgrade to a write. One of them will succeed in upgrading, the other will not, but by then it is too late to synchronize the side effects.
Did I understand the SQLite isolation mechanism correctly?
How would you go about synchronization in this case?
Thanks!
I believe that in your example there would be no issue.
If both reads were concurrent and read the same row(s), both would want to update and set the state from "READY_TO_SEND" to "MAIL_SENT". Even if the updates were issued concurrently, only one writer can run at a time, so both updates would be applied, but sequentially (even though only one would be required), and the end result would be the same.
However, since there is only one WAL file, there can only be one writer at a time.
SQLite - Write-Ahead-Logging
Perhaps a bigger issue/side effect could arise if the update were UPDATE .... SET state = 'MAIL_SENT' WHERE state = 'READY_TO_SEND'; and the result of the update (the number of rows updated) were then used to indicate whether the email was sent. The second could be 0, as the WHERE clause would then suppress the update. Thus the result of the update might indicate that the email was not sent when it had in fact been sent.
However, this would be a design issue, as they are obviously separate emails.
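To make that concrete, here is a rough C sketch (table and column names are invented, since the question's pseudocode only hints at them) in which the side effect is performed only after the conditional update reports one changed row:

#include <sqlite3.h>
#include <stdio.h>

/* Attempt the state transition and let the row count decide which
 * connection "won". Returns 1 if this connection may send the email. */
static int claim_ready_row(sqlite3 *db, int job_id)
{
    char sql[160];
    snprintf(sql, sizeof sql,
             "UPDATE jobs SET state = 'MAIL_SENT' "
             "WHERE id = %d AND state = 'READY_TO_SEND';", job_id);

    if (sqlite3_exec(db, sql, NULL, NULL, NULL) != SQLITE_OK) return 0;

    /* sqlite3_changes() reports the rows modified by the last statement:
     * 1 means this connection performed the transition; 0 means another
     * writer got there first and the email should not be sent again. */
    return sqlite3_changes(db) == 1;
}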
I’m trying to cause a ‘SELECT’ query to fail if the record it is trying to read is locked.
To simulate it, I have added a trigger on UPDATE that sleeps for 20 seconds. Then in one thread (a Java application) I'm updating a record (oid=53), and in another thread I'm performing the following query:
“SET STATEMENT max_statement_time=1 FOR SELECT * FROM Jobs j WHERE j.oid =53”.
(Note: Since my mariadb server version is 10.2 I cannot use the “SELECT … NOWAIT” option and must use “SET STATEMENT max_statement_time=1 FOR ….” instead).
I would expect the SELECT to fail, since the record is in the middle of an UPDATE and should be read/write locked, but the SELECT succeeds.
Only if I add 'FOR UPDATE' to the SELECT query does the query fail. (But this is not a good option for me.)
I checked the INNODB_LOCKS table during this time and it was empty.
In the INNODB_TRX table I saw the transaction with isolation level – REPEATABLE READ, but I don’t know if it is relevant here.
Any thoughts, how can I make the SELECT fail without making it 'for update'?
Normally, consistent (and dirty) reads are non-locking; they just read some sort of snapshot, depending on what your transaction isolation level is. If you want to make the read wait for a concurrent transaction to finish, you need to set the isolation level to SERIALIZABLE and turn off autocommit in the connection that performs the read. Something like
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT max_statement_time=1 FOR ...
should do it.
Relevant page in MariaDB KB
Side note: my personal preference would be to use innodb_lock_wait_timeout=1 instead of max_statement_time=1. Both will make the statement fail, but innodb_lock_wait_timeout will cause an error code more suitable for the situation.
I have the DB stored procedure UPDATEClientID below, which takes a client ID as a parameter.
I'm calling the UPDATEClientID SP 50 times in one second from a WCF Custom Adapter, and then I'm seeing a SQL deadlock issue.
In my scenario, I have to call the UPDATEClientID SP 50 times in one second. How can I resolve the SQL deadlock issue?
CREATE PROCEDURE [dbo].[UPDATEClientID]
@ClientID VARCHAR(50) = NULL
AS
BEGIN
SET NOCOUNT ON;
UPDATE CleintDetails
SET STATUS = 'Y'
WHERE ClientID = @ClientID
END
Do you really have to call this stored procedure 50 times in one second, or is it the case that you just happen to call it 50 times per second?
Some options:
Set Ordered Delivery on the Send Port. This will serialize the requests. It will however be several orders of magnitude slower.
Optimize the statement with lock hints, ROWLOCK for example.
The stored procedure code is executing under the BizTalk Server default transaction isolation level, Serializable. Change it to Read Committed.
You can set the isolation level with the following statement in your stored procedure:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
I am trying to implement a simple counter using the SQLite provided with Python. I am using CGI to write simple dynamic web pages; it's the only simple way I can think of to implement a counter. The problem is that I need to first read the counter value and then update it. Ideally, every user should read a unique value, but it's possible for two users to read the same counter value if they read simultaneously. Is there a simple way to make the read/write operation atomic? I'm unfamiliar with SQL, so please give specific statements to do so. Thanks in advance.
I use a table with one column and one row to store the counter.
You may try this flow of SQL statements:
BEGIN EXCLUSIVE TRANSACTION;
UPDATE counter SET value = value + 1;  -- update the counter (one-row table and column names assumed)
SELECT value FROM counter;             -- retrieve the new value for the user
COMMIT TRANSACTION;
While you perform updates inside a transaction, the changes are visible only to the connection on which they were made. Here we used an EXCLUSIVE transaction, which locks the database against other clients until the transaction commits.
You are better off not using the EXCLUSIVE keyword in the transaction, to make it more efficient. The first SELECT automatically acquires a shared lock, and the UPDATE statement then upgrades it to an exclusive lock. It is only necessary that the SELECT and the UPDATE are both inside an explicitly started transaction.
BEGIN TRANSACTION;
SELECT value FROM counter;             -- read the current counter value (same assumed names as above)
UPDATE counter SET value = value + 1;  -- update the counter value
COMMIT TRANSACTION;