There is documentation for SQLite transactions at https://www.sqlite.org/lang_transaction.html, but it doesn't say anything about what happens in a case like this:
BEGIN;
INSERT INTO a (x, y) VALUES (0, 0);
INSERT INTO b (x, y) VALUES (1, 2); -- line 3 error here, b doesn't have column x
COMMIT;
What happens in this case? Does it commit or roll back if there is an error on line 3? I would expect an automatic rollback, but I would like to be sure about it.
SQL statements are executed individually.
When a statement fails, any effects of this single statement are rolled back, but the transaction stays open and active.
When the application receives the error code, it must decide whether it wants to roll back the transaction, or retry, or do something else.
If you are using a function that executes multiple SQL statements, nothing changes; the effect is the same as if you had executed all statements up to the failing one individually.
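For example, a minimal sketch of how that plays out with the transaction above (assuming tables a and b exist as described, with b lacking a column x):
BEGIN;
INSERT INTO a (x, y) VALUES (0, 0);  -- succeeds
INSERT INTO b (x, y) VALUES (1, 2);  -- fails; only this statement's effects are undone
-- The transaction is still open at this point; the application decides what happens next:
ROLLBACK;       -- undoes the first INSERT as well
-- or: COMMIT;  -- keeps the first INSERT despite the earlier error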
Related
When multiple statements are submitted together (separated by semicolons but in the same string) and are NOT wrapped in an explicit transaction, is only a single implicit transaction created, or is an implicit transaction created for each statement separately? Further, if one of the later statements fails and an automatic rollback is performed, are all of the statements rolled back?
This other answer almost satisfies my question, but the wording in the official documentation leaves me puzzled. In fact, this may seem like a duplicate, but I am specifically wondering about implicit transactions for multiple statements. The other answer does not explicitly address this particular case.
As an example (borrowing from the other question), the following are submitted as a single string:
INSERT INTO a (x, y) VALUES (0, 0);
INSERT INTO b (x, y) VALUES (1, 2); -- line 3 error here, b doesn't have column x
The documentation says
Automatically started transactions are committed when the last query finishes. (emphasis added)
and
An implicit transaction (a transaction that is started automatically, not a transaction started by BEGIN) is committed automatically when the last active statement finishes. A statement finishes when its prepared statement is reset or finalized. (emphasis added)
The keyword last implies to me the possibility of multiple statements. Of course, if an implicit transaction is started for each individual statement, then taken individually each statement will be the "last" statement to be executed; but if the context is one single statement at a time, the documentation should just say the statement to make that clear.
Or is there a difference between prepared statements and unprepared SQL strings? (But as I understand it, all statements are prepared even if the calling application doesn't preserve the prepared statement for reuse, so I'm not sure this even matters.)
If all statements succeed, the result of a single commit or of multiple commits is essentially the same. But the docs only mention that the single failing statement is automatically rolled back; they say nothing about the other statements submitted together.
The sqlite3_prepare interface compiles the first SQL statement in a query string. The pzTail parameter to these functions returns a pointer to the beginning of the unused portion of the query string.
For example, if you call sqlite3_prepare with the multi-statement SQL string in your example, the first statement is the only one that is active for the resulting prepared statement. The pzTail pointer, if provided, points to the beginning of the second statement. The second statement is not compiled as a prepared statement until you call sqlite3_prepare again with the pzTail pointer.
So, no, multiple statements are not rolled back. Each implicit transaction created by the SQLite engine encompasses a single prepared statement.
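To illustrate, here is a sketch (assuming the same tables a and b as above) of the two statements submitted as one string with no BEGIN/COMMIT:
INSERT INTO a (x, y) VALUES (0, 0);  -- prepared, run, and auto-committed in its own implicit transaction
INSERT INTO b (x, y) VALUES (1, 2);  -- fails; only this statement is rolled back
-- The row (0, 0) in table a was already committed and remains.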
I need to modify all the content in a table. So I wrap the modifications inside a transaction to ensure that either all the operations succeed or none do. I start the modifications with a DELETE statement, followed by INSERTs. What I've discovered is that even if an INSERT fails, the DELETE still takes place, and the database is not rolled back to the pre-transaction state.
I’ve created an example to demonstrate this issue. Put the following commands into a script called EXAMPLE.SQL
CREATE TABLE A(id INT PRIMARY KEY, val TEXT);
INSERT INTO A VALUES(1, “hello”);
BEGIN;
DELETE FROM A;
INSERT INTO A VALUES(1, “goodbye”);
INSERT INTO A VALUES(1, “world”);
COMMIT;
SELECT * FROM A;
If you run the script with "sqlite3 a.db < EXAMPLE.SQL", you will see:
SQL error near line 10: column id is not unique
1|goodbye
What's surprising is that the SELECT statement results did not show '1|hello'.
It would appear the DELETE was successful, and the first INSERT was successful. But when the second INSERT failed (as it was intended to), it did not roll back the database.
Is this an SQLite error? Or an error in my understanding of what is supposed to happen?
Thanks
It works as it should.
COMMIT commits all operations in the transaction. The INSERT involving 'world' had a problem, so it was not included in the transaction.
To cancel the transaction, use ROLLBACK, not COMMIT. There is no automatic ROLLBACK unless you specify it as conflict resolution with e.g. INSERT OR ROLLBACK INTO ....
And use ' single quotes instead of “ for string literals.
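For illustration, here is a sketch of the script above rewritten with the OR ROLLBACK conflict clause (and single quotes); the constraint violation then undoes the whole transaction automatically:
CREATE TABLE A(id INT PRIMARY KEY, val TEXT);
INSERT INTO A VALUES(1, 'hello');
BEGIN;
DELETE FROM A;
INSERT INTO A VALUES(1, 'goodbye');
INSERT OR ROLLBACK INTO A VALUES(1, 'world');  -- constraint violation rolls the whole transaction back
COMMIT;           -- fails: no transaction is active any more
SELECT * FROM A;  -- shows the original row again: 1|hello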
This documentation shows the error types that lead to an automatic rollback:
SQLITE_FULL: database or disk full
SQLITE_IOERR: disk I/O error
SQLITE_BUSY: database in use by another process
SQLITE_NOMEM: out of memory
SQLITE_INTERRUPT: processing interrupted by application request
For other error types you will need to catch the error and roll back yourself; more on this is covered in this SO question.
From http://www.sqlite.org/lang_conflict.html
ABORT
When an applicable constraint violation occurs, the ABORT resolution algorithm aborts the current SQL statement with an SQLITE_CONSTRAINT error and backs out any changes made by the current SQL statement; but changes caused by prior SQL statements within the same transaction are preserved and the transaction remains active. This is the default behavior and the behavior prescribed by the SQL standard.
FAIL
When an applicable constraint violation occurs, the FAIL resolution algorithm aborts the current SQL statement with an SQLITE_CONSTRAINT error. But the FAIL resolution does not back out prior changes of the SQL statement that failed nor does it end the transaction. For example, if an UPDATE statement encountered a constraint violation on the 100th row that it attempts to update, then the first 99 row changes are preserved but changes to rows 100 and beyond never occur.
Both preserve changes made before the statement that caused the constraint violation, and neither ends the transaction. So I suppose the only difference is that FAIL does not allow further changes to be made, while ABORT backs out only the conflicting statement. Did I get that right?
The answer is simple: FAIL does not roll back changes already made by the current statement.
Consider those 2 tables:
CREATE TABLE IF NOT EXISTS constFAIL (num UNIQUE ON CONFLICT FAIL);
CREATE TABLE IF NOT EXISTS constABORT (num UNIQUE ON CONFLICT ABORT);
INSERT INTO constFAIL VALUES (1),(3),(4),(5);
INSERT INTO constABORT VALUES (1),(3),(4),(5);
The statement
UPDATE constABORT SET num=num+1 WHERE num<5
will fail and change nothing.
But this statement
UPDATE constFAIL SET num=num+1 WHERE num<5
will update the first row, then fail, leaving that one row updated, so the new values are 2, 3, 4, 5.
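A quick way to check this (a sketch, assuming the tables were created and populated as shown above):
SELECT num FROM constABORT ORDER BY num;  -- still 1, 3, 4, 5
SELECT num FROM constFAIL ORDER BY num;   -- now 2, 3, 4, 5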
I want to prevent my PL/SQL code from being executed repeatedly. That is, I have written a PL/SQL procedure with three input parameters: Month, Year and a Flag. I have executed the procedure with the following values for the parameters:
Month: March
Year : 2011
Flag: Y
Now, if I try to execute the procedure with the same parameter values as above, I want to write some code in the PL/SQL to prevent this unwanted second execution. Can anyone help? I hope the question is not ambiguous.
You can use the function result cache (http://www.oracle-developer.net/display.php?id=504), so Oracle can do this for you.
I would create another table that stores the 3 parameters of each request. When your procedure is called, it would first check this "parameter request" table to see if the calling parameters have been used before. If found, exit the procedure. If not found, save the parameters and execute the rest of the procedure.
You're going to need to keep the "state" of the last call somewhere. I would recommend creating a table with a datetime column.
When your procedure is called, update this table. Then the next time your procedure is called, check this table to see when the procedure was last called and proceed accordingly.
Why not set up a table to track what arguments you've already executed it with?
In your procedure, first check that table to see if similar parameters have already been processed. If so, exit (with or without an error).
If not, insert them and do the processing necessary.
Depending on how tight the requirements are, you'll need to get an exclusive lock on that table to prevent concurrent execution.
A nice plus would be an extra column with "in progress"/"done"/"error" status so that you can check if things are going on properly. (Maybe a timestamp too if that's important/interesting.)
This setup allows you to easily clear some of the executions (by deleting some rows) if you find things need to be re-done for whatever reason.
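A rough sketch of that idea in PL/SQL (the table, column, and procedure names here are made up for illustration; adapt them to your actual procedure):
CREATE TABLE proc_run_log (
  run_month  VARCHAR2(20),
  run_year   NUMBER(4),
  run_flag   CHAR(1),
  run_status VARCHAR2(20),          -- e.g. 'IN PROGRESS', 'DONE', 'ERROR'
  run_time   DATE DEFAULT SYSDATE,
  CONSTRAINT proc_run_log_uk UNIQUE (run_month, run_year, run_flag)
);

CREATE OR REPLACE PROCEDURE my_proc (p_month VARCHAR2, p_year NUMBER, p_flag CHAR) IS
  l_count NUMBER;
BEGIN
  -- Serialize concurrent callers; the unique constraint on the log table
  -- also stops duplicates that might slip through.
  LOCK TABLE proc_run_log IN EXCLUSIVE MODE;

  SELECT COUNT(*) INTO l_count
    FROM proc_run_log
   WHERE run_month = p_month AND run_year = p_year AND run_flag = p_flag;

  IF l_count > 0 THEN
    RETURN;  -- or RAISE_APPLICATION_ERROR(-20001, 'Already processed');
  END IF;

  INSERT INTO proc_run_log (run_month, run_year, run_flag, run_status)
  VALUES (p_month, p_year, p_flag, 'IN PROGRESS');

  -- ... the existing processing logic goes here ...

  UPDATE proc_run_log
     SET run_status = 'DONE'
   WHERE run_month = p_month AND run_year = p_year AND run_flag = p_flag;

  COMMIT;  -- releases the table lock as well
END;
/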
Make an insert at the beginning of the procedure, and do a SELECT ... FOR UPDATE to lock the table so no one else can process any data. If everything goes OK with the procedure, commit and release the table 😀
I'm running an ASP.NET 2.0 web application. Suddenly this morning, ExecuteNonQuery started returning -1, even though the commands in the SP that ExecuteNonQuery is running are executing (I see items inserted and updated in the DB), and there is no exception thrown.
This happens only when I am connected to our production DB. When I'm connected to our development DB they return the correct number of rows affected.
Also, interestingly enough, ExecuteDataSet doesn't have problems - it returns DataSets beautifully.
So it doesn't seem like a connection problem. But then, what else could it be?
You can also check the procedure to see if this line of code is in there:
SET NOCOUNT ON
This will cause ExecuteNonQuery to always return -1.
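As a quick illustration (the procedure and table names here are hypothetical), a procedure like this will make ExecuteNonQuery report -1 even though rows are updated:
CREATE PROCEDURE dbo.UpdateItems
AS
BEGIN
    SET NOCOUNT ON;  -- suppresses the "rows affected" count, so ExecuteNonQuery returns -1
    UPDATE dbo.Items SET Processed = 1 WHERE Processed = 0;
    -- SET NOCOUNT OFF;  -- re-enable the count if the caller relies on the return value
END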
-1 means "no rows affected".
A SELECT statement will also return -1.
Check the effect of the SP on the dev and live environments.
Taken from the SqlCommand.ExecuteNonQuery Method documentation:
For UPDATE, INSERT, and DELETE statements, the return value is the number of rows affected by the command. When a trigger exists on a table being inserted or updated, the return value includes the number of rows affected by both the insert or update operation and the number of rows affected by the trigger or triggers. For all other types of statements, the return value is -1. If a rollback occurs, the return value is also -1.
I encountered almost the same problem (in PostgreSQL, though) with NpgsqlDataAdapter; the Update method always returns 1.
Perhaps your connection string to your development DB is different from the one to your production DB. Or, if there's really no difference between their connection strings, perhaps your production DB's service pack is different from your development DB's.
ExecuteNonQuery returns -1 for all types of stored procedures, as per MSDN.
It returns the number of updated records only in the case of direct statements.