How to write sqlite transaction that rolls back on any error - sqlite

I have searched extensively on this and found a lot of people asking the question, but no answers with code examples that helped me understand.
I'd like to write a transaction (in SQL, using the command-line sqlite3 interface) that performs several update statements and, if any of them fails for any reason, rolls back the whole transaction. The default behaviour appears to be to roll back the statement that failed but commit the others.
This tutorial appears to advise that it's sufficient to add begin; and rollback; before and after the statements, but that's not true: I've tried it with deliberate errors, and the non-error statements were definitely committed (which I don't want).
This example really confuses me, because the two interlocutors seem to give conflicting advice at the end: one says that you need to write error handling (without giving any examples), whereas the other says no error handling is needed.
My MWE is as follows:
create table if not exists accounts (
id integer primary key not null,
balance decimal not null default 0
);
insert into accounts (id, balance) values (1,200),(2,300);
begin transaction;
update accounts set balance = balance - 100 where id = 1;
update accounts set balance = balance + 100 where id = 2;
update accounts set foo = 23; -- deliberate error (no such column: foo)
commit;
The idea is that none of these changes should be committed.

The sqlite3 command-line shell is intended to be used interactively, so it allows you to continue after an error.
To abort on the first error instead, use the -bail option:
sqlite3 -bail my.db < mwe.sql
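For example, with the statements saved as mwe.sql, the deliberate error makes the shell quit before COMMIT is reached, and the transaction still open at exit is rolled back automatically when the connection closes:
-- mwe.sql, run with: sqlite3 -bail my.db < mwe.sql
begin transaction;
update accounts set balance = balance - 100 where id = 1;
update accounts set balance = balance + 100 where id = 2;
update accounts set foo = 23; -- deliberate error: -bail aborts the shell here
commit; -- never reached; the open transaction is rolled back on exit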

If you are executing line by line, then the idea is that you first run these commands:
create table if not exists accounts (
id integer primary key not null,
balance decimal not null default 0
);
insert into accounts (id, balance) values (1,200),(2,300);
begin transaction;
update accounts set balance = balance - 100 where id = 1;
update accounts set balance = balance + 100 where id = 2;
update accounts set foo = 23; -- deliberate error (no such column: foo)
At this point, if you have no errors, you run the commit:
commit;
All the updates should be visible if you open a second connection and query the table.
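For example, from a second terminal (assuming the database file is my.db, as in the -bail example above):
sqlite3 my.db 'select * from accounts;'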
On the other hand, if you got an error, instead of committing you roll back:
rollback;
All the updates should be rolled back.
If you are doing it programmatically, e.g. in Java, you would enclose the updates in a try/catch block, commit at the end of the try, and roll back inside the catch.

Related

How To Create a PL/SQL Trigger That Detects an Inserted or Updated Row and updates a Record in a Different Table?

I am creating a book tracking database for myself that holds information about my books and allows me to keep track of who is borrowing them. I am trying to create a trigger on my Checkouts table that runs when a record is added or updated, determines whether a checkout date or a checkin date has been entered, and changes the "available" field in my Books table to "Y" or "N".
I have created a trigger called "update_book_availability" on my Checkouts table, but I keep getting this error:
"PLS-00103: Encountered the symbol 'end-of-file' when expecting one of the following: ( begin case declare and exception exit for goto if loop mod null pragma raise return select update while with <<continue close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge standard pipe purge json_object
Errors: check compiler log"
Here is my trigger code:
CREATE OR REPLACE NONEDITIONABLE TRIGGER "UPDATE_BOOK_AVAILABILITY"
AFTER INSERT OR UPDATE OF ISBN, PersonID, checkout_date, checkin_date
ON Checkouts
FOR EACH ROW
BEGIN
IF :NEW.checkout_date = NULL
THEN
UPDATE Book
SET available = 'N'
WHERE ISBN IN (SELECT :NEW.ISBN FROM Checkouts);
END IF;
END;
Here is an image of my ERD: [ERD image not reproduced]
I have been double-checking my trigger syntax, IF-condition syntax, and subquery syntax, and googling this error, but have found nothing that helps. I am new to PL/SQL and would appreciate any help in understanding what I have done wrong or missed.
PLS-00103 ("encountered the symbol end-of-file") is a syntax error.
I copied your trigger and adjusted it to one of my test tables - it works. I removed NONEDITIONABLE and changed the trigger's table name as well as the column names and the table/columns being updated by the trigger.
To Do:
Check your syntax again or write the trigger from scratch once more
"...WHERE ISBN IN (SELECT :NEW.ISBN FROM Checkouts)..." selects one fixed value (FOR EACH ROW) :NEW.ISBN of triggering table, better ->> "... WHERE ISBN = :NEW.ISBN ..."
Prety sure that you don't need NONEDITIONABLE trigger for your books tracking app...
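A corrected sketch along those lines (assuming a non-null checkout_date means the book is out; note also that :NEW.checkout_date = NULL is never true in Oracle - use IS NULL / IS NOT NULL):
CREATE OR REPLACE TRIGGER update_book_availability
AFTER INSERT OR UPDATE OF ISBN, PersonID, checkout_date, checkin_date
ON Checkouts
FOR EACH ROW
BEGIN
  IF :NEW.checkout_date IS NOT NULL AND :NEW.checkin_date IS NULL THEN
    -- book has been checked out and not yet returned
    UPDATE Book SET available = 'N' WHERE ISBN = :NEW.ISBN;
  ELSE
    -- no checkout recorded, or a checkin date has been entered
    UPDATE Book SET available = 'Y' WHERE ISBN = :NEW.ISBN;
  END IF;
END;
/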
Regards...

How do I write a PLSQL trigger that increments a value?

I just started working with PL/SQL. The database is for a game I want to integrate into my Discord bot. Both the DB and the bot are running on Oracle Cloud.
The DB has one table, players, consisting of a Discord user id, a level initialised to 1, exp initialised to 0, and mana initialised to 100. For now, the first thing I wanted to implement was a TRIGGER that fires when exp is updated on the players table, checks whether exp has reached the level-up threshold, and if so resets exp to 0 and increases the level by 1.
Right now, I have this:
CREATE OR REPLACE trigger level_up
BEFORE UPDATE
ON PLAYERS
FOR EACH ROW
WHEN (:new.EXP >= PLAYERS.LEVEL * 100)
begin
:new.EXP := 0;
:new.LEVEL := :old.LEVEL + 1;
end;
When I try to run this, I get the following error:
"Error: ORA-00904: : invalid identifier"
Nothing is highlighted and in SQL Developer, when I right-click it and then click "Go to source", it doesn't highlight anything and just throws the cursor to the beginning of the worksheet.
I have already tried a couple different things like
BEFORE UPDATE OF EXP ON PLAYERS
with the rest more or less the same and even tried working with AFTER UPDATE:
CREATE OR REPLACE trigger level_up
AFTER UPDATE
ON PLAYERS
FOR EACH ROW
begin
UPDATE players
SET players.exp = 0,
players.level = players.level + 1
WHERE players.exp < players.level * 100
end;
This gave me multiple errors though:
Error(6,9): PL/SQL: SQL Statement ignored
Error(10,5): PL/SQL: ORA-00933: SQL command not properly ended
Error(10,8): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: ( begin case declare end exception exit for goto if loop mod null pragma raise return select update while with << continue close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe purge json_exists json_value json_query json_object json_array
At this point I am fully prepared to just abandon the Oracle DB and switch to MongoDB or something; it's just bugging me that I can't figure out what I am doing wrong.
Thank you for your time!
If the 1st trigger code is OK for you, then just fix what's wrong - why abandon the whole idea of using Oracle? It's not Oracle's fault that you don't know how to use it.
Here's an example.
The table contains just the two columns necessary for this situation.
The column name can't be just level - that's the name of a pseudocolumn in hierarchical queries and a reserved word for that purpose, so I renamed it to c_level. (Well, you can name it level, but it would then have to be enclosed in double quotes and referenced that way - with double quotes, and exactly the same letter case - every time you access it. In Oracle, that's generally a bad idea.)
SQL> create table players
2 (exp number,
3 c_level number
4 );
Table created.
Trigger: you have to remove the colon when referencing the new pseudorecord in the WHEN clause (while in the trigger body you do use the colon). Also, you don't reference c_level with the table name (players.c_level) but via the pseudorecord.
SQL> create or replace trigger level_up
2 before update on players
3 for each row
4 when (new.exp >= new.c_level * 100)
5 begin
6 :new.exp := 0;
7 :new.c_level := :old.c_level + 1;
8 end;
9 /
Trigger created.
Let's try it. Initial row:
SQL> insert into players (exp, c_level) values (50, 1);
1 row created.
SQL> select * from players;
EXP C_LEVEL
---------- ----------
50 1
Let's update exp so that its value forces the trigger to fire:
SQL> update players set exp = 101;
1 row updated.
New table contents:
SQL> select * from players;
EXP C_LEVEL
---------- ----------
0 2
SQL>
If that was your intention, then it kind of works.

SQLite error: cannot start a transaction within a transaction with very basic tables

I am brand new to SQL, and I am learning in an SQLite editor. I created a couple of very simple tables. This code is straight from the LinkedIn Learning course "SQL Essential Training", and I am using the recommended SQLite editor.
CREATE TABLE widgetInventory(
id INTEGER PRIMARY KEY,
description TEXT,
onhand INTEGER NOT NULL);
CREATE TABLE widgetSales(
id INTEGER PRIMARY KEY,
inv_id INTEGER,
quan INTEGER,
price INTEGER);
Then I update widgetInventory with some data:
INSERT INTO widgetInventory (description, onhand) VALUES ('rock', 25);
INSERT INTO widgetInventory (description, onhand) VALUES ('paper', 25);
INSERT INTO widgetInventory (description, onhand) VALUES ('scissors', 25);
Next, I want to update the widgetSales table with a sale, and update the widgetInventory table to record the reduction of onhand.
BEGIN TRANSACTION;
INSERT INTO widgetSales (inv_id, quan, price) VALUES (1,5,500);
UPDATE widgetInventory SET onhand = (onhand-5) WHERE id = 1;
END TRANSACTION;
I don't understand why this gives me an error when I run it, as it is exactly what's in the lesson.
[06:18:04] Error while executing SQL query on database 'test': cannot start a transaction within a transaction
But, I can run the INSERT and UPDATE lines separately, and they do what I want them to do.
Apparently, running END TRANSACTION; before running the entire transaction appears to work.
I think that somehow SQLite believes a transaction is already in progress, though I'm not sure where it started. So to stop it, you have to end that transaction first before proceeding with the course.
In the SQLite editor, you may have to delete or comment out all of the code before and after these two transactions.
BEGIN TRANSACTION;
INSERT INTO widgetSales ( inv_id, quan, price ) VALUES ( 1, 5, 500 );
UPDATE widgetInventory SET onhand = ( onhand - 5 ) WHERE id = 1;
END TRANSACTION;
BEGIN TRANSACTION;
INSERT INTO widgetInventory ( description, onhand ) VALUES ( 'toy', 25 );
ROLLBACK;
Otherwise it won't execute the transaction.
Other than that, there is probably an error written in somewhere: copying and pasting from the .txt file didn't give me that transaction error, and the transaction executed normally.
I just had this same error, and my issue was that I had only highlighted the first line, so SQLite started the transaction but didn't run the rest of the block. All I did was run END TRANSACTION, highlight the whole block of code, and run that; it worked fine. It must be a quirk of the editor that it doesn't run the full block by itself.
while executing SQL query on database 'test': cannot start a transaction within a transaction
means a transaction already exists. It can happen if someone forgets to select the END TRANSACTION; statement.
If you face this issue, just select END TRANSACTION; once and run it. This ends the active transaction, and then you can run any of the existing transactions.
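A minimal sketch of that recovery sequence, using the tables above:
END TRANSACTION;   -- closes the stray open transaction (errors harmlessly if none is open)
BEGIN TRANSACTION; -- now this no longer nests inside an old transaction
INSERT INTO widgetSales (inv_id, quan, price) VALUES (1, 5, 500);
UPDATE widgetInventory SET onhand = onhand - 5 WHERE id = 1;
END TRANSACTION;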
For the particular case of following the LinkedIn Learning "SQL Essential Training" course, I figured out how to fix it: run (F9) the "BEGIN TRANSACTION;" statement, the transaction contents, and the "END TRANSACTION;" statement separately, not all at the same time.
So:
First select "BEGIN TRANSACTION;" and run it by pressing F9.
Then select the contents of the transaction (I think you can also include the "END TRANSACTION;" part) and run it.

sqlite transactions do not respect delete

I need to modify all the content in a table, so I wrap the modifications inside a transaction to ensure either all the operations succeed or none do. I start the modifications with a DELETE statement, followed by INSERTs. What I've discovered is that even if an INSERT fails, the DELETE still takes place, and the database is not rolled back to the pre-transaction state.
I’ve created an example to demonstrate this issue. Put the following commands into a script called EXAMPLE.SQL
CREATE TABLE A(id INT PRIMARY KEY, val TEXT);
INSERT INTO A VALUES(1, “hello”);
BEGIN;
DELETE FROM A;
INSERT INTO A VALUES(1, “goodbye”);
INSERT INTO A VALUES(1, “world”);
COMMIT;
SELECT * FROM A;
If you run the script (sqlite3 a.db < EXAMPLE.SQL), you will see:
SQL error near line 10: column id is not unique
1|goodbye
What's surprising is that the SELECT results did not show '1|hello'.
It would appear the DELETE was successful, and the first INSERT was successful. But when the second INSERT failed (as it was intended to), it did not roll back the database.
Is this a sqlite error? Or an error in my understanding of what is supposed to happen?
Thanks
It works as it should.
COMMIT commits all operations in the transaction. The one involving world had problems, so it was not included in the transaction.
To cancel the transaction, use ROLLBACK, not COMMIT. There is no automatic ROLLBACK unless you specify it as the conflict resolution, e.g. INSERT OR ROLLBACK INTO ....
And use straight single quotes (') instead of curly quotes (“) for string literals.
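For example, rewriting the script with the ROLLBACK conflict resolution (and straight quotes) gives the behaviour the question expected:
CREATE TABLE A(id INT PRIMARY KEY, val TEXT);
INSERT INTO A VALUES(1, 'hello');
BEGIN;
DELETE FROM A;
INSERT OR ROLLBACK INTO A VALUES(1, 'goodbye');
INSERT OR ROLLBACK INTO A VALUES(1, 'world'); -- constraint fails: the whole transaction is rolled back
COMMIT;          -- errors, since no transaction is active any more
SELECT * FROM A; -- shows 1|hello again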
This documentation shows the error types that lead to an automatic rollback:
SQLITE_FULL: database or disk full
SQLITE_IOERR: disk I/O error
SQLITE_BUSY: database in use by another process
SQLITE_NOMEM: out of memory
SQLITE_INTERRUPT: processing interrupted by application request
For other error types you will need to catch the error and roll back yourself; more on this is covered in this SO question.

Stored procedure slow when called from web, fast from Management Studio

I have a stored procedure that times out every single time it's called from the web application.
I fired up SQL Profiler and traced the calls that time out, and finally found out these things:
When I executed the statements from within SQL Server Management Studio, with the same arguments (in fact, I copied the procedure call from the Profiler trace and ran it), it finishes in 5-6 seconds on average.
But when called from the web application, it takes in excess of 30 seconds (in the trace), so my webpage actually times out by then.
Apart from the fact that my web application has its own user, everything is the same (same database, connection, server, etc.).
I also tried running the query directly in the studio with the web application's user, and it doesn't take more than 6 seconds.
How do I find out what is happening?
I am assuming it has nothing to do with the fact that we use BLL > DAL layers or table adapters, as the trace clearly shows the delay is in the actual procedure. That is all I can think of.
EDIT: I found out in this link that ADO.NET sets ARITHABORT to true - which is good most of the time, but sometimes this happens, and the suggested work-around is to add the WITH RECOMPILE option to the stored proc. In my case it's not working, but I suspect it's something very similar to this. Does anyone know what else ADO.NET does, or where I can find the spec?
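(For reference, a sketch of that work-around, with placeholder names:)
ALTER PROCEDURE dbo.YourProc   -- placeholder procedure name
    @SomeParam int             -- placeholder parameter
WITH RECOMPILE                 -- compiles a fresh plan on every execution
AS
BEGIN
    SELECT * FROM dbo.SomeTable WHERE ID = @SomeParam; -- placeholder body
END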
I've had a similar issue arise in the past, so I'm eager to see a resolution to this question. Aaron Bertrand's comment on the OP led to Query times out when executed from web, but super-fast when executed from SSMS, and while the question is not a duplicate, the answer may very well apply to your situation.
In essence, it sounds like SQL Server may have a corrupt cached execution plan. You're hitting the bad plan with your web server, but SSMS lands on a different plan since there is a different setting on the ARITHABORT flag (which would otherwise have no impact on your particular query/stored proc).
See ADO.NET calling T-SQL Stored Procedure causes a SqlTimeoutException for another example, with a more complete explanation and resolution.
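One quick way to test this theory in SSMS (a sketch; the procedure name and parameter are placeholders): flip the session setting to match what an ADO.NET connection typically uses and see whether the slow plan reappears.
-- SSMS runs with ARITHABORT ON by default, while ADO.NET sessions typically have it OFF,
-- so the two can end up with different cached plans for the same procedure.
SET ARITHABORT OFF;
EXEC dbo.YourProc @SomeParam = 42; -- placeholder call
SET ARITHABORT ON;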
I also experienced queries running slowly from the web and fast in SSMS, and I eventually found out that the problem was something called parameter sniffing.
The fix for me was to change all the parameters that are used in the sproc to local variables.
eg. change:
ALTER PROCEDURE [dbo].[sproc]
@param1 int
AS
SELECT * FROM [Table] WHERE ID = @param1
to:
ALTER PROCEDURE [dbo].[sproc]
@param1 int
AS
DECLARE @param1a int
SET @param1a = @param1
SELECT * FROM [Table] WHERE ID = @param1a
Seems strange, but it fixed my problem.
Not to spam, but in the hope of helping others: our system saw a high rate of timeouts.
I tried marking the stored procedure for recompilation with sp_recompile, and this resolved the issue for that one SP.
Ultimately a larger number of SPs were timing out, many of which had never done so before. By using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, the incidence of timeouts has plummeted significantly. There are still isolated occurrences: some where I suspect the plan regeneration is taking a while, and some where the SPs are genuinely under-performant and need re-evaluation.
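For reference, those commands look like this (the procedure name is a placeholder; the DBCC commands act server-wide, so use them with care on production):
EXEC sp_recompile N'dbo.YourProc'; -- recompile one object on its next execution
DBCC FREEPROCCACHE;                -- flush all cached execution plans
DBCC DROPCLEANBUFFERS;             -- flush clean pages from the buffer pool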
Could it be that some other DB call made before the web application calls the SP is keeping a transaction open? That could be a reason for this SP to wait when called by the web application. I'd isolate the call in the web application (put it on a new page) to ensure that some prior action in the web application isn't causing this issue.
You can target specific cached execution plans via:
SELECT cp.plan_handle, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE [text] LIKE N'%your troublesome SP or function name etc%'
And then remove only the execution plans causing issues via, for example:
DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
I've now got a job running every 5 minutes that looks for slow running procedures or functions and automatically clears down those execution plans if it finds any:
if exists (
SELECT cpu_time, *
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
--order by req.total_elapsed_time desc
WHERE ([text] LIKE N'%your troublesome SP or function name etc%')
and cpu_time > 8000
)
begin
SELECT cp.plan_handle, st.[text]
into #results
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE [text] LIKE N'%your troublesome SP or function name etc%'
delete #results where text like 'SELECT cp.plan_handle%'
--select * from #results
declare @handle varbinary(max)
declare @handleconverted varchar(max)
declare @sql varchar(1000)
DECLARE db_cursor CURSOR FOR
select plan_handle from #results
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @handle
WHILE @@FETCH_STATUS = 0
BEGIN
--e.g. DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
print convert(varchar(max), @handle, 1) -- show the handle in hex; varbinary can't be printed directly
set @handleconverted = '0x' + CAST('' AS XML).value('xs:hexBinary(sql:variable("@handle"))', 'VARCHAR(MAX)')
print @handleconverted
set @sql = 'DBCC FREEPROCCACHE (' + @handleconverted + ')'
print 'DELETING: ' + @sql
EXEC(@sql)
FETCH NEXT FROM db_cursor INTO @handle
END
CLOSE db_cursor
DEALLOCATE db_cursor
drop table #results
end
Simply recompiling the stored procedure (a table function, in my case) worked for me.
Like @Zane said, it could be due to parameter sniffing. I experienced the same behaviour and took a look at the execution plan of the procedure versus all its statements run in a row (I copied all the statements from the procedure, declared the parameters as variables, and assigned the variables the same values the parameters had). The execution plans looked completely different: the SP execution took 3-4 seconds, while the statements run in a row with the exact same values returned instantly.
After some googling I found an interesting read about that behaviour: Slow in the Application, Fast in SSMS?
When compiling the procedure, SQL Server does not know that the value of @fromdate changes, but compiles the procedure under the assumption that @fromdate has the value NULL. Since all comparisons with NULL yield UNKNOWN, the query cannot return any rows at all, if @fromdate still has this value at run-time. If SQL Server would take the input value as the final truth, it could construct a plan with only a Constant Scan that does not access the table at all (run the query SELECT * FROM Orders WHERE OrderDate > NULL to see an example of this). But SQL Server must generate a plan which returns the correct result no matter what value @fromdate has at run-time. On the other hand, there is no obligation to build a plan which is the best for all values. Thus, since the assumption is that no rows will be returned, SQL Server settles for the Index Seek.
The problem was that I had parameters which could be left null, and if they were passed as null they would be initialised with a default value.
create procedure [dbo].[procedure]
@dateTo datetime = null
as
begin
if (@dateTo is null)
begin
select @dateTo = GETUTCDATE()
end
select foo
from [dbo].[table]
where createdDate < @dateTo
end
After I changed it to
create procedure dbo.procedure
#dateTo datetime = null
begin
declare #to datetime = coalesce(#dateTo, getutcdate())
select foo
from dbo.table
where createdDate < #to
end
it worked like a charm again.
--BEFORE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
@ToUserId bigint = null
)
AS
BEGIN
SELECT * FROM tbl_Logins WHERE LoginId = @ToUserId
END
--AFTER CHANGING IT TO THIS, IT WORKS FINE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
@ToUserId bigint = null
)
AS
BEGIN
DECLARE @Toid bigint = null
SET @Toid = @ToUserId
SELECT * FROM tbl_Logins WHERE LoginId = @Toid
END
