I'm building a database about a university for one of my courses, and I'm trying to create a trigger that doesn't allow a professor to be under 21 years old.
I have a Person class and then a Professor subclass.
What I want to happen is: you create a Person object, then a Professor object using that Person object's id; but if the Person is under 21, delete the Professor object and then delete the Person object.
Everything works fine up until the "delete the Person object" part, which never happens, and I'm not sure why. Any help?
This is the SQLite code I have:
CREATE TRIGGER ProfessorAgeCheck -- trigger name assumed; the CREATE line was missing from the snippet
AFTER INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth FROM Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
SELECT RAISE(ROLLBACK, 'Professor can''t be under 21');
DELETE FROM Person WHERE personId = NEW.personId;
END;
One common issue is that there may not be a current transaction scope to roll back to, which results in this error:
Error: cannot rollback - no transaction is active
If that occurs, the trigger execution is aborted and the DELETE is never executed.
If ROLLBACK does succeed, a paradox arises: having rolled back to before the trigger was executed, in a strictly ACID environment it would not be valid to continue executing the rest of the trigger, because the INSERT never actually occurred. To avoid this ambiguity, any call to RAISE() other than RAISE(IGNORE) aborts the processing of the trigger.
From the SQLite documentation on CREATE TRIGGER, under "The RAISE() function":
When one of RAISE(ROLLBACK,...), RAISE(ABORT,...) or RAISE(FAIL,...) is called during trigger-program execution, the specified ON CONFLICT processing is performed and the current query terminates. An error code of SQLITE_CONSTRAINT is returned to the application, along with the specified error message.
NOTE: This behaviour differs from some other RDBMSs; see, for instance, this explanation for MS SQL Server, where execution specifically continues in the trigger.
As the OP does not provide the calling code that demonstrates the scenario, it is worth quoting what the SQLite documentation says about RAISE(ROLLBACK, ...):
If no transaction is active (other than the implied transaction that is created on every command) then the ROLLBACK resolution algorithm works the same as the ABORT algorithm.
Generally, if you wanted to create a Person and then a Professor as a single operation, you would use a stored procedure that validates the inputs first, preventing the original INSERT at the start.
To maintain referential integrity, even if an SP is used, you could still add a check constraint on the Professor record or raise an ABORT from a BEFORE trigger to prevent the INSERT from occurring in the first place:
CREATE TRIGGER ProfessorAgeCheck -- trigger name assumed
BEFORE INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth FROM Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
SELECT RAISE(ABORT, 'Professor can''t be under 21');
END;
This way it is up to the calling process to decide how to handle the error. The ABORT can be caught in the calling logic and would effectively result in rolling back the outer transaction, but the point is that the calling logic should handle negative side effects. As a general rule, triggers that cascade logic should only perform positive side effects; that is, they should only affect data if the inserted row succeeds. In this case we are rolling back the insert, so it becomes hard to justify why the Person would be deleted.
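For illustration, a minimal sketch of how this surfaces to the caller, assuming simplified Person and Professor schemas (column names taken from the question, everything else hypothetical):

BEGIN;
INSERT INTO Person (personId, dateOfBirth) VALUES (1, '2010-01-01');
INSERT INTO Professor (personId) VALUES (1);
-- fails with: Professor can't be under 21
-- The failed INSERT was backed out, but the transaction is still open,
-- so the caller decides what happens to the Person row:
ROLLBACK; -- e.g. discard the Person as well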
Related
I have a simple procedure, but I'm unsure how best to implement a strategy to stop deadlocks or record locks. I'm updating a number of tables in a cursor LOOP while calling a procedure that also updates tables.
There have been issues with deadlocks and record locks, so I've been tasked with stopping the program from crashing when it hits a deadlock or record lock; instead it should sleep for 5 minutes and carry on processing any new records.
The ideal behaviour is that it skips past the deadlocked or locked record, carries on processing the rest of the records that aren't locked, sleeps for 5 minutes, then picks up that record again when the cursor is next opened. The program keeps running throughout the day until it is killed.
My procedure is below. I have written what I think is best, but should the exception handler be inside the inner loop rather than the outer one, perhaps with a savepoint in the inner loop?
PROCEDURE process_dist_data_fix
IS
   lx_record_locked     EXCEPTION;
   lx_deadlock_detected EXCEPTION;
   PRAGMA EXCEPTION_INIT(lx_record_locked, -54);
   PRAGMA EXCEPTION_INIT(lx_deadlock_detected, -60);

   CURSOR c_files
   IS
      SELECT surr_id
        FROM batch_pre_dist_data_fix
       WHERE status = 'REQ'
       ORDER BY surr_id;

   TYPE file_type IS TABLE OF batch_pre_dist_data_fix.surr_id%TYPE;
   l_file_tab file_type;
BEGIN
   LOOP
      BEGIN
         OPEN c_files;
         FETCH c_files BULK COLLECT INTO l_file_tab;
         CLOSE c_files;

         IF l_file_tab.COUNT > 0
         THEN
            FOR i IN 1..l_file_tab.COUNT
            LOOP
               -- update main table with start date
               UPDATE batch_pre_dist_data_fix
                  SET start_dtm = SYSDATE
                WHERE surr_id = l_file_tab(i);

               -- update tables
               update_soundmouse_tables (l_file_tab(i));
            END LOOP;
         END IF;

         Dbms_Lock.Sleep(5*60); -- sleep for 5 mins before looking for more records to process

      -- if there is a deadlock or a locked record then log the error, rollback and wait 5 minutes, then loop again
      EXCEPTION
         WHEN lx_deadlock_detected OR lx_record_locked THEN
            ROLLBACK;
            Dbms_Lock.Sleep(5*60); -- sleep for 5 minutes before processing records again
      END;
   END LOOP;
END process_dist_data_fix;
First, understand that a deadlock is a completely different issue from a "record locked" wait. For a "record locked", most of the time there is nothing you need to do; nine times out of ten you want the program to wait on the lock. If you are waiting too long, you may have to redefine your transaction boundaries. Your code pattern here is quite typical: you read a list of "to do" items and then you "do" them. In such cases it is rare that you want to do anything special about record locking. If a batch_pre_dist_data_fix row is locked for some reason, you should simply wait for the lock to be released and continue. If the reverse is true, i.e. this job takes so long that you are locking batch_pre_dist_data_fix rows for too long from another process's point of view, then you may want to redefine the transaction boundary, for example by committing after each loop iteration, as sketched below. But beware of how an open cursor behaves across a COMMIT.
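A sketch of that variation, using the names from the question; whether a per-record commit fits your business rules is for you to decide, and it is only safe here because the cursor is bulk-collected and closed before the loop runs:

-- Inner loop with a commit per record; c_files is already closed,
-- so committing inside the loop cannot invalidate a pending fetch.
FOR i IN 1..l_file_tab.COUNT
LOOP
   UPDATE batch_pre_dist_data_fix
      SET start_dtm = SYSDATE
    WHERE surr_id = l_file_tab(i);

   update_soundmouse_tables (l_file_tab(i));

   COMMIT; -- shrink the transaction boundary to a single record
END LOOP;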
A deadlock is a completely different animal. There is always "another session" involved: the database detected a situation that could never resolve itself and killed one session at random. It happens when session 1 is waiting on a resource held by session 2 while session 2 is waiting on a resource held by session 1. That is an infinite wait condition the database can detect, so it kills one of the sessions. One simple way to prevent it is to make every transaction that deals with multiple tables lock them in the same order. Say we have tables A, B, C, D and we decide they must always be locked in that order: A,B,D or A,C,D or C,D are all fine, but never D,C. Then you will not get a deadlock. To troubleshoot a deadlock, look at the trace dump Oracle created when it raised the error, find the other session, see which tables that session held, and work out the order in which they should have been locked.
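A sketch of the lock-ordering rule, with hypothetical tables a and b; the only point is that both sessions acquire their locks in the same order, so the circular wait can never form:

-- Session 1:
SELECT * FROM a WHERE id = 1 FOR UPDATE; -- lock the row in a first
SELECT * FROM b WHERE id = 1 FOR UPDATE; -- then the row in b
-- ... do work ...
COMMIT;

-- Session 2 follows the same order, so at worst it waits;
-- it can never hold b while waiting for a:
SELECT * FROM a WHERE id = 1 FOR UPDATE;
SELECT * FROM b WHERE id = 1 FOR UPDATE;
COMMIT;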
I'm trying to cause a SELECT query to fail if the record it is trying to read is locked.
To simulate this I added an ON UPDATE trigger that sleeps for 20 seconds; then in one thread (a Java application) I update a record (oid=53), and in another thread I run the following query:
SET STATEMENT max_statement_time=1 FOR SELECT * FROM Jobs j WHERE j.oid = 53
(Note: since my MariaDB server version is 10.2 I cannot use the SELECT ... NOWAIT option and must use SET STATEMENT max_statement_time=1 FOR ... instead.)
I would expect the SELECT to fail since the record is in the middle of an UPDATE and should be read/write locked, but the SELECT succeeds.
Only if I add FOR UPDATE to the SELECT does the query fail. (But that is not a good option for me.)
I checked the INNODB_LOCKS table during this time and it was empty.
In the INNODB_TRX table I saw the transaction with isolation level REPEATABLE READ, but I don't know if that is relevant here.
Any thoughts on how I can make the SELECT fail without making it FOR UPDATE?
Normally, consistent (and dirty) reads are non-locking; they just read some sort of snapshot, depending on your transaction isolation level. If you want the read to wait for a concurrent transaction to finish, you need to set the isolation level to SERIALIZABLE and turn off autocommit in the connection that performs the read. Something like
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT max_statement_time=1 FOR ...
should do it.
Relevant page in MariaDB KB
Side note: my personal preference would be to use innodb_lock_wait_timeout=1 instead of max_statement_time=1. Both will make the statement fail, but innodb_lock_wait_timeout produces an error code better suited to the situation.
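For completeness, a sketch of that variant under the same setup as above, assuming SET STATEMENT can scope innodb_lock_wait_timeout on your server version:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT innodb_lock_wait_timeout=1 FOR
SELECT * FROM Jobs j WHERE j.oid = 53;
-- fails with a lock-wait-timeout error while the row is locked by the UPDATE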
From http://www.sqlite.org/lang_conflict.html
ABORT
When an applicable constraint violation occurs, the ABORT resolution algorithm aborts the current SQL statement with an SQLITE_CONSTRAINT error and backs out any changes made by the current SQL statement; but changes caused by prior SQL statements within the same transaction are preserved and the transaction remains active. This is the default behavior and the behavior proscribed by the SQL standard.
FAIL
When an applicable constraint violation occurs, the FAIL resolution algorithm aborts the current SQL statement with an SQLITE_CONSTRAINT error. But the FAIL resolution does not back out prior changes of the SQL statement that failed nor does it end the transaction. For example, if an UPDATE statement encountered a constraint violation on the 100th row that it attempts to update, then the first 99 row changes are preserved but changes to rows 100 and beyond never occur.
Both preserve changes made before the statement that caused the constraint violation, and neither ends the transaction. So I suppose the only difference is that FAIL does not allow further changes to be made, while ABORT only backs out the conflicting statement. Did I get that right?
The answer is simple: FAIL does not roll back changes done by the current statement.
Consider those 2 tables:
CREATE TABLE IF NOT EXISTS constFAIL (num UNIQUE ON CONFLICT FAIL);
CREATE TABLE IF NOT EXISTS constABORT (num UNIQUE ON CONFLICT ABORT);
INSERT INTO constFAIL VALUES (1),(3),(4),(5);
INSERT INTO constABORT VALUES (1),(3),(4),(5);
The statement
UPDATE constABORT SET num=num+1 WHERE num<5
will fail and change nothing.
But this statement
UPDATE constFAIL SET num=num+1 WHERE num<5
will update the first row, then fail and leave that one row updated, so the new values are 2, 3, 4, 5.
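You can confirm this by querying both tables afterwards:

SELECT num FROM constABORT; -- still 1, 3, 4, 5 (the whole statement was backed out)
SELECT num FROM constFAIL;  -- now 2, 3, 4, 5 (the first change was kept)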
I am running an END TRANSACTION on my database and occasionally I get error #1, "cannot commit - no transaction is active".
Is there a way to determine if a transaction is active before trying a commit? I have been tracking my "BEGIN TRANSACTIONS" by hand but I feel there is a better way.
I am using the C API.
You might want to check this:
http://www.sqlite.org/c3ref/get_autocommit.html
According to the page, if you are in a transaction, sqlite3_get_autocommit() will return 0.
The interface you're looking for is actually implemented in the next version of SQLite, as you can see in the notes: https://sqlite.org/draft/c3ref/txn_state.html
Determine the transaction state of a database
int sqlite3_txn_state(sqlite3*,const char *zSchema);
The sqlite3_txn_state(D,S) interface returns the current transaction state of schema S in database connection D. If S is NULL, then the highest transaction state of any schema on database connection D is returned. Transaction states are (in order of lowest to highest):
SQLITE_TXN_NONE
SQLITE_TXN_READ
SQLITE_TXN_WRITE
If the S argument to sqlite3_txn_state(D,S) is not the name of a valid schema, then -1 is returned.
That's weird. I thought SQLite was always in a transaction, either explicitly created by you or implicitly created by SQLite:
http://www.sqlite.org/lang_transaction.html
So I suppose the error means that it's not in a transaction that you initiated, and if that's what you need to know, it seems fair for SQLite to expect you to keep track of it yourself. Not terribly convenient, of course, but I guess that's the cost of a simple API. =/
In SQLite, transactions created using BEGIN TRANSACTION ... END TRANSACTION do not nest.
For nested transactions you need to use the SAVEPOINT and RELEASE commands.
See http://www.sqlite.org/lang_transaction.html
and http://www.sqlite.org/lang_savepoint.html
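A minimal sketch of the savepoint pattern, using a hypothetical table t:

BEGIN TRANSACTION;
INSERT INTO t VALUES (1);

SAVEPOINT inner_work;    -- acts like a nested BEGIN
INSERT INTO t VALUES (2);
ROLLBACK TO inner_work;  -- undoes only the inner INSERT
-- RELEASE inner_work;   -- would instead act like a nested COMMIT

COMMIT;                  -- commits row 1; row 2 was rolled back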
I would like to find out whether it is possible to identify which package, or which procedure within a package, is updating a table.
Due to a certain project being handed over without proper documentation (the person who handed it over has since left), data that we know we have updated always goes back to some strange source point.
We are guessing that this could be a database job or scheduler running the update without our knowledge. I am hoping there is a way to find out where the update is coming from, perhaps by putting a trigger on the table we are monitoring that records the source of the statement.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the INSERT statement in the trigger with a condition so that it only logs when the change you're interested in is occurring; on my test server this statement runs rather slowly (about 1 second), so I wouldn't want to log every update. In that case you'd need to change the trigger to a row-level one so that you could inspect the :new and :old values. If you are really concerned about the overhead of the SELECT, you can avoid joining against v$sql and instead save just the SQL_ADDRESS column, then schedule a job with DBMS_JOB to fill in the sql_text column with a second update statement, thereby offloading that work into another session and not blocking your original update.
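For example, a row-level variant of the trigger above, restricted with a WHEN clause so it only fires when a column of interest changes (the status column here is hypothetical), might look like this:

CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
FOR EACH ROW
WHEN (new.status <> old.status) -- hypothetical column; no colons on new/old inside WHEN
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/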
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects where object_id = <program_id>;
In the case where the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it; you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back; you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job, you can list the scheduled jobs that exist and look into what they do. Other things you can try:
look at the dependencies views, e.g. ALL_DEPENDENCIES, to see what packages, triggers, etc. use that table. Depending on the size of your system, that may return a lot of objects to trawl through.
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently.
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
Just write an "after update" trigger and, in this trigger, log the results of "DBMS_UTILITY.FORMAT_CALL_STACK" in a dedicated table.
The purpose of this function is exactly to give you the complete call stack of al the stored procedures and triggers that have been fired to reach your code.
I am writing from the mobile app, so i can't give you more detailed examples, but if you google for it you'll find many of them.
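A minimal sketch of that idea (table, trigger, and column names are hypothetical):

CREATE TABLE update_call_stacks
( logged_at  DATE
, call_stack VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trg_log_call_stack
AFTER UPDATE
ON table_of_interest
BEGIN
INSERT INTO update_call_stacks (logged_at, call_stack)
VALUES (SYSDATE, DBMS_UTILITY.FORMAT_CALL_STACK);
END;
/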
A quick and dirty option, if you're working locally and are only interested in the first thing that alters the data, is to throw an error in the trigger instead of logging. That way you get the usual stack trace, it's a lot less typing, and you don't need to create a new table:
CREATE OR REPLACE TRIGGER who_changed_it -- trigger name assumed
AFTER UPDATE ON table_of_interest
BEGIN
RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/