ORA-04091: table name is mutating - plsql

I get an ORA-04091 error while inserting data into table A. Table A records reference other records in the same table (1:N).
Parent records have fk_id = null and child records have fk_id not null.
create or replace trigger TRBI_A
BEFORE INSERT ON A
for each row
BEGIN
IF :new.fk_id IS NOT NULL then
UPDATE A SET actualTS = CURRENT_TIMESTAMP WHERE id = :new.fk_id;
END IF;
END;
ORA-04091: table name is mutating, trigger/function may not see it
The problem is probably caused by the trigger trying to modify or query a table that is currently being modified by the statement that fired the trigger.
Does anyone know how to modify the trigger to make it correct?

You know what the problem is, so just read your code a little: you update the same table you are putting the trigger on.
I guess in your case you just need to assign :NEW.actualTS := current_timestamp instead of using the UPDATE statement.
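If the intent is to stamp the row being inserted (rather than the parent row referenced by fk_id), a minimal rewritten trigger could look like this (untested sketch):
create or replace trigger TRBI_A
before insert on A
for each row
begin
  if :new.fk_id is not null then
    -- Assign the column on the row being inserted instead of issuing an
    -- UPDATE against table A, so the mutating-table error cannot occur.
    :new.actualTS := current_timestamp;
  end if;
end;
/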

Related

Mutating table trigger error for one type of insert statement

I'm getting a mutating table error for the statement insert into employee select 'xyz',200 from dual, while the script executes successfully for insert into employee values ('abc',100);
Can somebody explain why it fails for one type of insert statement? Both statements insert similar data into the table.
details of script:
--table creation
create table employee (name varchar2(30),salary number);
--trigger creation
create or replace trigger emp_trig
before insert on employee
for each row
begin
delete from employee where name=:new.name;
end;
/
--insert statement 1
insert into employee values ('abc',100);
--result : 1 row inserted
--insert statement 2
insert into employee select 'xyz',200 from dual
--result:
Error report -
ORA-04091: table NMS_CON.EMPLOYEE is mutating, trigger/function may not see it
ORA-06512: at "NMS_CON.EMP_TRIG", line 2
ORA-04088: error during execution of trigger 'NMS_CON.EMP_TRIG'
Inserting a single row will not lead to a mutating table error - how could it, since that row wasn't there before?
But insert-select potentially involves more than one row, so then you get the error.
Generally, you should not have non-query DML operations in your trigger. Too many possible side effects and undesirable consequences.
A better approach is to write a procedure that does the insert for you; do not grant insert privileges on the table directly, only execute on the package that owns the procedure. Then inside that procedure you can do a delete before your insert, or you can do a merge - or whatever.
All the logic is hidden inside the procedure, and by restricting privileges on the table you ensure that the procedure must be called.
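For illustration, a minimal sketch of that pattern for the employee table from the question (the package name emp_api is made up, and the code is untested):
create or replace package emp_api as
  procedure add_employee(p_name   employee.name%type,
                         p_salary employee.salary%type);
end emp_api;
/
create or replace package body emp_api as
  procedure add_employee(p_name   employee.name%type,
                         p_salary employee.salary%type) is
  begin
    -- Remove any existing row for this name, then insert the new one;
    -- a MERGE would work here just as well.
    delete from employee where name = p_name;
    insert into employee (name, salary) values (p_name, p_salary);
  end add_employee;
end emp_api;
/
-- Grant EXECUTE on emp_api instead of INSERT on employee, so callers
-- have to go through the procedure.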
Hope that helps!

How to Update Fields On Insert using Trigger

I've created and worked with triggers in Oracle for years; however, I'm unable to wrap my head around how to update a field when inserting data into a SQLite database.
All I want to do is create a trigger that automatically inserts the current date/time into a column in the SQLite database named 'createdDate' for ANY record that is inserted.
What is the best approach to accomplish this?
Below is what I've attempted without success.
CREATE TRIGGER outputLogDB_Created_Trig
BEFORE INSERT
ON outputLog
WHEN NEW.createdDate IS NULL
BEGIN
SELECT CASE WHEN NEW.createdDate IS NULL THEN NEW.createdDate = datetime('now', 'localtime') END;
END;
The above is almost a replica of how I would implement my triggers in Oracle, with some modifications of course for SQLite. The logic is basically identical.
What am I missing?
Later edit: I can get it to work if I instead use AFTER INSERT and don't use FOR EACH ROW:
CREATE TRIGGER outputLog_Created_Trig
AFTER INSERT
ON outputLog
WHEN New.createdDate IS NULL
BEGIN
UPDATE outputLog
SET createdDate = datetime('now', 'localtime')
WHERE outputLog_ID = New.rowid;
END;
But why can't I just insert the record using the new value while I'm inserting it? Am I ONLY able to get this in there using an Update AFTER I've already inserted the record?
The issue I have with this is the fact that I'd like to have a NOT NULL constraint on the createdDate column. Perhaps I'm simply used to how I've done it for years in Oracle? I realize the Trigger 'should' take care of any record and force this field to NEVER be NULL. It's just that NOT being able to add the constraint for some reason makes me uneasy. Do I need to let go of this worry?
Thanks to Shawn for pointing me toward an easy, simple solution to my problem. All that is needed in a SQLite database to insert the current date/time for each record being inserted is to set the DEFAULT value on that column to CURRENT_TIMESTAMP.
Since I wanted the timestamp in my own local time, the CREATE TABLE script below is the solution to my problem.
CREATE TABLE outputLog (
outputLog_ID INTEGER PRIMARY KEY ASC ON CONFLICT ROLLBACK AUTOINCREMENT
NOT NULL ON CONFLICT ROLLBACK,
outputLog TEXT,
created DATETIME DEFAULT (datetime(CURRENT_TIMESTAMP, 'localtime') )
NOT NULL )
;
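For example, with that default in place an insert that simply omits the created column picks up the local timestamp automatically (illustrative statements):
-- created is not listed, so SQLite fills it from the DEFAULT expression.
INSERT INTO outputLog (outputLog) VALUES ('first log line');
SELECT outputLog_ID, outputLog, created FROM outputLog;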

How to fix the mutating trigger in oracle

I wrote a trigger to update a column value in the same table. For example, I wrote a trigger on the metermaster table after update of the assettype column; within the trigger I am trying to update the instantaneousinterval column in the same metermaster table. It throws an error like this:
ERROR: ORA-04091: table PSEB.METERMASTER is mutating, trigger/function
may not see it.
my trigger code is as follows:
CREATE OR REPLACE TRIGGER PSEB.spiupdate
AFTER UPDATE OF assettype ON pseb.metermaster
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
DECLARE
  vassettype     number;
  resval         number(10);
  vassettypename varchar2(50);
  vmeterid       number;
BEGIN
  select :new.assettype, :new.meterid
    into vassettype, vmeterid
    from dual;
  select assettypename
    into vassettypename
    from pseb.meterassetinsttype
   where assettypeid = vassettype;
  select case
           when assettypename like 'DT'  then 86400
           when assettypename like 'HT'  then 3600
           when assettypename like 'FSB' then 86400
         end
    into resval
    from pseb.meterassetinsttype
   where assettypename = vassettypename;
  update pseb.metermaster
     set instantaneousinterval = resval
   where meterid = vmeterid;
END;
I tried to use
pragma autonomous_transaction;
but it gives a deadlock:
ERROR: ORA-00060: deadlock detected while waiting for resource
ORA-06512:
Please help me fix this issue.
Instead of this update statement
update pseb.metermaster set instantaneousinterval=resval where meterid=vmeterid;
use
:new.instantaneousinterval := resval;
(and make it a BEFORE UPDATE trigger, since :NEW values can only be assigned in a BEFORE row-level trigger).
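A sketch of what that rewritten trigger might look like (untested, reusing the question's table and column names):
CREATE OR REPLACE TRIGGER pseb.spiupdate
BEFORE UPDATE OF assettype ON pseb.metermaster
FOR EACH ROW
BEGIN
  -- Look up the interval for the new asset type and assign it directly to
  -- the row being updated; no second UPDATE, so no mutating-table error.
  SELECT CASE
           WHEN assettypename LIKE 'DT'  THEN 86400
           WHEN assettypename LIKE 'HT'  THEN 3600
           WHEN assettypename LIKE 'FSB' THEN 86400
         END
    INTO :new.instantaneousinterval
    FROM pseb.meterassetinsttype
   WHERE assettypeid = :new.assettype;
END;
/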
A mutating table error occurs when a statement causes a trigger to fire and that trigger references the table that fired it. The best way to avoid such problems is to not use triggers, but I suspect the DBA didn't take the time to do that. He could have done one of the following:
Changed the trigger to an after trigger.
Changed it from a row level trigger to a statement level trigger.
Convert to a Compound Trigger (see the sketch after this list).
Modified the structure of the triggers to use a combination of row and statement level triggers.
Made the trigger autonomous with a commit in it.
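For the compound trigger option, a minimal sketch (untested, reusing the question's table and column names; the trigger name is made up): the row-level section only records which rows were touched, and the statement-level section, which is not subject to the mutating-table restriction, does the update after the triggering statement completes.
CREATE OR REPLACE TRIGGER pseb.spiupdate_cmp
FOR UPDATE OF assettype ON pseb.metermaster
COMPOUND TRIGGER
  -- Remember the keys of the rows touched by the triggering statement.
  TYPE t_ids IS TABLE OF pseb.metermaster.meterid%TYPE;
  g_ids t_ids := t_ids();

  AFTER EACH ROW IS
  BEGIN
    g_ids.EXTEND;
    g_ids(g_ids.COUNT) := :new.meterid;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- Safe here: statement-level sections can read and update the table.
    FORALL i IN 1 .. g_ids.COUNT
      UPDATE pseb.metermaster m
         SET m.instantaneousinterval =
               (SELECT CASE
                         WHEN a.assettypename LIKE 'DT'  THEN 86400
                         WHEN a.assettypename LIKE 'HT'  THEN 3600
                         WHEN a.assettypename LIKE 'FSB' THEN 86400
                       END
                  FROM pseb.meterassetinsttype a
                 WHERE a.assettypeid = m.assettype)
       WHERE m.meterid = g_ids(i);
  END AFTER STATEMENT;
END spiupdate_cmp;
/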
Try pragma autonomous_transaction; with a COMMIT.
Since the trigger is updating the same table on which it is defined, why don't you update the two columns in the first update statement itself?
That is, instead of using an update like
UPDATE pseb.metermaster
SET assettype = '<v_assettype>';
and relying on the trigger to update the instantaneousinterval column, why not use an update statement like the following (code is not tested)?
UPDATE pseb.metermaster
   SET assettype = '<v_assettype>',
       instantaneousinterval = (SELECT CASE
                                         WHEN assettypename LIKE 'DT'  THEN 86400
                                         WHEN assettypename LIKE 'HT'  THEN 3600
                                         WHEN assettypename LIKE 'FSB' THEN 86400
                                       END
                                  FROM pseb.meterassetinsttype
                                 WHERE assettypeid = '<v_assettype>');
In my opinion, using a trigger and autonomous_transaction in this case would be a wrong approach. To know why this is wrong, please search http://asktom.oracle.com/ for this error.

Value of a NEW variable on a trigger not changing, on plsql

I'm loading data through Oracle Apex utilities using a datasheet.
I want to make a trigger that checks a value in the table from the loaded data, and then changes it depending on what it finds.
The table has 4 columns: id,name,email,type
The data to load is something like this: name,email,type
Now my trigger:
create or replace TRIGGER BI_USER
before insert ON USER
for each row
declare
begin
if :NEW.ID is null then
select USERID_SEQ.nextval into :NEW.ID from dual;
end if;
:NEW.TYPE := 'something else';
end;
The ID works great: it takes a number from the sequence. But :new.type isn't working; it doesn't change.
I also ran the SQL insert separately and the same thing happens.
EDIT:
:new.type is CHAR(1); I wrote it like this just for testing, yet it doesn't change...
Ah, I'm disappointed in myself: it throws the error just after reading the data and never fires the trigger.
What I was trying to do is have the loaded data carry the name for the TYPE column, and put the id from that table into :NEW.type.
Is there a way to change the NEW type?
I see what you're trying to do. You want your table to accept an inserted record containing data that will not fit in the width of one of the fields, and you want to use a trigger to "fix" the data so that it will fit.
Unfortunately, this trigger will not help you because the data is validated before your triggers are fired.
An alternative way to get around this may be to use a view with an instead-of trigger. The view would have a column "TYPE" which is based on a string of length 9; the instead-of trigger would convert this to the CHAR(1) for insert into the underlying table.
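A minimal sketch of that idea (untested; the view name user_v is made up, the table is written as "USER" in quotes because USER is a reserved word, and SUBSTR stands in for whatever real mapping the application needs):
-- View exposing TYPE as a wider string column (VARCHAR2(9) assumed).
CREATE OR REPLACE VIEW user_v AS
  SELECT id, name, email, CAST(type AS VARCHAR2(9)) AS type
    FROM "USER";

CREATE OR REPLACE TRIGGER io_user_v
INSTEAD OF INSERT ON user_v
FOR EACH ROW
BEGIN
  -- The existing BI_USER trigger still fills in the ID from the sequence.
  INSERT INTO "USER" (name, email, type)
  VALUES (:new.name,
          :new.email,
          -- Map the long incoming value to its CHAR(1) code;
          -- SUBSTR is only a placeholder for the real lookup.
          SUBSTR(:new.type, 1, 1));
END;
/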
Try this instead:
select 'something else' into :NEW.TYPE from dual;
If this syntax worked for ID, it should also work for TYPE.

Select for Update sql is a read and write mode?

I have simultaneous requests to a particular row in a table, and a PL/SQL block is used to update the table by reading the data from a master row in the same table and updating both the current range row and the master row it read.
The algorithm is like this:
Declare
  variable declarations
BEGIN
  select (values) into (values1) from table where <condition1> for update;
  select count(*) into tempval from table where <condition2>;
  if (tempval = 0) then
    insert into table values (values);
  else
    select (values) into (values2) from table where <condition2> for update;
    update table set (values1) where <condition2>;
  end if;
  update table set (values1+incrval) where <condition1>;
END;
Unfortunately, the master row is updated properly with the correct sequence, but the current range row picks up the old value of the master range. It does a dirty read, even though the transaction isolation level is set to serializable.
Could someone please tell me what is happening here?
This is working as designed. Oracle's default, and only, read isolation lets the session see all of its own updates. If you perform:
INSERT INTO TABLE1 (col1) values (1);
COMMIT;
UPDATE TABLE1 SET col1 = 2 where col1 = 1;
SELECT col1 FROM TABLE1;
you will see 2 returned from the last query. Please read the Merge Explanation for how to use a MERGE statement to perform the insert or update based upon a single criterion.
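For illustration, a minimal MERGE sketch of that insert-or-update pattern (the range_counters table and its columns are made up for this example, not taken from the question):
-- Update the row for this range if it exists, otherwise insert it;
-- one statement replaces the select-count / insert-or-update logic above.
MERGE INTO range_counters t
USING (SELECT :range_id AS range_id, :incrval AS incrval FROM dual) s
   ON (t.range_id = s.range_id)
 WHEN MATCHED THEN
   UPDATE SET t.current_value = t.current_value + s.incrval
 WHEN NOT MATCHED THEN
   INSERT (range_id, current_value)
   VALUES (s.range_id, s.incrval);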
