How do I call a user-defined function only if it exists? - sqlite

SQLite allows you to define custom functions that can be called from SQL statements. I use this to get notified of trigger activity in my application:
CREATE TRIGGER AfterInsert AFTER INSERT ON T
BEGIN
-- main trigger code involves some extra db modifications
-- ...
-- invoke application callback
SELECT ChangeNotify('T', 'INSERT', NEW.id);
END;
However, user-defined functions are added only to the current database connection. There may be other clients that haven't defined ChangeNotify, and I don't want them to get a "no such function" error.
Is it possible to call a function only if it's defined? Any alternative solution is also appreciated.

SQLite is designed as an embedded database, so it is assumed that your application controls what is done with the database.
SQLite has no SQL function to check for user-defined functions.
If you want to detect changes made only from your own program, use sqlite3_update_hook.

Prior to calling your user-defined function, you can check whether the function exists by selecting from pragma_function_list:
select exists(select 1 from pragma_function_list where name='ChangeNotify');
1

It would seem possible to combine a query against pragma_function_list with a WHEN clause on the trigger:
CREATE TRIGGER AfterInsert AFTER INSERT ON T
WHEN EXISTS (SELECT 1 FROM pragma_function_list WHERE name = 'ChangeNotify')
BEGIN
SELECT ChangeNotify('T', 'INSERT', NEW.id);
END;
Except that query preparation attempts to resolve functions before execution, so, as far as I know, this isn't possible to do in a trigger.
I need to do the same thing and asked here: https://sqlite.org/forum/forumpost/a997b1a01d
Hopefully they come back with a solution.
Update
The SQLite forum's suggestion is to create a TEMP trigger when your extension loads -- https://sqlite.org/forum/forumpost/96160a6536e33f71
This is actually a great solution, as temp triggers:
are not visible to other connections
are cleaned up when the connection that created them ends
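A minimal sketch of such a temp trigger, reusing the ChangeNotify function and the table T from the question:
CREATE TEMP TRIGGER AfterInsertNotify AFTER INSERT ON T
BEGIN
-- only this connection sees the trigger, so only this connection
-- needs to have ChangeNotify registered
SELECT ChangeNotify('T', 'INSERT', NEW.id);
END;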

Related

In sqlite, how do I use a trigger to write to other files?

According to this trigger example, I can use a trigger to write to other files for auditing trails. How exactly do I do that?
I have tried this, to no avail:
CREATE TRIGGER log AFTER INSERT ON my_table
BEGIN
ATTACH DATABASE /location/otherfile AS logDb
Insert ..
I tried the above code in the sqlite console and got a syntax error near ATTACH. How else can I do this?
Thanks!
Inside a trigger body, only INSERT/UPDATE/DELETE/SELECT statements are allowed, and you cannot access other database schemas.
To access other files, you have to create a user-defined function, which requires support from any application that modifies the database.
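For example, if the application registers a hypothetical function write_audit on its connection (e.g. via sqlite3_create_function), the trigger body can simply call it and leave the file I/O to the application:
CREATE TRIGGER log AFTER INSERT ON my_table
BEGIN
-- write_audit is a hypothetical user-defined function supplied by the
-- application; SQL only calls it, the application writes the audit file
SELECT write_audit('my_table', NEW.rowid);
END;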

PLSQL triggers and normal triggers

Could anyone tell me the difference between a PL/SQL trigger and a trigger? In the Oracle docs I find two chapters on triggers, and I am unable to get a clear picture of the difference between the two.
If you look at the reference on Oracle Triggers (http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/triggers.htm#LNPLS020) you'll see this:
A trigger is a named PL/SQL unit that is stored in the database and executed (fired) in response to a specified event that occurs in the database.
So triggers are written in PL/SQL, which is why you find the syntax in the PL/SQL reference (http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#LNPLS020).
Bottom line, there is only one object being discussed: a trigger. A trigger is an object in the database written in PL/SQL. It is described generally in the Oracle documentation and again in the PL/SQL language reference.
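For example, in the hypothetical audit trigger below (the table and column names are made up for illustration), the CREATE TRIGGER header is DDL, while everything between BEGIN and END is ordinary PL/SQL:
CREATE OR REPLACE TRIGGER emp_salary_audit
AFTER UPDATE OF salary ON employees
FOR EACH ROW
BEGIN
-- log every salary change; :OLD and :NEW expose the row before and after
INSERT INTO salary_audit (emp_id, old_salary, new_salary, changed_on)
VALUES (:OLD.employee_id, :OLD.salary, :NEW.salary, SYSDATE);
END;
/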

Sqlite atomically read and update a counter?

I am trying to implement a simple counter using SQLite provided with Python. I am using CGI to write simple dynamic web pages; it's the only simple way I can think of to implement a counter. The problem is that I need to first read the counter value and then update it. Ideally, every user should read a unique value, but it's possible for two users to read the same counter value if they read simultaneously. Is there a simple way to make the read/write operation atomic? I am unfamiliar with SQL, so please give specific statements to do so. Thanks in advance.
I use a table with one column and one row to store the counter.
You may try this flow of SQL statements:
BEGIN EXCLUSIVE TRANSACTION;
-- update counter here
-- retrieve new value for user
COMMIT TRANSACTION;
While you perform updates in a transaction, the changes can be seen only on the connection on which they were performed. In this case we used an EXCLUSIVE transaction, which locks the database for other clients until the transaction is committed.
You are better off not using the EXCLUSIVE keyword in the transaction, to make it more efficient. The first SELECT automatically creates a shared lock, and the UPDATE statement will then turn it into an exclusive one. It is only necessary that the SELECT and the UPDATE are both inside the same explicit transaction.
BEGIN TRANSACTION;
-- read counter value
-- update counter value
COMMIT TRANSACTION;
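Concretely, assuming a one-row table such as counter(value) (the names are just for illustration), the two flows might look like this:
-- setup, once:
-- CREATE TABLE counter (value INTEGER NOT NULL);
-- INSERT INTO counter (value) VALUES (0);

-- first flow: exclusive transaction, update then read
BEGIN EXCLUSIVE TRANSACTION;
UPDATE counter SET value = value + 1;
SELECT value FROM counter;   -- the unique value handed to this user
COMMIT TRANSACTION;

-- second flow: plain transaction, read then update
BEGIN TRANSACTION;
SELECT value FROM counter;   -- remember this in the application
UPDATE counter SET value = value + 1;
COMMIT TRANSACTION;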

pl sql: trigger for insert data from another table

There is a table OLD and a similar one, NEW. In the existing process that fills table OLD, I want to add a trigger so that, for each newly inserted row, the row is also inserted into table NEW. Inside the trigger body, I need to include the query below, which aggregates values of OLD before they are inserted into NEW:
insert into NEW
select (select a.id,a.name,a.address,b.jitter,a.packet,a.compo,b.rtd,a.dur from OLD a,
select address,packet,compo, avg(jitter) as jitter, avg(rtd) as rtd from OLD
group by address,packet,compo ) b
where a.address=b.address and a.packet=b.packet and a.compo=b.compo;
Can you correct any possible mistakes or suggest other trigger syntax for the statement below?
CREATE OR REPLACE TRIGGER insertion
after update on OLD
for each row
begin
MY select query above
end;
In a for each row trigger you cannot query the table itself. You will get a mutating table error message if you do.
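One way to sidestep the mutating-table error is to drop FOR EACH ROW and use a statement-level trigger, which is allowed to query the triggering table. A rough sketch under that assumption (the trigger name is made up, the column list is taken from the question, and note that it re-aggregates the whole OLD table on every statement, just as the question's query does):
CREATE OR REPLACE TRIGGER copy_old_to_new
AFTER INSERT ON OLD
-- no FOR EACH ROW: statement-level triggers may query the triggering table
BEGIN
INSERT INTO NEW
SELECT a.id, a.name, a.address, b.jitter, a.packet, a.compo, b.rtd, a.dur
FROM OLD a,
     (SELECT address, packet, compo,
             AVG(jitter) AS jitter, AVG(rtd) AS rtd
      FROM OLD
      GROUP BY address, packet, compo) b
WHERE a.address = b.address
  AND a.packet = b.packet
  AND a.compo = b.compo;
END;
/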
I recommend using triggers only for the most basic functionality, such as handing out ID numbers and very basic checks.
If you do use triggers for more complex tasks, you may very easily end up with a system that's very hard to debug and maintain because of all kinds of actions that appear out of nowhere.
Look at this question for another approach: getting rid of Insert trigger
Oracle Streams might also be a good solution. In the apply handler, you can include your own custom PL/SQL code. This procedure will be called after the COMMIT, so you can avoid mutating table errors.
However, Streams requires a large amount of setup to make it work. It might be overkill for what you are doing.

How to find out which package/procedure is updating a table?

Is it possible to find out which package, or which procedure in a package, is updating a table?
Due to a certain project being handed over without proper documentation (the person who handed over the project has since left), data that we know we have updated always goes back to some strange source point.
We are guessing that this could be a database job or scheduler that is running the update command without our knowledge. I am hoping there is a way to find out where the code that is updating the table is being called from, perhaps by installing a trigger on the table we are monitoring that records the source.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the insert statement in the trigger with some condition that makes it log only when the change you're interested in is occurring - on my test server this statement runs rather slowly (1 second), so I wouldn't want to be logging all these updates. Of course, in that case, you'd need to change the trigger to be a row-level one so that you could inspect the :new or :old values. If you are really concerned about the overhead of the select, you can change it to not join against v$sql and instead just save the SQL_ADDRESS column, then schedule a job with DBMS_JOB to go off and update the sql_text column with a second update statement, thereby offloading the update into another session and not blocking your original update.
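A rough sketch of that row-level variant, assuming a hypothetical column some_column whose changes are the ones you care about:
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE ON smb_test
FOR EACH ROW
-- some_column is a hypothetical column; the WHEN clause keeps the (slow)
-- logging insert from firing for changes you don't care about
WHEN (NEW.some_column <> OLD.some_column)
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/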
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects where object_id = <program_id>;
In the case when the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it - you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back - you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job, you can find out what scheduled jobs exist and look into what they do (a couple of starting-point queries for that are sketched after the list below). Other things you can do are:
Look at the dependency views, e.g. ALL_DEPENDENCIES, to see what packages, triggers, etc. use that table. Depending on the size of your system, that may return a lot of objects to trawl through.
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently.
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
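For the scheduled-jobs angle mentioned above, a couple of hedged starting points (which rows you can see depends on your privileges; the DBA_ variants show everything):
-- jobs created through DBMS_SCHEDULER
select owner, job_name, job_action
from all_scheduler_jobs;

-- older-style jobs created through DBMS_JOB
select job, log_user, what
from all_jobs;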
Just write an "after update" trigger and, in this trigger, log the results of "DBMS_UTILITY.FORMAT_CALL_STACK" in a dedicated table.
The purpose of this function is precisely to give you the complete call stack of all the stored procedures and triggers that have been fired to reach your code.
I am writing from the mobile app, so I can't give you more detailed examples, but if you google for it you'll find many of them.
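A minimal sketch of that idea (the table and trigger names here are made up, and table_of_interest stands for whichever table keeps changing):
CREATE TABLE call_stack_log
( logged_at DATE
, stack     VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trace_table_updates
AFTER UPDATE ON table_of_interest
DECLARE
v_stack VARCHAR2(4000);
BEGIN
-- capture the chain of PL/SQL units (packages, procedures, triggers)
-- that led to this update
v_stack := DBMS_UTILITY.FORMAT_CALL_STACK;
INSERT INTO call_stack_log (logged_at, stack)
VALUES (SYSDATE, v_stack);
END;
/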
A quick and dirty option, if you're working locally and are only interested in the first thing that's altering the data, is to throw an error in the trigger instead of logging. That way, you get the usual stack trace, it's a lot less typing, and you don't need to create a new table:
CREATE OR REPLACE TRIGGER whatever -- hypothetical name; anything works
AFTER UPDATE ON table_of_interest
BEGIN
RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/
