SQLite: table constraints and triggers

I know the order of triggers in SQLite is undefined (you cannot be sure which trigger will be executed first), but what about the relationship between table constraints and triggers?
I mean, suppose I have, for example, a UNIQUE (or CHECK) constraint on a column, and BEFORE and AFTER UPDATE triggers on that table. If the UNIQUE column is modified, when does SQLite check the UNIQUE constraint: before calling the BEFORE triggers, after calling the AFTER triggers, between them, or in an undefined order?
I have found nothing in the SQLite docs about it.

SQLite recommends not modifying data in BEFORE UPDATE/DELETE triggers, since doing so leads to undefined behaviour (see "Cautions on the use of BEFORE triggers" in the documentation).
There is a hint in a comment in the SQLite source code (src/update.c) that shows what happens under the hood:
/* Fire any BEFORE UPDATE triggers. This happens before constraints are
** verified. One could argue that this is wrong.
*/
Looking at the source code, whenever SQLite updates a table it performs these actions (a small illustration follows the list):
1. Loads the table data used by the update.
2. Evaluates the new column values (this is what populates old.* and new.* for the triggers).
3. Executes the BEFORE UPDATE trigger(s).
4. If the BEFORE UPDATE trigger(s) did not delete the row:
   - Loads the table data not used by the trigger.
   - Checks constraints (primary keys, foreign keys, uniqueness, ON ... CASCADE actions, etc.).
   - Executes the AFTER UPDATE trigger(s).
5. If any BEFORE UPDATE trigger deleted the row:
   - There is no need to check constraints.
   - No AFTER UPDATE triggers are run.
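As a rough illustration of that order (the table, trigger, and column names below are made up for this sketch; they do not come from the SQLite source or the question):
CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE log (entry TEXT);
CREATE TRIGGER t_before BEFORE UPDATE ON t
BEGIN
  INSERT INTO log VALUES ('before: ' || old.name || ' -> ' || new.name);
END;
CREATE TRIGGER t_after AFTER UPDATE ON t
BEGIN
  INSERT INTO log VALUES ('after: ' || old.name || ' -> ' || new.name);
END;
INSERT INTO t (name) VALUES ('a'), ('b');
-- The UNIQUE constraint is satisfied here: the BEFORE trigger fires,
-- the constraint is checked, and only then does the AFTER trigger fire.
UPDATE t SET name = 'c' WHERE name = 'a';
SELECT entry FROM log;   -- 'before: a -> c', then 'after: a -> c'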

When the documentation does not say anything about it, the order is undefined.
As long as the triggers do not have side effects outside the database, this does not matter, because any changes made by a trigger would be rolled back if a constraint fails.
Please note that SQLite takes backwards compatibility very seriously, so it is unlikely that the actual order will ever change.
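That rollback can be seen with the same hypothetical tables as in the sketch above: the BEFORE trigger fires before the constraint check, but its changes disappear when the constraint fails.
-- Violates the UNIQUE constraint on name ('b' already exists).
-- The BEFORE trigger fires and inserts its log row, but the failed
-- constraint check backs out the whole statement, including that row,
-- and no AFTER trigger runs.
UPDATE t SET name = 'b' WHERE name = 'c';   -- UNIQUE constraint failed: t.name
SELECT entry FROM log;                      -- no entries from the failed UPDATE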

Related

Is there a way to efficiently test whether a newly inserted/updated row matches a SQLite query?

I have a SQLite table in my application that periodically has INSERT/UPDATE statements executed against it. I would like to display a view in my application that reflects some query that is run against that table, and keep it continually updated as the table contents change. Since the table could be large, I would like to avoid having to re-run the query each time the table is updated so that I can update the view.
One idea I had was to use SQLite's Data Change Notification Callbacks to be notified whenever an INSERT/UPDATE occurs against the table in question. In my callback, I have the rowid of the newly-updated row, and I would like to see whether it matches the query. Assuming I have the query available as a prepared sqlite3_stmt, what would be the most efficient way to test whether the row would be matched by the query?
Aside: I know that I can't do anything in the callback itself that would affect the state of the database connection, and that's fine. I can defer the actual work of checking the query until later to ensure safety; I'm just trying to determine what the best mechanism for checking the query against the new row contents would be.
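One way to sketch the check the question describes (the table and column names below are invented for illustration; they are not from the original query): prepare a second statement that applies the same predicate but is restricted to the single rowid reported by the callback. If stepping it returns a row, the changed row matches the query.
-- Assumed original query for this sketch:
--   SELECT * FROM items WHERE status = 'active' AND price > 10;
-- Prepared once and reused; bind the rowid from the change notification.
SELECT 1
FROM items
WHERE status = 'active'
  AND price > 10
  AND rowid = ?1
LIMIT 1;
Because the statement is constrained to a single rowid, SQLite can satisfy it with a direct table lookup rather than a scan.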

SQLite: Are updates to two tables within an insert trigger atomic?

I refactored a table that stored both metadata and data into two tables, one for metadata and one for data. This allows metadata to be queried efficiently.
I also created an updatable view with the original table's columns, using sqlite's insert, update and delete triggers. This allows calling code that needs both data and metadata to remain unchanged.
The insert and update triggers write each incoming row as two rows - one in the metadata table and one in the data table, like this:
-- View
CREATE VIEW IF NOT EXISTS Item as select n.Id, n.Title, n.Author, c.Content
FROM ItemMetadata n, ItemData c where n.id = c.Id
-- Trigger
CREATE TRIGGER IF NOT EXISTS item_update
INSTEAD OF UPDATE OF id, Title, Author, Content ON Item
BEGIN
UPDATE ItemMetadata
SET Title=NEW.Title, Author=NEW.Author
WHERE Id=old.Id;
UPDATE ItemData SET Content=NEW.Content
WHERE Id=old.Id;
END;
Questions:
Are the updates to the ItemMetadata and ItemData tables atomic? Is there a chance that a reader can see the result of the first update before the second update has completed?
Originally I had the WHERE clauses be WHERE rowid=old.rowid but that seemed to cause random problems so I changed them to WHERE Id=old.Id. The original version was based on tutorial code I found. But after thinking about it I wonder how sqlite even comes up with an old rowid - after all, this is a view across multiple tables. What rowid does sqlite pass to an update trigger, and is the WHERE clause the way I first coded it problematic?
The documentation says:
No changes can be made to the database except within a transaction. Any command that changes the database (basically, any SQL command other than SELECT) will automatically start a transaction if one is not already in effect.
Commands in a trigger are considered part of the command that triggered the trigger.
So all commands in a trigger are part of a transaction, and atomic.
Views do not have a (usable) rowid.
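For reference, the question's INSTEAD OF INSERT trigger is not shown above; a sketch of what it might look like (the trigger name is assumed, the table and column names are taken from the question) behaves the same way, with both INSERT statements running as part of the transaction of the statement that fired the trigger:
CREATE TRIGGER IF NOT EXISTS item_insert
INSTEAD OF INSERT ON Item
BEGIN
  INSERT INTO ItemMetadata (Id, Title, Author)
  VALUES (NEW.Id, NEW.Title, NEW.Author);
  INSERT INTO ItemData (Id, Content)
  VALUES (NEW.Id, NEW.Content);
END;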

R - how to react to database inserts/updates/deletes?

I'm reading in data from an SQLite database table into a data.frame with R's DBI. Often (as often as every 5 secs), new records get added into the database table externally, or existing ones updated/deleted, at which point I need to propagate these changes to my data.frame.
So the question is how can I hook onto and respond to these database events in R? I don't want to have to keep querying the database every 5 secs just to make sure nothing has changed. Is there some callback mechanism at my disposal?
If you have access to the C code that is writing your SQL data, then you can implement a callback:
http://www.sqlite.org/c3ref/update_hook.html
and then in your callback function you could update the timestamp of a file if the table being modified is one your R code cares about. Then your R code checks the timestamp of that file, and only if it has changed does it need to query the SQLite database.
Now, I don't know whether you could add a callback to the SQLite connection held by R and expect it to fire when another SQLite connection/process changes the database table. I doubt it; I suspect the callbacks are only triggered if the connection they are registered with does the update, because otherwise all sorts of asynchronous things would have to happen, and there is no event handler.
Another idea is to use triggers to update a database table of modification times. Define triggers on all tables you care about so that they update a row in a "last modified" table. Then use the file modification time to check for any change to the database, and then your R only has to query the "last modified" table to see what specific table has changed since last check.
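A minimal sketch of that trigger-based idea (the table, trigger, and column names here are made up): keep one row per watched table in a small "last modified" table and bump it from AFTER triggers, so R only ever has to query that small table.
CREATE TABLE IF NOT EXISTS last_modified (
  table_name TEXT PRIMARY KEY,
  changed_at TEXT
);
CREATE TRIGGER IF NOT EXISTS readings_ai AFTER INSERT ON readings
BEGIN
  INSERT OR REPLACE INTO last_modified (table_name, changed_at)
  VALUES ('readings', datetime('now'));
END;
-- Similar AFTER UPDATE and AFTER DELETE triggers would be added for
-- each table the R code cares about.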

Replace a DELETE with an UPDATE using a trigger in SQLite

Is it possible in SQLite to perform an UPDATE instead of a DELETE within a trigger?
I.e., I have these two tables:
CREATE TABLE author (authorid INTEGER PRIMARY KEY, temporal NUMERIC);
CREATE TABLE comment (id INTEGER PRIMARY KEY, text TEXT, authorid INTEGER, FOREIGN KEY(authorid) REFERENCES author(authorid));
When a deletion of an author is attempted and there is any comment referencing that author, I want to update the "temporal" field and abort the deletion.
I've tested different approaches with triggers, but I have not found a way to do both things: make the update and abort the delete. I can abort the delete (though in this case that is not necessary, as it is already enforced by the foreign key constraint) or make the update (though the delete will then remove the record, so the update has no effect).
Aborting the deletion is possible only by using RAISE to generate an error, but that has the consequence that any UPDATE made in the trigger gets rolled back as well.
You could make author a view and create several INSTEAD OF triggers that pass through most actions to the base table.
However, it would be much easier to handle the temporal logic in your application.
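A rough sketch of that view-based idea (the base table name author_base and the trigger name are assumptions, and the foreign key in comment would then have to reference the base table): expose author as a view over the base table and use an INSTEAD OF DELETE trigger that either marks the author or really deletes it.
CREATE TABLE author_base (authorid INTEGER PRIMARY KEY, temporal NUMERIC);
CREATE VIEW author AS SELECT authorid, temporal FROM author_base;
CREATE TRIGGER author_instead_of_delete
INSTEAD OF DELETE ON author
BEGIN
  -- Comments still reference the author: only mark it.
  UPDATE author_base
  SET temporal = 1
  WHERE authorid = old.authorid
    AND EXISTS (SELECT 1 FROM comment WHERE comment.authorid = old.authorid);
  -- No comments reference the author: actually delete it.
  DELETE FROM author_base
  WHERE authorid = old.authorid
    AND NOT EXISTS (SELECT 1 FROM comment WHERE comment.authorid = old.authorid);
END;
Similar INSTEAD OF INSERT and INSTEAD OF UPDATE triggers would be needed so that the remaining statements still pass through to the base table.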

PL/SQL: trigger to insert data from another table

There is a table OLD and a similar one, NEW. Into the existing process that fills table OLD I want to add a trigger so that, for each newly inserted row, that row is also inserted into table NEW. Inside the body of the trigger I need to include the query below, which aggregates values of OLD before they are inserted into NEW:
insert into NEW
select a.id, a.name, a.address, b.jitter, a.packet, a.compo, b.rtd, a.dur
from OLD a,
     (select address, packet, compo, avg(jitter) as jitter, avg(rtd) as rtd
      from OLD
      group by address, packet, compo) b
where a.address = b.address and a.packet = b.packet and a.compo = b.compo;
Can you correct any possible mistakes or suggest another trigger syntax for the statement below?
CREATE OR REPLACE TRIGGER insertion
after update on OLD
for each row
begin
-- my insert/select query from above
end;
In a for each row trigger you cannot query the table itself. You will get a mutating table error message if you do.
I recommend using triggers only for the most basic functionality, such as handing out ID numbers and very basic checks.
If you do use triggers for more complex tasks, you may very easily end up with a system that is very hard to debug and maintain, because of all kinds of actions that appear out of nowhere.
Look at this question for another approach: getting rid of Insert trigger
Oracle Streams might also be a good solution. In the apply handler, you can include your own custom PL/SQL code. This procedure will be called after the COMMIT, so you can avoid mutating table errors.
However, Streams requires a large amount of setup to make it work. It might be overkill for what you are doing.
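If a trigger is used despite the advice above, one commonly cited workaround for the mutating table error is a statement-level trigger (one without FOR EACH ROW), which is allowed to query the table it is defined on. The sketch below assumes the columns of NEW mirror the select list, and it re-aggregates the whole OLD table on every statement, so it illustrates the mechanism rather than a recommended design.
CREATE OR REPLACE TRIGGER old_after_insert
AFTER INSERT ON OLD
BEGIN
  INSERT INTO NEW (id, name, address, jitter, packet, compo, rtd, dur)
  SELECT a.id, a.name, a.address, b.jitter, a.packet, a.compo, b.rtd, a.dur
  FROM OLD a
  JOIN (SELECT address, packet, compo,
               AVG(jitter) AS jitter, AVG(rtd) AS rtd
        FROM OLD
        GROUP BY address, packet, compo) b
    ON a.address = b.address
   AND a.packet = b.packet
   AND a.compo = b.compo;
END;
/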
