How to validate/debug a trigger statement after it is loaded - SQLite

There are two tables. The 'master' (tblFile) holds record details of files that have been processed by some Java code; the PK is the file name. A column of interest in this table is the 'status' column (VALID or INVALID).
The subordinate table (tblAnomaly) holds many records for the anomalies found while processing each file. This table has the file name from tblFile as a FK, and along with other columns of relevant data there is a boolean-type column which acts as an acceptance flag for the anomaly: NULL means accepted, '#' means not accepted.
The user manually works through the list of anomalies presented in a Swing ListPane and checks off the anomalies as they address each issue in the source file. When all the anomalies have been dealt with, I need the status of the file in tblFile to change to VALID so that it can be imported into a database.
Here is the trigger I have settled on, having designed the statements individually in an SQL editor. However, I do not know how to validate/debug the trigger statement after it is loaded into the database, so I cannot work out why it does not work: no action and no feedback!
CREATE TRIGGER updateFileStatus
AFTER UPDATE ON tblAnomaly
WHEN 0 = (SELECT COUNT(*) FROM tblAnomaly WHERE tblAnomaly.file_name = tblFile.file_name AND tblAnomaly.accept = '#')
BEGIN
UPDATE tblFile
SET tblFile.file_status = 'VALID'
WHERE tblFile.file_name = tblAnomaly.file_name;
END;

So I worked it out! Here is the solution that works.
CREATE TRIGGER updateFileStatus
AFTER UPDATE ON tblAnomaly
WHEN 0 = (SELECT COUNT(*)
          FROM tblAnomaly
          WHERE file_name = old.file_name
          AND accept = '#')
BEGIN
UPDATE tblFile
SET file_status = 'VALID'
WHERE file_name = old.file_name;
END;
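One way to see what the database actually stored, and then exercise the trigger by hand, is to read the definition back from sqlite_master and run a test UPDATE from the sqlite3 shell or an SQL editor. A minimal sketch against the schema above; the file name value is made up:

-- Read back the trigger text as stored
SELECT name, sql FROM sqlite_master WHERE type = 'trigger' AND name = 'updateFileStatus';

-- Clear the last outstanding anomaly for one file ...
UPDATE tblAnomaly SET accept = NULL WHERE file_name = 'example_file.txt';

-- ... then confirm the parent row flipped to VALID
SELECT file_status FROM tblFile WHERE file_name = 'example_file.txt';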

Related

I need to recover some data in one column from a backup table and write it to the relevant row in the working database

My table has a Notes column (TEXT NOT NULL) and I just noticed that all notes older than a month or so are missing. I have a month-old backup with most of those notes.
I have a column called PO which will be the unique identifier for each row.
I'll call the backup DB Source and the working DB Target.
So, I want to copy the column Notes from Source into Target where Target.PO = Source.PO, but only when Source.Notes <> "".
The following code throws a syntax error even though an online validator OKs it.
UPDATE
Target,
Source
SET
Target.Notes = Source.Notes
WHERE
Target.PO = Source.PO
AND Source.Notes <> '""';
You can only UPDATE a single table; you are specifying two tables.
You could use:-
UPDATE target
SET notes = (SELECT notes FROM source WHERE source.po = target.po)
WHERE target.po IN (SELECT po FROM source)
AND target.notes = ''
AND (SELECT source.notes FROM source WHERE source.po = target.po) <> ''
;
This does an additional check using AND target.notes = '' which could be omitted to exactly be the requested solution.
Edit re comment
I totally screwed up my description of the tables and the source code submitted. I have 2 databases, Target and Source. They both have a table called Sales. I want to copy the column Notes from Source.sales to Target.sales if source.sales.notes <> "" and where source.sales.po = target.sales.po
Then what you do is open/connect the Target database, then ATTACH the Source database (using ATTACH <the filename for the Source Database> AS source).
Then you could use (untested):-
UPDATE main.sales AS ms
SET notes = (SELECT notes FROM source.sales AS ss WHERE ss.po = ms.po)
WHERE ms.po IN (SELECT po FROM source.sales)
AND ms.notes = ''
AND (SELECT notes FROM source.sales WHERE po = ms.po) <> ''
;
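For completeness, the surrounding ATTACH/DETACH from the Target connection could look like this (a sketch; the file path is a placeholder):

ATTACH DATABASE '/path/to/source.db' AS source;  -- placeholder path to the Source database file
-- ... run the UPDATE above ...
DETACH DATABASE source;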

FluentMigrator - Check if data/row exists

I am using FluentMigrator to migrate one database schema to another. I have a case in which I want to check if some data (specifically a row) exists before adding a new one.
if (!Schema.Table("MyTable").Something().Exists)
Insert.IntoTable("MyTable").Row(new { Id = 100, Field="Value" });
How do I check that the row exists first?
As of version 3.0, there is no built-in feature in FluentMigrator to insert a row only if it doesn't exist. There is a request on GitHub to add this feature: https://github.com/fluentmigrator/fluentmigrator/issues/640.
However, you can use the Execute.Sql() method and write your own SQL query that checks whether the row exists before inserting it, as shown here: Check if a row exists, otherwise insert.
Execute.Sql(#"
begin tran
if not exists (select * from MyTable with (updlock, rowlock, holdlock) where id='100' and Field='Value')
begin
insert into MyTable values (100, 'Value')
end
commit
");

How to skip a transaction in the Replicat (target system) in Oracle GoldenGate

Requirement:
The source table contains 5 columns. We are replicating 3 of the 5 columns to the target.
SEQ_ID is an additional column on the target.
When an update operation is performed on columns which are not in the target table, SEQ_ID is increased.
SEQ_ID should increase only when an update is performed on columns which are present on the target.
Enabling unconditional supplemental table-level logging on the selected columns (ID, AGE, COL1) to be replicated:
Source:
Table name: Test1(ID,AGE,COL1,COL2,COL3)
Target:
Table name: Test1(ID,AGE,COL1,SEQ_ID)
We created a sequence to increase the SEQ_ID when an insert or update happens.
Scenario:
If an insert or update happens on the source table on these columns (ID, AGE, COL1), SEQ_ID is increased,
and if an update happens on the other columns (COL2, COL3), SEQ_ID also gets incremented.
Our requirement is that when an update happens on the other columns (COL2, COL3), SEQ_ID should not get incremented.
I want to skip the transaction of updates happening on columns (COL2, COL3).
Source:
Primary extract test_e1
EXTRACT TEST_e1
USERID DBTUAT_GG,PASSWORD dbt_1234
EXTTRAIL /DB_TRACK_GG/GGS/dirdat/dd
GETUPDATEBEFORES
--IGNOREUPDATES
--IGNOREDELETES
NOCOMPRESSUPDATES
TABLE HARI.TEST1,COLS(ID,AGE,COL1),FILTER (ON UPDATE,IGNORE UPDATE, #STREQ(before.AGE, AGE) = 0);
Datapump test_p1:
EXTRACT TEST_P1
USERID DBTUAT_GG,PASSWORD dbt_1234
RMTHOST 10.24.187.235, MGRPORT 7809,
RMTTRAIL /Trail_files/tt
--PASSTHRU
TABLE DBTUAT_GG.TEST1;
Target:
Target Replicat file:
Edit param test_r
REPLICAT TEST_R
USERID GGPROD,PASSWORD GGPROD_123
SOURCEDEFS ./dirsql/def32.sql
HANDLECOLLISIONS
IGNOREDELETES
INSERTMISSINGUPDATES
MAP HARI.TEST1, TARGET HARI.TEST1, &
SQLEXEC (ID test_num,QUERY "select GGPROD.test_seq.NEXTVAL test_val from dual", NOPARAMS), &
COLMAP(USEDEFAULTS,SEQ_ID=test_num.test_val);
Kindly suggest any possible solutions.
First note: you don't need USERID and PASSWORD in the Pump PRM. The Pump process does not connect to any database.
Actually you have already achieved your goal. You are only replicating the data when AGE is modified, since there is a FILTER clause in the Extract PRM file. The ID column should not get changed since it is the PK. The only problem is that you are increasing the SEQ_ID if a delete gets replicated.
Something like this should work:
ALLOWDUPTARGETMAP
IGNOREDELETES
-- just inserts and updates that change COL1
MAP HARI.TEST1, TARGET HARI.TEST1, &
SQLEXEC (ID test_num,QUERY "select GGPROD.test_seq.NEXTVAL test_val from dual", NOPARAMS), &
COLMAP(USEDEFAULTS, SEQ_ID = test_num.test_val);
-- just replicate the delete operations
GETDELETES
IGNOREINSERTS
IGNOREUPDATES
MAP HARI.TEST1, TARGET HARI.TEST1;
GETINSERTS
GETUPDATES

Modify a column to NULL - Oracle

I have a table named CUSTOMER, with a few columns. One of them is Customer_ID.
Initially the Customer_ID column WILL NOT accept NULL values.
I've made some changes at the code level so that the Customer_ID column will accept NULL values by default.
Now my requirement is that I need to make this column accept NULL values again.
For this I've been executing the below query:
ALTER TABLE Customer MODIFY Customer_ID nvarchar2(20) NULL
I'm getting the following error:
ORA-01451 error, the column already allows null entries so
therefore cannot be modified
This is because I've already made the Customer_ID column accept NULL values.
Is there a way to check whether the column will accept NULL values before executing the above query?
You can use the column NULLABLE in USER_TAB_COLUMNS. This tells you whether the column allows nulls using a binary Y/N flag.
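For example, a quick check straight from SQL*Plus or an IDE (table and column names taken from the question):

SELECT nullable
FROM user_tab_columns
WHERE table_name = 'CUSTOMER'
AND column_name = 'CUSTOMER_ID';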
If you wanted to put this in a script you could do something like:
declare
  l_null user_tab_columns.nullable%type;
begin
  select nullable into l_null
  from user_tab_columns
  where table_name = 'CUSTOMER'
  and column_name = 'CUSTOMER_ID';

  if l_null = 'N' then
    execute immediate 'ALTER TABLE Customer
      MODIFY (Customer_ID nvarchar2(20) NULL)';
  end if;
end;
It's best not to use dynamic SQL to alter tables. Do it manually and be sure to double-check everything first.
Or you can just ignore the error:
declare
  already_null exception;
  pragma exception_init (already_null, -01451);
begin
  execute immediate 'alter table <TABLE> modify(<COLUMN> null)';
exception
  when already_null then null;
end;
/
You might encounter this error when you have previously provided a DEFAULT ON NULL value for the NOT NULL column.
If this is the case, to make the column nullable, you must also reset its default value to NULL when you modify its nullability constraint.
eg:
DEFINE table_name = your_table_name_here
DEFINE column_name = your_column_name_here
ALTER TABLE &table_name
MODIFY (
&column_name
DEFAULT NULL
NULL
);
I did something like this and it worked fine.
Try to execute the query and, if any error occurs, catch the SQLException.
try {
    stmt.execute("ALTER TABLE Customer MODIFY Customer_ID nvarchar2(20) NULL");
} catch (SQLException sqe) {
    Logger("Column to be modified to NULL is already NULL : " + sqe);
}
Is this the correct way of doing it?
To modify the constraints of an existing table, for example to add a NOT NULL constraint to a column, follow these steps:
1) Select the table you want to modify.
2) Click on Actions ---> Column ---> Add.
3) Give the column name, datatype, size, etc. and click OK.
4) You will see that the column has been added to the table.
5) Now click on the Edit button to the left of the Actions button.
6) You will get various table-modification options.
7) Select Column from the list.
8) Select the particular column you want to make NOT NULL.
9) Select "Cannot be null" from the column properties.
10) That's it.
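For reference, the SQL equivalent of these steps on the CUSTOMER table from the question would be along these lines:

-- Add a NOT NULL constraint to an existing column
ALTER TABLE customer MODIFY (customer_id NOT NULL);
-- Or allow NULLs again
ALTER TABLE customer MODIFY (customer_id NULL);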

How to handle a missing feature of SQLite: disable triggers?

I don't have the names of the triggers for a specific table stored anywhere.
For example, how can I drop all triggers?
What would you do?
So here it is 2015 and there still is no 'disable triggers' in SQLite. For a mobile application this can be problematic, especially if it's a corporate app requiring offline functionality and local data.
An initial data load can be slowed to a crawl by trigger execution, even when you don't wrap each insert in an individual transaction.
I solved this issue fairly simply using SQLite SQL. I have a settings table that doesn't participate in the initial load; it holds a list of key/value pairs. One key, 'fireTrigger', has a bit value of 0 or 1. Every trigger I have includes an expression that selects that value and fires the trigger only if it equals 1.
This expression is in addition to any expressions evaluated on the data relating to the trigger, e.g.:
AND 1 = (SELECT val FROM MTSSettings WHERE key = 'fireTrigger')
In simple, clean effect this allows me to disable/enable the trigger with a simple UPDATE to the settings table.
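A sketch of how that check fits into a trigger's WHEN clause (MTSSettings and fireTrigger come from this answer; the orders and orders_audit tables are made up purely for illustration):

CREATE TRIGGER tr_orders_insert
AFTER INSERT ON orders
WHEN 1 = (SELECT val FROM MTSSettings WHERE key = 'fireTrigger')
BEGIN
    -- hypothetical audit insert, standing in for whatever the trigger normally does
    INSERT INTO orders_audit (order_id, changed_at) VALUES (new.id, datetime('now'));
END;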
I wrote a very simple extension function to set a boolean value to true or false, and a function to retrieve that value (GetAllTriggersOn()).
With this function I can define all my triggers like:
CREATE TRIGGER tr_table1_update AFTER UPDATE ON TABLE1 WHEN GetAllTriggersOn()
BEGIN
    -- ...
END;
SQLite stores schema (meta) information in the built-in sqlite_master table.
To get a list of available triggers use the below query:
SELECT name FROM sqlite_master
WHERE type = 'trigger' -- AND tbl_name = 'a_table_name'
Set a flag in your database and use it in the triggers' WHEN condition.
Say you want to create a trigger on the "clients" table after an insert. You have created a table "trigger_settings" with a TINYINT "triggers_on" field; this is your flag. You can set the field to 0 when you want to turn the triggers off and to 1 when you want to turn them back on.
Then you create your trigger with a WHEN condition that checks the "triggers_on" field.
For example:
CREATE TRIGGER IF NOT EXISTS log_client_data_after_insert
AFTER INSERT ON [clients]
WHEN (SELECT triggers_on FROM trigger_settings) = 1
BEGIN
    your_statement;
END;
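Toggling the triggers is then just a one-row update on the flag table (using the trigger_settings table described above):

-- Turn triggers off before a bulk load
UPDATE trigger_settings SET triggers_on = 0;
-- ... bulk insert into clients ...
-- Turn them back on afterwards
UPDATE trigger_settings SET triggers_on = 1;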
Maybe you can make a stored procedure for dropping and creating them. Is that good for you?
Expanding on Nick Dandoulakis's answer, you could drop all relevant triggers and then reinstate them before the transaction's completion:
BEGIN;
SELECT name, sql FROM sqlite_master WHERE type = 'trigger' AND tbl_name = 'mytable';
-- store all results
-- for each name: DROP TRIGGER $name;
-- do normal work
-- for each sql: execute the SQL verbatim
COMMIT;
Expanding on other answers, this is how I'm doing it. Take into account that this disables all triggers for all tables in the database, except some of them used by Spatialite.
SQLITE_FILE=/tmp/my.sqlite
# Define output sql files as variables
CREATE_TRIGGER_SQL=/tmp/create_triggers.sql
DROP_TRIGGER_SQL=/tmp/drop_triggers.sql
## Dump CREATE TRIGGER statements to a file ##
# To wrap statements in a transaction
echo -e "BEGIN;\n\n" > "${CREATE_TRIGGER_SQL}"
# `SELECT sql` does not output semicolons, so we must concatenate them
sqlite3 -bail "${SQLITE_FILE}" "SELECT sql || ';' FROM sqlite_master WHERE type = 'trigger' AND (name NOT LIKE 'gid_%' AND name NOT LIKE 'ggi_%' AND name NOT LIKE 'ggu_%' AND name NOT LIKE 'gii_%' AND name NOT LIKE 'giu_%' AND name NOT LIKE 'vwgcau_%' AND name NOT LIKE 'vtgcau_%' AND name NOT LIKE 'gcau_%' AND name NOT LIKE 'geometry_columns_%' AND name NOT LIKE 'gcfi_%' AND name NOT LIKE 'gctm_%' AND name NOT LIKE 'vtgcfi_%' AND name NOT LIKE 'vwgcfi_%' AND name NOT LIKE 'vtgcs_%' AND name NOT LIKE 'vwgc_%' AND name NOT LIKE 'vtgc_%' AND name NOT LIKE 'gcs_%');" >> "${CREATE_TRIGGER_SQL}"
echo -e "\n\nCOMMIT;" >> "${CREATE_TRIGGER_SQL}"
## Dump DROP TRIGGER statements to a file ##
echo -e "BEGIN;\n\n" > "${DROP_TRIGGER_SQL}"
sqlite3 -bail "${SQLITE_FILE}" "SELECT 'DROP TRIGGER ' || name || ';' FROM sqlite_master WHERE type = 'trigger' AND (name NOT LIKE 'gid_%' AND name NOT LIKE 'ggi_%' AND name NOT LIKE 'ggu_%' AND name NOT LIKE 'gii_%' AND name NOT LIKE 'giu_%' AND name NOT LIKE 'vwgcau_%' AND name NOT LIKE 'vtgcau_%' AND name NOT LIKE 'gcau_%' AND name NOT LIKE 'geometry_columns_%' AND name NOT LIKE 'gcfi_%' AND name NOT LIKE 'gctm_%' AND name NOT LIKE 'vtgcfi_%' AND name NOT LIKE 'vwgcfi_%' AND name NOT LIKE 'vtgcs_%' AND name NOT LIKE 'vwgc_%' AND name NOT LIKE 'vtgc_%' AND name NOT LIKE 'gcs_%');" >> "${DROP_TRIGGER_SQL}"
echo -e "\n\nCOMMIT;" >> "${DROP_TRIGGER_SQL}"
## Execute like ##
sqlite3 -bail "${SQLITE_FILE}" < "${DROP_TRIGGER_SQL}"
# do things
sqlite3 -bail "${SQLITE_FILE}" < "${CREATE_TRIGGER_SQL}"
