SQLite: delete all connected rows from 3 joined tables

How do you delete all connected rows across 3 different tables in SQLite?
**Table1**
| ID | Number |
|----|--------|
| 1  | 0      |
| 2  | 1      |
| 3  | 0      |
**Table2**
| ID | Tax | Table1ID |
|----|-----|----------|
| 1  | 21  | 1        |
| 2  | 15  | 2        |
| 3  | 10  | 3        |
**Table3**
| ID | Price | Table2ID |
|----|-------|----------|
| 1  | 56    | 1        |
| 2  | 5     | 2        |
| 3  | 98    | 3        |
I want to delete all rows from Table1-3 where Table1.Number = 0. How can I do that?

What you need is to define the tables properly so that Table2.Table1ID references Table1.ID and Table3.Table2ID references Table2.ID with the action ON DELETE CASCADE:
PRAGMA foreign_keys = ON;
CREATE TABLE Table1(
    `ID` INTEGER PRIMARY KEY,
    `Number` INTEGER
);
CREATE TABLE Table2(
    `ID` INTEGER PRIMARY KEY,
    `Tax` INTEGER,
    `Table1ID` INTEGER REFERENCES Table1(ID) ON DELETE CASCADE
);
CREATE TABLE Table3(
    `ID` INTEGER PRIMARY KEY,
    `Price` INTEGER,
    `Table2ID` INTEGER REFERENCES Table2(ID) ON DELETE CASCADE
);
Note that you must turn on foreign key support because it is off by default.
Now every time you delete a row from Table1, all rows of Table2 that hold a reference to the deleted row of Table1 will be deleted too.
Also all rows of Table3 that hold a reference to the deleted rows of Table2 will be deleted.
So all you need is:
DELETE FROM Table1 WHERE Number = 0;
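As a quick sanity check with the question's sample data (tables created as above, foreign key support on):
INSERT INTO Table1 VALUES (1, 0), (2, 1), (3, 0);
INSERT INTO Table2 VALUES (1, 21, 1), (2, 15, 2), (3, 10, 3);
INSERT INTO Table3 VALUES (1, 56, 1), (2, 5, 2), (3, 98, 3);
DELETE FROM Table1 WHERE Number = 0;
-- Only the chain hanging off Table1.ID = 2 survives:
SELECT COUNT(*) FROM Table1; -- 1
SELECT COUNT(*) FROM Table2; -- 1
SELECT COUNT(*) FROM Table3; -- 1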

Related

MonetDB recursive CTE (common table expressions)

It seems MonetDB does not support recursive CTEs. This is a useful feature that I used to get the BOM (bill of materials) out of ERP systems. For greater flexibility I used Firebird recursive stored procedures to enhance the output with extra calculations. A good example of SQL Server recursive CTEs can be found here: https://www.essentialsql.com/recursive-ctes-explained/
The question is: is there any way I can achieve similar results in MonetDB?
There is currently no support for recursive CTEs in MonetDB[Lite]. The solution you have proposed yourself seems like the way to go.
It is clear that once I have access to procedures, variables and a while loop, something can be done. The following code gives me the desired result using temporary tables. I would appreciate it if anybody could offer an alternative that produces the same results without the overhead of temporary tables.
CREATE TEMPORARY TABLE BOM (parent_id string, comp_id string, qty double) ON COMMIT PRESERVE ROWS;
INSERT INTO BOM VALUES('a','b',5), ('a','c',2), ('b','d',4), ('b','c',7), ('c','e',3);
select * from BOM;
+-----------+---------+-----+
| parent_id | comp_id | qty |
+===========+=========+=====+
| a         | b       |   5 |
| a         | c       |   2 |
| b         | d       |   4 |
| b         | c       |   7 |
| c         | e       |   3 |
+-----------+---------+-----+
CREATE TEMPORARY TABLE EXPLODED_BOM (parent_id string, comp_id string, path string, qty double, level integer) ON COMMIT PRESERVE ROWS;
CREATE OR REPLACE PROCEDURE UPDATE_BOM()
BEGIN
    DECLARE prev_count int;
    DECLARE crt_count int;
    DECLARE crt_level int;
    delete from EXPLODED_BOM; -- make sure it is empty
    -- insert the first level
    insert into EXPLODED_BOM select parent_id, comp_id, parent_id||'-'||comp_id, qty, 0 from BOM;
    SET prev_count = 0;
    SET crt_count = (select count(*) from EXPLODED_BOM);
    SET crt_level = 0;
    -- (crt_level < 100) avoids a possible infinite loop if the BOM is malformed
    WHILE (crt_level < 100) and (crt_count > prev_count) DO
        SET prev_count = crt_count;
        insert into EXPLODED_BOM
            select e.parent_id, a.comp_id, e.path||'-'||a.comp_id, a.qty*e.qty, crt_level+1
            from BOM a, EXPLODED_BOM e
            where a.parent_id = e.comp_id and e.level = crt_level;
        -- Is there any way to get the number of rows affected by an insert, update
        -- or delete statement? That would let me avoid re-counting the table.
        SET crt_count = (select count(*) from EXPLODED_BOM);
        SET crt_level = crt_level + 1;
    END WHILE;
END;
call UPDATE_BOM();
select * from EXPLODED_BOM;
+-----------+---------+---------+-----+-------+
| parent_id | comp_id | path    | qty | level |
+===========+=========+=========+=====+=======+
| a         | b       | a-b     |   5 |     0 |
| a         | c       | a-c     |   2 |     0 |
| b         | d       | b-d     |   4 |     0 |
| b         | c       | b-c     |   7 |     0 |
| c         | e       | c-e     |   3 |     0 |
| a         | d       | a-b-d   |  20 |     1 |
| a         | c       | a-b-c   |  35 |     1 |
| a         | e       | a-c-e   |   6 |     1 |
| b         | e       | b-c-e   |  21 |     1 |
| a         | e       | a-b-c-e | 105 |     2 |
+-----------+---------+---------+-----+-------+
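For comparison, the same explosion written as a standard recursive CTE. MonetDB cannot run this, but it is roughly how the query would look in an engine that supports it (e.g. PostgreSQL or SQLite):
WITH RECURSIVE EXPLODED_CTE(parent_id, comp_id, path, qty, level) AS (
    SELECT parent_id, comp_id, parent_id||'-'||comp_id, qty, 0
    FROM BOM
    UNION ALL
    SELECT e.parent_id, b.comp_id, e.path||'-'||b.comp_id, e.qty*b.qty, e.level+1
    FROM EXPLODED_CTE e
    JOIN BOM b ON b.parent_id = e.comp_id
)
SELECT * FROM EXPLODED_CTE;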

Add columns from file to an existing table in MariaDB 10.1

I want to add a new column from a file to an existing table, in the way cbind does in R.
The file has 1 column, 23710 lines, all numbers:
me#my_server:/var/www/html/my_website$ head my_sample.txt
61
66
0
330
76
9
10
16
6
0
Using the code:
ALTER TABLE my_table ADD COLUMN IF NOT EXISTS sample69 INT(10) DEFAULT NULL;
LOAD DATA LOCAL INFILE '/var/www/html/my_website/my_sample.txt' INTO TABLE my_table LINES TERMINATED BY '\n' (sample69);
Before:
MariaDB [my_database]> select * from my_table limit 10;
+------------+-----------+
| geneSymbol | sample000 |
+------------+-----------+
| A1BG       |        61 |
| A1BG-AS1   |        66 |
| A1CF       |         0 |
| A2M        |       330 |
| A2M-AS1    |        76 |
| A2ML1      |         9 |
| A2MP1      |        10 |
| A4GALT     |        16 |
| A4GNT      |         6 |
| AA06       |         0 |
+------------+-----------+
MariaDB [my_database]> select count(*) from my_table;
+----------+
| count(*) |
+----------+
|    23710 |
+----------+
After:
MariaDB [my_database]> select * from my_table limit 10;
+------------+-----------+----------+
| geneSymbol | sample000 | sample69 |
+------------+-----------+----------+
| A1BG       |        61 |     NULL |
| A1BG-AS1   |        66 |     NULL |
| A1CF       |         0 |     NULL |
| A2M        |       330 |     NULL |
| A2M-AS1    |        76 |     NULL |
| A2ML1      |         9 |     NULL |
| A2MP1      |        10 |     NULL |
| A4GALT     |        16 |     NULL |
| A4GNT      |         6 |     NULL |
| AA06       |         0 |     NULL |
+------------+-----------+----------+
MariaDB [my_database]> select count(*) from my_table;
+----------+
| count(*) |
+----------+
|    47420 |
+----------+
It apparently appends the data as new rows at the end of the table, which is why the row count doubles. Instead I want the new column to line up with the existing 23710 rows, filled with the data from the file.
What am I doing wrong?
LOAD only loads whole rows.
Even if it could load just one column, how would it know which row each number goes with?
You must reconstruct the data with two columns (geneSymbol and sample69), load that into a temp table, then do a multi-table UPDATE, joining on geneSymbol, to move the data into the main table.
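A minimal sketch of that approach (the temp table name, the tab-separated file my_sample69.tsv, and the VARCHAR size are assumptions for illustration):
CREATE TEMPORARY TABLE tmp_sample69 (
    geneSymbol VARCHAR(32) NOT NULL,  -- assumed size; match my_table's column
    sample69 INT DEFAULT NULL,
    PRIMARY KEY (geneSymbol)
);
LOAD DATA LOCAL INFILE '/var/www/html/my_website/my_sample69.tsv'
    INTO TABLE tmp_sample69
    FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
    (geneSymbol, sample69);
-- Multi-table UPDATE: each row finds its own value by geneSymbol.
UPDATE my_table t
JOIN tmp_sample69 s ON s.geneSymbol = t.geneSymbol
SET t.sample69 = s.sample69;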
Addenda
If you have 69 columns of samples, that is the wrong way to design the schema. At some point, you will hit a limit.
Plan A: Lots of rows, not lots of columns:
CREATE TABLE x (
    geneSymbol VARCHAR(..) ...,
    num SMALLINT UNSIGNED NOT NULL,
    value SMALLINT UNSIGNED NOT NULL,
    PRIMARY KEY(geneSymbol, num)
) ENGINE=InnoDB
Plan B (This will require code to add each new sample):
CREATE TABLE x (
    geneSymbol VARCHAR(..) ...,
    samples TEXT NOT NULL, -- JSON-encoded list of samples for that gene
    PRIMARY KEY(geneSymbol)
) ENGINE=InnoDB
Plan C (aimed at reading one sample):
CREATE TABLE x (
    num SMALLINT UNSIGNED NOT NULL,
    vals TEXT NOT NULL, -- JSON-encoded list of values for that sample
    PRIMARY KEY(num)
) ENGINE=InnoDB
What will your queries be like? I suspect you will be reading all the data, not doing any WHERE clauses based on symbol or num?
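With Plan A, for example, all samples for one gene come straight off the primary key (a sketch; 'A1BG' is just a sample value from the question):
SELECT num, value
FROM x
WHERE geneSymbol = 'A1BG'
ORDER BY num;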

SQLite - Update a column based on values from two other tables' columns

I am trying to update Data1's ID to Record2's ID when:
Record1's and Record2's Name are the same, and
Weight is greater in Record2.
Record1
| ID | Weight | Name |
|----|--------|------|
| 1 | 10 | a |
| 2 | 10 | b |
| 3 | 10 | c |
Record2
| ID | Weight | Name |
|----|--------|------|
| 4 | 20 | a |
| 5 | 20 | b |
| 6 | 20 | c |
Data1
| ID | Weight |
|----|--------|
| 4 | 40 |
| 5 | 40 |
I have tried the following SQLite query:
update data1
set id =
    (select record2.id
     from record2, record1
     where record1.name = record2.name
       and record1.weight < record2.weight)
where id in
    (select record1.id
     from record1, record2
     where record1.name = record2.name
       and record1.weight < record2.weight)
Using the above query Data1's id is updated to 4 for all records.
NOTE: Record1's ID is the foreign key for Data1.
The original subquery is not correlated with the row being updated, so it returns its first match (4) for every row of data1. Correlating it on data1.id fixes that. For the given data set the following seems to serve the purpose:
update data1
set id =
    (select record2.id
     from record2, record1
     where data1.id = record1.id
       and record1.name = record2.name
       and record1.weight < record2.weight)
where id in
    (select record1.id
     from record1, record2
     where record1.id in (select id from data1)
       and record1.name = record2.name
       and record1.weight < record2.weight);
Please comment if and as this requires adjustment / further detail.

update table in sorted order

I am maintaining a table structure like the one below.
sortid | id | name
1      | 1  | aa
3      | 2  | cc
4      | 3  | cc
2      | 4  | bb
5      | 5  | dd
sortid is maintained according to the ascending order of name.
Now I want to update the name 'dd' to 'aa' in such a way that sortid is also updated to its correct value.
UPDATE MyTable SET name = 'aa' WHERE name = 'dd';
After updating, my table should look like below.
sortid | id | name
1      | 1  | aa
4      | 2  | cc
5      | 3  | cc
3      | 4  | bb
2      | 5  | aa
That sortid is the rank of the row when all rows are sorted by name, with ties broken by id.
So you can compute it by counting rows:
UPDATE MyTable
SET sortid = (SELECT COUNT(*)
FROM MyTable AS T2
WHERE T2.name < MyTable.name) +
(SELECT COUNT(*)
FROM MyTable AS T2
WHERE T2.name = MyTable.name
AND T2.id <= MyTable.id);
(The second subquery resolves the duplicate sortid values that duplicate names would otherwise produce: for example, rows 1 and 5 both have name 'aa' after the rename, and they get sortid 0 + 1 = 1 and 0 + 2 = 2 respectively.)

Avoid Full Table Scan - Extract First Row Only

I am trying to write a query that extracts just the first (random) row when a condition is met.
-- Create table
create table TRANSACTIONS_SAMPLE
(
institution_id NUMBER(5) not null,
id NUMBER(10) not null,
partitionkey NUMBER(10) default 0 not null,
cardid NUMBER(10),
accountid NUMBER(10),
batchid NUMBER(10) not null,
amt_bill NUMBER(16,3),
load_date DATE not null,
trxn_date DATE not null,
single_msg_flag NUMBER(5),
authaccounttype VARCHAR2(2 BYTE),
originator VARCHAR2(50),
amount NUMBER(16,3) default 0.000 not null,
embeddedfee NUMBER(16,3) default 0.000 not null,
valuedate DATE,
startofinterest DATE,
minduevaluedate DATE,
postdate DATE,
posttimestamp DATE,
Status CHAR(4 BYTE) default 'NEW' not null
)
partition by list (PARTITIONKEY)
(
partition "0002913151" values (1234567)
tablespace LIVE
pctfree 10
initrans 16
maxtrans 255
storage
(
initial 8M
next 1M
minextents 1
maxextents unlimited
)
);
-- Create/Recreate indexes
create index TRANSACTIONS_SAMPLEI01 on TRANSACTIONS_SAMPLE (ACCOUNTID)
local;
create index TRANSACTIONS_SAMPLEI02 on TRANSACTIONS_SAMPLE (LOAD_DATE)
local;
create index TRANSACTIONS_SAMPLEI03 on TRANSACTIONS_SAMPLE (BATCHID)
local;
create index TRANSACTIONS_SAMPLEI04 on TRANSACTIONS_SAMPLE (POSTDATE)
local;
create index TRANSACTIONS_SAMPLEI05 on TRANSACTIONS_SAMPLE (POSTTIMESTAMP)
local;
create index TRANSACTIONS_SAMPLEI06 on TRANSACTIONS_SAMPLE (STATUS, PARTITIONKEY)
local;
create index TRANSACTIONS_SAMPLEI07 on TRANSACTIONS_SAMPLE (CARDID, TRXN_DATE)
local;
create unique index TRANSACTIONS_SAMPLEUI01 on TRANSACTIONS_SAMPLE (ID, PARTITIONKEY)
local;
-- Create/Recreate primary, unique and foreign key constraints
alter table TRANSACTIONS_SAMPLE
add constraint TRANSACTIONS_SAMPLEPK primary key (ID, PARTITIONKEY);
--QUERY
Select * From (
    Select t.AccountId
    From Transactions_sample t
    Group by t.AccountId
    Having Count(t.AccountId) > 10
    Order by dbms_random.random)
Where Rownum = 1
The problem with this query is the full table scan. I want to achieve the same results without having to fully access the table. Any ideas?
Thanks
You can get it down to a full index scan, using TRANSACTIONS_SAMPLEI01, if you add a filter for AccountId is not null. But only if you don't want to count null values, of course.
The column is nullable, but the index doesn't contain null values, so to include the count of nulls Oracle has to do a full table scan: it cannot get that count from the index. With that filter in place the optimizer knows all the qualifying account ID values must be in the index, so it only has to refer to the index, not the table itself.
explain plan for
Select * From (
    Select t.AccountId
    From Transactions_sample t
    Where AccountId is not null
    Group by t.AccountId
    Having Count(t.AccountId) > 10
    Order by dbms_random.random)
Where Rownum = 1;
select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------
Plan hash value: 381125580
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 0 (0)| 00:00:01 | | |
|* 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 1 | 13 | 0 (0)| 00:00:01 | | |
|* 3 | SORT ORDER BY STOPKEY | | 1 | 13 | 0 (0)| 00:00:01 | | |
|* 4 | FILTER | | | | | | | |
| 5 | SORT GROUP BY NOSORT | | 1 | 13 | 0 (0)| 00:00:01 | | |
| 6 | PARTITION LIST SINGLE| | 1 | 13 | 0 (0)| 00:00:01 | 1 | 1 |
|* 7 | INDEX FULL SCAN | TRANSACTIONS_SAMPLEI01 | 1 | 13 | 0 (0)| 00:00:01 | 1 | 1 |
---------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM=1)
3 - filter(ROWNUM=1)
4 - filter(COUNT("T"."ACCOUNTID")>10)
7 - filter("ACCOUNTID" IS NOT NULL)
Note
-----
- dynamic sampling used for this statement (level=2)
Alternatively, if the column can be made not-nullable then the filter wouldn't be required.
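If that is viable, it is a one-line change (a sketch; it assumes no NULL account IDs are already present, otherwise the ALTER fails):
alter table TRANSACTIONS_SAMPLE modify (ACCOUNTID not null);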
