Will a SQLite join select necessarily fully scan a table? - sqlite

here is my case:
CREATE TABLE estimateperiod(estimatePeriodId int, periodTypeId int, companyId int, fiscalChainSeriesId int, fiscalQuarter int,fiscalYear int, calendarQuarter int, calendarYear int, periodEndDate datetime,advanceDate datetime);
CREATE INDEX estimateperiod_estimateperiodid_companyid on estimateperiod(estimateperiodid, companyid);
CREATE TABLE isinenhancedsymbol(symbolid int, symboltypeid int, symbolvalue char(64), relatedcompanyid char(64), exchangeid int, objectid int, symbolstartdate date, symbolenddate date, activeflag int);
CREATE INDEX isinenhancedsymbol_relatedcompanyid_isin on isinenhancedsymbol(relatedcompanyid, symbolvalue);
when I run this:
sqlite> explain query plan select ep.estimateperiodid, ep.companyid, isin.symbolvalue from estimateperiod ep, isinenhancedsymbol isin where ep.estimateperiodid = 100 and ep.companyid = isin.relatedcompanyid;
order  from  detail
-----  ----  ------
0      1     TABLE isinenhancedsymbol AS isin
1      0     TABLE estimateperiod AS ep WITH INDEX estimateperiod_estimateperiodid_companyid
So the isinenhancedsymbol table is fully scanned, which takes a long time. All fields in the select are in a covering index, so why can't isinenhancedsymbol be searched using the index?

SQLite version 3.6.20 is quite a few years out of date.
Covering indexes are supported by any somewhat current version:
sqlite> .eqp on
sqlite> select ep.estimateperiodid, ep.companyid , isin.symbolvalue from estimateperiod ep, isinenhancedsymbol isin where ep.estimateperiodid = 100 and ep.companyid = isin.relatedcompanyid;
--EQP-- 0,0,0,SEARCH TABLE estimateperiod AS ep USING COVERING INDEX estimateperiod_estimateperiodid_companyid (estimatePeriodId=?)
--EQP-- 0,1,1,SCAN TABLE isinenhancedsymbol AS isin USING COVERING INDEX isinenhancedsymbol_relatedcompanyid_isin
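If the planner still chooses a scan after upgrading, it can also help to gather statistics so the planner has row-count estimates to work with; a minimal sketch, re-running the same query afterwards. Note as well that ep.companyid is declared int while isin.relatedcompanyid is char(64); mismatched type affinities on join columns can prevent SQLite from using an index for the join comparison, so declaring both columns with the same type may be what actually enables a search here.
sqlite> ANALYZE;   -- collect statistics (sqlite_stat1) used by the query planner
sqlite> .eqp on
sqlite> select ep.estimateperiodid, ep.companyid, isin.symbolvalue from estimateperiod ep, isinenhancedsymbol isin where ep.estimateperiodid = 100 and ep.companyid = isin.relatedcompanyid;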

Related

MARIADB sequences - incrementing by 2

I have the following MARIADB code. It's supposed to demonstrate:
Constructing tables using sequences for incrementing the ID.
Using a temporary table+join to INSERT data into a table, while incrementing the ID.
Procedure:
Sequence S1 and table T1 are created. T1_ID is incremented with S1
Sequence S2 and table T2 are created. T2_ID is incremented with S2
Table T1 is filled with data. All is fine.
Temporary table TEMP_T2 is created and filled with data. No ID in this table. Column T1_NAME is a cross reference to SHORT_NAME in table T1.
The T1_ID is introduced into table TEMP_T2 with a join. The result of this SELECT is inserted into T2. Here, the sequence S2 should auto-increment T2_ID.
For some reason, at the end, T2 looks like this:
T2_ID|T1_ID|NAME|
-----+-----+----+
    2|    1|y   |
    4|    2|x   |
    6|    2|z   |
Why was T2_ID double-incremented?
Thanks!
USE DB1;
SET FOREIGN_KEY_CHECKS = 0;
DROP SEQUENCE IF EXISTS `S2`;
DROP SEQUENCE IF EXISTS `S1`;
DROP TABLE IF EXISTS `T2`;
DROP TABLE IF EXISTS `T1`;
-- Create sequence S1 and table T1
CREATE SEQUENCE `S1` start with 1 minvalue 1 maxvalue 9223372036854775806 increment by 1 cache 1000 nocycle ENGINE=InnoDB;
SELECT SETVAL(`S1`, 1, 0);
CREATE TABLE `T1` (
`T1_ID` tinyint(4) NOT NULL DEFAULT nextval(`S1`),
`SHORT_NAME` varchar(10) NOT NULL,
PRIMARY KEY (`T1_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
-- Create sequence S2 and table T2
CREATE SEQUENCE `S2` start with 1 minvalue 1 maxvalue 9223372036854775806 increment by 1 cache 1000 nocycle ENGINE=InnoDB;
SELECT SETVAL(`S2`, 1, 0);
CREATE TABLE `T2` (
`T2_ID` int(11) NOT NULL DEFAULT nextval(`S2`),
`T1_ID` int(11) DEFAULT NULL,
`NAME` varchar(100) DEFAULT NULL COLLATE 'utf8mb3_bin',
PRIMARY KEY (`T2_ID`),
UNIQUE KEY `T2_NAME_UN` (`NAME`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
-- Load data into T1
DELETE FROM T1;
INSERT INTO T1(SHORT_NAME) VALUES
('a'),
('b'),
('c');
SELECT * FROM T1;
-- Create temporary table for joining with T1
DROP TABLE IF EXISTS `TEMP_T2`;
CREATE TEMPORARY TABLE `TEMP_T2` (
`T1_NAME` varchar(10) DEFAULT NULL,
`NAME` varchar(100) DEFAULT NULL,
UNIQUE KEY `T2_NAME_UN` (`NAME`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
DELETE FROM TEMP_T2 ;
-- Insert data into the temporary table
INSERT INTO TEMP_T2(T1_NAME,NAME) VALUES
('b','x'),
('a','y'),
('b','z');
SELECT * FROM TEMP_T2;
# Do a join with TEMP_T2 x T1 and insert into T2
INSERT INTO T2(T1_ID,NAME)
SELECT
t1.T1_ID ,
t2.NAME
FROM TEMP_T2 AS t2
INNER JOIN T1 AS t1
ON t2.T1_NAME =t1.SHORT_NAME ;
SELECT * FROM T2;
Thanks for the responses.
I'm using SEQUENCE rather than AUTO_INCREMENT because I was told that it is the more modern way. It also enables retrieving the last ID of any specific table.
It's strange that this should be a bug. It seems like really basic functionality. But so it is...
I've found this reported as an existing bug, MDEV-29540, in INSERT ... SELECT as it pertains to sequences in default values of columns.
Because this bug has been reported and fixed, the problem does not occur in the 10.3.37, 10.4.27, 10.5.18, 10.6.11, 10.7.7, 10.8.6, 10.9.4, 10.10.2, 10.11.1 and later versions.
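Until you are on a fixed version, a workaround consistent with the bug report is to bypass the column default and call the sequence explicitly in the INSERT ... SELECT; a sketch against the tables above, which should sidestep the default-value path that MDEV-29540 affects:
INSERT INTO T2(T2_ID, T1_ID, NAME)
SELECT
  NEXTVAL(S2),   -- one explicit sequence call per selected row
  t1.T1_ID,
  t2.NAME
FROM TEMP_T2 AS t2
INNER JOIN T1 AS t1
  ON t2.T1_NAME = t1.SHORT_NAME;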

How to insert data from R into Oracle table with identity column?

Assume I have a simple table in an Oracle db
CREATE TABLE schema.d_test
(
id_record integer GENERATED AS IDENTITY START WITH 95000 NOT NULL,
DT DATE NOT NULL,
var varchar(50),
num float,
PRIMARY KEY (ID_RECORD)
)
And I have a dataframe in R
library(dplyr)  # needed for %>% and mutate()
dt = c('2022-01-01', '2005-04-01', '2011-10-02')
var = c('sgdsg', 'hjhgjg', 'rurtur')
num = c(165, 1658.5, 8978.12354)
data = data.frame(dt, var, num) %>%
  mutate(dt = as.Date(dt))
I'm trying to insert data into Oracle d_test table using the code
data %>%
dbWriteTable(
oracle_con,
value = .,
date = T,
'D_TEST',
append = T,
row.names=F,
overwrite = F
)
But the following error returned
Error in .oci.WriteTable(conn, name, value, row.names = row.names, overwrite = overwrite, :
Error in .oci.GetQuery(con, stmt, data = value) :
ORA-00947: not enough values
What's the problem?
How can I fix it?
Thank you.
This is pure Oracle (I don't know R).
Sample table:
SQL> create table test_so (id number generated always as identity not null, name varchar2(20));
Table created.
SQL> insert into test_so(name) values ('Name 1');
1 row created.
My initial idea was to suggest that you insert any value into the ID column, hoping that Oracle would discard it and generate its own value. However, that won't work:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
insert into test_so (id, name) values (-100, 'Name 2')
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
But, if you can afford to recreate the table so that it doesn't automatically generate the ID column's value and instead uses a "workaround" we used anyway before identity columns (which are relatively new in Oracle) - a sequence and a trigger - you might be able to "fix" it.
SQL> drop table test_so;
Table dropped.
SQL> create table test_so (id number not null, name varchar2(20));
Table created.
SQL> create sequence seq_so;
Sequence created.
SQL> create or replace trigger trg_bi_so
2 before insert on test_so
3 for each row
4 begin
5 :new.id := seq_so.nextval;
6 end;
7 /
Trigger created.
Inserting only name (Oracle will use a trigger to populate ID):
SQL> insert into test_so(name) values ('Name 1');
1 row created.
This is what you'll do in your code - provide a dummy ID value, just to avoid the
ORA-00947: not enough values
error you have now. The trigger will discard it and use the sequence anyway:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
1 row created.
SQL> select * from test_so;
        ID NAME
---------- --------------------
         1 Name 1
         2 Name 2   --> this is the row which was supposed to have ID = -100
SQL>
The way you can handle this problem is to create the table with GENERATED BY DEFAULT ON NULL AS IDENTITY, like this:
CREATE TABLE CM_RISK.d_test
(
id_record integer GENERATED BY DEFAULT ON NULL AS IDENTITY START WITH 5000 NOT NULL ,
DT date NOT NULL,
var varchar(50),
num float,
PRIMARY KEY (ID_RECORD)
)
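With GENERATED BY DEFAULT ON NULL, a client that sends NULL for the ID (which is effectively what an R data frame with an added NA identity column would do) gets the sequence value instead, while fully supplied inserts still work; a small sketch using the question's sample data:
INSERT INTO CM_RISK.d_test (id_record, dt, var, num)
VALUES (NULL, DATE '2022-01-01', 'sgdsg', 165);
-- NULL triggers the identity default: id_record is generated (5000, 5001, ...)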

How to move column from referenced table to referrer table using only SQLite language dialect?

I have following table structure in SQLite database:
CREATE TABLE A (id integer primary key, name text, room Integer);
CREATE TABLE B (id integer primary key, idA integer not null, code blob, FOREIGN KEY(idA) REFERENCES A(id));
For each record in A there are 1 to n records in B that refers it. Desired table structure is:
CREATE TABLE A (id integer primary key, name text);
CREATE TABLE B (id integer primary key, idA integer not null, code blob, room Integer, FOREIGN KEY(idA) REFERENCES A(id));
So, I would like to transfer the room column from A to B without data loss: recreate table A without the room column, delete duplicates from A, add a room column to B, set it based on the values that were in the referenced A records' room column (in the original A table), and reassign idA for the B records.
Is it possible using only SQLite, and if it is, how?
Thank you!
Have a look at http://sqlfiddle.com/#!5/e26e2/1 .
Creating the tables:
CREATE TABLE A (id integer primary key, name text, room Integer);
CREATE TABLE B (id integer primary key, idA integer not null, code blob, FOREIGN KEY(idA) REFERENCES A(id));
Fill in some test data:
insert into A (id, name, room) values (1, 'azerty', 123);
insert into A (id, name, room) values (2, 'querty', 456);
insert into B (id, idA, code) values (10, 1, 'code 1');
insert into B (id, idA, code) values (15, 1, 'code 1b');
insert into B (id, idA, code) values (20, 1, 'code 1c');
insert into B (id, idA, code) values (25, 2, 'code 2');
insert into B (id, idA, code) values (30, 3, 'code 3');
Rename the tables with the wrong layout to tables we will later drop:
alter table B rename to old_B ;
alter table A rename to old_A ;
Create tables according to the correct layout:
CREATE TABLE A (id integer primary key, name text);
CREATE TABLE B (id integer primary key, idA integer not null, code blob, room Integer, FOREIGN KEY(idA) REFERENCES A(id));
Copy selected data from old_A into A:
INSERT OR REPLACE INTO A ( id, name)
SELECT id, name FROM old_A ;
Fill B with an inner join to "line up" room with the correct idA:
INSERT OR REPLACE INTO B ( id, idA, code, room)
SELECT old_B.*, old_A.room FROM old_B INNER JOIN old_A ON old_B.idA = old_A.id ;
Drop the old tables:
DROP TABLE old_A ;
DROP TABLE old_B ;
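Since this is a multi-step schema change, it is safest to run the whole script atomically and with foreign key enforcement suspended while old_B briefly references a renamed table; a sketch wrapping the steps above:
PRAGMA foreign_keys = OFF;   -- must be set outside a transaction
BEGIN TRANSACTION;
-- ... the rename, create, insert and drop statements from above ...
COMMIT;
PRAGMA foreign_keys = ON;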

Why is one SQLite query so much slower even though they are very similar?

sqlite> .timer on
query 1
sqlite> select count(*) from alpha where Name = 'SHOUT' and Date between 20130101 and 20140101;
3783443
CPU Time: user 42.067187 sys 2.098010
query 2
sqlite> select count(*) from alpha where Date between 20130101 and 20140101;
3783443
CPU Time: user 0.450523 sys 0.054451
Schema:
sqlite> .schema
CREATE TABLE alpha (
Date Date,
Name VARCHAR(50),
Symbol VARCHAR(10),
Value FLOAT,
ChangeDate DATETIME DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (Date,Name,Symbol) );
CREATE TABLE cusip (
Symbol VARCHAR(10),
Cusip VARCHAR(9),
PRIMARY KEY (Symbol) );
CREATE INDEX idx_alpha_Date on alpha (Date);
CREATE INDEX idx_alpha_Symbol on alpha (Symbol);
CREATE INDEX idx_alpha_date_name on alpha ( Date, Name );
CREATE INDEX idx_alpha_name on alpha (Name);
Use explain query plan to see how the indices are used, and, for more details, explain to see how the query translates to SQLite virtual machine code.
sqlite> explain query plan select count(*) from alpha where Name = 'SHOUT' and Date between 20130101 and 20140101;
0|0|0|SEARCH TABLE alpha USING INDEX idx_alpha_name (Name=?) (~5 rows)
sqlite> explain query plan select count(*) from alpha where Date between 20130101 and 20140101;
0|0|0|SEARCH TABLE alpha USING COVERING INDEX idx_alpha_date_name (Date>? AND Date<?) (~31250 rows)
In the first case, an index is used only for the Name = 'SHOUT' part; the Date between 20130101 and 20140101 condition must then be checked against every row in that intermediate result set, each of which requires a lookup in the table itself, possibly taking a long time. In the latter case, the results can be obtained from the index alone, without needing to scan through an intermediate result set.
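A fix that follows from this reasoning: an index that starts with Name lets the first query seek on the equality and then range-scan the dates, and it covers the count as well; a sketch:
CREATE INDEX idx_alpha_name_date ON alpha (Name, Date);
-- explain query plan should then report something like:
-- SEARCH TABLE alpha USING COVERING INDEX idx_alpha_name_date (Name=? AND Date>? AND Date<?)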

Single insertion of data on one date in SQL Server?

ALTER PROCEDURE [dbo].[K_FS_InsertMrpDetails]
@date datetime,
@feedtype varchar(50),
@rateperkg float,
@rateper50kg float,
@updatedby varchar(50)
AS
BEGIN
INSERT INTO K_FS_FeedMrpDetails([date], feedtype, rateperkg, rateper50kg, updatedby, updatedon)
VALUES(@date, @feedtype, @rateperkg, @rateper50kg, @updatedby, getdate())
SELECT '1' AS status
END
With this procedure we insert 9 rows at a time, but what I want is that for one and the same date we do not insert different details again. How can I do this? Please help me.
Add a unique constraint on the column [date]. That will prevent you from adding more than one row with the same [date] value.
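A sketch of that constraint (the constraint name is illustrative); note that since [date] is a datetime, this only blocks rows whose value matches down to the time part:
ALTER TABLE K_FS_FeedMrpDetails
ADD CONSTRAINT UQ_K_FS_FeedMrpDetails_date UNIQUE ([date]);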
Update:
To allow 9 rows for each date, you can add a computed column D that removes the time part, plus a column R that will hold the values 1 to 9. Use a check constraint on R to only allow 1-9. Finally, create a unique constraint on (R, D).
Sample table definition:
create table T
(
ID int identity primary key,
DT datetime not null,
R tinyint check (R in (1,2,3,4,5,6,7,8,9)) not null,
D as dateadd(day, datediff(day, 0, DT), 0),
constraint ux_RD unique (R,D)
)
Try with this:
insert into T(DT, R) values(getdate(), 1)
insert into T(DT, R) values(getdate(), 2)
insert into T(DT, R) values(getdate(), 1)
The first and second inserts work fine; the third raises a unique constraint exception.
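If you don't want to track R by hand, one option is to compute the next free slot for the day inside the insert itself; a sketch (not safe under concurrent writers without additional locking, and the check constraint still rejects a tenth row for the same day):
insert into T(DT, R)
select getdate(), coalesce(max(R), 0) + 1   -- next unused R for today, or 1 if none yet
from T
where D = dateadd(day, datediff(day, 0, getdate()), 0);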
