How to set a unique constraint over multiple columns when any one can be null? - sqlite

How do I set a unique constraint over multiple columns in SQLite when any one of them can be null?
For example, I declared unique("col1","col2","col3") and then ran insert into tablename values("abc","def",null) twice, and both rows were inserted.
The unique constraint is not enforced when the third column is null.

In SQLite, all NULL values are considered distinct from each other, so a UNIQUE constraint never treats two NULLs as duplicates.
I think the best way to solve this is to declare the column NOT NULL with a special default value, and then use that default value (for example 0 or '') to represent null.
Edit 1: you can easily extend this solution to any number of columns:
create table test (
    a text not null default '',
    b text not null default '',
    c text not null default ''
);
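For example, with a table-level unique (a, b, c) added to that definition (the constraint and the inserts below are illustrative additions, not part of the original snippet), the duplicate insert from the question is rejected:
insert into test (a, b) values ('abc', 'def');  -- c falls back to ''
insert into test (a, b) values ('abc', 'def');  -- fails: UNIQUE constraint failed: test.a, test.b, test.c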

You could create a trigger to enforce this:
CREATE TRIGGER col1_col2_col3null_unique
BEFORE INSERT ON MyTable
FOR EACH ROW
WHEN NEW.col1 IS NOT NULL
 AND NEW.col2 IS NOT NULL
 AND NEW.col3 IS NULL
BEGIN
    SELECT RAISE(ABORT, 'col1/col2/col3 must be unique')
    FROM MyTable
    WHERE col1 = NEW.col1
      AND col2 = NEW.col2
      AND col3 IS NULL;
END;
You need such a trigger for each possible combination of NULLs in the three columns.
Furthermore, if it is possible to have UPDATEs that change such a column to NULL, you need triggers for those, too.
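For illustration, the corresponding UPDATE trigger for the same NULL combination might look like this (a sketch only: the trigger name is made up, and it assumes MyTable is an ordinary rowid table):
CREATE TRIGGER col1_col2_col3null_unique_upd
BEFORE UPDATE ON MyTable
FOR EACH ROW
WHEN NEW.col1 IS NOT NULL
 AND NEW.col2 IS NOT NULL
 AND NEW.col3 IS NULL
BEGIN
    SELECT RAISE(ABORT, 'col1/col2/col3 must be unique')
    FROM MyTable
    WHERE col1 = NEW.col1
      AND col2 = NEW.col2
      AND col3 IS NULL
      AND rowid <> OLD.rowid;  -- ignore the row being updated
END;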

Starting with version 3.9.0 (2015-10-14) you can create indexes on expressions (https://www.sqlite.org/expridx.html) and use, for example, the COALESCE function to map NULL values to some fallback value:
CREATE UNIQUE INDEX IX_Unique ON Table1 (
    COALESCE(col1, ''),
    COALESCE(col2, ''),
    COALESCE(col3, '')
);
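With that index in place, repeating the insert from the question should fail on the second attempt (a sketch, assuming Table1 has exactly the three columns col1, col2, col3):
INSERT INTO Table1 VALUES ('abc', 'def', NULL);  -- ok
INSERT INTO Table1 VALUES ('abc', 'def', NULL);  -- should fail with a UNIQUE constraint error on IX_Unique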

Related

check for null values in all columns of a table using SQLite

I have a table with more than 15 columns. Two of them are of type varchar, and most of the rest are of type int or float.
I am new to SQL and am trying to figure out a way by which I can check if any of the columns have a NULL value in it.
Had there been just 4 or 5 columns I could have checked them individually with
SELECT COUNT(*) FROM table_name WHERE col1 IS NULL OR col2 IS NULL OR col3 IS NULL ...
But is there any efficient way to do this on a lot of columns in SQLite specifically?
I have looked at other questions about this, but I cannot use XML or store anything. Also, I am using SQLite and can only run a query.
There is no way (that I know of) to check whether any column contains null without explicitly listing all the column names.
Your query is the proper way to do it.
If you want to shorten the code (not significantly), you could use one of these alternatives:
SELECT COUNT(*) FROM table_name WHERE col1 + col2 + col3 IS NULL;
or:
SELECT COUNT(*) FROM table_name WHERE col1 || col2 || col3 IS NULL;
or:
SELECT COUNT(*) FROM table_name WHERE MAX(col1, col2, col3) IS NULL;
The above queries work for any column data types, because if even one column is null, then addition, concatenation, and the scalar functions MAX() and MIN() all return null.
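The NULL propagation itself is easy to verify with constants (a throwaway query, not tied to any table):
SELECT 1 + NULL,            -- NULL
       'a' || NULL,         -- NULL
       MAX('a', 2.5, NULL); -- NULL (the scalar MAX, not the aggregate)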

How to insert data from R into Oracle table with identity column?

Assume I have a simple table in an Oracle DB:
CREATE TABLE schema.d_test
(
    id_record integer GENERATED AS IDENTITY START WITH 95000 NOT NULL,
    DT DATE NOT NULL,
    var varchar(50),
    num float,
    PRIMARY KEY (ID_RECORD)
)
And I have a dataframe in R
dt = c('2022-01-01', '2005-04-01', '2011-10-02')
var = c('sgdsg', 'hjhgjg', 'rurtur')
num = c(165, 1658.5, 8978.12354)
data = data.frame(dt, var, num) %>%
  mutate(dt = as.Date(dt))
I'm trying to insert data into the Oracle d_test table using this code:
data %>%
  dbWriteTable(
    oracle_con,
    value = .,
    date = T,
    'D_TEST',
    append = T,
    row.names = F,
    overwrite = F
  )
But the following error is returned:
Error in .oci.WriteTable(conn, name, value, row.names = row.names, overwrite = overwrite, :
Error in .oci.GetQuery(con, stmt, data = value) :
ORA-00947: not enough values
What's the problem?
How can I fix it?
Thank you.
This is pure Oracle (I don't know R).
Sample table:
SQL> create table test_so (id number generated always as identity not null, name varchar2(20));
Table created.
SQL> insert into test_so(name) values ('Name 1');
1 row created.
My initial idea was to suggest that you insert any value into the ID column, hoping that Oracle would discard it and generate its own value. However, that won't work:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
insert into test_so (id, name) values (-100, 'Name 2')
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
But, if you can afford to recreate the table so that it doesn't automatically generate the ID column's value and instead uses a "workaround" we used anyway before identity columns existed in Oracle (a sequence and a trigger), you might be able to "fix" it.
SQL> drop table test_so;
Table dropped.
SQL> create table test_so (id number not null, name varchar2(20));
Table created.
SQL> create sequence seq_so;
Sequence created.
SQL> create or replace trigger trg_bi_so
  2    before insert on test_so
  3    for each row
  4  begin
  5    :new.id := seq_so.nextval;
  6  end;
  7  /
Trigger created.
Inserting only the name (Oracle will use the trigger to populate the ID):
SQL> insert into test_so(name) values ('Name 1');
1 row created.
This is what you'll do in your code: provide a dummy ID value, just to avoid the
ORA-00947: not enough values
error you have now. The trigger will discard it and use the sequence anyway:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
1 row created.
SQL> select * from test_so;
        ID NAME
---------- --------------------
         1 Name 1
         2 Name 2               --> this is the row which was supposed to have ID = -100
SQL>
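Translated back to the R code in the question, that would mean adding a dummy id_record column to the data frame before calling dbWriteTable, so the number of supplied values matches the number of table columns; the trigger then discards the dummy and uses the sequence. (That mapping onto dbWriteTable is an assumption, not something tested with ROracle.)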
The way you can handle this problem is to create the table with GENERATED BY DEFAULT ON NULL AS IDENTITY, like this:
CREATE TABLE CM_RISK.d_test
(
    id_record integer GENERATED BY DEFAULT ON NULL AS IDENTITY START WITH 5000 NOT NULL,
    DT date NOT NULL,
    var varchar(50),
    num float,
    PRIMARY KEY (ID_RECORD)
)
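A quick sanity check of that definition (a sketch; the values are borrowed from the question's data frame):
INSERT INTO CM_RISK.d_test (id_record, dt, var, num)
VALUES (NULL, DATE '2022-01-01', 'sgdsg', 165);      -- explicit NULL id_record: Oracle generates 5000
INSERT INTO CM_RISK.d_test (dt, var, num)
VALUES (DATE '2005-04-01', 'hjhgjg', 1658.5);        -- omitted id_record: Oracle generates 5001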

How to convert Bigquery repeated record into a column?

event_params is a repeated record. Its key values can be
firebase_event_origin, engagement_time_msec, firebase_screen, ...
Each key has a number of optional values according to the data type:
string_value, int_value, ...
I want to turn each key into a column populated with its value.
For example, the key firebase_screen should become a column firebase_screen with the value webview screen, and the same for all the other repeated records in the table.
I'm not sure UNNEST is the right solution here, since it breaks the data down into rows instead of columns.
You need to unnest first, then group the data again.
Replace the subquery in FROM ( ... ) with your table.
SELECT
  date,
  ANY_VALUE(CASE WHEN t.key = "firebase_screen" THEN t.string_value ELSE NULL END) AS firebase_screen,
  ANY_VALUE(CASE WHEN t.key = "ga_session_number" THEN t.int_value ELSE NULL END) AS ga_session_number
FROM (
  SELECT
    1 AS date,
    [STRUCT("firebase_screen" AS key,
            "webs" AS string_value,
            NULL AS int_value),
     STRUCT("ga_session_number" AS key,
            NULL AS string_value,
            6 AS int_value)] AS event_params
) AS tbl,
UNNEST(tbl.event_params) AS t
GROUP BY 1
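With the inline sample data, the result should be a single row: date = 1, firebase_screen = "webs", ga_session_number = 6. Each CASE expression picks out the value for its key, and ANY_VALUE collapses the group to one row per date.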

How to check for overlapping intervals when entering data in SQLite3?

I need to place a check in my SQLite3 database that ensures that the user cannot enter data with overlapping intervals.
For example:
hole #   Sample   From   To
1        1        1      2
1        2        2      3
1        3        2.2    2.9
I don't want this for a query, but rather as a data-entry check built into the table.
So far I've tried adding a constraint check of ('From' NOT BETWEEN 'From' AND 'To'), but to no avail. I don't understand whether the check is applied on a row-by-row basis, which is what I want, or on a primary-key basis.
Here is the table definition that I am trying:
CREATE TABLE assay (
    BHID TEXT NOT NULL
        CONSTRAINT [Check BHID] REFERENCES collar (BHID) ON DELETE CASCADE
                                                         ON UPDATE CASCADE
                                                         MATCH SIMPLE,
    [Sample #] TEXT UNIQUE,
    [FROM] NUMERIC NOT NULL
        CONSTRAINT [Interval Check] CHECK ( ("TO" > "FROM") ),
    [TO] NUMERIC NOT NULL,
    Ag NUMERIC CONSTRAINT [Max Silver] CHECK ( (Ag < 1000) ),
    Zn NUMERIC CONSTRAINT [Max Zinc] CHECK ( (Zn < 50) ),
    Pb NUMERIC CONSTRAINT [Max Lead] CHECK ( (Pb < 50) ),
    Fe NUMERIC,
    PRIMARY KEY (
        BHID,
        [FROM]
    )
);
And here is the table with the updated constraint (before committing):
CREATE TABLE assay (
    BHID TEXT NOT NULL
        CONSTRAINT [Check BHID] REFERENCES collar (BHID) ON DELETE CASCADE
                                                         ON UPDATE CASCADE
                                                         MATCH SIMPLE,
    [Sample #] TEXT UNIQUE,
    [FROM] NUMERIC NOT NULL
        CONSTRAINT [Interval Check] CHECK ( ("TO" > "FROM") )
        CONSTRAINT [Not Between] CHECK ( ('From' NOT BETWEEN 'From' AND 'To') ),
    [TO] NUMERIC NOT NULL,
    Ag NUMERIC CONSTRAINT [Max Silver] CHECK ( (Ag < 1000) ),
    Zn NUMERIC CONSTRAINT [Max Zinc] CHECK ( (Zn < 50) ),
    Pb NUMERIC CONSTRAINT [Max Lead] CHECK ( (Pb < 50) ),
    Fe NUMERIC,
    PRIMARY KEY (
        BHID,
        [FROM]
    )
);
I deleted the data row with the conflicting data (From: 2.2, To: 2.9) and committed the change before trying to add the new constraint check. But it won't let me commit the new constraint, I believe because it is trying to apply it to the entire column.
So my question should be this: is there a way to apply a constraint check on a row-by-row basis in SQL?
In SQL, double quotes are used to quote table and column names; single quotes are used for string values. So the check
('FROM' NOT BETWEEN 'FROM' AND 'TO')
just compares these constant string values. This check always fails.
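You can verify that with a constant query, no table needed:
SELECT 'FROM' NOT BETWEEN 'FROM' AND 'TO';  -- always 0 (false)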
Anyway, a CHECK constraint can access only values in the current row.
To be able to look at other rows, you have to use a trigger:
CREATE TRIGGER no_overlaps
BEFORE INSERT ON Assay
WHEN EXISTS (SELECT *
             FROM Assay
             WHERE BHID = NEW.BHID
               AND "From" < NEW."To"
               AND "To" > NEW."From")
BEGIN
    SELECT RAISE(FAIL, 'overlapping intervals');
END;
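A quick check against the sample data from the question (a sketch; it uses the first table definition without the [Not Between] constraint, and assumes the referenced collar row exists or foreign keys are off):
INSERT INTO assay (BHID, [Sample #], [FROM], [TO]) VALUES ('1', '1', 1, 2);      -- ok
INSERT INTO assay (BHID, [Sample #], [FROM], [TO]) VALUES ('1', '2', 2, 3);      -- ok: touches 2 but does not overlap
INSERT INTO assay (BHID, [Sample #], [FROM], [TO]) VALUES ('1', '3', 2.2, 2.9);  -- rejected: overlapping intervals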

How to select a particular null value from a table containing null values

You can select on the other columns, but of course all NULL values are indistinguishable from one another.
You can use IS NULL.
For example, select * from tableA where columnA IS NULL will select all the rows from tableA in which columnA has a null value.
