SQLite RANDOM() function in CTE - sqlite

I've found behaviour of the RANDOM() function in SQLite that doesn't seem correct.
I want to generate random groups using RANDOM() and CASE. However, it looks like the CTE is not behaving correctly.
First, let's create a table:
DROP TABLE IF EXISTS tt10ROWS;
CREATE TEMP TABLE tt10ROWS (
some_int INTEGER);
INSERT INTO tt10ROWS VALUES
(1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
SELECT * FROM tt10ROWS;
Incorrect behaviour
WITH
-- 2.a add columns with random number and save in CTE
STEP_01 AS (
SELECT
*,
ABS(RANDOM()) % 4 + 1 AS RAND_1_TO_4
FROM tt10ROWS)
-- 2.b - get random group
select
*,
CASE
WHEN RAND_1_TO_4 = 1 THEN 'GROUP_01'
WHEN RAND_1_TO_4 = 2 THEN 'GROUP_02'
WHEN RAND_1_TO_4 = 3 THEN 'GROUP_03'
WHEN RAND_1_TO_4 = 4 THEN 'GROUP_04'
END AS GROUP_IT
from STEP_01;
This query produces correct values in the RAND_1_TO_4 column, but the GROUP_IT column is wrong: the group labels don't match the random numbers, and some rows get no group at all (NULL).
Correct behaviour
I found a workaround for this problem: creating a temporary table instead of using a CTE. It helped.
-- 1.a - add column with random number 1-4 and save as TEMP TABLE
drop table if exists ttSTEP01;
CREATE TEMP TABLE ttSTEP01 AS
SELECT
*,
ABS(RANDOM()) % 4 + 1 AS RAND_1_TO_4
FROM tt10ROWS;
-- 1.b - get random group
select
*,
CASE
WHEN RAND_1_TO_4 = 1 THEN 'GROUP_01'
WHEN RAND_1_TO_4 = 2 THEN 'GROUP_02'
WHEN RAND_1_TO_4 = 3 THEN 'GROUP_03'
WHEN RAND_1_TO_4 = 4 THEN 'GROUP_04'
END AS GROUP_IT
from ttSTEP01;
QUESTION
What is the reason behind this behaviour, where the GROUP_IT column is not generated properly?

If you look at the bytecode generated by the incorrect query using EXPLAIN, you'll see that every time the RAND_1_TO_4 column is referenced, its value is re-calculated and a new random number is used (I suspect, but am not 100% sure, that this has to do with random() being a non-deterministic function). The NULL values appear when none of the CASE tests happens to be true against its freshly drawn number.
When you insert into a temporary table and then use that for the rest, the values of course remain static and it works as expected.
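As a possible fix that keeps the CTE: SQLite 3.35+ accepts a MATERIALIZED hint, which asks the planner to compute the CTE once into a transient table, so each row's random number should be drawn a single time. A sketch, assuming a recent SQLite version; I haven't verified this guarantee for non-deterministic functions across releases:
WITH
STEP_01 AS MATERIALIZED (
SELECT
*,
ABS(RANDOM()) % 4 + 1 AS RAND_1_TO_4
FROM tt10ROWS)
select
*,
CASE RAND_1_TO_4
WHEN 1 THEN 'GROUP_01'
WHEN 2 THEN 'GROUP_02'
WHEN 3 THEN 'GROUP_03'
WHEN 4 THEN 'GROUP_04'
END AS GROUP_IT
from STEP_01;
Note that the CASE expr WHEN ... form also references RAND_1_TO_4 only once, which reduces the number of places a re-evaluation could happen.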


SQLite order results by smallest difference

In many ways this question follows on from my previous one. I have a table that is pretty much identical:
CREATE TABLE IF NOT EXISTS test
(
id INTEGER PRIMARY KEY,
a INTEGER NOT NULL,
b INTEGER NOT NULL,
c INTEGER NOT NULL,
d INTEGER NOT NULL,
weather INTEGER NOT NULL);
in which I would typically have entries such as
INSERT INTO test (a,b,c,d,weather) VALUES(1,2,3,4,30100306);
INSERT INTO test (a,b,c,d,weather) VALUES(1,2,3,4,30140306);
INSERT INTO test (a,b,c,d,weather) VALUES(1,2,5,5,10100306);
INSERT INTO test (a,b,c,d,weather) VALUES(1,5,5,5,11100306);
INSERT INTO test (a,b,c,d,weather) VALUES(5,5,5,5,21101306);
Typically this table would have multiple rows with some or all of the b, c and d values being identical but with different a and weather values. As per the answer to my other question I can certainly issue
WITH cte AS (SELECT *, DENSE_RANK() OVER (ORDER BY (b=2) + (c=3) + (d=4) DESC) rn FROM test where a = 1) SELECT * FROM cte WHERE rn < 3;
No issues thus far. However, I have one further requirement which arises as a result of the weather column. Although this value is an integer, it is in fact a composite where each digit represents a "banded" weather condition. Take for example weather = 20100306: here 2 represents the wind direction divided up into 45 degree bands on the compass, 0 represents a wind speed range, 1 indicates precipitation as snow, etc. What I need to do now while obtaining my ordered results is to allow for weather differences. Take for example the first two rows
INSERT INTO test (a,b,c,d,weather) VALUES(1,2,3,4,30100306);
INSERT INTO test (a,b,c,d,weather) VALUES(1,2,3,4,30140306);
Though otherwise similar, they represent rather different weather conditions - the fourth digit is 4 as opposed to 0, indicating a higher precipitation intensity band. The WITH cte... above would rank the first two rows at the top, which is fine. But what if I would rather have the row that differs the least from an incoming "weather condition" of 30130306? I would clearly like to have the second row appearing at the top. Once again, I can live with the "raw" result returned by WITH cte... and then drill down to the right row based on my current "weather condition" in Java. However, once again I find myself thinking that there is perhaps a rather neat way of doing this in SQL that is outwith my skill set. I'd be most obliged to anyone who might be able to tell me how/whether this can be done using just SQL.
You can sort the results 1st by DENSE_RANK() and 2nd by the absolute difference of weather and the incoming "weather condition":
WITH cte AS (
SELECT *,
DENSE_RANK() OVER (ORDER BY (b=2) + (c=3) + (d=4) DESC) rn
FROM test
WHERE a = 1
)
SELECT a,b,c,d,weather
FROM cte
WHERE rn < 3
ORDER BY rn, ABS(weather - ?);
Replace ? with the value of that incoming "weather condition".
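With the question's incoming condition of 30130306, for example:
WITH cte AS (
SELECT *,
DENSE_RANK() OVER (ORDER BY (b=2) + (c=3) + (d=4) DESC) rn
FROM test
WHERE a = 1
)
SELECT a,b,c,d,weather
FROM cte
WHERE rn < 3
ORDER BY rn, ABS(weather - 30130306);
This puts the row with weather = 30140306 (difference 10000) ahead of 30100306 (difference 30000), which is the ordering the question asks for.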

How to cast a column into decimal of varying significant digits in Oracle

I have a column that is stored in ###0.0000000000 format. In a report I'm generating, I need it to show only a few significant digits. The problem is that the number needed changes based on the product, with a default of 2. There's a column in another table that provides the required digits for each product.
I've tried a few things so far, but Oracle doesn't seem to like them and throws a syntax error.
Cast(A.Price as Numeric(10,coalesce(B.Sig_Digits,2)))
That threw an error, so I tried making the COALESCE part a column and aliasing it, in case the COALESCE broke it; that didn't work either. ROUND will accept a column for the number of digits, but I don't want it to round. Other than an ugly
case when Sig_digits = 1 then to_char(price,'###0.0') when Sig_digits = 2...
etc., what other options are there? This is a very large report, with 100+ columns and a few million rows, so I'd prefer not to do the CASE WHEN.
Use TO_CHAR with RPAD to add 0s to the end of the format model for the correct number of decimal places. The model 'FM999999999990.' is 15 characters long, so padding it to sig + 15 appends exactly sig zeros:
Oracle 11g R2 Schema Setup:
CREATE TABLE table_name ( value, sig ) AS
SELECT 123.456789, 2 FROM DUAL UNION ALL
SELECT 123456789.123456789, 7 FROM DUAL;
Query 1:
SELECT TO_CHAR( value, RPAD( 'FM999999999990.', sig + 15, '0' ) )
FROM table_name
Results:
| TO_CHAR(VALUE,RPAD('FM999999999990.',SIG+15,'0')) |
|----------------------------------------------------|
| 123.46                                             |
| 123456789.1234568                                  |
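Applied to the question's columns, the same idea looks something like the sketch below; the table names and join key are hypothetical, and COALESCE supplies the default of 2 significant digits:
SELECT TO_CHAR( a.price,
RPAD( 'FM999999999990.', COALESCE( b.sig_digits, 2 ) + 15, '0' ) ) AS price_fmt
FROM prices a
LEFT JOIN product_digits b
ON b.product_id = a.product_id; -- hypothetical join key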

how to query by row number in oracle

Is there any way to query directly by row number in a table in Oracle? In other words, to achieve the same effect as an ordinary array lookup in a language like C or Java. I've not yet tried virtual columns.
For instance, the following is an example of an efficient query, but it wastes disk space:
create table ary (row_position_id number(10) NOT NULL,
datum binary_float NOT NULL);
begin
for i in 0..10000000
loop
insert into ary values (i, dbms_random.normal);
end loop;
commit;
end;
create unique index ary_rp on ary(row_position_id);
Now, I'm going to create a set of query values to store in another "parameter" table:
create table query_values (qval number(10) NOT NULL);
begin
for i in 0..10000
loop
insert into query_values values (mod(abs(dbms_random.random), 10000000));
end loop;
commit;
end;
Now, having these query values, I'm going to query the original table:
select d.* from ary d where exists (select 0 from query_values v
where d.row_position_id = v.qval);
Now, this query would be fine -- it would use INDEX UNIQUE SCAN and TABLE access by ROWID. The problem I have is that the row_position_id takes up as much space in the table blocks as the actual data (the DATUM column).
I am aware of Index-organized tables and also Virtual Columns (which cannot be used with IOTs). And, of course, things like ROWNUM and ROW_NUMBER are irrelevant here (unless I'm misunderstanding something).
Also worth pointing out, this table is static data -- once loaded, it will never change. I would likely do an ALTER TABLE ARY READ ONLY;
What I would really like is:
create table ary (datum binary_float not null);
-- load rows in a specific order
-- efficiently query this table by implicit row position
Thanks very much!
Henry
I think you're going to want to keep the extra column. Here's why:
As you said, ROWNUM and ROW_NUMBER are not applicable here because they are generated as rows are returned in the query; they will not tell you anything about insert order.
What about ROWID? ROWID is just where a row is stored - again, from the docs:
The data object number of the object
The data block in the data file in which the row resides
The position of the row in the data block (first row is 0)
The data file in which the row resides (first file is 1). The file number is relative to the tablespace.
The "position in the data block" sounds interesting, but you would have no idea what order of the data blocks that were inserted (Oracle could use whatever datablocks it can quickly make use of) so this would not be a reliable option, and even so, you'd be having to parse ROWIDs which are not human readable (e.g. in 12g they look like this: *BAGAASMCwQL+ )
Another option is ORA_ROWSCN, which is interesting in that it does give you some idea of order, in terms of the system change number (SCN). However, it doesn't come for free. Just to start, you have to create your table with the ROWDEPENDENCIES option, and as per the docs:
ROWDEPENDENCIES Specify ROWDEPENDENCIES if you want to enable
row-level dependency tracking. This setting is useful primarily to
allow for parallel propagation in replication environments. It
increases the size of each row by 6 bytes.
The other catch is that you would have to follow each inserted row with a commit so that each row gets a different SCN.
If you're willing to go this far, you'll still have to convert the rows to have indexes (starting with, say, 0 or 1) that you can use to join to other tables.
Here's a quick sample of what it would involve:
DROP TABLE temp;
CREATE TABLE temp
( a number(10)
, b varchar2(10)
)
ROWDEPENDENCIES
;
-- one commit after all rows
INSERT INTO temp VALUES (1, 'A');
INSERT INTO temp VALUES (2, 'B');
INSERT INTO temp VALUES (3, 'C');
INSERT INTO temp VALUES (4, 'D');
INSERT INTO temp VALUES (5, 'E');
INSERT INTO temp VALUES (6, 'F');
COMMIT;
SELECT X.*, ROWNUM
FROM (SELECT T.*
, ORA_ROWSCN
FROM TEMP T
ORDER BY ORA_ROWSCN
) x
;
A B ORA_ROWSCN ROWNUM
1 A 2272340 1
2 B 2272340 2
6 F 2272340 3
4 D 2272340 4
5 E 2272340 5
3 C 2272340 6
Whoops. Those rows are definitely not in the order they came in.
Now using one commit per row:
TRUNCATE TABLE temp;
INSERT INTO temp VALUES (1, 'A');
COMMIT;
INSERT INTO TEMP VALUES (2, 'B');
COMMIT;
INSERT INTO temp VALUES (3, 'C');
COMMIT;
INSERT INTO temp VALUES (4, 'D');
COMMIT;
INSERT INTO temp VALUES (5, 'E');
COMMIT;
INSERT INTO temp VALUES (6, 'F');
COMMIT;
SELECT X.*, ROWNUM
FROM (SELECT T.*
, ORA_ROWSCN
FROM TEMP T
ORDER BY ORA_ROWSCN
) x
;
A B ORA_ROWSCN ROWNUM
1 A 2272697 1
2 B 2272699 2
3 C 2272701 3
4 D 2272703 4
5 E 2272705 5
6 F 2272707 6
Better. But if you've got a significant number of rows it's not going to go in fast. (I think this is what you would do if you intentionally wanted to slow down your inserts. ;) )
I think that's about as good as you'll get trying to get around using your own column, BUT there is still hope to economize storage: you can do away with the table + index and just go with an index-organized table. It's basically an index that you query directly.
It's just this easy:
CREATE TABLE TEMP2
( A NUMBER(10)
, B VARCHAR2(10)
, CONSTRAINT PK_CONSTRAINT PRIMARY KEY (A)
)
ORGANIZATION INDEX
;
There are other parameters you'll want to consider for this as well, but for more info check out... the docs.
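To tie this back to the original goal, here is a sketch of how the IOT might be loaded and queried by position; staging and load_seq are hypothetical names for wherever the ordered source data lives:
CREATE TABLE ary_iot
( row_position_id NUMBER(10)
, datum BINARY_FLOAT NOT NULL
, CONSTRAINT ary_iot_pk PRIMARY KEY (row_position_id)
)
ORGANIZATION INDEX;
-- assign 0-based positions in load order
INSERT INTO ary_iot
SELECT ROWNUM - 1, datum
FROM (SELECT datum FROM staging ORDER BY load_seq);
-- one index probe, no separate table access
SELECT datum FROM ary_iot WHERE row_position_id = 12345;
Because the IOT stores DATUM inside the primary key index itself, the position column is no longer duplicated between a heap table and a separate index.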

Fastest Way to Count Distinct Values in a Column, Including NULL Values

The Transact-SQL COUNT(DISTINCT ...) operation counts all non-null values in a column. I need to count the number of distinct values per column in a set of tables, including null values (so if there is a null in the column, the result should be (Select Count(Distinct COLNAME) From TABLE) + 1).
This is going to be repeated over every column in every table in the DB, which includes hundreds of tables, some of which have over 1M rows. Because this needs to be done for every single column, adding indexes for every column is not a good option.
This will be done as part of an ASP.net site, so integration with code logic is also ok (i.e.: this doesn't have to be completed as part of one query, though if that can be done with good performance, then even better).
What is the most efficient way to do this?
Update After Testing
I tested the different methods from the answers given on a good representative table. The table has 3.2 million records, dozens of columns (a few with indexes, most without). One column has 3.2 million unique values. Other columns range from all Null (one value) to a max of 40K unique values. For each method I performed four tests (with multiple attempts at each, averaging the results): 20 columns at one time, 5 columns at one time, 1 column with many values (3.2M) and 1 column with a small number of values (167). Here are the results, in order of fastest to slowest
Count/GroupBy (Cheran)
CountDistinct+SubQuery (Ellis)
dense_rank (Eriksson)
Count+Max (Andriy)
Testing Results (in seconds):
Method            20_Columns  5_Columns  1_Column (Large)  1_Column (Small)
1) Count/GroupBy        10.8        4.8               2.8              0.14
2) CountDistinct        12.4        4.8               3                0.7
3) dense_rank          226         30                 6                4.33
4) Count+Max            98.5       44                16               12.5
Notes:
Interestingly enough, the two methods that were fastest (by far, with only a small difference between them) were both methods that submitted separate queries for each column (and in the case of result #2, the query included a subquery, so there were really two queries submitted per column). Perhaps the gains achieved by limiting the number of table scans are small in comparison to the performance hit taken in terms of memory requirements (just a guess).
Though the dense_rank method is definitely the most elegant, it seems that it doesn't scale well (see the result for 20 columns, which is by far the worst of the four methods), and even on a small scale it just cannot compete with the performance of COUNT.
Thanks for the help and suggestions!
SELECT COUNT(*)
FROM (SELECT ColumnName
FROM TableName
GROUP BY ColumnName) AS s;
GROUP BY selects distinct values including NULL. COUNT(*) will include NULLs, as opposed to COUNT(ColumnName), which ignores NULLs.
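A quick way to convince yourself with throwaway values (T-SQL):
SELECT COUNT(*)
FROM (SELECT v
FROM (VALUES (1), (2), (NULL), (2)) AS t(v)
GROUP BY v) AS s; -- returns 3: the groups are 1, 2 and NULL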
I think you should try to keep the number of table scans down and count all columns in one table in one go. Something like this could be worth trying.
;with C as
(
select dense_rank() over(order by Col1) as dnCol1,
dense_rank() over(order by Col2) as dnCol2
from YourTable
)
select max(dnCol1) as CountCol1,
max(dnCol2) as CountCol2
from C
Test the query at SE-Data
A development on OP's own solution:
SELECT
COUNT(DISTINCT acolumn) + MAX(CASE WHEN acolumn IS NULL THEN 1 ELSE 0 END)
FROM atable
Run one query that counts the number of distinct values and adds 1 if there are any NULLs in the column (using a subquery):
Select Count(Distinct COLUMNNAME) +
Case When Exists
(Select * from TABLENAME Where COLUMNNAME is Null)
Then 1 Else 0 End
From TABLENAME
You can try:
COUNT(
  DISTINCT COALESCE(
    your_table.column_1, your_table.column_2
    -- cast the columns first if they are not the same type
  )
) AS COUNT_TEST
The COALESCE function helps you combine two columns, replacing NULL values. I used this in my case and it returned the correct result.
Not sure this would be the fastest, but it might be worth testing. Use CASE to give NULL a value; clearly you would need to pick a substitute value that cannot occur in the real data. According to the query plan, this is a dead heat with the COUNT(*)/GROUP BY solution proposed by Cheran S.
SELECT
COUNT( distinct
(case when [testNull] is null then 'dbNullValue' else [testNull] end)
)
FROM [test].[dbo].[testNullVal]
With this approach you can also count more than one column:
SELECT
COUNT( distinct
(case when [testNull1] is null then 'dbNullValue' else [testNull1] end)
),
COUNT( distinct
(case when [testNull2] is null then 'dbNullValue' else [testNull2] end)
)
FROM [test].[dbo].[testNullVal]

Select random row from a sqlite table

I have a sqlite table with the following schema:
CREATE TABLE foo (bar VARCHAR)
I'm using this table as storage for a list of strings.
How do I select a random row from this table?
Have a look at Selecting a Random Row from an SQLite Table
SELECT * FROM table ORDER BY RANDOM() LIMIT 1;
The following solutions are much faster than anktastic's (the COUNT(*) costs a lot, but if you can cache it, the difference shouldn't be that big), which itself is much faster than "ORDER BY RANDOM()" when you have a large number of rows, although they have a few drawbacks.
If your rowids are rather packed (i.e. few deletions), then you can do the following (using (select max(rowid) from foo)+1 instead of max(rowid)+1 gives better performance, as explained in the comments):
select * from foo where rowid = (abs(random()) % (select (select max(rowid) from foo)+1));
If you have holes, you will sometimes try to select a non-existent rowid, and the select will return an empty result set. If this is not acceptable, you can provide a default value like this:
select * from foo where rowid = (abs(random()) % (select (select max(rowid) from foo)+1)) or rowid = (select max(rowid) from foo) order by rowid limit 1;
This second solution isn't perfect: the probability is higher for the last row (the one with the highest rowid), but if you often add rows to the table, it becomes a moving target and the distribution of probabilities should be much better.
Yet another solution: if you often select random rows from a table with lots of holes, you might want to create a table that contains the rows of the original table in random order:
create table random_foo(foo_id);
Then, periodically, re-fill the table random_foo:
delete from random_foo;
insert into random_foo select id from foo;
And to select a random row, you can use my first method (there are no holes here). Of course, this last method has some concurrency problems, but the re-building of random_foo is a maintenance operation that's not likely to happen very often.
Yet another way, which I recently found on a mailing list, is to put a trigger on delete that moves the row with the biggest rowid into the rowid that was just deleted, so that no holes are left.
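That trigger might look something like the following sketch; it's untested, and it assumes recursive triggers are left disabled so the UPDATE doesn't re-fire anything:
CREATE TRIGGER fill_hole AFTER DELETE ON foo
BEGIN
-- move the highest row into the hole, unless the deleted row was itself the highest
UPDATE foo SET rowid = OLD.rowid
WHERE rowid = (SELECT max(rowid) FROM foo)
AND OLD.rowid < (SELECT max(rowid) FROM foo);
END;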
Lastly, note that the behavior of rowid and an INTEGER PRIMARY KEY AUTOINCREMENT is not identical (with rowid, when a new row is inserted, max(rowid)+1 is chosen, whereas it is highest-value-ever-seen+1 for an autoincrement key), so the last solution won't work with an AUTOINCREMENT column in random_foo, but the other methods will.
You need to put "ORDER BY RANDOM()" in your query.
Example:
select * from quest order by RANDOM();
Let's see a complete example.
Create a table:
CREATE TABLE quest (
id INTEGER PRIMARY KEY AUTOINCREMENT,
quest TEXT NOT NULL,
resp_id INTEGER NOT NULL
);
Inserting some values:
insert into quest(quest, resp_id) values ('1024/4',6), ('256/2',12), ('128/1',24);
A default select:
select * from quest;
| id | quest  | resp_id |
|  1 | 1024/4 |       6 |
|  2 | 256/2  |      12 |
|  3 | 128/1  |      24 |
A random select:
select * from quest order by RANDOM();
| id | quest  | resp_id |
|  3 | 128/1  |      24 |
|  1 | 1024/4 |       6 |
|  2 | 256/2  |      12 |
Each time you select, the order will be different.
If you want to return only one row:
select * from quest order by RANDOM() LIMIT 1;
| id | quest  | resp_id |
|  2 | 256/2  |      12 |
Each time you select, the returned row will be different.
What about:
SELECT COUNT(*) AS n FROM foo;
then choose a random number m in [0, n) and
SELECT * FROM foo LIMIT 1 OFFSET m;
You can even save the first number (n) somewhere and only update it when the database count changes. That way you don't have to do the SELECT COUNT every time.
Here is a modification of #ank's solution:
SELECT *
FROM table
LIMIT 1
OFFSET ABS(RANDOM()) % MAX((SELECT COUNT(*) FROM table), 1)
This solution also works for indices with gaps, because we randomize an offset in the range [0, count). MAX is used to handle the case of an empty table.
Here are simple test results on a table with 16k rows:
sqlite> .timer on
sqlite> select count(*) from payment;
16049
Run Time: real 0.000 user 0.000140 sys 0.000117
sqlite> select payment_id from payment limit 1 offset abs(random()) % (select count(*) from payment);
14746
Run Time: real 0.002 user 0.000899 sys 0.000132
sqlite> select payment_id from payment limit 1 offset abs(random()) % (select count(*) from payment);
12486
Run Time: real 0.001 user 0.000952 sys 0.000103
sqlite> select payment_id from payment order by random() limit 1;
3134
Run Time: real 0.015 user 0.014022 sys 0.000309
sqlite> select payment_id from payment order by random() limit 1;
9407
Run Time: real 0.018 user 0.013757 sys 0.000208
SELECT bar
FROM foo
ORDER BY Random()
LIMIT 1
I came up with the following solution for large sqlite3 databases:
SELECT * FROM foo WHERE rowid = abs(random()) % (SELECT max(rowid) FROM foo) + 1;
The abs(X) function returns the absolute value of the numeric argument X.
The random() function returns a pseudo-random integer between -9223372036854775808 and +9223372036854775807.
The % operator outputs the integer value of its left operand modulo its right operand.
Finally, you add 1 to prevent a rowid equal to 0.
