SQLite: how to highlight fields outside of trigger spec?

I have an SQLite table as such:

A    B
---------
0    0
1    1
and the following trigger specs that fire on update or insert for each column:
A = 0
B = 1
In the application, I need to display the rows that break the specs, with the culprit fields highlighted, as such (brackets mark the highlighted fields):

A    B
---------
0   [0]
[1]  1
I honestly don't know how to approach this. Would this require that I somehow return a 'bitmap' of the table along with the selected data? Is there a better way to 'tag' fields?
Thanks!

In a query, all you can do is add two extra flag columns (one for each of the columns A and B) holding 0 or 1:
SELECT *,
       A = 0 AS flag_A,
       B = 1 AS flag_B
FROM tablename;
In your application you can then check the values of the flag columns and highlight a column if its flag is equal to 0.
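With the sample rows above, the query returns:

A    B    flag_A  flag_B
------------------------
0    0    1       0
1    1    0       1

so the application would highlight B in the first row and A in the second.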

Related

Does Kusto support DEFAULT values for columns?

I have a table with 3 columns:

ID  productId  customerId
-------------------------
1   5          1
2   4          1
3   5          1
4   4          1
I want to add a new column called ID_MOD whose value by default will be ID % X (X is a number).
Expected result for X = 3:
ID  productId  customerId  ID_MOD
---------------------------------
1   5          1           1
2   4          1           2
3   5          1           0
4   4          1           1
I have X instances of my app, and I want each instance to query specific ID_MOD values (0, 1, 2, ..., X-1).
Is it possible to use default values for columns? If so, can the default value be calculated based on other columns?
What you can do is create a stored function that receives x as an input parameter and extends your table with a calculated column (at query time).
For example:
.create-or-alter function FunctionName(x:int)
{
    TableName
    | extend ID_MOD = ID % x
}
If you decide x always has the same value and shouldn't be a parameter, you can give the function the same name as the table, and it will 'hide' the original table.
If the logic of calculating the extended column is well-defined in advance, you can instead apply it at ingestion time, using an update policy.
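For example, with x = 3, the app instance responsible for partition 1 could run the following (a minimal sketch, reusing the FunctionName placeholder above):

FunctionName(3)
| where ID_MOD == 1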

How to create a sqlite recursive view that properly uses an index for the first row

We're using SQLite version 3.16.0.
I would like to create some views to simplify some common recursive operations I do on our schema. However, these views turn out to be significantly slower than running the SQL directly.
Specifically, a view to show me the ancestors for a given node:
CREATE VIEW ancestors AS
WITH RECURSIVE ancestors
(
leafid
, parentid
, name
, depth
)
AS
(SELECT id
, parentid
, name
, 1
FROM objects
UNION ALL
SELECT a.leafid
, f.parentid
, f.name
, a.depth + 1
FROM objects f
JOIN ancestors a
ON f.id = a.parentid
) ;
when used with this query:
SELECT *
FROM ancestors
WHERE leafid = 157609;
yields this result:
sele order from deta
---- ------------- ---- ----
2 0 0 SCAN TABLE objects
3 0 1 SCAN TABLE ancestors AS a
3 1 0 SEARCH TABLE objects AS f USING INTEGER PRIMARY KEY (rowid=?)
1 0 0 COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)
0 0 0 SCAN SUBQUERY 1
Run Time: real 0.374 user 0.372461 sys 0.001483
Yet running the query directly (with a WHERE constraint on the initial query for the same row), yields:
WITH RECURSIVE ancestors
(
leafid, parentid, name, depth
)
AS
(SELECT id, parentid , name, 1
FROM objects
WHERE id = 157609
UNION ALL
SELECT a.leafid, f.parentid , f.name, a.depth + 1
FROM objects f
JOIN ancestors a
ON f.id = a.parentid
)
SELECT *
FROM ancestors;
Run Time: real 0.021 user 0.000249 sys 0.000111
sele order from deta
---- ------------- ---- ----
2 0 0 SEARCH TABLE objects USING INTEGER PRIMARY KEY (rowid=?)
3 0 1 SCAN TABLE ancestors AS a
3 1 0 SEARCH TABLE objects AS f USING INTEGER PRIMARY KEY (rowid=?)
1 0 0 COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)
0 0 0 SCAN SUBQUERY 1
The second result is around 15 times faster because we're using the PK index on objects to get the initial row, whereas the view seems to scan the entire table, filtering on leaf node only after the ancestors for all rows are found.
Is there any way to write the view such that I can apply a constraint on a consuming select that would be applied to the optimization of the initial query?
You are asking for the WHERE leafid = 157609 to be moved inside the first subquery. This is the push-down optimization, and SQLite tries to do it whenever possible.
However, this is possible only if the database is able to prove that the result is guaranteed to be the same. For this particular query, you know that the transformation would be valid, but, at the moment, there is no algorithm to make this proof for recursive CTEs.
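Until such an algorithm exists, a practical workaround is to skip the view and keep the constraint inside the CTE, binding the leaf id from the application. A minimal sketch as a prepared statement (?1 is the bound leaf id):

WITH RECURSIVE ancestors(leafid, parentid, name, depth) AS (
    SELECT id, parentid, name, 1
    FROM objects
    WHERE id = ?1        -- seed only the requested row, so the PK index is used
    UNION ALL
    SELECT a.leafid, f.parentid, f.name, a.depth + 1
    FROM objects f
    JOIN ancestors a ON f.id = a.parentid
)
SELECT * FROM ancestors;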

SQLite query returns 0 results

I am having trouble with a query.
Fiddle: https://www.db-fiddle.com/f/JXQHw1VzF7vAowNLFrxv5/1
This is not going to work. So my question is: what has to be done to get a result when I want to use both conditions?
(attr_key = 0 AND attr_value & 201326592 = 201326592)
AND
(attr_key = 30 AND attr_value & 8 = 8)
Thanks in advance!
Best regards
Since a single attribute row cannot have attr_key equal to both 0 and 30 at once, the combined WHERE clause can never match; the two conditions have to be checked across different rows. One way to do that, checking for the presence of some number of key/value pairs in the items_attributes table, is conditional aggregation:
SELECT i.id
FROM items i
LEFT JOIN items_attributes ia
ON i.id = ia.owner
GROUP BY
i.id
HAVING
SUM(CASE WHEN ia.key = 0 AND (ia.value & 201326592) = 201326592 THEN 1 ELSE 0 END) > 0 AND
SUM(CASE WHEN ia.key = 30 AND (ia.value & 8) = 8 THEN 1 ELSE 0 END) > 0
The trick in the above query is that we scan each cluster of key/value pairs for each item, and then check whether the pairs you expect are present.
Note: My query just returns id values from items matching all key value pairs. If you want to bring in other columns from either of the two tables, you may simply add on more joins to what I wrote above.
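For example, a hedged sketch of that join-back, assuming you want all the columns of items for the matching ids (the subquery restates the aggregation above against items_attributes directly):

SELECT i.*
FROM items i
JOIN (
    SELECT ia.owner
    FROM items_attributes ia
    GROUP BY ia.owner
    HAVING SUM(CASE WHEN ia.key = 0 AND (ia.value & 201326592) = 201326592 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN ia.key = 30 AND (ia.value & 8) = 8 THEN 1 ELSE 0 END) > 0
) m ON i.id = m.owner;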

over - partition by in SQLite

I have a table TABLE in an SQLite database with columns DATE and GROUP. I want to select the first 10 entries in each group. After researching similar topics here on Stack Overflow, I came up with the following query, but it runs very slowly. Any ideas how to make it faster?
select * from TABLE as A
where (select count(*) from TABLE as B
where B.DATE < A.DATE and A.GROUP == B.GROUP) < 10
[The EXPLAIN QUERY PLAN output for TABLE = clients_bets was attached as an image.]
Here are a few suggestions:
Use a covering index (an index containing all the data needed in the subquery, in this case the group and date):
create index some_index on some_table(some_group, some_date)
Additionally, rewrite the subquery to make it less dependent on the outer query:
select * from some_table as A
where rowid in (
select B.rowid
from some_table as B
where A.some_group == B.some_group
order by B.some_date limit 10 )
The query plan changes from:
0 0 0 SCAN TABLE some_table AS A
0 0 0 EXECUTE CORRELATED SCALAR SUBQUERY 1
1 0 0 SEARCH TABLE some_table AS B USING COVERING INDEX idx_1 (some_group=? AND some_date<?)
to:
0 0 0 SCAN TABLE some_table AS A
0 0 0 EXECUTE CORRELATED LIST SUBQUERY 1
1 0 0 SEARCH TABLE some_table AS B USING COVERING INDEX idx_1 (some_group=?)
While the two plans are very similar, the rewritten query seems quite a bit faster. I'm not sure why.
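Mapped back to the question's schema, the combination might look like the sketch below (GROUP and TABLE are reserved words in SQLite, so they are quoted; quoting DATE as well does no harm):

create index idx_group_date on "TABLE"("GROUP", "DATE");

select *
from "TABLE" as A
where rowid in (
    select B.rowid
    from "TABLE" as B
    where A."GROUP" = B."GROUP"
    order by B."DATE"
    limit 10
);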

Parsing data rows in plsql

This is quite clumsy.
Initial info: there's a clumsy select query that eventually returns 0 or 1 depending on several conditions. Usually it gets to select only one row (other data is cut off by WHERE clauses, etc.). The problem occurs when there are more rows to parse. So the data actually looks like this:
Status
------
0
1

instead of

Status
------
1
Problem: only one row of data is needed in return, i.e. if there's a 1 present in any row, 1 should be returned, otherwise 0.
Condition: it should be done only in a query (no variables, ifs, etc.).
Thanks in advance.
If you are sure that 1 and 0 are the only values being returned, can't you use a MAX over this query to see any 1s?
select max(id) result
from (
    select 1 id from dual
    union all
    select 0 id from dual
);

RESULT
----------
1
And when no 1s are present:

select max(id)
from (
    select 0 id from dual
    union all
    select 0 id from dual
    union all
    select 0 id from dual
);

MAX(ID)
----------
0
