I have a SQLite database that looks similar to this:
----------   ------------   ------------
| Car    |   | Computer |   | Category |
----------   ------------   ------------
| id     |   | id       |   | id       |
| make   |   | make     |   | record   |
| model  |   | price    |   ------------
| year   |   | cpu      |
----------   | weight   |
             ------------
The record column in my Category table contains a comma-separated list of the table name and id of the items that belong to that Category, so an entry would look like this:
Car_1,Car_2.
I am trying to split the items in the record on the comma to get each value:
Car_1
Car_2
Then I need to take it one step further and split on the _ and return the Car records.
So if I know the Category id, I'm trying to wind up with this in the end:
----------------    ------------------
| Car          |    | Car            |
----------------    ------------------
| id: 1        |    | id: 2          |
| make: Honda  |    | make: Toyota   |
| model: Civic |    | model: Corolla |
| year: 2016   |    | year: 2013     |
----------------    ------------------
I have had some success splitting on the comma and getting 2 records back, but I'm stuck on splitting on the _ and making the join to the table named in the record.
This is my query so far:
WITH RECURSIVE record(recordhash, data) AS (
    SELECT '', record || ',' FROM Category WHERE id = 1
    UNION ALL
    SELECT
        substr(data, 0, instr(data, ',')),
        substr(data, instr(data, ',') + 1)
    FROM record
    WHERE data != ''
)
SELECT recordhash
FROM record
WHERE recordhash != ''
This is returning
--------------
| recordhash |
--------------
| Car_1 |
| Car_2 |
--------------
Any help would be greatly appreciated!
If your recursive CTE works as expected, you can split each recordhash value on the _ delimiter and use the part after the _ as the id of the rows to return from Car:
select * from Car
where id in (
    -- substr(recordhash, 5) skips the 4-character 'Car_' prefix
    select substr(recordhash, 5)
    from record
    where recordhash like 'Car%'
)
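For completeness, the split and the lookup can be combined into a single statement. Here is a minimal sketch, assuming the schema above and that the entries you want point at the Car table; it takes everything after the first _ as the id, so the prefix length is not hard-coded:
WITH RECURSIVE record(recordhash, data) AS (
    SELECT '', record || ',' FROM Category WHERE id = 1
    UNION ALL
    SELECT
        substr(data, 0, instr(data, ',')),
        substr(data, instr(data, ',') + 1)
    FROM record
    WHERE data != ''
)
SELECT *
FROM Car
WHERE id IN (
    -- keep only Car entries and strip everything up to and including the _
    SELECT substr(recordhash, instr(recordhash, '_') + 1)
    FROM record
    WHERE substr(recordhash, 1, 4) = 'Car_'
);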
I have a table with a DATETIME field, which is indexed with a B-tree. Now I want to query it with the following statement:
SELECT
    count(us.CITY) as metric,
    us.CITY as Name,
    us.LATITUDE as latitude,
    us.LONGITUDE as longitude
FROM FACT
LEFT JOIN USER us
    ON us.ID_USER = FACT.USER
WHERE ASSESSMENT_DATE BETWEEN FROM_UNIXTIME(1601568552) AND FROM_UNIXTIME(1604028277)
GROUP BY us.CITY, us.LATITUDE, us.LONGITUDE;
EXPLAIN:
+------+-------------+-------+--------+----------------------------+---------+---------+------------------------------+--------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+----------------------------+---------+---------+------------------------------+--------+----------------------------------------------+
| 1 | SIMPLE | FACT | ALL | INDEX_FACT_ASSESSMENT_DATE | NULL | NULL | NULL | 762621 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | us | eq_ref | PRIMARY | PRIMARY | 46 | dwh0.FACT.USER,dwh0.FACT.ENV | 1 | |
+------+-------------+-------+--------+----------------------------+---------+---------+------------------------------+--------+----------------------------------------------+
2 rows in set (0.001 sec)
Interestingly, if I only change the dates manually into DATETIME-format strings, it uses the index. But in my opinion the FROM_UNIXTIME() function should return exactly the same thing...
SELECT
    count(us.CITY) as metric,
    us.CITY as Name,
    us.LATITUDE as latitude,
    us.LONGITUDE as longitude
FROM FACT
LEFT JOIN USER us
    ON us.ENV = FACT.ENV AND us.ID_USER = FACT.USER
WHERE
    -- ASSESSMENT_DATE BETWEEN FROM_UNIXTIME(1596649101) AND FROM_UNIXTIME(1599108827)
    ASSESSMENT_DATE BETWEEN '2020-08-05 11:30:11.987' AND '2020-09-03 11:30:11.987'
GROUP BY us.CITY, us.LATITUDE, us.LONGITUDE;
EXPLAIN:
+------+-------------+-------+--------+----------------------------+----------------------------+---------+------------------------------+--------+--------------------------------------------------------+
| id   | select_type | table | type   | possible_keys              | key                        | key_len | ref                          | rows   | Extra                                                  |
+------+-------------+-------+--------+----------------------------+----------------------------+---------+------------------------------+--------+--------------------------------------------------------+
|    1 | SIMPLE      | FACT  | range  | INDEX_FACT_ASSESSMENT_DATE | INDEX_FACT_ASSESSMENT_DATE | 5       | NULL                         | 132008 | Using index condition; Using temporary; Using filesort |
|    1 | SIMPLE      | us    | eq_ref | PRIMARY                    | PRIMARY                    | 46      | dwh0.FACT.USER,dwh0.FACT.ENV |      1 |                                                        |
+------+-------------+-------+--------+----------------------------+----------------------------+---------+------------------------------+--------+--------------------------------------------------------+
2 rows in set (0.001 sec)
Has anyone run into a problem like this? The WHERE clause is generated by Grafana, so I can not change that, but I can change the rest if it makes a difference.
Thanks for any suggestions!
Sorry for bothering: after around 10^5 more inserts, it works in both cases... Maybe it was just bad luck.
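For anyone hitting the same thing: that outcome is consistent with stale index statistics. The optimizer estimates how many rows the date range will match, and on a freshly loaded table it can wrongly conclude that a full table scan is cheaper than the range scan. If it happens again, refreshing the statistics is a cheap first step; a minimal sketch for MariaDB/MySQL, using the FACT table from the query above:
-- rebuild the optimizer's index statistics for the fact table,
-- then re-run EXPLAIN on the original query to compare plans
ANALYZE TABLE FACT;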
I’m trying to write a query that limits the number of sub-results I get per category, and could use some help on whether there is a good function for this.
Quick Example:
| ID | Category | Value | A bunch of other important columns |
|-----------|-----------------|--------------|-------------------------------------------|
| 1 | A | GUID | |
| 2 | A | GUID | |
| 3 | A | GUID | |
| 4 | A | GUID | |
| 5 | B | GUID | |
| 6 | B | GUID | |
I want to return only N GUIDs per category. (Largely because I’m hitting the 64 MB Kusto query limit for some categories that won’t be useful anyway.)
The top-nested operator looks good at first, BUT I don’t want to do any aggregation, and it drops the other important columns. Per the note on its documentation page, I can use Ignore=max(1) to suppress the aggregation, then serialize all my other columns into a single value and unpack them after the filter. But that feels like I’m doing something very wrong.
I've also tried something like:
| partition by Category ( top 3 by Value)
But it's limited to 64 partitions, and I need closer to 500.
Any idea of a good pattern to do this?
Here you go:
let NumItemsPerCategory = 3;
datatable(ID:long, Category:string, Value:guid)
[
    1, "A", guid(40b73f8f-78d2-4eae-bd5b-b3e00f38ac33),
    2, "A", guid(043ee507-aadf-4453-bcc6-d8f4f541b043),
    3, "A", guid(f71d3cc0-ce46-474f-9dcd-f3883fa08859),
    4, "A", guid(bf259fc8-e9fe-4a99-a296-ca81e1fa250a),
    5, "B", guid(d8ee3ac7-da76-4e87-a9ed-e5a37c943ad2),
    6, "B", guid(282e74ff-3b71-407c-a2a7-92bb1cb17b27),
]
| summarize PackedItems = make_list(pack_all(), NumItemsPerCategory) by Category
| project-away Category
| mv-expand PackedItem = PackedItems
| evaluate bag_unpack(PackedItem)
| project-away PackedItems
Result:
| ID | Category | Value |
|----|----------|--------------------------------------|
| 1 | A | 40b73f8f-78d2-4eae-bd5b-b3e00f38ac33 |
| 2 | A | 043ee507-aadf-4453-bcc6-d8f4f541b043 |
| 3 | A | f71d3cc0-ce46-474f-9dcd-f3883fa08859 |
| 5 | B | d8ee3ac7-da76-4e87-a9ed-e5a37c943ad2 |
| 6 | B | 282e74ff-3b71-407c-a2a7-92bb1cb17b27 |
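To run this against real data, replace the datatable literal with your table. A sketch of the same pipeline, where YourTable is a placeholder name; note that make_list() simply keeps the first NumItemsPerCategory items it encounters per category, so sort beforehand if you need a specific top N:
let NumItemsPerCategory = 3;
YourTable
| summarize PackedItems = make_list(pack_all(), NumItemsPerCategory) by Category
| project-away Category
| mv-expand PackedItem = PackedItems
| evaluate bag_unpack(PackedItem)
| project-away PackedItems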
The table is the following:
CREATE TABLE UserLog(uid TEXT, clicks INT, lang TEXT)
The uid field should be unique.
Here is some sample data:
| uid         | clicks | lang   |
---------------------------------
| "898187354" | 4      | "ru"   |
| "898187354" | 4      | "ru"   |
| "123456789" | 1      | <null> |
| "123456789" | 10     | "en"   |
| "140922382" | 13     | <null> |
As you can see, I have multiple rows where the uid field is duplicated. I would like those rows to be merged in the following way:
the clicks fields are added together, and the lang field is updated if its previous value was null.
For the data shown above, it would look something like this:
| uid         | clicks | lang   |
---------------------------------
| "898187354" | 8      | "ru"   |
| "123456789" | 11     | "en"   |
| "140922382" | 13     | <null> |
I can find many ways to simply delete duplicate data, which is not what I want here. I'm unsure how to express this kind of merging logic in SQL statements.
First, update the first row of each uid group (the one with the lowest rowid) with the merged values:
update userlog
set clicks = (select sum(u.clicks) from userlog u where u.uid = userlog.uid),
    -- max() ignores NULLs, so a non-null lang is kept when one exists
    lang = (select max(u.lang) from userlog u where u.uid = userlog.uid)
where not exists (
    select 1 from userlog u
    where u.uid = userlog.uid and u.rowid < userlog.rowid
);
and then delete the duplicate rows that are not needed:
delete from userlog
where exists (
    select 1 from userlog u
    where u.uid = userlog.uid and u.rowid < userlog.rowid
);
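Since the delete only makes sense after the update has run, it is safest to wrap both statements in a single transaction. Alternatively, if rebuilding the table is an option, the merge can be done in one pass; a minimal sketch (UserLog_merged is just an example name, and note that CREATE TABLE ... AS SELECT does not carry over constraints or indexes):
BEGIN;
CREATE TABLE UserLog_merged AS
SELECT uid,
       SUM(clicks) AS clicks,  -- add up the clicks per uid
       MAX(lang)   AS lang     -- MAX() ignores NULLs, so a non-null lang wins
FROM UserLog
GROUP BY uid;
DROP TABLE UserLog;
ALTER TABLE UserLog_merged RENAME TO UserLog;
COMMIT;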
I have two tables
Names
id | name
---------
5 | bill
15 | bob
10 | nancy
Entries
id | name_id | added | description
----------------------------------
2 | 5 | 20140908 | i added this
4 | 5 | 20140910 | added later on
9 | 10 | 20140908 | i also added this
1 | 15 | 20140805 | added early on
6 | 5 | 20141015 | late to the party
I'd like to order Names by each name's numerically lowest added value in the Entries table, and display one row per name with columns from both tables, ordered by that added value overall, so the results will be something like:
names.id | names.name | entries.added | entries.description
-----------------------------------------------------------
15 | bob | 20140805 | added early on
5 | bill | 20140908 | i added this
10 | nancy | 20140908 | i also added this
I looked into joins on the first item (e.g. SQL Server: How to Join to first row) but wasn't able to get it to work.
Any tips?
Give this query a try:
SELECT Names.id, Names.name, Entries.added, Entries.description
FROM Names
INNER JOIN Entries
ON Names.id = Entries.name_id
ORDER BY Entries.added
Add DESC if you want it in reverse order i.e.: ORDER BY Entries.added DESC.
This should do it. Find each name's earliest added value with a grouped subquery, then join back to Entries to pick up that row's description:
SELECT n.id, n.name, e.added, e.description
FROM Names n
INNER JOIN Entries e
    ON n.id = e.name_id
INNER JOIN (
    SELECT name_id, MIN(added) AS min_added
    FROM Entries
    GROUP BY name_id
) m
    ON e.name_id = m.name_id AND e.added = m.min_added
ORDER BY e.added
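If your database supports window functions (SQL Server, or SQLite 3.25 and later), the join-to-first-row can also be expressed with ROW_NUMBER(); a sketch:
SELECT id, name, added, description
FROM (
    SELECT n.id, n.name, e.added, e.description,
           -- number each name's entries from earliest added onward
           ROW_NUMBER() OVER (PARTITION BY n.id ORDER BY e.added) AS rn
    FROM Names n
    INNER JOIN Entries e ON n.id = e.name_id
) t
WHERE rn = 1
ORDER BY added;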
Below is my sample data. I would like to get the host:value pair with the latest time.
+------+-------+-------+
| HOST | VALUE | TIME |
+------+-------+-------+
| A | 100 | 13:40 |
| A | 150 | 13:00 |
| A | 222 | 13:23 |
| B | 210 | 13:55 |
| B | 300 | 13:44 |
+------+-------+-------+
I want to get only the row with the latest time value for each host value.
The result should be like:
A 100 13:40
B 210 13:55
I think there are several analytic functions that could achieve this in Oracle, but I'm not sure what I can do in SQLite.
Can you let me know how to write such a query?
Here is an ANSI-compliant way of writing your query, which should run on all versions of SQLite (the table is called mytable here as a placeholder). For a potentially shorter solution, see the answer by CL.
SELECT t1.HOST || '-' || t1.VALUE || '-' || t1.TIME AS HOSTVALUETIME
FROM mytable t1
INNER JOIN
(
    SELECT HOST, MAX(TIME) AS MAXTIME
    FROM mytable
    GROUP BY HOST
) t2
    ON t1.HOST = t2.HOST AND t1.TIME = t2.MAXTIME
ORDER BY t1.HOST
Output:
+---------------+
| HOSTVALUETIME |
+---------------+
| A-100-13:40   |
| B-210-13:55 |
+---------------+
In SQLite 3.7.11 or later, MAX() selects from which row in a group the other column values come:
SELECT Host,
Value,
MAX(Time)
FROM TheNameOfThisTableIsSoSecretThatICantTellYou
GROUP BY Host;
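Applied to the sample data above, this should return the VALUE from the same row that supplied MAX(Time) for each host:
Host  Value  Time
----  -----  -----
A     100    13:40
B     210    13:55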