Why is SQLite query on two indexed columns so slow? - sqlite

I have a table with around 65 million rows that I'm trying to run a simple query on. The table and indexes look like this:
CREATE TABLE E(
    x INTEGER,
    t INTEGER,
    e TEXT,
    A, B, C, D, E, F, G, H, I,
    PRIMARY KEY(x, t, e, I)
);
CREATE INDEX ET ON E(t);
CREATE INDEX EE ON E(e);
The query I'm running looks like this:
SELECT MAX(t), B, C FROM E WHERE e='G' AND t <= 9878901234;
I need to run this query for thousands of different values of t and was expecting each one to run in a fraction of a second. However, the query above takes nearly 10 seconds to run!
I tried running the query plan but only get this:
0|0|0|SEARCH TABLE E USING INDEX EE (e=?)
So this should be using the index. With a binary search I would expect a worst case of only about 26 tests, which would be pretty quick.
Why is my query so slow?

Each table in a query can use at most one index. Since your WHERE clause looks at multiple columns, you can use a multi-column index. For these, every column used from the index except the last has to be tested for equality; only the last one used can be tested with greater than/less than.
So:
CREATE INDEX e_idx_e_t ON E(e, t);
should give you a boost.
For further reading about how SQLite uses indexes, the Query Planner documentation is a good introduction.
You're also mixing an aggregate function (MAX(t)) with columns (B and C) that aren't part of a group. In SQLite's case, this means that it will pick the values of B and C from the row with the maximum t value; other databases usually throw an error.
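As a quick sanity check, here is a minimal sketch using Python's sqlite3 module with the schema from the question (the duplicate column E is dropped from the list, since SQLite treats column names e and E as the same name) showing that the planner picks the suggested composite index:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE E(
        x INTEGER, t INTEGER, e TEXT,
        A, B, C, D, F, G, H, I,
        PRIMARY KEY(x, t, e, I)
    );
    CREATE INDEX e_idx_e_t ON E(e, t);
""")
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT MAX(t), B, C FROM E WHERE e='G' AND t <= 9878901234"
).fetchall()
# The plan's detail column should mention e_idx_e_t, with an
# equality test on e and a range test on t.
for row in plan:
    print(row[3])
```

With the composite index, both WHERE terms are resolved inside a single index search instead of scanning every row matching e='G'.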

Related

Best index to cover a mix of exact match and less/greater than query in sqlite

I have a table which I need to filter on (using SQLite); the query involves 3 fields:
WHERE x <= 'something' AND y = 'something' AND z = 'SOMETHING ELSE'
ORDER BY x DESC
I was wondering what's the best index to cover this query.
I have tried a few, for example:
CREATE INDEX idx_x_y_z ON user_messages(x, y, z);
CREATE INDEX idx_y_z ON user_messages(y, z);
but the best I can get is:
SEARCH TABLE table USING INDEX idx_y_z
USE TEMP B-TREE FOR ORDER BY
Is that optimal, or can I avoid the USE TEMP B-TREE FOR ORDER BY?
Reading https://explainextended.com/2009/04/01/choosing-index/ it seems to be the case, but since the query is slightly different (we order by a field we filter on), I was wondering if maybe it is not exactly the same.
Also, I am struggling to find good resources on this: a lot of the material addresses the most common scenarios, and it's harder to find more in-depth resources. Do you have any suggestions?
Thanks!
UPDATE:
Turns out there was another issue; I had oversimplified the schema in the original question.
One of the fields had a type of BOOLEAN, and I was matching it using the IS FALSE operator, which returned the right number of rows, while = 0 for some reason did not.
When querying with = it would not use a TEMP B-TREE, while it would when using IS FALSE.
To address this issue I created an index excluding the BOOLEAN field, and the TEMP B-TREE was no longer used, only a SEARCH TABLE step.
From the query planner documentation:
Then the index might be used if the initial columns of the index (columns a, b, and so forth) appear in WHERE clause terms. The initial columns of the index must be used with the = or IN or IS operators. The right-most column that is used can employ inequalities.
So since your WHERE has two exact comparisons and one less than or equal, that one should come last in an index for best effect:
CREATE INDEX idx_y_z_x ON user_messages(y, z, x);
Using that index with a query with your WHERE terms:
sqlite> EXPLAIN QUERY PLAN SELECT * FROM user_messages
...> WHERE x <= 'something' AND y = 'something' AND z = 'something'
...> ORDER BY x DESC;
QUERY PLAN
`--SEARCH TABLE user_messages USING INDEX idx_y_z_x (y=? AND z=? AND x<?)
As you can see, it fully uses the index, with no temporary table needed for sorting the results.
More essential reading about how SQLite uses indexes can be found in the Query Planning documentation.
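To verify, here is a minimal sketch using Python's sqlite3 module, with the user_messages schema reduced to the three fields from the question, showing that the suggested index satisfies both the search and the ORDER BY:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE user_messages(x TEXT, y TEXT, z TEXT);
    CREATE INDEX idx_y_z_x ON user_messages(y, z, x);
""")
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM user_messages
     WHERE x <= 'something' AND y = 'something' AND z = 'something'
     ORDER BY x DESC
""").fetchall()
# The search should use idx_y_z_x, and no "USE TEMP B-TREE FOR
# ORDER BY" step should appear: the index already delivers rows
# ordered by x, so a reverse index scan satisfies the DESC order.
details = " | ".join(row[3] for row in plan)
print(details)
```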

Neo4j MATCH then MERGE too many DB hits

This is the query:
MATCH (n:Client{curp:'SOME_VALUE'})
WITH n
MATCH (n)-[:HIZO]-()-[r:FB]-()-[:HIZO]-(m:Client)
WHERE ID(n)<>ID(m)
AND NOT (m)-[:FB]->(n)
MERGE (n)-[:FB]->(m) RETURN m.curp
Why is the Merge stage getting so many DB hits if the query already narrowed down
n, m pairs to 6,781 rows?
Details of that stage shows this:
n, m, r
(n)-[ UNNAMED155:FB]->(m)
Keep in mind that queries build up rows, and operations in your query get run on every row that is built up.
Because the pattern in your match may find multiple paths to the same :Client, it will build up multiple rows with the same n and m (though possibly with different r; since you aren't using r anywhere else in your query, I encourage you to remove the variable).
This means that even though you mean to MERGE a single relationship between n and a distinct m, this MERGE operation will actually be run for every single duplicate row of n and m. One of those MERGEs will create the relationship, the others will be wasting cycles matching on the relationship that was created without doing anything more.
That's why we should be able to lower our db hits by only considering distinct pairs of n and m before doing the MERGE.
Also, since your query made sure we're only considering n and m where the relationship doesn't exist, we can safely use CREATE instead of MERGE, and it should save us some db hits because MERGE always attempts a MATCH first, which isn't necessary.
An improved query might look like this:
MATCH (n:Client{curp:'SOME_VALUE'})
WITH n
MATCH (n)-[:HIZO]-()-[:FB]-()-[:HIZO]-(m:Client)
WHERE n <> m
AND NOT (m)-[:FB]->(n)
WITH DISTINCT n, m
MERGE (n)-[:FB]->(m)
RETURN m.curp
EDIT
Returning the query to use MERGE for the :FB relationship, as attempts to use CREATE instead ended up not being as performant.

Comparison operators behave differently after indexing

With this schema:
CREATE TABLE temperatures(
    sometext TEXT,
    lowtemp INT,
    hightemp INT,
    moretext TEXT);
When I run this search:
select * from temperatures where lowtemp < 20 and hightemp > 20;
I get the correct result which is always one record (due to the specifics of the data).
Now, when I index the table:
CREATE INDEX ltemps ON temperatures(lowtemp);
CREATE INDEX htemps ON temperatures(hightemp);
The exact same query above stops providing the expected results -- now I get many records, including ones where lowtemp and hightemp obviously don't meet the comparison tests.
I'm running this on the same sqlite3 database, same table. The only difference is adding the above 2 index statements after table creation.
Can someone explain how indexing influences this behavior?

SQLite - Update with random unique value

I am trying to populate every row in a column with random values ranging from 0 to the row count.
So far I have this:
UPDATE table
SET column = ABS(RANDOM() % (SELECT COUNT(id) FROM table));
This does the job but produces duplicate values, which turned out to be bad. I added a UNIQUE constraint, but that just causes it to crash.
Is there a way to update a column with random unique values from certain range?
Thanks!
If you want to later read the records in a random order, you can just do the ordering at that time:
SELECT * FROM MyTable ORDER BY random()
(This will not work if you need the same order in multiple queries.)
Otherwise, you can use a temporary table to store the random mapping between the rowids of your table and the numbers 1..N.
(Those numbers are automatically generated by the rowids of the temporary table.)
CREATE TEMP TABLE MyOrder AS
SELECT rowid AS original_rowid
FROM MyTable
ORDER BY random();

UPDATE MyTable
SET MyColumn = (SELECT rowid
                FROM MyOrder
                WHERE original_rowid = MyTable.rowid) - 1;

DROP TABLE MyOrder;
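A minimal sketch using Python's sqlite3 module (with a small, made-up MyTable of ten rows) confirming that this temp-table approach assigns each value in 0..N-1 exactly once:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable(id INTEGER PRIMARY KEY, MyColumn INTEGER)")
con.executemany("INSERT INTO MyTable(id) VALUES (?)",
                [(i,) for i in range(1, 11)])

# The temp table's own rowids (1..N, assigned in the randomized
# insertion order) supply the permutation.
con.executescript("""
    CREATE TEMP TABLE MyOrder AS
    SELECT rowid AS original_rowid
    FROM MyTable
    ORDER BY random();

    UPDATE MyTable
    SET MyColumn = (SELECT rowid
                    FROM MyOrder
                    WHERE original_rowid = MyTable.rowid) - 1;

    DROP TABLE MyOrder;
""")
values = sorted(r[0] for r in con.execute("SELECT MyColumn FROM MyTable"))
print(values)  # [0, 1, 2, ..., 9] -- each value exactly once
```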
What you seem to be seeking is not simply a set of random numbers, but rather a random permutation of the numbers 1..N. This is harder to do. If you look in Knuth (The Art of Computer Programming) or in Bentley (Programming Pearls or More Programming Pearls), one suggested way is to create an array with the values 1..N, and then for each position, swap the current value with a randomly selected other value from the array. (I'd need to dig out the books to check whether it is any arbitrary position in the array, or only a value following it in the array.) In your context, you then apply this permutation to the rows in the table under some ordering, so row 1 under the ordering gets the value in the array at position 1 (using 1-based indexing), and so on.
In the 1st Edition of Programming Pearls, Column 11 Searching, Bentley says:
Knuth's Algorithm P in Section 3.4.2 shuffles the array X[1..N].
for I := 1 to N do
Swap(X[I], X[RandInt(I,N)])
where the RandInt(n,m) function returns a random integer in the range [n..m] (inclusive). That's nothing if not succinct.
The alternative is to have your code thrashing around when there is one value left to update, waiting until the random number generator picks the one value that hasn't been used yet. As a hit and miss process, that can take a while, especially if the number of rows in total is large.
Actually translating that into SQLite is a separate exercise. How big is your table? Is there a convenient unique key on it (other than the one you're randomizing)?
Given that you have a primary key, you can easily generate an array of structures such that each primary key is allocated a number in the range 1..N. You then use Algorithm P to permute the numbers. Then you can update the table from the primary keys with the appropriate randomized number. You might be able to do it all with a second (temporary) table in SQL, especially if SQLite supports UPDATE statements with a join between two tables. But it is probably nearly as simple to use the array to drive singleton updates. You'd probably not want a unique constraint on the random number column while this update is in progress.
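To make this concrete, here is a sketch in Python using the stdlib random and sqlite3 modules: Algorithm P shuffles the numbers 1..N in place, and the shuffled array then drives singleton updates keyed by the primary key. The table and column names are made up for illustration:

```python
import random
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(pk INTEGER PRIMARY KEY, rnd INTEGER)")
con.executemany("INSERT INTO t(pk) VALUES (?)",
                [(i,) for i in range(1, 101)])

pks = [r[0] for r in con.execute("SELECT pk FROM t")]
nums = list(range(1, len(pks) + 1))

# Algorithm P: swap each position with a randomly chosen
# position at or after it (randrange's upper bound is exclusive,
# so j ranges over [i, N-1], matching RandInt(I, N) 1-based).
for i in range(len(nums)):
    j = random.randrange(i, len(nums))
    nums[i], nums[j] = nums[j], nums[i]

# Singleton updates: each primary key gets its permuted number.
con.executemany("UPDATE t SET rnd = ? WHERE pk = ?", zip(nums, pks))

vals = sorted(r[0] for r in con.execute("SELECT rnd FROM t"))
print(vals == list(range(1, 101)))  # True: a permutation, no duplicates
```

Unlike the hit-and-miss approach, this never retries: every value is assigned exactly once in a single pass.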

sqlite subqueries with group_concat as columns in select statements

I have two tables: one contains a list of items, called watch_list, with some important attributes, and the other is just a list of prices, called price_history. What I would like to do is group together 10 of the lowest prices into a single column with a group_concat operation and then create a row with the item attributes from watch_list along with the 10 lowest prices for each item in watch_list.
First I tried joins, but then I realized that the operations were happening in the wrong order, so there was no way I could get the desired result with a join operation. Then I tried the obvious thing and just queried price_history for every row in watch_list and glued everything together in the host environment, which worked but seemed very inefficient. Now I have the following query, which looks like it should work but isn't giving me the results that I want. I would like to know what is wrong with the following statement:
select w.asin,w.title,
(select group_concat(lowest_used_price) from price_history as p
where p.asin=w.asin limit 10)
as lowest_used
from watch_list as w
Basically I want the LIMIT operation to happen before group_concat does anything, but I can't think of a SQL statement that will do that.
Figured it out. As somebody once said, "All problems in computer science can be solved by another level of indirection," and in this case an extra SELECT subquery did the trick:
select w.asin, w.title,
       (select group_concat(lowest_used_price)
          from (select lowest_used_price from price_history as p
                 where p.asin = w.asin limit 10)) as lowest_used
from watch_list as w
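A quick check using Python's sqlite3 module, with made-up sample data. One caveat: the subquery needs an ORDER BY if you want the ten lowest prices specifically; a bare LIMIT returns ten arbitrary rows, so the sketch below adds one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE watch_list(asin TEXT, title TEXT);
    CREATE TABLE price_history(asin TEXT, lowest_used_price INTEGER);
    INSERT INTO watch_list VALUES ('B000', 'Widget');
""")
# 15 prices, inserted high-to-low so insertion order can't
# accidentally match the desired order.
con.executemany("INSERT INTO price_history VALUES ('B000', ?)",
                [(p,) for p in range(15, 0, -1)])

row = con.execute("""
    SELECT w.asin, w.title,
           (SELECT group_concat(lowest_used_price)
              FROM (SELECT lowest_used_price FROM price_history AS p
                     WHERE p.asin = w.asin
                     ORDER BY lowest_used_price  -- keep only the *lowest* 10
                     LIMIT 10)) AS lowest_used
      FROM watch_list AS w
""").fetchone()
print(row[2])  # '1,2,3,4,5,6,7,8,9,10'
```

The inner-most subquery trims the correlated rows first, and group_concat then operates on only those ten.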
