Separate indexes for SELECT optimization - SQLite

I have a table 'data' with columns:
id (auto_increment), id_device (integer), timestamp (numeric)
I need to execute these selects:
select * from data where id<10000000 and id_device=345
or
select * from data where id<10000000 and id_device=345 and timestamp>'2017-01-01 10:00:00' and timestamp<'2017-03-01 08:00:00'
For the first select:
Is it better to make a separate index for "id" and a separate one for "id_device"?
Or is it better for performance to make one index like INDEX (id, id_device)?
For the second select:
Is it better to make a separate index for "id", a separate one for "id_device", and a separate one for "timestamp"?
Or is it better for performance to make one index like INDEX (id, id_device, timestamp)?

My short answer: it depends on your data.
Longer answer: if id_device=345 matches fewer rows than id<10000000, then id_device should be listed first in a multi-column index: ...ON data(id_device, id). Also, if select speed is more important to you/your users than insert/update/delete speed, then why not add a lot of indexes and leave it to the query planner to choose which ones to use:
create index i01_data on data(id);
create index i02_data on data(id_device);
create index i03_data on data(timestamp);
create index i04_data on data(id, id_device);
create index i05_data on data(id_device, id);
create index i06_data on data(timestamp, id);
create index i07_data on data(id, timestamp);
create index i08_data on data(id_device, timestamp);
create index i09_data on data(timestamp, id_device);
create index i10_data on data(id, id_device, timestamp);
create index i11_data on data(id_device, id, timestamp);
create index i12_data on data(id_device, timestamp, id);
create index i13_data on data(id, timestamp, id_device);
create index i14_data on data(timestamp, id_device, id);
create index i15_data on data(timestamp, id, id_device);
The query planner algorithms in your database (SQLite has them too) usually make good choices about that, especially if you run the SQLite ANALYZE command periodically or after changing a lot of data. The downside of having many indexes is slower inserts and deletes (and updates if they involve indexed columns) plus more disk/memory usage.
Use EXPLAIN QUERY PLAN on your speed-critical SQL to check which indexes are used and which are not. If an index is never used, or is only used in queries that are fast anyway without it, then you can drop it.
Also be aware that newer versions of your database (SQLite, Oracle, PostgreSQL) can have newer query planner algorithms which are better for most SELECTs but can get worse for some. Realistic tests on realistic datasets are the best way to tell. Which indexes to create is not an exact science, and there are no definitive rules that fit all cases.
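A minimal sketch of that workflow in the sqlite3 shell, using the queries from the question (the plan output varies by SQLite version):
-- refresh the planner's statistics after bulk changes
ANALYZE;
-- ask the planner which index it picks for each query
EXPLAIN QUERY PLAN
SELECT * FROM data WHERE id < 10000000 AND id_device = 345;
EXPLAIN QUERY PLAN
SELECT * FROM data
WHERE id < 10000000 AND id_device = 345
  AND timestamp > '2017-01-01 10:00:00' AND timestamp < '2017-03-01 08:00:00';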

Related

Querying on Global Secondary Indexes with usage of the contains operator

I've been reading the DynamoDB docs and was unable to understand whether it makes sense to query a Global Secondary Index using the 'contains' operator.
My problem is as follows: my DynamoDB document has a list of embedded objects, and every object has a 'code' field which is unique:
{
  "entities": [
    {"code": "entity1Code", "name": "entity1Name"},
    {"code": "entity2Code", "name": "entity2Name"}
  ]
}
I want to be able to get all documents that contain entities with entity.code = X.
For this purpose I'm considering adding a Global Secondary Index that would contain all entity codes present in the current document, separated by commas. So the example above would look like:
{
  "entities": [
    {"code": "entity1Code", "name": "entity1Name"},
    {"code": "entity2Code", "name": "entity2Name"}
  ],
  "entitiesGlobalSecondaryIndex": "entityCode1,entityCode2"
}
And then I would like to apply a filter expression on entitiesGlobalSecondaryIndex, something like: entitiesGlobalSecondaryIndex contains entityCode1.
Would this be efficient, or does using a global secondary index make no sense this way, so that DynamoDB will simply check the condition against every document, which is similar to a scan?
Any help is much appreciated,
Thanks
The contains operator of a query cannot be run on a partition key. In order for a query to use any sort of operator (contains, begins_with, >, <, etc.) you must have a range attribute, aka your sort key.
You can very well set up a GSI with some value as your PK and this code as your SK. However, GSIs are replications of the table: there is a slight potential for the data in a GSI to lag behind the master copy. If the query you run against this GSI isn't very frequent, you're probably safe from that.
However, if you are trying to do this across the entire table at once, then it's no better than a scan.
If what you need is for a specific code to return all of its documents at once, then you could make a GSI with that code as the PK. If you add a date field as the SK of this GSI, it would even be time-sorted. Query against that code in that index and you'll get every single one of them.
Since you may have multiple codes, if there aren't too many per document, you could use a sparse index: if a document has an entity with code "AAAA", then you also give it an attribute named AAAA (or AAAAflag or something similar) that does not exist unless the entities list contains that code. A GSI on this AAAAflag attribute will then only contain documents that have that entity code and ignore all documents where the attribute is absent. This may work for you if you can also provide a good PK to keep the numbers well partitioned, and if you don't have too many codes.
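A sketch of such a document (the flag attribute names here are hypothetical):
{
  "entities": [
    {"code": "AAAA", "name": "someName"},
    {"code": "BBBB", "name": "otherName"}
  ],
  "AAAAflag": 1,
  "BBBBflag": 1
}
Documents whose entities list does not contain "AAAA" simply omit AAAAflag, so a GSI keyed on that attribute stays sparse.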
Filter expressions, by the way, are different from all of the above. Filter expressions are run on the data that would be returned, after it has already been read out of the table. This is useful if you have a multi-access-pattern setup but don't want a particular call to get all the documents associated with a particular PK, in the interest of keeping the data your code works with concise. A query with a filter expression still retrieves everything the query matches, but only presents what makes it past the filter.
If you are only querying against a particular PK at any given time and you want to know if it contains any entities of x, then a filter expression would work perfectly. Of course, this is only per PK and not for your entire table.
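As an illustration, a Query request of roughly this shape (table and key names are hypothetical) combines a key condition with such a filter; the filter is applied after the items are read:
{
  "TableName": "documents",
  "KeyConditionExpression": "pk = :pk",
  "FilterExpression": "contains(entitiesGlobalSecondaryIndex, :code)",
  "ExpressionAttributeValues": {
    ":pk": {"S": "somePartitionKeyValue"},
    ":code": {"S": "entityCode1"}
  }
}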
If all you need is numbers, then you could keep a count attribute on the document, or a meta document on that partition that contains these values and can be queried directly.
Lastly, and I have no idea whether this would work: if your entities attribute is a map type, you might very well be able to filter against the entity code, and maybe even with something like entities.code.contains(value) if it were an SK. But I do not know if that is possible.

How to introduce indexing to an SQLite query in Android?

In my Android application, I use Cursor c = db.rawQuery(query, null); to query data from a local SQLite database, and one of the query strings looks like the following:
SELECT t1.* FROM table t1
WHERE NOT EXISTS (
    SELECT 1 FROM table t2
    WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time
)
However, the query gets very slow when the database gets huge. I've been trying to introduce indexing to speed up the query, but so far without much success, so it would be great to have some help here; it's also hard to find examples of this for Android applications.
You can create a composite index for the columns start_time and stop_time:
CREATE INDEX idx_name ON table_name(start_time, stop_time);
You can read in The SQLite Query Optimizer Overview:
The ON and USING clauses of an inner join are converted into
additional terms of the WHERE clause prior to WHERE clause analysis
...
and:
If an index is created using a statement like this:
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
Then the index might be used if the initial columns of the index
(columns a, b, and so forth) appear in WHERE clause terms. The initial
columns of the index must be used with the = or IN or IS operators.
The right-most column that is used can employ inequalities.
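Applied to the composite index above, the rule works out like this (a sketch with made-up literals): both of the following can use idx_name, because start_time is constrained with = and the inequality falls on the right-most used column:
SELECT * FROM table_name WHERE start_time = '2020-01-01 08:00:00';
SELECT * FROM table_name WHERE start_time = '2020-01-01 08:00:00' AND stop_time > '2020-01-01 09:00:00';
-- whereas this normally cannot, because the left-most index column is not constrained with =:
SELECT * FROM table_name WHERE stop_time > '2020-01-01 09:00:00';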
You may have to uninstall the app from the device so that the db is deleted and recreated when you rerun, or increase the version number of the db so that you can create the index in the onUpgrade() method.
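Either way, the statement itself can be made idempotent, so a sketch like this (same names as above) is safe to execute from both onCreate() and onUpgrade():
CREATE INDEX IF NOT EXISTS idx_name ON table_name(start_time, stop_time);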

How do I create an expression-based index using the instr() function?

With SQLite 3, I am trying to do a query like:
select *
from data
where instr(filepath,'.txt') != 0
And I want to index this query to speed it up.
I tried to create an index like:
create index data_instr_filepath
on data(instr(filepath,'.txt'));
However, "explain query plan" still shows that I'm doing a table scan.
Is this doable in sqlite? The examples I have found for doing expression-based indexes seems to be limited to the length function and multiplying two columns together.
UPDATE:
Thanks to Mike's answer, I refactored my query to not use inequalities and was able to create indexes that it hits. Below are the indexes I ended up using:
create index data_instr_filepath_txt on data(instr(filepath,'.txt'));
create index data_instr_filepath_substr on data(substr(filepath,0,instr(filepath,'.')));
The reason is that an index will likely not be used for an inequality, as per:
Similarly, index columns will not normally be used (for indexing
purposes) if they are to the right of a column that is constrained
only by inequalities. The SQLite Query Optimizer Overview
You can try forcing the use of an index by using INDEXED BY. However, this will not work in your situation, because the above flags the index as not usable (the query will still work).
e.g.
EXPLAIN QUERY PLAN
SELECT * FROM data INDEXED BY data_instr_filepath
WHERE instr(filepath,'.txt') != 0
results in:
no query solution
Time: 0s
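By contrast, a predicate that uses the indexed expression verbatim with an equality can use the expression index; SQLite only considers an expression index when the query spells the expression exactly as it appears in CREATE INDEX. A hypothetical query against the second index from the update above:
EXPLAIN QUERY PLAN
SELECT * FROM data
WHERE substr(filepath,0,instr(filepath,'.')) = 'report';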

SQLite: re-arrange physical position of rows inside file

My problem is that my queries are too slow.
I have a fairly large SQLite database. The table is:
CREATE TABLE results (
    timestamp TEXT,
    name TEXT,
    result FLOAT
)
(I know that timestamps as TEXT is not optimal, but please ignore that for the purposes of this question. I'll have to fix that when I have the time)
"name" is a category. This calculation holds the results of a calculation that has to be done at each timestamp for all "name"s. So the inserts are done at equal-timestamps, but the querys will be done at equal-names (i.e. I want given a name, get its time series), like:
SELECT timestamp,result WHERE name='some_name';
Now, the way I'm doing things is to have no indexes, calculate all the results, then create an index on name with CREATE INDEX index_name ON results (name). The reasoning is that I don't need the index while inserting, but having it will make queries on name really fast.
But it's not. The database is fairly large. It has about half a million timestamps, and for each timestamp I have about 1000 names.
I suspect, although I'm not sure, that the reason it's slow is that even though I've indexed the names, they're still scattered all around the physical disk. Something like:
timestamp1,name1,result
timestamp1,name2,result
timestamp1,name3,result
...
timestamp1,name999,result
timestamp1,name1000,result
timestamp2,name1,result
timestamp2,name2,result
etc...
I'm sure this is slower to query with name='some_name' than if the rows were physically ordered as:
timestamp1,name1,result
timestamp2,name1,result
timestamp3,name1,result
...
timestamp499997,name1000,result
timestamp499998,name1000,result
timestamp499999,name1000,result
timestamp500000,name1000,result
etc...
So, how do I tell SQLite that the order in which I'd like the rows on disk isn't the one they were written in?
UPDATE: I'm further convinced that the slowness of a select with such an index comes exclusively from non-contiguous disk access. Doing SELECT * FROM results WHERE name=<something_that_doesnt_exist> immediately returns zero results. This suggests that it's not finding the names that is slow; it's actually reading them from the disk.
Normal SQLite tables have, as their primary key, a 64-bit integer (known as rowid and a few other aliases). That determines the order in which rows are stored in a B*-tree (which puts all actual data in leaf node pages). You can change this with a WITHOUT ROWID table, but that requires an explicit primary key, which is then used to place rows in a B-tree. So if every row's (name, timestamp) columns form a unique value, that's a possibility that will leave all rows with the same name on a small set of pages instead of scattered all over.
You'd want the composite PK to be in that order if you're searching for a particular name most of the time, so something like:
CREATE TABLE results (
timestamp TEXT
, name TEXT
, result REAL
, PRIMARY KEY (name, timestamp)
) WITHOUT ROWID
(And of course not bothering with a second index on name.) The tradeoff is that inserts are likely to be slower as the chances of needing to split a page in the B-tree go up.
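With that layout, the typical query should also come back already sorted, because rows sharing a name are stored contiguously in (name, timestamp) order:
SELECT timestamp, result
FROM results
WHERE name = 'some_name'
ORDER BY timestamp; -- satisfied by the PK order, no separate sort step needed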
Some pragmas worth looking into to tune things:
cache_size
mmap_size
optimize (After creating your index; also consider building SQLite with SQLITE_ENABLE_STAT4.)
Since you don't have an INTEGER PRIMARY KEY, consider VACUUM after deleting a lot of rows if you ever do that.
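A sketch of how those pragmas might be set; the values here are illustrative, not recommendations:
PRAGMA cache_size = -200000;   -- negative means KiB, so roughly 200 MB of page cache
PRAGMA mmap_size = 1073741824; -- allow up to 1 GiB of memory-mapped I/O
PRAGMA optimize;               -- run periodically, e.g. just before closing the connection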

Reason to use anything other than RecId as a clustered index

Is there any reason to use an index other than RecId (SurrogateKey in AX2012) as the clustered index?
Confirmed by a quick Google search (*), one should consider at least 4 criteria when deciding on clustered indexes:
Index must be unique.
Index must be narrow (As few fields as possible - since these would be copied to every other index).
Index must be static (As updating the index field value(s) will cause SQL Server to physically move the record to a new location)
Index must be ordered (Ascending / Descending).
RecId adheres to all of the above, in a better way than any index you can create yourself. Any index you create yourself will violate at least the 2nd and/or the 4th, since it would automatically include DataAreaId.
What I think...
Could it be that the option to set this is just a legacy property from AX3.0 or lower, and that its use could be deprecated now?
*TechNet SQL Server Index Design Guide and Effective Clustered Indexes
While RecId is a good choice, you can make a shorter key, say an int, on a global table (SaveDataPerCompany = No).
Access patterns matter: if you often access your customers by account number, you might as well store the records in that order.
Also, if you only have one index, as is often the case for group and parameter tables, you are not punished for having a longer key; it will need storage somewhere anyway.
See also What do Clustered and Non clustered index actually mean?
