SQLite: Sorting a database by a timestamp

This is my first time asking a question, so bear with me, and thanks in advance for any response I get.
I am using sqlite3 on a MacBook Pro.
Every record in my database has a timestamp in the form YYYY-MM-DD HH:MM:SS, and I need to sort the entire database by these timestamps. The closest answer I have found is SELECT * FROM Table ORDER BY date(dateColumn) DESC Limit 1 from SQLite Order By Date, but this returns only the most recent date. I would love to be able to adapt it, but I am just learning SQLite and can't figure out how to do so.

Change the limit to the number of rows you want:
SELECT * FROM Table ORDER BY dateColumn DESC LIMIT 10000000;
you can figure out how many rows you have using
SELECT count(*) FROM Table;
and give a limit greater than that number. Beware: if you skip the limit and simply run
SELECT * FROM Table ORDER BY dateColumn DESC;
some client tools cap how many rows they display by default, so depending on your setup you might not see every row in that tool, even though SQLite itself returns them all.

When you don't want a limit, omit it.
Please note that it is not necessary to call the date function:
SELECT * FROM MyTable ORDER BY dateColumn;
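This works because timestamps stored as YYYY-MM-DD HH:MM:SS text compare correctly as plain strings. A quick check with two made-up timestamps:
SELECT '2023-01-02 00:00:00' > '2023-01-01 23:59:59';   -- returns 1 (the later timestamp compares greater)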

Just leave off the "Limit 1". The query means: SELECT * (the star means return all the columns) FROM Table (the table name you enter here), ORDER BY date(dateColumn) (the sort order, where you put your date column name), DESC (descending, i.e. backwards sort; leave this off if you want ascending, aka forward, sort), and Limit 1 (only return the first record in the record set).
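Putting that together, the full listing sorted newest-first, based on the query from the question with the LIMIT removed, would be:
SELECT * FROM Table ORDER BY date(dateColumn) DESC;
As noted above, ORDER BY dateColumn alone works too, and it also keeps the time-of-day part in the ordering.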

Related

Sqlite order of query

I'm running this query on a SQLite db and it looks like it's working fine.
SELECT batterij, timestamp FROM temphobbykamer WHERE nodeid = 113 AND timestamp >= 1527889336634 AND timestamp <= 1530481336634 AND ROWID % 20 = 0
But can I be sure that the query is handled in the correct order?
It must find all records from node 113 between time A and time B, and from that selection I only want every 20th record.
I can imagine that if the query were evaluated in a different order, say taking every 20th record between time A and B first and then selecting the node 113 records from that selection, the response would be different.
When no ORDER BY is specified, the order is undefined. In practice SQLite will typically return rows in ROWID order when you haven't specified anything else, but to be sure you get consistent results you should specify ORDER BY ROWID.
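For example, appending an explicit ordering to the query from the question:
SELECT batterij, timestamp FROM temphobbykamer
WHERE nodeid = 113 AND timestamp >= 1527889336634 AND timestamp <= 1530481336634
  AND ROWID % 20 = 0
ORDER BY ROWID;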

Getting a range of tuples from an ordered SQLite table

First I'd like to apologize if the topic seems vague; I always have a hard time framing them succinctly. That done, I'll get into it.
Suppose I have a database table that looks like the following:
CREATE TABLE The_table(
    item_id INTEGER PRIMARY KEY ASC AUTOINCREMENT,
    item TEXT);
Now, I have a pretty basic query that will get items from said table and order them:
SELECT *
FROM The_table
ORDER BY x;
where x could be either item_id or item. I can guarantee that both fields are order-able. My question is this:
Is there a way to modify the query I gave to get a range of the ordered elements: say, from the 20th element to the 40th element (after the table has been ordered), or something similar?
Any help would be appreciated.
Thanks,
Yes - it's called "between"
SELECT *
FROM The_Table
WHERE item_id BETWEEN 20 AND 40
This does exactly what it says: it looks for values between (and inclusive of) the two numbers supplied. Very useful for finding ranges; it works in reverse too (i.e. NOT BETWEEN).
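For example, the inverse of the query above, again using the question's table, would be:
SELECT *
FROM The_Table
WHERE item_id NOT BETWEEN 20 AND 40;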
If you want a specific row or group of rows (as your updated question suggests) after sorting you can use the LIMIT clause to select a range of entries
SELECT *
FROM The_Table
LIMIT 20, 20
Using LIMIT this way, the first number is the offset (how many rows of the result to skip) and the second is how many rows to return from that point. This statement skips the first 20 rows and returns the next 20.
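The same range can be written with the (arguably clearer) LIMIT ... OFFSET form. Note that for the range to be meaningful you still need an ORDER BY; here item is used as one possible sort column from the question:
SELECT *
FROM The_Table
ORDER BY item
LIMIT 20 OFFSET 20;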

Select older date

I have a database limited to n records. If a new record has to be inserted and there's no space, I want to delete the oldest one; mind that there could be more than one record with the same date, in which case I just remove the first one.
Is it possible to achieve something like this in SQLite, which doesn't have a dedicated date type?
First of all, to be able to sort your records by date, you have to store them in a sortable format such as YYYYMMDD or YYYYMMDDHHmm.
Now, to get one of the oldest records sharing the same date, you can do this (note that the oldest date is the minimum, not the maximum):
SELECT * FROM URTABLE
WHERE LAST_UPDATE_DATE = (SELECT MIN(LAST_UPDATE_DATE) FROM URTABLE)
LIMIT 1;
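A minimal sketch of the corresponding delete, assuming URTABLE is an ordinary rowid table and rowid is an acceptable tie-breaker when several records share the oldest date:
DELETE FROM URTABLE
WHERE rowid = (SELECT rowid FROM URTABLE
               ORDER BY LAST_UPDATE_DATE ASC, rowid ASC   -- oldest date first, rowid breaks ties
               LIMIT 1);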

Sqlite Query Optimization (using Limit and Offset)

Following is the query that I use for getting a fixed number of records from a database with millions of records:
select * from myTable LIMIT 100 OFFSET 0
What I observed is, if the offset is very high like say 90000, then it takes more time for the query to execute. Following is the time difference between 2 queries with different offsets:
select * from myTable LIMIT 100 OFFSET 0      -- execution time is less than 1 sec
select * from myTable LIMIT 100 OFFSET 95000  -- execution time is almost 15 secs
Can anyone suggest how to optimize this query? Ideally, the query execution time should be equally fast for any number of records I wish to retrieve, from any OFFSET.
Newly added:
The actual scenario is that I have a database with more than 1 million records. But since it's an embedded device, I just can't do "select * from myTable" and then fetch all the records from the query; my device crashes. Instead, I keep fetching records batch by batch (batch size = 100 or 1000 records) with the query mentioned above. But as I mentioned, it becomes slow as the offset increases. So my ultimate aim is to read all the records from the database, and since I can't fetch them all in a single execution, I need some other efficient way to achieve this.
As JvdBerg said, indexes are not used with LIMIT/OFFSET.
Simply adding 'ORDER BY indexed_field' will not help either.
To speed up pagination you should avoid LIMIT/OFFSET and use a WHERE clause instead. For example, if your primary key field is named 'id' and has no gaps, then your code above can be rewritten like this:
SELECT * FROM myTable WHERE id>=0 AND id<100          -- very fast!
SELECT * FROM myTable WHERE id>=95000 AND id<95100    -- as fast as the previous line!
When you run a query with an offset of 95000, all of the preceding 95000 records have to be read and skipped. You should create an index on the table and use it for selecting records.
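For example, if the column you page on is not already the INTEGER PRIMARY KEY (which is indexed automatically as the rowid), an index along these lines lets SQLite seek directly to the requested range; the index name here is hypothetical:
CREATE INDEX idx_myTable_id ON myTable(id);   -- hypothetical name; id assumed to be the paging column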
As @user318750 said, if you know you have a contiguous index, you can simply use
select * from Table where index >= %start and index < %(start+size)
However, those cases are rare. If you don't want to rely on that assumption, use a sub-query, for example using rowid, which is always indexed,
select * from Table where rowid in (
select rowid from Table limit %size offset %start)
This speeds things up especially if you have "fat" rows (e.g. that contain blobs).
If maintaining the record order is important (it usually isn't), you need to order the indices first:
select * from Table where rowid in (
select rowid from Table order by rowid limit %size offset %start)
A related trick for jumping to a single row at a large offset is to resolve just its rowid in a subquery and then fetch the row by rowid:
select * from data where rowid = (select rowid from data limit 1 offset 999999);
With SQLite, you don't need to get all rows returned at once in a big fat array; you can get called back for every row. This way, you can process the results as they come in, which should address both your crashing and performance issues.
I guess you're not using C as you would already be using a callback, but this technique should be available in any other language.
Javascript example (from https://www.npmjs.com/package/sqlite3):
db.each("SELECT rowid AS id, info FROM lorem", function(err, row) {
    console.log(row.id + ": " + row.info);
});

Does a multi-column index work for single column selects too?

I've got (for example) an index:
CREATE INDEX someIndex ON orders (customer, date);
Does this index only accelerate queries where both customer and date are used, or does it also accelerate single-column queries like this one?
SELECT * FROM orders WHERE customer > 33;
I'm using SQLite.
If the answer is yes, why is it possible to create more than one index per table?
Yet another question: How much faster is a combined index compared with two separate indexes when you use both columns in a query?
marc_s has the correct answer to your first question: the first key in a multi-key index can work just like a single-key index, but any subsequent keys will not.
How much faster the composite index is depends on your data and how you structure your index and query, but the difference is usually significant. The indexes essentially allow SQLite to do a binary search on the fields.
Using the example you gave if you ran the query:
SELECT * FROM orders WHERE customer > 33 AND date > 99
SQLite would first get all matching rows using a binary search on the index for customer > 33. Then it would do a binary search within only those results looking for date > 99.
If you did the same query with two separate indexes on customer and date, SQLite would have to binary search the whole table twice, first for the customer and again for the date.
So how much of a speed increase you will see depends on how you structure your index with regard to your query. Ideally, the first field in your index and your query should be the one that eliminates the most possible matches, as that will give the greatest speed increase by greatly reducing the amount of work the second search has to do.
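For example, if in your data the date predicate eliminates far more rows than the customer predicate (an assumption made here purely for illustration), an index with date first would serve the query above better:
CREATE INDEX dateCustomerIndex ON orders (date, customer);   -- name and column order chosen for illustration only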
For more information see this:
http://www.sqlite.org/optoverview.html
I'm pretty sure this will work, yes - it does in MS SQL Server anyway.
However, this index doesn't help you if you need to select on just the date, e.g. a date range. In that case, you might need to create a second index on just the date to make those queries more efficient.
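For instance, such a second index might look like this (the index name is illustrative):
CREATE INDEX dateIndex ON orders (date);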
Marc
I commonly use combined indexes to sort through data I wish to paginate or request "streamily".
Assuming a customer can make more than one order, customers 0 through 11 exist, and there are several orders per customer all inserted in random order, I want to sort a query by customer number followed by the date. You should also sort by the id field last, to break ties where a customer has several identical dates (even if that may never happen).
sqlite> CREATE INDEX customer_asc_date_asc_index_asc ON orders
(customer ASC, date ASC, id ASC);
Get page 1 of a sorted query (limited to 10 items):
sqlite> SELECT id, customer, date FROM orders
ORDER BY customer ASC, date ASC, id ASC LIMIT 10;
2653|1|1303828585
2520|1|1303828713
2583|1|1303829785
1828|1|1303830446
1756|1|1303830540
1761|1|1303831506
2442|1|1303831705
2523|1|1303833761
2160|1|1303835195
2645|1|1303837524
Get the next page:
sqlite> SELECT id, customer, date FROM orders WHERE
(customer = 1 AND date = 1303837524 and id > 2645) OR
(customer = 1 AND date > 1303837524) OR
(customer > 1)
ORDER BY customer ASC, date ASC, id ASC LIMIT 10;
2515|1|1303837914
2370|1|1303839573
1898|1|1303840317
1546|1|1303842312
1889|1|1303843243
2439|1|1303843699
2167|1|1303849376
1544|1|1303850494
2247|1|1303850869
2108|1|1303853285
And so on...
Having the indexes in place reduces the index scanning you would otherwise get from an OFFSET coupled with a LIMIT: with OFFSET, the query time gets longer and the drives seek harder the higher the offset goes. Using this method eliminates that.
Using this method is advised if you plan on joining data later but only need a limited set of data per request. Join against a SUBSELECT as described above to reduce memory overhead for large tables.
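A sketch of that pattern, assuming a hypothetical customers table keyed by the customer id (only orders appears in the original example):
SELECT page.id, page.customer, page.date, c.name
FROM (SELECT id, customer, date FROM orders
      ORDER BY customer ASC, date ASC, id ASC LIMIT 10) AS page
JOIN customers AS c ON c.id = page.customer;   -- customers and c.name are hypothetical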
