Deleting all records from 201 to infinity in JDOQL - jdo

Working on a logging feature, I only wish to keep the last 200 records in the datastore.
How can I do this in JDOQL?
If I were using SQL it would be as easy as
DELETE FROM MyTable OFFSET 201 ORDER BY myDate DESC,
but I'm having a hard time finding something similar for JDOQL.

Query q = pm.newQuery("SELECT FROM mydomain.MyClass ORDER BY myDate DESC RANGE 201");
q.deletePersistentAll();
Looks very similar to me
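For reference, here is a minimal sketch of the same idea using the programmatic javax.jdo.Query API, assuming the class and field names from the question (mydomain.MyClass, myDate). setRange is zero-based, so skipping the first 200 rows keeps the 200 newest records:
Query q = pm.newQuery(MyClass.class);
q.setOrdering("myDate DESC");
// rows 0-199 (the 200 newest) fall outside the range and are kept;
// everything from index 200 onwards is deleted
q.setRange(200, Long.MAX_VALUE);
long deleted = q.deletePersistentAll();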

Related

Get top records by latest date in Azure CosmosDB

I have a table like this:
Assume there are lots of Names (e.g., E, F, G, H, I, etc.) and their respective Date and Produced Items in this table. It's a massive table, so I want to write an optimised query.
In this, I want to query the latest A,B,C,D records.
I was using the following query:
SELECT * FROM c WHERE c.Name IN ('A','B','C','D') ORDER BY c.Date DESC OFFSET 0 LIMIT 4
But the problem with this query is that, since I'm ordering by Date, the latest 4 records I get are:
I want to get this result:
Please help me in modifying the query. Thanks.

Sqlite order of query

I'm running this query on a SQLite db and it looks like it's working fine.
SELECT batterij ,timestamp FROM temphobbykamer WHERE nodeid= 113 AND timestamp >= 1527889336634 AND timestamp <= 1530481336634 AND ROWID % 20 =0
But can I be sure that the query is handled in the correct order?
It must find all records from node 113 between time A and B, and from that selection I only want every 20th record.
I can imagine that if the order were different, i.e. if it first took every 20th record between time A and B and then selected the node 113 records from that result, the response would be different.
When no ORDER BY is specified, the order of the returned rows is undefined. However, SQLite will typically return them in ROWID order since you haven't specified anything else. Note that the WHERE conditions are combined with AND and evaluated per row, so which rows are returned does not depend on any ordering; only the order in which they come back does. To make sure you get consistent results, you should specify ORDER BY ROWID.
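Applied to the query from the question, that simply means appending the ORDER BY (same table and column names as above):
SELECT batterij, timestamp FROM temphobbykamer
WHERE nodeid = 113
  AND timestamp >= 1527889336634 AND timestamp <= 1530481336634
  AND ROWID % 20 = 0
ORDER BY ROWID;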

SQLite, Sorting a data base by a timestamp

This is my first time asking a question, so bear with me, and thanks in advance for any responses.
I am using sqlite3 on a MacBook Pro.
Every record in my database has a timestamp in the form YYYY-MM-DD HH:MM:SS, and I need to sort the entire database by the timestamps. The closest answer I have found is SELECT * FROM Table ORDER BY date(dateColumn) DESC Limit 1 from SQLite Order By Date, but this only returns the most recent date. I would love to be able to apply this, but I am just learning sqlite and can't figure out how to do so.
Change the limit to the number of rows you want:
SELECT * FROM Table ORDER BY dateColumn DESC Limit 10000000;
you can figure out how many rows you have using
SELECT count(*) FROM Table;
and give a limit greater than that number. Beware: If you want all rows you should really put a limit, because if you don't put a limit and simply do
SELECT * FROM Table ORDER BY dateColumn DESC;
it may limit the output to a certain number of rows depending on your system configuration, so you might not get all rows.
When you don't want a limit, omit it.
Please note that it is not necessary to call the date function:
SELECT * FROM MyTable ORDER BY dateColumn;
Just leave off the "Limit 1". The query reads as follows: "SELECT *" (the star means return all columns), "FROM Table" (the name of your table), "ORDER BY date(dateColumn)" (the sort key, with your date column name), "DESC" (descending sort; leave this off if you want ascending), and "Limit 1" (return only the first record of the result set).

Select older date

I have a database limited to n records. If a new record has to be inserted and there's no space, I want to delete the oldest one; note that there could be more than one record with the same date, in which case I just remove the first one.
Is it possible to achieve something like this in SQLite, which doesn't have a dedicated date type?
First of all, to be able to sort your records by date, you have to insert them in a sortable format such as YYYYMMDD or YYYYMMDDHHmm.
Now, to get the first of the oldest ones (those sharing the minimum date), you can do this:
SELECT * FROM URTABLE WHERE
LAST_UPDATE_DATE = (SELECT MIN(LAST_UPDATE_DATE) FROM URTABLE) LIMIT 1
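To actually remove that row when the table is full, a minimal sketch (assuming the table has an implicit rowid, i.e. it was not created WITHOUT ROWID; the rowid tiebreak picks the first-inserted of any rows sharing the oldest date):
DELETE FROM URTABLE
WHERE rowid = (SELECT rowid FROM URTABLE
               ORDER BY LAST_UPDATE_DATE ASC, rowid ASC
               LIMIT 1);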

Sqlite Query Optimization (using Limit and Offset)

Following is the query that I use to get a fixed number of records from a database with millions of records:
select * from myTable LIMIT 100 OFFSET 0
What I observed is, if the offset is very high like say 90000, then it takes more time for the query to execute. Following is the time difference between 2 queries with different offsets:
select * from myTable LIMIT 100 OFFSET 0 //Execution Time is less than 1sec
select * from myTable LIMIT 100 OFFSET 95000 //Execution Time is almost 15secs
Can anyone suggest how to optimize this query? I mean, the query execution time should be the same, and fast, for any number of records I wish to retrieve from any OFFSET.
Newly added:
The actual scenario is that I have a database with more than 1 million records. But since it's an embedded device, I just can't do "select * from myTable" and then fetch all the records from the query; my device crashes. Instead, I keep fetching records batch by batch (batch size = 100 or 1000 records) using the query mentioned above. But as I mentioned, it becomes slow as the offset increases. So my ultimate aim is to read all the records from the database, but since I can't fetch them all in a single execution, I need some other efficient way to achieve this.
As JvdBerg said, indexes are not used for LIMIT/OFFSET.
Simply adding 'ORDER BY indexed_field' will not help either.
To speed up pagination you should avoid LIMIT/OFFSET and use a WHERE clause instead. For example, if your primary key field is named 'id' and has no gaps, then your code above can be rewritten like this:
SELECT * FROM myTable WHERE id>=0 AND id<100 //very fast!
SELECT * FROM myTable WHERE id>=95000 AND id<95100 //as fast as previous line!
By doing a query with an offset of 95000, all 95000 preceding records are processed and discarded first. You should create an index on the table and use that for selecting records.
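For reading the whole table batch by batch (as in the update above), a sketch of the same idea that keys on rowid, which SQLite indexes implicitly; :lastRowid is just a placeholder for the largest rowid returned by the previous batch:
-- first batch
SELECT rowid, * FROM myTable ORDER BY rowid LIMIT 100;
-- every following batch: resume after the last rowid already seen
SELECT rowid, * FROM myTable WHERE rowid > :lastRowid ORDER BY rowid LIMIT 100;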
As #user318750 said, if you know you have a contiguous index, you can simply use
select * from Table where index >= %start and index < %(start+size)
However, those cases are rare. If you don't want to rely on that assumption, use a sub-query, for example using rowid, which is always indexed,
select * from Table where rowid in (
select rowid from Table limit %size offset %start)
This speeds things up especially if you have "fat" rows (e.g. that contain blobs).
If maintaining the record order is important (it usually isn't), you need to order the indices first:
select * from Table where rowid in (
select rowid from Table order by rowid limit %size offset %start)
A related trick for fetching a single row at a large offset is to resolve only its rowid in the sub-query:
select * from data where rowid = (select rowid from data limit 1 offset 999999);
With SQLite, you don't need all rows returned at once in one big array; you can get called back for every row. This way, you can process the results as they come in, which should address both your crashing and performance issues.
I guess you're not using C, as you would already be using a callback, but this technique should be available in any other language.
JavaScript example (from https://www.npmjs.com/package/sqlite3):
db.each("SELECT rowid AS id, info FROM lorem", function(err, row) {
console.log(row.id + ": " + row.info);
});
