Keep a maximum of 5 rows in a table (Room/SQLite)

I want to store the last 5 searches a user performed in a SQLite table using Room. How can I always delete the oldest entry when there are more than 5 entries?
I don't want to add a date column and sort by date, because for privacy reasons I don't want to store the time at which a user performed a search.
I don't want to use an autoincrement id column, as it's theoretically limited by some maximum value that the ID can reach.
Could I maybe use the rowid? That is: check whether the number of entries in the table is larger than 5, then sort by rowid ascending and delete the first entry? Any other ideas?

I don't want to use an autoincrement id column, as it's theoretically limited by some maximum value that the ID can reach
By that logic one shouldn't use autoincrement values in a database at all. I doubt you really have so much data that a Long with its maximum value (9223372036854775807) wouldn't be enough, or that this value could ever be reached.
Well, as one more alternative, you can use the following scheme if there are 5 rows in your table and you have an int id field, for example:
1. Delete the row with the minimal id (I guess it would be 0).
2. Update all the remaining rows with one query, decreasing their id by 1:
@Query("update search_table set id = id - 1")
fun reorderData()
3. Insert the new row.
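Put together as a Room DAO, it could look like the sketch below; the entity, table, and method names (SearchEntry, search_table, insertLatest) are my own assumptions for illustration, and the whole sequence runs in a single transaction:

import androidx.room.*

@Entity(tableName = "search_table")
data class SearchEntry(
    @PrimaryKey val id: Int,   // dense ids 0..4, managed manually
    val query: String
)

@Dao
abstract class SearchDao {
    @Query("SELECT COUNT(*) FROM search_table")
    abstract fun count(): Int

    // Drop the row with the minimal id (the oldest search).
    @Query("DELETE FROM search_table WHERE id = (SELECT MIN(id) FROM search_table)")
    abstract fun deleteOldest()

    // Shift the remaining ids down so they stay dense (0..3).
    @Query("UPDATE search_table SET id = id - 1")
    abstract fun reorderData()

    @Insert
    abstract fun insert(entry: SearchEntry)

    @Transaction
    open fun insertLatest(query: String) {
        if (count() >= 5) {
            deleteOldest()
            reorderData()
        }
        // After the shift, count() is the next free id (at most 4).
        insert(SearchEntry(id = count(), query = query))
    }
}

Because the ids stay dense, the row with the minimal id is always the oldest search, and no timestamp ever needs to be stored.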

Related

Insert or ignore every column

I have a problem with a sqlite command.
I have a table with three columns: Id, user, number.
The id is sequential. Now, when I put a user and a number into my list, my app should check whether this user with this number already exists. The problem is that if I use a standard "insert or ignore" command, the Id column always differs, so I get a new entry every time.
So is it possible to compare just two of the three columns for equality?
Or do I have to use a temporary list in which only those two columns exist?
The INSERT OR IGNORE statement ignores the new record if it would violate a UNIQUE constraint.
Such a constraint is created implicitly for the PRIMARY KEY, but you can also create one explicitly for any other columns:
CREATE TABLE MyTable (
    ID integer PRIMARY KEY,
    User text,
    Number number,
    UNIQUE (User, Number)
);
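For example (a minimal sketch with made-up values):

INSERT OR IGNORE INTO MyTable(User, Number) VALUES ('alice', 7);
INSERT OR IGNORE INTO MyTable(User, Number) VALUES ('alice', 7); -- ignored: violates UNIQUE(User, Number)
SELECT COUNT(*) FROM MyTable; -- 1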
You shouldn't use INSERT OR IGNORE unless you are specifying the key, which you aren't, and which in my opinion you never should when your key is an identity (autonumber) column.
Based on User and Number making a record in your table unique, you don't need the id column and your primary key should be user,number.
If for some reason you don't want to do that, bearing in mind that in that case you are saying User,Number is not your uniqueness constraint, then something like
if not exists(Select 1 From MyTable Where user = 10 and Number = 15)
Insert MyTable(user,number) Values(10,15)
would do the job. I'm not a SQLite person, so you might have to fiddle with the syntax and wrap/escape your column names.
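In SQLite specifically, the same conditional insert can be written like this (a sketch, reusing the made-up values 10 and 15 from above):

INSERT INTO MyTable (User, Number)
SELECT 10, 15
WHERE NOT EXISTS (SELECT 1 FROM MyTable WHERE User = 10 AND Number = 15);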

How to design DynamoDB table to facilitate searching by time ranges, and deleting by unique ID

I'm new to DynamoDB - I already have an application where the data gets inserted, but I'm getting stuck on extracting the data.
Requirements:
1. There must be a unique table per customer
2. Insert documents into the table (each doc has a unique ID and a timestamp)
3. Get X number of documents based on timestamp (ordered ascending)
4. Delete individual documents based on unique ID
So far I have created a table with a composite key (S:id, N:timestamp). However, when I come to query it, I realise that since my id is unique and I can't do a wildcard search on it, I won't be able to extract a range of items...
So, how should I design my table to satisfy this scenario?
Edit: Here's what I'm thinking:
Primary index will be composite: (s:customer_id, n:timestamp), where the customer ID will be the same within a table. This will enable me to extract data based on a time range.
Secondary index will be hash (s: unique_doc_id) whereby I will be able to delete items using this index.
Does this sound like the correct solution? Thank you in advance.
You can satisfy the requirements like this:
Your primary key will be h:customer_id and r:unique_id. This makes sure all the elements in the table have different keys.
You will also have an attribute for timestamp and will have a Local Secondary Index on it.
You will use the LSI to satisfy requirement 3 and the BatchWriteItem API call to do batch deletes for requirement 4.
This solution doesn't require (1): all the customers can stay in the same table. (Heads up: there is a limit of 256 tables per account before you have to contact AWS.)
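A sketch of that table definition using the AWS SDK for Java v2 (callable from Kotlin); the table name, attribute names, index name, and throughput values are my own placeholders:

import software.amazon.awssdk.services.dynamodb.DynamoDbClient
import software.amazon.awssdk.services.dynamodb.model.*

fun createDocumentsTable(ddb: DynamoDbClient) {
    ddb.createTable(
        CreateTableRequest.builder()
            .tableName("Documents")
            .attributeDefinitions(
                AttributeDefinition.builder().attributeName("customer_id").attributeType(ScalarAttributeType.S).build(),
                AttributeDefinition.builder().attributeName("unique_id").attributeType(ScalarAttributeType.S).build(),
                AttributeDefinition.builder().attributeName("timestamp").attributeType(ScalarAttributeType.N).build()
            )
            // Primary key: hash = customer_id, range = unique_id,
            // so every document has a distinct key.
            .keySchema(
                KeySchemaElement.builder().attributeName("customer_id").keyType(KeyType.HASH).build(),
                KeySchemaElement.builder().attributeName("unique_id").keyType(KeyType.RANGE).build()
            )
            // LSI: same hash key, but sorted by timestamp, which allows
            // ordered range queries per customer (requirement 3).
            .localSecondaryIndexes(
                LocalSecondaryIndex.builder()
                    .indexName("by-timestamp")
                    .keySchema(
                        KeySchemaElement.builder().attributeName("customer_id").keyType(KeyType.HASH).build(),
                        KeySchemaElement.builder().attributeName("timestamp").keyType(KeyType.RANGE).build()
                    )
                    .projection(Projection.builder().projectionType(ProjectionType.ALL).build())
                    .build()
            )
            .provisionedThroughput(
                ProvisionedThroughput.builder().readCapacityUnits(5L).writeCapacityUnits(5L).build()
            )
            .build()
    )
}

Note that a Local Secondary Index can only be created together with the table, not added afterwards.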

SQLite - Selecting not indexed column in GROUP BY

I have a situation similar to the question below.
Mysql speed up max() group by
SELECT MAX(id) id, cid FROM table GROUP BY cid
To optimize the above query (shown in that question), creating an index on (cid, id) does the trick.
However, when I add a column that is not indexed to the SELECT, the query drastically slows down.
For example,
SELECT MAX(id) id, cid, newcolumn FROM table GROUP BY cid
If I create an index on (cid, id, newcolumn), the query time drops back to minimal. It seems I have to index all the columns I select when using GROUP BY.
Is there any way other than indexing all the columns to be selected?
When all the columns used in the query are part of the index (which is then called a covering index), SQLite can get all values from the index and does not need to access the table itself.
When adding a column that is not indexed, each record must be looked up in both the index and the table.
Furthermore, the order of the records in the table is unlikely to be the same as the order in the index, so the table's pages are not read in order, and are read multiple times, which means that caching will not work as well.
The newcolumn values must be read from either the table or an index; there is no other mechanism to store data.
tl;dr: no
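You can see the difference with EXPLAIN QUERY PLAN (a sketch; the table is renamed to t here because table is a keyword):

-- assuming a table t(cid, id, newcolumn) populated with data
CREATE INDEX idx_cid_id ON t(cid, id);

EXPLAIN QUERY PLAN SELECT MAX(id) id, cid FROM t GROUP BY cid;
-- reports "USING COVERING INDEX idx_cid_id": no table access needed

EXPLAIN QUERY PLAN SELECT MAX(id) id, cid, newcolumn FROM t GROUP BY cid;
-- reports only "USING INDEX idx_cid_id": each newcolumn value must be
-- fetched from the table itself, row by row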

Will the maximum number of rows decrease after records are deleted from a table, as the row id keeps incrementing?

This might be a beginner's question, but when testing my SQLite database I found that when I delete a row, the row id keeps incrementing when I insert a new row, and doesn't reuse, for instance, the row id of a deleted row. So what will happen if the row id reaches its maximum value while there are fewer rows in the table?
This is documented:
If the table has previously held a row with the largest possible ROWID, then new INSERTs are not allowed and any attempt to insert a new row will fail with an SQLITE_FULL error.
If you omit the AUTOINCREMENT keyword, IDs will still autoincrement, but can be reused if you delete the last row or if the values overflow:
If the largest ROWID is equal to the largest possible integer (9223372036854775807) then the database engine starts picking positive candidate ROWIDs at random until it finds one that is not previously used.
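For example (a sketch you can run in the sqlite3 shell; table names are made up):

-- Without AUTOINCREMENT: the largest rowid can be handed out again.
CREATE TABLE t1(id INTEGER PRIMARY KEY);
INSERT INTO t1(id) VALUES (9223372036854775807); -- largest possible rowid
INSERT INTO t1 DEFAULT VALUES;                   -- ok: picks a random unused rowid

-- With AUTOINCREMENT: the same situation is an error.
CREATE TABLE t2(id INTEGER PRIMARY KEY AUTOINCREMENT);
INSERT INTO t2(id) VALUES (9223372036854775807);
INSERT INTO t2 DEFAULT VALUES;                   -- fails with SQLITE_FULL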
When you use an auto-incrementing row number, you have to keep the largest possible value in mind; if your data could ever reach that limit, you would have to use a bigger data type. Usually an integer isn't exceeded, because a database designer must keep an eye on normalization.
And if your data really does get that big, you are stuck with the queries anyway; they will take a huge amount of time. SQLite is mainly intended for low-end devices, which are not capable of handling big data.

How can I insert a new table row into every other row in an existing table?

OK, I have a SQLite DB that has roughly 100 rows. It is kind of a strange thing to be trying to do, but I need to insert a new row between each of the existing rows.
I have been trying to use the Insert statement as follows, but haven't had any luck:
insert into t1(column1) values("hello") where id%2 == 0
So I'm basically trying to use the %-operator to tell me if the id is even or odd. For every even id number, I'd like to insert a new row.
What am I missing? What can I do differently? How can I insert a new row into every other row and have the index updated as well?
Thanks
Your question assumes that the rows have some kind of built-in order to them, and that you can insert rows between other rows. That's not true.
It is true that rows have an order on disk, and that the id column is usually assigned in order, but that's an implementation detail. When you perform a query, the database is free to return the rows in any order it chooses, unless you specify what you want with an ORDER BY clause.
Now, I'm assuming what you really want is to insert rows between the existing rows in id order. One way to get what you want would look like this:
UPDATE t1 SET id = id * 2;
INSERT INTO t1 (id, column1) SELECT id + 1, 'hello' FROM t1;
The UPDATE would double the ids of all the existing rows (so 1,2,3 becomes 2,4,6); then the INSERT would query t1 and use the result to insert a new set of rows with id values one greater than the existing ones (adding 3,5,7 between 2,4,6).
I haven't tested the above statements, so I don't know if they would work or if they require some extra trickery (like a temporary table) since we are querying and updating the same table in one statement. Also I may have made a syntax error.
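One concrete piece of trickery that avoids the collision (a sketch, assuming id is an INTEGER PRIMARY KEY): doubling ids in place can hit a UNIQUE violation (1 becomes 2 while 2 still exists), so route through negative values first.

UPDATE t1 SET id = -id * 2;  -- 1,2,3 -> -2,-4,-6 (no collisions possible)
UPDATE t1 SET id = -id;      -- -2,-4,-6 -> 2,4,6
INSERT INTO t1 (id, column1) SELECT id + 1, 'hello' FROM t1;  -- adds 3,5,7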
Don't consider the rows as pre-ordered in the database. A database will store them as they come in, or according to an index. It's your task to order them on retrieval (i.e. when you query for data) according to your needs.
