How to "update" the _id column in SQLite Database Browser - sqlite

The _id column in my database is an INTEGER PRIMARY KEY, so it is an auto-incrementing column.
The problem is that now I deleted a row, and the column didn't update the auto-incrementing number.
Is there a way to make the _id column update, so there wouldn't be holes in the sequence?
Thank you very much in advance.

No. This is not how it is intended to be used. Don't mess with the primary key! There will be gaps; the id is just a unique identifier.
If you need a gapless rank, you can compute one at query time:
select t.*, @rank := @rank + 1 as gapless_rank
from your_table t
cross join (select @rank := 0) r
order by id
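That query uses MySQL session variables. In SQLite (3.25 or later), the same gapless numbering can be produced with a window function; a minimal sketch, assuming the same your_table and id names:
select t.*,
       row_number() over (order by id) as gapless_rank
from your_table t;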

To get the nth ID from the table, use a query like this:
SELECT _id
FROM MyTable
ORDER BY _id
LIMIT 1
OFFSET n-1
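For example, to get the 3rd ID (n = 3), the offset becomes 2:
SELECT _id
FROM MyTable
ORDER BY _id
LIMIT 1
OFFSET 2;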

Related

Show first n rows sorted by one column but they should be unique by another column (SQLite, Android Room)

A simple select * from mytable returns the rows shown below. I don't know how to draw a table in a post, so I am adding an image instead.
As I mentioned in the question title:
(i) show first n rows sorted by one column (can be achieved using order by)
(ii) but they should be unique by another column (unique by collectionID column)
select * from mytable
order by lastAccessTime DESC;
This sorts the rows in descending order of lastAccessTime, as shown in the image below.
Now I want to filter these rows by collectionID, so that there is only 1 row per collectionID. I have added an image; the struck-through rows should be removed.
Also, only the first n rows (let's say 30) should be returned.
I am using Android Room ORM which uses SQLite but to get the desired result set I have to write the correct query.
I think you need a window function here, which will assign a row number per collectionID so that you can fetch only 1 row per collectionID. You may give this a try:
SELECT *
FROM (SELECT *,
             ROW_NUMBER() OVER (PARTITION BY collectionID ORDER BY ID DESC) AS RN
      FROM mytable) T
WHERE RN = 1
ORDER BY lastAccessTime DESC
LIMIT 30;
The key idea is to "filter" the data with one query that then serves as the source for another query. A window function can be used as in the other answer, but a basic sub-query is also sufficient:
SELECT *
FROM mytable
INNER JOIN
    (SELECT MAX(id) AS singleID, collectionID
     FROM mytable
     GROUP BY collectionID) AS filter
    ON mytable.id = filter.singleID
ORDER BY lastAccessTime DESC
LIMIT 30;

Does a PRIMARY KEY constraint defined at the table level guarantee AUTOINCREMENT and no value reuse?

Consider the following table definition:
CREATE TABLE names (
id INTEGER,
name TEXT NOT NULL,
PRIMARY KEY (id)
)
Does it guarantee that the id will be auto-incremented for every new insert AND that the values for deleted rows will not be reused?
I looked it up in the SQLite3 documentation but couldn't find the answer.
id INTEGER PRIMARY KEY on its own guarantees (requires) a unique integer value and, if no value is specifically assigned, will provide one, until the highest value reaches the largest allowed 64-bit signed integer (9223372036854775807); after that, an unused value may be found and applied.
With AUTOINCREMENT there is a guarantee (if not circumvented) of always providing a higher value, BUT if 9223372036854775807 is reached, an SQLITE_FULL error results instead of an unused number being allocated. That is the only difference from the point of view of which number will be assigned.
Neither guarantees a monotonically increasing value.
Without AUTOINCREMENT the calculation/algorithm is equivalent to
1 + max(rowid); if that value would be greater than 9223372036854775807, an attempt is made to find an unused (and therefore lower) value.
I've never seen anyone report a case where such a random unused value could not be found and assigned.
With AUTOINCREMENT the calculation/algorithm is
the greater of 1 + max(rowid) and 1 + the seq value stored for the table in sqlite_sequence (SELECT seq FROM sqlite_sequence WHERE name = 'the_table_name_the_rowid_is_being_assigned_to'); if the result would be greater than 9223372036854775807, an SQLITE_FULL error results.
Note that either way the max rowid may belong to a row whose insert is eventually rolled back, so gaps are still possible.
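That seq value can be inspected directly; a small sketch, assuming a table named mytable that was created with AUTOINCREMENT:
-- sqlite_sequence only tracks tables declared with AUTOINCREMENT
SELECT seq FROM sqlite_sequence WHERE name = 'mytable';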
The answer is perhaps best put as: it's best/recommended to use the id column solely for its intended purpose, efficiently identifying a row, and not as a means of handling other data requirements; if you do that, there is no need for AUTOINCREMENT (which has overheads).
In short
Does it guarantee that the id will be auto-incremented
NO
values for deleted rows will not be reused?
NO for the given code
for :-
CREATE TABLE names (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT NOT NULL)
again NO, because if 9223372036854775807 is reached an SQLITE_FULL error will result; otherwise YES.
So AUTOINCREMENT is only really relevant (if the id is used as expected/intended) once the id has reached 9223372036854775807.
Perhaps consider the following :-
DROP TABLE IF EXISTS table1;
DROP TABLE IF EXISTS table2;
CREATE TABLE IF NOT EXISTS table1 (id INTEGER PRIMARY KEY, somecolumn TEXT);
CREATE TABLE IF NOT EXISTS table2 (id INTEGER PRIMARY KEY AUTOINCREMENT, somecolumn TEXT);
INSERT INTO table1 VALUES (9223372036854775807,'blah');
INSERT INTO table2 VALUES (9223372036854775807,'blah');
INSERT INTO table1 (somecolumn) VALUES(1),(2),(3);
SELECT * FROM table1;
INSERT INTO table2 (somecolumn) VALUES(1),(2),(3);
This creates the two similar tables, the only difference being the use of AUTOINCREMENT. Each has a row inserted with the highest allowable value for the id column.
An attempt is then made to insert 3 rows where the id will be assigned by SQLite.
3 rows are inserted into the table without AUTOINCREMENT, but no rows are inserted when AUTOINCREMENT is used, as per :-
CREATE TABLE IF NOT EXISTS table1 (id INTEGER PRIMARY KEY, somecolumn TEXT)
> OK
> Time: 0.098s
CREATE TABLE IF NOT EXISTS table2 (id INTEGER PRIMARY KEY AUTOINCREMENT, somecolumn TEXT)
> OK
> Time: 0.098s
INSERT INTO table1 VALUES (9223372036854775807,'blah')
> Affected rows: 1
> Time: 0.094s
INSERT INTO table2 VALUES (9223372036854775807,'blah')
> Affected rows: 1
> Time: 0.09s
INSERT INTO table1 (somecolumn) VALUES(1),(2),(3)
> Affected rows: 3
> Time: 0.087s
SELECT * FROM table1
> OK
> Time: 0s
INSERT INTO table2 (somecolumn) VALUES(1),(2),(3)
> database or disk is full
> Time: 0s
The result of the SELECT for table1 will differ from run to run, because the three new ids are chosen at random from unused values.

SQLite auto increase a non-primary key

I have a table which has one primary key integer:
CREATE TABLE TBL (ID INTEGER PRIMARY KEY, ZID INTEGER)
The zid integer field must be incremented from the previous one found in the database.
I could do something like that:
INSERT INTO TBL (zid) VALUES ((SELECT MAX(zid) + 1 FROM TBL));
However, the value of that integer field will, at some point, reset to zero. Therefore I want to increment from the last entry, not necessarily the maximum in the entire table.
How can I do that? A trigger?
Thanks.
How about a query like:
SELECT zid + 1 FROM TBL ORDER BY id DESC LIMIT 1
With this select query you get only the value from the last row, plus 1.
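If you want to fold that lookup into the insert itself, one possible sketch (COALESCE handles the empty-table case; starting zid at 1 is an assumption):
INSERT INTO TBL (zid)
VALUES (COALESCE((SELECT zid + 1 FROM TBL ORDER BY id DESC LIMIT 1), 1));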

SQLITE fill value with unique random table

I want to create a table with a field that is unique and limited to a certain value. Let's say the limit is 100 and the table is full; I remove a random row, and when I create a new row it gets the value that was freed before.
It doesn't need to be the fastest thing in the world (the limit is quite small), I just want to implement it in a DB.
Any ideas?
Create one more column in the main table, say deleted (integer, 0 or 1). When you need to delete the row with a certain id, do not really delete it, but simply update deleted to 1:
UPDATE mytable SET deleted=1 WHERE id = <id_to_delete>
When you need to insert, find an id that can be reused:
SELECT id FROM mytable WHERE deleted LIMIT 1
If this query returns an empty result, use INSERT to create a new id. Otherwise, simply update that row:
UPDATE mytable SET deleted=0, name='blah', ... WHERE id=<id_to_reuse>
All queries reading from your main table should have a WHERE constraint with a NOT deleted condition:
SELECT * FROM mytable WHERE NOT deleted
If you add an index on deleted, this method should be fast even for a large number of rows.
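A minimal schema sketch of this approach (table and column names are illustrative):
CREATE TABLE mytable (
    id      INTEGER PRIMARY KEY,
    name    TEXT,
    deleted INTEGER NOT NULL DEFAULT 0   -- 0 = live, 1 = free for reuse
);
CREATE INDEX idx_mytable_deleted ON mytable(deleted);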
This solution does everything in a trigger, so you can just use a normal INSERT.
For the table itself, we use an autoincrementing ID column:
CREATE TABLE MyTable(ID INTEGER PRIMARY KEY, Name);
We need another table to store an ID temporarily:
CREATE TABLE moriturus(ID INTEGER PRIMARY KEY);
And the trigger:
CREATE TRIGGER MyTable_DeleteAndReorder
AFTER INSERT ON MyTable
FOR EACH ROW
WHEN (SELECT COUNT(*) FROM MyTable) > 100
BEGIN
    -- first, select a random record to be deleted, and save its ID
    DELETE FROM moriturus;
    INSERT INTO moriturus
        SELECT ID FROM MyTable
        WHERE ID <> NEW.ID
        ORDER BY random()
        LIMIT 1;
    -- then actually delete it
    DELETE FROM MyTable
    WHERE ID = (SELECT ID FROM moriturus);
    -- then change the just-inserted record to have that ID
    UPDATE MyTable
    SET ID = (SELECT ID FROM moriturus)
    WHERE ID = NEW.ID;
END;
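With the trigger in place, an ordinary insert is all that is needed; once the table holds more than 100 rows, each new insert displaces a random existing row and takes over its ID:
INSERT INTO MyTable(Name) VALUES ('some name');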

SQLite: Selecting the maximum corresponding value

I have a table with three columns as follows:
id INTEGER
name TEXT
value REAL
How can I select the value at the maximum id?
Get the records with the largest IDs first, then stop after the first record:
SELECT * FROM MyTable ORDER BY id DESC LIMIT 1
Just like in MySQL, you can use MAX(), e.g.:
SELECT MAX(id) AS member_id, name, value FROM YOUR_TABLE_NAME
Try this:
SELECT value FROM mytable WHERE id == (SELECT max(id) FROM mytable);
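For example, against a small hypothetical mytable the subquery form returns the value stored on the highest-id row:
CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT, value REAL);
INSERT INTO mytable (name, value) VALUES ('a', 1.5), ('b', 2.5);
-- returns 2.5, the value on the row with the largest id
SELECT value FROM mytable WHERE id = (SELECT MAX(id) FROM mytable);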
If you want to know the query syntax :
String query = "SELECT MAX(id) AS max_id FROM mytable";
