SQLite: How to renumber an auto-increment field? - sqlite

I have two different CSV files which I have merged and imported into a single table in a SQLite3 database. Each CSV file contained a column called ID. Since some of the IDs are duplicated across the CSV files, and this is a primary key field, I need a way to completely renumber the ID field for each row in the table.
The ID field is also an auto-increment field.
So, what I would like to do is run a SQL command or some other method that resets the ID for each row of the table to ensure uniqueness. For example, the ID field for the first row would be set to 1, the next to 2, and so on.
Note, it is not so important that it begin with 1; ensuring primary key uniqueness is the goal here, so it doesn't matter what number it starts at. There are also no foreign key relations, so that is not an issue.
Any suggestions much appreciated.

Okay, in my case, I figured out that it was easiest not to import the ID column. Rather, I imported everything else and then added an ID field of type auto-increment. Once I did that, everything was renumbered as I wanted.
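For anyone with the same issue, a minimal sketch of that approach (table and column names here are hypothetical):

CREATE TABLE merged (
    id INTEGER PRIMARY KEY AUTOINCREMENT,  -- fresh, unique ids
    name TEXT,
    value TEXT
);
-- id is omitted from the INSERT, so SQLite assigns 1, 2, 3, ... itself
INSERT INTO merged (name, value)
SELECT name, value FROM csv_import;        -- csv_import: staging table holding both CSVs
DROP TABLE csv_import;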

Related

PHPMyAdmin - Deleting rows from table with no unique column

This is really annoying; I've encountered this problem plenty of times when importing a database:
https://i.gyazo.com/8051625ceaa6f2e00212a134a96a485e.png
Because the table has no unique column, I can't delete rows. And because I can't delete rows, I can't assign a unique column either, since the rows with duplicate entries for that column (those rows with ID = 0) would have to go first.
I can't remember how I fixed this before. I have no idea how this problem even happens; I thought the wp_options table would have a unique key on the ID column by default.
Ah, sorry, the solution was really simple. PHPMyAdmin just prevents you from deleting rows through the GUI, but an SQL query to delete them still works. I deleted the duplicate rows by going into the SQL tab and running:
DELETE FROM wp_options WHERE option_id = '0';
I've also encountered this problem, and thought the same thing (NOTE: It's NOT just the wp_options table - you're going to have issues on other tables as well!)
The issue is the way the export / import is processed.
There's a subtle "autoincrement" checkbox somewhere in the phpMyAdmin export interface, under the "object creation" options.
NOTE: Once you've exported / imported, and find yourself dealing with this, the simplest way that I've come up with to solve the issue is:
Create a new column titled 'new_ID' and make it AUTO_INCREMENT.
Then,
a. either run a query to update the existing ID column to the new_ID value, OR
b. delete the existing ID column, rename 'new_ID' to the correct name, and add a PRIMARY KEY index (sketched below).
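A hedged sketch of option (b) for MySQL/MariaDB, using wp_options as the example (the column type is assumed to match the WordPress schema):

-- Every row gets a distinct new_ID as soon as the column is added
ALTER TABLE wp_options ADD COLUMN new_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE;
-- Drop the broken column, then rename new_ID into its place and promote it
ALTER TABLE wp_options DROP COLUMN option_id;
ALTER TABLE wp_options CHANGE new_ID option_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    ADD PRIMARY KEY (option_id);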
I figured out what caused the problem. When importing the SQL file, there was an error while creating one of the keys for one of the tables. As a result, the import skipped every key after that point, and the option_id unique key was one of the keys it skipped. So yeah, you were right, the problem happened with a lot of other tables too. The solution was to import the remaining keys and indexes from the .sql file.
What I mean is this:
--
-- Indexes for table wp_links
ALTER TABLE wp_links
ADD PRIMARY KEY (link_id), ADD KEY link_visible (link_visible);
--
-- Indexes for table wp_options
ALTER TABLE wp_options
ADD PRIMARY KEY (option_id), ADD UNIQUE KEY option_name (option_name), ADD KEY wpe_autoload_options_index (autoload);
I forgot to delete the wp_links table before importing the new database tables, so the import couldn't create the link_id primary key since it already existed. As a result, every key that appeared in the file after that one got skipped.
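If you land in the same spot, something like this (MySQL) will show which keys actually made it in, and clear the stale table before re-running the import:

SHOW INDEX FROM wp_options;      -- lists every index that currently exists on the table
DROP TABLE IF EXISTS wp_links;   -- remove the leftover table so the dump can recreate it, keys included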

How to design DynamoDB table to facilitate searching by time ranges, and deleting by unique ID

I'm new to DynamoDB - I already have an application where the data gets inserted, but I'm getting stuck on extracting the data.
Requirements:
1. There must be a unique table per customer
2. Insert documents into the table (each doc has a unique ID and a timestamp)
3. Get X number of documents based on timestamp (ordered ascending)
4. Delete individual documents based on unique ID
So far I have created a table with a composite key (S:id, N:timestamp). However, when I come to query it, I realise that since my id is unique and I can't do a wildcard search on the hash key, I won't be able to extract a range of items...
So, how should I design my table to satisfy this scenario?
Edit: Here's what I'm thinking:
Primary index will be composite: (s:customer_id, n:timestamp), where the customer ID will be the same within a table. This will enable me to extract data based on a time range.
Secondary index will be a hash (s:unique_doc_id), so that I will be able to delete items using this index.
Does this sound like the correct solution? Thank you in advance.
You can satisfy the requirements like this:
Your primary key will be h:customer_id and r:unique_id. This makes sure all the elements in the table have different keys.
You will also have an attribute for timestamp and will have a Local Secondary Index on it.
You will use the LSI for requirement 3, and the BatchWriteItem API call to do a batch delete for requirement 4.
This solution doesn't require (1): all the customers can stay in the same table. (Heads up: there is a limit-before-you-contact-AWS of 256 tables per account.)
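As a rough illustration via DynamoDB's PartiQL interface (the table name Docs, the attribute names, and an LSI called ts_index are all assumptions here; the cap of X documents is applied through the request's paging/limit options rather than in the statement):

-- Requirement 3: one customer's documents, oldest first, via the LSI
SELECT * FROM "Docs"."ts_index"
WHERE customer_id = 'cust-1' AND ts >= 0
ORDER BY ts ASC;

-- Requirement 4: delete a single document by its full primary key (partition + sort)
DELETE FROM "Docs"
WHERE customer_id = 'cust-1' AND unique_id = 'doc-42';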

SQLite: Find the most recent ROWID, or the next ROWID

So I have a table with data about an image. The table looks something like this...
ROWID|title|description|file_path
The file path contains the name of the image. I want to rename the image to match the ROWID.
How do I get the latest ROWID? I also need to account for rows that have been deleted, as I am using this as an autoincremented primary key. If rows within the table have been deleted, it is possible for the table to look like this...
1|title A|description A|..\fileA.jpg
2|title B|description B|..\fileB.jpg
5|title E|description E|..\fileE.jpg
7|title G|description G|..\fileG.jpg
On top of that, there could be one or more rows at the end that have been deleted, so the next ROWID could be 10 for all I know.
I also need to account for a fresh new table, or a table that has had all of its data deleted, where the next ROWID could be 1000.
In summary, I guess the real question is: Is there a way to find out what the next ROWID will be?
If you have specified AUTOINCREMENT on the primary key field and the table is not empty, this query will return the latest ROWID for table MY_TABLE:
SELECT seq
FROM sqlite_sequence
WHERE name = 'MY_TABLE'
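If you want one query that also covers tables where sqlite_sequence has no row yet, a hedged combination with the MAX(rowid) approach (note that sqlite_sequence itself only exists once at least one AUTOINCREMENT table has been created):

SELECT COALESCE(
    (SELECT seq FROM sqlite_sequence WHERE name = 'MY_TABLE'),
    (SELECT MAX(rowid) FROM MY_TABLE)
) AS latest_rowid;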
What language? It looks like the C API has the following function:
sqlite3_int64 sqlite3_last_insert_rowid(sqlite3*);
http://www.sqlite.org/c3ref/last_insert_rowid.html
You could also just do:
select MAX(rowid) from [tablename];
Unfortunately, neither of these methods completely worked the way I needed them to. What I did end up doing was:
insert the data into the table with the fields that needed the rowid filled with a placeholder ('aaa'),
then update those rows with the real data.
This seemed to solve my current issue. Hopefully it doesn't cause another issue down the road.
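A minimal sketch of that placeholder trick (table and column names are assumed; last_insert_rowid() is per-connection in SQLite):

-- Insert with a placeholder path, then rewrite it using the rowid SQLite assigned
INSERT INTO images (title, description, file_path)
VALUES ('title H', 'description H', 'aaa');
UPDATE images
SET file_path = '..\file' || last_insert_rowid() || '.jpg'
WHERE rowid = last_insert_rowid();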
I think last_insert_rowid is what you want, usually.
Note that the rowid behavior differs depending on the AUTOINCREMENT flag: either rowids increase monotonically, or a freed id may be reused once the largest rowid has been deleted. This will not usually affect smaller use cases, though.

How to make sure SQLite never creates a row with the same ID twice?

I have a SQLite table that looks like this:
CREATE TABLE Cards (id INTEGER PRIMARY KEY, name TEXT)
So each time I create a new row, SQLite is going to automatically assign it a unique ID.
However, if I delete a row and then create a new row, the new row is going to have the ID of the previously deleted row.
How can I make sure it doesn't happen? Is it possible to somehow force SQLite to always give really unique IDs, that are even different from previously deleted rows?
I can do it in code but I'd rather let SQLite do it if it's possible. Any idea?
Look at AUTOINCREMENT (INTEGER PRIMARY KEY AUTOINCREMENT). It guarantees this, and if the request can't be honored (the id counter is exhausted) it will fail with SQLITE_FULL.
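A quick sketch of the difference that one keyword makes:

CREATE TABLE Cards (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
INSERT INTO Cards (name) VALUES ('a');   -- id = 1
INSERT INTO Cards (name) VALUES ('b');   -- id = 2
DELETE FROM Cards WHERE id = 2;
INSERT INTO Cards (name) VALUES ('c');   -- id = 3: id 2 is never handed out again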

The ids of a table in my sqlite database have been reset

I have an iPhone app and one of my users found a really strange problem with my application. I can't reproduce the problem and I can't figure out why it's happening. Maybe you can?
In SQLite I have a table with about 1000 rows, each with a unique id. But for some reason the id sequence of that table has restarted: before, new ids were assigned around 1000, but now they start from 80-something. So every time the user inserts a new row, the newly assigned id lands around 80-something and I get duplicate ids in a column that should be unique, and yeah, you can understand the problem. I have looked at all the queries that touch that table and none of them could have done this. I always rely on the built-in mechanism where the ids are assigned automatically.
Have you seen anything like this?
The schema of the table looks like this:
CREATE TABLE mytable(
id INTEGER PRIMARY KEY
);
As you can see, I don't use AUTOINCREMENT. But from what I understand, even if the user deletes a row with id 80, it is OK to give a newly inserted row id 80; it shouldn't work like it does now, where the database just keeps incrementing ids even though I already have rows with those ids. Shouldn't it work like this:
HIGHEST ROWID IS 1000, ALL IDS FROM 0-1000 ARE TAKEN
USER DELETES ROW WITH ID 80
INSERT A NEW ROW
THE ID OF THE INSERTED ROW MIGHT NOW BE 80
SETS THE ID OF THE INSERTED ROW TO 80
INSERT A NEW ROW
THE ID OF THE INSERTED ROW CAN NOT BE 81 AS THIS IS ALREADY TAKEN
SETS THE ID OF THE INSERTED ROW TO 1001
Isn't that how it should work?
Did you declare your id column as a(n autoincrementing) primary key?
CREATE TABLE mytable(
id INTEGER PRIMARY KEY AUTOINCREMENT
);
By adding the autoincrement keyword you ensure that all keys generated will be unique over the lifetime of your table. By omitting it, the keys will still be unique, but it may generate keys that have already been used by other, deleted entries. Note that using autoincrement can cause problems, so read up on it before you add it.
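For contrast, a minimal sketch of the reuse you get without the keyword:

CREATE TABLE demo (id INTEGER PRIMARY KEY);   -- rowid alias, no AUTOINCREMENT
INSERT INTO demo VALUES (NULL);               -- id = 1
INSERT INTO demo VALUES (NULL);               -- id = 2
DELETE FROM demo WHERE id = 2;                -- the largest id is gone
INSERT INTO demo VALUES (NULL);               -- id = 2 again (max(id) + 1)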
Edit: This is a bit of a long shot, but SQLite only supports one primary key per table. If you have more than one primary key declared, you need to declare all but the one you actually want to use as the primary key as "unique". Hence:
CREATE TABLE mytable(
id INTEGER PRIMARY KEY,
otherId INTEGER UNIQUE
);
Hard to say without the code and schema, but my instinct is that this unique ID is defined as neither UNIQUE nor PRIMARY KEY, which it should be.
How do you make sure (in theory) that ids are unique? What is your insert query like?
