Why is the SQLite checksum not the same after reversing edits?

Obviously, editing any column value will change the checksum,
but saving the original value back does not return the file to its original checksum.
I ran VACUUM before and after, so it isn't due to buffer size.
I don't have any indexes referencing the column, and rows are not added or removed, so the primary-key index shouldn't need to change either.
I tried turning off the rollback journal, but that is a separate file, so I'm not surprised it had no effect.
I'm not aware of any internal log or modification dates that would explain why the same content does not produce the same file bytes.
I'm looking for insight into what is happening inside the file to explain this, and whether there is a way to make it behave (I don't see a relevant PRAGMA).
Granted, https://sqlite.org/dbhash.html exists to work around this problem, but I don't see any of its listed conditions being triggered, and "... and so forth" is a pretty vague cause.

Database files contain (the equivalent of) a timestamp of the last modification so that other processes can detect that the data has changed.
There are many other things that can change in a database file (e.g., the order of pages, the B-tree structure, random data in unused parts) without a difference in the data as seen at the SQL level.
If you want to compare databases at the SQL level, you have to compare a canonical SQL representation of that data, such as the .dump output, or use a specialized tool such as dbhash.
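For example, here is a minimal sketch (using Python's built-in sqlite3 module; the file names are illustrative) that compares two databases by hashing their canonical SQL dumps instead of their raw bytes:

```python
import hashlib
import sqlite3

def dump_hash(path):
    """Hash the canonical SQL dump of a database (same content as .dump)."""
    con = sqlite3.connect(path)
    try:
        h = hashlib.sha256()
        for line in con.iterdump():
            h.update(line.encode("utf-8"))
            h.update(b"\n")
        return h.hexdigest()
    finally:
        con.close()

# Files whose raw bytes differ can still compare equal at the SQL level.
print(dump_hash("before.db") == dump_hash("after.db"))
```

Note that the dump order follows rowids, so two databases with the same logical data but different rowids can still hash differently; dbhash handles such cases more carefully.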

Related

Can SELECT commands edit an SQL database in any way?

Is there any way in which SELECT commands alter an SQLite database? I would assume not, but I don't want to rely on that assumption. (The specific concern I had in mind was whether querying the database creates, e.g., indexes or similar structures for quicker retrieval later, thereby causing the database file to change.)
I'm asking because I want to cache some values calculated from an SQLite file, and only update those values if there has been an edit to the file [specifically, if the file size in bytes has changed, which would indicate the database has changed; the calculations are quite computationally intensive, so I don't want to repeat them unless needed].
SELECT statements cannot modify the database.
SQLite sometimes needs to store temporary indexes or intermediate results, but such data goes into the temporary database, not into the actual database file.
Anyway, to find out whether a database file has changed, check the file change counter.
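The change counter is stored in the database header as a 4-byte big-endian integer at offset 24, so it can be read without even opening the database. A minimal sketch in Python (the file name is illustrative):

```python
import struct

def file_change_counter(path):
    """Read the 4-byte big-endian file change counter at header offset 24."""
    with open(path, "rb") as f:
        f.seek(24)
        return struct.unpack(">I", f.read(4))[0]

# Re-check later; a different value means the file was modified.
print(file_change_counter("mydata.db"))
```

One caveat: in WAL mode the change counter is not necessarily incremented on every transaction, since readers detect changes through the WAL index instead.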

Meta-data from SQLite

Is there any way to query a SQLite database for basic meta data such as:
Last date/time updated
Hash of database to indicate "state"
I am just looking for a simple, infrastructural way to have a script evaluate different databases and make a reasonable judgment about whether they are in the same "state" as databases in a different environment (PROD and DEV, for instance).
In my experience, if no update, insert, or other change is made to an SQLite database file, the file's last-modified time doesn't change, so the last-modified time should suffice as the time of the most recent change to the database.
If two database files with the same state are only accessed for reading, their modified times stay the same.
Similarly, you can compare the file sizes.
You can hash the whole file. But if you consider the same data in the database to be the same "state" regardless of any differences in its history, then what you really want is a hash of all the records in the database, which is probably not simple.
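For the "hash of all records" approach, a rough sketch (using Python's sqlite3 module and assuming ordinary rowid tables; WITHOUT ROWID tables would need a different ordering):

```python
import hashlib
import sqlite3

def content_hash(path):
    """Fingerprint the logical content: every row of every table."""
    con = sqlite3.connect(path)
    try:
        h = hashlib.sha256()
        tables = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
        for table in tables:
            h.update(table.encode("utf-8"))
            # Ordering by rowid keeps the hash stable only while rowids are.
            for row in con.execute(f'SELECT * FROM "{table}" ORDER BY rowid'):
                h.update(repr(row).encode("utf-8"))
        return h.hexdigest()
    finally:
        con.close()
```

The dbhash utility implements this idea properly, so prefer it where available.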

sqlite: online backup is not identical to original

I'm doing an online backup of an (idle) database using the example 2 code from here. The backup file is not identical to the original (the length is the same, but it differs in 3 bytes), although the .dump from both databases is identical. Backup files taken at different times are identical to each other.
This isn't great, as I'd like a simple guarantee that the backup is identical to the original, and I'd like to record checksums on the actual database and the backups to simplify restores. Any idea if I can get around this, or if I can use the backup API to generate files that compare identically?
The online backup can write into an existing database, so this writing is done inside a transaction.
At the end of such a transaction, the file change counter (offsets 24-27) is changed to allow other processes to detect that the database was modified and that any caches in those processes are invalid.
This change counter does not use the value from the original database because it might be identical to the old value of the destination database.
If the destination database is freshly created, the change counter starts at zero.
This is likely to be a change from the original database, but at least it's consistent.
The byte at offset 28 was decreased because the database has some unused pages.
The byte at offset 44 was changed because the database does not actually use new schema features.
You might be able to avoid these changes by doing a VACUUM before the backup, but this wouldn't help for the change counter.
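If byte-for-byte equality is not achievable, one workaround is to take the backup with the API and record a content-level checksum (dbhash, or a dump hash) instead of a raw file checksum. A minimal sketch using Python's sqlite3.Connection.backup(), available since Python 3.7 (the file names are illustrative):

```python
import sqlite3

# Online backup: copies all pages of src.db into backup.db as
# a single consistent snapshot, even while src.db is in use.
src = sqlite3.connect("src.db")
dst = sqlite3.connect("backup.db")
src.backup(dst)
src.close()
dst.close()
```

The resulting file's header fields (change counter and so on) may still differ from the original, which is exactly why restores should be verified at the content level.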
I would not have expected them to be identical anyway; the backup API only guarantees that backups are self-consistent (i.e., that transactions in progress are ignored).

Does SQLite checksum its data?

Hard-drive bit rot does happen. I'm using SQLite for a project with fairly critical data. Obviously, I'll be taking regular backups of the database, but does SQLite checksum its data?
I've read about PRAGMA integrity_check, but I can't really tell whether it checks the integrity of the actual data. The page "How To Corrupt An SQLite Database File" doesn't really mention bit rot on a hard drive, which is why I'm asking.
Also, the database I am dealing with will be an indexable append-only log. One option would be for me to rotate the database regularly and create an MD5 sum of each rotated file. But maybe that's too much work...
Any input appreciated.
From reading the integrity_check documentation, I would say it would not be guaranteed to detect corruption that only affects user data (due to undetected bit errors on media).
Since your data is an append-only log, you've got it pretty easy. One way would be to write a text file log on a separate hard drive that contains hashes (MD5 or whatever) of every row of your data. Then you can use that hash log to verify the contents of the real database. Obviously backups will be an integral part of your plan.
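A minimal sketch of that hash-log idea (the table schema, file paths, and the use of MD5 are all illustrative):

```python
import hashlib
import sqlite3

DB = "events.db"
HASH_LOG = "/mnt/other-drive/rowhashes.txt"  # hypothetical separate drive

def append_row(message):
    con = sqlite3.connect(DB)
    with con:
        con.execute("CREATE TABLE IF NOT EXISTS log(message TEXT)")
        cur = con.execute("INSERT INTO log(message) VALUES (?)", (message,))
        rowid = cur.lastrowid
    con.close()
    # Record the row's hash on separate media, so bit rot in one place
    # can be detected by re-reading and re-hashing the other.
    digest = hashlib.md5(message.encode("utf-8")).hexdigest()
    with open(HASH_LOG, "a") as f:
        f.write(f"{rowid} {digest}\n")
```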
Just stumbled upon this; I could be using the fzec Python package to recover broken data. Each row would have multiple "fzec block columns" to recover from corruption. Seems pretty neat.

What are the restrictions on "database-name" when attaching a database using SQLite?

In the absence of any information otherwise, assume it's an SQL identifier. SQLite reserves the names main and temp, but almost anything else can be used if properly quoted. Still, I'd recommend avoiding SQL keywords and such, if just to keep the confusion quota down. (Database names are arbitrary, and do not need to correlate with the name of the file containing them.)
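For instance, a short sketch (the file and schema names are made up) showing that even a keyword works as a schema name when quoted:

```python
import sqlite3

con = sqlite3.connect("main.db")
# "order" is an SQL keyword, but double-quoting makes it a legal
# (if confusing) schema name for the attached database.
con.execute("""ATTACH DATABASE 'other_file.db' AS "order" """)
print(con.execute('SELECT name FROM "order".sqlite_master').fetchall())
con.execute('DETACH DATABASE "order"')
```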
