I have a script that was reading data from a sqlite3 database, and while this script was running I made a copy of the database with `cp mydatabase mydatabase.bak`. Will this affect either the script that was reading from the db or the copy of the db? I had a look at the sqlite documentation here [0], but I didn't put a lock on the db as per the instructions.
[0] http://www.sqlite.org/backup.html
Copying the file should be analogous to another application reading the database, so it shouldn't be a problem. Multiple applications can safely read the database file at the same time (per the SQLite FAQ).
As another point, consider that you can read from a database even if the database and its directory both lack write permissions. Since in that scenario there's no way for the reading application to be modifying the database file or creating a temp file that needs to be incorporated into it, there's no way for any of a number of simultaneously reading applications to affect what any of the others see.
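As an illustration, here is a minimal Python sketch (using the filename from the question) of two connections reading the same file at the same time; a `cp` of the file behaves much like a third such reader:

```python
import sqlite3

# Two independent connections reading the same database file
# ("mydatabase" is the filename from the question).
conn_a = sqlite3.connect("mydatabase")
conn_b = sqlite3.connect("mydatabase")

# Both connections can read concurrently; neither blocks the other.
print(conn_a.execute("SELECT count(*) FROM sqlite_master").fetchone())
print(conn_b.execute("SELECT count(*) FROM sqlite_master").fetchone())

conn_a.close()
conn_b.close()
```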
Related
I'm using a sqlite database with sqlalchemy in a python web app. I'd like to periodically back up the database by copying the file to blob storage. Obviously I can just copy the file, but I figure this'll lead to a corrupted backup if a write operation occurs while the file is being copied.
One approach is to acquire a file lock everywhere in my application that writes to the DB. But that's a bit error prone. Any recommendations?
The following seems to work:
import subprocess
subprocess.check_call(["sqlite3", running_db_filename, f".backup {target_path}"])
The only downside is that the f-string makes it vulnerable to injection if target_path isn't trusted.
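If shelling out feels fragile, Python's sqlite3 module exposes the same online backup mechanism directly (Connection.backup, available since Python 3.7), which sidesteps the injection issue. A minimal sketch, reusing the variable names from the snippet above:

```python
import sqlite3

def backup_db(running_db_filename: str, target_path: str) -> None:
    # Open the live database and the backup target, then let SQLite's
    # online backup API copy it page by page; this is safe even while
    # other connections are writing to the source.
    src = sqlite3.connect(running_db_filename)
    dst = sqlite3.connect(target_path)
    try:
        src.backup(dst)
    finally:
        src.close()
        dst.close()
```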
If a SQLite database using write-ahead logging is interrupted with un-checkpointed transactions (due to a power failure or whatever), then reopened with the temporary -wal file missing, will the database open cleanly to its state as of the last checkpoint, or will it be corrupted in some way?
We're trying to get SQLite working with iCloud (yes, we know you're not supposed to do that, but we also make a Windows and an Android app and need a cross-platform database solution), and we think that WAL provides a potential way to avoid having to maintain two copies of our database - we'd keep the -wal file outside of iCloud but store the main database in it, thus avoiding the problem of iCloud backing up rollback journals (or backing up databases mid-transaction without those journals).
The file format documentation mentions a "hot WAL file", but this applies only to uncommitted data.
The database file itself does not contain any information about committed data in the -wal file; i.e., transactions committed since the last checkpoint typically do not alter the main database file at all.
Therefore, deleting the -wal file will simply restore the database to the state it was after the last checkpoint (which is outdated, but consistent); all transactions committed later will just be lost.
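If you want the main database file to be as up to date as possible before it is synced or copied (so that losing the -wal file costs nothing), you can force a checkpoint yourself. A minimal sketch using Python's sqlite3 module, with a hypothetical filename; note that the checkpoint can only run to completion if no other connection is in the middle of reading the WAL:

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical database path

# Transfer all committed transactions from the -wal file into the main
# database file and truncate the -wal file to zero length.
conn.execute("PRAGMA wal_checkpoint(TRUNCATE);")
conn.close()
```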
See the "Checkpointing" section of SQLite's Write-Ahead Logging. From what I understand, the data in the WAL file would simply not be committed.
In other words, you'd lose the data in the .WAL file that hasn't yet been committed, but the main database itself should be perfectly fine.
It can lead to db corruption if the WAL file is deleted during a checkpoint operation. If a database modification is unfinished, the WAL file is needed to complete the changes; otherwise the db file is left in a transient state. The db and the WAL file together form the complete picture of the db state. It is also explicitly stated in https://www.sqlite.org/howtocorrupt.html#delhotjrnl that "SQLite must see the journal files in order to recover from a crash or power failure."
Open read-only
I have a sqlite3 file on a filesystem that belongs to a different user than the one running the reading process. I want the reading process to be able to read the file in read-only mode, so I'm passing SQLITE_OPEN_READONLY. I would expect that to work. Surely the idea is that read-only mode works on files that we don't want to write to?
When I prepare my first statement I get
unable to open database file
Similarly if I run the sqlite3 command line tool I get the same result unless I sudo. Which seems to confirm to me that the issue is writeability rather than anything else.
Journal files
The answer to this question seems to suggest that if there are journal files around then read-only access isn't possible.
Why are there journal files? Because another process is writing to the file while my user process is trying to open it read-only. The writer uses Write-Ahead Logging, which produces two journal files, -shm and -wal. True enough, if I stop the writing process and remove the journal files, my user process can open the database in read-only mode.
Incompatibility?
So I have two situations:
If the file belongs to the writing process and also to the read-only process, write-ahead logging lets process A write while process B reads read-only
If the file belongs to the writing process but does not belong to the read-only process, the read-only process is blocked from opening read-only.
How do I achieve both of these? To spell it out, I want:
Writing process owns database
Read-only process does not own database
Read-only process cannot write to database
Write-ahead logging is enabled on database
Seems like a simple set of requirements, but I can't see an obvious solution.
**EDIT:** Going by this documentation, it looks like this isn't possible. Can you suggest any alternative ways to achieve the above?
Yes, WAL-journaled databases cannot be opened read-only, explicitly or otherwise (i.e. in the case where the database file is read-only to the process).
If you require that the read-only process absolutely not be allowed to modify the database file, then the only thing that comes to mind is that the write process maintains an additional copy of the database that is not WAL-journaled.
Bottom line: to the best of my knowledge, WAL and read-only can't be done.
I think what the documentation is saying is that the WAL database itself may not be present on read-only media, which does not necessarily mean you cannot use SQLITE_OPEN_READONLY. In fact, I have successfully opened two connections, one read-write as well as one with SQLITE_OPEN_READONLY, both on a WAL sqlite database, and they work just fine. I tested an INSERT query using the read-only connection and the statement correctly returned an error that the database is read-only.
Just make sure that the database is stored on media with write access, as a -shm file needs to be created and maintained; even a 'read-only' connection may actually physically write something to disk, which doesn't necessarily mean that it can modify data using SQL.
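For what it's worth, here is a small Python sketch of that setup (the filename is hypothetical): one read-write connection that enables WAL, and one read-only connection opened via a URI filename, which corresponds to SQLITE_OPEN_READONLY.

```python
import sqlite3

# Hypothetical filename; the directory must be writable so the
# -shm file can be created.
rw = sqlite3.connect("app.db")
rw.execute("PRAGMA journal_mode=WAL;")   # writer enables WAL mode

# Read-only connection via a URI filename, the sqlite3-module
# equivalent of passing SQLITE_OPEN_READONLY.
ro = sqlite3.connect("file:app.db?mode=ro", uri=True)

print(ro.execute("SELECT count(*) FROM sqlite_master").fetchone())  # reads work
# ro.execute("CREATE TABLE t (x)") would fail with
# "attempt to write a readonly database"

rw.close()
ro.close()
```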
I've got a sqlite3 DB that I need to read (not write) sitting on a read-only filesystem. There is also a -journal file associated with the database, which is interfering with opening the database because the first thing the sqlite code wants to do is delete that -journal file and it cannot because the filesystem is read-only. Setting the journal_mode to off doesn't help because that apparently only applies to new transactions. Is there a way to tell sqlite3 to simply ignore all mention of a -journal file associated with a DB?
Unfortunately no.
The problem is that the existence of a journal file indicates that a transaction was left in an incomplete state, and needs to be rolled back by transferring the content of the journal file back into the database file.
This requires write access to the file system, and SQLite will not allow you to open the file without performing this rollback.
You can read more about this here: Read-Only Databases:
No SQLite database (regardless of whether or not it is WAL mode) is readable if it is located on read-only media and it requires recovery. So, for example, if an application crashes and leaves an SQLite database with a hot journal, that database cannot be opened unless the opening process has write privilege on the database file, the directory containing the database file, and the hot journal. This is because the incomplete transaction left over from the crash must be rolled back prior to reading the database and that rollback cannot occur without write permission on all files and the directory containing them.
If you don't care about the possible corruption that discarding the journal file might lead to, you can make a copy of the database file, and leave the journal behind. Though, if you have the ability to do that, I would in fact copy the journal file too, to a writable file system, and open that database as normal, which would roll back the transaction properly.
The copy on the read-only file system though is not usable in its current state.
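A sketch of that "copy both files somewhere writable" approach, with hypothetical paths; opening the copy lets SQLite perform the rollback normally:

```python
import shutil
import sqlite3

# Hypothetical paths: the read-only original and a writable working copy.
shutil.copy("/readonly/data.db", "/tmp/data.db")
shutil.copy("/readonly/data.db-journal", "/tmp/data.db-journal")

# Opening the writable copy lets SQLite roll back the incomplete
# transaction recorded in the hot journal, leaving a consistent database.
conn = sqlite3.connect("/tmp/data.db")
print(conn.execute("SELECT count(*) FROM sqlite_master").fetchone())
conn.close()
```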
Is it somehow possible to create an in-memory database in SQLite and then destroy it just with a query?
I need to do this for unit testing my db layer. So far I've only worked by creating a normal SQLite db file and deleting it after all tests, but doing it all in memory would be much better.
So is it possible to instantiate a database only in memory, without writing anything to disk?
I can't use just transactions, because I want to create whole new database.
Create it with the filename ":memory:": In-Memory Databases.
It'll cease to exist as soon as the connection to it is closed.
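For example, with Python's sqlite3 module (just an illustration; any SQLite binding works the same way):

```python
import sqlite3

# The special filename ":memory:" creates a database that lives
# entirely in RAM for the lifetime of this connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
print(conn.execute("SELECT * FROM users").fetchall())

conn.close()  # the database is gone; nothing was ever written to disk
```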
As an alternative to in-memory databases, you can create a SQLite temporary database by using an empty string for the filename. It will be deleted when the connection is closed. The advantage over an in-memory database is that your databases are not limited to available memory.
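The same idea in Python, again just as an illustration:

```python
import sqlite3

# An empty filename creates a private temporary on-disk database
# that SQLite deletes automatically when the connection is closed.
conn = sqlite3.connect("")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.close()  # the temporary file is removed here
```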
Alternatively, you can create your database in a temp file and let the operating system clean it up. This has the advantage of being accessible for inspection.
I'd suggest mounting a tmpfs filesystem somewhere (RAM only filesystem) and using that for your unit tests.
Instantiate DB files as normal then blow them away using rm - yet nothing has gone to disk.
(EDIT: Nice - somebody beat me to a correct answer ;) Leaving this here as another option regardless)
I suggest using the lmDisk toolkit.
It's a toolkit that mounts part of RAM or an image file as a normal disk; you can copy your project (or just your db) there.
I've tried it to process raw data and create a db for a game AI.