How do I run long-running queries when there are constant writes in SQLite3?

I have an SQLite3 database into which one process inserts a single row every second or so. Is it possible to execute a long-running query on this database from another process? When I try, I often get "database is locked" errors. Do both the reader and the writer require exclusive access to the database? Will a very long read-only query cause the writes to fail?

Try using the WAL journal mode. In WAL mode, readers do not block the writer and the writer does not block readers, so the long query and the once-a-second inserts can proceed concurrently.
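As an illustration (the file name is hypothetical), switching a database to WAL from Python's built-in sqlite3 module looks like this; the journal mode is a persistent, database-level setting, so running it once per database file is enough:

import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical file name
# The pragma returns the journal mode now in effect; 'wal' on success.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # -> 'wal'
# Optionally, wait up to 5 s for locks instead of failing immediately
# with 'database is locked'.
conn.execute("PRAGMA busy_timeout=5000")
conn.close()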

Related

Why are temporally long SELECTs in SQLite blocking updates in other processes?

I'm on a Mac running macOS 10.15.7. My SQLite version is 3.32.3.
I have a large SQLite database (16GB) against which the query behavior is kind of mystifying. I have a SELECT query which takes a very long time (between 20 and 30 seconds). If I start this query in one SQLite shell, and attempt to do an UPDATE in another SQLite shell, I can get a write lock, but the commit yields "database is locked" (which I'm pretty sure corresponds to SQLITE_BUSY):
sqlite> begin immediate transaction;
sqlite> update edges set suppressed = 1 where id = 1;
sqlite> end transaction;
Error: database is locked
As I understand SQLite, it supports parallel reads but exclusive writes, and I'm only doing a write in the shell shown here; the other one is just running an expensive SELECT. The documentation does say this:
An attempt to execute COMMIT might also result in an SQLITE_BUSY return code if another thread or process has an open read connection. When COMMIT fails in this way, the transaction remains active and the COMMIT can be retried later after the reader has had a chance to clear.
But I don't understand why, or under what circumstances this COMMIT behavior arises; it says "might", but it doesn't elaborate. Nor do I understand how this statement is consistent with the idea that SQLite is exclusive only with respect to writes.
Thanks to all in advance for an explanation.
Commenter Shawn is correct; the answer is that in the default (rollback-journal) mode, a writer cannot commit while either another write or a read is underway: the COMMIT needs an exclusive lock, and it cannot obtain one until every reader's shared lock is released. Shawn's comment made this clear, although I couldn't find it spelled out in the core SQLite documentation.
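For completeness, here is a small illustration (not from the original thread) of working around this in Python's sqlite3 module, using the edges table from the question and a hypothetical file name: a busy timeout makes the failing COMMIT retry until the reader clears, instead of erroring at the first SQLITE_BUSY:

import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# explicit BEGIN/COMMIT below are what actually drive the transaction.
conn = sqlite3.connect("edges.db", isolation_level=None)
conn.execute("PRAGMA busy_timeout=30000")  # retry for up to 30 s on SQLITE_BUSY
conn.execute("BEGIN IMMEDIATE")            # take the write lock up front
conn.execute("UPDATE edges SET suppressed = 1 WHERE id = 1")
conn.execute("COMMIT")                     # now waits for readers to clear
conn.close()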

sqlite .backup command fails when another process writes to the database (Error: database is locked)

The goal is to complete an online backup while other processes write to the database.
I connect to the sqlite database via the command line, and run
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it's unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to sqlite and databases in general.
Am I going about this the wrong way?
If the sqlite3 backup writes "Error: database is locked", then you should use
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? about default timeouts (spoiler: the default is zero). With backups solved, you can also switch SQLite to WAL mode, where readers and the writer no longer block each other.
(Writing this as an answer so it would be easier to google; thanks guys!)
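If the backup is driven from code rather than the shell, Python's sqlite3 module exposes the same online-backup API; a minimal sketch, assuming hypothetical file names:

import sqlite3

src = sqlite3.connect("source.db")
src.execute("PRAGMA busy_timeout=10000")  # same idea as .timeout in the shell
dst = sqlite3.connect("backup.db")
# Online backup: copies pages in the background while the source
# database stays readable and writable by other connections.
src.backup(dst)
dst.close()
src.close()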

Is it possible to mark an SQLite DB as a non-corrupt one

I aborted a VACUUM with Ctrl+C and deleted the journal (I thought it was useless in that case). Now SQLite reports that the DB is corrupt. I wonder if it is possible to mark the DB as non-corrupt without recreating it by translating it to SQL and back.
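As far as I know there is no flag that simply marks a damaged file as healthy; deleting a hot journal discards the rollback data, so the corruption may be real. The translate-to-SQL-and-back route the question mentions looks like this in Python's sqlite3, as a sketch with hypothetical file names (it can still fail on badly damaged pages):

import sqlite3

src = sqlite3.connect("corrupt.db")   # hypothetical file names
dst = sqlite3.connect("rebuilt.db")
# iterdump() is the programmatic equivalent of the shell's .dump: it
# emits the database as SQL text, and replaying that text builds a
# fresh, consistent file.
dst.executescript("\n".join(src.iterdump()))
dst.close()
src.close()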

QSqlError("5", "Unable to fetch row", "database is locked")

I am getting this error "QSqlError("5", "Unable to fetch row", "database is locked")"
I have done my research and I think the problem arises from the fact that I am executing an INSERT query while the SELECT query is still active, which locks the database. Now I'd imagine people run into this problem often since it is common to write to a database based on the output of a SELECT query, so I wanted to ask what is the best way to solve this? Would I be able to fetch the query (using query.next()) after closing it with query.finish() to unlock the database? Or should I store the result in a temporary container, close the query then iterate over the temporary container?
Thank you very much in advance
Do you have a database viewer open when you run this? I had a similar issue that only occurred when DB Browser for SQLite was running. Make sure that you don't have any other software holding your database file open. I don't always have this issue when using DB Browser for SQLite, but when I do, closing the program fixes it.
In addition, I tend to run query.finish() after each query is complete, to ensure no lingering interaction.
I hope this helps you out!
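Regarding the two options in the question: draining the SELECT into a temporary container and calling query.finish() before the INSERT is the safer pattern, because finish() releases the read lock that would otherwise block the write (and after finish() the query can no longer be fetched). A hedged sketch with PyQt5's QtSql, where the file, table, and column names are made up:

import sys
from PyQt5.QtCore import QCoreApplication
from PyQt5.QtSql import QSqlDatabase, QSqlQuery

app = QCoreApplication(sys.argv)
db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("app.db")  # hypothetical file
db.open()

# 1. Run the SELECT and copy the rows out while the query is active.
select = QSqlQuery("SELECT id, name FROM items")
rows = []
while select.next():
    rows.append((select.value(0), select.value(1)))
select.finish()  # release the read lock before writing

# 2. The INSERTs no longer race against an active reader.
insert = QSqlQuery()
insert.prepare("INSERT INTO archive (item_id, name) VALUES (:id, :name)")
for item_id, name in rows:
    insert.bindValue(":id", item_id)
    insert.bindValue(":name", name)
    insert.exec_()

db.close()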

sqlite3 - Exception: database disk image is malformed

I'm working with IronPython 2.6 for .NET 4 and the sqlite3 module from IronPython.SQLite.
I have written a GUI program that runs in four frames of an MDI window. Each of the four frames receives data from a serial port and stores it in an SQLite database, one database per frame.
Besides inserting incoming data into the database, the program also queries the database every 100 ms for the latest data items.
I'm already wrapping the cursor.execute() calls in a mutex to prevent problems with simultaneous commands (INSERT or SELECT).
At runtime the program sporadically runs into an exception.
When trying to query data:
System.Exception: database disk image is malformed
or when trying to insert data:
System.Exception: database or disk is full
Is it possible that a database query shortly after a database insert (or the other way around) could cause such exceptions and corrupt the database?
I would appreciate any advice on how to solve this issue.
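One thing worth checking (an assumption, not a diagnosis): if the mutex wraps only cursor.execute(), the commit can still interleave with another thread's statements mid-transaction. A minimal sketch of holding one lock across the whole statement-plus-commit, written against standard CPython's sqlite3 for illustration, with a made-up table and file name:

import sqlite3
import threading

db_lock = threading.Lock()
conn = sqlite3.connect("frame1.db", check_same_thread=False)

def insert_row(value):
    # Hold the lock across execute AND commit, so another thread
    # cannot slip its own statements into this transaction.
    with db_lock:
        conn.execute("INSERT INTO samples (value) VALUES (?)", (value,))
        conn.commit()

def latest_rows(n=10):
    with db_lock:
        cur = conn.execute(
            "SELECT * FROM samples ORDER BY rowid DESC LIMIT ?", (n,))
        return cur.fetchall()  # fetch inside the lock, before any writer runs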
