I have one process constantly inserting into an SQLite3 database and another process running a slow SELECT against the same database.
Does sqlite3 lock the database on reads?
I want to make sure every write succeeds. Read failures are acceptable.
According to the SQLite locking documentation, after the start of a transaction (the BEGIN command), a SHARED lock is acquired when the first SELECT statement is executed. A SHARED lock means that the database may be read but not written. A RESERVED lock is acquired when the first INSERT, UPDATE, or DELETE statement is executed.
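To make that concrete, here is a minimal sketch using Python's sqlite3 module (the file name demo.db is made up, and the default rollback-journal mode is assumed): a reader that keeps its SHARED lock open inside a transaction lets a concurrent INSERT be staged, but blocks the writer's COMMIT.

import sqlite3

DB = "demo.db"  # hypothetical file; assumes the default rollback-journal mode

setup = sqlite3.connect(DB)
setup.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
setup.commit()
setup.close()

# isolation_level=None: no implicit transactions, BEGIN/COMMIT are issued by hand.
# timeout=0: fail immediately with "database is locked" instead of retrying.
reader = sqlite3.connect(DB, timeout=0, isolation_level=None)
writer = sqlite3.connect(DB, timeout=0, isolation_level=None)

# The reader's first SELECT inside the transaction takes a SHARED lock,
# which it keeps until its transaction ends.
reader.execute("BEGIN")
reader.execute("SELECT count(*) FROM t").fetchone()

# The writer can reach the RESERVED state and stage its INSERT...
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")

# ...but COMMIT needs an EXCLUSIVE lock, which the reader's SHARED lock blocks.
try:
    writer.execute("COMMIT")
except sqlite3.OperationalError as exc:
    print("commit failed:", exc)   # database is locked

reader.execute("COMMIT")           # reader finishes
writer.execute("COMMIT")           # the retried COMMIT now succeeds

When the COMMIT fails this way the write transaction stays open, so the writer can simply retry (or use a busy timeout) once the reader clears; the write is not lost.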
Related
I have an sqlite3 database that has one process inserting a single row every second or so. Is it possible to execute a long-running query on this database in another process? When I try, I often get "database locked" errors. Do both the reader and the writer require exclusive access to the database? Will a very long read-only query cause the writes to fail?
Try using the WAL journal mode.
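For example (a sketch; mydatabase.db stands in for whatever file the two processes share), the switch only has to be made once, because the journal mode is stored in the database file itself:

import sqlite3

conn = sqlite3.connect("mydatabase.db")  # hypothetical path to the shared database
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # prints "wal" if the switch succeeded
conn.close()

In WAL mode a long-running SELECT reads a consistent snapshot while the once-a-second INSERTs keep appending to the write-ahead log, so the reader and the writer no longer block each other.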
I'm on a Mac, running 10.15.7. My SQLite version is 3.32.3.
I have a large SQLite database (16GB) against which the query behavior is kind of mystifying. I have a SELECT query which takes a very long time (between 20 and 30 seconds). If I start this query in one SQLite shell, and attempt to do an UPDATE in another SQLite shell, I can get a write lock, but the commit yields "database is locked" (which I'm pretty sure corresponds to SQLITE_BUSY):
sqlite> begin immediate transaction;
sqlite> update edges set suppressed = 1 where id = 1;
sqlite> end transaction;
Error: database is locked
As I understand SQLite, it supports parallel reads but exclusive writes, and I'm only doing a write in the shell shown here; the other one is just running an expensive SELECT. The documentation does say this:
An attempt to execute COMMIT might also result in an SQLITE_BUSY return code if another thread or process has an open read connection. When COMMIT fails in this way, the transaction remains active and the COMMIT can be retried later after the reader has had a chance to clear.
But I don't understand why, or under what circumstances this COMMIT behavior arises; it says "might", but it doesn't elaborate. Nor do I understand how this statement is consistent with the idea that SQLite is exclusive only with respect to writes.
Thanks to all in advance for an explanation.
Commenter Shawn is correct; the answer is that in the default (rollback-journal) mode, a writer cannot commit while any other connection holds a read lock, and only one write transaction can be active at a time. This is made clear here, although I couldn't find it spelled out in the core SQLite documentation.
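In practice two things help (a sketch with Python's sqlite3 module; the file name edges.db is assumed, the table and column names are the ones from the question): give the writer a busy timeout so its COMMIT keeps retrying while the long SELECT finishes, and/or switch the database to WAL mode so the reader stops blocking the COMMIT altogether.

import sqlite3

# timeout=30.0: retry on SQLITE_BUSY for up to 30 seconds instead of failing at once.
conn = sqlite3.connect("edges.db", timeout=30.0)

# Optional and persistent: in WAL mode a long SELECT on another connection
# no longer blocks this COMMIT at all.
conn.execute("PRAGMA journal_mode=WAL;")

conn.execute("UPDATE edges SET suppressed = 1 WHERE id = 1")
conn.commit()
conn.close()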
The goal is to complete an online backup while other processes write to the database.
I connect to the sqlite database via the command line, and run
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it's unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to sqlite and databases in general.
Am I going about this the wrong way?
If the sqlite3 .backup command reports "Error: database is locked", set a busy timeout so it waits for the writer instead of failing immediately:
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? for the default timeout (spoiler: it's zero). And with the backup sorted out, you can switch SQLite to WAL mode, so that readers and the single writer no longer block each other.
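The same approach from Python (a rough sketch; the paths are placeholders) goes through Connection.backup(), which wraps SQLite's online-backup API; the timeout plays the role of .timeout above, and copying in small batches with a short sleep gives a concurrent writer room to get in between steps.

import sqlite3

src = sqlite3.connect("source.db", timeout=10.0)  # wait up to 10 s on locks, like ".timeout 10000"
dst = sqlite3.connect("backup.db")

# Copy 1024 pages at a time, sleeping 250 ms between batches; if the source
# is written to by another connection meanwhile, the backup restarts.
src.backup(dst, pages=1024, sleep=0.250)

dst.close()
src.close()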
(Writing this as an answer so it is easier to google; thanks, guys!)
I aborted a VACUUM with Ctrl+C and deleted the journal (I thought it was useless in that case). Now SQLite reports that the db is corrupt. I wonder if it is possible to mark the DB as non-corrupt without recreating it by dumping it to SQL and back.
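For what it's worth, "dumping it to SQL and back" is roughly the sketch below (Python, with made-up file names); how much it recovers depends on how much of the damaged file is still readable, so treat it as a last resort rather than a guaranteed fix.

import sqlite3

damaged = sqlite3.connect("corrupt.db")  # hypothetical: the damaged database
fresh = sqlite3.connect("rebuilt.db")    # hypothetical: a new, empty database

# iterdump() yields the schema and data as SQL statements; replaying them
# into a fresh file rebuilds whatever can still be read.
for statement in damaged.iterdump():
    fresh.execute(statement)
fresh.commit()

damaged.close()
fresh.close()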
I have a Python script which creates a sqlite database out of some external data. This works fine. But every time I execute a GROUP BY query on this database, I get an "Error: unable to open database file". Normal SELECT queries work.
This is an issue for both the Python sqlite3 library and the sqlite3 CLI binary:
sqlite> SELECT count(*) FROM REC;
count(*)
----------
528489
sqlite> SELECT count(*) FROM REC GROUP BY VERSION;
Error: unable to open database file
sqlite>
I know that these errors are typically permission errors (I have read all the questions I could find on this topic on Stack Overflow), but I'm quite sure that is not the case here:
I'm talking about an already-created database and read requests
I checked the permissions: Both the file and its containing folders have write permissions set
I can even write to the database: Creating a new table is no problem.
The device is not full; it has plenty of space.
Ensure that your process has access to the TEMP directory.
From SQLite's Use Of Temporary Disk Files documentation:
SQLite may make use of transient indices to implement SQL language
features such as:
An ORDER BY or GROUP BY clause
The DISTINCT keyword in an aggregate query
Compound SELECT statements joined by UNION, EXCEPT, or INTERSECT
Each transient index is stored in its own temporary file. The
temporary file for a transient index is automatically deleted at the
end of the statement that uses it.
You can probably verify whether temporary storage is the problem by setting the temp_store pragma to MEMORY:
PRAGMA temp_store = MEMORY;
to tell SQLite to keep the transient index for the GROUP BY clause in memory.
Alternatively, create an explicit index on the column you are grouping by to prevent the transient index from being created.
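Both mitigations together might look like this (a sketch; the database path and the index name idx_rec_version are made up, while REC and VERSION come from the question):

import sqlite3

conn = sqlite3.connect("rec.db")  # hypothetical path to the generated database

# Option 1: keep transient GROUP BY structures in memory so SQLite never
# needs the (inaccessible) temp directory.
conn.execute("PRAGMA temp_store = MEMORY;")

# Option 2: a persistent index on the grouped column lets SQLite answer the
# GROUP BY without building a transient index at all.
conn.execute("CREATE INDEX IF NOT EXISTS idx_rec_version ON REC(VERSION);")
conn.commit()

for version, count in conn.execute("SELECT VERSION, count(*) FROM REC GROUP BY VERSION"):
    print(version, count)

conn.close()

On Unix you can also point the SQLITE_TMPDIR environment variable at a directory the process can write to, which addresses the underlying temp-file problem for every query.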