I create my SQLite DB connection using in-memory mode:
connection = DriverManager.getConnection("jdbc:sqlite:file:" + dbName + "?mode=memory&cache=shared")
Then I enable WAL mode by executing "pragma journal_mode=WAL".
I found that memory usage keeps growing during insert/delete operations, and it is not released even when I call wal_checkpoint(TRUNCATE).
An in-memory database supports only the MEMORY or OFF journal modes. Using a journal file for a database that exists only in RAM and disappears when the program that uses it exits doesn't make any sense.
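This is easy to verify; a minimal sketch using Python's sqlite3 module (the behavior is the same through JDBC): asking an in-memory database for WAL leaves it in its memory journal mode.

```python
import sqlite3

# Open a purely in-memory database.
conn = sqlite3.connect(":memory:")

# Request WAL mode; for in-memory databases SQLite ignores the request
# and the pragma reports the journal mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "memory", not "wal"

conn.close()
```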
Related
I have an sqlite3 database that has one process inserting a single row every second or so. Is it possible to execute a long running query on this database in another process? When I try I'm often getting "database locked" errors. Do both the reader and the writer require exclusive access to the database? Will a very long read-only query cause the writes to fail?
Try using the WAL journal mode.
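For a file-backed database the switch is a single pragma, after which a long-running reader no longer blocks the writer (readers see a snapshot while the writer appends to the -wal file). A minimal sketch in Python, with an illustrative throwaway file path:

```python
import sqlite3
import tempfile
import os

# Illustrative throwaway file path.
path = os.path.join(tempfile.mkdtemp(), "example.db")

conn = sqlite3.connect(path)

# Switch to write-ahead logging; the setting is stored in the file
# and applies to every connection that opens it afterwards.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal"

conn.execute("CREATE TABLE events (msg TEXT)")
conn.commit()
conn.close()
```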
I am modifying a project that opens a sqlite database through sqlalchemy.
I am running multiple processes of this application, which seems to cause some (locking?) issues. The processes stall waiting for IO (state D in top).
The database is only queried / read, never written to. Unfortunately, the 5GB database file is on a NFS directory, so reading is not instantaneous.
Can I set the database to read-only? Will sqlite/sqlalchemy then avoid the locking/transaction mechanism? Setting isolation_level="READ UNCOMMITTED" seems not to have been enough. It's not clear to me from the documentation how and whether I should set SERIALIZABLE.
Alternatively, I guess I could copy the db to memory first, but it is not clear how to hand the memory database over to sqlalchemy. Should I copy on connect?
I was able to solve it by copying the database into memory. I replaced
ENGINE = create_engine('sqlite:///' + DATABASE_FILE)
with:
ENGINE = create_engine('sqlite:///')
import sqlite3
# Open the on-disk file read-only and copy it into the engine's in-memory database.
filedb = sqlite3.connect('file:' + DATABASE_FILE + '?mode=ro', uri=True)
print("loading database to memory ...")
filedb.backup(ENGINE.raw_connection().connection)
print("loading database to memory ... done")
filedb.close()
followed by
BASE = declarative_base()
SESSION = sessionmaker(bind=ENGINE)
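Stripped of SQLAlchemy, the copy-to-memory step is just the sqlite3.Connection.backup() API (available since Python 3.7); the file path and table here are illustrative stand-ins for DATABASE_FILE and its contents:

```python
import sqlite3
import tempfile
import os

# Create a small on-disk database to stand in for DATABASE_FILE.
path = os.path.join(tempfile.mkdtemp(), "data.db")
src = sqlite3.connect(path)
src.execute("CREATE TABLE items (name TEXT)")
src.execute("INSERT INTO items VALUES ('widget')")
src.commit()
src.close()

# Open the file read-only and copy its entire contents into memory.
filedb = sqlite3.connect("file:" + path + "?mode=ro", uri=True)
memdb = sqlite3.connect(":memory:")
filedb.backup(memdb)
filedb.close()

# All queries now run against RAM; the file (and NFS) is no longer touched.
print(memdb.execute("SELECT name FROM items").fetchone()[0])  # widget
memdb.close()
```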
The goal is to complete an online backup while other processes write to the database.
I connect to the sqlite database via the command line, and run
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it's unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to sqlite and databases in general.
Am I going about this the wrong way?
If the sqlite3 backup writes "Error: database is locked", then you should use
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? about default timeouts (spoiler: it's zero). With backups solved, you can also switch SQLite to WAL mode, which lets readers run concurrently with a single writer.
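The same fix from Python: the timeout parameter of sqlite3.connect() is the busy timeout in seconds, the equivalent of ".timeout 10000" (milliseconds) in the shell, and Connection.backup() performs the online backup. The paths and table below are illustrative:

```python
import sqlite3
import tempfile
import os

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "source.db")
dst_path = os.path.join(tmp, "backup.db")

# Seed a source database (stands in for source.db above).
# timeout=10 is the busy timeout in seconds: instead of failing
# immediately with "database is locked" when another process holds
# a lock, SQLite retries for up to 10 seconds.
src = sqlite3.connect(src_path, timeout=10)
src.execute("CREATE TABLE log (msg TEXT)")
src.execute("INSERT INTO log VALUES ('hello')")
src.commit()

# Online backup: copies the database page by page into backup.db.
dst = sqlite3.connect(dst_path)
src.backup(dst)

print(dst.execute("SELECT msg FROM log").fetchone()[0])  # hello
src.close()
dst.close()
```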
// Writing this as an answer so it is easier to google. Thanks, guys!
I aborted a VACUUM with Ctrl+C and deleted the journal file (I thought it was useless in that case). Now SQLite reports that the database is corrupt. I wonder whether it is possible to mark the DB as non-corrupt without recreating it by dumping it to SQL and loading it back.
While installing SQLite I get the following message:
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite>
What does "transient in-memory database" mean?
The documentation says:
An SQLite database is normally stored in a single ordinary disk file. However, in certain circumstances, the database might be stored in memory.
[…]
When this is done, no disk file is opened. Instead, a new database is created purely in memory. The database ceases to exist as soon as the database connection is closed.
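The quoted behavior is easy to observe; a minimal sketch:

```python
import sqlite3

# A ":memory:" database lives only as long as its connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (txt TEXT)")
conn.execute("INSERT INTO notes VALUES ('temporary')")
print(conn.execute("SELECT txt FROM notes").fetchone()[0])  # temporary
conn.close()

# Reconnecting to ":memory:" creates a brand-new, empty database;
# the previous table is gone.
conn2 = sqlite3.connect(":memory:")
tables = conn2.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)  # []
conn2.close()
```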