I am running SQLite3 version 3.6.12 and I have successfully ported it to my OS. The problem I am seeing is that when I execute the command "PRAGMA journal_mode = OFF" it returns "OFF", but I am still seeing *.db-journal files being created. It is critical for my project that these files are not created. When I step through the code, sqlite3PagerJournalMode returns PAGER_JOURNALMODE_OFF, so I am wondering whether setting journal_mode = OFF should still produce these files, or if there is something else that I am missing. Please help.
I also tried PRAGMA main.journal_mode = OFF and PRAGMA journal_mode = MEMORY, but the journal file is still being created.
Many pragmas have both temporary and permanent forms. Temporary forms affect only the current session for the duration of its lifetime. The permanent forms are stored in the database and affect every session.
When to use pragmas on sqlite?
Try setting exclusive access (PRAGMA locking_mode = EXCLUSIVE); sometimes the journal file is created for external locking.
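For what it's worth, on a stock build journal_mode = OFF does suppress the journal file; a minimal sketch with Python's built-in sqlite3 module (the database path is a throwaway, for illustration only):

```python
import os
import sqlite3
import tempfile

# Throwaway database file (hypothetical path, for illustration).
path = os.path.join(tempfile.mkdtemp(), "test.db")

conn = sqlite3.connect(path)
# Disable the rollback journal; the pragma echoes the new mode back.
mode = conn.execute("PRAGMA journal_mode = OFF").fetchone()[0]
print(mode)  # "off"

conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()

# With journaling disabled, no -journal file should remain on disk.
print(os.path.exists(path + "-journal"))  # False
```

If journal files still appear after this, the custom VFS of the port is the usual suspect rather than the pager itself.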
I have a very large sqlite database with a single table with two text columns (about 2.3 billion rows, 98GB) that I'm trying to create an index on using the sqlite3 cli tool on Ubuntu 20.04.
The command I'm trying to run is:
CREATE INDEX col1_col2_x ON tablename(col1 COLLATE NOCASE, col2);
The goal is to also create the opposite index to be able to do very fast case-insensitive searches on either column.
Every time I try, it runs for about an hour and then the process exits with just the message "Killed" and exit code 137, which I don't see listed in the sqlite3 documentation.
My first thought was running out of memory, so I tried setting the temp_store_directory pragma as well as the TEMP_DIR environment variable to the same directory as the database file, which has about 8TB of free space, so I'm not sure what's going wrong.
Is sqlite not meant for databases of this size? Creating the index before insert doesn't seem to be a viable option as it's looking like it's going to take months. I should also note that I was able to create the exact same indexes successfully with a 36GB table that has the same schema so I'm wondering if I'm running into an undocumented limitation?
I'm also open to other database solutions if sqlite isn't the right solution, although preliminary tests of postgres didn't seem to be any better.
Have you considered setting any of the various PRAGMA statements for the database?
Even with a 2 GB database of only 5 million rows, the following were helpful.
PRAGMA page_size = 4096;
PRAGMA cache_size = 10000;
PRAGMA synchronous = OFF;
PRAGMA auto_vacuum = FULL;
PRAGMA automatic_index = FALSE;
PRAGMA journal_mode = OFF;
page_size
Query or set the page size of the database. The page size must be a power of two between 512 and 65536 inclusive.
cache_size
Query or change the suggested maximum number of database disk pages that SQLite will hold in memory at once per open database file.
synchronous
With synchronous OFF (0), SQLite continues without syncing as soon as it has handed data off to the operating system.
auto_vacuum
When the auto-vacuum mode is 1 or "full", the freelist pages are moved to the end of the database file and the database file is truncated to remove the freelist pages at every transaction commit.
automatic_index
Set Automatic Indexes
journal_mode
The OFF journaling mode disables the rollback journal completely. No rollback journal is ever created and hence there is never a rollback journal to delete. The OFF journaling mode disables the atomic commit and rollback capabilities of SQLite.
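Taken together, the settings above can be applied before a bulk operation. A minimal sketch in Python's sqlite3 module (an in-memory database and tiny sample data, purely for illustration; the values are the ones suggested above, not universal recommendations):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Apply the suggested tuning pragmas before creating any tables
# (page_size and auto_vacuum must be set on a fresh database).
for pragma in (
    "PRAGMA page_size = 4096",
    "PRAGMA cache_size = 10000",
    "PRAGMA synchronous = OFF",
    "PRAGMA auto_vacuum = FULL",
    "PRAGMA automatic_index = FALSE",
    "PRAGMA journal_mode = OFF",
):
    conn.execute(pragma)

conn.execute("CREATE TABLE tablename(col1 TEXT, col2 TEXT)")
conn.executemany(
    "INSERT INTO tablename VALUES (?, ?)",
    [("Alpha", "Beta"), ("alpha", "beta")],
)
# Case-insensitive index, as in the question (note NOCASE is one word).
conn.execute("CREATE INDEX col1_x ON tablename(col1 COLLATE NOCASE)")

rows = conn.execute(
    "SELECT COUNT(*) FROM tablename WHERE col1 = 'ALPHA' COLLATE NOCASE"
).fetchone()[0]
print(rows)  # both rows match case-insensitively
```

Note that synchronous = OFF and journal_mode = OFF trade durability for speed; they are reasonable for a one-off bulk build you can restart, not for live data.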
The goal is to complete an online backup while other processes write to the database.
I connect to the sqlite database via the command line, and run
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it's unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to sqlite and databases in general.
Am I going about this the wrong way?
If the sqlite3 backup writes "Error: database is locked", then you should use
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? about default timeouts (spoiler: it's zero). With backups solved, you can also switch SQLite to WAL mode, which lets readers proceed concurrently with a writer.
//writing this as an answer so it would be easier to google this, thanks guys!
When FDConnection uses the SQLite driver, it has a LockingMode property that is set to Exclusive by default. However, this does not seem to work as expected.
When running the below code, an error does not occur when opening the second connection:
FDConnection1.Params.Database := DB_PATH;
FDConnection1.Open();
FDQuery1.SQL.Text := 'update admin set last_write = 2';
FDQuery1.ExecSQL;
FDConnection2.Params.Database := DB_PATH;
FDConnection2.Open();
Specifically setting the SQLite pragma for Exclusive locking mode also does not seem to work:
FDConnection1.Params.Database := DB_PATH;
FDConnection1.Open();
FDQuery1.SQL.Text := 'PRAGMA locking_mode = EXCLUSIVE';
FDQuery1.ExecSQL;
FDQuery1.SQL.Text := 'update admin set last_write = 2';
FDQuery1.ExecSQL;
FDConnection2.Params.Database := DB_PATH;
FDConnection2.Open();
Again, no error on opening the second connection.
How does one effect an Exclusive locking mode when opening a SQLite database? Why does setting the PRAGMA manually not work?
EDIT
After further testing, I see that opening a second connection with a different component set e.g. UniDAC or ZeosLib, does in fact result in an error.
However, no error occurs when opening a second FDConnection or even writing through that connection. It seems like FireDAC connections are in some way shared no matter what.
I think you are misunderstanding the meaning of the EXCLUSIVE lock.
from the SQLite 3 documentation it is:
An EXCLUSIVE lock is needed in order to write to the database file.
Only one EXCLUSIVE lock is allowed on the file and no other locks of
any kind are allowed to coexist with an EXCLUSIVE lock. In order to
maximize concurrency, SQLite works to minimize the amount of time that
EXCLUSIVE locks are held.
This lock is requested only when trying to write to the database file (see: 5.0 Writing to a database file).
To confirm this I made a simple test with SQLiteStudio and a simple Delphi application: I instructed SQLiteStudio to insert one million records while trying to add one with the Delphi app. I always got a FireDAC error: database is locked.
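The same behaviour can be reproduced outside Delphi. A minimal sketch with Python's sqlite3 module (a throwaway database path, for illustration), showing that the EXCLUSIVE lock is only taken, and then held, once the first connection actually writes; merely opening a second connection never fails:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "locktest.db")

conn1 = sqlite3.connect(path)
conn1.execute("PRAGMA locking_mode = EXCLUSIVE")
# The pragma alone acquires nothing; the first write transaction
# takes the exclusive lock and, in this mode, never releases it.
conn1.execute("CREATE TABLE admin(last_write INTEGER)")
conn1.commit()

# A second connection can still be opened without error...
conn2 = sqlite3.connect(path, timeout=0.2)
try:
    # ...but writing through it fails: conn1 still holds the lock.
    conn2.execute("INSERT INTO admin VALUES (1)")
    conn2.commit()
    locked = False
except sqlite3.OperationalError:
    locked = True
print(locked)  # True: "database is locked"
```

So the "no error on Open" observation is expected; the lock conflict only surfaces on the first actual read or write through the second connection.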
I'm trying to backup my sqlite database from a cronjob that runs in 5 minute intervals. The database is "live", so there are running queries at the time I want to perform the backup.
I want to be sure, that the database is in good shape when I backup it so that I can rely on the backup.
My current strategy (in pseudocode):
backup() {
    # Try to acquire the lock for 2 seconds, then check the database integrity.
    if [ "$(sqlite3 mydb.sqlite '.timeout 2000' 'PRAGMA integrity_check;')" = "ok" ]; then
        # Perform the backup to backup.sqlite.
        if sqlite3 mydb.sqlite '.timeout 2000' '.backup backup.sqlite'; then
            # Check the consistency of the backup database.
            if [ "$(sqlite3 backup.sqlite 'PRAGMA integrity_check;')" = "ok" ]; then
                return 0
            fi
        fi
    fi
    return 1
}
Now, there are some problems with my strategy:
If the live database is locked, I run into problems because I cannot perform the backup then. Maybe a transaction could help there?
If something goes wrong between the PRAGMA integrity_check; and the backup, I'm f*cked.
Any ideas? And by the way, what is the difference between sqlite3's .backup and a good old cp mydb.sqlite mybackup.sqlite?
[edit] I'm running nodejs on an embedded system, so if someone suggests the sqlite online backup api with some ruby wrapper - no chance ;(
If you want to back up while queries are running, you need to use the backup API. The documentation has a worked-out example of an online backup of a running database (example 2). I don't understand the Ruby reference; you can integrate it into your program or do it as a small C program running beside the real application -- I've done both.
An explicit integrity_check on the backup is overkill. The backup API guarantees that the destination database is consistent and up-to-date. (The flip side of that coin is that if you update the DB too often while a backup is running, the backup might never finish.)
It is possible to use 'cp' to make a backup, but not of a running database. You need to have an exclusive lock for the entire duration of the backup, so it's not really 'live'. You also need to be careful to copy all of sqlite's temp files as well as the main database.
I'd expect the sqlite3 ".backup" command to use the backup API.
If you cannot use the backup API, you must use another mechanism to prevent the database file from being modified while you're copying it.
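For reference, Python's sqlite3 module exposes the same online backup API via Connection.backup; a minimal sketch (throwaway paths and sample data, for illustration):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "mydb.sqlite")
dst_path = os.path.join(tmp, "backup.sqlite")

src = sqlite3.connect(src_path)
src.execute("CREATE TABLE data(x INTEGER)")
src.executemany("INSERT INTO data VALUES (?)", [(i,) for i in range(100)])
src.commit()

# The backup API copies pages in steps; the pages argument controls the
# step size, so other connections can make progress between steps.
dst = sqlite3.connect(dst_path)
src.backup(dst, pages=16)

count = dst.execute("SELECT COUNT(*) FROM data").fetchone()[0]
print(count)  # 100 rows copied
```

The same stepping behaviour is what the C-level sqlite3_backup_step loop in the documentation's example 2 implements.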
Start a transaction with BEGIN IMMEDIATE:
After a BEGIN IMMEDIATE, no other database connection will be able to write to the database or do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes can continue to read from the database, however.
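A sketch of that approach (Python and shutil here, purely for illustration): take the write lock, copy the file, then release it.

```python
import os
import shutil
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "mydb.sqlite")
copy = os.path.join(tmp, "copy.sqlite")

# isolation_level=None gives manual transaction control in Python.
conn = sqlite3.connect(db, isolation_level=None)
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (42)")

# Block other writers (readers may continue) while we copy the file.
conn.execute("BEGIN IMMEDIATE")
shutil.copyfile(db, copy)
conn.execute("COMMIT")

check = sqlite3.connect(copy)
value = check.execute("SELECT x FROM t").fetchone()[0]
print(value)  # 42: the copy is a consistent snapshot
```

Because no write can commit while the IMMEDIATE transaction is open, the copied file is a consistent snapshot; the trade-off is that writers are stalled for the full duration of the copy.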
Regarding the journal_mode = OFF question: compile your application with this option macro:
SQLITE_ENABLE_ATOMIC_WRITE
If this C-preprocessor macro is defined and if the xDeviceCharacteristics method of sqlite3_io_methods object for a database file reports (via one of the SQLITE_IOCAP_ATOMIC bits) that the filesystem supports atomic writes and if a transaction involves a change to only a single page of the database file, then the transaction commits with just a single write request of a single page of the database and no rollback journal is created or written.
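A build sketch, assuming the SQLite amalgamation (sqlite3.c) is compiled into the application; the paths and flags are illustrative:

```sh
# Define the macro when compiling the amalgamation.
gcc -DSQLITE_ENABLE_ATOMIC_WRITE -c sqlite3.c -o sqlite3.o
```

Note that the macro alone is not enough: as quoted above, the VFS's xDeviceCharacteristics must report one of the SQLITE_IOCAP_ATOMIC bits, and only transactions touching a single page skip the journal.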