BerkeleyDB: safely removing files from an environment - berkeley-db

I use BerkeleyDB in a project.
I have some environments composed of several files. Sometimes I need to remove some of these files.
When I remove a file through the filesystem, opening the environment raises a "No such file or directory" error.
Is there a way to safely remove a file from a BerkeleyDB environment?

To remove a database, you need to be absolutely certain that no references to the database exist in the environment. The most foolproof way to do this is as follows:
1. Use db->remove() to remove the database from inside your application.
2. Use dbenv->txn_checkpoint() to flush all changes, write a checkpoint record, and then flush the log.
3. Use dbenv->txn_checkpoint() again with the DB_FORCE flag to push one more checkpoint through, ensuring that when the environment is recovered it doesn't attempt to recover databases that predated the last checkpoint.
Step 3 sounds insane, I know. And maybe it isn't needed any more. But it certainly was needed in the not-too-distant past. Certainly steps 1 and 2 are needed. You'll need to experiment to see if step 3 is necessary for your application.
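A minimal sketch of those steps with the C API might look like the following, assuming an already-opened DB_ENV handle. The dbremove() call through the environment handle is the equivalent of db->remove() on an unopened DB handle, and the file name passed in is a placeholder:

```c
#include <db.h>

/* Sketch: remove a database file from an open environment, then
 * checkpoint twice as described above.  Error handling is minimal. */
int remove_database(DB_ENV *dbenv, const char *file)
{
    int ret;

    /* Step 1: remove the database through the environment handle. */
    if ((ret = dbenv->dbremove(dbenv, NULL, file, NULL, 0)) != 0) {
        dbenv->err(dbenv, ret, "dbremove: %s", file);
        return ret;
    }

    /* Step 2: flush dirty pages, write a checkpoint record, flush the log. */
    if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, 0)) != 0) {
        dbenv->err(dbenv, ret, "txn_checkpoint");
        return ret;
    }

    /* Step 3: force one more checkpoint so recovery never looks at
     * log records that predate the removal. */
    if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE)) != 0) {
        dbenv->err(dbenv, ret, "txn_checkpoint(DB_FORCE)");
        return ret;
    }

    return 0;
}
```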

Related

SQLite: how to totally clear the shared-cache?

I'm experimenting with enabling the shared cache in a SQLite implementation I'm working on. In the actual app everything works fine, but my unit tests are now failing with "disk I/O error"s. I'm assuming this is because the shared cache is making assumptions about the file that are no longer valid once it's been deleted.
How can I clear out this shared cache data? I've tried running sqlite3_shutdown() followed by sqlite3_initialize() but the problems persist.
I've actually discovered that some of my tests weren't closing connections properly, and that was the source of my problem - shared-cache was just highlighting it (though I'm not 100% sure why).
That said, in my journey I did manage to find a way to control where SQLite puts its temporary files. The sqlite3_temp_directory global variable lets you define it - by default it's blank and defers to the OS, I think.
If you set that directory you can manually clear out any files whenever you wish.
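For what it's worth, a minimal sketch of setting that variable from C; the directory path is just a placeholder, and the documentation suggests assigning memory obtained from sqlite3_mprintf()/sqlite3_malloc() so SQLite can release it safely:

```c
#include <sqlite3.h>

/* Point SQLite's temporary files at a directory you control so you can
 * clean them up yourself between test runs. */
void set_sqlite_temp_dir(void)
{
    /* "/var/tmp/sqlite-tests" is a placeholder path. */
    sqlite3_temp_directory = sqlite3_mprintf("%s", "/var/tmp/sqlite-tests");
}
```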

SQLite - make true read only database

I would like to open and read an SQLite .db file, read-only. I guarantee that nobody else will touch it during this time (perhaps, except for read only).
What I need from SQLite3 in return, is that it will write nothing to disk, ever (specifically - none of those described here), and not use any file-system locks on the file.
Is that too much to ask?
If you are running under some Unix, you can use the unix-none VFS to disable all locking.
In Windows, SQLite always uses locks.
If you really want to avoid locks, you can either write your own VFS, or override the locking system calls with xSetSystemCall.
If SQLite needs a temporary file, you cannot prevent it from creating one.
However, you can configure it to create them in memory instead of on disk.
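A rough sketch of both suggestions in C, assuming a Unix build that ships the "unix-none" VFS; the filename is a placeholder:

```c
#include <sqlite3.h>

/* Open a database read-only with the "unix-none" VFS (no file locking)
 * and keep any temporary tables/indices in memory rather than on disk. */
int open_locked_down(sqlite3 **db)
{
    int rc = sqlite3_open_v2("readonly.db", db,
                             SQLITE_OPEN_READONLY,
                             "unix-none");   /* VFS without locks */
    if (rc != SQLITE_OK)
        return rc;

    /* Avoid on-disk temporary files for this connection. */
    return sqlite3_exec(*db, "PRAGMA temp_store = MEMORY;",
                        NULL, NULL, NULL);
}
```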
The VFS does not have a Lock method that can be injected, so there is no direct way to inject dummy LockFile and LockFileEx methods.
These methods are referenced inside sqlite3_io_methods (winIoMethod) and don't seem to be easy to modify at runtime without altering the SQLite source code.
So, if I understand correctly, VFS is not the right direction? Or is it?
Maybe use a read-only user? I don't know if such a role exists in SQLite.

Can I get a callback / do I know when SQLite has created write-ahead log files? I want to chmod them

I have an elevated process and I want to make sure the SQLite files that it creates are readable by other processes. For some reason umask doesn't seem to do what I want (set permissions of sqlite file created by process).
I'm using write-ahead logging, so -wal and -shm files are created in addition to the database file. I want all 3 to be chmodded correctly.
I wonder if it's possible to get in after the SQLite file is created and chmod it.
Possible approaches:
touch all 3 files before SQLite tries to create them, then chmod and hope the mask stays the same
Intercept when the files are created and chmod them.
Work out how to get umask to work for the process.
Mystery option four.
What's the best way to go?
Questions for approaches:
Will SQLite be OK with this?
Do we know when all 3 files are created? Is there some kind of callback I can give a function pointer to? Do we know if the same wal and shm files are around forever? Or are they deleted and re-created?
You can touch the database file before opening it. (When you use the sqlite3 command-line tool to open a new file, but do nothing but begin; and commit;, SQLite itself will create a zero-sized file.)
If you want to intercept file operations, you can register your own VFS.
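As a hedged illustration of that idea, a thin wrapper around the default VFS could chmod -wal and journal files right after they are created. The -shm file is handled through the shared-memory methods rather than xOpen, so it is not covered here; the VFS name and the 0664 mode are placeholders:

```c
#include <sys/stat.h>
#include <sqlite3.h>

static sqlite3_vfs  chmod_vfs;
static sqlite3_vfs *base_vfs;

/* xOpen wrapper: delegate to the default VFS, then fix permissions on
 * newly created -wal and journal files. */
static int chmodOpen(sqlite3_vfs *vfs, const char *zName,
                     sqlite3_file *file, int flags, int *pOutFlags)
{
    int rc;
    (void)vfs;
    rc = base_vfs->xOpen(base_vfs, zName, file, flags, pOutFlags);
    if (rc == SQLITE_OK && zName != NULL &&
        (flags & (SQLITE_OPEN_WAL | SQLITE_OPEN_MAIN_JOURNAL)))
        chmod(zName, 0664);   /* placeholder permission bits */
    return rc;
}

int register_chmod_vfs(void)
{
    base_vfs  = sqlite3_vfs_find(NULL);          /* the default VFS */
    chmod_vfs = *base_vfs;                       /* copy all methods */
    chmod_vfs.zName = "chmod-vfs";               /* placeholder name */
    chmod_vfs.xOpen = chmodOpen;
    return sqlite3_vfs_register(&chmod_vfs, 0);  /* don't make it default */
}
```

A connection would then be opened with sqlite3_open_v2(..., "chmod-vfs") so that this wrapper is used for that database.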
The -wal and -shm files are created dynamically, but SQLite will give them the same permission bits as the main database file. The comments for robust_open() in os_unix.c say:
If the file creation mode "m" is 0 then set it to the default for
SQLite. The default is SQLITE_DEFAULT_FILE_PERMISSIONS (normally
0644) as modified by the system umask. If m is not 0, then
make the file creation mode be exactly m ignoring the umask.
The m parameter will be non-zero only when creating -wal, -journal,
and -shm files. We want those files to have exactly the same
permissions as their original database, unadulterated by the umask.
In that way, if a database file is -rw-rw-rw or -rw-rw-r-, and a
transaction crashes and leaves behind hot journals, then any
process that is able to write to the database will also be able to
recover the hot journals.
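So the simplest route is usually to pre-create (or chmod) the main database file with the mode you want before opening it, and let the -wal and -shm files inherit those bits. A rough sketch, with a placeholder filename and mode:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <sqlite3.h>

/* Create the database file with the desired permissions before SQLite
 * opens it; the -wal and -shm files will then get the same bits. */
int open_world_readable(sqlite3 **db)
{
    int fd = open("shared.db", O_RDWR | O_CREAT, 0664);
    if (fd >= 0) {
        fchmod(fd, 0664);   /* force the exact mode, ignoring the umask */
        close(fd);
    }
    return sqlite3_open_v2("shared.db", db,
                           SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                           NULL);
}
```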

Write-Ahead Logging and Read-Only mode compatible in SQLite3?

Open read-only
I have a sqlite3 file on a filesystem that belongs to a different user than the one running the reading process. I want the reading process to be able to read the file in read-only mode, so I'm passing SQLITE_OPEN_READONLY. I would expect that to work. Surely the idea is that read-only mode works on files that we don't want to write to?
When I prepare my first statement I get
unable to open database file
Similarly if I run the sqlite3 command line tool I get the same result unless I sudo. Which seems to confirm to me that the issue is writeability rather than anything else.
Journal files
The answer to this question seems to suggest that if there are journal files around then read-only access isn't possible.
Why are there journal files? Because another process is writing to the file while my user process is trying to open it read-only. The writer is using Write-Ahead Logging, which produces two extra files, -shm and -wal. True enough, if I stop the writing process and remove those files, my user process can open the database in read-only mode.
Incompatibility?
So I have two situations:
If the file belongs to the writing process and also to the read-only process, write-ahead logging lets process A write and process B read.
If the file belongs to the writing process but does not belong to the read-only process, the read-only process is blocked from opening read-only.
How do I achieve both of these? To spell it out, I want:
Writing process owns database
Read-only process does not own database
Read-only process cannot write to database
Write-ahead logging is enabled on database
Seems like a simple set of requirements, but I can't see an obvious solution.
**EDIT:** Going by this documentation, it looks like this isn't possible. Can you suggest any alternative ways to achieve the above?
Yes, WAL-journaled databases cannot be opened read-only, explicitly or otherwise (i.e. in the case where the database file is read-only to the process).
If you require that the read-only process absolutely not be allowed to modify the database file, then the only thing that comes to mind is to have the writing process maintain an additional, non-WAL-journaled copy of the database.
Bottom line: to the best of my knowledge, WAL and read-only can't be done.
I think what the documentation is saying is that the WAL database itself may not reside on read-only media, which does not necessarily mean you cannot use SQLITE_OPEN_READONLY. In fact, I have successfully opened two connections to a WAL sqlite database, one read-write and one with SQLITE_OPEN_READONLY. These work just fine. I tested an INSERT query on the read-only connection and the statement correctly returned an error that the database is read-only.
Just make sure that the database is stored on media with write access, as a -shm file needs to be created and maintained, so even a "read-only" connection may actually physically write something to disk - which doesn't mean that it can modify data using SQL.
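A small sketch of that experiment, with a placeholder filename:

```c
#include <sqlite3.h>

/* One read-write and one read-only connection to the same WAL database.
 * The directory must be writable so the -shm file can be created, even
 * for the read-only connection. */
int open_wal_pair(sqlite3 **writer, sqlite3 **reader)
{
    int rc = sqlite3_open_v2("shared.db", writer,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                             NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* Enable write-ahead logging on the writer. */
    rc = sqlite3_exec(*writer, "PRAGMA journal_mode=WAL;",
                      NULL, NULL, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* INSERTs on this handle fail with SQLITE_READONLY; SELECTs work. */
    return sqlite3_open_v2("shared.db", reader,
                           SQLITE_OPEN_READONLY, NULL);
}
```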

why do you copy the SQLite DB before using it?

From everything I have read so far, it seems as though you copy the DB from assets to a "working directory" before it is used. If I have an existing SQLite DB, I put it in assets, and then I have to copy it before it is used.
Does anyone know why this is the case?
I can see a possible use for that, where one doesn't want to accidentally corrupt the database during a write. But in that case, one would have to move the database back when done working on it; otherwise, the next time the program is run it will start from the "default" database state.
That might be another use case - you might always want to start program execution with a known data state. The previous state might have been set by an external application.
Thanks everyone for your ideas.
I think what I might have figured out is that the install cannot put a DB directly into the /data directory.
In Eclipse there is no /data directory, which is where most of the discussions I have read say to put it.
This is one of the several I found:
http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-4/#comment-37008
