Temporary in-memory database in SQLite

Is it somehow possible to create an in-memory database in SQLite and then destroy it just by some query?
I need to do this for unit testing my db layer. So far I've only worked by creating a normal SQLite db file and deleting it after all tests, but doing it all in memory would be much better.
So is it possible to instantiate a database only in memory, without writing anything to disk?
I can't use just transactions, because I want to create a whole new database.

Create it with the filename ":memory:": In-Memory Databases.
It'll cease to exist as soon as the connection to it is closed.
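For example, with Python's sqlite3 module (assuming that's roughly what your db layer looks like; the table and column names are placeholders), a unit test can open a fresh in-memory database and let it vanish on close:

import sqlite3

# Each connection to ":memory:" gets its own private, empty database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()
print(conn.execute("SELECT name FROM users").fetchall())  # [('alice',)]

# Closing the connection destroys the database; nothing was written to disk.
conn.close()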

As an alternative to in-memory databases, you can create a SQLite temporary database by using an empty string for the filename. It will be deleted when the connection is closed. The advantage over an in-memory database is that your database is not limited to available memory.
Alternatively, you can create your database in a temp file and let the operating system clean it up. This has the advantage of being accessible for inspection.
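A minimal sketch of the empty-filename variant in Python (the same idea applies to any SQLite driver):

import sqlite3

# An empty filename asks SQLite for a private temporary database. It is
# spilled to a temp file only if it outgrows the page cache, and that file
# is deleted automatically when the connection is closed.
conn = sqlite3.connect("")
conn.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload BLOB)")
conn.close()  # the temporary database (and any backing file) is gone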

I'd suggest mounting a tmpfs filesystem somewhere (RAM only filesystem) and using that for your unit tests.
Instantiate DB files as normal, then blow them away using rm; nothing ever hits a physical disk.
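A sketch of that approach in Python, assuming a Linux tmpfs mounted at /dev/shm (adjust the path to wherever your RAM-only filesystem lives):

import os
import sqlite3
import tempfile

# The database file looks like an ordinary on-disk file to SQLite,
# but the backing filesystem is RAM only.
with tempfile.TemporaryDirectory(dir="/dev/shm") as tmpdir:
    db_path = os.path.join(tmpdir, "test.db")
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.close()
# Leaving the with-block is the "rm" step: the directory and db are removed.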
(EDIT: Nice - somebody beat me to a correct answer ;) Leaving this here as another option regardless)

I suggest using the lmDisk toolkit.
It's a toolkit for mounting part of RAM or an image file as a normal disk; you can copy your project (or just your db) there.
I've tried it to process raw data and create a db for a game AI.

Related

SQLite - make a true read-only database

I would like to open and read an SQLite .db file, read-only. I guarantee that nobody else will touch it during this time (perhaps, except for read only).
What I need from SQLite3 in return is that it writes nothing to disk, ever (specifically, none of those described here), and does not take any file-system locks on the file.
Is that too much to ask?
If you are running under some Unix, you can use the unix-none VFS to disable all locking.
On Windows, SQLite always uses locks.
If you really want to avoid locks, you can either write your own VFS, or override the locking system calls with xSetSystemCall.
If SQLite needs a temporary file, you cannot prevent it from creating one.
However, you can configure it to create them in memory instead of on disk.
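As a rough illustration in Python (the filename and table are placeholders, and the unix-none VFS exists only in Unix builds), a read-only URI open combined with an in-memory temp store looks like this:

import sqlite3

# Open read-only, with locking disabled via the unix-none VFS (Unix only).
conn = sqlite3.connect("file:mydb.sqlite?mode=ro&vfs=unix-none", uri=True)

# Keep any temporary files SQLite still needs in memory rather than on disk.
conn.execute("PRAGMA temp_store = MEMORY")

rows = conn.execute("SELECT * FROM some_table").fetchall()
conn.close()

If you can also promise the file will never change while it is open, the immutable=1 URI parameter goes further and disables locking and change detection entirely.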
The VFS does not have a Lock method that can be injected, so there is no direct way to inject dummy LockFile and LockFileEx methods.
These methods are referenced inside sqlite3_io_methods (winIoMethod) and don't seem to be easy to modify at runtime without altering the SQLite source code.
So, if I understand correctly, VFS is not the right direction? Or is it?
Maybe use a read-only user? I don't know if such a role exists in SQLite.

copying sqlite3 db while being read

I have a script that was reading data from a sqlite3 database, and while this script was running I made a copy of the database with cp mydatabase mydatabase.bak. Will this affect either the script that was reading from the db or the copy of the db? I had a look at the sqlite documentation here [0] but I didn't put a lock on the db as per the instructions.
[0] http://www.sqlite.org/backup.html
Copying the file should be analogous to another application reading the database, so it shouldn't be a problem. Multiple applications can safely read the database file at the same time (per the SQLite FAQ).
As another point, consider that you can read from a database even if the database and its directory both lack write permissions. Since in that scenario there's no way for the reading application to be modifying the database file or creating a temp file that needs to be incorporated into it, there's no way for any of a number of simultaneously reading applications to affect what any of the others see.
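If you want a copy that is guaranteed consistent even while writers are active, the online backup API the question links to is exposed directly in Python's sqlite3 module (filenames here just mirror the ones in the question):

import sqlite3

# Take a consistent snapshot using SQLite's online backup API instead of cp.
# This is safe even if another process writes to the database mid-copy.
src = sqlite3.connect("mydatabase")
dst = sqlite3.connect("mydatabase.bak")
src.backup(dst)   # Connection.backup() is available in Python 3.7+
dst.close()
src.close()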

Can you use rsync to replicate block changes in a Berkeley DB file?

I have a Berkeley DB file that is quite large (~1GB) and I'd like to replicate small changes that occur (weekly) to an alternate location without having the entire file be re-written at the target location.
Does rsync properly handle Berkeley DBs with its block-level algorithm?
Does anyone have an alternative to only have changes be written to the Berkeley DBs files that are targets of replication?
Thanks!
Rsync handles files perfectly well at the block level. The problems with databases can come into play in a number of ways:
Caching
File locking
Synchronization/transaction logs
If you can ensure that no applications have the Berkeley DB open for the duration of the rsync, then rsync should work fine and offer a significant advantage over copying the entire file. However, depending on the configuration and version of BDB, there are also transaction logs. You probably want to investigate the same mechanisms used for backups and hot backups. They also have a "snapshot" feature that might better facilitate a working solution.
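For instance, a minimal sketch of syncing the whole environment directory (paths are placeholders, and it assumes writers are quiesced for the duration):

import subprocess

# rsync's delta algorithm transfers only the changed blocks of large files.
# Nothing should have the Berkeley DB environment open while this runs.
SRC = "/var/lib/mydb/"                 # placeholder BDB environment directory
DEST = "backuphost:/var/lib/mydb/"     # placeholder replication target

subprocess.run(["rsync", "-av", "--delete", SRC, DEST], check=True)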
You should probably read this carefully: http://www.cs.sunysb.edu/documentation/BerkeleyDB/ref/transapp/archival.html
I'd also recommend you consider using replication as an alternative solution that is blessed by BDB https://idlebox.net/2010/apidocs/db-5.1.19.zip/programmer_reference/rep.html
They now call this High Availability -> http://www.oracle.com/technetwork/database/berkeleydb/overview/high-availability-099050.html

How enable iCloud support for sqlite?

I want to provide iCloud support for my wrapper around sqlite. It does not use Core Data.
I wonder how to enable iCloud for it. The database content changes all the time (it's for invoicing). Also, if it is possible to have some kind of versioning, that would be great.
Is there any sample I can use to do this?
The short answer is no, you would need to use Core Data as you suspected. Apple has stated that sqlite is unsupported.
Edit: Check out the section on iCloud that's now in the iOS Application Programming Guide under Using iCloud in Conjunction with Databases
Using iCloud with a SQLite database is possible only if your app uses Core Data to manage that database. Accessing live database files in iCloud using the SQLite interfaces is not supported and will likely corrupt your database. However, you can create a Core Data store based on SQLite as long as you follow a few extra steps when setting up your Core Data structures. You can also continue to use other types of Core Data stores—that is, stores not based on SQLite—without any special modifications.
You can't just put the SQLite database in the iCloud container, because it might get corrupted. (As you modify an SQLite DB, temporary files are created and renamed, so if the sync process starts copying those files, you'll get a corrupt database.)
If you don't want to move to Core Data, you can do what Core Data does: store your database in your document folder, and store a transaction log in the iCloud container. Every time you change the database, you add those changes to a log file, so you can play them back and make equivalent changes on other devices.
This gets pretty complicated: aside from getting the log/replay logic right, you'll want to coalesce redundant changes and periodically collapse the log into a complete copy of the database.
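A very rough sketch of the log-and-replay idea in Python (the log format and function names are made up purely for illustration; a real app would also need ordering and conflict handling):

import json
import sqlite3

LOG_PATH = "changes.log"  # in a real app this file would live in the iCloud container

def apply_and_log(conn, sql, params=()):
    # Execute a change locally and append it to the replayable change log.
    conn.execute(sql, params)
    conn.commit()
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"sql": sql, "params": list(params)}) + "\n")

def replay(conn, log_path):
    # Replay a change log received from another device.
    with open(log_path) as log:
        for line in log:
            entry = json.loads(line)
            conn.execute(entry["sql"], entry["params"])
    conn.commit()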
You might have an easier time developing a solution if you can exploit knowledge of your application (Core Data has to solve the problem in the general case). For example, you could save invoices as separate files in the cloud container (text, Property List, XML, JSON, whatever), writing them out as the database changes and only importing them when the system tells you they were created or changed.
In summary, your choice is either to migrate to Core Data or write a sync solution yourself. Which one is best depends on the particulars of your application.

Is there any value in including SQLite in VCSs?

Having an argument with my team. We are developing an application using SQLite; some want to add it to the repo (Git) and some don't. Previously, with RDBMS systems, there has been no perceived benefit to using VCS on the DB. However, SQLite is a self-contained file with no external dependencies, so I assume, even though it is binary, that a commit of the project code + the SQLite file will give an accurate snapshot of the state of play at that point.
I also assume that a branch and merge would work as well.
Has anyone actually done this and if so does it work?
You'd get more benefit from Git's versioning facilities if you stored a dump of the SQLite database (i.e. the SQL commands required to recreate it) rather than the database file itself. That way you could look at the history of the dump file and see tables or data being added, etc.
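For example, Python's sqlite3 module can produce such a dump with iterdump(), and the resulting .sql file is something Git can diff and merge meaningfully ("app.db" is a placeholder name):

import sqlite3

# Write a textual dump of the database so the VCS sees readable SQL.
conn = sqlite3.connect("app.db")
with open("app.sql", "w") as dump_file:
    for statement in conn.iterdump():   # yields CREATE/INSERT statements
        dump_file.write(statement + "\n")
conn.close()

# To restore: sqlite3.connect("fresh.db").executescript(open("app.sql").read())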
Generally speaking, it's preferable to include the full set of dependencies in a VCS repository. This makes your life a whole lot simpler.
If you're after versioning DB schema, check out Wizardby.
