How to attach :memory: databases that have already been created? - sqlite

I'm only using in-memory databases. Now I am creating a file, and I want to copy one of the many tables into this file once it is attached. However, I can't figure out how to attach an already-instantiated in-memory database.
Passing ':memory:' to the ATTACH statement creates a new database, since SQLite would have no way of knowing which in-memory database to attach if more than one is open. Is there a way to attach by, say, the C pointer of the database to be attached?
This would also be useful if I already have two disk databases open and do not want the ATTACH command to implicitly call open a third time. If this is not possible, are there other ways, preferably without creating temporary files?

Just attach the file DB to the in-memory DB.
If you really want to do this the other way around, you must enable URI file names and use them; the documentation says:
If the unadorned ":memory:" name is used to specify the in-memory database, then that database always has a private cache and is only visible to the database connection that originally opened it. However, the same in-memory database can be opened by two or more database connections as follows:
ATTACH DATABASE 'file::memory:?cache=shared' AS aux1;
This allows separate database connections to share the same in-memory database. Of course, all database connections sharing the in-memory database need to be in the same process.
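The first suggestion (attach the file DB to the in-memory DB, then copy the table across) can be sketched as follows. The question is language-agnostic, so this uses Python's built-in sqlite3 module for brevity; the table and file names are made up for illustration.

```python
import os
import sqlite3
import tempfile

# A sketch of the suggested approach: keep working in the in-memory
# database, ATTACH the file database, and copy one table across with
# a single CREATE TABLE ... AS SELECT.
db_path = os.path.join(tempfile.mkdtemp(), "snapshot.db")  # stand-in path

mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE readings (ts INTEGER, value REAL)")
mem.executemany("INSERT INTO readings VALUES (?, ?)", [(1, 0.5), (2, 0.75)])

# Attach the (new) file database and copy the table into it.
mem.execute("ATTACH DATABASE ? AS disk", (db_path,))
mem.execute("CREATE TABLE disk.readings AS SELECT * FROM main.readings")
mem.commit()
mem.execute("DETACH DATABASE disk")
mem.close()

# The copy now persists in the file on its own.
check = sqlite3.connect(db_path)
print(check.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 2
```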

Related

Store live data in sqlite database to share with other processes but not persist on disk

I am using SQLite with SQLAlchemy on an embedded Linux system (think Raspberry Pi) to do some data logging.
The DB has a schema for configuration of sensors and adc boards. It has a measurement table which has columns for timestamp, value, adc_board_id (FK), channel_id (FK).
Measurements are saved to the DB every minute.
However I want to update an LCD with live measurements (every second).
The LCD app is a separate process and gets data to display from the DB.
Is there a way to have a table (a special table) just for live measurements that is only RAM based?
I want it accessible via the DB (like shared memory) but never to persist (i.e. never written to disk).
NOTE: I want most of the tables to persist on disk, but specify one (or more) special tables that never get written to disk.
Would that even work with separate processes accessing the DB?
The table will be written by only one process, but may be read by many processes, possibly via a View (once I learn how to do that with SQLAlchemy).
I did see some SQLite documentation that states:
The database in which the new table is created. Tables may be created in the main database, the temp database, or in any attached database.
Maybe using the temp database could do the trick?
Would it be accessible via other processes?
Alternatively, an attached database that lives on a ramdisk (/dev/shm/...)?
Are there any other techniques with DBs/SQLite/SQLAlchemy to achieve the same outcome?
Storing an SQLite DB in memory, while sharing with another process
You can do this with a "ramdisk". This works just like a normal file-system, but uses RAM as physical storage. You can create one using the tmpfs filesystem:
mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
After that you can just store your SQLite database in /mnt/ramdisk and access it with another application.
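A minimal sketch of the ramdisk approach with two connections (standing in for two processes). On the target system the path would be something like /mnt/ramdisk/live.db or /dev/shm/live.db; here a temporary directory stands in so the example runs anywhere, and the table layout is invented for illustration.

```python
import os
import sqlite3
import tempfile

# Pretend this directory is the tmpfs mount point (/mnt/ramdisk, /dev/shm, ...).
ramdisk = tempfile.mkdtemp()
db_path = os.path.join(ramdisk, "live.db")

# Writer process/connection: logs the latest measurement.
writer = sqlite3.connect(db_path)
writer.execute("CREATE TABLE IF NOT EXISTS live (channel INTEGER, value REAL)")
writer.execute("INSERT INTO live VALUES (1, 3.3)")
writer.commit()

# Reader process/connection (e.g. the LCD app) opens the same file read-only.
reader = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
print(reader.execute("SELECT value FROM live WHERE channel = 1").fetchone()[0])
```

Opening the reader with mode=ro matches the advice below: only one process writes, the others only read.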
Can two processes access the same database?
Technically yes. But in the case of SQLite you should ensure that only one process has write access. That seems feasible in your case.

System.Data.SQLite in-memory database multi-threading

I am creating a System.Data.SQLite in-memory database using the connection string
"Data Source=:memory:",
and want to access this database from multiple threads.
Now what I do is to clone the SQLiteConnection object and pass the copy to worker threads.
But I found that different threads actually get individual instances of in-memory database, not a shared one. How can I share one in-memory database among threads?
Thanks!
Based on the SQLite documentation for in-memory databases, I would try a data source named with the URI filename convention, file::memory:?cache=shared or the like, instead of :memory: (and note specifically the cache name that all connections are told to use). As explained on that page, every instance of :memory: is distinct from every other, exactly as you found.
Note you may also have to enable shared-cache mode before opening the connections to the in-memory database, as specified in the shared-cache documentation, with a call to sqlite3_enable_shared_cache(int).
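The question is about System.Data.SQLite in C#, but the URI trick is easiest to demonstrate with Python's sqlite3 module; the same connection-string idea carries over. Two connections in one process end up on the same in-memory database:

```python
import sqlite3

# Two connections in the same process sharing one in-memory database via
# the URI filename and shared cache, as the answer suggests.
a = sqlite3.connect("file::memory:?cache=shared", uri=True)
b = sqlite3.connect("file::memory:?cache=shared", uri=True)

a.execute("CREATE TABLE shared_t (x INTEGER)")
a.execute("INSERT INTO shared_t VALUES (42)")
a.commit()

# b sees the table created through a; they share the same database.
print(b.execute("SELECT x FROM shared_t").fetchone()[0])  # 42
```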

Can I achieve scalable multi-threaded access to an in-memory SQLite database

I have a multi-threaded Linux C++ application that needs a high performance reference data lookup facility. I have been looking at using an in-memory SQLite database for this but can't see a way to get this to scale in my multi-threaded environment.
The default threading mode (serialized) seems to suffer from a single coarse grained lock even when all transactions are read only. Moreover, I don't believe I can use multi-thread mode because I can't create multiple connections to a single in-memory database (because every call to sqlite3_open(":memory:", &db) creates a separate in-memory database).
So what I want to know is: have I missed something in the documentation, and is it possible to have multiple threads share access to the same in-memory database from my C++ application?
Alternatively, is there some alternative to SQLite that I could be considering ?
Yes! See the following, extracted from the documentation at:
http://www.sqlite.org/inmemorydb.html
But it's not a direct connection to the database's memory; the connections go through the shared cache. It's a workaround.
In-memory Databases And Shared Cache
In-memory databases are allowed to use shared cache if they are opened using a URI filename. If the unadorned ":memory:" name is used to specify the in-memory database, then that database always has a private cache and is only visible to the database connection that originally opened it. However, the same in-memory database can be opened by two or more database connections as follows:
rc = sqlite3_open("file::memory:?cache=shared", &db);
Or,
ATTACH DATABASE 'file::memory:?cache=shared' AS aux1;
This allows separate database connections to share the same in-memory database. Of course, all database connections sharing the in-memory database need to be in the same process. The database is automatically deleted and memory is reclaimed when the last connection to the database closes.
If two or more distinct but shareable in-memory databases are needed in a single process, then the mode=memory query parameter can be used with a URI filename to create a named in-memory database:
rc = sqlite3_open("file:memdb1?mode=memory&cache=shared", &db);
Or,
ATTACH DATABASE 'file:memdb1?mode=memory&cache=shared' AS aux1;
When an in-memory database is named in this way, it will only share its cache with another connection that uses exactly the same name.
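The quoted documentation shows the C calls; the same behavior can be sketched with Python's sqlite3 module. Connections using exactly the same name share one database, while a different name gets a separate, empty database:

```python
import sqlite3

# Named, shareable in-memory databases, per the documentation quoted above.
a = sqlite3.connect("file:memdb1?mode=memory&cache=shared", uri=True)
b = sqlite3.connect("file:memdb1?mode=memory&cache=shared", uri=True)
c = sqlite3.connect("file:memdb2?mode=memory&cache=shared", uri=True)

a.execute("CREATE TABLE t (x INTEGER)")
a.execute("INSERT INTO t VALUES (1)")
a.commit()

# b uses exactly the same name, so it sees the same database.
print(b.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1
# c uses a different name, so the table t does not exist there.
print(c.execute("SELECT COUNT(*) FROM sqlite_master WHERE name = 't'").fetchone()[0])  # 0
```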
No, with SQLite you cannot access the same in-memory database from different threads; that's by design. More info in the SQLite documentation.

read-only sqlite database with temporary changes

I have an sqlite database, and I would like to keep it read-only without any write operations on the database file.
Is there a way to make temporary modifications to the database, without flushing them to disk permanently?
Right now I am doing this in a workaround way, by storing the temp data in an in-memory database with the same schema structure as the main database file. The problem with this approach is that it increases code complexity, as I have to run all my queries on both databases.
Ideally, the solution to the problem would treat the additional temporary data as if it were part of the main database but without actually committing it to disk.
Because SQLite is transactional, it should be enough not to COMMIT the transaction (SQLite will roll it back automatically when the connection closes). You should turn autocommit off by issuing a BEGIN statement first.
You could create a temporary table, which will be automatically removed from the db when the connection is closed.
BEGIN IMMEDIATE TRANSACTION;
do a bunch of stuff;
ROLLBACK TRANSACTION;
If another thread is reading from the database then the ROLLBACK will fail with SQLITE_BUSY, but you can execute it again after the pending reads finish. If another thread wants to write to the database then they will hit your lock.
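The BEGIN/ROLLBACK pattern above, sketched with Python's sqlite3 module. Note that Python's default transaction handling gets in the way, so the sketch uses isolation_level=None (autocommit mode) and issues BEGIN/ROLLBACK explicitly:

```python
import os
import sqlite3
import tempfile

# isolation_level=None puts sqlite3 in autocommit mode so that we control
# the transaction explicitly with BEGIN ... ROLLBACK.
db_path = os.path.join(tempfile.mkdtemp(), "base.db")
con = sqlite3.connect(db_path, isolation_level=None)
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (1)")

con.execute("BEGIN IMMEDIATE TRANSACTION")
con.execute("INSERT INTO t VALUES (2)")  # temporary change
print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2: visible inside the transaction
con.execute("ROLLBACK TRANSACTION")
print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1: change discarded, file untouched
```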
Now, there is something a bit funny about using transactions in this way. How about having a programmatic ORM-style layer between you and the database that works directly with your native objects and keeps your temporary changes in memory?
I mean, if you don't want to change the database, perhaps you need another code layer, not a database feature?

Why is SQLite fit for template cache?

The benefits of using SQLite storage for the template cache are faster read and write operations when the number of cache elements is important.
I've never used it yet, but how can using SQLite be faster than the plain file system?
IMO the overhead (initiating a connection) will make it slower.
BTW, can someone provide a demo of how to use SQLite?
There is no real notion of "initiating a connection": an SQLite database is stored as a single file in the local filesystem, so there is nothing like a network connection.
I suppose using an SQLite database can be seen as fast because there is only one file (the database) rather than one file per template, and each access to a file costs some resources; the operating system may be able to cache accesses to one big file more efficiently than several accesses to several distinct small files.
About a "demo how to use SQLite", it kind of depends on the language you'll be using, but you can start by taking a look at the SQLite documentation, and the API that's available in your programming language ; accessing an SQLite DB is not that hard : basically, you have to :
"Connect" to the DB -- i.e. open the file
Issue some SQL queries
Close the connection
It's not much different from any other DB engine: the biggest difference is that there is no need to set up any DB server.
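The three steps above can be sketched with Python's built-in sqlite3 module as one concrete example (any language with SQLite bindings follows the same shape; the template table is invented for illustration):

```python
import sqlite3

# 1. "Connect" to the DB: this just opens (or creates) a local file.
con = sqlite3.connect("cache.db")  # example filename

# 2. Issue some SQL queries.
con.execute(
    "CREATE TABLE IF NOT EXISTS templates (name TEXT PRIMARY KEY, body TEXT)")
con.execute(
    "INSERT OR REPLACE INTO templates VALUES ('header', '<h1>Hi</h1>')")
con.commit()
body = con.execute(
    "SELECT body FROM templates WHERE name = 'header'").fetchone()[0]
print(body)  # <h1>Hi</h1>

# 3. Close the connection.
con.close()
```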
The benefit of SQLite over a standard file system lies in its caching mechanism. SQLite stores data in pages and caches pages in memory. Repeated calls for data on pages already in memory will skip a call out to the file system.
There is some overhead in using SQLite, though. When you connect to a SQLite database, the engine reads and parses the schema. On our system that takes 30 ms (although it's usually less than 1 ms for smaller schemas; we have just under a hundred tables and hundreds of triggers and indexes).

Resources