Feasibility of SQLite in-memory as main database for web app

Is anyone running SQLite in-memory db as their main database for their web app? I'm trying to get a handle on the feasibility/stupidity of such a setup.
If I have a web application with fewer than 500 concurrent users and a database that's smallish in size (between 0 and 4 GB), what's the feasibility of running SQLite in-memory with the entire database in it as the main database for the app?
Running in-memory would obviously make the "Durable" aspect of ACID difficult, but with the SQLite backup API, it seems possible to keep an in-memory db in sync with a file-based db. I'm thinking that when a user executes a Save/Update/Delete command, it could instantly update the in-memory copy and then queue the change to be written to the file-based db. Whenever the app gets cycled, the in-memory db could just be reloaded from the file-based db via the backup API, right?
According to the SQLite documentation, the in-memory db exists only as long as a connection to it stays open, so is it a problem to keep a single connection open for hours or days?
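For reference, here is a minimal sketch of the setup the question describes, using Python's sqlite3 module (its Connection.backup() method wraps the SQLite backup API); the file path and function names are illustrative, not from the question:

    import sqlite3

    DB_FILE = "app.db"  # hypothetical on-disk copy

    def load_into_memory(path):
        """Copy the on-disk database into a fresh in-memory database."""
        mem = sqlite3.connect(":memory:")   # db lives as long as this connection
        disk = sqlite3.connect(path)
        disk.backup(mem)                    # backup API: disk -> memory
        disk.close()
        return mem

    def persist(mem, path):
        """Flush the in-memory database back to disk (the queued sync step)."""
        disk = sqlite3.connect(path)
        mem.backup(disk)                    # backup API: memory -> disk
        disk.close()

    mem = load_into_memory(DB_FILE)
    mem.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    mem.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    mem.commit()
    persist(mem, DB_FILE)  # in a real app this would run from a background queue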

You should use an in-memory database only for data processing, not to store data. Since you need to store the data on disk anyway, it's much simpler and more effective to keep the database on disk in the first place.

If you need to sync the database to disk, why use an in-memory database at all? The advantage of an in-memory database is that data never needs to be written to a filesystem. Since you need the data written to disk, you've turned its sole advantage into a disadvantage. So why do it? Just crank up SQLite's page cache as large as you can.
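For what it's worth, the "crank up the cache" suggestion comes down to a few pragmas; a minimal sketch in Python, with illustrative sizes:

    import sqlite3

    con = sqlite3.connect("app.db")              # hypothetical on-disk database
    con.execute("PRAGMA journal_mode = WAL")     # WAL: readers don't block the writer
    con.execute("PRAGMA cache_size = -1048576")  # negative value = KiB, so ~1 GiB page cache
    con.execute("PRAGMA mmap_size = 4294967296") # optionally memory-map up to 4 GiB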

Related

How to export SQLite database to Google cloud SQL

So I have a 150 GB database that I want to import into Google Cloud SQL (it doesn't matter whether it's PostgreSQL or MySQL). The reason I want to do this is to have more flexible storage and faster computation.
However, I find no easy intro on how this is done. It seems like the way to do it is to create a Google Cloud Storage bucket, dump my SQLite database to an SQL file, upload it to the bucket, then import it into Google Cloud SQL. Is this the best and quickest way?
Dumping a 150 GB database would probably require lots of space and lots and lots of time.
Ultimately, what you need to do is to:
make an SQL dump of your sqlite database (see the sketch after this list)
convert it to be mysql- or postgresql-compatible (either manually or using some tool)
upload it to Google
import into your CloudSQL instance
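A minimal sketch of the dump step, assuming Python is acceptable; sqlite3's iterdump() provides the same capabilities as the .dump command in the sqlite3 shell, and its output is SQLite-flavoured SQL that still needs the conversion step above (file names are illustrative):

    import sqlite3

    con = sqlite3.connect("mydata.db")        # hypothetical source database
    with open("mydata.sql", "w") as f:
        for statement in con.iterdump():      # yields CREATE/INSERT statements
            f.write(statement + "\n")
    con.close()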
You can try to minimize the intermediate steps by using something like https://github.com/dimitri/pgloader. It seems like you can use this tool to load your sqlite database directly into your CloudSQL instance. It will take time - there's no getting around transferring ~150 GB worth of data to Google.
Where is your sqlite database stored now? If it's already in a GCE VM, running pgloader from the same region as your CloudSQL instance should make it much faster.

Synchronize Postgres Server Database to SQLite Client Database

I am trying to create an app that receives an SQLite database from a server for offline use with cloud synchronization. The server has a Postgres database with information from many clients.
1) Is it better to delete the SQLite database and create a new one from a query, or to try to synchronize and update the existing separate sqlite files (or is there another, better solution)? The refreshes will happen a few times a day per client.
2) If it is the latter, could you give me any leads to resources on how I could do this?
I am pretty new to database applications so please excuse my ignorance and let me know if there is any way I could clarify.
There is no one size fits all approach here. You need to carefully consider exactly what needs to be done, what you are replicating, how much data is involved, and what your write models are, all before you build a solution. Along the way you have to decide how to handle write conflicts and more.
In general the one thing I would say is that such synchronization works best with append-only write models (i.e. inserts, no deletes, no updates), and one way to do it is to log changes that need to be made and replicate those changes.
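As an illustration of that change-log idea (the table and function names are my own, not from the answer), a minimal append-only sketch in Python might look like:

    import sqlite3

    con = sqlite3.connect("client.db")  # hypothetical client-side SQLite file
    con.executescript("""
    CREATE TABLE IF NOT EXISTS notes (
        id   INTEGER PRIMARY KEY,
        body TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS change_log (
        seq    INTEGER PRIMARY KEY AUTOINCREMENT,
        op     TEXT NOT NULL,              -- only 'insert' in an append-only model
        row_id INTEGER NOT NULL,
        sent   INTEGER NOT NULL DEFAULT 0  -- flipped to 1 once replicated
    );
    """)

    def add_note(body):
        """Every write also records a change-log entry to replicate later."""
        cur = con.execute("INSERT INTO notes (body) VALUES (?)", (body,))
        con.execute("INSERT INTO change_log (op, row_id) VALUES ('insert', ?)",
                    (cur.lastrowid,))
        con.commit()

    def pending_changes():
        """Entries not yet replayed against the Postgres side."""
        return con.execute(
            "SELECT seq, op, row_id FROM change_log WHERE sent = 0 ORDER BY seq"
        ).fetchall()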
However, master-master replication is difficult on the best of days and with the best of tools available. Jumping between databases with very different capabilities will introduce a number of additional problems. You are in for a big job.
Here's an open source product that claims to solve this for many database types including Postgres. I have no affiliation or commercial interest in this company.
https://github.com/sqlite-sync/SQLite-sync.com
http://sqlite-sync.com/
If you're able and willing to step outside relational databases to use an object store, you might want to have a look at CouchDB and perhaps PouchDB, which use an MVCC-based replication protocol designed to support multi-master replication including conflict resolution. Under the covers, PouchDB uses adapters for SQLite, IndexedDB, local storage, or a remote CouchDB instance to persist client-side data. It auto-selects the best client-side storage option for the given desktop or mobile browser. The SQLite engine can be either WebSQL or a Cordova SQLite plugin.
http://couchdb.apache.org/
https://pouchdb.com/

SQLite db protection using events/mutex

I have two Windows applications which will be accessing the same database (stored on a hard disk). Only one of these two applications performs both read and write operations on this db, while the second one only performs reads. Do I need a mutex/events to protect my db while both applications are accessing it? I was reading the SQLite FAQ, which says that I might not need any resource protection because SQLite has built-in locking.
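For what it's worth, SQLite does indeed do its own file-level locking, so the usual approach is not an external mutex but simply tolerating a briefly locked database; a minimal sketch in Python (file name illustrative), where the timeout makes SQLite retry on a locked database instead of failing immediately:

    import sqlite3

    # Writer process: creates and updates the shared database.
    writer = sqlite3.connect("shared.db", timeout=5.0)  # wait up to 5 s on locks
    writer.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
    writer.execute("INSERT INTO log VALUES ('hello')")
    writer.commit()

    # Reader process (a separate program in practice): plain SELECTs.
    reader = sqlite3.connect("shared.db", timeout=5.0)
    rows = reader.execute("SELECT msg FROM log").fetchall()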

SQLite and Cloud applications

I was wondering if there is a way to enable cloud features for an SQLite database application.
Should I save the whole database to the cloud each time? For example, when I quit the application, is it required to save the whole database to the cloud?
What do you suggest?
Should I drop SQLite and use another database for cloud programming?
iCloud supports SQLite databases.
When properly set up, it will only sync change logs instead of the entire database. In theory it's pretty nice. I haven't, however, had the best of luck using it yet; it seems to be a little too buggy to actually use in iOS 5. Hopefully it's better in iOS 6.
To be most efficient, you could manage a change log of objects that are modified by the app. Then when it's time to sync (while closing the app, for instance), you can make operational requests to the cloud. For add and update you can send the entire object, while for delete just the oid should suffice.
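A minimal sketch of that per-object request in Python (the endpoint URL and payload shapes are hypothetical, not a real API):

    import json
    import urllib.request

    def push_change(op, obj):
        """Send one change-log entry to the cloud: the full object for
        add/update, just the oid for delete."""
        if op == "delete":
            payload = {"op": op, "oid": obj["oid"]}
        else:
            payload = {"op": op, "object": obj}
        req = urllib.request.Request(
            "https://example.com/sync",  # hypothetical endpoint
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)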
This is a very simple sync scenario. Things can get complicated fast if you are looking to send changes that happen in the Cloud down to the device. That is a scenario for a different thread.
Based on your question, you just need to sync from the device to the Cloud.

What is database backed Cache and how does it work?

What is a database-backed cache and how does it work? Something along the lines of: when the app server goes down and the cache is backed by a database, there is no time wasted repopulating an in-memory cache.
A database-backed cache is a method of storing data that is costly (in resources or time) to generate or derive. You could see one implemented for such things as:
Improving web server performance by caching dynamic pages in a db as static HTML so that additional hits to the page do not incur the overhead of regenerating the page. Yes, this might be counter-intuitive, as database access is often the bottleneck, though in some cases it is not.
Improving query time against a slow (or offsite) directory server or database.
If I understand your example correctly, I believe you might have it backwards: the database is backing some other primary location. For example, in an app server farm, if a security token is stored in a db-backed cache and the app server you are currently interacting with goes down, you could be routed to a different service instance. The token cache checks its in-memory cache, which won't contain the token, so the token is retrieved from the database, deserialized, and added to the (new) local cache. The benefits are minimized network transport and improved resilience to failures.
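A minimal sketch of that read-through flow in Python (names and schema are illustrative):

    import sqlite3

    local_cache = {}                        # per-server in-memory cache
    backing = sqlite3.connect("tokens.db")  # hypothetical shared backing store
    backing.execute("CREATE TABLE IF NOT EXISTS tokens (key TEXT PRIMARY KEY, value TEXT)")

    def get_token(key):
        if key in local_cache:              # fast path: in-memory hit
            return local_cache[key]
        row = backing.execute(
            "SELECT value FROM tokens WHERE key = ?", (key,)
        ).fetchone()
        if row is not None:                 # miss: rebuild the local entry from the db
            local_cache[key] = row[0]
            return row[0]
        return None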
Hope this helps.
