I was wondering if there is a way to enable cloud features for a SQLite database application.
Should I save the whole database to the cloud each time? For example, when I quit the application, is it necessary to save the whole database to the cloud?
What do you suggest?
Should I drop SQLite and use another database for cloud programming?
iCloud supports SQLite databases.
When properly set up, it will only sync change logs instead of the entire database. In theory it's pretty nice. I haven't had the best of luck using it yet, however; it seems to be a little too buggy to actually use in iOS 5. Hopefully it's better in iOS 6.
To be most efficient, you could maintain a changelog of objects that are modified by the app. Then, when it's time to sync (while closing the app, for instance), you can make operational requests to the Cloud: for adds and updates you can send the entire object, while for deletes just the object ID (oid) should suffice.
This is a very simple sync scenario. Things can get complicated fast if you are looking to send changes that happen in the Cloud down to the device. That is a scenario for a different thread.
Based on your question, you just need to sync from the device to the Cloud.
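For illustration, here is a minimal sketch of that changelog approach in TypeScript (the entry shape, the endpoint layout, and the function names are all hypothetical):

```typescript
// Hypothetical changelog entry: what changed, plus enough data to replay it.
type ChangeOp = "add" | "update" | "delete";

interface ChangeLogEntry {
  op: ChangeOp;
  oid: string;       // object ID; enough on its own for deletes
  payload?: unknown; // the full object, for adds and updates
}

const changeLog: ChangeLogEntry[] = [];

// Record each modification as the app makes it.
function recordChange(op: ChangeOp, oid: string, payload?: unknown): void {
  changeLog.push({ op, oid, payload: op === "delete" ? undefined : payload });
}

// At sync time (e.g. while closing the app), replay the log as operational
// requests against the cloud API, oldest first.
async function syncToCloud(endpoint: string): Promise<void> {
  while (changeLog.length > 0) {
    const entry = changeLog[0];
    await fetch(`${endpoint}/${entry.op}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(entry),
    });
    changeLog.shift(); // drop the entry only after the server accepted it
  }
}
```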
We are building an app, using Firebase, for our teams out in the field to collect their daily information. One of our concerns, however, is poor connectivity. We are looking to build an Online/Offline button they can click to essentially work offline when things slow down. We've built a workflow in which we query all the relevant information from Firestore.
I wanted to know if there is a way to tell Firestore to work directly on the cache only and not try to hit the servers. I don't want Firestore attempting to make server calls until they enable online mode again.
You shouldn't need to do this. If you use realtime listeners, they will already first return the data from the local cache, and only then reach out to the server to check for updates.
If you are performing one-time reads, the SDK will by default try to reach the server first (since it has only one chance to give you a value). If you want it to only check the local cache, you can pass an argument to the get call to do so.
You can also disable the network completely, in which case the client will never touch the network and will serve everything from the local cache. I recommend reading about that and more in the documentation on using Firestore offline.
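With the modular web SDK, both behaviors look roughly like this; a minimal sketch, assuming a placeholder project config and invented function names:

```typescript
import { initializeApp } from "firebase/app";
import {
  getFirestore,
  doc,
  getDocFromCache,
  disableNetwork,
  enableNetwork,
} from "firebase/firestore";

const app = initializeApp({ projectId: "my-project" }); // placeholder config
const db = getFirestore(app);

// One-time read served only from the local cache
// (the promise rejects if the document isn't cached).
async function readFromCacheOnly(collection: string, id: string) {
  const snap = await getDocFromCache(doc(db, collection, id));
  return snap.data();
}

// Wire the Online/Offline button to the network toggle: while the network
// is disabled, reads come from the cache and writes queue up locally.
async function setOnline(online: boolean): Promise<void> {
  if (online) {
    await enableNetwork(db);
  } else {
    await disableNetwork(db);
  }
}
```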
I need to store information in a MongoDB database on the phone, to be used while offline. The app will download the data while online, store it in the DB, and use it while offline. Then, when the user is online again, I will send the collected MongoDB data using my API.
I don't want the MongoDB data to be synced with the server while online, either; I want to keep the data on the individual phone and use it while offline. I also need the app to be able to quit and restart without losing the data stored locally on the phone.
What is the best way to go about doing this?
There are some options to consider.
1) Create a local Mongo database - this is client-only storage with no server publication (I'm not sure if it persists between app invocations)
2) SQLite can do the job, but only on Android (not iOS)
3) LokiJS is a fast JS-only database that promises to be useful - I haven't explored it, but it would be good to hear some feedback
4) If the data is small, you could use LocalStorage; it's pretty simple, but you need to look after serialising and de-serialising it yourself (see the sketch after this list)
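As a rough sketch of option 4 (the key name and record shape are made up), the serialising is just a JSON round-trip:

```typescript
// Hypothetical record type collected in the field.
interface Reading {
  id: string;
  takenAt: string; // ISO timestamp
  value: number;
}

const STORAGE_KEY = "offline-readings"; // made-up key name

// Persist the whole collection; LocalStorage can only hold strings.
function saveReadings(readings: Reading[]): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(readings));
}

// Restore on app start; a missing key means nothing has been saved yet.
function loadReadings(): Reading[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw === null ? [] : (JSON.parse(raw) as Reading[]);
}
```

LocalStorage does persist across app restarts, but browsers typically cap it at around 5 MB, which is why it only fits the small-data case.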
I have an application that needs to send data to a cloud database (DynamoDb).
The app runs on a computer that can lose internet connectivity or be switched off at any time, but I must ensure that all data eventually gets to the cloud database.
I can assume the application will eventually be switched on, and will eventually get internet access back.
The app is written in VB .NET
What are some schemes for achieving this, and are there any ready-made products that already achieve this?
You could implement a write-through cache using a local DynamoDB instance (or even SQLite). But without specific details about what kind of data you'd be storing in the database, and what data should be made available "offline", it's hard to say exactly how you should structure your application. You'll definitely want to avoid keeping everything local unless the overall volume of data is really small.
Then there is the problem of resolving conflicts that may occur during network partitions (i.e. a client goes offline and makes some database modifications while other clients also make modifications; these need to be reconciled, and it's up to you and your users to determine how).
It's not a simple problem to solve.
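One common scheme is a durable local outbox: every write lands in a local SQLite table first, and a background loop drains it to DynamoDB whenever connectivity is available. Here is a very rough sketch in TypeScript (the original app is VB .NET; the table names are invented, and better-sqlite3 stands in for whatever local store you choose):

```typescript
import Database from "better-sqlite3";
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

// Durable local queue: rows survive a power-off because SQLite is on disk.
const db = new Database("outbox.db");
db.exec(`CREATE TABLE IF NOT EXISTS outbox (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  payload TEXT NOT NULL
)`);

const dynamo = new DynamoDBClient({ region: "us-east-1" });

// Every application write goes through the outbox, never straight to the cloud.
function enqueue(item: Record<string, string>): void {
  db.prepare("INSERT INTO outbox (payload) VALUES (?)").run(JSON.stringify(item));
}

// Drain oldest-first; delete a row only after DynamoDB accepts it. If the
// network is down, the send throws and the row stays queued for next time.
async function drainOutbox(): Promise<void> {
  const rows = db.prepare("SELECT id, payload FROM outbox ORDER BY id").all() as
    { id: number; payload: string }[];
  for (const row of rows) {
    const item = JSON.parse(row.payload) as Record<string, string>;
    await dynamo.send(new PutItemCommand({
      TableName: "FieldData", // invented table name
      Item: Object.fromEntries(
        Object.entries(item).map(([k, v]) => [k, { S: v }])
      ),
    }));
    db.prepare("DELETE FROM outbox WHERE id = ?").run(row.id);
  }
}
```

A crash between the send and the delete just re-puts the same item on the next drain, which DynamoDB treats as an overwrite of the same key, so the scheme stays safe without extra bookkeeping.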
I am trying to create an app that receives an SQLite database from a server for offline use, with cloud synchronization. The server has a Postgres database with information from many clients.
1) Is it better to delete the SQLite database and create a new one from a query, or to synchronize and update the existing separate SQLite files (or is there a better solution)? The refreshes will happen a few times a day per client.
2) If it is the latter, could you give me any leads to resources on how I could do this?
I am pretty new to database applications, so please excuse my ignorance and let me know if there is any way I can clarify.
There is no one size fits all approach here. You need to carefully consider exactly what needs to be done, what you are replicating, how much data is involved, and what your write models are, all before you build a solution. Along the way you have to decide how to handle write conflicts and more.
In general the one thing I would say is that such synchronization works best with append-only write models (i.e. inserts, no deletes, no updates), and one way to do it is to log changes that need to be made and replicate those changes.
However, master-master replication is difficult on the best of days and with the best of tools available. Jumping between databases with very different capabilities will introduce a number of additional problems. You are in for a big job.
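To make the append-only changelog idea concrete, here is a rough TypeScript sketch (the schema, the /changes endpoint, and all names are invented): the server keeps a monotonically increasing change log in Postgres, and each client pulls only the rows it hasn't seen yet and applies them to its local SQLite file:

```typescript
import Database from "better-sqlite3";

// The client-side SQLite file received from the server, plus a one-row
// table tracking the last change sequence number applied.
const local = new Database("client.db");
local.exec(`CREATE TABLE IF NOT EXISTS sync_state (last_seq INTEGER NOT NULL)`);
local.exec(`INSERT INTO sync_state (last_seq)
            SELECT 0 WHERE NOT EXISTS (SELECT 1 FROM sync_state)`);

// One server-side change-log row (invented shape). With an append-only
// model every change is an INSERT, so replaying it is trivial.
interface Change {
  seq: number;     // monotonically increasing on the server
  tableName: string;
  rowJson: string; // the inserted row, serialized
}

// Pull changes after our high-water mark from a hypothetical HTTP endpoint
// in front of Postgres, then apply them locally in a single transaction.
async function pullChanges(endpoint: string): Promise<void> {
  const { last_seq } = local
    .prepare("SELECT last_seq FROM sync_state")
    .get() as { last_seq: number };
  const res = await fetch(`${endpoint}/changes?since=${last_seq}`);
  const changes = (await res.json()) as Change[];

  const apply = local.transaction((batch: Change[]) => {
    for (const c of batch) {
      const row = JSON.parse(c.rowJson) as Record<string, string | number | null>;
      const cols = Object.keys(row);
      local
        .prepare(`INSERT INTO ${c.tableName} (${cols.join(", ")})
                  VALUES (${cols.map(() => "?").join(", ")})`)
        .run(...Object.values(row));
      local.prepare("UPDATE sync_state SET last_seq = ?").run(c.seq);
    }
  });
  apply(changes);
}
```

Because nothing is ever updated or deleted, there are no write conflicts to reconcile; the hard parts only appear once you relax that constraint.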
Here's an open source product that claims to solve this for many database types including Postgres. I have no affiliation or commercial interest in this company.
https://github.com/sqlite-sync/SQLite-sync.com
http://sqlite-sync.com/
If you're able and willing to step outside relational databases and use a document store, you might want to have a look at CouchDB, and perhaps PouchDB, which use an MVCC-based replication protocol designed to support multi-master replication, including conflict resolution. Under the covers, PouchDB uses adapters for SQLite, IndexedDB, local storage, or a remote CouchDB instance to persist client-side data, and it auto-selects the best client-side storage option for the given desktop or mobile browser. The SQLite engine can be either WebSQL or a Cordova SQLite plugin.
http://couchdb.apache.org/
https://pouchdb.com/
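The replication itself is nearly a one-liner in PouchDB. A minimal sketch (the database name and remote URL are placeholders):

```typescript
import PouchDB from "pouchdb";

// Local database: PouchDB auto-selects the best available adapter
// (IndexedDB, WebSQL, etc.) for the current browser or device.
const local = new PouchDB("field-data");

// Continuous two-way replication with a remote CouchDB instance.
// `live` keeps the stream open; `retry` reconnects with backoff
// when the device drops offline and comes back.
const sync = local.sync("https://couch.example.com/field-data", {
  live: true,
  retry: true,
});

sync.on("change", (info) => console.log("replicated", info.direction));
sync.on("error", (err) => console.error("sync error", err));
```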
I am considering using Firebase for an application that should allow people to use full-text search over a collection of a few thousand objects. I like the idea of delivering a client-only application (not having to worry about hosting the data), but I am not sure how to handle search. The data will be static, so the indexing itself is not a big deal.
I assume I will need some additional service that runs queries and returns Firebase object handles. I can spin up such a service at some fixed location, but then I have to worry about its availability and scalability. Although I don't expect too much traffic for this app, it could peak at a couple of thousand concurrent users.
Architectural thoughts?
Long-term, Firebase may have more advanced querying, so hopefully it'll support this sort of thing directly without you having to do anything special. Until then, you have a few options:
Write server code to handle the searching. The easiest way would be to run some server code responsible for the indexing/searching, as you mentioned. Firebase has a Node.js client, so that would be an easy way to interface the service with Firebase. All of the data transfer could still happen through Firebase, but you would write a Node.js service that watches for client "search requests" at some designated location in Firebase and then "responds" by writing the result set back into Firebase for the client to consume.
Store the index in Firebase with clients automatically updating it. If you want to get really clever, you could try implementing a server-less scheme where clients automatically index their data as they write it... So the index for the full-text search would be stored in Firebase, and when a client writes a new item to the collection, it would be responsible for also updating the index appropriately. And to do a search, the client would directly consume the index to build the result set. This actually makes a lot of sense for simple cases where you want to index one field of a complex object stored in Firebase, but for full-text-search, this would probably be pretty gnarly. :-)
Store the index in Firebase with server code updating it. You could try a hybrid approach where the index is stored in Firebase and is used directly by clients to do searches, but rather than have clients update the index, you'd have server code that updates the index whenever new items are added to the collection. This way, clients could still search for data when your server is down. They just might get stale results until your server catches up on the indexing.
Until Firebase has more advanced querying, #1 is probably your best bet if you're willing to run a little server code (a rough sketch follows). :-)
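For illustration, here is a minimal sketch of option #1 in TypeScript using the firebase-admin SDK (the /search paths and runFullTextSearch are invented):

```typescript
import * as admin from "firebase-admin";

admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS is set

const db = admin.database();

// Watch a designated location for incoming search requests; a client
// writes { query: "some words" } under a push key and listens for the
// response under the same key.
db.ref("search/requests").on("child_added", async (snap) => {
  const requestId = snap.key as string;
  const { query } = snap.val() as { query: string };

  // Placeholder: run the query against whatever index the service
  // maintains (an in-memory inverted index, Lucene, etc.).
  const results = await runFullTextSearch(query);

  // "Respond" by writing the result set back into Firebase for the
  // client to consume, then clean up the request node.
  await db.ref(`search/responses/${requestId}`).set(results);
  await snap.ref.remove();
});

// Hypothetical search implementation; returns matching object keys.
async function runFullTextSearch(query: string): Promise<string[]> {
  return []; // index lookup elided
}
```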
Google's currently recommended method for full-text search appears to be syncing with either Algolia or BigQuery, using Cloud Functions for Firebase.
Here's Firebase's Algolia full-text search integration example, and their BigQuery integration example, which could be extended to support full search.
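Roughly, the Algolia sync is a small 1st-gen Cloud Function; a condensed sketch (credentials, paths, and the index name here are placeholders):

```typescript
import * as functions from "firebase-functions";
import algoliasearch from "algoliasearch";

// Hypothetical Algolia credentials supplied via functions config.
const client = algoliasearch(
  functions.config().algolia.app_id,
  functions.config().algolia.api_key
);
const index = client.initIndex("objects"); // invented index name

// Mirror every write under /objects/{id} into the Algolia index,
// and remove deleted objects from it.
export const syncToAlgolia = functions.database
  .ref("/objects/{id}")
  .onWrite(async (change, context) => {
    const id = context.params.id as string;
    if (!change.after.exists()) {
      await index.deleteObject(id);
    } else {
      await index.saveObject({ objectID: id, ...change.after.val() });
    }
  });
```

Searching then happens straight from the client with Algolia's own search API, so nothing extra has to stay up on your side.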