Consider a Python webapp using SQLite and the DB Browser for SQLite app. Sometimes a database change (insert, update, delete) is made in DB Browser and not committed. This locks up the Python web server when it tries to write to the same table.
I am wondering if there is some SQLite magic cookie that will allow me to commit any outstanding transaction. The concept doesn't leap to the imagination, but there is no "user" in SQLite: one simply attaches to the database, so technically a transaction is owned by whoever is attached to it.
My goal is to issue a "commit" from Python to ensure the database is in a nominal state.
Any ideas?
I'm creating an Android app which uses the Firebase Firestore database to store data. I store about 3600 questions and about 400 images. I know that Firestore supports offline use, but I'm not sure whether the database downloads itself every time I request data from it. Under which circumstances does the Firestore database get downloaded from the cloud? Does the local copy only get updated when the cloud database changes?
Every query performed while the app is online will download all the necessary data from the database to satisfy that query.
Any data cached locally as a result of a prior query is only used when the app is offline. As suggested by the documentation, the idea is for your app to be usable when internet connectivity is interrupted. The assumption is that the connectivity will eventually come back, and queries will revert to using the online database as the primary source of data.
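For illustration, the web SDK (v9 modular API) makes this cache/server distinction explicit; the Android SDK offers the same control via Source.CACHE / Source.SERVER on get(). The collection name and config below are placeholders.

```ts
import { initializeApp } from 'firebase/app';
import {
  getFirestore,
  collection,
  getDocs,
  getDocsFromCache,
  getDocsFromServer,
} from 'firebase/firestore';

const app = initializeApp({ projectId: 'demo-quiz' }); // placeholder config
const db = getFirestore(app);
const questions = collection(db, 'questions');         // placeholder collection

async function readQuestions() {
  await getDocs(questions);           // default: server while online, local cache when offline
  await getDocsFromCache(questions);  // read only what is already cached locally
  await getDocsFromServer(questions); // force a download from the cloud
}
```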
I am new to Flutter and mobile development in general and am currently trying to wrap my head around certain database principles. I am building a note-taking app which uses Firestore for storing note data in the cloud. Users first need to register an account and log in, after which they are able to view and store notes in the cloud. The way I initially designed this was by using a StreamProvider connected to the Firestore instance to update the user's list of notes after they add a note to their list.
After some reflection, and worries that my app would be 'read'-intensive, I realised that what I am trying to build does not require the notes to be constantly fetched from the server, as the data is private and should only be fetched by the user who creates it. My solution is that instead of creating notes and listening to changes on the Firestore server to update the list, when a user creates a new note a function runs that updates the Firestore server as well as a local NoSQL database, removing the need for a read operation on the server after writing to it. This would give me a local duplicate copy of the Firestore data that I periodically update with write operations when needed. The only read operation would be at the app's startup, to fetch a copy of the Firestore data.
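A rough sketch of this write-through flow follows. The real app is Flutter/Dart; TypeScript with the Firebase web SDK (v9 modular API) is used here only to illustrate the idea. The in-memory Map stands in for whatever local NoSQL store the app would actually use, and the config is a placeholder.

```ts
import { initializeApp } from 'firebase/app';
import { getFirestore, doc, setDoc } from 'firebase/firestore';

const app = initializeApp({ projectId: 'demo-notes' }); // placeholder config
const db = getFirestore(app);

// Hypothetical local cache: note id -> note data.
const localNotes = new Map<string, { text: string }>();

async function addNote(userId: string, noteId: string, text: string) {
  // Write the note to Firestore under users/{user.id}/notes/{note.id} ...
  await setDoc(doc(db, 'users', userId, 'notes', noteId), { text });
  // ... and mirror the same data locally, so no server read is needed afterwards.
  localNotes.set(noteId, { text });
}
```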
My questions are:
Is storing a local version of the Firestore server a viable solution to minimise the number of read operations in my app, or am I overcomplicating things?
The StreamProvider allowed me to easily access data relating to the user, notes, etc throughout the app using Provider.of<Model>(Context). How can I easily access this data regardless of where I am in the app?
Given that my Firestore database structure for accessing notes is users/{user.id}/notes/{note.id}/note, how can I get a snapshot of the currently logged-in user's ID and all the child fields in a single request (i.e. all the user's notes in users/{user.id}/notes and user data under users/{user.id}/)?
In Expo, the React Native + extras development environment, the SQLite API uses the notion of a 'transaction'. However, there is no discussion of committing that transaction.
Is the RN notion of a transaction just a set of consecutive calls to SQLite? Or is it a real transaction in the SQLite sense (i.e. all commands succeed or they are all rolled back)? If the latter, how does one correctly test whether the transaction does or doesn't need to be retried?
If not, then I presume one has to issue the transaction and commit commands on one's own. Still, how does one correctly handle a failed transaction?
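Assuming the API in question is expo-sqlite's classic WebSQL-style interface (older Expo SDK versions), db.transaction() does wrap the queued statements in a real SQLite transaction: it is committed when all statements succeed and rolled back otherwise, and the second and third arguments report which of the two happened. A minimal sketch (database and table names are made up):

```ts
import * as SQLite from 'expo-sqlite';

const db = SQLite.openDatabase('example.db');

db.transaction(
  tx => {
    tx.executeSql('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT);');
    tx.executeSql(
      'INSERT INTO items (name) VALUES (?);',
      ['first'],
      undefined,
      (_tx, error) => {
        console.warn('statement failed:', error.message);
        return true; // true = abort: the whole transaction is rolled back
      }
    );
  },
  error => console.warn('transaction rolled back:', error.message), // nothing was applied
  () => console.log('transaction committed')                        // everything was applied
);
```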
I am new to Firebase and Firestore and would like to know if the behavior I found is a bug or by design.
I am using angularfire2 in my Ionic project, where the user will be offline most of the time… so offline support is a big deal.
The problem: snapshotChanges is not called on an offline batch delete of a subcollection.
I have something like this in Firestore: /users/{userId}/projects/{projectId}/points/{pointId}
When a user inserts a new project or point, I use set() and the object is written fine; my lists are updated with the new instance and it works great thanks to snapshotChanges. The problem is when the user removes a project.
I execute a batch delete on the points of a project and after that I delete the project itself. This works fine online, but not offline: my lists are not being updated even though the operation completes successfully. I can reproduce it multiple times, but only if the app is offline the whole time (the inserts and deletes are local only).
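For reference, this is roughly the shape of that delete, sketched with the plain Firebase web SDK (v9 modular API) rather than the AngularFire2 wrappers; the ids and config are placeholders.

```ts
import { initializeApp } from 'firebase/app';
import { getFirestore, writeBatch, doc } from 'firebase/firestore';

const app = initializeApp({ projectId: 'demo-project' }); // placeholder config
const db = getFirestore(app);

async function deleteProject(userId: string, projectId: string, pointIds: string[]) {
  const batch = writeBatch(db);
  // Delete every point in the subcollection...
  for (const pointId of pointIds) {
    batch.delete(doc(db, 'users', userId, 'projects', projectId, 'points', pointId));
  }
  // ...then the project document itself.
  batch.delete(doc(db, 'users', userId, 'projects', projectId));
  // commit() resolves once the backend has acknowledged the writes.
  await batch.commit();
}
```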
Am I missing something?
The documentation states:
"Batched writes have fewer failure cases than transactions and use simpler code. They are not affected by contention issues, because they don't depend on consistently reading any documents. Batched writes execute even when the user's device is offline."
This "Batched writes execute even when the user's device is offline" this makes me understand that the events of removal should be propagated to snapshotChanges.
I wrote an app that contains data that is sensitive to certain users, who do not want it to end up online. I want to allow the app to be used with Firebase offline only, with the option to sync at a later time. Is this possible with the current iOS and Android Firebase implementations, as a replacement for an SQLite database?
The Firebase Database is primarily an online database that can handle intermittent and medium-term lack of connectivity.
While the user is not connected, Firebase will keep a queue of pending write operations. It will aggregate those operations locally when it loads the data from disk into memory. This means that the larger the number of write operations while the user is offline, the longer loading will take and the more memory the database will use.
This is not a problem in the intended use case: online apps that need to handle a short- or medium-term lack of connectivity. But it is not a suitable database for long-term offline use.