Does Cloud Firestore lock when a backup is taking place?

I'm scheduling a backup operation on Cloud Firestore, and I'm not sure whether changes made to the data while the backup function is running would be reflected in the backup.
For completeness: that is not the behavior I'm looking for, so if changes would be reflected, is there any way I can lock out the database? Perhaps by dynamically changing the security rules?

Firestore's export feature is not a true "backup". Notice that nowhere in the documentation is the word "backup" ever used. It's just an export, and that export is effectively just querying every collection and writing the documents to a file in a storage bucket.
Data can change during an export, and the export is not guaranteed to contain everything that changed while it was running. You can expect the export to be inconsistent in that case.
Security rules don't affect the export; they only apply to web and mobile clients.
It's not really possible to "lock" the entire database, other than by making all of the code under your control stop writing to it.
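
For reference, a scheduled export usually looks something like the sketch below, which follows the documented exportDocuments pattern; the bucket name and schedule are placeholders. Note that nothing in this call pauses writes while the export runs.

import * as functions from 'firebase-functions';
import * as firestore from '@google-cloud/firestore';

const client = new firestore.v1.FirestoreAdminClient();

export const scheduledFirestoreExport = functions.pubsub
  .schedule('every 24 hours') // placeholder schedule
  .onRun(() => {
    const projectId = process.env.GCP_PROJECT || process.env.GCLOUD_PROJECT!;
    const databaseName = client.databasePath(projectId, '(default)');
    // The export walks every collection; documents written while it runs
    // may or may not appear in the output files.
    return client.exportDocuments({
      name: databaseName,
      outputUriPrefix: 'gs://my-backup-bucket', // placeholder bucket
      collectionIds: [], // an empty list exports all collections
    });
  });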

Related

How can I use local cache and only update changed documents using Firestore?

I'm working on a calendar app, and I'm currently fetching the data from Firestore to populate the calendar. Eventually there will be a lot of data to fetch, and I'm trying to understand Firestore's caching system but can't quite wrap my head around it.
What I ideally want to achieve:
Always use cached data, and only update those documents that are new, edited, or deleted.
How would I achieve that?
If you want to force the SDK to read from the cache, you can specify source options when calling get().
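For example, with the namespaced Web SDK ('calendarEvents' is just a hypothetical collection name):

const snapshot = await db.collection('calendarEvents').get({ source: 'cache' });

The modular Web SDK offers the equivalent getDocsFromCache() and getDocFromCache() functions.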
If you find yourself doing this everywhere though, it might make sense to consider a database other than Firestore, as Firestore is primarily an online, cloud-hosted database that merely continues to work while you're temporarily offline.

Firestore Offline: Make sure a specific Document is always cached

Let's say I have a workout app where every workout is one DocumentSnapshot. I want to have a download button that downloads a workout/document.
I'm already using Firestore's offline capabilities, but I need to ensure that once I have downloaded this document, it is always available when opening the app without a connection.
So is it possible to ensure that a specific document is always kept in the local Firestore cache?
I could also just persist the data of the DocumentSnapshot myself, but the problem with that is I can't update the document and have the changes synchronized with the "online" database when reconnecting to Wi-Fi.
Is there any good way to achieve this?
So is it possible to ensure that a specific document is always kept in the local Firestore cache?
It's not possible to ensure 100% of the time. The local cache is fully managed by the Firestore SDK. You don't have control over how it chooses to evict data from the cache. Any given cached document might be removed to make room for other documents in the future, if the cache becomes full.
Also, the cached document will not stay in sync with whatever is on the server unless you write code that periodically queries for (or listens to changes in) that document.
The functionality you're describing is best implemented with application code (probably with its own persistence layer) that specifically meets the needs of your app. The Firestore SDK won't do it for you.
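A minimal sketch of that idea with the modular Web SDK; the 'workouts' collection and the persistWorkoutLocally helper are hypothetical stand-ins for your own persistence layer:

import { doc, onSnapshot, Firestore } from 'firebase/firestore';

// Hypothetical app-level persistence: IndexedDB, a local file, etc.
declare function persistWorkoutLocally(id: string, data: unknown): void;

function pinWorkout(db: Firestore, workoutId: string) {
  // An active listener keeps the document fresh in the SDK's cache, but
  // the cache can still evict it; the copy you persist yourself is the
  // only real guarantee.
  return onSnapshot(doc(db, 'workouts', workoutId), (snap) => {
    if (snap.exists()) persistWorkoutLocally(workoutId, snap.data());
  });
}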

Does Firebase have a way to limit access to all public data in the security rules?

Update: Editing the question title/body based on the suggestion.
Firestore makes everything that is publicly readable also publicly accessible to the browser with a script, so nothing stops a user from just calling db.get('collection') and saving all the data for themselves.
In a more traditional DB setup, where an app's frontend pulls data from the backend, the user would at least have to go to the extra trouble of tweaking the UI and then scraping the frontend to pull more and more data (think of Twitter's "load more" button).
My question is whether it's possible to keep users from grabbing the entire database in one click, while also keeping the data publicly available.
Old:
From what I understand, any user who can see data coming out of a Firebase datastore can also run a query to extract all of that data. That is not desirable when the data itself has any value, and yet Firebase is such an easy-to-use tool that it's great for pretty much everything else.
Is there a way, or a best practice, for how to structure the data or the access rules such that users can see the data, but can't just run a script to download all of it?
Thanks!
Kato once implemented a simplistic rate limit for writes in Realtime Database security rules: Firebase rate limiting in security rules?. Something similar could be possible in Cloud Firestore rules. But this approach won't work for reads, since you can't update the timestamp at the same time the read is performed.
You can however limit what queries a user can perform on your database. For example, to limit them to reading 50 documents at a time:
allow list: if request.query.limit <= 50;
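
For context, here is a sketch of where such a rule sits in a ruleset; the 'articles' collection is hypothetical:

service cloud.firestore {
  match /databases/{database}/documents {
    match /articles/{articleId} {
      // Anyone can fetch a single document, but a query must ask for
      // at most 50 documents per request.
      allow get: if true;
      allow list: if request.query.limit <= 50;
    }
  }
}

A client can still page through everything 50 documents at a time, so this raises the effort of bulk scraping rather than making it impossible.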

Rollback on failure of Firebase storage upload

My goal is to have a Firebase Cloud Function track the upload of three separate files to the same storage bucket. These uploads are preceded by a write to the Realtime Database, which would preferably be the trigger for the Cloud Function that tracks the uploads.
The context: a user is adding an item to her shopping cart. The data is written to the RTDB, and then a custom 3D model and two images are copied into a storage bucket. If any of these files doesn't successfully upload, I need to know that and conduct a rollback of the three files in the storage bucket, and also remove the entry in the database. I could handle this client side, but that isn't ideal, since if the uploads fail it's usually because the connection with the client has failed.
I haven't been able to find any sort of batch-add or transaction-type upload to Firebase Storage. Sorry for not having any code to show, but I'm not even really sure how to get started on this. Any suggestions would be much appreciated. Thanks!
There are no transactions that cross products like this, nor does Cloud Storage offer any transactions of its own. You're going to have to check for errors and manually undo the things that were previously done, or have some job that checks for orphaned data and deletes it later.
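
A minimal sketch of that manual-undo approach with the Admin SDK; the cart reference, file paths, and item shape are all hypothetical:

import * as admin from 'firebase-admin';

async function addItemToCart(
  cartRef: admin.database.Reference,
  item: object,
  uploads: Array<{ localPath: string; destination: string }>,
) {
  const bucket = admin.storage().bucket();
  const done: string[] = [];
  try {
    await cartRef.set(item); // 1. write the RTDB entry
    for (const u of uploads) {
      // 2. copy the 3D model and images into the bucket one by one
      await bucket.upload(u.localPath, { destination: u.destination });
      done.push(u.destination);
    }
  } catch (err) {
    // 3. roll back whatever succeeded: delete the files, then the entry
    await Promise.all(done.map((p) => bucket.file(p).delete().catch(() => undefined)));
    await cartRef.remove();
    throw err;
  }
}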

What is the best way to store a large constant hashmap in node.js (firebase cloud functions)?

If I try to download or read the file from the Firebase database or Firebase Storage, it will incur unnecessarily large quota costs and be financially unsustainable. And since Cloud Functions only gives you an in-memory file system with a "tmp" directory, it is impossible to deploy any files there.
The reason I think my request should be very reasonable is that I could accomplish it by literally declaring the entire 80 MB of hashmap data in the code. Maybe I'll write a script that enumerates all those fields in JS and puts them inside index.js itself? Then it never has to download from anywhere and won't incur any quota costs. However, this seems like a very desperate solution to what should be a simple problem.
Cross post of https://www.quora.com/How-can-I-keep-objects-stored-in-RAM-with-Cloud-Functions
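
For what it's worth, the approach described above (generating the map as code so it ships with the deployment and lives in RAM from cold start onward) could look like the sketch below; './bigMap' is a hypothetical module produced by a build script next to index.js:

import * as functions from 'firebase-functions';
// e.g. the build script emits: export const bigMap: Record<string, string> = { ... };
import { bigMap } from './bigMap';

export const lookup = functions.https.onRequest((req, res) => {
  // The module is loaded once per instance at cold start; lookups hit
  // the in-memory map, with no database or storage reads involved.
  res.json({ value: bigMap[String(req.query.key)] ?? null });
});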
