Import and export makes it easier to run backups in Firestore, as described here.
Backing up a Firestore database means a read for every document. That seems incredibly expensive.
Similarly, restoring a backup would seemingly be just as expensive.
Is there any way to run these backups without having to incur such high costs?
Creating a backup of your documents requires reading those documents. There is currently no way to create a backup without a (charged) read of each document.
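If you do go the export route, one common pattern is a scheduled Cloud Function that calls the managed export API. Here is a minimal sketch, assuming a placeholder bucket name and schedule; note that each exported document is still billed as a read:

```js
const functions = require('firebase-functions');
const firestore = require('@google-cloud/firestore');

const client = new firestore.v1.FirestoreAdminClient();

// Runs once a day and kicks off a managed export of the whole database.
exports.scheduledFirestoreExport = functions.pubsub
  .schedule('every 24 hours')
  .onRun(() => {
    const projectId = process.env.GCLOUD_PROJECT;
    const databaseName = client.databasePath(projectId, '(default)');
    return client.exportDocuments({
      name: databaseName,
      outputUriPrefix: 'gs://YOUR_BUCKET_NAME', // placeholder bucket
      collectionIds: [], // empty array = export all collections
    });
  });
```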
Related
I'm working on a Flutter and Firebase application.
I saw some Firebase documentation saying that Firebase needs transaction control when processing lots of Firestore document updates (i.e. from many users).
However, I'm wondering about plain document reads and writes.
Is a transaction also needed when there is just a lot of read traffic, or just a lot of writes creating new documents?
Also, when just one application (an admin app) is updating or deleting a document and lots of other users are reading that document, is a transaction needed?
A transaction is needed when multiple users may be updating the same document at (almost) the same time in a way that may produce conflicting updates. In such scenarios you use a transaction to prevent the concurrent writes from producing a conflict.
A slight variant of this is Firestore's batched write, which you can use when you want to update multiple documents atomically, but don't need to first read any data to determine the new values of those documents.
If there is no chance of conflicts in your writes, you don't need to use a transaction or batched write, and using them will likely actually hurt performance.
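For illustration, here is a minimal sketch of both patterns using the Node.js Admin SDK; the collection and field names are made up:

```js
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Transaction: a read-then-write that must stay consistent even if other
// clients write the same document at the same time.
async function incrementCounter() {
  await db.runTransaction(async (tx) => {
    const ref = db.collection('counters').doc('pageViews');
    const snap = await tx.get(ref);
    const current = snap.exists ? snap.data().count : 0;
    tx.set(ref, { count: current + 1 });
  });
}

// Batched write: several writes committed atomically, with no reads involved.
async function publishPosts() {
  const batch = db.batch();
  batch.update(db.collection('posts').doc('postA'), { published: true });
  batch.update(db.collection('posts').doc('postB'), { published: true });
  await batch.commit();
}
```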
So in your scenario, if there's only a single concurrent write coming from the admin app, you won't need transactions. If you have a specific use-case where you are not sure though, it's always best to share that specific use-case and the implementation code with us.
I have a large node in the Realtime Database that I want to delete every day using a scheduled Cloud Function.
Is there a limit on how much data I can delete using Cloud Functions on the Realtime Database? And where can I find the cost of deletes?
I've read the billing doc (link), but I'm not sure where the cost of deletes is mentioned.
I'll start by adding this link.
Combining the information in your link and in the one I've added, the answer is: no, you will not be billed if you just delete data. The important part is: if you just delete it. You will still be billed if, before you delete it, you download it. In other words, if you get a reference to a node in your code and then just perform a remove(ref), you won't be billed.
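As a minimal sketch of that pattern in a scheduled Cloud Function (the node path and schedule are placeholders):

```js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Removes the node in place without ever downloading it, so no download
// bandwidth is billed.
exports.dailyCleanup = functions.pubsub
  .schedule('every day 03:00')
  .onRun(async () => {
    await admin.database().ref('/logs').remove();
    return null;
  });
```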
There is a remote possibility that you could be billed for a huge CPU consumption. This could happen if the node you're deleting is really big, but you can estimate this by testing it and checking the "Usage" tab in your Firebase console, under the "Load" entry. If the load for a test delete is low, you're good and you won't be billed.
I'm scheduling a backup operation on Cloud Firestore. I'm not sure whether, if data changes while the backup function is running, those changes will be reflected in the backup.
For completeness, that's not the behavior I'm looking for, and if the changes would be reflected, is there any way I can lock out the database? Perhaps by dynamically changing the security rules?
Firestore's export feature is not a true "backup". Notice that nowhere in the documentation is the word "backup" ever used. It's just an export, and that export is effectively just querying every collection and writing the documents to a file in a storage bucket.
Data can change during an export, and the export might not contain everything that changed while it was running. You should expect the export to be inconsistent in that case.
Security rules don't affect the export. They just affect web and mobile clients.
It's not really possible to "lock" the entire database, except by preventing your own code from writing to it entirely, which you would have to control yourself.
Is there a way to take incremental backups of Firestore? We need to take regular backups, and taking a full backup every day will only get more expensive over time.
Also, is there any plan to support point-in-time restore for Firestore?
Firestore does not have a backup mechanism. It has an import/export mechanism, which is not really the same thing. If you need backups, especially incremental backups, you will need to implement that yourself or find another solution.
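One do-it-yourself option is an incremental export based on a last-modified timestamp. A rough sketch, assuming every write maintains an updatedAt field and that each delta is written to Cloud Storage as JSON (the field, path, and function names are made up):

```js
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();
const bucket = admin.storage().bucket();

// Query only the documents that changed since the last run and store
// the delta as a JSON file in Cloud Storage.
async function incrementalBackup(collection, since) {
  const snap = await db
    .collection(collection)
    .where('updatedAt', '>', since)
    .get();
  const docs = snap.docs.map((d) => ({ id: d.id, data: d.data() }));
  const file = bucket.file(`backups/${collection}/${Date.now()}.json`);
  await file.save(JSON.stringify(docs));
  return docs.length;
}
```

Each changed document is still billed as a read, but unchanged documents are skipped, which is where the saving of an incremental approach comes from.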
If I try to download or read the file from the Firebase database or Firebase Storage, it will incur unnecessarily large quota costs and be financially unsustainable. And since Firebase only has an in-memory file system with a "tmp" directory, it is impossible to deploy any files there.
The reason I think my request should be very reasonable is that I could accomplish it by literally declaring the entire 80 MB of hashmap data in the code. Maybe I'll write a script that enumerates all those fields in JS and puts them inside index.js itself, something like the sketch below? Then it never has to download from anywhere and won't incur any quota costs. However, this seems like a very desperate solution to what seems to be a simple problem.
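Roughly what I have in mind, with made-up entries and names:

```js
// data.js — generated by a build script and deployed alongside index.js,
// so the data ships with the function and never has to be downloaded.
module.exports = {
  aardvark: { score: 3 }, // made-up entries; the real file would hold
  abacus: { score: 7 },   // the full 80 MB of hashmap data
  // ...
};

// index.js — the module is required once per instance and then stays in
// memory across warm invocations.
const functions = require('firebase-functions');
const lookup = require('./data');

exports.score = functions.https.onRequest((req, res) => {
  const entry = lookup[req.query.word];
  res.json(entry || null);
});
```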
Cross post of https://www.quora.com/How-can-I-keep-objects-stored-in-RAM-with-Cloud-Functions