If you look at https://firebase.google.com/docs/firestore/solutions/delete-collections, you can see the following:
Consistency - the code above deletes documents one at a time. If you
query while there is an ongoing delete operation, your results may
reflect a partially complete state where only some targeted documents
are deleted. There is also no guarantee that the delete operations
will succeed or fail uniformly, so be prepared to handle cases of
partial deletion.
So how do I handle this correctly?
Does it mean "preventing users from accessing this collection while deletion is in progress"?
Or "if the operation is interrupted partway through, call the function again from the failed part to complete the deletion"?
It's suggesting that you should check for failures, and retry until there are no documents remaining (or at least until you are satisfied with the result).
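For example, here is a minimal sketch of such a retry loop using the Admin SDK in a trusted environment (the collection path and the batch size of 300 are arbitrary placeholders, not anything prescribed by the docs):

import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Keep deleting batches until a query for remaining documents comes back empty.
// If a batch commit fails, calling this function again picks up the leftovers.
async function deleteCollection(path: string, batchSize = 300): Promise<void> {
  while (true) {
    const snapshot = await db.collection(path).limit(batchSize).get();
    if (snapshot.empty) {
      return; // nothing left to delete, we are done
    }
    const batch = db.batch();
    snapshot.docs.forEach((doc) => batch.delete(doc.ref));
    await batch.commit();
  }
}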
Related
If every document in a collection is a user resource that is limited, how can you ensure the user does not go over their assigned limit?
My first thought was to take advantage of Firestore triggers to avoid building a real backend, but the triggers sometimes fire more than once even if the input data has not changed. I was comparing the new doc to the old doc and taking action if certain keys did not match, but if GCP fires the same function twice I get double the result, in this case when incrementing or decrementing counts.
The Firestore docs state:
Events are delivered at least once, but a single event may result in multiple function invocations. Avoid depending on exactly-once mechanics, and write idempotent functions.
So in my situation the only solution I can think of is saving the event IDs somewhere and ensuring they have not fired already. Or, even worse, doing a read on each call to count the current docs and adjusting them accordingly (increasing read costs).
What's a smart way to approach this?
If reinvocations (which, while possible, are quite uncommon) are a concern for your use case, you could indeed store the ID of the invocation event, or something less frequent, like (depending on the use case) the source document ID.
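For illustration, a minimal sketch of recording the event ID so a redelivered event is skipped, assuming the 1st-gen firebase-functions API (the trigger path, the processedEvents collection name, and the counter document are made up for this example):

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const onItemWrite = functions.firestore
  .document("users/{userId}/items/{itemId}")
  .onWrite(async (change, context) => {
    // create() throws if a document with this ID already exists,
    // so a re-delivered event is detected and skipped.
    try {
      await db.collection("processedEvents").doc(context.eventId).create({
        processedAt: admin.firestore.FieldValue.serverTimestamp(),
      });
    } catch (e) {
      console.log(`Event ${context.eventId} already processed, skipping`);
      return;
    }

    // Safe to apply the side effect (adjusting a counter) once per event.
    await db.doc(`counters/${context.params.userId}`).set(
      { items: admin.firestore.FieldValue.increment(change.after.exists ? 1 : -1) },
      { merge: true }
    );
  });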
I know what you are probably thinking, "why does it matter? Don't try to over-complicate it just to optimize pricing". In my case, I need to.
I have a collection with millions of records in Firestore, and each document gets updated quite often. Every time one gets updated, I need to do some data cleaning (and more). So I have a function triggered by onUpdate that does that. The function receives two parameters: the document before the update and the document after the update.
My question is:
Because the document is being passed as an argument, does that count as a database read?
The event generated by Cloud Firestore to send to Cloud Functions should not count as an extra read beyond what was done by the client to initially trigger that event.
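In other words, in a sketch like the following (1st-gen firebase-functions API; the path and field names are placeholders), inspecting the before/after snapshots costs nothing extra; only the writes or queries the function itself performs are billed:

import * as functions from "firebase-functions";

export const onRecordUpdate = functions.firestore
  .document("records/{recordId}")
  .onUpdate((change) => {
    // Both snapshots arrive with the event; reading them here is not billed.
    const before = change.before.data();
    const after = change.after.data();

    if (before.status === after.status) {
      return null; // nothing relevant changed
    }
    // Only writes (or additional queries) issued here incur further billing.
    return change.after.ref.update({ statusChangedAt: Date.now() });
  });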
I am using Firestore and trying to remove a race condition in my Flutter app by using a transaction.
I have a subcollection where at most 2 documents should be added.
The race condition means that more than 2 documents may be added, because the client code uses setData. For example:
Firestore.instance.collection('collection').document('document').collection('subCollection').document(subCollectionDocument2).setData({
  'document2': documentName,
});
I am trying to use a transaction to make sure a maximum of 2 documents are added. So if the collection has been changed (for example, a new document was added to the collection) while the transaction runs, the transaction will fail.
But I have read the docs and it seems transactions are used more for race conditions when setting fields in a document, not for adding documents to a subcollection.
For example, if I try to implement:
Firestore.instance.collection('collection').document('document').collection('subCollection').runTransaction((transaction) async {
}),
It gives the error:
error: The method 'runTransaction' isn't defined for the class 'CollectionReference'.
Can a transaction be used to monitor changes to a subcollection?
Does anyone know another solution?
Can a transaction be used to monitor changes to a subcollection?
Transactions in Firestore work by a so-called compare-and-swap operation. In a transaction, you read a document from the database, determine its current state, and then set its new state based on that. When you've done that for the entire transaction, you send the whole package of current-state-and-new-state documents to the server. The server then checks whether the current state in the storage layer still matches what your client started with, and if so it commits the new state that you specified.
Knowing this, the only way it is possible to monitor an entire collection in a transaction is to read all documents in that collection into the transaction. While that is technically possible for small collections, it's likely to be very inefficient, and I've never seen it done in practice. Then again, for just the two documents in your collection it may be totally feasible to simply read them in the transaction.
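For illustration, here is a minimal sketch of reading those two known documents in a transaction (shown with the JavaScript web SDK rather than the Flutter plugin; the document paths and the field being written are just placeholders mirroring the question):

import { getFirestore, doc, runTransaction } from "firebase/firestore";

const db = getFirestore();

// Read both slot documents inside the transaction; if either changes before
// commit, Firestore retries the whole update function with fresh data.
async function claimSecondSlot(name: string): Promise<void> {
  const slot1 = doc(db, "collection/document/subCollection/doc1");
  const slot2 = doc(db, "collection/document/subCollection/doc2");

  await runTransaction(db, async (tx) => {
    const snap1 = await tx.get(slot1);
    const snap2 = await tx.get(slot2);
    if (snap1.exists() && snap2.exists()) {
      throw new Error("Both slots are already taken");
    }
    const target = snap1.exists() ? slot2 : slot1;
    tx.set(target, { document2: name });
  });
}

The same pattern applies with the Flutter plugin's transaction API; only the two fixed document references need to be read and checked.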
Keep in mind though that a transaction only ensures consistent data, it doesn't necessarily limit what a malicious user can do. If you want to ensure there are never more than two documents in the collection, you should look at a server-side mechanism.
The simplest mechanism (infrastructure wise) is to use Firestore's server-side security rules, but I don't think those will work to limit the number of documents in a collection, as Doug explained in his answer to Limit a number of documents in a subcollection in firestore rules.
The most likely solution in that case is (as Doug also suggests) to use Cloud Functions to write the documents in the subcollection. That way you can simply reject direct writes from the client, and enforce any business logic you want in your Cloud Functions code, which runs in a trusted environment.
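For illustration, a rough sketch of that Cloud Functions approach (a callable function; the collection path, field names, and the limit of two are assumptions based on the question). The Admin SDK allows a query inside a transaction, so the function can count the subcollection and reject the write when the limit is reached:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const addSubDocument = functions.https.onCall(async (data, context) => {
  const subCol = db.collection("collection").doc("document").collection("subCollection");

  await db.runTransaction(async (t) => {
    // Server-side transactions may read a whole query, not just single documents.
    const snapshot = await t.get(subCol);
    if (snapshot.size >= 2) {
      throw new functions.https.HttpsError(
        "failed-precondition",
        "Subcollection already has 2 documents"
      );
    }
    t.set(subCol.doc(), { document2: data.documentName });
  });
});

With direct client writes to the subcollection disallowed in security rules, this function becomes the only write path, so the business rule cannot be bypassed.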
I need to delete very large collections in Firestore.
Initially I used client-side batch deletes, but when the documentation changed and started to discourage that with the comments
Deleting collections from an iOS client is not recommended.
Deleting collections from a Web client is not recommended.
Deleting collections from an Android client is not recommended.
https://firebase.google.com/docs/firestore/manage-data/delete-data?authuser=0
I switched to a cloud function as recommended in the docs. The cloud function gets triggered when a document is deleted and then deletes all documents in a subcollection as proposed in the above link in the section on "NODE.JS".
The problem that I am running into now is that the cloud function seems to be able to manage around 300 deletes per second. With the maximum runtime of a cloud function of 9 minutes I can manage up to 162,000 deletes this way. But the collection I want to delete currently holds 237,560 documents, which makes the cloud function time out about halfway through.
I cannot trigger the cloud function again with an onDelete trigger on the parent document, as this one has already been deleted (which triggered the initial call of the function).
So my question is: What is the recommended way to delete large collections in Firestore? According to the docs it's not client side but server side, but the recommended solution does not scale for large collections.
Thanks!
When you have more work than can be performed in a single Cloud Function execution, you will need to either find a way to shard that work across multiple invocations, or continue the work in subsequent invocations after the first. This is not trivial, and you have to put some thought and work into constructing the best solution for your particular situation.
For a sharding solution, you will have to figure out how to split up the document deletes ahead of time, and have your master function kick off subordinate functions (probably via pubsub), passing them the arguments to use to figure out which shard to delete. For example, you might kick off a function whose sole purpose is to delete documents that begin with 'a', another with 'b', and so on, by querying for them and then deleting them.
For a continuation solution, you might just start deleting documents from the beginning, go for as long as you can before timing out, remember where you left off, then kick off a subordinate function to pick up where the prior one stopped.
You should be able to use one of these strategies to limit the amount of work done per function, but the implementation details are entirely up to you to work out.
If, for some reason, neither of these strategies are viable, you will have to manage your own server (perhaps via App Engine), and message (via pubsub) it to perform a single unit of long-running work in response to a Cloud Function.
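As one illustration of the continuation approach (the topic name, batch size, and message shape are all assumptions, not a prescribed pattern), a Pub/Sub-triggered function can delete one batch and then publish a message to itself to continue:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import { PubSub } from "@google-cloud/pubsub";

admin.initializeApp();
const db = admin.firestore();
const pubsub = new PubSub();

export const deleteBatch = functions.pubsub
  .topic("delete-collection")
  .onPublish(async (message) => {
    const { path } = message.json as { path: string };

    // Delete one manageable batch, well within a single invocation's time limit.
    const snapshot = await db.collection(path).limit(300).get();
    if (snapshot.empty) {
      return; // collection is fully deleted
    }
    const batch = db.batch();
    snapshot.docs.forEach((doc) => batch.delete(doc.ref));
    await batch.commit();

    // Hand the remaining work to the next invocation.
    await pubsub.topic("delete-collection").publishMessage({ json: { path } });
  });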
Looking at https://firebase.google.com/docs/reference/js/firebase.firestore.Transaction I see four methods: delete, set, get, update.
I was about to construct a lovely little collection query and pass it to .get, but I see the docs say that .get "Reads the document referenced by the provided DocumentReference."
It appears this means we cannot get a collection, or query a collection, with a Transaction object.
I could query those with the query's .get() method instead of the transaction's .get() method, but if the collection changes out from under me, the transaction will end up in an inconsistent state without retrying.
It seems I am hitting a wall here. Is my understanding correct? Can we not access collections inside a transaction in a consistent way?
Your understanding is correct. When using the web and mobile SDKs, you have to identify the individual documents that you would like to ensure will not change before your transaction is complete. If those documents come from a collection query ahead of time, fine. But think for a moment about how not-scalable it would be if you had to track every document in a (very large) collection in order to complete your transaction.
However, for backend SDKs, you can perform a query inside a transaction and effectively transact on all the documents that were returned by the query, up to the limit of the number of documents in a transaction (500).
You can run queries (not just fetch single documents) in a transaction's get() method, but that's only for server execution. So if you really need to do that (say for maintaining denormalized data's consistency), you can put that code in a cloud function and make use of server-side transactions
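For illustration, a minimal sketch of such a server-side transaction with the Admin SDK, querying a subcollection and writing back a denormalized total (the collection and field names are made up):

import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Recompute a denormalized total from all matching documents, consistently:
// if any of the queried documents change before commit, the transaction retries.
async function recomputeTotal(orderId: string): Promise<void> {
  await db.runTransaction(async (t) => {
    const items = await t.get(
      db.collection("orders").doc(orderId).collection("items")
    );
    const total = items.docs.reduce((sum, d) => sum + (d.get("price") ?? 0), 0);
    t.update(db.collection("orders").doc(orderId), { total });
  });
}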