Can anyone provide some help with exception handling for the Cosmos DB change feed?
Let's say I have 10 documents in the change feed and a loop that iterates through them one by one. Assume an exception is thrown after the 5th document has been processed.
What is going to happen with the changefeed?
So far, it looks to me like the entire change feed batch is swallowed, i.e. the remaining documents after the exception are gone.
I am wondering what the backout strategy is here. Is there a way I can back out the entire batch so I do not lose any changes?
It is an old question, but hopefully others may find this useful.
To handle the error, the recommended pattern is to wrap your code in a try-catch. Catch the error and put the offending document on a queue (dead-letter). Have a separate program deal with the documents that produced the error. This way, if you have a 100-document batch and just one document fails, you do not have to throw away the whole batch.
The second reason is that even if you could keep getting those documents from the change feed, you might lose that particular snapshot of the document: the change feed keeps only the latest version of the document, and in between other processes can come along and change it.
As you keep fixing your code, you will soon find no documents on the dead-letter queue.
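A rough sketch of that pattern (TypeScript Azure Function with a Cosmos DB trigger; the dead-letter output binding and the processDocument helper are placeholders for illustration, not an official API):

import { AzureFunction, Context } from "@azure/functions";

// Hypothetical per-document business logic.
async function processDocument(doc: any): Promise<void> {
  // ... your processing here ...
}

const run: AzureFunction = async function (context: Context, documents: any[]): Promise<void> {
  const failed: any[] = [];
  for (const doc of documents) {
    try {
      await processDocument(doc);
    } catch (err) {
      // Do not rethrow: that would abandon the rest of the batch.
      // Park the failing document (plus the error) for a separate repair process.
      failed.push({ document: doc, error: String(err) });
    }
  }
  // "deadLetter" is an assumed queue output binding defined in function.json.
  if (failed.length > 0) {
    context.bindings.deadLetter = failed;
  }
};

export default run;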
An Azure Function is called automatically by the Change Feed system. If you want to roll back the change feed and control every aspect of it, you should consider using the Change Feed Processor SDK instead.
The recommendation from MS is to add a try-catch in your Cosmos DB trigger function. If any document throws an exception, you have to store it somewhere for later handling.
Once you start storing failed messages in some location, you also have to build metrics, alerts and a re-processing strategy.
Below is my strategy for handling this scenario. One function listens to the DB change feed and pushes the data into a Topic (without any processing). I created multiple subscriptions, so each subscription maintains its own dead-letter queue.
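As a sketch of that layout (TypeScript Azure Function; the output binding name and topic are assumptions declared in function.json):

import { AzureFunction, Context } from "@azure/functions";

// Cosmos DB trigger in, Service Bus topic out - both declared in function.json
// (e.g. an output binding named "topicMessages" pointing at a "person-changes" topic).
const forwardChanges: AzureFunction = async function (context: Context, documents: any[]): Promise<void> {
  // No processing here: each subscription on the topic does its own work
  // and keeps its own dead-letter queue for messages it cannot handle.
  context.bindings.topicMessages = documents;
};

export default forwardChanges;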
Let's say I have a collection called persons and another collection called cities with a field population. When a Person is created in a City, I would like to increment the population field in the corresponding city.
I have two options (rough sketches of both below):
1. Create an onCreate trigger function. Find the city document and increment its population using FieldValue.increment(1).
2. Create an HTTPS callable cloud function to create the person. The cloud function executes a transaction in which the person is created and the population is incremented.
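For concreteness, the two options might look roughly like this with TypeScript Cloud Functions (a cityId field on each person document is assumed; treat it as an untested sketch):

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Option 1: trigger that bumps the counter after the person is created.
export const onPersonCreate = functions.firestore
  .document("persons/{personId}")
  .onCreate((snap) => {
    const cityId = snap.data().cityId; // assumes each person stores its cityId
    return db.doc(`cities/${cityId}`)
      .update({ population: admin.firestore.FieldValue.increment(1) });
  });

// Option 2: callable that creates the person and increments the counter in one transaction.
export const createPerson = functions.https.onCall(async (data) => {
  const personRef = db.collection("persons").doc();
  const cityRef = db.doc(`cities/${data.cityId}`);
  await db.runTransaction(async (t) => {
    t.set(personRef, { name: data.name, cityId: data.cityId });
    t.update(cityRef, { population: admin.firestore.FieldValue.increment(1) });
  });
  return { id: personRef.id };
});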
The first one is simpler and I am using it right now. But, I am wondering if there could be cases where the onCreate is not called due to some glitch...
I am thinking of moving to the second option, but I am wondering if there are any disadvantages. Does an HTTPS callable function cost more?
The only problem I see with the HTTPS callables would be that if something fails you would need to handle that on the client side. That would be (at least for me) a little bit too much logic for the client side.
What I can recommend, after almost 4 years' experience with exactly that problem, is a solution with a virtual queue. I had a long discussion on that theme here and even with the Firebase people at the last in-person Google I/O and Firebase Summit.
Our problem was that those glitches did happen, and sometimes the changes and transactions also failed due to too many requests. After trying every official recommendation, like sharded counters etc., we ended up creating a virtual queue: each onCreate just adds an entry to a Firestore or RTDB list/collection, and another function (run either by cron or by another trigger, that doesn't matter) handles each entry in the queue one by one and starts again for each of them to avoid timeouts and memory limits. We made sure one calculation is small enough for a single function invocation to handle.
This method was the only bullet-proof one that could handle thousands of new entries per second without having an issue. The only downside is that it takes more time than a usual trigger because the entries are processed one by one. If your calculations are small you could do them in batches (that is how we started).
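A minimal sketch of that virtual-queue idea (TypeScript Cloud Functions; the queue collection name and the one-minute schedule are assumptions):

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Each onCreate only enqueues an entry; it does no calculation itself.
export const enqueuePersonCreated = functions.firestore
  .document("persons/{personId}")
  .onCreate((snap) =>
    db.collection("queue").add({ personId: snap.id, cityId: snap.data().cityId })
  );

// A scheduled worker drains the queue one entry at a time.
export const processQueue = functions.pubsub
  .schedule("every 1 minutes")
  .onRun(async () => {
    const entries = await db.collection("queue").limit(50).get();
    for (const entry of entries.docs) {
      const { cityId } = entry.data();
      await db.doc(`cities/${cityId}`)
        .update({ population: admin.firestore.FieldValue.increment(1) });
      await entry.ref.delete(); // remove the entry once it has been applied
    }
  });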
If you look here: https://firebase.google.com/docs/firestore/solutions/delete-collections
you can see the following:
Consistency - the code above deletes documents one at a time. If you query while there is an ongoing delete operation, your results may reflect a partially complete state where only some targeted documents are deleted. There is also no guarantee that the delete operations will succeed or fail uniformly, so be prepared to handle cases of partial deletion.
So how do I handle this correctly?
Does this mean "prevent users from accessing this collection while deletion is in progress"?
Or "if the work is interrupted in the middle, call the function again from the failed part to proceed with the complete deletion"?
So how do I handle this correctly?
It's suggesting that you should check for failures, and retry until there are no documents remaining (or at least until you are satisfied with the result).
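In code that boils down to something like this (TypeScript with the Admin SDK; the batch size is arbitrary):

import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Keep deleting small batches until a query reports no documents left.
async function deleteCollection(path: string, batchSize = 100): Promise<void> {
  while (true) {
    const snapshot = await db.collection(path).limit(batchSize).get();
    if (snapshot.empty) {
      return; // nothing left: the deletion is complete
    }
    const batch = db.batch();
    snapshot.docs.forEach((doc) => batch.delete(doc.ref));
    await batch.commit(); // if this throws, calling deleteCollection again simply resumes
  }
}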
I'm developing a Flutter App and I'm using the Firebase services. I'd like to stick only to using transactions as I prefer consistency over simplicity.
// Option 1: a plain update (a single round trip)
await Firestore.instance.collection('user').document(id).updateData({'name': 'new name'});

// Option 2: the same change wrapped in a transaction
await Firestore.instance.runTransaction((transaction) async {
  transaction.update(Firestore.instance.collection('user').document(id), {'name': 'new name'});
});
Are there any (major) downsides to transactions? For example, are they more expensive (Firebase billing, not computationally)? After all, there might be concurrent changes to the data in the Firestore database, which can result in up to 5 retries.
For reference: https://firebase.google.com/docs/firestore/manage-data/transactions
"You can also make atomic changes to data using transactions. While
this is a bit heavy-handed for incrementing a vote total, it is the
right approach for more complex changes."
https://codelabs.developers.google.com/codelabs/flutter-firebase/#10
With the specific code samples you're showing, there is little advantage to using a transaction. If your document update makes a static change to a document, without regard to its existing data, a transaction doesn't make sense. The transaction you're proposing is actually just a slower version of the update, since it has to round-trip with the server twice in order to make the change. A plain update just uses a single round trip.
For example, if you want to append data to a string, two clients might overwrite each other's changes, depending on when they each read the document. Using a transaction, you can be sure that each append is going to take effect, no matter when the append was executed, since the transaction will be retried with updated data in the face of concurrency.
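For example, in code (TypeScript with the Node.js Admin SDK purely for illustration; a "logs" collection and a "text" field are assumed, and the same idea applies in Flutter):

import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Appending to a string field: the transaction re-reads the current value,
// so concurrent appenders cannot silently overwrite each other.
async function appendToLog(docId: string, entry: string): Promise<void> {
  const ref = db.collection("logs").doc(docId);
  await db.runTransaction(async (t) => {
    const snap = await t.get(ref);
    const current = (snap.data()?.text as string) ?? "";
    t.set(ref, { text: current + entry }, { merge: true });
  });
}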
Typically, you should strive to get your work done without transactions if possible. For example, prefer to use FieldValue.increment() outside of a transaction instead of manually incrementing within a transaction.
Transactions are intended to be used when you have changes to make to a document (or, typically, multiple documents) that must take the current values of its fields into account before making the final write. This prevents two clients from clobbering each others' changes when they should actually work in tandem.
Please read more about transactions in the documentation to better understand how they work. It is not quite like SQL transactions.
Are there any (major) downsides to transactions?
I don't know of any downsides.
For example, are they more expensive (Firebase billing, not computationally)?
No, a transaction costs like any other write operation. For example, if you create a transaction to increase a counter, you'll be charged for only one write operation.
I'm not sure I understand your last question completely, but if a transaction fails, Cloud Firestore retries it for sure.
I need to delete very large collections in Firestore.
Initially I used client-side batch deletes, but when the documentation changed and started to discourage that with the comments
Deleting collections from an iOS client is not recommended.
Deleting collections from a Web client is not recommended.
Deleting collections from an Android client is not recommended.
https://firebase.google.com/docs/firestore/manage-data/delete-data?authuser=0
I switched to a cloud function as recommended in the docs. The cloud function gets triggered when a document is deleted and then deletes all documents in a subcollection as proposed in the above link in the section on "NODE.JS".
The problem that I am running into now is that the cloud function seems to be able to manage around 300 deletes per second. With the maximum cloud function runtime of 9 minutes, I can manage up to 162,000 deletes this way. But the collection I want to delete currently holds 237,560 documents, which makes the cloud function time out about halfway through.
I cannot trigger the cloud function again with an onDelete trigger on the parent document, as this one has already been deleted (which triggered the initial call of the function).
So my question is: What is the recommended way to delete large collections in Firestore? According to the docs it's not client side but server side, but the recommended solution does not scale for large collections.
Thanks!
When you have more work than can be performed in a single Cloud Function execution, you will need to either find a way to shard that work across multiple invocations, or continue the work in subsequent invocations after the first. This is not trivial, and you have to put some thought and work into constructing the best solution for your particular situation.
For a sharding solution, you will have to figure out how to split up the document deletes ahead of time, and have your master function kick off subordinate functions (probably via pubsub), passing each the arguments it needs to figure out which shard to delete. For example, you might kick off a function whose sole purpose is to delete documents that begin with 'a', another for 'b', etc., by querying for them and then deleting them.
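A rough sketch of the sharding idea (TypeScript; the topic name, the collection, and sharding on the first letter of a "name" field are all assumptions):

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import { PubSub } from "@google-cloud/pubsub";

admin.initializeApp();
const db = admin.firestore();
const pubsub = new PubSub();

// Master: fan out one message per shard (here, the first letter of a "name" field).
export const startDelete = functions.https.onRequest(async (_req, res) => {
  const shards = "abcdefghijklmnopqrstuvwxyz".split("");
  await Promise.all(
    shards.map((prefix) => pubsub.topic("delete-shard").publishMessage({ json: { prefix } }))
  );
  res.send("fan-out started");
});

// Worker: delete only the documents belonging to its shard.
// For brevity this assumes each shard fits in a single 500-write batch.
export const deleteShard = functions.pubsub.topic("delete-shard").onPublish(async (msg) => {
  const { prefix } = msg.json;
  const snap = await db.collection("hugeCollection")
    .where("name", ">=", prefix)
    .where("name", "<", prefix + "\uf8ff")
    .get();
  const batch = db.batch();
  snap.docs.forEach((d) => batch.delete(d.ref));
  await batch.commit();
});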
For a continuation solution, you might just start deleting documents from the beginning, go for as long as you can before timing out, remember where you left off, then kick off a subordinate function to pick up where the prior stopped.
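And a rough sketch of the continuation idea, where the worker re-publishes a message to itself when it runs out of time (again TypeScript, names assumed):

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import { PubSub } from "@google-cloud/pubsub";

admin.initializeApp();
const db = admin.firestore();
const pubsub = new PubSub();

export const continueDelete = functions
  .runWith({ timeoutSeconds: 540 }) // the 9-minute maximum
  .pubsub.topic("continue-delete")
  .onPublish(async (msg) => {
    const { path } = msg.json; // collection to delete, passed along each hop
    const deadline = Date.now() + 8 * 60 * 1000; // stop well before the timeout
    while (Date.now() < deadline) {
      const snap = await db.collection(path).limit(300).get();
      if (snap.empty) {
        return; // done: nothing left to delete
      }
      const batch = db.batch();
      snap.docs.forEach((d) => batch.delete(d.ref));
      await batch.commit();
    }
    // Out of time: hand the remaining work to a fresh invocation.
    await pubsub.topic("continue-delete").publishMessage({ json: { path } });
  });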
You should be able to use one of these strategies to limit the amount of work done per function invocation, but the implementation details are entirely up to you to work out.
If, for some reason, neither of these strategies are viable, you will have to manage your own server (perhaps via App Engine), and message (via pubsub) it to perform a single unit of long-running work in response to a Cloud Function.
Let's say I update multiple items in a loop and then call executeQueryAsync() on the ClientContext class, and this call returns an error (the failure callback is invoked). Can I be sure that not a single item was updated of all those I wanted to update? Is there a chance that some of them will get updated and some of them will not? In other words, is this operation transactional? Thank you, I cannot find a single post about it.
I am asking about the CSOM model, not server solutions.
SharePoint handles its internal updates transactionally, so updating a document actually results in multiple calls to the DB, and if one of them fails, the other changes are rolled back so nothing is left half-updated on a failure.
However, that is not made available to us as an external developer. If you create an update that updates 9 items within your executeQueryAsync call and it fails on #7, then the first 6 will not be rolled back. You are going to have to write code to handle the failures and if rolling back is important, then you will have to manually roll back the changes within your code.
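A rough sketch of that manual rollback with JSOM (written as TypeScript here; the list, the field, and the idea of capturing the old values up front are all just illustrative, not an official rollback API):

// Assumes the SharePoint JSOM scripts (sp.js) are already loaded on the page.
declare const SP: any; // JSOM globals, typed loosely for this sketch

function updateWithManualRollback(ids: number[], newTitle: string): void {
  const ctx = SP.ClientContext.get_current();
  const list = ctx.get_web().get_lists().getByTitle("Tasks");
  const items = ids.map((id) => list.getItemById(id));

  // 1. Read the current values first so there is something to roll back to.
  items.forEach((item) => ctx.load(item));
  ctx.executeQueryAsync(() => {
    const originals = items.map((item) => item.get_item("Title"));

    // 2. Queue all updates as one batch.
    items.forEach((item) => { item.set_item("Title", newTitle); item.update(); });
    ctx.executeQueryAsync(
      () => console.log("all updates succeeded"),
      () => {
        // 3. The batch failed part-way: re-apply the captured values ourselves.
        items.forEach((item, i) => { item.set_item("Title", originals[i]); item.update(); });
        ctx.executeQueryAsync(
          () => console.log("rolled back"),
          () => console.error("rollback failed; manual cleanup needed"));
      });
  }, () => console.error("initial read failed"));
}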