Does document creation failure count as a write in Firestore?

I am in a situation where I have to create a document in a collection if it doesn't exist, or delete it if it does.
To solve this, I thought of:
1. Read doc
2. If !doc.exists -> create it
3. Else -> delete it
But maybe it would be cheaper to:
1. Try to create the doc.
2. If that fails because the doc already exists, delete it.
I have been looking at the documentation https://firebase.google.com/docs/firestore/pricing but I can't find anything related to unsuccessful operations. Will I be charged for a document creation if it fails?

The create, even though it failed, still counts as a write operation.
Unfortunately I am unable to provide steps to reproduce this behavior. The answer is based on my own use of Firestore, where I have seen write counts increase with failed writes. It is not clear whether all failure types increase the write count, or what the necessary conditions are; unfortunately, this information is not available in the GCP documentation.
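For illustration, the try-create-then-delete approach could look like this with the Node.js Admin SDK, where create() rejects if the document already exists. This is just a sketch; checking err.code === 6 (gRPC ALREADY_EXISTS) is an assumption about the error shape the client throws:
const admin = require("firebase-admin")
admin.initializeApp()
const db = admin.firestore()

// Toggle a document: try to create it; if it already exists, delete it.
// Per the answer above, even the failed create() is billed as a write.
async function toggleDoc(docRef, data) {
  try {
    await docRef.create(data) // rejects if the document already exists
  } catch (err) {
    if (err.code === 6) { // gRPC status 6 = ALREADY_EXISTS (assumed)
      await docRef.delete()
    } else {
      throw err
    }
  }
}
// usage: toggleDoc(db.collection("aCollection").doc("someId"), { on: true })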

Related

Firebase realtime database limit for delete operations

I'm a Firebase user recently diving into RTDB, and I just found a limits doc explaining the write limit for a single database instance, quoted below:
The limit on write operations per second on a single database. While not a hard limit, if you sustain more than 1,000 writes per second, your write activity may be rate-limited.
In Firestore's security rules, for example, a delete operation falls under the write category, and I guess the same concept applies to other Firebase services. So I want to know exactly whether delete operations count against the write limit for an RTDB instance.
FYI, I'm planning to use the latest Node.js Admin SDK with Cloud Functions to perform a huge number of deletes, using this link's method across a huge number of different paths.
So, if deletes are subject to the RTDB write limit, it seems it would be a critical mistake to deploy this function, even if only a few users are likely to trigger it concurrently. Even a few concurrent invocations would soon max out the per-second write limit, considering how quickly the Firebase Admin SDK iterates through those operations.
Since I have to specify the ID (key) of the path for each removal (so that no nested data is deleted unintentionally), simply deleting the parent path is not applicable to this situation, and would even be really dangerous.
If deletes are not subject to the write limit, then I also want to know whether there is truly no limit at all on delete operations for RTDB! Hope this question reaches the Firebase gurus in the community. Comments are welcome and appreciated! Thank you in advance [:
A delete operation does count as a write operation. If you run 20K delete operations, i.e. 20K separate .remove() calls, simultaneously using Promise.all(), they will each be counted as a unique operation and you'll be rate-limited. The delete requests over the limit will take time to succeed.
Instead, if you are using a Cloud Function, you can create a single object containing all the paths to be deleted and use update() to remove all those nodes in a single write operation. Let's say you have a root node users, each user node has a points node, and you want to remove it from all the users.
const remObject = {
  "user_id_1/points": null, // setting a path to null deletes that node
  "user_id_2/points": null
}
await admin.database().ref("users").update(remObject)
Although you would need to know the IDs of all users, this will remove the points node from all users in a single operation, and hence you won't be rate-limited. Another benefit is that all those nodes are guaranteed to be deleted together, unlike individual requests, where some may fail.
If you run a separate `remove()` operation for each user, as shown below, it'll count as N writes, where N is the number of operations.
const userIDs = [] // IDs of the users whose points node should be removed
const removeRequests = userIDs.map(u => admin.database().ref(`users/${u}/points`).remove())
await Promise.all(removeRequests)
// userIDs.length writes, all of which count towards the rate limit
I ran some test functions with the code above and, no surprise, both adding and removing 20K nodes using distinct operations with Promise.all() took over 40 seconds, while a single update operation with one object took just 3.
Do note that the single-update method may be limited by the "Size of a single write request to the database", which is 16 MB for the SDKs and 256 MB for the REST API. In such cases, you may have to break the object down into smaller parts and use multiple update() operations.
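If you do hit that request-size limit, a small helper along these lines could split the object into multiple smaller updates. This is only a sketch: chunkedUpdate is a hypothetical name, and the 1,000-path chunk size is an arbitrary assumption, since the real bound is the request size in bytes:
// Split a large multi-path update object into chunks and apply them in turn.
async function chunkedUpdate(ref, updates, chunkSize = 1000) {
  const entries = Object.entries(updates)
  for (let i = 0; i < entries.length; i += chunkSize) {
    const chunk = Object.fromEntries(entries.slice(i, i + chunkSize))
    await ref.update(chunk) // one write operation per chunk
  }
}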

Firebase Firestore, Delete Collection with a Callable Cloud Function

If you look here, https://firebase.google.com/docs/firestore/solutions/delete-collections
you can see the following:
Consistency - the code above deletes documents one at a time. If you query while there is an ongoing delete operation, your results may reflect a partially complete state where only some targeted documents are deleted. There is also no guarantee that the delete operations will succeed or fail uniformly, so be prepared to handle cases of partial deletion.
So how do I handle this correctly?
Does it mean "prevent users from accessing this collection while deletion is in progress"?
Or "if the work is interrupted by someone accessing the collection midway, call the function again from the failed part to complete the deletion"?
It's suggesting that you should check for failures, and retry until there are no documents remaining (or at least until you are satisfied with the result).
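As a rough sketch of that retry-until-empty approach with the Node.js Admin SDK (the batch size of 100 is an arbitrary assumption):
// Delete a collection in batches, looping until a query finds no documents.
// Re-running this after a partial failure simply resumes where it left off.
async function deleteCollection(db, collectionPath, batchSize = 100) {
  const collRef = db.collection(collectionPath)
  while (true) {
    const snapshot = await collRef.limit(batchSize).get()
    if (snapshot.empty) break // nothing left to delete
    const batch = db.batch()
    snapshot.docs.forEach((doc) => batch.delete(doc.ref))
    await batch.commit() // if this throws, run the loop/function again
  }
}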

Regarding firebase reads and writes

I have some questions regarding Firebase which I think many beginners have.
Let's say I have this query:
var collecRef = FirebaseFirestore.instance.collection('aCollection').where("a", isEqualTo: "b").orderBy(//some more code);
If I execute this, how many reads will it cost if:
There are 5 documents which match the condition (a == b)?
There are no documents which match the condition?
Now, if I want to update the data in a document using setData() with merge = true, would it cost a write even if the data is unchanged? For example, in a document I have saved the user name of a user, and in my app users can change their names.
If they try to update their name with setData() and they haven't entered a DIFFERENT name (the name is the same), would it cost a write?
One document received from a query costs one read. That is all you need to know. The conditions don't matter, and the size of the collection doesn't matter. Just the number of documents received.
One call to setData costs one write. It doesn't matter what you write, or the current contents of the document.
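So yes, writing back an identical name is still billed as a write. If you want to avoid the no-op write, one option (a sketch, shown with the Node.js Admin SDK for consistency with the other examples) is to read first and skip the write when nothing changed; the trade-off is that you now always pay one read:
// Only write when the stored name actually differs from the new one.
async function setNameIfChanged(docRef, newName) {
  const snap = await docRef.get() // always costs one read
  if (snap.exists && snap.get("name") === newName) return // skip the write
  await docRef.set({ name: newName }, { merge: true }) // costs one write
}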

firebase database equivalent of MySQL transaction

I'm looking for something that lets me thread a single object through multiple updates to multiple firebase.database.References before performing a commit, then commit it all at the end; if the commit is unsuccessful, no changes are made to any of my Firebase references.
Does this exist? I thought firebase.database.Transaction would be similar, since it is an atomic update and it involves a callback which says whether it has been committed or not. But the update function, I believe, only works on a single object, and it doesn't seem to return a transaction ID or anything I could pass to other firebase.database.Transactions.
UPDATE
This Transaction's update seems to return a Transaction, which could perhaps lend itself to chaining: https://firebase.google.com/docs/reference/js/firebase.firestore.Transaction
However, this is different from the other (Realtime Database) Transaction:
Firebase Database transactions perform an update to a single location based on the current value of that same location. They explicitly do not work across multiple locations, since that would limit their scalability. Sometimes developers work around this by performing a transaction higher up in their JSON tree (at the first common point of the locations). I'd recommend against that, as that would limit the scalability even further.
The only way to efficiently update multiple locations with one API call is with a multi-location update. However, this does not have reading of the current value built in.
So if you want to update multiple locations based on their current value, you'll have to perform the read operation in your application code, turn the result into a multi-location update, and then use security rules to ensure all of those updates follow your application's rules. This is quite a non-trivial approach, so I rarely see it done in practice. See my answer here for an example: Is the way the Firebase database quickstart handles counts secure?
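As a hedged sketch of that pattern with the Node.js Admin SDK - read current values in application code, then apply one atomic multi-location update. The users/points structure and transferPoints are illustrative assumptions, and note that the reads are not atomic with the write, which is exactly why the security rules mentioned above are needed:
const admin = require("firebase-admin")
admin.initializeApp()
const db = admin.database()

// Move points between two users: reads happen in app code, and the write is
// a single multi-location update, so both paths change or neither does.
async function transferPoints(fromUser, toUser, amount) {
  const fromSnap = await db.ref(`users/${fromUser}/points`).once("value")
  const toSnap = await db.ref(`users/${toUser}/points`).once("value")
  await db.ref("users").update({
    [`${fromUser}/points`]: fromSnap.val() - amount,
    [`${toUser}/points`]: toSnap.val() + amount,
  })
}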

Cosmos DB ChangeFeed Exception Handling

With the Cosmos DB change feed, can anyone please provide some help with exception handling?
Let's say I have 10 documents in the change feed and a loop that iterates through the documents one by one. Assume an exception happens after the 5th document has been processed.
What is going to happen with the change feed?
So far, it looks to me like the entire change feed batch is swallowed, i.e. the remaining documents after the exception are gone.
I am just wondering what the back-out strategy is here. Is there a way I can completely back out the entire batch so I do not lose any changes?
It is an old question, but hopefully others may find it useful.
To handle the error, the recommended pattern is to wrap your code in a try-catch. Catch the error and put the failing document on a queue (dead-letter). Have a separate program deal with the documents that produced errors. This way, if you have a 100-document batch and just one document fails, you do not have to throw away the whole batch.
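A minimal sketch of that pattern in a Node.js Azure Function bound to the Cosmos DB change feed; processDocument and sendToDeadLetter are hypothetical helpers standing in for your business logic and your dead-letter store:
// Cosmos DB trigger: catch per-document failures so one bad document
// doesn't swallow the rest of the batch.
module.exports = async function (context, documents) {
  for (const doc of documents) {
    try {
      await processDocument(doc) // hypothetical business logic
    } catch (err) {
      context.log.error(`Failed to process ${doc.id}: ${err.message}`)
      await sendToDeadLetter(doc, err) // hypothetical dead-letter write
    }
  }
}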
Another reason to dead-letter instead of retrying through the change feed is that you may lose the latest snapshot of a document: the change feed keeps only the last version of each document, and in between retries, other processes can come along and change it.
As you keep fixing your code, you will soon find no documents on the dead-letter queue.
Azure Functions are called automatically by the change feed system. If you want to roll the change feed back and control every aspect of it, you should consider using the Change Feed Processor SDK instead.
Microsoft's recommendation is to add a try-catch in your Cosmos DB trigger function. If any document throws an exception, you have to store it somewhere (a dead-letter location).
Once you start storing failed messages in some location, you have to build metrics, alerts, and a reprocessing strategy.
Below is my strategy for handling this scenario: one function listens to the DB change feed and pushes the data into a "Topic" (without any processing). I created multiple subscriptions, so each subscription maintains its own dead-letter queue.
