Do Firestore Function Triggers count as reads? - firebase

I know what you're probably thinking: "why does it matter? Don't over-complicate things just to optimize pricing." In my case, I need to.
I have a collection with millions of records in Firestore, and each document gets updated quite often. Every time one gets updated, I need to do some data cleaning (and more). So I have a function triggered by onUpdate that does that. The function receives two parameters: the document before the update and the document after the update.
My question is:
Because the document is passed as an argument, does that count as a database read?

The event generated by Cloud Firestore and sent to Cloud Functions should not count as an extra read beyond what was done by the client to trigger that event in the first place.
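The point of the answer above is that the before/after snapshots arrive inside the event payload itself, so inspecting them costs nothing extra. A minimal sketch in TypeScript that models the two snapshots as plain objects (the function name is illustrative, and the real trigger would receive these via the firebase-functions SDK):

```typescript
// Model of the data an onUpdate trigger already receives in its event:
// no further database reads are needed to compare the two states.
type DocData = Record<string, unknown>;

// Return the keys whose values differ between the before and after snapshots.
function changedKeys(before: DocData, after: DocData): string[] {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  return [...keys].filter(
    (k) => JSON.stringify(before[k]) !== JSON.stringify(after[k])
  );
}

// Example: only "status" changed, so the cleanup can be limited to that field.
const beforeDoc = { status: "pending", owner: "alice" };
const afterDoc = { status: "done", owner: "alice" };
console.log(changedKeys(beforeDoc, afterDoc)); // ["status"]
```

Since both states are already in memory, narrowing the cleanup to the changed fields adds no reads at all.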

Related

Using Firestore Triggers to Manage User Document Count

If every document in a collection is a user resource that is limited, how can you ensure the user does not go over their assigned limit?
My first thought was to take advantage of Firestore triggers to avoid building a real backend, but triggers sometimes fire more than once even if the input data has not changed. I was comparing the new doc to the old doc and taking action if certain keys did not match, but if GCP fires the same function twice I get double the result; in this case, counts get incremented or decremented twice.
The Firestore docs state:
Events are delivered at least once, but a single event may result in multiple function invocations. Avoid depending on exactly-once mechanics, and write idempotent functions.
So in my situation the only solution I can think of is saving the event IDs somewhere and ensuring they have not fired already. Or, even worse, doing a read on each call to count the current docs and adjust accordingly (increasing read costs).
What's a smart way to approach this?
If reinvocations (which, while possible, are quite uncommon) are a concern for your use case, you could indeed store the ID of the invocation event or, depending on the use case, something coarser such as the source document ID.
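The idea of storing invocation event IDs can be sketched as follows; an in-memory Set stands in here for whatever persistent store (e.g. a small Firestore collection keyed by event ID) you would use in practice, and all names are illustrative:

```typescript
// In production this would be a persisted collection keyed by event ID;
// a Set is used here only to keep the sketch self-contained.
const processedEventIds = new Set<string>();

// Returns true exactly once per event ID, making the counter update idempotent.
function shouldProcess(eventId: string): boolean {
  if (processedEventIds.has(eventId)) return false;
  processedEventIds.add(eventId);
  return true;
}

let userDocCount = 0;

// Simulate the same event being delivered twice (at-least-once delivery).
for (const eventId of ["evt-1", "evt-1", "evt-2"]) {
  if (shouldProcess(eventId)) userDocCount += 1;
}
console.log(userDocCount); // 2, not 3
```

With the guard in place, a duplicate delivery becomes a no-op instead of a double increment.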

How to initiate a single calculation from batched document onWrite triggers

I have a subcollection with documents. If any of them are added or removed, I need to trigger a calculation to derive an overall count (amongst other things) and store that in the parent document.
In order to listen for the document changes, I have a Firestore onWrite background function. From this function, I would like to trigger the calculation via Pub/Sub. However, the system will sometimes update many subcollection documents at once. If I delete 100 documents, I do not want the calculation to be triggered 100 times. That would be a real waste of resources.
So I'm wondering, is there already some sort of mechanism in place that would batch these triggers or the pubsub topic publishing, or do I need to do something specific to make this happen?
If there are other ways to better solve this problem I'm open to suggestions of course. I could possibly even introduce a Redis store if that helps.
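There is no built-in batching of Firestore triggers, but the coalescing described above can be built by hand: each trigger invocation only marks the parent as dirty, and a single recalculation runs once the quiet period has passed. A pure-logic sketch of that debounce (class and method names are illustrative; in practice the dirty marker would live in the parent document or in Redis, and the check would run from a scheduled function):

```typescript
// Debouncer: many trigger invocations within `windowMs` collapse into at
// most one pending recalculation. The marker is kept in memory here only
// to keep the sketch self-contained.
class Debouncer {
  private dirtySince: number | null = null;
  constructor(private windowMs: number) {}

  // Called from each onWrite invocation.
  markDirty(now: number): void {
    if (this.dirtySince === null) this.dirtySince = now;
  }

  // Called periodically (e.g. by a scheduled function). Returns true when
  // the window has elapsed and exactly one recalculation should run.
  shouldRecalculate(now: number): boolean {
    if (this.dirtySince === null) return false;
    if (now - this.dirtySince < this.windowMs) return false;
    this.dirtySince = null;
    return true;
  }
}

const debouncer = new Debouncer(5000);
// 100 deletions arrive within a few milliseconds...
for (let t = 0; t < 100; t++) debouncer.markDirty(1000 + t);
// ...but they collapse into a single recalculation.
console.log(debouncer.shouldRecalculate(2000)); // false: still inside the window
console.log(debouncer.shouldRecalculate(7000)); // true: one coalesced run
console.log(debouncer.shouldRecalculate(7001)); // false: nothing pending anymore
```

The same shape works whether the "run now" check is a scheduled Cloud Function, a delayed Pub/Sub message, or a Redis key with a TTL.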

Is it possible to do batched writes to add to existing fields?

My app has a for loop that writes data to my Firestore database.
However, right now, when I click my update button, Firestore updates the documents one by one, using transactions.
Thus, this results in me having to read every single document before being able to update it, which is extremely inefficient.
Is it possible for batched writes to perform an update feature similar to how transactions do?
For my case, the field I intend to update is a number, so I am wondering if it's possible to update the field by adding to it. This is what I currently do inside a transaction:
await transaction.update(stockListDocRef,
{'Num': outerStockListSnapshot.data['Num'] + Add});
You can use FieldValue.increment(x) to increment a field value in any sort of update operation, including batches.
See also: FieldValue.increment for Cloud Firestore in Flutter
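The difference between the read-then-write in the question and FieldValue.increment can be modeled in a few lines. Here a plain map stands in for the document store, and applyIncrement mirrors the server-side semantics: the update is applied from the delta alone, so the client never needs to read the field first.

```typescript
// A toy document: the point is that an increment is applied on the server
// from the delta alone, so no prior read of "Num" is required.
const stockDoc: Record<string, number> = { Num: 10 };

// Server-side semantics of FieldValue.increment(delta), modeled locally.
function applyIncrement(
  target: Record<string, number>,
  field: string,
  delta: number
): void {
  target[field] = (target[field] ?? 0) + delta;
}

// A batch of increments, e.g. one per document in the for loop from the
// question, needs no reads at all.
for (const delta of [5, -2, 1]) applyIncrement(stockDoc, "Num", delta);
console.log(stockDoc.Num); // 14
```

In the real SDK the same effect is `batch.update(stockListDocRef, {'Num': FieldValue.increment(Add)})`, committed once for the whole batch.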

How to abort Document Creation in Firestore onCreate trigger

I am trying to limit each of my subscribers to creating a certain number of documents.
In my Cloud Function I have an onCreate trigger, and even if I return null when the document count exceeds my limit, Firestore still creates the document.
I dug into the Firestore and Cloud Functions documentation but could not find any example of how to cancel/abort a Cloud Function trigger.
Bonus question:
Is there any way to alert the customer that he/she has exceeded the document-creation limit? I thought I could update a separate alert document in my trigger function, then read it and display it to the customer. Do you know any way to listen for the onCreate trigger's result and display the error in real time?
Any help please?
Many Thanks.
What you're trying to do isn't possible. Cloud Functions respond to events that occur within some product (such as Firestore). The event indicates that some change already happened. All you can do is choose what you want to do in response to that event. You can't prevent the change from taking place. The best you could do is undo the change by performing the opposite of what already happened. So, if a document was created, and that violates whatever rules you want to enforce, then you can simply delete the document.
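That undo-and-alert flow can be sketched as a pure decision function: given the user's document count after the create and their limit, decide whether the just-created document should be rolled back and what status to surface. All names here are illustrative; in a real function the delete and the status write would be Firestore calls, and the client would watch the status document with a snapshot listener to show the error in real time.

```typescript
// Outcome of the onCreate handler: either keep the document, or undo the
// creation and record a status the client can observe.
interface Decision {
  action: "keep" | "delete";
  status: { ok: boolean; message: string };
}

// countAfterCreate includes the document that was just created.
function enforceLimit(countAfterCreate: number, limit: number): Decision {
  if (countAfterCreate <= limit) {
    return { action: "keep", status: { ok: true, message: "within limit" } };
  }
  // The create already happened; the best we can do is reverse it and
  // write a status document the client listens to.
  return {
    action: "delete",
    status: { ok: false, message: `limit of ${limit} documents exceeded` },
  };
}

console.log(enforceLimit(5, 10).action);  // "keep"
console.log(enforceLimit(11, 10).action); // "delete"
```

Note that the client may briefly see the document before it is deleted, which is inherent to the after-the-fact nature of triggers; security rules are the only way to reject the write up front.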

Best way to trigger function when data is being read. Google Cloud Functions

I am trying to figure out the best way to execute my cloud function for my firestore database, when data is being read.
I have a field on all of my documents with the timestamp of when the document was last used, this is used to delete documents that haven't been used in two days. The deletion is done by another cloud function.
I want to update this field when the document is being used, i.e. read from my database. What would be the best way to do this?
onWrite(), onCreate(), onUpdate() and onDelete() is not an option.
My database is used by an Android app written in Kotlin.
There are no triggers for reading data. That would not be scalable to provide. If you require a last read time, you will have to control access to your database via some middleware component that you write, and have all readers query that instead. It will be responsible for writing the last read time back to the database.
Bear in mind that Firestore documents can only be written about once every second, so if you have a lot of access to a document, you may lose data.
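A sketch of that middleware idea: every read goes through one function that both returns the document and records the access time, and the last-read write is throttled so a hot document is not written more than about once per second. The maps here are in-memory stand-ins for Firestore, and all names are illustrative:

```typescript
// In-memory stand-ins for the real database; in production `docs` would be
// Firestore and the lastRead update would be a document write.
const docs = new Map<string, { data: string; lastRead?: number }>();
const lastWritten = new Map<string, number>();

docs.set("doc-1", { data: "hello" });

// All clients read through this function instead of querying directly.
// The lastRead write is throttled to respect the ~1 write/sec/doc guidance.
function readDocument(id: string, nowMs: number): string | undefined {
  const doc = docs.get(id);
  if (!doc) return undefined;
  const prev = lastWritten.get(id);
  if (prev === undefined || nowMs - prev >= 1000) {
    doc.lastRead = nowMs; // persisted write in the real version
    lastWritten.set(id, nowMs);
  }
  return doc.data;
}

readDocument("doc-1", 0);
readDocument("doc-1", 500);  // throttled: no second write
readDocument("doc-1", 1500); // allowed again
console.log(docs.get("doc-1")!.lastRead); // 1500
```

For a two-day expiry the timestamp does not need per-read precision anyway, so throttling (or even recording reads at much coarser granularity) loses nothing.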
