Two StreamBuilders using one stream(): will it increase the number of reads? - firebase

I have two screens (Scaffolds) and I'm thinking of using a single stream() in both StreamBuilders. My question is: will I be charged for double reads, or will the reads be equivalent to one read?

The Firestore clients try to minimize the amount of data they have to read from the server. It depends on your code of course, but in most cases there will be a lot of reuse of data that was already read, so you won't get charged again.
Some examples:
If both builders are using the same stream, the data for the second one will come from what was already read into memory for the first builder. So there will be no charges for the second builder.
If both builders use their own stream, there is typically also a lot of reuse. If the streams are listening at the same time, the SDK will reuse the data between them where possible.
And if you have disk caching enabled (which it is by default on iOS and Android), the streams will even share results if they're not active at the same time.
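For illustration, here is a minimal Dart sketch of the first case, with a hypothetical 'tickets' collection: two widgets sharing one snapshots() stream. The FlutterFire snapshots() stream is a broadcast stream, so both builders can listen to it at the same time without triggering a second charged read.

    import 'package:cloud_firestore/cloud_firestore.dart';
    import 'package:flutter/material.dart';

    // One shared stream; both builders below listen to the same instance.
    final Stream<QuerySnapshot<Map<String, dynamic>>> ticketsStream =
        FirebaseFirestore.instance.collection('tickets').snapshots();

    class TicketCount extends StatelessWidget {
      const TicketCount({super.key});

      @override
      Widget build(BuildContext context) {
        return StreamBuilder<QuerySnapshot<Map<String, dynamic>>>(
          stream: ticketsStream,
          builder: (context, snapshot) {
            if (!snapshot.hasData) return const CircularProgressIndicator();
            return Text('${snapshot.data!.docs.length} tickets');
          },
        );
      }
    }

    class TicketList extends StatelessWidget {
      const TicketList({super.key});

      @override
      Widget build(BuildContext context) {
        return StreamBuilder<QuerySnapshot<Map<String, dynamic>>>(
          stream: ticketsStream, // same stream; data comes from memory
          builder: (context, snapshot) {
            if (!snapshot.hasData) return const CircularProgressIndicator();
            return ListView(
              children: snapshot.data!.docs.map((d) => Text(d.id)).toList(),
            );
          },
        );
      }
    }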

Related

Does {merge: true} send the entire document to the backend in Firestore?

I have a specific feature I'm using that requires many (thousands) of small pieces of indexed data. Rather than fetching a thousand documents per startup and incurring unnecessary costs, I would like to simply download the whole giant document at once, and merge changes by key.
This means the document might approach the 1MB limit.
I'm curious about bandwidth, though. I'm wondering whether Firestore intelligently sends/receives only the most economical part of the document. For example, if I have 2000 entries in this one document and I update one using {merge: true}, how much bandwidth will my browser use? Would it use only what's needed, sending only the merged part rather than merging in the background and sending the whole document?
And what about onSnapshot? For example, if I'm listening for changes in this large document and it changes, is the onSnapshot logic behind the scenes smart enough to download only the necessary (changed) portion of the document rather than the full 1 MB?
My users will be on data and I don't want to waste their data.
Thanks!
When you call documentRef.set(..., { merge: true }) the Firestore SDK sends exactly what you pass as ... to the server. The same happens when you call it without { merge: true }.
An onSnapshot listener always receives the complete document, regardless of what/how much has changed in that document.
So by merging the many small documents into a single document you are trading the cost of document reads for a cost in bandwidth consumption. Whether this trade-off is worth it depends entirely on your use case and data. I recommend using the pricing calculator to determine the exact values.
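As a rough Dart sketch of what that means in practice (the document path 'stats/all' and field names are hypothetical): only the fields you pass to set() travel over the wire on a write, but every snapshot delivered to a listener contains the whole document.

    import 'package:cloud_firestore/cloud_firestore.dart';

    final DocumentReference<Map<String, dynamic>> statsDoc =
        FirebaseFirestore.instance.doc('stats/all');

    // Upload: only this one nested entry is sent, not the ~1 MB document.
    Future<void> updateOneEntry(String key, int value) {
      return statsDoc.set(
        {'entries': {key: value}},
        SetOptions(merge: true),
      );
    }

    // Download: each snapshot contains the complete document, regardless
    // of how small the change that triggered it was.
    void watchStats() {
      statsDoc.snapshots().listen((snapshot) {
        final entries = snapshot.data()?['entries'] as Map<String, dynamic>?;
        print('got ${entries?.length ?? 0} entries');
      });
    }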

flutter firebase with streambuilder: Is server fee going to increase exponentially?

I'm a beginner in Flutter/Firebase apps [: And here I've got a simple issue that is not easy to figure out.
When we use StreamBuilder in an app, we usually see awesome synchronization between the UI and the DB in real time. I really feel joyful when using StreamBuilder. It seems like only beautiful things happen with StreamBuilder.
Meanwhile, a suspicious question popped up. When we put print('hello world!'); inside the StreamBuilder, we can see the run console printing the phrase every few milliseconds, whereas it would otherwise be printed just once. That suggests RAM usage is drastically increased. When it comes to DB synchronization, we can easily guess that using StreamBuilder leads to a huge client-server communication fee, which in my case is the Firebase fee.
So, here is my question.
Can I feel free to use StreamBuilder when the stream is connected to a DB (in my case, Firebase)?
I'm worried about the communication fee between the UI and Firebase, because StreamBuilder literally seems to use a huge amount of energy every millisecond (IMO, additionally, a server fee), unlike normal builders. Especially when a collection is so long that reading it once costs a lot, the fee would keep increasing, because the StreamBuilder has to check thousands of Firestore documents just to evaluate a single condition.
I guess many backend-connected Flutter methods use StreamBuilder, so someone can surely figure out how much we're going to pay Google when we use StreamBuilder. I know it's quite an ambiguous question, but I hope you understand. [:
Content coming from the Firebase database, whether through a FutureBuilder or a StreamBuilder, is only charged for the query the first time it is processed. After that, if the response to the same query is unchanged, you do not pay that cost again; the client simply redisplays the list stored in its cache.
Also check that the stream isn't being created in something like setState; if it is, of course the StreamBuilder is triggered again.
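A minimal sketch of that second point, assuming a hypothetical 'messages' collection: create the stream once (for example in initState) instead of inside build(), so that setState-driven rebuilds reuse the same listener rather than attaching a new one each time.

    import 'package:cloud_firestore/cloud_firestore.dart';
    import 'package:flutter/material.dart';

    class MessagesView extends StatefulWidget {
      const MessagesView({super.key});

      @override
      State<MessagesView> createState() => _MessagesViewState();
    }

    class _MessagesViewState extends State<MessagesView> {
      // Created once; rebuilds triggered by setState reuse this listener.
      late final Stream<QuerySnapshot<Map<String, dynamic>>> _messages;

      @override
      void initState() {
        super.initState();
        _messages =
            FirebaseFirestore.instance.collection('messages').snapshots();
      }

      @override
      Widget build(BuildContext context) {
        return StreamBuilder<QuerySnapshot<Map<String, dynamic>>>(
          stream: _messages, // NOT created inline in build()
          builder: (context, snapshot) {
            if (!snapshot.hasData) return const CircularProgressIndicator();
            return Text('${snapshot.data!.docs.length} messages');
          },
        );
      }
    }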

Firestore Document "Too much contention": such thing in realtime database?

I've built an app that lets people sell tickets for events. Whenever a ticket is sold, I update the document that represents the ticket of the event in Firestore to update the stats.
At peak times, this document is updated quite a lot (maybe 10x a second). Sometimes transactions on this item document fail due to "too much contention", which results in inaccurate stats since the stat update is dropped. I guess this is the result of the high load on the document.
To resolve this problem, I am considering moving the stats of the items from the item document in Firestore to the Realtime Database. Before I do, I want to be sure that this will actually resolve the contention problem I had on my item document. Can the Realtime Database handle such a load better than a Firestore document? Is it considered good practice to move such data to the Realtime Database?
The issue you're running into is a documented limit of Firestore: the sustained write rate to a single document is limited to 1 per second. You might be able to burst writes faster than that for a while, but eventually the writes will fail, as you're seeing.
Realtime Database has a different documented limit, measured as the total volume of data written to the entire database: 64 MB per minute. If you want to move to Realtime Database, you should be OK as long as you stay under that limit.
If you are effectively implementing a counter or some other data aggregation in Firestore, you should also look into the distributed counter solution, which works around the per-document write limit by sharding data across multiple documents. Your client code would then have to read all of these document shards in order to present the data.
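For reference, a minimal Dart sketch of that distributed counter idea (the shard paths and shard count are hypothetical, and the shard documents are assumed to already exist with a numeric 'count' field): writes go to a random shard, and reads sum all the shards.

    import 'dart:math';
    import 'package:cloud_firestore/cloud_firestore.dart';

    const int numShards = 10;
    final FirebaseFirestore db = FirebaseFirestore.instance;

    // Each write lands on a random shard, spreading the 1-write/sec
    // per-document limit across numShards documents.
    Future<void> incrementCounter() {
      final shard = Random().nextInt(numShards);
      return db
          .doc('counters/tickets/shards/$shard')
          .update({'count': FieldValue.increment(1)});
    }

    // Reading the total means fetching and summing every shard.
    Future<int> readCounter() async {
      final snapshot = await db.collection('counters/tickets/shards').get();
      return snapshot.docs
          .fold<int>(0, (sum, doc) => sum + (doc.get('count') as num).toInt());
    }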
As for whether or not any one of these is a "good practice", that's a matter of opinion, which is off topic for Stack Overflow. Do whatever works for your use case. I've heard of people successfully using either one.
At peak times, this document is updated quite a lot (maybe 10x a second). Sometimes transactions on this item document fail due to "too much contention"
This is happening because Firestore cannot handle such a rate. According to the official documentation regarding quotas for writes and transactions:
Maximum write rate to a document: 1 per second
Sometimes it might work for two or even three writes per second, but at some point it will definitely fail. Ten writes per second is way too much.
To resolve this problem, I am considering moving the stats of the items from the item document in Firestore to the Realtime Database.
That's a solution that even I use for such cases.
According to the official documentation regarding usage and limits in the Firebase Realtime Database, there is no such per-document write limitation there. But it's up to you to decide whether it fits your needs or not.
There is one more thing you need to take into consideration: a distributed counter, as sketched above. It can solve your problem for sure.

Firestore Realtime Updates 1M Limit

When using Firestore and subscribing to document updates, the documentation states a limit of 1M concurrent mobile/web connections per database.
https://firebase.google.com/docs/firestore/quotas#realtime_updates
Is that a hard limit (enforced/throttled in code)? Or is it a theoretical limit (like you're safe up to 1M, then things get dicey)? Is it possible to get an uplift?
Trying to understand how to support a large user base without needing to shard the database (which is one of the advantages of Firestore). Even at 5M users, it seems you would start having problems because you'd probably hit times when >20% of those users were on your app simultaneously.
As you already noticed, the maximum size of a single document in Firestore is 1 MiB. Trying to store a large number of objects (maps) that may exceed this limitation is generally considered bad design.
You should reconsider the logic of your app and think about the reason why you need more than 1 MiB in a single document, rather than each object being its own document. To be able to use Firestore, you should change the way you are holding the data from a single document to a collection. In the case of collections, there is no such limitation; you can add as many documents as you want. According to the official documentation regarding the Cloud Firestore data model:
Cloud Firestore is optimized for storing large collections of small documents.
IMHO, you should take advantage of this feature.
For details, I recommend you see my answer from this post where I have explained some practices regarding storing data in arrays (documents), maps or collections.
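As a minimal sketch of that restructuring (the 'entries' collection name is hypothetical): each object becomes its own small document instead of one entry in a giant map, so the per-document size limit stops being a concern.

    import 'package:cloud_firestore/cloud_firestore.dart';

    final CollectionReference<Map<String, dynamic>> entries =
        FirebaseFirestore.instance.collection('entries');

    // One small document per object instead of one huge map document.
    Future<void> saveEntry(String key, Map<String, dynamic> data) {
      return entries.doc(key).set(data);
    }

    Future<Map<String, dynamic>?> loadEntry(String key) async {
      final snapshot = await entries.doc(key).get();
      return snapshot.data();
    }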
Edit:
I'm afraid it is not possible without sharding. In this case, sharding will work for sure, so in my opinion it's certainly a reasonable option.

Firestore order of writes

We've been using the Realtime Database to save some data from mobile devices (iOS, Android, and now web). I earlier asked whether the order in which other clients see the data is guaranteed to be the same order in which a client wrote it (here: Does Firebase guarantee that data set using updateValues or setValue is available in the backend as one atomic unit?; the title is a bit misleading, but the answer is there).
The answer was yes, and now we're migrating to Firestore and I'm wondering if the same applies to Firestore too?
So, if I write documents 1, 2, and 3 from client A, is it guaranteed that client N will observe the writes (given that there is a suitable listener) in the same order in which client A wrote them?
Does this apply to Cloud Functions too? We're writing 3 pieces of data to separate documents and then writing a fourth document to trigger a function that does some processing. So is it guaranteed that the 3 documents written earlier will be available when the function is triggered?
Note that the 4 documents are NOT written in the same transaction or batch, but as separate document.create calls.
It would be catastrophically bad if the order of writes was not maintained within an individual client. The client would not have a strong internal understanding of the state of the system after such a series of writes, and it would have to read every document back in order to validate the contents written.
So you can expect that the order will be maintained. However, if you aren't using a transaction, there are no guarantees about the order of writes to a document coming from multiple clients.
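If the triggered function must be able to rely on the other three documents being present, one option (a sketch with hypothetical paths, not the asker's actual schema) is to commit all four writes in a single WriteBatch, which is atomic:

    import 'package:cloud_firestore/cloud_firestore.dart';

    final FirebaseFirestore db = FirebaseFirestore.instance;

    // All four writes commit together or not at all, so a function
    // triggered by the 'trigger' document can rely on parts a, b, and c.
    Future<void> writeJobAtomically() {
      final batch = db.batch();
      batch.set(db.doc('jobs/job1/parts/a'), {'value': 1});
      batch.set(db.doc('jobs/job1/parts/b'), {'value': 2});
      batch.set(db.doc('jobs/job1/parts/c'), {'value': 3});
      batch.set(db.doc('jobs/job1/trigger'), {'ready': true});
      return batch.commit();
    }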
