With the new Firestore from Firebase, I discovered that I have poor knowledge of Observables.
My problem is the following:
I get some data with db.collection('room').
If I don't listen to the observable with a subscription, do I fetch the document? (I think so).
For every change in my collection "room", is it considered a "new document read" by Firestore?
If I have duplicated Observables which return db.collection('room') in my app, will I have X calls to the Firestore database or just one?
Thanks!
If I don't listen to the observable with a subscription, do I fetch the document? (I think so).
When you call var ref = db.collection('room'), ref is not really an observable; it is a reference to the 'room' collection. Creating this reference does not perform any data reads (from the network or from disk).
When you call ref.get() or ref.onSnapshot() then you are fetching the documents from the server.
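For illustration, a minimal sketch (web SDK v8 style, matching the question): creating the reference costs nothing, and reads only happen once get() or onSnapshot() runs.

var db = firebase.firestore();

// Creating a reference performs no reads.
var ref = db.collection('room');

// A one-time fetch: every document returned counts as a read.
ref.get().then(function (snapshot) {
  snapshot.forEach(function (doc) {
    console.log(doc.id, doc.data());
  });
});

// A realtime listener: the initial snapshot counts as reads, and later
// changes are billed per changed document.
var unsubscribe = ref.onSnapshot(function (snapshot) {
  console.log('room has', snapshot.size, 'documents');
});

// Call unsubscribe() when the listener is no longer needed.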
For every change in my collection "room", is it considered a "new document read" by Firestore?
If you are listening to the whole collection (no where() or .orderBy() clauses) and you have an active onSnapshot() listener then yes, you will be charged for a document read operation each time a new document is added, changed, or deleted in the collection.
If I have duplicated Observables which return db.collection('room') in my app, will I have X calls to the Firestore database or just one?
If you are listening to the same Cloud Firestore data in two places you will only make one call to the server and be charged for the read operations one time. There's no cost/performance penalty to attaching multiple listeners to one reference.
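As a small sketch of that last point (again web SDK v8 style), two listeners attached to the same reference share one underlying stream:

var ref = firebase.firestore().collection('room');

// Both listeners are fed from the same server stream,
// so the documents are only read (and billed) once.
ref.onSnapshot(function (snap) { console.log('listener A:', snap.size); });
ref.onSnapshot(function (snap) { console.log('listener B:', snap.size); });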
Related
I use Google Firestore for my iOS app built in Swift/SwiftUI and would like to implement the Snapshot listeners feature to my app.
I want to list all documents in the debts collection in realtime by using snapshot listeners. Every document in this collection has a subcollection debtors, which I want to get in realtime for each debts document as well. Each document in debtors has a field userId, which refers to a DocumentID in the users collection, which I would also love to have a realtime connection on (for example, when a user changes his name I would love to see it instantly in the debt entity inside the list). This means I must initialize 2 more snapshot listeners for each document in the debts collection. I'm concerned that this is too many open connections once I have around 100 debts in the list. I can't come up with any idea apart from doing just one-time fetches.
Has any of you ever dealt with this kind of nested snapshot listeners? Do I have a reason to worry?
This is my Firestore db
Debts
  document
    - description
    - ...
    - debtors (subcollection)
      - userId
      - amount
      - ...
Users
  document
    - name
    - profileImage
    - email
I uploaded this gist where you can see how I operate with Firestore right now.
https://gist.github.com/michalpuchmertl/6a205a66643c664c46681dc237e0fb5d
If you want to read all debtors documents anywhere in the database with a given value for userId, you can use a collection group query to do so.
In Swift that'd look like:
db.collectionGroup("debtors").whereField("userId", isEqualTo: "uidOfTheUser").getDocuments { (snapshot, error) in
// ...
}
This will read from any collection named debtors. You'll have to add the index for this yourself, and set up the proper security rules. Both of those are covered in the Firestore documentation on collection group queries.
I am writing a web app on Firebase and have the following Firestore schema and data structure:
db.collection('users').doc({userid}) // Each doc stores data under 'userinfo' index, which is an object (map).
db.collection('posts').doc({postid}) // Each doc contains 'userinfo', which is the data about the person who posted.
db.collection('saved').doc({userid}) // Each doc stores data under 'saved' index, which is an array of carbon copy of a document in 'posts' collection.
I am thinking about writing below cloud functions:
- Cloud function A: Listen to updates in one of the docs in the 'users' collection, and update each of the docs in the 'posts' collection that contains the previous userinfo.
- Cloud function B: Listen to updates in one of the docs in the 'posts' collection, and update each of the docs in the 'saved' collection that contains the previous postinfo.
The complication here is, the cloud function A, once triggered, will update multiple documents in the 'posts' collection, each of which will again be a trigger. If the user has written 100 posts, then there can be 100 triggers at once.
To handle such a case, which of the following is the natural next step for me? I'm asking because the answer depends on how Firebase Cloud Functions handles this kind of situation, and I don't have much knowledge about that at the moment:
a) Write function B as a transaction, because Firebase Cloud Functions will handle this situation by queuing each of the triggers in some order.
b) Write function B to still listen to the update and reflect it to the 'saved' collection, but not as a transaction to avoid massive backlogs caused by database lockups.
c) Rethink the database structure and/or cloud function logic to avoid such situation from the beginning.
There might be a right or wrong answer (or is there one in this case?), but I just wanted to get some guidance on direction before actually writing the code. Any advice? Thanks a lot in advance!
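For concreteness, here is a rough sketch of what cloud function A might look like (1st-gen Cloud Functions for Firebase, Node.js). The userinfo.uid field and the function name are assumptions made for illustration, not part of the schema above:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Sketch of "cloud function A": when a user document changes, copy the new
// userinfo map into every post written by that user.
// Assumes each post stores the author's uid at userinfo.uid (illustrative).
exports.syncUserinfoToPosts = functions.firestore
  .document('users/{userId}')
  .onUpdate(async (change, context) => {
    const newUserinfo = change.after.get('userinfo');

    const posts = await admin.firestore()
      .collection('posts')
      .where('userinfo.uid', '==', context.params.userId)
      .get();

    // A batched write is limited to 500 operations, so a user with more
    // than 500 posts would need chunking.
    const batch = admin.firestore().batch();
    posts.forEach((doc) => batch.update(doc.ref, { userinfo: newUserinfo }));
    return batch.commit();
  });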
Let's say I do a query against a Firestore collection over a date range or something. If I get an observable to the set of documents and iterate through it to build up a local collection, will it re-read all the data from Firestore every time there is a change in Firestore? Say this observable comes from a where clause that matches 500 documents, and I iterate through, doing something:
this.firestoreObservable$.subscribe(documents => {
  documents.forEach(async doc => {
    // do something
  })
})
If one field on one document changes on Firestore, will that count as another 500 document reads? If so (ouch!), what would the recommended best practice be to keep from spending so many reads?
Thanks.
No. If only one document changes, then it will cost only one read. The entire set of documents is cached in memory as long as the query is actively listening to updates, and the SDK will deliver you the cached results in addition to whatever actually changed.
If the query ends and a new one starts up, then you will be charged for the full set of results again.
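As a small illustration (web SDK v8 style; the collection and field names here are made up), snapshot.docChanges() exposes only the documents that were added, modified, or removed since the previous snapshot, while the rest are served from the listener's in-memory cache:

var db = firebase.firestore();

const unsubscribe = db.collection('rooms')
  .where('date', '>=', start) // 'rooms', 'date', and 'start' are illustrative
  .onSnapshot(snapshot => {
    snapshot.docChanges().forEach(change => {
      // change.type is 'added', 'modified', or 'removed'
      console.log(change.type, change.doc.id);
    });
  });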
I am implementing a one-to-one chat app using firestore in which there is a collection named chat such that each document of a collection is a different thread.
When the user opens the app, the screen should display all threads/conversations of that user, including those which have new messages (just like in WhatsApp). Obviously one method is to fetch all documents from the chat collection which are associated with this user.
However, this seems like a very costly operation: the user might have only a few updated threads (threads with new messages), but I have to fetch all the threads.
Is there an optimized, less costly way to fetch only those threads which have new messages, or more precisely, threads which are not present in the user's device cache (either newly created or modified)?
Each document in the chat collection has these fields:
senderID: (id of the user who initiated the thread/conversation)
receiverID: (id of the other user in the conversation)
messages: [],
lastMsgTime: (timestamp of last message in this thread)
Currently, to load all threads of a given user, I am running the following two queries:
const userID = firebase.auth().currentUser.uid
firebase.firestore().collection('chat').where('senderId', '==', userID)
firebase.firestore().collection('chat').where('receiverId', '==', userID)
and finally I am merging the docs returned by these two queries into an array to render in a FlatList.
In order to know whether a specific thread/document has been updated, the server will have to read that document, which is the charged operation that you're trying to avoid.
The only common way around this is to have the client track when it was last online, and then do a query for documents that were modified since that time. But if you want to show both existing and new documents, this would have to be a separate query, which means that it'd end up in a separate area of the cache. So in that case you'll have to set up your own offline storage on top of Firestore's, which is more work than I'm typically willing to do.
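For illustration, a hedged sketch of that "modified since" query, assuming the client keeps track of its last successful sync time itself (getLastSyncTime() below is a hypothetical helper; the field names follow the schema above):

const lastSyncTime = getLastSyncTime() // hypothetical: a timestamp the client persists locally

firebase.firestore().collection('chat')
  .where('senderID', '==', userID)
  .where('lastMsgTime', '>', lastSyncTime)
  .get()
  .then(snapshot => {
    // Only threads with messages newer than the last sync are returned,
    // but this result set is cached separately from the full-thread query,
    // as noted above.
  })

A mirrored query on receiverID would be needed as well, just like in the two queries shown earlier.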
I have some RxFire code that listens to a Firestore collection query (representing channels) and, for each of the results, listens to a Realtime Database ref for documents (representing messages in that channel).
The problem I'm running into is that the Realtime Database documents are re-downloaded every time the Firestore query changes, even if they're for a path/reference that hasn't changed.
Here's some pseudo-code:
// Imports assumed for context (RxFire + RxJS):
import { collection } from 'rxfire/firestore';
import { list } from 'rxfire/database';
import { combineLatest } from 'rxjs';
import { switchMap } from 'rxjs/operators';

collection(channelsQuery).pipe(
  // Emits the full array of channels whenever the query changes
  switchMap(channels => {
    return combineLatest(
      channels.map(channel =>
        // list() emits the full set of messages for a given channel;
        // getMessagesRef() is the app's own helper returning an RTDB ref
        list(getMessagesRef(channel)),
      ),
    );
  }),
)
Imagine the following scenario:
Query initially emits 3 Firestore channel documents
Observables are created for corresponding Realtime Database refs for those 3 channels, which emit their message documents
A new Firestore document is added that matches the original query, which now emits 4 channel documents
The previous observables for Realtime Database are destroyed, and new ones are created for the now 4 channels, re-downloading and emitting all the data it already had for the previous 3.
Obviously this is not ideal as it causes a lot of redundant reads on the Realtime Database. What's the best practice in this case? Keep in mind that when a channel is removed, I would like to destroy the corresponding observable, which switchMap already does.