How do I know if there are more documents left to get from a firestore collection? - firebase

I'm using Flutter and Firebase. I use pagination, max 5 documents per page. How do I know if there are more documents left to get from a Firestore collection? I want to use this information to enable/disable a next-page button presented to the user.
limit: 5 (5 documents each time)
orderBy: "date" (newest first)
startAfterDocument: latestDocument (just a variable that holds the latest document)
This is how I fetch the documents.
collection.limit(5).orderBy("date", descending: true).startAfterDocument(latestDocument).get()
I thought about checking if the number of docs received from Firestore is equal to 5 and then assuming there are more docs to get. But this will not work if there are a total of n * 5 docs in the collection.
I thought about getting the last document in the collection, storing it, and comparing it to every doc in the batches I get; if there is a match then I know I've reached the end, but this means one excess read.
Or maybe I could keep on getting docs until I get an empty list and assume I've reached the end of the collection.
I still feel there is a much better solution to this.
Let me know if you need more info, this is my first question on this account.

There is no flag in the response to indicate there are more documents. The common solution is to request one more document than you need/display, and then use the presence of that last document as an indicator that there are more documents.
This is also what the database would have to do to include such a flag in its response, which is probably why this isn't an explicit option in the SDK.
You might also want to check the documentation on keeping a distributed count of the number of documents in a collection as that's another way to determine whether you need to enable the UI to load a next page.
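For example, a minimal sketch of that approach (written in the web/Node SDK syntax used elsewhere on this page; collectionRef, "date" and pageSize stand in for your own names, and the same idea maps directly onto the Flutter plugin's limit and startAfterDocument):
const pageSize = 5;

async function fetchNextPage(latestDocument) {
  // ask Firestore for one more document than we intend to display
  let query = collectionRef.orderBy("date", "desc").limit(pageSize + 1);
  if (latestDocument) {
    query = query.startAfter(latestDocument);
  }
  const snapshot = await query.get();

  // the presence of the extra document tells us another page exists
  const hasMore = snapshot.docs.length > pageSize;
  const pageDocs = snapshot.docs.slice(0, pageSize); // only show pageSize docs
  // remember the last *displayed* doc as the cursor for the next page
  const nextCursor = pageDocs.length ? pageDocs[pageDocs.length - 1] : latestDocument;

  return { pageDocs, hasMore, nextCursor };
}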

Here's a way to get a large amount of data from a Firestore collection:
let latestDoc = null; // stores the last doc from the previous query
const dataArr = []; // stores the data fetched from Firestore
let loadMore = true; // tracks whether there is more data to fetch

const initialQuery = async () => {
  const first = db
    .collection("recipes-test")
    .orderBy("title")
    .startAfter(latestDoc || 0) // 0 sorts before any string, so the first query starts at the beginning
    .limit(10);
  const data = await first.get();
  data.docs.forEach((doc) => {
    dataArr.push(doc.data()); // push the document data into the array
  });
  if (data.empty) {
    // no more documents: stop the loop
    loadMore = false;
  } else {
    // remember the latest doc so the next query starts after it
    latestDoc = data.docs[data.docs.length - 1];
  }
};

// run the queries through this function so we can actually await each batch
const run = async () => {
  // keep looping until all the docs have been fetched
  while (loadMore) {
    console.log({ loadMore });
    await initialQuery();
  }
};

Related

Why is it not possible to orderBy on different fields in Cloud Firestore and how can I work around it?

I have a collection in firebase cloud firestore called 'posts' and I want to show the most liked posts in the last 24h on my web app.
The post documents have a field called 'like_count' (number) and another field called 'time_posted' (timestamp).
I also want to be able to limit the results to apply pagination.
I tried to apply a filter to only get the posts posted in the last 24 hours and then ordering them by the 'like_count' and then the 'time_posted' since I want the posts with the most likes to appear first.
postsRef.where("time_posted", ">", twentyFourHoursAgo)
.orderBy("like_count", "desc")
.orderBy("time_posted", "desc")
.limit(10)
However, I quickly found out that it is not possible to filter and then sort by two different fields.
(See the Limitations part of the documentation for Order and limit data with Cloud Firestore)
It states:
Invalid: Range filter and first orderBy on different fields
I thought about sorting the results by 'like_count' in the frontend, but this won't work properly because I don't have all the documents. And getting all the documents is infeasible for a large number of daily posts.
Is there an easy work-around I am missing or how can I go about this?
When performing a query, Firestore must be able to traverse an index in a continuous fashion.
This introduction video is a little outdated (because "OR" queries are now possible using the "in" operator) but it does give a good visualization of what Firestore is doing as it runs a query.
If your query was just postsRef.orderBy("like_count", "desc").limit(10), Firestore would load up the index it has for a descending "like_count", pluck the first 10 entries and return them.
To handle your query, it would have to pluck an entry off the descending "like_count" index, compare it to your "time_posted" requirement, and either discard it or add it to a list of valid entries. Once it has all of the recent posts, it then needs to sort the results as you specified. As these steps don't make use of a continuous read of an index, it is disallowed.
The solution would be to build your own index from the recent posts and then pluck the results off of that. Because doing this on the client is ill-advised, you should instead make use of a Cloud Function to do the work for you. The following code makes use of a Callable Cloud Function.
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

const MS_TWENTY_FOUR_HOURS = 24 * 60 * 60 * 1000;

exports.getRecentTopPosts = functions.https.onCall(async (data, context) => {
  // unless otherwise stated, return only 10 entries
  const limit = Number(data.limit) || 10;
  const postsRef = admin.firestore().collection("posts");

  // OPTIONAL CODE SEGMENT: Check Cached Index

  const twentyFourHoursAgo = Date.now() - MS_TWENTY_FOUR_HOURS;
  const recentPostsSnapshot = await postsRef
    .where("time_posted", ">", twentyFourHoursAgo)
    .get();

  const orderedPosts = recentPostsSnapshot.docs
    .map(postDoc => ({
      snapshot: postDoc,
      like_count: postDoc.get("like_count"),
      time_posted: postDoc.get("time_posted")
    }))
    .sort((p1, p2) => {
      const deltaLikes = p2.like_count - p1.like_count; // descending sort based on like_count
      if (deltaLikes !== 0) {
        return deltaLikes;
      }
      return p2.time_posted - p1.time_posted; // descending sort based on time_posted
    });

  // OPTIONAL CODE SEGMENT: Save Cached Index

  return orderedPosts
    .slice(0, limit)
    .map(post => ({
      _id: post.snapshot.id,
      ...post.snapshot.data()
    }));
});
If this code is expected to be called by many clients, you may wish to cache the index to save it getting constantly rebuilt by inserting the following segments into the function above.
// OPTIONAL CODE SEGMENT: Check Cached Index
if (!data.skipCache) { // allow option to bypass cache
  const cachedIndexSnapshot = await admin.firestore()
    .doc("_serverCache/topRecentPosts")
    .get();

  const oneMinuteAgo = Date.now() - 60000;

  // if the index was created in the past minute, reuse it
  if (cachedIndexSnapshot.get("timestamp") > oneMinuteAgo) {
    const recentPostMetadataArray = cachedIndexSnapshot.get("posts");
    const recentPostIdArray = recentPostMetadataArray
      .slice(0, limit)
      .map((postMeta) => postMeta.id);

    const postDocs = await fetchDocumentsWithId(postsRef, recentPostIdArray); // see https://gist.github.com/samthecodingman/aea3bc9481bbab0a7fbc72069940e527

    // postDocs is not ordered, so we need to be able to find each entry by its ID
    const postDocsById = {};
    for (const doc of postDocs) {
      postDocsById[doc.id] = doc;
    }

    return recentPostIdArray
      .map(id => {
        // may be undefined if not found (i.e. recently deleted)
        const postDoc = postDocsById[id];
        if (!postDoc) {
          return null; // deleted post, up to you how to handle
        } else {
          return {
            _id: postDoc.id,
            ...postDoc.data()
          };
        }
      });
  }
}
// OPTIONAL CODE SEGMENT: Save Cached Index
if (!data.skipCache) { // allow option to bypass cache
  await admin.firestore()
    .doc("_serverCache/topRecentPosts")
    .set({
      timestamp: Date.now(),
      posts: orderedPosts
        .slice(0, 25) // cache the maximum expected amount
        .map(post => ({
          id: post.snapshot.id,
          like_count: post.like_count,
          time_posted: post.time_posted,
        }))
    });
}
Other improvements you could add to this function include:
A field mask - i.e. instead of returning every part of the post documents, return just the title, like count, time posted and the author (see the sketch after this list).
Variable post age (instead of 24 hours)
Variable minimum likes count
Filter by author
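For the field mask, the final map in the function above could be trimmed to something like this (a sketch only; title and author are assumed field names):
return orderedPosts
  .slice(0, limit)
  .map(post => ({
    _id: post.snapshot.id,
    title: post.snapshot.get("title"), // assumed field name
    author: post.snapshot.get("author"), // assumed field name
    like_count: post.like_count,
    time_posted: post.time_posted
  }));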

Flutter & Firebase Get more than 10 Firebase Documents into a Stream<List<Map>>

With Flutter and Firestore, I am trying to get more than 10 documents into a Stream<List>. I can do this with a .where clause on a collection mapping the QuerySnapshot. However, the 10 limit is a killer.
I'm using the provider package in my app. So, in building a stream in Flutter with a StreamProvider, I can return:
1. A Stream<List<Map>> from the entire collection: too expensive. 200-plus docs on these collections and too many users. Need to get more efficient.
2. A Stream<List<Map>> using a .where from a Collection, which returns a Stream of a List capped at 10... doesn't cut the mustard.
3. A Stream<Map> from a Document, which returns 1 stream of 1 document.
I need something in between 1 and 2.
I have a Collection with up to 500 Documents, and the user will choose any possible combination of those 500 to view. The user assembles class rosters to view their lists of users.
So I'm looking for a way to get individual streams of, say 30 documents, and then compile them into a List. But I need this List<Stream<Map>> to be a Stream itself so each individual doc is live, and I can also filter and sort this list of Streams. I'm using the Provider package, and if possible would like to stay consistent with that. Here's where I am currently stuck:
So, my current effort:
Future<Stream<List<AttendeeData>>> getStreams() async {
  List<Stream<AttendeeData>> getStreamsOutput = [];
  for (var i = 0; i < teacherRosterList.length; i++) {
    Stream thisStream = await returnTeacherRosterListStream(facility, teacherRosterList[i]);
    getStreamsOutput.add(thisStream);
  }
  return StreamZip(getStreamsOutput).asBroadcastStream();
}
Feels like I'm cheating below: I get an await error if I put the snapshot directly in Stream thisStream above as Stream is not a future if I await, and if I don't await, it moves too fast and gets a null error.
Future<Stream<AttendeeData>> returnTeacherRosterListStream(String thisFacility, String thisID) async {
  return facilityList.doc(thisFacility).collection('attendance').doc(thisID).snapshots().map(_teacherRosterListFromSnapshot);
}
}
Example of how I'm mapping in _teacherRosterListFromSnapshot (not having any problem here):
AttendeeData _teacherRosterListFromSnapshot(DocumentSnapshot doc) {
  // return snapshot.docs.map((doc) {
  return AttendeeData(
    id: doc.data()['id'] ?? '',
    authorCreatedUID: doc.data()['authorCreatedUID'] ?? '',
  );
}
My StreamProvider Logic and the error:
return MultiProvider(
  providers: [
    StreamProvider<List<AttendeeData>>.value(
      value: DatabaseService(
        teacherRosterList: programList,
        facility: user.claimsFacility,
      ).getStreams()),
  ]
Error: The argument type 'Future<Stream<List>>' can't be assigned to the parameter type 'Stream<List>'.
AttendeeData is my Map Class name.
So, the summary of questions:
Can I even do this? I'm basically Streaming a List of Streams of Maps....is this a thing?
If I can, how do I do it?
a. I can't get this into the StreamProvider because getStreams is a Future...how can I overcome this?
I can get the data in using another method from StreamProvider, but it's not behaving like a Stream and the state isn't updating. I'm hoping to just get this into Provider, as I'm comfortable there, and I can manage state very easily that way. However, beggars can't be choosers.
Solved this myself, and since there is a dearth of good start to finish answers, I submit my example for the poor souls who come after me trying to learn these things on their own. I'm a beginner, so this was a slog:
Objective:
You have any number of docs in a collection and you want to submit a list of any number of docs by their doc number and return a single stream of a list of those mapped documents. You want more than 10 (firestore limit on .where query), less than all the docs...so somewhere between a QuerySnapshot and a DocumentSnapshot.
Solution: We're going to get a list of QuerySnapshots, combine them, map them and spit them out as a single stream. So we're getting 10 each in chunks (the Firestore max) and then some odd number left over. I plug mine into a Provider so I can get it whenever and wherever I want.
So from my provider I call this as the Stream value:
Stream<List<AttendeeData>> filteredRosterList() {
  var chunks = [];
  for (var i = 0; i < teacherRosterList.length; i += 10) {
    chunks.add(teacherRosterList.sublist(i, i + 10 > teacherRosterList.length ? teacherRosterList.length : i + 10));
  } // break a list of whatever size into chunks of 10

  List<Stream<QuerySnapshot>> combineList = [];
  for (var i = 0; i < chunks.length; i++) {
    combineList.add(*[point to your collection]*.where('id', whereIn: chunks[i]).snapshots());
  } // get a list of the streams, which will have 10 each

  CombineLatestStream<QuerySnapshot, List<QuerySnapshot>> mergedQuerySnapshot = CombineLatestStream.list(combineList);
  // now we combine all the streams....but it'll be a list of QuerySnapshots.
  // You'll want to look closely at the map, as it iterates, consolidates and returns a single stream of List<AttendeeData>.
  return mergedQuerySnapshot.map(rosterListFromTeacherListDocumentSnapshot);
}
Here's a look at how I mapped it for your reference (took out all the fields for brevity):
List<AttendeeData> rosterListFromTeacherListDocumentSnapshot(List<QuerySnapshot> snapshot) {
  List<AttendeeData> listToReturn = [];
  snapshot.forEach((element) {
    listToReturn.addAll(element.docs.map((doc) {
      return AttendeeData(
        id: doc.data()['id'] ?? '',
        authorCreatedUID: doc.data()['authorCreatedUID'] ?? '',
      );
    }).toList());
  });
  return listToReturn;
}

What's the best way to paginate and filters large set of data in Firebase?

I have a large Firestore collection with 10,000 documents.
I want to show these documents in a table by paging and filtering the results at 25 at a time.
My idea, to limit the "reads" (and therefore the costs), was to request only 25 documents at a time (using the 'limit' method), and to load the next 25 documents at the page change.
But there's a problem. In order to show the number of pages I have to know the total number of documents and I would be forced to query all the documents to find that number.
I could opt for an infinite scroll, but even in this case I would never know the total number of results that my filter has found.
Another option would be to request all documents at the beginning and then paging and filtering using the client.
So, what is the best way to show data in this type of situation while optimizing performance and costs?
Thanks!
You will find in the Firestore documentation a page dedicated to Paginating data with query cursors.
I paste here the example which "combines query cursors with the limit() method".
var first = db.collection("cities")
  .orderBy("population")
  .limit(25);

return first.get().then(function (documentSnapshots) {
  // Get the last visible document
  var lastVisible = documentSnapshots.docs[documentSnapshots.docs.length - 1];
  console.log("last", lastVisible);

  // Construct a new query starting at this document,
  // get the next 25 cities.
  var next = db.collection("cities")
    .orderBy("population")
    .startAfter(lastVisible)
    .limit(25);
});
If you opt for an infinite scroll, you can easily know if you have reached the end of the collection by looking at the value of documentSnapshots.size. If it is under 25 (the value used in the example), you know that you have reached the end of the collection.
If you want to show the total number of documents in the collection, the best is to use a distributed counter which holds the number of documents, as explained in this answer: https://stackoverflow.com/a/61250956/3371862
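As a rough sketch of that check, using the documentSnapshots from the example above:
const pageSize = 25;
// fewer than pageSize documents means we have reached the end of the collection
const reachedEnd = documentSnapshots.size < pageSize;
if (reachedEnd) {
  // disable the "next" button / stop the infinite scroll here
}
// note: if the total count is an exact multiple of pageSize, the last useful
// fetch returns exactly pageSize docs and the following one returns 0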
Firestore does not provide a way to know how many results would be returned by a query without actually executing the query and reading each document. If you need a total count, you will have to somehow track that yourself in another document. There are plenty of suggestions on Stack Overflow about counting documents in collections.
Cloud Firestore collection count
How to get a count of number of documents in a collection with Cloud Firestore
However, the paging API itself will not help you. You need to track it on your own, which is just not very easy, especially for flexible queries that could have any number of filters.
My guess is you would be using Mat-Paginator and the next button is disabled because you cannot specify the exact length? In that case or not, a simple workaround for this is to get (pageSize +1) documents each time from the Firestore sorted by a field (such as createdAt), so that after a new page is loaded, you will always have one document in the next page which will enable the "next" button on the paginator.
What worked best for me:
Create a simple query
Create a simple pagination query
Combine both (after validating each one works separately)
Simple Pagination Query
const queryHandler = query(
  collection(db, 'YOUR-COLLECTION-NAME'),
  orderBy('ORDER-FIELD-GOES-HERE'),
  startAt(0),
  limit(50)
)
const result = await getDocs(queryHandler)
which will return the first 50 results (ordered by your criteria)
Simple Query
const queryHandler = query(
  collection(db, 'YOUR-COLLECTION-NAME'),
  where('FIELD-NAME', 'OPERATOR', 'VALUE')
)
const result = await getDocs(queryHandler)
Note that the result object has both a query field (with the relevant query) and a docs field (containing the actual data).
So... combining both will result with:
const queryHandler = query(
  collection(db, 'YOUR-COLLECTION-NAME'),
  where('FIELD-NAME', 'OPERATOR', 'VALUE'),
  orderBy('FIELD-NAME'),
  startAt(0),
  limit(50)
)
const result = await getDocs(queryHandler)
Please note that the field in the where clause and in orderBy must be the same one! Also, it is worth mentioning that you may be required to create an index (for some use cases) or that this operation will fail while using equality operators and so on.
My tip: inspect the error itself; it contains a detailed description of why the operation failed and what should be done in order to fix it.
Firebase V9 functional approach. Don't forget to enable persistence so you won't get huge bills. Don't forget to use the where() function if some documents have restrictions in rules; Firestore will throw an error if even one document is not allowed to be read by the user. In the case below, documents have to have isPublic = true.
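As a minimal sketch (assuming the modular v9 SDK and a standard app config), the db used below could be initialized with persistence enabled like this:
import { initializeApp } from "firebase/app"
import { getFirestore, enableIndexedDbPersistence } from "firebase/firestore"

const app = initializeApp({ /* your Firebase config */ })
export const db = getFirestore(app)

enableIndexedDbPersistence(db).catch((err) => {
  // 'failed-precondition': multiple tabs open; 'unimplemented': unsupported browser
  console.warn("Persistence not enabled:", err.code)
})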
firebase.ts
function paginatedCollection(collectionPath: string, initDocumentsLimit: number, initQueryConstraint: QueryConstraint[]) {
  const data = vueRef<any[]>([]) // Vue 3 Ref<T> object; you could change it to a simple array
  let snapshot: QuerySnapshot<DocumentData>
  let firstDoc: QueryDocumentSnapshot<DocumentData>
  let unSubSnap: Unsubscribe
  let docsLimit: number = initDocumentsLimit
  let queryConst: QueryConstraint[] = initQueryConstraint

  const onPagination = (option?: "endBefore" | "startAfter" | "startAt") => {
    if (option && !snapshot) throw new Error("The first call to onPagination has to have no arguments.")

    let que = queryConst
    if (option === "endBefore") {
      que = [...que, limitToLast(docsLimit), endBefore(snapshot.docs[0])]
    } else {
      que = [...que, limit(docsLimit)]
    }
    if (option === "startAfter") que = [...que, startAfter(snapshot.docs[snapshot.docs.length - 1])]
    if (option === "startAt") que = [...que, startAt(snapshot.docs[0])]

    const q = query(collection(db, collectionPath), ...que)
    const unSubscription = onSnapshot(q, snap => {
      if (!snap.empty && !option) { firstDoc = snap.docs[0] }
      if (option === "endBefore") {
        const firstDocInSnap = JSON.stringify(snap.docs[0])
        const firstSaved = JSON.stringify(firstDoc)
        if (firstDocInSnap === firstSaved || snap.empty || snap.docs.length < docsLimit) {
          return onPagination()
        }
      }
      if (option === "startAfter" && snap.empty) {
        onPagination("startAt")
      }
      if (!snap.empty) {
        snapshot = snap
        data.value = []
        snap.forEach(docSnap => {
          const doc = docSnap.data()
          doc.id = docSnap.id
          data.value = [...data.value, doc]
        })
      }
    })

    // detach the previous listener before keeping the new one
    if (unSubSnap) unSubSnap()
    unSubSnap = unSubscription
  }

  function setLimit(documentsLimit: number) {
    docsLimit = documentsLimit
  }

  function setQueryConstraint(queryConstraint: QueryConstraint[]) {
    queryConst = queryConstraint
  }

  function unSub() {
    if (unSubSnap) unSubSnap()
  }

  return { data, onPagination, unSub, setLimit, setQueryConstraint }
}

export { paginatedCollection }
How to use example in Vue 3 in TypeScript
const { data, onPagination, unSub } = paginatedCollection("posts", 8, [where("isPublic", "==", true), where("category", "==", "Blog"), orderBy("createdAt", "desc")])
onMounted(() => onPagination()) // Lifecycle function
onUnmounted(() => unSub()) // Lifecycle function
function next() {
  onPagination('startAfter')
  window.scrollTo({ top: 0, behavior: 'smooth' })
}

function prev() {
  onPagination('endBefore')
  window.scrollTo({ top: 0, behavior: 'smooth' })
}
You might have a problem knowing which document is the last one, for example to disable the next-page button.

How can I limit the amount of writes that can be done to a certain collection in Cloud Firestore?

I'm looking for a way to prevent writing more than a given limit of documents to a (sub)collection in a given period.
For example: Messenger A is not allowed to write more than 1000 messages per 24 hours.
This should be done in the context of an Firebase Function API endpoint because it's called by third parties.
The endpoint
app.post('/message', async function(req: any, res: any) {
  // get the messenger's API key
  const key = req.query.key
  // if no key was provided, return error
  if (!key) {
    res.status(400).send('Please provide a valid key')
    return
  }
  // get the messenger by the provided key
  const getMessengerResult = await admin.firestore().collection('messengers')
    .where('key', '==', key).limit(1).get()
  // if there is no result the messenger is not valid, return error
  if (getMessengerResult.empty) {
    res.status(400).send('Please provide a valid key')
    return
  }
  // get the messenger from the result
  const messenger = getMessengerResult.docs[0]
  // TODO: check if messenger hasn't reached limit of 1000 messages per 24 hours
  // get message info from the body
  const title: String = req.body.title
  const body: String = req.body.body
  // add message
  await messenger.ref.collection('messages').add({
    'title': title,
    'body': body,
    'timestamp': Timestamp.now()
  })
  // send response
  res.status(201).send('The notification has been created');
})
One thing I've tried was the following piece of code in place of the TODO:
// get the limit message and validate its timestamp
const limitMessageResult = await messenger.ref.collection('messages')
  .orderBy('timestamp', "desc").limit(1).offset(1000).get()

if (!limitMessageResult.empty) {
  const limitMessage = limitMessageResult.docs[0]
  const timestamp: Timestamp = limitMessage.data()['timestamp']
  // create a date object for 24 hours ago
  const twentyFourHoursAgo = new Date()
  twentyFourHoursAgo.setDate(twentyFourHoursAgo.getDate() - 1)
  if (twentyFourHoursAgo < timestamp.toDate()) {
    res.status(405).send('You\'ve exceeded your messages limit, please try again later!')
    return
  }
}
This code works, but there is a big BUT. The offset does indeed skip the 1000 results, but Firebase will still charge you for it! So every time the messenger tries to add 1 message, 1000+ are read... and that's costly.
So I need a better (cheaper) way to do this.
One thing I've come up with, but haven't yet tried, would be adding an index/counter field to each message that increases by 1 with every message.
Then instead of doing:
const limitMessageResult = await messenger.ref.collection('messages')
.orderBy('timestamp',"desc").limit(1).offset(1000).get()
I could do something like:
const limitMessageResult = await messenger.ref.collection('messages')
.where('index','==', currentIndex-1000).limit(1).get()
But I was wondering if that would be a safe way.
For example, what would happen if there are multiple request at the same time.
I would first need to get the current index from the last message and add the new message with index+1. But could two requests read, and thus write the same index? Or could this be handled with transactions?
Or is there a totally different way to solve my problem?
I have a strong aversion against using offset() in my server-side code, precisely because it makes it seem like it's skipping documents, where it's actually reading-and-discarding them.
The simplest way I can think of to implement your maximum-writes-per-day count is to keep a writes-per-day counter for each messenger, that you then update whenever they write a message.
For example, you could do the following whenever you write a message:
await messenger.ref.collection('messages').add({
  'title': title,
  'body': body,
  'timestamp': Timestamp.now()
})

const today = new Date().toISOString().substring(0, 10); // e.g. "2020-04-11"
await messenger.ref.set({
  [today]: admin.firestore.FieldValue.increment(1)
}, { merge: true })
So this adds an additional field to your messenger document for each day, where it then keeps a count of the number of messages that messenger has written for that day.
You'd then use this count instead of your current limitMessageResult.
const messageCount = (await messenger.ref.get()).data()[today] || 0;
if (messageCount < 1000) {
  ... post the message and increase the counter
}
else {
  ... reject the message, and return a message
}
Steps left to do:
You'll want to secure write access to the counter fields, as the messenger shouldn't be able to modify these on their own.
You may want to clean out older message counts periodically, if you're worried about the messenger's document becoming too big. I prefer to leave these types of counters, as they give an opportunity to provide some stats cheaply if needed.
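Regarding the concurrency concern from the question: as a sketch (not part of the answer above), the read-check-increment can be wrapped in a transaction so two simultaneous requests can't both slip under the limit:
const today = new Date().toISOString().substring(0, 10);

const allowed = await admin.firestore().runTransaction(async (tx) => {
  const messengerSnap = await tx.get(messenger.ref);
  const messageCount = messengerSnap.get(today) || 0;
  if (messageCount >= 1000) {
    return false; // limit reached
  }
  // the transaction retries automatically if another request raced us
  tx.update(messenger.ref, { [today]: admin.firestore.FieldValue.increment(1) });
  return true;
});

if (!allowed) {
  res.status(405).send('You\'ve exceeded your messages limit, please try again later!')
  return
}
// ...then add the message document as before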

Firestore Cloud Functions - Keeping Count of Amount of Documents in Collection

I am trying to write a cloud function that will keep track of the number of documents in the collection. There isn't a ton of documentation on this, probably because Firestore is so new, so I was trying to think of the best way to do this. This is the solution I came up with, but I can't figure out how to return the count.
Document 1 -> Collection -> Documents
Document 1 would ideally store the count of documents in the Collection, but I can't seem to figure out how to relate this.
Let's just assume Document1 is a Blog post and the subcollection is comments.
Trigger the function on comment doc create.
Read the parent doc and increment its existing count
Write the data to the parent doc.
Note: If the count value changes faster than once per second, you may need a distributed counter: https://firebase.google.com/docs/firestore/solutions/counters
exports.aggregateComments = functions.firestore
  .document('posts/{postId}/comments/{commentId}')
  .onCreate(event => {
    const commentId = event.params.commentId;
    const postId = event.params.postId;
    // ref to the parent document
    const docRef = admin.firestore().collection('posts').doc(postId)
    return docRef.get().then(snap => {
      // get the total comment count and add one
      const commentCount = snap.data().commentCount + 1;
      const data = { commentCount }
      // run update
      return docRef.update(data)
    })
  });
I put together a detailed firestore aggregation example if you need to run advanced aggregation calculations beyond a simple count.
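A variant of the function above (not from the answer itself, and written against the post-v1.0 functions signature) can skip the read entirely by letting the server do the math with FieldValue.increment, which also sidesteps the read-modify-write race:
exports.aggregateComments = functions.firestore
  .document('posts/{postId}/comments/{commentId}')
  .onCreate((snap, context) => {
    const postId = context.params.postId;
    // atomically add one to the parent post's commentCount
    return admin.firestore().collection('posts').doc(postId).update({
      commentCount: admin.firestore.FieldValue.increment(1)
    });
  });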

Resources