Is It Possible to Have a Slow Query In Cloud Firestore? - firebase

I have read in the documentation that the amount of time for retrieving data will be the same whether I query a collection of 6 documents or a collection of 60M.
So is it safe to save all of the data of a specific kind (like users) under the same collection? Can I count on never having to split them into separate collections to get better performance?

It is definitely possible to have slow-performing queries on Firestore, but the performance will not be related to the number of documents in the collection you're querying. A common cause of slow reads is, for example, documents that contain far more data than the application needs, which means it takes more time than necessary to download that data to the client for the use-case.
In your example: it is indeed normal to store all user profiles in a single collection. Querying 6 users out of that collection will always take the same amount of time, even if your app grows to millions or hundreds of millions of users.
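As a rough sketch of what that looks like in practice (using the Firestore web SDK; the users collection, country field, and project config are made up for this example), a limited query reads and bills only the documents it returns:

import { initializeApp } from 'firebase/app';
import { getFirestore, collection, query, where, limit, getDocs } from 'firebase/firestore';

// Hypothetical project config; replace with your own.
const app = initializeApp({ projectId: 'my-project' });
const db = getFirestore(app);

async function loadSomeUsers() {
  // Reads at most 6 documents, whether the 'users' collection
  // holds 60 documents or 60 million.
  const q = query(collection(db, 'users'), where('country', '==', 'NL'), limit(6));
  const snapshot = await getDocs(q);
  snapshot.docs.forEach((d) => console.log(d.id, d.data()));
}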

Related

Firestore - what is best data structure for my case(performance/price)?

In my application there will be users (tens/hundreds). Each user will have many documents of the same type (typeA). I will need to read all these documents for a current user. I plan to use the following option:
root collection: typeACollection
|
nested collections for users: user1Collection, user2Collection, user3Collection ....
|
all documents for a specific user
An alternative is to create a separate root collection for each user and store documents of this type in it. But I do not like this solution - the structure would not be clear:
user1typeACollection, user2typeACollection, user3typeACollection ....
In your opinion, which of the options is preferable (performance/price) - the first or the second?
There is no singular best structure here, it all depends on the use-cases of your app.
The performance of a read operation in Firestore purely depends on the amount of data retrieved, and not on the size of the database. So it makes no difference if you read 20 user documents from a collection of 100 documents in total, or if there are 100 million documents in there - the performance will be the same.
What does make a marginal difference is the number of API calls you need to make. So loading 20 user documents with 20 calls will be slower than loading them with 1 call. But if you use a single collection group query to load the documents from multiple collections of the same name, that's the same performance again - as you're loading 20 documents with a single API call.
The cost is also going to be the same, as you pay for the number of documents read and the bandwidth consumed by those documents, which is the same in these scenarios.
I highly recommend watching the Getting to know Cloud Firestore video series to learn more about data modeling considerations and pricing when using Firestore.
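For illustration, here is a minimal sketch (web SDK; the typeA and items collection names and the ownerId field are assumptions, not from the question) of reading one user's nested collection with a single call, and of a collection group query that reads across all subcollections of the same name:

import { getFirestore, collection, collectionGroup, query, where, getDocs } from 'firebase/firestore';

// Assumes the default Firebase app has already been initialized.
const db = getFirestore();

async function loadTypeADocs() {
  // One call that reads every document in one user's nested collection,
  // e.g. typeA/user1/items/{itemId}.
  const userItems = await getDocs(collection(db, 'typeA/user1/items'));

  // One collection group query that reads across every 'items' subcollection
  // in the database (filtering on a field requires a collection group index).
  const someItems = await getDocs(
    query(collectionGroup(db, 'items'), where('ownerId', '==', 'user1'))
  );

  console.log(userItems.size, someItems.size);
}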

Are Firestore Collections Physically Isolated from Each Other?

I am considering storing multiple tenants in a single Firebase Firestore database. There will only be one collection per tenant and a few shared collections. Some will have more data than others. Some tenants may have a few million records while others may end up with a few billion. I want to confirm that the size of data in one collection will not impact the performance or storage of another collection in the same database.
I couldn't find much in the documentation about how the data is physically stored. Is all the data in Firestore stored in a single blob/file? If so, this could be a problem when there are hundreds of tenants with billions of records each. In an ideal world, each collection would be a physically separate file, and the server orchestration would spread the collections across multiple servers so that a single server is not sharing the load between a very heavy tenant and a very light tenant. Otherwise, a heavy tenant would slow down a light tenant.
My basic question is: can a single Firestore database infinitely scale up in size assuming that no single collection is bigger than a few billion records?
I know that there are two database modes: Native mode and Datastore mode. Which of these seems more appropriate, and is the answer to my question different depending on which of these I select?
If the answer is that Firestore cannot scale infinitely in this way, what is the alternative approach? Should I be using Bigtable instead? Cassandra? Or, is there another way to physically divide my Firestore database other than collections?
Some tenants may have a few million records while others may end up with a few billion. I want to confirm that the size of data in one collection will not impact the performance or storage of another collection in the same database.
The performance in Firestore isn't related to the number of documents that exist in a collection. In terms of speed, it doesn't matter if you perform a query on:
A top-level (root-level) collection.
A sub-collection, which basically represents a collection that is nested under a document.
A collection group, which actually means querying collections and sub-collections that exist across the entire database.
The speed will always be the same, as long as the query returns the same number of documents. This is because query performance depends on the number of documents you request and not on the number of documents you search. So it doesn't really matter if you query a collection with 1 MILLION documents or even 1 BILLION documents; the time for getting the same results will be the same.
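As a rough sketch (web SDK; the tenantA, customer1, orders, and status names are invented for the example), the three query scopes above look like this, and each one is billed and performs according to the documents it returns:

import { getFirestore, collection, collectionGroup, query, where, limit, getDocs } from 'firebase/firestore';

// Assumes the default Firebase app has already been initialized.
const db = getFirestore();

async function compareQueryScopes() {
  // 1. Top-level (root-level) collection.
  const a = await getDocs(query(collection(db, 'tenantA'), limit(20)));

  // 2. Subcollection nested under a document, e.g. tenantA/customer1/orders.
  const b = await getDocs(query(collection(db, 'tenantA/customer1/orders'), limit(20)));

  // 3. Collection group: every 'orders' subcollection across the whole database.
  //    (A filtered collection group query needs a collection group index.)
  const c = await getDocs(
    query(collectionGroup(db, 'orders'), where('status', '==', 'open'), limit(20))
  );

  console.log(a.size, b.size, c.size);
}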
I couldn't find much in the documentation about how the data is physically stored. Is all the data in Firestore stored in a single blob/file? If so, this could be a problem when there are hundreds of tenants with billions of records each.
In Cloud Firestore, the unit of storage is the document. Documents live in collections, which are simply containers for documents. Please note that Firestore is optimized for storing large collections of small documents. And when I say large, I mean extremely large. So when you perform a query against a collection of 1 MILLION documents, the speed depends on the number of results you return and it does not depend on the number of the documents in which you search, or on the number of documents that exist in other collections in which you aren't performing a search.
Can a single Firestore database infinitely scale up in size assuming that no single collection is bigger than a few billion records?
While with the Firebase Realtime Database you had to scale using multiple databases, in Firestore this practice is not necessary. There are, however, some techniques that are explained really well in the official docs:
Building scalable applications with Firestore
If the answer is that Firestore cannot scale infinitely in this way, what is the alternative approach?
It can definitely massively scale.
See the Firestore best practices and security rules.
You may conceptualize Firestore as one service shared by all of Google's customers. Just as Google attempts to ensure that one customer (a so-called "noisy neighbor") does not adversely impact the service for others, you don't want to be a noisy neighbor to yourself.
You need to consider more than just performance.
Security. E.g. see security rules as a mechanism that you may be able to use to help enforce segregation of your tenants' data. You will want to understand fully how to keep different customers' data separated securely. Your customers will want to understand what measures you're employing to ensure their data is kept separate too.
Multitenancy. Google Cloud Platform has no intrinsic (platform-wide) multitenant capabilities and, often, a way to manifest tenancy has been to use different Google Projects for different customers. This is because Projects provide a well-defined security perimeter. You may want to investigate whether (some subset of) your customers would benefit from a one-customer, one-project setup.
Quota. Every Cloud Platform method is constrained by some quota. You will want to be careful to ensure that quota is distributed fairly across customers, so that some customers don't consume all of it and deny other customers access to the service.

Firebase Firestore database structure

I'm building an app using Flutter and Firebase and was wondering what the best Firestore database structure would be.
I want the ability for users to post messages and then search by both the content of the post and the posters username.
Does it make sense to create one collection for users with each document storing username and other info and a separate collection for the posts with each document containing the post and the username of the poster?
In the unlikely event that the number of posts reaches a million or more, is there an additional cost for querying this kind of massive collection?
Would it make more sense to store each user's posts as a sub-collection under their user document? I believe this would require additional read operations to access each document's sub-collection. Would this be cheaper or more expensive if I end up getting a lot of traffic?
is there an additional cost of querying this kind of massive collection?
The cost and performance of reading from Firestore are purely based on the amount of data (number of documents and their size) you retrieve, and not in any way on the number of documents in the collection.
But what is limited in Firestore is the number of writes you can do to data that is "close to each other". That intentionally vague definition means that it's typically better for write scalability to spread the data over separate subcollections, if the data naturally lends itself to that (such as in your case).
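A minimal sketch of that layout (web SDK; the users/posts collection names and the username field are assumptions): posts live in a subcollection per author, while a single collection group query still searches across all of them:

import { getFirestore, collection, collectionGroup, addDoc, query, where, limit, getDocs, serverTimestamp } from 'firebase/firestore';

// Assumes the default Firebase app has already been initialized.
const db = getFirestore();

async function writeAndSearchPosts(uid: string) {
  // Each post is written under its author: users/{uid}/posts/{postId}.
  await addDoc(collection(db, `users/${uid}/posts`), {
    username: 'alice',
    text: 'Hello Firestore',
    createdAt: serverTimestamp(),
  });

  // A single collection group query still searches every user's posts
  // by username (needs a collection group index on 'username').
  const results = await getDocs(
    query(collectionGroup(db, 'posts'), where('username', '==', 'alice'), limit(20))
  );
  console.log(results.size);
}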
To get a great introduction to Firestore, and to data modeling trade-offs, watch Getting to know Cloud Firestore.

Complicated data structuring in firebase/firestore

I need an optimal way to store a lot of individual fields in firestore. Here is the problem:
I get JSON data from some API. It contains a list of users. I need to tell if those users are active, i.e. have been online in the past n days.
I cannot query each user in the list from the API against Firestore, because there could be hundreds of thousands of users in that list, and therefore hundreds of thousands of queries and reads, which is way too expensive.
As far as I know there is no way in Firestore to use a list as a map for querying, so that's not an option.
What I initially did was have a Cloud Function go through and find all the active users maybe once every hour, and place them in the Firebase Realtime Database in this structure:
activeUsers {
  uid1: true
  uid2: true
  uid3: true
  ...
}
and every time I need to check which users are active, I get all fields under activeUsers (which is constrained to a maximum of 100,000 fields, approx 3-5 MB).
Now I was going to use that as my final mechanism, but I just realised that the Realtime Database charges for the amount of bandwidth used, not the number of reads. Therefore it could get very expensive doing this over and over whenever a user makes this request. And I cannot query every single result from the Realtime Database individually because, while it does not charge per read (I think), it would be very slow to carry out hundreds of thousands of queries.
Now I have decided to use Cloud Firestore as my final hope, since it charges primarily for the number of reads and writes rather than for data downloaded and uploaded. I am going to use Cloud Functions again to check for active users every hour, and I'm going to try to figure out the best way to store that data within a few documents. I was thinking 10,000 fields per document with all the active users; then when a user needs to get the active users, they get all the documents (which would be about 10 if there are 100,000 total active users) and map those client-side to filter the active users.
So I really have 2 questions. 1: If I do it this way, what is the best way to store that data in Firestore - is it the way I suggested? And 2: is there an all-around better way of performing this check of active users against the list returned from the API? Have I got it all wrong?
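For reference, a minimal sketch of the chunking idea described above could look like this (Cloud Functions v1 API with the Admin SDK; the getActiveUserIds helper, the activeUsers collection, and the chunk naming are all hypothetical):

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Hypothetical helper that calls the external API and returns active user IDs.
async function getActiveUserIds(): Promise<string[]> {
  return []; // placeholder
}

// Runs hourly and packs roughly 10,000 uids into each document under activeUsers/.
// A client then reads all activeUsers documents (~10 reads for 100,000 active users)
// and filters the API's user list against them locally.
// (Stale chunk documents from earlier runs would also need to be cleaned up.)
export const refreshActiveUsers = functions.pubsub
  .schedule('every 60 minutes')
  .onRun(async () => {
    const uids = await getActiveUserIds();
    const db = admin.firestore();
    const chunkSize = 10000;
    const batch = db.batch();
    for (let i = 0; i * chunkSize < uids.length; i++) {
      batch.set(db.doc(`activeUsers/chunk${i}`), {
        uids: uids.slice(i * chunkSize, (i + 1) * chunkSize),
      });
    }
    await batch.commit();
  });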
You could use Firebase Storage to store all the users in a text file, then download that text file every time?
Well this is three years old, but I'll answer here.
What you have done is not efficient and not a good approach. What I would do is as follows:
Make a separate collection for all active users, and store each active user's unique field, such as their ID, there.
Then query that collection, and update it when needed.
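A minimal sketch of that suggestion (Admin SDK; the activeUsers collection name and lastSeen field are assumptions): one small document per active user, which can be updated individually and queried when needed. Each returned document still counts as one billed read.

import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

// Mark a user as active: one small document per uid under activeUsers/.
async function markActive(uid: string) {
  await db.doc(`activeUsers/${uid}`).set({
    lastSeen: admin.firestore.FieldValue.serverTimestamp(),
  });
}

// Query the collection when needed, e.g. everyone seen in the past n days.
async function getRecentlyActive(days: number) {
  const cutoff = admin.firestore.Timestamp.fromMillis(Date.now() - days * 86400000);
  const snapshot = await db
    .collection('activeUsers')
    .where('lastSeen', '>=', cutoff)
    .get();
  return snapshot.docs.map((d) => d.id);
}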

Understanding Firestore Pricing

Before creating a new app I want to make sure I get the pricing model correct.
For example in a phonebook app, I have a collection called userList that has a list of users which are individual documents.
I have 50k users on my list, which means I have 50k documents in my collection.
If I were to get the userList collection, it would read all 50k documents.
Firestore allows 50k free document reads per day. Does that mean 50k document reads in total, or 50k document reads per document?
As in the example of my phonebook app, if it is 50k document reads in total, I will run out of the free limit in just one get call.
If you actually have to pull an entire collection of 50k documents, the question you likely should be asking is how to properly structure a Firestore Database.
More than likely you need to filter these documents based on some criteria within them by using the query WHERE clause. Having each client device hold 50k documents locally sounds like poor database planning and possibly a security risk.
Each returned document from your query counts as 1 read. If there are no matches to your query, 1 read is charged. If there are 50k matches, there are 50k reads charged.
For example, you can retrieve the logged in user's document and be charged 1 read with something like:
db.collection('userList').where('uid', '==', clientUID).limit(1).get()
Note: As of 10/2018, Firestore charges 6 cents (USD) per 100k reads after the first 50k/day.
The free quota is for your entire project. So you're allowed 50,000 document reads per day across the entire project.
Reading 50K user profile documents will indeed use that free quota in one go.
Reading large numbers of documents is in general something you should try to prevent when using NoSQL databases.
The client apps that access Firestore should only read data that they're going to immediately show to the user. And there's no way you'll fit 50K users on a screen.
So more likely you have a case where you're aggregating over the user collection. E.g. things like:
Count the number of users
Count the number of users named Frank
Calculate the average length of the user names
NoSQL databases are usually more limited in their query capabilities than traditional relational databases, because they focus on ensuring read-scalability. You'll frequently do extra work when something is written to the database, if in exchange you can get better performance when reading from the database.
For better performance you'll want to store these aggregation values in the database, and then update them whenever a user profile is written. So you'll have a "userCount", a document with a count for each unique username, and an "averageUsernameLength".
For an example of how to run such aggregation queries, see: https://firebase.google.com/docs/firestore/solutions/aggregation. For lower write volumes, you can also consider using Cloud Functions to update the counters.
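For lower write volumes, a sketch of such a counter update could look like this (Cloud Functions v1 API with the Admin SDK; the userList collection and the aggregations/stats document are assumptions):

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Keep a running user count whenever a profile document is created.
export const countUsers = functions.firestore
  .document('userList/{uid}')
  .onCreate(async () => {
    await admin.firestore().doc('aggregations/stats').set(
      { userCount: admin.firestore.FieldValue.increment(1) },
      { merge: true }
    );
  });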
Don't fetch all users in one go. You can limit your query to get a limited number of users, and when the user scrolls, your query will get more users. As no one is going to scroll through 50k users, you can get rid of a bundle of cost - this is something like how a RecyclerView saves memory.
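As a sketch of that pattern (web SDK; the userList collection and name field are assumptions), load one page at a time and fetch the next page only when the user scrolls:

import { getFirestore, collection, query, orderBy, limit, startAfter, getDocs, QueryDocumentSnapshot } from 'firebase/firestore';

// Assumes the default Firebase app has already been initialized.
const db = getFirestore();
const pageSize = 25;

async function firstPage() {
  return getDocs(query(collection(db, 'userList'), orderBy('name'), limit(pageSize)));
}

async function nextPage(lastVisible: QueryDocumentSnapshot) {
  // Only these 25 documents are read and billed, not the whole 50k collection.
  return getDocs(
    query(collection(db, 'userList'), orderBy('name'), startAfter(lastVisible), limit(pageSize))
  );
}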
