I just hit a situation which pushed me to ask this question:
I have about 150 active monthly users and I just hit 1k concurrent connections on a single day.
I did some research and found many questions on the "firebase concurrent connections" topic, and those that refer to a users-to-concurrent ratio say that on average 1 concurrent connection ≈ 1,400 monthly users (like here and here).
I'm now trying to understand whether I really did something wrong and, if so, how to fix it.
The questions are:
Does it look normal to get 1k concurrent connections with about 150 active users, or am I reading it wrong?
Is it possible to profile concurrent connections somehow?
What are the typical "connection leaks" when it comes to Chrome extensions, and how can they be avoided?
So far, the architecture of the extension is that all communication with the Firebase database happens in the persistent background script, which is global to a browser instance.
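A simplified sketch of that setup (config values, paths, and message names below are placeholders, not the actual extension code):

```js
// background.js (persistent background page) -- simplified sketch only.
// The SDK is initialized once per browser instance, so in theory each
// running extension should hold a single connection to the database.
firebase.initializeApp({
  apiKey: '<api-key>',
  databaseURL: 'https://<project-id>.firebaseio.com',
});

const db = firebase.database();

// Content scripts and the popup never talk to Firebase directly; they
// send messages here, so no additional connections should be opened.
chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
  if (msg.type === 'save-item') {
    db.ref('users/' + msg.uid + '/items').push(msg.payload);
  }
});
```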
As a note, 150 active users is an estimate. As an upper bound, I have 472 user records in total, and half of those users installed the extension and uninstalled it shortly afterwards, so they are not using it. About 20% of the installed instances are also disabled in Chrome.
Here is what I got after discussing this with the support team:
here are other common use cases that can add up to the number of
connections in your app:
Opening your web app in multiple tabs on your browser (1 connection per tab)
Accessing the Realtime Database dashboard from the Firebase Console (1 connection per tab)
Having Realtime Database triggers
So Realtime Database triggers turned out to be my case.
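For context, the triggers in question look roughly like this (a simplified sketch; the path and counter logic are illustrative, not my actual code):

```js
// functions/index.js -- simplified sketch of a Realtime Database trigger.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.onItemCreated = functions.database
  .ref('/users/{uid}/items/{itemId}')
  .onCreate((snapshot, context) => {
    // Each warm function instance keeps its own Admin SDK connection to
    // the database, which is what shows up in the concurrent-connection
    // count while events are being processed.
    return admin.database()
      .ref('/users/' + context.params.uid + '/itemCount')
      .transaction((count) => (count || 0) + 1);
  });
```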
Further discussion revealed the following:
In the case of uploading 200 data points which each trigger a
function, any number of concurrent connections between 1 and 200 is
possible. It all depends on how Cloud Functions scales. In an extreme
case it could just be one instance that processes all 200 events one
at a time. In another extreme case the Cloud Functions system could
decide to spin up 200 new server instances to handle the incoming
events. There's no guarantee as to what will happen here, Cloud
Functions will try to do the right thing. Each case would cost the
user the same amount on their Cloud Functions bill. What's most likely
in a real application (where it's not a complete cold start) is
something in the middle.
There's no need to worry about the number of concurrent connections
from Cloud Functions to RTDB. Cloud Functions would never spin up
anywhere near 100k server instances. So there's no way Cloud Functions
would eat up the whole concurrency limit. That would only happen if
there are many users on the client app accessing your database
directly from their devices.
So the behavior described in my question seems to be expected, and it will not come anywhere close to the 100k-connection limit from the server side.
Related
A little question about listeners and Firebase in general.
As far as I understand, with the free Spark plan a maximum of 100 simultaneous listeners connected to the Firebase project are allowed.
My small problem is that I use multiple .onDisconnect calls which run simultaneously in my app.
So my question is whether .onDisconnect is considered a listener, and whether each individual call counts towards those 100 listeners that are allowed before I need to pay.
with the free Spark plan a maximum of 100 simultaneous listeners connected to the Firebase project are allowed
This is not how it works. On the Spark plan there can be 100 concurrent clients listening. Each client can have as many listeners as it wants.
So having multiple (onDisconnect or other) listeners on a single client does not affect how many clients can connect to the servers at the same time.
From what I can find, onDisconnect is not a listener and should not affect that limit; rather, it registers an operation that is handed to the server and processed when the client is disconnected.
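To make the difference concrete, here is a rough sketch (namespaced JavaScript SDK syntax; the paths are just examples):

```js
const db = firebase.database();

// These are listeners. No matter how many a client attaches, the client
// still counts as ONE concurrent connection (one socket per client).
db.ref('status/alice').on('value', (snap) => console.log(snap.val()));
db.ref('rooms/lobby/messages').on('child_added', (snap) => console.log(snap.val()));

// This is not a listener: it registers a write on the server that is
// executed when this client's connection drops. By itself it does not
// add to the concurrent-connection count.
db.ref('status/alice').onDisconnect().set('offline');
```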
I am developing an Android app which basically does this: on the landing (home) page it shows a couple of words. These words need to be updated on a daily basis. Secondly, there is an 'experiences' tab in which a list of user experiences (around 500) shows up with their profile pic, description, etc.
This basic app is expected to get around 1 million daily users, who will open the app at least once a day to see those couple of words. Many may occasionally open the experiences section.
Thirdly, the app needs to have a push notification feature.
I am planning to purchase managed WordPress hosting, set up a website, add a post each day with those couple of words, and use the JSON API to extract those words and display them on the app's home page. Similarly for the experiences: I will add each one as a WordPress post and extract them from the WordPress database. The reason I am choosing WordPress is that it has ready-made interfaces for data entry, which will save me time and effort.
But I am stuck on this: will the WordPress DB be able to handle such a large volume of queries? With such a large user base and spiky traffic, I suspect I might exceed the maximum concurrent connections limit.
What's the best strategy in my case? Should I use WP, Firebase, or some other service? I also need to make sure the scheme is cost-effective.
My app is basically very similar to this one:
https://play.google.com/store/apps/details?id=com.ekaum.ekaum
For push notifications, I am planning to use third party services.
Kindly suggest the best strategy I should go with for designing the back end of this app.
Thanks in advance to everyone willing to help me with this.
I have never used WordPress, so I don't know whether or how it could handle that load.
You can still use WP for data entry, and write a scheduled function that would use WP's JSON API to copy that data into Firebase.
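A rough sketch of what that scheduled function could look like (the WordPress URL, database path, and field names are placeholders; this assumes the Node.js Cloud Functions runtime with node-fetch installed):

```js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const fetch = require('node-fetch'); // any HTTP client works
admin.initializeApp();

// Once a day, pull the latest post from the WordPress REST API and copy
// the "couple of words" into the Realtime Database.
exports.syncDailyWords = functions.pubsub
  .schedule('every 24 hours')
  .onRun(async () => {
    const res = await fetch(
      'https://your-wp-site.example.com/wp-json/wp/v2/posts?per_page=1'
    );
    const [latestPost] = await res.json();

    await admin.database().ref('daily').set({
      title: latestPost.title.rendered,
      content: latestPost.content.rendered,
      updatedAt: admin.database.ServerValue.TIMESTAMP,
    });
  });
```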
RTDB-vs-Firestore scalability states that RTDB can handle 200 thousand concurrent connections and Firestore 1 million concurrent connections.
However, if I understand correctly, your app doesn't need connections to stay active (i.e. to receive real-time updates). You can get your data once, then close the connection.
For RTDB, Enabling Offline Capabilities on Android states that
On Android, Firebase automatically manages connection state to reduce bandwidth and battery usage. When a client has no active listeners, no pending write or onDisconnect operations, and is not explicitly disconnected by the goOffline method, Firebase closes the connection after 60 seconds of inactivity.
So the connection should close by itself after 1 minute if you remove your listeners, or you can force it closed earlier using goOffline.
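In code that could look roughly like this (shown with the JavaScript SDK for brevity; the Android SDK has equivalent calls such as addListenerForSingleValueEvent and goOffline, and renderWords is a hypothetical UI function):

```js
const db = firebase.database();

// Read the daily words once, without keeping a realtime listener attached.
db.ref('daily').once('value').then((snap) => {
  renderWords(snap.val()); // hypothetical UI function

  // Optionally force the connection closed right away instead of waiting
  // for the ~60 seconds of inactivity described above.
  firebase.database().goOffline();
});
```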
For Firestore, I don't know if it happens automatically, but you can do it manually.
In Firebase Pricing you can see that 100K Firestore document reads cost $0.06, so 1M reads (for the two words) should cost $0.60 plus some network traffic. In RTDB, the cost depends on the amount of data downloaded and stored, so it requires some calculations, but it shouldn't be much. I am not familiar with the finer pricing details, so you should do some more research.
In the app you mentioned, the experiences don't seem to change very often. You might want to try to build your own caching manually, and add the required versioning info in the daily data.
Edit:
It would possibly be more efficient and less costly if you used Firebase Hosting, instead of RTDB/Firestore directly. See Serve dynamic content and host microservices with Cloud Functions and Manage cache behavior.
In short, you create an HTTP function that reads your database and returns the data you need. You configure Hosting to call that function, and configure the cache so that subsequent requests are served the cached result via Hosting (without extra function invocations).
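A minimal sketch of that setup (the function name, database path, and cache lifetimes are just examples):

```js
// functions/index.js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.dailyWords = functions.https.onRequest(async (req, res) => {
  const snap = await admin.database().ref('daily').once('value');

  // s-maxage lets the Hosting CDN cache the response, so most requests
  // never reach the function (or the database) at all.
  res.set('Cache-Control', 'public, max-age=300, s-maxage=86400');
  res.json(snap.val());
});

// In firebase.json, a Hosting rewrite routes a URL to this function, e.g.:
//   "hosting": { "rewrites": [{ "source": "/api/daily", "function": "dailyWords" }] }
```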
We have a mobile app service on Firebase.
Our service has roughly 5,000–10,000 concurrent connections.
I know that is well within the performance limits.
However, we have an issue where the Realtime Database becomes unresponsive (requests stay pending) for 1–3 minutes.
It occurs every single day at night, even with few connections.
Because of these pending issues, we started logging the main Realtime Database with Elasticsearch, so the pending behavior can be inspected in detail.
When the database starts pending, our app suddenly becomes unusable for 1–3 minutes (database load jumps from 3% to 100%), and we can see 'concurrent-connect/concurrent-disconnect' events for almost all concurrent users at the same time.
I have attached the relevant screen captures. This issue can also be seen in GCP's Stackdriver.
We guess this issue is caused by the logic Firebase uses to allocate capacity, because of the difference in usage between daytime and nighttime.
I contacted support 3 days ago but haven't received a reply yet.
So I am wondering if anyone has the same problem or knows about this issue.
[Screenshot: Stackdriver]
[Screenshot: Elasticsearch (ELK) logs]
It sounds like you're performing a bulk read or write operation every day at the same time. This type of operation typically comes from a batch process and is unrelated to the number of active users of your app. In fact, the bulk read/write will lock out the regular users of your app.
An example of such a batch process could be a bulk read that every night synchronizes the data in the Firebase Realtime Database with Elasticsearch, but there are many more options.
If there's indeed a batch process that is causing this load, you'll want to look into:
Either splitting the process into multiple smaller steps, so that they can intersperse with the traffic from your regular clients (see the sketch after this list).
Or running the same type of process on a backup of the database. Since you'd be running against a local file, it won't interfere with the regular clients, and the backup is made by an out-of-band process (so not causing interference either).
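For the first option, a sketch of what those "smaller steps" could look like for a bulk read (the path, page size, and sendToElasticsearch are illustrative placeholders, assuming the Admin SDK in a Node.js process):

```js
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.database();

// Read the data in pages instead of one huge query, pausing between pages
// so regular client traffic can be served in between.
async function exportInChunks(pageSize = 500) {
  let startKey = null;
  while (true) {
    let query = db.ref('experiences').orderByKey().limitToFirst(pageSize + 1);
    if (startKey) query = query.startAt(startKey); // startAt is inclusive

    const snap = await query.once('value');
    const entries = [];
    snap.forEach((child) => { entries.push([child.key, child.val()]); });

    // Drop the overlapping first row on every page after the first.
    const page = startKey ? entries.slice(1) : entries;
    if (page.length === 0) break;

    await sendToElasticsearch(page); // hypothetical indexing call

    if (entries.length < pageSize + 1) break; // last page reached
    startKey = entries[entries.length - 1][0];
    await new Promise((resolve) => setTimeout(resolve, 1000)); // let clients breathe
  }
}
```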
I'm building an app using firebase in Polymer. It is tempting to create a new firebase-collection for each ajax call I might have made in the past. Since firebase.com bills based on the maximum number of simultaneous connections (sessions?), I'm worried that the firebase-element components each count as one connection. Thus, if there are ten firebase-elements in a page it will be counted as 10 connections instead of one.
Do I need to design the page to minimize the number of firebase-elements? How are connections counted when using firebase-elements?
Firebase opens a web socket when a page makes a first connection to its servers. All subsequent data reads/writes for that page happen over the same web socket.
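Concretely (a rough sketch with made-up paths, shown with the plain JavaScript SDK that the Polymer elements build on):

```js
// All of these read from the same database over the SAME web socket;
// they do not open one connection each.
firebase.initializeApp({
  databaseURL: 'https://<project-id>.firebaseio.com',
});

firebase.database().ref('messages').on('value', (snap) => { /* render list */ });
firebase.database().ref('users/123/profile').on('value', (snap) => { /* render profile */ });
// Ten <firebase-collection> elements on a page behave the same way:
// one connection per page, not one per element.
```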
Also see:
Concurrent users and multiple observers
How exactly are concurrent users determined for a Firebase app?
I've read quite a few posts (including the firebase.com website) on Firebase connections. The website says that one connection is equivalent to approximately 1400 visiting users per month. And this makes sense to me given a scenario where the client makes a quick connection to the Firebase server, pulls down some data, and then closes the connection. However, if I'm using angular bindings (via angularfire), wouldn't each client visit (in the event the user stays on the site for a period of time) be a connection? In this example having 100 users (each of which is making use of firebase angular bindings) connecting to the site at the same time would be 100 connections. If I opted not to use angular bindings, that number could be (in a theoretical sense) 0 if all the clients already made their requests for data and were just idling.
Do I understand this properly?
AngularFire is built on top of Firebase's regular JavaScript/Web SDK. The connection count is fundamentally the same between them: if 100 users are using your application at the same time and you are synchronizing data for each of them, you will have 100 concurrent connections at that time.
The statement that one concurrent connection is the equivalent of about 1400 visits per month is based on the extensive experience that the Firebase people have with how long the average connection lasts. As Andrew Lee stated in this answer: most developers vastly over-estimate the number of concurrent connections they will have.
As said: AngularFire fundamentally behaves the same as Firebase's JavaScript API (because it is built on top of it). Both libraries keep an open connection for each user, so that they can synchronize any changes that occur between the connected users. You can manually drop such a connection by calling goOffline and then reinstate it with goOnline. Whether that is a good approach depends largely on the type of application you're building.
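For reference, that manual connection management looks roughly like this (web SDK; gameId and finalScore are hypothetical, and when to call these depends entirely on your app):

```js
const db = firebase.database();

// Drop the realtime connection once the client is done, e.g. right after
// saving a final score (as in the word-game example below).
db.ref('scores/' + gameId).set(finalScore).then(() => {
  firebase.database().goOffline();
});

// Re-establish the connection the next time the user needs live data.
function resumeSync() {
  firebase.database().goOnline();
}
```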
Two examples:
There recently was someone who was building a word game. He used Firebase to store the final score for each game. In his case explicitly managing the connections makes sense, because the connection is only needed for a relatively short time when compared to the time the application is active.
The "hello world" for Firebase programming is a chat application. In such an application it doesn't make a lot of sense to manage the connections yourself. So briefly connect every 15 seconds and then disconnect again. If you do this, you're essentially reverting to polling for updates. Doing so will lose you one of the bigger benefits of using Firebase: it automatically synchronizes data to connected clients.
So only you can decide whether explicit connection management is best for your application. I'd recommend starting without it (it's simpler) and first testing your application on a smaller scale to see how actual usage holds up against your expectations.