I'd like to build an app that can communicate with an unlimited number of beacons. The idea is to have a single app that the user can use anywhere in the world in partner stores. I understand that iOS has a limit of 20 regions for a single app, and each region can match an unlimited number of beacons.
Does the limit of 20 mean that the app can simultaneously deal with only 20 regions at a time, or is 20 an absolute limit on UUIDs? In other words, can I register thousands of UUIDs and, based on the user's location, have only 20 active at a time?
Many thanks for your help.
On iOS, a maximum of 20 CLRegion instances can be registered for monitoring at one time. Each of these must specify at least a ProximityUUID, but can leave the major and minor null, matching any of the billions of beacons sharing that same ProximityUUID. But there are also potentially many billions of different ProximityUUIDs, so this certainly won't match all beacons.
While you can't register more than 20 ProximityUUIDs for monitoring at any one time, you can change the ones registered as your location changes, as you suggest. I actually built a web service called Ningo that lets you fetch a list of the known ProximityUUIDs that have been detected previously within a given distance of your location. There is also a free and open source iOS client library SDK for that here, along with a full-blown reference app (free source code included) that does exactly this, so you can detect almost any beacon around.
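To illustrate the rotation idea, here is a platform-neutral sketch in TypeScript; the endpoint, query parameters, and response shape are all assumptions for illustration, not the actual Ningo API. The selected UUIDs would then be handed to the native CoreLocation monitoring calls (e.g. via a React Native bridge).

```typescript
// Hypothetical lookup of nearby beacon ProximityUUIDs, loosely modeled on
// what a service like Ningo provides. URL and response shape are assumed.
const MAX_MONITORED_REGIONS = 20; // iOS hard limit per app

async function selectUuidsToMonitor(
  lat: number,
  lon: number
): Promise<string[]> {
  const res = await fetch(
    `https://api.example.com/beacons?lat=${lat}&lon=${lon}&radiusMeters=5000`
  );
  const uuids: string[] = await res.json(); // assumed sorted nearest-first
  // keep only as many as iOS will let us monitor at once
  return uuids.slice(0, MAX_MONITORED_REGIONS);
}

// On a significant location change, you'd stop monitoring the old set and
// register the new set through the native CoreLocation API (not shown here).
```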
One other, simpler alternative is to use the iOS ranging APIs. Unlike the monitoring APIs, there is no limit on how many CLRegion instances you can register for ranging, although practical limits mean the system really slows down once you register more than about 100. The ranging API will let you detect any beacon if your app is already running, but unlike the monitoring API, it won't wake up your app when a beacon appears. And again, since the phone will slow to a crawl if you try to register many thousands of regions, this is not a practical way to detect any beacon.
Our Firebase server currently uses one topic per device. However, as users of the app increase, the overhead of sending many topic messages is starting to become significant. What are our options for grouping sends, given that the target set of devices changes on every send?
We started to look at device groups, but these will be unsupported if we are forced to move to HTTP v1. We had considered just adding a group of users to a topic, but managing the lifespan of the topic could become an issue: we would have to create a new one on every send, and working out at what point we could tear the subscription down without impacting any messages which have not yet been received may be problematic.
Any suggestions welcome.
The usual approach for this is to either use topics or implement your own topic-like system. In the latter you'd store the relation between the Instance ID and your grouping logic in a database, and then use the batch-sending feature of FCM to deliver to up to 1000 devices at a time.
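For the topic-like system, a minimal sketch with the Firebase Admin SDK for Node.js might look like the following. The token-lookup step is assumed to be your own database query; note that the current Admin SDK caps one multicast call at 500 tokens, so larger groups need chunking.

```typescript
import { initializeApp } from "firebase-admin/app";
import { getMessaging } from "firebase-admin/messaging";

initializeApp();

// `tokens` would come from your own database, keyed by your grouping logic.
async function sendToGroup(tokens: string[], title: string, body: string) {
  const messaging = getMessaging();
  // sendEachForMulticast accepts at most 500 tokens per call, so chunk.
  for (let i = 0; i < tokens.length; i += 500) {
    const chunk = tokens.slice(i, i + 500);
    const response = await messaging.sendEachForMulticast({
      tokens: chunk,
      notification: { title, body },
    });
    console.log(`${response.successCount}/${chunk.length} delivered`);
  }
}
```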
I just hit a situation which pushed me to ask this question:
I have about 150 active monthly users and I just hit 1k concurrent connections on a single day.
I did research and found many questions on the "firebase concurrent connections" topic, and those that refer to a user-to-concurrent ratio say that on average it's close to 1 concurrent connection per ~1400 monthly users (like here and here).
I'm now trying to understand if I really did something wrong, and if so, how to fix it.
The questions are:
Does it look OK to get 1k concurrent connections with about 150 active users, or am I reading it wrong?
Is it possible to profile concurrent connections somehow?
What are the typical "connection leaks" when it comes to Chrome extensions, and how can I avoid them?
So far the architecture of the extension is that all communication with the Firebase database happens from the persistent background script, which is global to a browser instance.
As a note, 150 active users is an estimate. As an upper bound, I have 472 user records in total; half of those users installed the extension and uninstalled it shortly after, so they are not using it. About 20% of the remaining installed instances are also disabled in Chrome.
Here is what I got after discussing with the support team:
Here are other common use cases that can add up to the number of connections in your app:
Opening your web app in multiple tabs on your browser (1 connection per tab)
Accessing the Realtime Database dashboard from the Firebase Console (1 connection per tab)
Having Realtime Database triggers
So Realtime Database triggers appeared to be my case.
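For context, this is the kind of trigger involved; a minimal sketch using the firebase-functions v1 API, with a hypothetical path and handler. Each warm Cloud Functions instance holds its own connection to the database while it works.

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Fires once per uploaded data point; path and stats logic are hypothetical.
export const onDataPointCreated = functions.database
  .ref("/datapoints/{pointId}")
  .onCreate(async (snapshot, context) => {
    // Any read/write here goes over this function instance's own connection,
    // which counts toward the database's concurrent connection total.
    await admin
      .database()
      .ref("/stats/count")
      .transaction((count) => (count || 0) + 1);
  });
```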
Further discussion revealed the following:
In the case of uploading 200 data points which each trigger a function, any number of concurrent connections between 1 and 200 is possible. It all depends on how Cloud Functions scales. In an extreme case it could just be one instance that processes all 200 events one at a time. In another extreme case the Cloud Functions system could decide to spin up 200 new server instances to handle the incoming events. There's no guarantee as to what will happen here; Cloud Functions will try to do the right thing. Each case would cost the user the same amount on their Cloud Functions bill. What's most likely in a real application (where it's not a complete cold start) is something in the middle.

There's no need to worry about the number of concurrent connections from Cloud Functions to RTDB. Cloud Functions would never spin up anywhere near 100k server instances, so there's no way Cloud Functions would eat up the whole concurrency limit. That would only happen if there are many users on the client app accessing your database directly from their devices.
So the behavior described in my question seems to be expected, and it will not come anywhere close to the 100k connection limit from the server side.
The application I'm currently working on needs real-time communication that is scalable. We have been looking into and tried out the Firebase Realtime Database and Firestore. It seems the Realtime Database is more mature and tested out, while Firestore is still in beta, which is why we are leaning towards the Realtime Database.
We are however worried about its scaling capabilities in our context. Our queries will mainly be geospatial, based on the user's location. According to "Firebase simultaneous realtime connections to my database" and https://firebase.google.com/pricing/#faq-simultaneous, the maximum number of concurrent users is 100,000, which will be too low for our needs.
According to their documentation, database sharding is the way to scale beyond 100,000 concurrent users: https://firebase.google.com/docs/database/usage/sharding. Since our queries are based on the user's location, we could group the data into regions, e.g. US West, US Central, and US East, and have a database instance for each of those three regions.
While this method may work, it seems very cumbersome to set up. We would probably need a service which the user initially connects to in order to be redirected to the database instance that fits the region the user is in. Additionally, it should handle the case where a user moves into another region and should therefore be redirected to the database instance containing the data for that region.
Another complex task would be to distribute the data into the correct database instances.
Is there a simpler approach to scaling beyond 100,000 users, or is it possible to increase the number of concurrent connections for a single Firebase Realtime Database?
To me it seems like almost a waste to use Firebase if it requires you to do so much "load" balancing yourself.
The 100K concurrent connections limit is a hard cap on the Firebase Realtime Database.
The approach you describe with a two-step connect is quite idiomatic. The first step is usually quite simple. In fact for many apps it is part of their authentication flow, or based on the outcome of that. For example, many apps base the user's shard on a hash of their UID.
In your case, you could inject the user's region into their token as a custom claim when they register. Then you'd get that claim when they sign in, and can redirect them to their shard. You could also persist the shard info in the client when they first connect, so that you only have to determine it once for each client/device.
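A minimal sketch of that flow, assuming a "shard" claim and a one-database-per-region URL scheme (both the claim name and the URL pattern are assumptions; adjust to your own naming):

```typescript
// Server side (Admin SDK), e.g. in a registration endpoint or auth trigger.
import { getAuth } from "firebase-admin/auth";

async function assignShard(uid: string, region: string): Promise<void> {
  // alternatively: derive the shard from a hash of the uid
  await getAuth().setCustomUserClaims(uid, { shard: region });
}
```

And on the client, after sign-in:

```typescript
// Client side (Web SDK): read the claim and connect to the matching shard.
import { getApp } from "firebase/app";
import { getAuth } from "firebase/auth";
import { getDatabase } from "firebase/database";

async function connectToShard() {
  const user = getAuth().currentUser;
  if (!user) throw new Error("sign in first");
  const { claims } = await user.getIdTokenResult();
  // hypothetical URL scheme: one database instance per region
  return getDatabase(getApp(), `https://myapp-${claims.shard}.firebaseio.com`);
}
```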
Is there a simpler approach to scaling beyond 100,000 users, or is it possible to increase the number of concurrent connections for a single Firebase Realtime Database?
Yes: use the Firestore database.
Scales completely automatically. Currently, scaling limits are:
Around 1 million concurrent connections and 10,000 writes/second. (they plan to increase these limits in the future) (source)
Maximum write rate to a document is 1 per second (source)
Is officially out of beta and in General Availability since 31/1/2019 (source)
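For comparison, a realtime listener in Firestore needs no shard selection or redirect service at all; a sketch with hypothetical collection and field names:

```typescript
import { initializeApp } from "firebase/app";
import {
  getFirestore,
  collection,
  query,
  where,
  onSnapshot,
} from "firebase/firestore";

const app = initializeApp({ /* your web app config */ });
const db = getFirestore(app);

// One endpoint for everyone: Firestore handles the connection scaling.
const q = query(collection(db, "stores"), where("region", "==", "US West"));
const unsubscribe = onSnapshot(q, (snapshot) => {
  snapshot.docChanges().forEach((change) => {
    console.log(change.type, change.doc.id, change.doc.data());
  });
});
```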
I've read quite a few posts (including the firebase.com website) on Firebase connections. The website says that one connection is equivalent to approximately 1400 visiting users per month, and this makes sense to me given a scenario where the client makes a quick connection to the Firebase server, pulls down some data, and then closes the connection. However, if I'm using Angular bindings (via AngularFire), wouldn't each client visit (in the event the user stays on the site for a period of time) be a connection? In this example, having 100 users (each of which is making use of the Firebase Angular bindings) connected to the site at the same time would be 100 connections. If I opted not to use Angular bindings, that number could (in a theoretical sense) be 0 if all the clients had already made their requests for data and were just idling.
Do I understand this properly?
AngularFire is built on top of Firebase's regular JavaScript/Web SDK. The connection count is fundamentally the same between them: if 100 users are using your application at the same time and you are synchronizing data for each of them, you will have 100 concurrent connections at that time.
The statement that one concurrent connection is the equivalent of about 1400 visits per month is based on the extensive experience that the Firebase people have with how long the average connection lasts. As Andrew Lee stated in this answer: most developers vastly over-estimate the number of concurrent connections they will have.
As said: AngularFire fundamentally behaves the same as Firebase's JavaScript API (because it is built on top of it). Both libraries keep an open connection for each user, so that they can synchronize any changes that occur between the connected users. You can manually drop such a connection by calling goOffline and then re-establish it with goOnline. Whether that is a good approach largely depends on the type of application you're building.
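With the modular Web SDK that looks like this (and since AngularFire wraps the same SDK, the same calls apply to the underlying database instance):

```typescript
import { getDatabase, goOffline, goOnline } from "firebase/database";

const db = getDatabase();

// Drop the realtime connection: listeners go quiet, writes are queued locally.
goOffline(db);

// Later: reconnect and let the SDK resynchronize any changes.
goOnline(db);
```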
Two examples:
There recently was someone who was building a word game. He used Firebase to store the final score for each game. In his case explicitly managing the connections makes sense, because the connection is only needed for a relatively short time when compared to the time the application is active.
The "hello world" for Firebase programming is a chat application. In such an application it doesn't make a lot of sense to manage the connections yourself. So briefly connect every 15 seconds and then disconnect again. If you do this, you're essentially reverting to polling for updates. Doing so will lose you one of the bigger benefits of using Firebase: it automatically synchronizes data to connected clients.
So only you can decide whether explicit connection management is best for your application. I'd recommend starting without it (it's simpler) and first testing your application on a smaller scale to see how actual usage holds up against your expectations.
I am working on a vendor portal. An owner of a shop will log in, and in the navigation bar (similar to Facebook) I would like the number of items sold to appear INSTANTLY, WITHOUT ANY REFRESH. In Facebook, new notifications pop up immediately. I am using SQL Azure as my database. Is it possible to note a change in the database and INSTANTLY INFORM the user?
Part 2 of my project will consist of a mobile phone app for the vendor. In this app, too, I would like to have the same notification mechanism. In this case, would I be correct to look into push notifications and apply them?
At the moment my main aim is to solve the problem in paragraph 1. I am able to retrieve the number of notifications, but how on earth is it possible to show the changes INSTANTLY? Thank you very much.
First you need to define what INSTANT means to you. For some, it means within a second 90% of the time; others would be happy with a 10-20 second gap on average. More importantly, you need to understand the implications of your requirements; in other words, is it worth it to your business to have near-zero wait time? The more relaxed your requirements, the cheaper the system will be to build and the easier it will be to maintain.
You should know that near-real-time notification can be very expensive in terms of computing and locking resources. The more you refresh, the more web round trips are needed (even if they are minimal in this case). Having data fresh to the second can also be costly to the database because you are potentially creating a high volume of requests, which in turn could affect otherwise well-performing requests. For example, if your website runs with 1000 users logged on, you may need 1000 database requests per second (assuming that's your definition of INSTANT), which could in turn create a throttling condition in SQL Azure if not designed properly.
An approach I used in the past, for a similar requirement (although the precision wasn't to the second; more like to the minute), was to load all records from a table into an in-memory cache local to the website. A background thread was locking and refreshing the in-memory data for all records in one shot. This allowed us to reduce database traffic by a factor of a thousand, since the data presented on screen came from the local cache and a single database connection was needed to refresh the cache (per web server). Because we had multiple web servers, and we needed the data to be exactly the same on all web servers within a second of each other, we synchronized the cache refreshes across all the web servers to run every minute. Putting this together took many hours, but it allowed us to build a system that was highly scalable.
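A compact sketch of that cache-and-refresh pattern in TypeScript; the query helper, its SQL, and the vendor-keyed shape are all hypothetical stand-ins for your own data access layer:

```typescript
type Counts = Map<string, number>; // vendorId -> items sold

let cache: Counts = new Map();

async function queryNotificationCounts(): Promise<Counts> {
  // Placeholder for the single real database round trip per refresh,
  // e.g. SELECT VendorId, COUNT(*) FROM Sales GROUP BY VendorId
  return new Map();
}

// Refresh the whole cache in one shot, once per minute, per web server.
setInterval(async () => {
  try {
    cache = await queryNotificationCounts(); // swap the reference atomically
  } catch (err) {
    console.error("cache refresh failed", err);
  }
}, 60_000);

// User-facing reads hit the in-memory cache, never the database.
export function getItemsSold(vendorId: string): number {
  return cache.get(vendorId) ?? 0;
}
```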
The above technique may not work for your requirements, but my point is that the higher the need for fresh data, the more design/engineering work you will need to make sure your system isn't too impacted by the freshness requirement.
Hope this helps.