How to cache Firebase data between different Android Architecture ViewModels?

This question is with reference to the excellent series: Using Android Architecture Components with Firebase Realtime Database.
If I have two instances of the same ViewModel, or if two ViewModels ask for the same data from Firebase, how can I cache it so that I don't end up adding multiple listeners for the same data?
P.S. - @Doug: the Part 2 to Part 3 link in the series is broken.

There is no performance hit for adding multiple listeners/observers on the same data. The Firebase SDK optimizes so that only one copy of the data is transferred across the wire, no matter how many listeners are waiting for it. Once retrieved, the data is simply handed to each listener/observer. Likewise, if you have persistence enabled, there is only one local cache on disk that all listeners share.
You shouldn't have to try to optimize your code in any way other than to minimize the amount of data each listener fetches. Any overlap in data is managed automatically.
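As a rough illustration (the ScoreViewModel class and the scores/latest path are made up for this sketch), two instances of a ViewModel like this can each attach their own listener without costing an extra download:

```java
// Minimal Android sketch: independent ViewModel instances listening to the same
// Realtime Database location. Firebase deduplicates the wire traffic; each
// listener simply receives the callback.
import androidx.lifecycle.LiveData;
import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

public class ScoreViewModel extends ViewModel {

    private final DatabaseReference ref =
            FirebaseDatabase.getInstance().getReference("scores/latest");

    private final MutableLiveData<Long> score = new MutableLiveData<>();

    private final ValueEventListener listener = new ValueEventListener() {
        @Override
        public void onDataChange(DataSnapshot snapshot) {
            score.setValue(snapshot.getValue(Long.class));
        }

        @Override
        public void onCancelled(DatabaseError error) {
            // Handle the error (log it, surface it to the UI, etc.).
        }
    };

    public ScoreViewModel() {
        // Even if several ViewModel instances do this for the same path,
        // only one copy of the data crosses the network.
        ref.addValueEventListener(listener);
    }

    public LiveData<Long> getScore() {
        return score;
    }

    @Override
    protected void onCleared() {
        ref.removeEventListener(listener);
    }
}
```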

Related

How to minimise Firebase Function Latency

As per the documentation, Firebase Functions are currently supported in only four regions: us-central1, us-east1, europe-west1, and asia-northeast1.
That means locations further away incur more latency, and often that translates to lower performance.
How can this limitation be worked around?
1) Choose the location that is closest to you. You can set up test cloud functions in different regions and measure the round-trip latency. Only you can discover the specifics of your location.
2) Focus your software architecture on infrastructure that is locally available.
Use the client-side Firestore library directly as much as possible. It supports offline data, queues writes to send later if you don't have internet, and caches read data locally - you can't get lower latency than that! So make sure you use Firestore for CRUD operations.
3) Architect to use Cloud Functions for batch and background processing. If any business-logic processing is required, write the data to Firestore (using the client libraries) and have a Cloud Functions trigger do the processing on that write event. Have the trigger update the record with the results of the additional processing and its state. If you're using the client-side libraries, the updated data is then pushed back to the client automatically (a sketch of this pattern follows this list).
You also get the bonus of being able to control authorisation with Firestore security rules and Firebase Auth, whereas Functions don't give you that per-user authorisation control (they run with admin-level access).
4) Reduce chatter - minimise the number of Cloud Function calls overall, and make sure each function does more work and returns more complete data in a single call.
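To make points 2 and 3 concrete, here is a rough Android-flavoured sketch of the "write through the client, let a trigger process it" pattern. The RequestRepository class, the requests collection, and the status field are illustrative only, and the server-side trigger itself is not shown:

```java
// Minimal sketch using the Firestore Android client SDK. The client writes the
// document and then listens for the server-side trigger to update it.
import com.google.firebase.firestore.DocumentReference;
import com.google.firebase.firestore.FirebaseFirestore;

import java.util.HashMap;
import java.util.Map;

public class RequestRepository {

    private final FirebaseFirestore db = FirebaseFirestore.getInstance();

    public void submitRequest(String payload) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("payload", payload);
        doc.put("status", "pending"); // a Cloud Functions trigger fills in the result later

        // Write through the client SDK: it works offline and queues the write
        // until connectivity returns.
        db.collection("requests")
          .add(doc)
          .addOnSuccessListener(this::listenForResult);
    }

    private void listenForResult(DocumentReference ref) {
        // When the server-side trigger updates the document (e.g. status = "processed"),
        // the change is pushed back to this listener automatically.
        ref.addSnapshotListener((snapshot, error) -> {
            if (error != null || snapshot == null) {
                return;
            }
            if ("processed".equals(snapshot.getString("status"))) {
                // Hand the processed data to the UI layer here.
            }
        });
    }
}
```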

Do I need to create a new SQLite database every time an application is updated?

I have a Xamarin Forms application I would like to develop. It will have a SQLite database, and I wish to make this available on iOS and Android. The database will be populated with initial seed data from a SQL Server database in the cloud. I'm thinking this will be about 500 rows of data, with each row about 1 KB.
What I don't understand is when and how to populate this. Should I put the data into a CSV file and have it populate the database when the application is installed, or when it first starts? What's the normal way to populate seed data, other than a huge number of insert statements hard-coded into the app?
Any help or advice on how this is normally done (I'm thinking most people do it the same way) would be much appreciated.
Thanks
Let's break the problem down.
Is the initial data that you wish to use in your app going to change over time?
If you include any pre-populated data (a SQLite, Realm, or CSV-based file, ...) and that data goes stale so that you have to update it on a routine basis, you will need to publish an application update (.apk/.ipa) so that new installs receive the updated data (more on this below).
Note: This assumes that your current users get the updated data by actually running your app, which handles the local data updates on a routine basis (background service, push notifications, data polling, etc.).
Is this a Line of Business (LoB) application published via Ad-Hoc, private Store, and/or iOS Enterprise publishing?
If you control the user base, then forcing an update install so your users get your new/updated pre-populated data might be an acceptable approach - not a great user experience if they are forced to update the application all the time... but it works...
Is this application going to be distributed via the public Apple and Google App Stores?
This is where you need to be very careful on what pre-populated data you include within your application.
If the data goes stale and you need to push an updated app version to the Stores for your new installs, beware that it could take days (or weeks, or even a month+) to get that new app into the store.
The Play Store usually takes less than 24 hours to publish app updates, and while the Apple Store can be the same, do not bet on it.
We routinely see 48-72 hour delays, randomly get rejected, and thus it can take a week or more to get an updated app into the Apple Store. We have had rejections delay an app update for over a month, have gone into the appeal process, and have even removed already existing features to get re-published.
Note: Every app update to the Apple store resets your user reviews... :-(
Bottom line: You want to publish to the Stores when you are bug fixing and/or adding features, not to update some "static" data that is stored within your app bundle...
What does this data cost your end-user and you?
Negative costs to you as an app developer are bad reviews and uninstalls. Look at how this "data" affects the end-user's access to your application and how they react. Longer download time: usually acceptable. Longer initial app startup times: less acceptable... etc....
What markets will your app be used in? In many markets across the world, network speeds are slow and data transfer is costly...
What really is the true size of the data?
I "pre-populate" a Realm data instance with thousand of rows with 5MB of JSON data in under a second. SQLite takes longer, but it is still not bad. The data itself is stored in a zip and accessed as a static file (https-based get) and at a 80% compression factor, the 1MB of compressed data is pulled from a server (AWS S3) in under one second using LTE cellular data speeds and uncompressing it as stream while deserializing the JSON on-fly to update the Realm instance adds another second...
So, the user impact is very small and I "hide" this initial pre-populate update via a first-time welcome screen and some text that the user hopefully reads before getting to the first "real" app screen...
Note: This does assume that the user will have network data access the first time they open the app... In many markets around the world, this is not true, so factor this into your app design.
I also architect the app so its data can be updated on background threads during launch (the initial one or not), so the user does not stand there watching a spinning busy indicator - they can at least interact with the data they do have.
So should you include any pre-populated data in your app bundle?
Sure, when that data is absolutely required to get the user up and running as fast as possible, to enhance the user experience. Games are a great example of this, bundling hundreds of megabytes or even gigabytes (via .obb...) of levels, media files, etc. into the app so the user does not experience a 10+ minute wait upon opening the app the first time.
While this does mean the initial download time for the install was longer because that data was bundled within the app, the overall user experience is better: users accept download/install times and view them as a carrier/phone/service-plan issue, versus the time it takes to open your app the first time and actually get to a functional screen.
So what should you do?
Personally, I look at this issue on a case-by-case basis. I look at the data, and if it is not going to change - only be added to, and possibly pruned, over time - I include it as a pre-populated SQLite or Realm store or... Why make the user wait for web requests and database updates, and pay the additional network data usage and associated costs? If the data is going to go stale, do not bundle it in your app.
As for the mechanics of installing pre-populated data:
See my answer on this SO Question about "Bundle prebuilt Realm files"
You don't have to create your SQLite database every time the app is updated.
Actually SQLiteOpenHelper provides the following two methods:
OnCreate(): implement this method to create your SQLite database and populate it with data from the server. It is called the first time the database is created on the device.
OnUpgrade(): implement this method if you want to modify the database (add a new table or a column to a table) or populate additional data. It is called when the database version number is increased.
The database is preserved between app updates and you don't need to create it each time.
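For reference, here is a minimal Java sketch of the SQLiteOpenHelper pattern described above (Xamarin.Android exposes the same OnCreate/OnUpgrade methods in C#). The SeedDatabaseHelper class and the seed_items table are made up for illustration:

```java
// Minimal Android sketch of the SQLiteOpenHelper pattern: onCreate seeds the
// database once, onUpgrade migrates it when the version number increases.
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class SeedDatabaseHelper extends SQLiteOpenHelper {

    private static final String DB_NAME = "seed.db";
    // Bump DB_VERSION whenever onUpgrade needs to run on existing installs.
    private static final int DB_VERSION = 2;

    public SeedDatabaseHelper(Context context) {
        super(context, DB_NAME, null, DB_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // Runs once, when the database file is first created (fresh install).
        db.execSQL("CREATE TABLE seed_items (id INTEGER PRIMARY KEY, name TEXT)");
        db.execSQL("INSERT INTO seed_items (name) VALUES ('initial row')");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Runs when DB_VERSION increases; existing data is preserved unless you drop it.
        if (oldVersion < 2) {
            db.execSQL("ALTER TABLE seed_items ADD COLUMN category TEXT");
        }
    }
}
```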
Check the following examples, which explain how to use a SQLite database with Xamarin:
Using Sqlite in a Xamarin.Android Application Developed using Visual Studio
and
An Introduction to Xamarin.Forms and SQLite

Apple Watch complication network requests

I'm creating a weather application that pulls its information from an online API.
I am able to get the information successfully in the GlanceController and in the InterfaceController. However, I'm a little unsure as to how I should do this for the complication. Can I perform a network request within the ComplicationController class?
If so, How would I go about doing this?
You'll run into issues related to asynchronously fetching data from within the complication data source, mostly due to the data being received after the timeline update is complete.
Apple recommends that you fetch the data from a different part of your app, and have it available in advance of any complication update:
The job of your data source class is to provide ClockKit with any requested data as quickly as possible. The implementations of your data source methods should be minimal. Do not use your data source methods to fetch data from the network, compute values, or do anything that might delay the delivery of that data. If you need to fetch or compute the data for your complication, do it in your iOS app or in other parts of your WatchKit extension, and cache the data in a place where your complication data source can access it. The only thing your data source methods should do is take the cached data and put it into the format that ClockKit requires.
Other ways to approach it:
The best way to update your complication (from your phone once you have received updated weather data) is to use transferCurrentComplicationUserInfo.
Alternately, you could have your watch app or glance cache its most recent weather details to be on hand for the next scheduled update.
If you absolutely must handle it from the complication:
You could have the scheduled timeline update get the extension to start an NSURLSession background task to asynchronously download the information from your weather service. The first (scheduled) update will then end, with no new data. Once the new weather data is received, you could then perform a second (manual) update to reload the complication timeline using the just-received data.
I don't have any personal experience with that approach, mostly because of the unnecessary need for back-to-back timeline updates.

Load data on server start up and refresh on regular time interval using spring 3.1

I am new to the Spring framework. I would like to pull data from the database and store it in the application context. Whenever the data changes in the database, the cached data should be refreshed accordingly. Please help me out - what would be the best approach?
If you want to refresh your data in the application context on every change in the database, I have bad news for you - that's not really possible (or easy and straightforward, at least).
Most common databases are passive in the sense that they won't let you subscribe to specific events (like a data update), because that would require additional IPC between the database and the subscribed application, and this is generally not the main purpose of a database.
In any case, something like this will be database-specific, so if you really want it, check the API docs of your database - there is a chance you'll find a means of doing something like this there. Again, this probably won't be a very flexible or robust solution.
In the general case you go one of three routes:
Pull your data from database every time it is needed
Pull your data from database every time it is needed but add some cache
Implement an application-level component that manages the data. When data is requested, it fetches it from the database if it is missing from the cache. When data is updated, it updates both the cache and the database. (A sketch of a simple cache component follows below.)
(1) and (2) are pretty much your only options if your data can be updated from outside your application. (3) is a good way to go if the data can only be updated from within your application and the amount of data is small enough to justify caching it.
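As a rough sketch of the "load on startup and refresh on a regular interval" variant the question asks about (a simple form of routes 2/3), something like the following could work in Spring 3.1. The LookupDao and LookupItem types are placeholders for your own data-access code, and scheduling must be enabled (e.g. <task:annotation-driven/> or @EnableScheduling):

```java
// Minimal sketch: a cache component that loads data at startup and re-reads the
// database on a fixed interval. Not event-driven - it simply polls.
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class LookupDataCache {

    private final LookupDao lookupDao;
    private final AtomicReference<List<LookupItem>> cache =
            new AtomicReference<List<LookupItem>>();

    @Autowired
    public LookupDataCache(LookupDao lookupDao) {
        this.lookupDao = lookupDao;
    }

    @PostConstruct
    public void loadOnStartup() {
        refresh();
    }

    // Re-read the table every 10 minutes; tune fixedDelay to your needs.
    @Scheduled(fixedDelay = 600000)
    public void refresh() {
        cache.set(lookupDao.findAll());
    }

    public List<LookupItem> getAll() {
        return cache.get();
    }
}
```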
Hope this will help.

DDP vs Straight MongoDB access for synching large amounts of data

We are building an app in Meteor that will be participating in an education ecosystem.
There are a number of applications (e.g. a GradeBook, a Student Information System, a Reporting System...) that will all need to have their data stores kept in sync with Meteor. The datastore size will be in the hundreds of thousands of documents.
My understanding is that DDP is used to connect "clients" to a Meteor app (by subscribing to feeds when Meteor is pushing data changes and RPC to get the data in to Meteor). And a "client" is generally scoped to a user...so the size of the data set is relatively small compared to the universe of data (a teacher might have access to 100 of the 250K documents).
If I connected a Reporting System (as a "client") to Meteor with DDP, all data in the store would need to be synced... Does that mean that every time the Reporting System loses the connection to Meteor, all the data would be re-sent from Meteor to the DDP client (because the Reporting System is interested in ALL the data)? If that's the case, DDP wouldn't be the way to keep applications in sync, right? It's meant for much smaller scoped data sets... and we should probably interact directly with Mongo to keep things in sync.
Thanks!
Mike
Based on this:
http://meteor.com/blog/2012/03/21/introducing-ddp
Distributed Data Protocol. DDP is a standard way to solve the biggest problem facing client-side JavaScript developers: querying a server-side database, sending the results down to the client, and then pushing changes to the client whenever anything changes in the database.
it seems clear that any new DDP client receives all the data, and then deltas as the data changes.
I would suggest that if your 'client' doesn't need reactivity / realtime updates / two-way syncing, you should pull the data directly from Mongo and avoid the overhead of 'syncing'. For a 'reporting system' this should be perfectly acceptable: grab a bunch of data, generate reports. You shouldn't care about changing data in this context - just a snapshot, and reports from that snapshot (a sketch follows below).
If you do need the more realtime features, DDP is likely worth the overhead and the initial setup difficulty.
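As a rough sketch of that "just grab a snapshot from Mongo" approach (the connection string, database, and collection names are made up; this uses the MongoDB Java driver rather than anything Meteor-specific):

```java
// Minimal sketch of reading a snapshot straight from MongoDB for a reporting job,
// bypassing DDP entirely. Requires org.mongodb:mongodb-driver-sync.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

import org.bson.Document;

public class ReportSnapshot {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> grades = client
                    .getDatabase("meteor")      // the actual DB name depends on your deployment
                    .getCollection("grades");   // illustrative collection name

            // Grab the snapshot once and feed it to the report generator;
            // no subscription, no resync on reconnect.
            for (Document grade : grades.find()) {
                System.out.println(grade.toJson());
            }
        }
    }
}
```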
I think nate's answer is spot on about what you should do, especially considering the volume of data. If you need to display a whole lot of data, use a paginated subscription so that you can enjoy the realtime functionality (if you decide to use it) without downloading it all at once. Keep in mind, though, that at the moment the data is sent down like this (for each session, so if the tab is closed and reopened it will be redone):
1 - Connect to the DDP server/proxy (long polling now, due to WebSocket issues with Chrome)
2 - Establish a 'subscription'
3 - Fetch all data relevant to subscription (initial download)
4 - Subscription is complete, now the client will 'listen' for changes
5 - Any updates (remove/update/insert, etc) are sent down to the client
There really isn't a sync system at this point where old data is kept offline (in localStorage, IndexedDB, or anything else) so that step 3 can be avoided and only the changes from that point onward are sent.
With this in mind, if there is a connectivity interruption (e.g. losing connectivity for a short period of time), Meteor will lose its connection to the DDP wire, and when it reconnects it will download everything again as if from scratch.
