this.store
  .collection(collectionName)
  .onSnapshot((data) => {
  });
We can listen to a document with the onSnapshot() method. Each time the contents change, another call updates the document snapshot.
I am looking for a hook/RxJS operator that we can use in between, when the data is about to change/be emitted and when the data is received by the stream.
It would be helpful to show a loading spinner, or to disable a form or table, when we receive a new update.
Something like
this.store
  .collection(collectionName)
  .onSnapshot()
  .pipe(
    aboutToInitiate(() => { /* start spinner */ }),
    dataReceived(() => { /* stop spinner, stream received */ })
  ).subscribe(() => {
    // Or we can stop the spinner here, but where exactly do we hook the logic to start the spinner?
  });
Note: When creating/updating (triggered manually) we can start the spinner and stop it on success.
But when the stream returns an update triggered by the server or a real-time change while we are subscribed, we need to show a spinner or indicator that we are about to receive data, and hide it once the data has arrived.
So, in short: a hook within the subscription that automatically starts the spinner when data is about to arrive and stops it automatically when the data is received.
We can use the tap operator, but that fires after the data is received, not when the process of getting the update is initiated.
onSnapshot() listens to data changes in realtime. That means you can have a spinner only on initialisation. Every change after that is applied immediately, as soon as your device receives it from the backend or it is changed on your device.
You could start a spinner before you initialize the onSnapshot():
// START spinner
const unsub = this.store
  .collection(collectionName)
  .onSnapshot((data) => {
    // STOP spinner
  });
// Stop listener
unsub();
With the unsub you can stop listening to realtime changes.
One thing you should consider is whether you have offline capabilities enabled. In that case you can listen to both kinds of changes: data written to the device cache and data written to the backend.
You can do that by enabling metadata changes on your listener, like here:
db.collection("cities").doc("SF")
.onSnapshot({
// Listen for document metadata changes
includeMetadataChanges: true
}, (doc) => {
// ...
});
You can then detect whether the data is written only to the device or also to the server:
db.collection("cities").doc("SF")
.onSnapshot((doc) => {
var source = doc.metadata.hasPendingWrites ? "Local" : "Server";
console.log(source, " data: ", doc.data());
});
You can find more about it here.
You could use that metadata to determine when the data is written locally (then start the spinner) and when it's written to the server (stop the spinner).
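Putting the pieces together, a minimal sketch of that idea could look like the following; showSpinner(), hideSpinner() and renderData() are hypothetical helpers from your own app, not part of the Firestore API:
// Sketch: toggle an indicator based on whether the latest snapshot still has pending writes.
// showSpinner/hideSpinner/renderData are hypothetical app helpers.
const unsub = db.collection("cities").doc("SF")
  .onSnapshot({ includeMetadataChanges: true }, (doc) => {
    if (doc.metadata.hasPendingWrites) {
      // The write is only in the local cache so far; show the indicator.
      showSpinner();
    } else {
      // The data is confirmed by (or came from) the server; hide the indicator.
      hideSpinner();
    }
    renderData(doc.data());
  });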
In short: Is there some kind of cold start when connecting to Firestore directly from the Client SDK?
Hey. I'm using the Firestore client SDK in an Android and iOS application through #react-native-firebase.
Everything works perfectly, but I have noticed weird behavior for which I haven't found an explanation.
I have added logging to see how long it takes from user login to retrieving the uid's corresponding data from Firestore, and this time has been ~0.4-0.6s. This is basically the whole onAuthStateChanged workflow.
let userLoggedIn: Date;
let userDataReceived: Date;
let getDataDuration: string;

auth().onAuthStateChanged(async (user) => {
  userLoggedIn = new Date();

  const eventsRetrieved = async (data: UserInformation) => {
    userDataReceived = new Date();
    getDataDuration = `Get data duration: ${(
      (userDataReceived.getTime() - userLoggedIn.getTime()) /
      1000
    ).toString()}s`;
    console.log(getDataDuration);
    // function to check user role and to advance timing logs
    onUserDataReceived(data);
  };

  const errorRetrieved = () => {
    signOut();
    authStateChanged(false);
  };

  let unSub: (() => void) | undefined;

  if (user && user.uid) {
    const userListener = () => {
      return firestore()
        .collection('Users')
        .doc(user.uid)
        .onSnapshot((querySnapshot) => {
          if (querySnapshot && querySnapshot.exists) {
            const data = querySnapshot.data() as UserInformation;
            data.id = querySnapshot.id;
            eventsRetrieved(data);
          } else errorRetrieved();
        });
    };
    unSub = userListener();
  } else {
    if (typeof unSub === 'function') unSub();
    authStateChanged(false);
  }
});
Now the problem. When I open the application ~30-50 minutes after the last open, the time to retrieve the uid's corresponding data from Firestore is ~3-9s. What is this time and why does it happen? And if I open the application again right after that, the time is low again, ~0.4-0.6s.
I have been experiencing this behavior for weeks. It is hard to debug as it happens only in the built application (not in local environments) and only after a +30 min interval.
Points to notice
The listener query (which I'm using in this case; I have also used a simple getDoc function) is really simple and focused on a single document, and all project configuration works well. Only in this time interval, which looks just like a cold start, does the long data retrieval duration occur.
Firestore Rules should not be slowing the query, as subsequent requests are fast. Rules for the 'Users' collection are as follows, in pseudocode:
function checkCustomer() {
  let data =
    get(/databases/$(database)/documents/Users/$(request.auth.uid)).data;
  return (resource.data.customerID == data.customerID);
}

match /Users/{id} {
  allow read: if
    checkUserRole() // Checks user is logged in and has certain customClaim
    && idComparison(request.auth.uid, id) // Checks user uid is same as document id
    && checkCustomer() // User can read user data only if data is under same customer
}
The device cache doesn't seem to affect the issue, as the application's cache can be cleared and the "cold start" still occurs.
Firestore can be called from another environment or just another mobile device, and this "cold start" occurs per device (meaning it doesn't help if another device opened the application just before). This is unlike Cloud Run with min instances, where, once fired from any environment, the next calls right after are fast regardless of the environment (web or mobile).
EDIT
I have also tested this by changing the listener to a simple getDoc call. The same behavior still happens in the built application. Replacing the listener with:
await firestore()
  .collection('Users')
  .doc(user.uid)
  .get()
  .then(async document => {
    if (document.exists) {
      const data = document.data() as UserInformation;
      if (data) data.id = document.id;
      eventsRetrieved(data);
    }
  });
EDIT2
Testing further, there has now been a 3-15s "cold start" on the first Firestore getDoc. Also, in some cases the time between app opens has been only 10 minutes, so the minimum 30 min benchmark does not apply anymore. I'm going to send a DM to the Firebase bug report team to look into this further.
Since you're using React Native, I assume that the documents in the snapshot are being stored in the local cache by the Firestore SDK (as the local cache is enabled by default on native clients). And since you use an onSnapshot listener, it will actually re-retrieve the results from the server if the same listener is still active after 30 minutes. From the documentation:
If offline persistence is enabled and the listener is disconnected for more than 30 minutes (for example, if the user goes offline), you will be charged for reads as if you had issued a brand-new query.
The wording here is slightly different, but given the 30m mark you mention, I do expect that this is what you're affected by.
In the end I didn't find a straight answer for why this cold start appeared. I ended up switching from the native Client SDK to the web Client SDK, which works correctly, the first data fetch taking ~0.6s (always 0.5-1s). The package change fixed the issue for me, while the functions to fetch data are almost completely identical.
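For reference, a minimal sketch of what the equivalent fetch could look like with the web Client SDK (assuming the v9 modular API; app, eventsRetrieved and errorRetrieved come from the surrounding code):
// Sketch using the web Client SDK's modular API.
import { getFirestore, doc, getDoc } from 'firebase/firestore';

const db = getFirestore(app); // app = already-initialized Firebase app

async function fetchUserData(uid) {
  const snapshot = await getDoc(doc(db, 'Users', uid));
  if (snapshot.exists()) {
    const data = snapshot.data();
    data.id = snapshot.id;
    eventsRetrieved(data); // same callback as in the original listener
  } else {
    errorRetrieved();
  }
}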
I have successfully implemented a basic notification feature using the react-native-firebase library. Everything is working as expected; information is properly received and ready to be used for a purpose I have yet to determine. My code currently looks like this for the notification handling part:
componentDidMount() {
  /**
   * When app on foreground, rewrap received notification and re-send it as notification using channelId
   * A workaround because channelId never set by default by FCM API so we need to rewrap to make sure it is
   * shown on user's notification tray
   */
  this.notificationListener = firebase.notifications().onNotification((notification) => {
    // data object must have channelId props as a workaround for foreground notification on Android
    console.log('Notif ', notification);
    notification.android.setChannelId(notification.data.channelId);
    firebase.notifications().displayNotification(notification);
  });

  // On notification tapped, be it from foreground or background
  this.notificationOpen = firebase.notifications().onNotificationOpened((notificationOpen) => {
    // body and title lost if accessed from background, taking info from data object by default
    const notification = notificationOpen.notification;
    console.log('Open ', notification);
    Alert.alert(notification.data.title, notification.data.body);
  });

  // When notification received while app is closed
  this.initialNotification = firebase.notifications().getInitialNotification()
    .then((notificationOpen) => {
      // body and title lost if accessed this way, taking info from data object where info will persist
      if (notificationOpen) {
        const notification = notificationOpen.notification;
        console.log('Initial ', notification);
        Alert.alert(notification.data.title, notification.data.body);
      }
    });
}
componentWillUnmount() {
  // onNotification and onNotificationOpened return unsubscribe functions.
  this.notificationListener();
  this.notificationOpen();
  // getInitialNotification() returns a Promise, so there is no listener to remove for it.
}
The above code lets me use any information I send from the Firebase console or from a PHP server set up by my colleague from within the above scope (not sure how the server-side implementation was done, but it gives me the exact same notification object on my end).
So that's good and all, but the problem is that when I set a badge on iOS from the Firebase console, the badge doesn't go away once I open the notification.
I have been trying to figure out if there's any extra bit I have to add to the above block to programmatically decrement the badge counter, but have had no luck so far.
So if anyone here can show me how to manage these notification objects properly (especially explaining the nature and lifecycle of these objects -- i.e. which data on which property/method persists or is static within the scope of the notification object) on both Android and iOS, that would be greatly appreciated :)
Turns out a simple firebase.notifications().setBadge(0) on root componentDidMount() clears out the badge count whenever the app is opened.
You may need to use firebase.notifications().removeAllDeliveredNotifications() or firebase.notifications().cancelAllNotifications() to remove them from the notification tray too.
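A minimal sketch of that, assuming a class-based root component (the component and navigator names are just placeholders):
// Sketch: reset the iOS badge count whenever the app is opened.
class App extends React.Component {
  componentDidMount() {
    firebase.notifications().setBadge(0); // clear the badge on launch
    firebase.notifications().removeAllDeliveredNotifications(); // optionally clear the tray too
  }
  render() {
    return <RootNavigator />; // placeholder for the rest of the app
  }
}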
Maybe you have to set the badge while creating the notification:
this.notificationListener = firebase.notifications().onNotification((notification) => {
  const localNotification = new firebase.notifications.Notification()
    .setNotificationId(notification.notificationId)
    .setTitle(notification.title)
    .setSubtitle(notification.subtitle)
    .setBody(notification.body)
    .setData(notification.data)
    .ios.setBadge(notification.ios.badge);

  firebase.notifications()
    .displayNotification(localNotification)
    .catch(err => console.error(err));
});
Put the line .ios.setBadge(notification.ios.badge); in your code while building the notification and try again.
According to the docs Realm can notify you when certain actions are taking place like "every time a write transaction is committed". I am using the Realm Object Server and the first time a user opens my app a large set of data is synched from the server down to the app. I would like to show a loading screen and not present the main UI of my app until Realm has completed its initial sync. Is there a way to be notified / determine when this process is complete?
The realm.io website just posted documentation on how to do this.
Asynchronously Opening Realms
If opening a Realm might require a time-consuming operation, such as applying migrations or downloading the remote contents of a synchronized Realm, you should use the openAsync API to perform all work needed to get the Realm to a usable state on a background thread before dispatching to the given queue. You should also use openAsync with Realms that are set read-only.
For example:
Realm.openAsync({
  schema: [PersonSchema],
  schemaVersion: 42,
  migration: function(oldRealm, newRealm) {
    // perform migration (see "Migrations" in docs)
  }
}, (error, realm) => {
  if (error) {
    return;
  }
  // do things with the realm object returned by openAsync to the callback
  console.log(realm);
});
The openAsync command takes a configuration object as its first parameter and a callback as its second; the callback function receives an error (if one occurred) and the opened Realm.
Initial Downloads
In some cases, you might not want to open a Realm until it has all remote data available. In such a case, use openAsync. When used with a synchronized Realm, this will download all of the Realm’s contents before the callback is invoked.
var carRealm;
Realm.openAsync({
  schema: [CarSchema],
  sync: {
    user: user,
    url: 'realm://object-server-url:9080/~/cars'
  }
}, (error, realm) => {
  if (error) {
    return;
  }
  // Realm is now downloaded and ready for use
  carRealm = realm;
});
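Tying this back to the question, a minimal sketch of the loading-screen pattern; showLoadingScreen, hideLoadingScreen, showErrorScreen and showMainUI are hypothetical UI helpers standing in for however your app switches screens:
// Sketch: keep the loading screen up until the synced Realm has finished its initial download.
showLoadingScreen();

Realm.openAsync({
  schema: [CarSchema],
  sync: { user: user, url: 'realm://object-server-url:9080/~/cars' }
}, (error, realm) => {
  if (error) {
    showErrorScreen(error);
    return;
  }
  // The initial sync is complete; the Realm's remote contents are available.
  hideLoadingScreen();
  showMainUI(realm);
});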
As I understand it, when a request to an event emitter on the server arrives, that request is never closed and you only need to res.write() every time you would like to send a message. However, is there a way to be notified when the client that performed this request has left? Is there a property on the request object?
Suppose I have the following route:
app.get('/event', function(req, res) {
  // set response headers
  // how do I check if the req object is still active, to send a message and perform other actions?
});
The basic sequence of events should be similar in other frameworks, but this example is Grails 3.3.
First set up endpoints to subscribe, and to close the connection.
def index() {
    // handler for GET /api/subscribe
    rx.stream { Observer observer ->
        // This is the Grails event bus. Background tasks,
        // services and other controllers can post these
        // events, CLIENT_HANGUP, SEND_MSG, which are
        // just string constants.
        eventBus.subscribe(CLIENT_HANGUP) { String msg ->
            // Code to handle when the Grails event bus
            // posts CLIENT_HANGUP.
            // Do any side effects here, like update your counter.
            // Close the SSE connection.
            observer.onCompleted()
            return
        }
        eventBus.subscribe(SEND_MSG) { String msg ->
            // Send a Server Sent Event
            observer.onNext(rx.respond(msg))
        }
    }
}

def disconnecting() {
    // handler for GET /api/disconnect
    // Post the CLIENT_HANGUP event to the Grails event bus
    notify(CLIENT_HANGUP, 'disconnect')
}
Now in the client, you need to arrange to GET /api/disconnect whenever your use-case requires it. Assuming you want to notice when someone navigates away from your page, you could register a function on window.onbeforeunload. This example is using Vue.js and Axios.
window.onbeforeunload = function (e) {
  e.preventDefault()
  Vue.$http({
    method: 'get',
    url: 'http://localhost:8080/api/disconnect'
  })
    .then((response) => { console.log(response) })
    .catch(({error}) => { console.log(error) })
}
In the case of Servlet stacks like Grails, I found that I needed to do this even if I had no housekeeping of my own to do when the browser went away. Without it, page reloads were causing IOExceptions on the back end.
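For completeness, a minimal sketch of the client-side subscription to the stream, using the browser's standard EventSource API (the endpoint path matches the /api/subscribe handler above; the handlers are just illustrative):
// Sketch: subscribe to the SSE endpoint.
const source = new EventSource('http://localhost:8080/api/subscribe');

source.onmessage = (event) => {
  // Each event the server emits arrives here as a message event.
  console.log('SSE message:', event.data);
};

source.onerror = (err) => {
  // Fired when the connection drops; EventSource reconnects automatically.
  console.log('SSE error:', err);
};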
I am trying to create a Meteor app that stores content in a Meteor collection to be passed between the server and the client to display a success message after an asynchronous API call through the twit package.
However, I am running into an issue where, when I update the collection on the server, the updates are not reflected on the client. My code is as follows:
/lib
Alerts = new Meteor.Collection("alerts");
/client
Template.suggestionForm.events({
  "submit form": function (e) {
    return Meteor.call('submitMessage', message);
  }
});

Meteor.subscribe('alerts');

Meteor.startup(function() {
  Tracker.autorun(function() {
    console.log(Alerts.find());
  });
});
/server
Fiber = Npm.require('fibers');

Twit = new TwitMaker({
  consumer_key: '...',
  consumer_secret: '...',
  access_token: '...',
  access_token_secret: '...'
});

Meteor.publish("alerts", function() {
  Alerts.find();
});

Meteor.methods({
  submitMessage: function(message) {
    this.unblock();
    Twit.post('statuses/update', { 'status': message }, function(err, data, response) {
      Fiber(function() {
        Alerts.remove({});
        Alerts.insert({response: err});
      }).run();
    });
  }
});
When I submit the form, the function is called just fine and updates the collection; however, Tracker.autorun() does not run. Any ideas about why this is happening, or how I can make the client listen for changes in collections, would be super helpful. Thank you!
Remember to return the resulting cursor in the publish():
Meteor.publish("alerts", function(){
return Alerts.find();
});
Reference: http://docs.meteor.com/#/full/meteor_publish
Publish functions can return a Collection.Cursor, in which case Meteor will publish that cursor's documents to each subscribed client. You can also return an array of Collection.Cursors, in which case Meteor will publish all of the cursors.
and
Alternatively, a publish function can directly control its published record set by calling the functions added (to add a new document to the published record set), changed (to change or clear some fields on a document already in the published record set), and removed (to remove documents from the published record set). These methods are provided by this in your publish function.
If a publish function does not return a cursor or array of cursors, it is assumed to be using the low-level added/changed/removed interface, and it must also call ready once the initial record set is complete.
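If you ever do need that low-level interface, a minimal sketch (purely illustrative; returning the cursor is the simpler fix for this question) could look like:
// Sketch of the low-level publish API: added/changed/removed/ready.
Meteor.publish("alerts", function() {
  const self = this;
  const handle = Alerts.find().observeChanges({
    added: function(id, fields) { self.added("alerts", id, fields); },
    changed: function(id, fields) { self.changed("alerts", id, fields); },
    removed: function(id) { self.removed("alerts", id); }
  });
  // Tell the client the initial record set is complete.
  self.ready();
  // Stop observing when the client unsubscribes.
  self.onStop(function() { handle.stop(); });
});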