Firebase onDisconnect not 100% reliable now?

I've been using Firebase extensively and still face only one real issue: onDisconnect isn't 100% reliable in my experience.
If you close the computer without closing the window first, or kill the browser, the "garbage collector" sometimes gets your onDisconnect executed and sometimes it doesn't.
My question is the following: I just don't use /.info/connected for now; I basically use a simple
userRef.update({ 'status': 1 });
userRef.onDisconnect().update({ 'status': 0 });
Is there anything wrong with this approach? Do we agree that the update parameters are passed to the server at the time the line is executed, and not at window-unload time?
NB: I'm also trying to keep a multi-window status, using the following approach to keep the status at 1 when another window is closed:
userRef.child('status').on('value', function(snap) {
  if (snap.val() != 1) {
    userRef.update({ 'status': 1 });
  }
});
I don't see how this could be related, but...
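(As an aside, a common way to make multi-window presence robust, sketched here for illustration rather than taken from this question, is to track each window under its own push() child so that closing one window removes only that window's entry:)
// Assumes connectedRef points at the special .info/connected location.
connectedRef.on('value', function(snap) {
  if (snap.val() === true) {
    // One entry per window/tab.
    var con = userRef.child('connections').push(true);
    // On disconnect, the server removes only this window's entry.
    con.onDisconnect().remove();
  }
});
// The user counts as online while /connections has any children.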
MY SOLUTION: In fact, I had simply missed the part of the documentation where you learn that onDisconnect is only triggered once. To get a persistent onDisconnect, you need to re-register it yourself every time the connection is re-established.
Helpers.onConnected = function(callback) {
  var connectedRef = lm.newFirebase('.info/connected');
  // .on() returns the handler, which we keep so it can be detached later.
  var fn = connectedRef.on('value', function(snap) {
    if (snap.val() === true) {
      if (callback) callback();
    }
  });
  var returned = {};
  returned.cancel = function() {
    connectedRef.off('value', fn);
  };
  return returned;
};
Simple use case:
this._onConnected = lm.helpers.onConnected(function() {
  this.firebase.onDisconnect().update({ 'tu': 0 });
}.bind(this));
And then to cancel:
if (this._onConnected) this._onConnected.cancel();
this.firebase.onDisconnect().cancel();

You should always call the onDisconnect() operation BEFORE you call the set() operation. That way, if the connection is lost between the two, you don't end up with zombie data.
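For example, a minimal sketch of that ordering (using the same userRef as in the question):
var statusRef = userRef.child('status');
// Queue the server-side cleanup first...
statusRef.onDisconnect().set(0);
// ...and only then claim to be online.
statusRef.set(1);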
Also note that in the case where the network connection is not cleanly killed, you may have to wait for a TCP timeout before we're able to detect the user as gone and trigger disconnect cleanup. The cleanup will occur, but it may take a few minutes.

After a lot of digging around, I found the issue that was actually happening in my case, and I suspect for many of the others coming to this page for a solution as well.
So the problem is that Firebase checks its accessToken twice: 1. when the onDisconnect is enqueued, and 2. when the onDisconnect is applied.
Firebase doesn't proactively refresh tokens when a tab is not visible. If the page stays hidden for longer than the accessToken's lifetime and is then closed without ever refocusing that tab, Firebase will not apply the onDisconnect because of the expired accessToken.
Solutions to this:
You can get a new accessToken by setting up some sort of interval while the tab is hidden, like this:
let intervalId;
// Assumed refresh period; it must be shorter than the token lifetime.
const intervalDuration = 30 * 60 * 1000;
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    if (intervalId) {
      clearInterval(intervalId); // the handle comes from setInterval, not setTimeout
      intervalId = undefined;
    }
  } else {
    // Force an immediate refresh, then keep refreshing while hidden
    // (assumes a user is signed in).
    firebaseApp.auth().currentUser.getIdToken(true);
    intervalId = setInterval(() => {
      firebaseApp.auth().currentUser.getIdToken(true);
    }, intervalDuration);
  }
});
Or you can disconnect from the database manually with firebase.database().goOffline() whenever tab visibility changes from "visible".
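A minimal sketch of that alternative (assuming the default firebase app; an explicit goOffline() is a clean disconnect, so the server applies any queued onDisconnect right away):
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    firebase.database().goOnline();  // reconnecting re-fires .info/connected
  } else {
    firebase.database().goOffline(); // clean disconnect: server runs onDisconnect now
  }
});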

Expanding on the comment:
This is why we provide the /.info/connected endpoint.
We recommend re-establishing the disconnect handler every time the connection comes back online
I followed the above and got it fixed:
const userRef = Firebase.database().ref('/users/<user-id>')
Firebase.database()
  .ref('.info/connected')
  .on('value', async snap => {
    if (snap.val() === true) {
      // Re-arm the disconnect handler on every (re)connect.
      await userRef.onDisconnect().remove();
      await userRef.update({ status: 'online' })
    } else {
      await userRef.remove() // was `this.userRef`, which is undefined here
    }
  });
For reference, my previous code was:
const userRef = Firebase.database().ref('/users/<user-id>')
userRef.onDisconnect().remove();
await userRef.update({ status : 'online' })
The issue with this is that the onDisconnect handler is registered only once, so it may not fire when you go offline and come back online multiple times.


Am I doing Firestore Transactions correctly?

I've followed the Firestore documentation in relation to transactions, and I think I have it all sorted correctly, but in testing I am noticing that my documents sometimes don't get updated properly. It is possible for multiple versions of the document to be submitted to the function in a very short interval, but I am only interested in keeping the most recent version.
My general logic is this:
New/Updated document is sent to cloud function
Check if document already exists in Firestore, and if not, add it.
If it does exist, check that it is "newer" than the instance in Firestore; if it is, update it.
Otherwise, don't do anything.
Here is the code from my function that attempts to accomplish this. I would love some feedback on whether this is the correct/best way to do it:
const ocsFlight = req.body;
const procFlight = processOcsFlightEvent(ocsFlight);
try {
  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);
  const originalFlight = await ocsFlightRef.get();
  if (!originalFlight.exists) {
    const response = await ocsFlightRef.set(procFlight);
    console.log("Record Added: ", JSON.stringify(procFlight));
    res.status(201).json(response); // 201 - Created
    return;
  }
  await db.runTransaction(async (t) => {
    const doc = await t.get(ocsFlightRef);
    const flightDoc = doc.data();
    if (flightDoc.recordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      console.log("Record Updated: ", JSON.stringify(procFlight));
      res.status(200).json("Record Updated");
      return;
    }
    console.log("Record isn't newer, nothing changed.");
    console.log("Record:", JSON.stringify("Same Flight:", JSON.stringify(procFlight)));
    res.status(200).json("Record isn't newer, nothing done.");
    return;
  });
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500).json(error.message);
}
The Bugs
First, you are trusting the value of req.body to be of the correct shape. If you don't already have type assertions that mirror your security rules for /collection/someFlightId in processOcsFlightEvent, you should add them. This is important because any database operations from the Admin SDKs will bypass your security rules.
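For illustration, such an assertion could look something like this (a sketch only; the helper name assertOcsFlightShape is hypothetical, and any typing beyond fltId and recordModified is an assumption):
function assertOcsFlightShape(body) {
  if (typeof body !== "object" || body === null)
    throw new Error("body must be a JSON object");
  if (typeof body.fltId !== "string" || body.fltId.length === 0)
    throw new Error("fltId must be a non-empty string");
  if (typeof body.recordModified !== "number") // assumed numeric; adapt if it's a string timestamp
    throw new Error("recordModified must be a number");
  return body;
}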
The next bug is sending a response to your function inside the transaction. Once you send a response back to the client, your function is marked inactive: resources are severely throttled and any network requests may not complete or may crash. As a transaction may be retried a handful of times if a database collision is detected, you should make sure to only respond to the client once the transaction has properly completed.
You use set to write the new flight to Firestore; this can lead to trouble when working with transactions because a set operation will cancel all pending transactions at that location. If two function instances are fighting over the same flight ID, this can lead to the wrong data being written to the database.
In your current code, you return the result of the ocsFlightRef.set() operation to the client as the body of the HTTP 201 Created response. As the result of DocumentReference#set() is a WriteResult object, you'd need to serialize it properly to return it to the client, and even then it isn't very useful since you don't use the equivalent for the other response types. An HTTP 201 Created response normally includes the location of the new resource in the Location header with no body, but here we'll pass the path in the body instead. If you start using multiple database instances, including the relevant database may also be useful.
Fixing
The correct way to achieve the desired result would be to do the entire read->check->write process inside of a transaction and only once the transaction has completed, then respond to the client.
So that we can send the appropriate response to the client, we use the return value of the transaction to pass data out of it. We'll pass out the type of change we made ("created" | "updated" | "aborted") and the recordModified value of what is now stored in the database, and return these along with the resource's path and an appropriate message.
In the case of an error, we'll return a message to show the user as message and the error's Firebase error code (if available) or general message as the error property.
// if not using express to wrangle requests, assert the correct method
if (req.method !== "POST") {
  console.log(`Denied ${req.method} request`);
  res.status(405) // 405 - Method Not Allowed
    .set("Allow", "POST")
    .end();
  return;
}

const ocsFlight = req.body;

try {
  // process AND type check `ocsFlight`
  const procFlight = processOcsFlightEvent(ocsFlight);
  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);

  const { changeType, recordModified } = await db.runTransaction(async (t) => {
    const flightDoc = await t.get(ocsFlightRef);

    if (!flightDoc.exists) {
      t.set(ocsFlightRef, procFlight);
      return {
        changeType: "created",
        recordModified: procFlight.recordModified
      };
    }

    // only parse the field we need rather than everything
    const storedRecordModified = flightDoc.get('recordModified');

    if (storedRecordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      return {
        changeType: "updated",
        recordModified: procFlight.recordModified
      };
    }

    return {
      changeType: "aborted",
      recordModified: storedRecordModified
    };
  });

  switch (changeType) {
    case "updated":
      console.log("Record updated: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Updated",
        recordModified,
        changeType
      });
      return;
    case "created":
      console.log("Record added: ", JSON.stringify(procFlight));
      res.status(201).json({ // 201 - Created
        path: ocsFlightRef.path,
        message: "Created",
        recordModified,
        changeType
      });
      return;
    case "aborted":
      console.log("Outdated record discarded: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Record isn't newer, nothing done.",
        recordModified,
        changeType
      });
      return;
    default:
      throw new Error("Unexpected value for 'changeType': " + changeType);
  }
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500) // 500 - Internal Server Error
    .json({
      message: "Something went wrong",
      // if available, prefer a Firebase error code
      error: error.code || error.message
    });
}
References
Cloud Firestore Transactions
Cloud Firestore Node SDK Reference
HTTP Event Cloud Functions

Is it possible to implement an auto logout feature for inactivity?

I am trying to implement an auto-logout feature after x minutes of inactivity in Flutter while using Firebase, with email as the authentication method.
I have searched online, but whatever I've found is not for Flutter.
Any help will be greatly appreciated thank you!
You can use an interceptor on your API instance like this, customizing the onRequest method.
The idea is: save a timestamp whenever an API call occurs; then, whenever another call occurs, check the duration between now and the last saved time.
If the duration is longer than, let's say, 5 minutes, call your logout method; otherwise continue the request.
Here is a snippet to make it clear:
Future<Dio> getApiClient() async {
  _dio.interceptors.clear();
  _dio.interceptors
      .add(InterceptorsWrapper(onRequest: (RequestOptions options) async {
    // Runs before each request is sent; `async` so we can await the prefs.
    var pref = await SharedPreferences.getInstance();
    var timeNow = DateTime.now().millisecondsSinceEpoch;
    // Default to `timeNow` on the first run, when nothing is stored yet.
    var lastHitApi = pref.getInt(LAST_HIT_API) ?? timeNow;
    var delay = timeNow - lastHitApi;
    pref.setInt(LAST_HIT_API, timeNow);
    if (delay > DELAY_MAX) {
      // do logout here
    }
    return options;
  }, onResponse: (Response response) {
    // Do something with response data
    return response; // continue
  }, onError: (DioError error) async {
    // Do something with response error
  }));
  _dio.options.baseUrl = baseUrl;
  return _dio;
}
Edit: I guess this one is preferable.
Set the timeout duration and call the logout function:
Timer(Duration(seconds: 5), () => logOut());

restarting a queue of API requests if a token refresh happened

I'm having a hard time wrapping my brain around this pattern I am trying to implement so I'm hoping the stack overflow community might be able to help me work through a solution to this.
Currently I use redux-thunk along with superagent to handle calls to my API and sync it all up with redux.
An example of this might look like:
export const getUser = (id) => {
  return (dispatch) => {
    const deferred = new Promise((resolve, reject) => {
      const call = () => {
        API.get(`/users/${id}`)
          .then((response) => response.body)
          .then((response) => {
            if (response.message === 'User found') {
              serializeUser(response.data).then((response) => {
                resolve(response);
              });
            } else {
              reject('not found');
            }
          }).catch((err) => {
            handleCatch(err, dispatch).then(call).catch(reject)
          });
      }
      call()
    });
    return deferred;
  };
};
In the case where the server comes back with a 200 and some data I continue on with putting the data into the store and rendering to the page or whatever my application does.
In the case where I receive an error, I have attempted to write a function that intercepts it and determines whether to show an error on the page or, in the case of a 401 from our API, to attempt a token refresh and then retry the call...
import { refreshToken } from '../actions/authentication';

export default (err, dispatch) => {
  const deferred = new Promise((resolve, reject) => {
    if (err.status === 401) {
      dispatch(refreshToken()).then(resolve).catch(reject)
    } else {
      reject(err);
    }
  })
  return deferred;
};
This works; however, I have to add it to each call, and it doesn't account for concurrent calls, which should not be attempted while a refresh is in progress.
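(One common way to handle that concurrency gap, sketched here for illustration: memoize a single in-flight refresh so every 401 chains onto the same promise instead of starting its own refresh.)
import { refreshToken } from '../actions/authentication';

let refreshInFlight = null; // shared by all callers

export default (err, dispatch) => {
  if (err.status !== 401) {
    return Promise.reject(err);
  }
  // Start a refresh only if one isn't already running.
  if (!refreshInFlight) {
    refreshInFlight = dispatch(refreshToken())
      .then((result) => { refreshInFlight = null; return result; })
      .catch((refreshErr) => { refreshInFlight = null; throw refreshErr; });
  }
  return refreshInFlight;
};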
In my research on this topic I've seen suggestions that redux-saga might work here, but I haven't been able to wrap my brain around how to make it fit.
Basically, I need something like a queue that all my API requests go into, maybe debounced so concurrent requests are pushed onto the end and dispatched together once a timeout expires. When the first call gets a 401, the queue should pause until the token refresh resolves: on success it continues; on failure it cancels all queued requests and sends the user back to the login page.
The thing I would worry about here is that if the first call in the queue takes a long time, the other calls have to wait on it, which increases the perceived loading time for the user.
Is there a better way to handle keeping tokens refreshed?
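(One proactive alternative, sketched here for illustration: schedule the refresh shortly before the token expires so requests rarely see a 401 at all. This assumes the refresh response reports its lifetime, here as a hypothetical expiresInSeconds field, and reuses the refreshToken thunk above.)
let refreshTimer;

export const scheduleTokenRefresh = (dispatch, expiresInSeconds) => {
  clearTimeout(refreshTimer);
  // Refresh one minute before expiry so in-flight requests keep a valid token.
  const delayMs = Math.max((expiresInSeconds - 60) * 1000, 0);
  refreshTimer = setTimeout(() => {
    dispatch(refreshToken())
      .then((payload) => scheduleTokenRefresh(dispatch, payload.expiresInSeconds))
      .catch(() => { /* let the 401 handler above take over */ });
  }, delayMs);
};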

Meteor Server Latency

I feel like I'm doing something wrong because my results seem to go against the very nature of Meteor's pitch of simulating client/sever interactions for speed. When I do any sort of database update using Meteor.call() the app has to wait for the round trip to the server, often resulting in a slow response or the user hitting the button twice. I just want to make sure I'm doing this correctly. Here's what I'm doing:
Client:
Template.shot.events({
  'change #shot-status-select': function (event, template) {
    var new_status = $(event.target).val();
    var shot_id = Session.get('current_shot_id');
    Meteor.call('setShotStatus', shot_id, new_status, function (error, result) {
      if (result) {
        feedbackSuccess('Status changed to <b>'+new_status+'</b>');
      } else {
        feedbackError('Status change failed');
        console.log(error);
      }
    });
  },
});
And Server:
...
'setShotStatus': function(shot_id, status) {
  var result = Shots.update({'_id': shot_id}, {$set: {'status': status}});
  if (result) {
    return true;
  } else {
    return false;
  }
},
There are a couple of things going on here that are preventing your method from being latency compensated (it's making the complete round trip to the server).
First, if you execute a Meteor.call on the client with a callback, it will always wait for the result from the server. Unfortunately, you can't just write it synchronously because a call will always return undefined on the client, and you need the returned result.
If you really want the result of the stub, you'd need to rewrite it like this:
var args = [shot_id, new_status];
var result = Meteor.apply('setShotStatus', args, {returnStubValue: true});
if (result)
  feedbackSuccess('Status changed to <b>'+new_status+'</b>');
Note you should wrap the call in a try/catch if errors are likely. Also note that the client and server return values will not always match in the general case, so use this technique with that in mind.
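For example, a sketch of the wrapping suggested above (same stub call as before):
var result;
try {
  result = Meteor.apply('setShotStatus', args, {returnStubValue: true});
} catch (error) {
  // Surface a failure the same way the callback version does.
  feedbackError('Status change failed');
  console.log(error);
}
if (result)
  feedbackSuccess('Status changed to <b>'+new_status+'</b>');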
Next, your method definition needs to be in a shared location for both the client and the server code (putting it somewhere under lib or in a package are good choices). If the client doesn't have the method code, it can't simulate it.
Recommended reading:
How to return value on Meteor.call() in client?
Introduction to Latency Compensation
The "Latency Compensation" articles at the Discover Meteor Encyclopedia
Thank you, David. Your answer got me on the right track, but I think there are a couple of nuggets in there that seemed too much to discuss in a comment. The main thing I found was this:
The challenge of getting back to the "Meteor zero-latency" promise was as simple as moving all of my "server" methods to the lib directory.
Literally, no code changes. After making the methods accessible to both client and server, Meteor did all of the heavy lifting of executing first on the client, then checking the result with the server result.
David's answer said that using a callback will always wait for a result from the server. I found that to be partly true, in that it waits asynchronously for the result. Depending on where your methods are accessible, though, the result you experience may come from the client stub, not a round trip to the server. Not using the callback always returns undefined on the client, so result would not work in the given example.
Lastly, I moved truly private logic to the server only directory for security reasons.
Here's the code result:
client/shot.js
Template.shot.events({
  'change #shot-status-select': function (event, template) {
    var new_status = $(event.target).val();
    var shot_id = Session.get('current_shot_id');
    Meteor.call('setShotStatus', shot_id, new_status, function (error, result) {
      if (!(result)) {
        feedbackError('Status change failed');
        console.log(error);
      }
    });
  },
});
lib/methods.js
Meteor.methods({
  'setShotStatus': function(shot_id, status) {
    var result = Shots.update({'_id': shot_id}, {$set: {'status': status}});
    if (result) {
      return true;
    } else {
      return false;
    }
  },
});

How to get current user in custom route?

As per this answer I created my own route so that I could handle file uploads. Here's what I've got:
var router = Connect.middleware.router(function(route) {
  route.post('/upload', function(req, res) {
    var filename = req.headers['x-filename'];
    var path = Path.join('.uploads', filename);
    var writeStream = FileSystem.createWriteStream(path);
    writeStream.on('error', function(e) {
      console.error(e);
      res.writeHead(500);
      res.end();
    }).on('close', function() {
      Fiber(function() {
        console.log(Meteor.user());
      }).run();
      res.writeHead(200);
      res.end();
    });
    req.pipe(writeStream);
  });
});
app.use(router);
This works great for uploading files, but when I try to acess Meteor.user() it gives me:
app/server/main.js:24
}).run();
^
Error: Meteor.userId can only be invoked in method calls. Use this.userId in publish functions.
at Object.Meteor.userId (app/packages/accounts-base/accounts_server.js:95:13)
at Object.Meteor.user (app/packages/accounts-base/accounts_server.js:100:25)
at app/server/main.js:23:36
Exited with code: 1
I can't see anything in the req object that might help me out.
Is there any way to get access to the user object?
For the time being, I'm getting the user ID client side and passing it along through the headers which I then use to look up server side:
route.post('/upload', function(req, res) {
  Fiber(function() {
    var userId = req.headers['x-userid'];
    var user = Meteor.users.findOne({_id: userId});
    if (user) {
      ...
    } else {
      res.writeHead(403, 'User not logged in');
      res.end();
    }
  }).run();
});
I don't like this because it's not at all secure. It would be easy to upload something under a different user's account.
Edit: Nevermind. The very act of calling Meteor.users.findOne({_id:userId}); somehow breaks the upload stream. Every file gets corrupted as soon as I put that in; uploads get to about 700 KB and then just stop, closing the connection without error.
If this is still a valid question:
The problem is that there is no way to get Meteor.user() in this part of the code.
But you can always reach Meteor.userId, and it's not null if a user is logged in, so you can allow uploads only for logged-in users (if req.headers['x-userid'] == Meteor.userId).
The very act of calling Meteor.users.findOne({_id:userId}); somehow
breaks the upload stream.
That's because it's a reactive part: every time the Meteor.users collection is updated, this part of the code is executed again.
So if you use only Meteor.userId (which changes only when a user logs in or out), it should work fine.
I've run into this quite a few times, and it's frustrating. I don't think you can make Meteor.userId() calls from inside a fiber. I usually do a var userId = Meteor.userId(); before I call the fiber, and then reference that variable instead.
