Is it possible to implement an auto logout feature for inactivity? - firebase

I am trying to implement an auto logout feature after x minutes of inactivity in Flutter while using Firebase, with email as the authentication method.
I have searched online, but whatever I've found is not for Flutter.
Any help will be greatly appreciated, thank you!

You can use an interceptor on your Dio API client like this, customizing the onRequest method.
The idea is: save a timestamp whenever an API call occurs. Then, whenever another API call occurs, check the duration between now and the last saved timestamp.
If the duration is longer than, let's say, 5 minutes, call your logout method; otherwise continue the request.
Here is a snippet to make it clear:
Future<Dio> getApiClient() async {
  _dio.interceptors.clear();
  _dio.interceptors
      .add(InterceptorsWrapper(onRequest: (RequestOptions options) async {
    // Runs before the request is sent.
    // Note: the callback must be async so we can await SharedPreferences.
    var pref = await SharedPreferences.getInstance();
    var timeNow = DateTime.now().millisecondsSinceEpoch;
    var lastHitApi = pref.getInt(LAST_HIT_API);
    pref.setInt(LAST_HIT_API, timeNow);
    // lastHitApi is null on the very first request.
    if (lastHitApi != null && timeNow - lastHitApi > DELAY_MAX) {
      // User has been inactive too long: do logout here.
    }
    return options;
  }, onResponse: (Response response) {
    // Do something with the response data.
    return response; // continue
  }, onError: (DioError error) async {
    // Do something with the response error.
  }));
  _dio.options.baseUrl = baseUrl;
  return _dio;
}
Edit: I guess this approach is preferable:

Set the timeout duration and call the logout function:
Timer(Duration(seconds: 5), () => logOut());

Related

How to make a node.js Asynchronous API call and use a returned value?

It seems like this has been asked a thousand times, but after reading all of the responses I still cannot figure out how to make this work.
Here is my use case.
1. Make a call to an Auth Endpoint and return an access token
2. Do some updates to the options of a 2nd API call
3. Make the next API call with the updated options
I know how to make this work with traditional promises or callbacks, but in my case steps 1 & 2 could be optional. So what I want to do is: if steps 1 & 2 are required, call an async function to get the token and update the options; if they are not, just make the 2nd API call.
I am trying to use axios, but no matter what I do the response is either undefined or a pending Promise.
Here is my code. Can someone please explain the best way to do this, or is it just easier to use traditional promises/callbacks?
var options = {
  'method': httpVerb,
  'url': url,
  'headers': {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(bodyArgs)
};
if (authBody) {
  auth = getAuth(authBody)
  options.headers["Authorization"] = "Bearer " + auth.access_token;
  console.log(auth)
}
const getAuth = async (options) => {
  try {
    const resp = await axios.post(options.authURL, options)
    if (resp.status === 200) {
      return resp.data;
    } else {
      return ""
    }
  } catch (err) {
    console.error(err);
  }
}
auth = getAuth(authBody) // wrong
getAuth is an async function, and async functions always return Promises even if the return statements within them return simple values.
If you want to wait for the return value of getAuth, you're going to have to do so in a then function on the return value, or await the result from within a different async function. Consequently, if you have an optional asynchronous step, it is safer to make the whole process appear asynchronous to abstract away the cases where you need to wait to refresh the token.
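To make that concrete, here is a minimal sketch of the optional-auth pattern: the whole request-building path is made async, so the caller awaits it whether or not the auth step runs. `buildOptions` is a hypothetical helper name, and `getAuth` is stubbed out here (the real version would POST to the auth endpoint with axios, as in the question):

```javascript
// Stub: stands in for the real axios.post(...) call to the auth endpoint.
const getAuth = async (authBody) => {
  return { access_token: "fake-token-for-" + authBody.clientId };
};

// The whole process is async, so the optional token step can be awaited.
const buildOptions = async (httpVerb, url, bodyArgs, authBody) => {
  const options = {
    method: httpVerb,
    url: url,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(bodyArgs),
  };
  if (authBody) {
    // Awaiting here is what the original code was missing: without `await`,
    // `auth` is a pending Promise, not the token payload.
    const auth = await getAuth(authBody);
    options.headers["Authorization"] = "Bearer " + auth.access_token;
  }
  return options;
};

// Usage: callers always await (or .then), with or without the auth step.
buildOptions("POST", "https://api.example.com/items", { a: 1 }, { clientId: "abc" })
  .then((opts) => console.log(opts.headers["Authorization"])); // Bearer fake-token-for-abc
```
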

Minting firebaseId tokens is unusually slow

For authenticated API calls I'm generating a Firebase ID token each time, as in the code below. But this sometimes takes up to 2-3 seconds to mint the token, which adds up to a slow UX. Are there any workarounds to this, perhaps storing the token for a longer period or even indefinitely? Am I missing something obvious?
Thanks!
async fetchFromAPIAuthenticatedTimeout(url, params) {
  this.fetchStampA = (new Date()).getTime()
  try {
    const token = await firebase.auth().currentUser.getIdToken(true)
    this.fetchStampB = (new Date()).getTime()
    const response = await fetch(url, params)
    if (response.status != 200) {
      throw Error("api-error");
    }
    var json = null;
    try {
      json = await response.json();
    } catch (error) {
      throw Error("api-error")
    }
    this.fetchStampC = (new Date()).getTime()
    console.log(">> token", this.fetchStampB - this.fetchStampA)
    console.log(">> fetch", this.fetchStampC - this.fetchStampB)
    return json.result
  } catch (error) {
    throw Error(error.message)
  }
}
// Prints:
// >> token 2198
// >> fetch 319
Per the reference docs, getIdToken() will automatically refresh the token if it has expired, so there's no need to force a refresh on each call. Removing the force flag prevents the round trip and speeds up requests considerably, and it also keeps you out of quota trouble, which you will run into as your app scales.
UPDATE: This edge case is fixed. Tokens now refresh a short time before they expire, so there's never a need to force a refresh for this use case. Note that there may still be an edge case where, if you send the request seconds before expiration and there is latency, the server may not call verifyIdToken() before the expiration occurs. So you may want to implement retry logic, or check the expiration time (see IdTokenResult#expirationTime) and force a refresh only if it's extremely close to that timestamp, but certainly not on each request.
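As a sketch of that advice, you could cache the token client-side and only force a refresh when it is close to expiring. `fetchIdToken` is a hypothetical stand-in that resolves with the token and its expiry time as a number; with the real SDK you would build it around getIdTokenResult(), which exposes both the token and expirationTime:

```javascript
// Returns a getToken() function that reuses the cached token until it is
// within `skewMs` of expiring, and only then forces a refresh.
// `now` is injectable so the behavior can be tested with a fake clock.
function makeTokenCache(fetchIdToken, skewMs = 60 * 1000, now = () => Date.now()) {
  let cached = null; // { token, expirationTime }
  return async function getToken() {
    if (cached && cached.expirationTime - now() > skewMs) {
      return cached.token; // still comfortably valid: no network round trip
    }
    // Close to (or past) expiration: force a refresh once.
    cached = await fetchIdToken(true);
    return cached.token;
  };
}
```

This keeps the common case (a valid cached token) free of the 2-3 second minting delay, while still refreshing before the server would reject an expired token.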

Is there a way to prevent having to await an async method returning a stream?

We currently have a method that returns a Future<Stream<Position>> just because internally we have to await the result of a method returning a Future before we can call another method that returns the Stream<Position> which we are actually interested in. Here is the code:
Future<Stream<Position>> getPositionStream(
    [LocationOptions locationOptions = const LocationOptions()]) async {
  PermissionStatus permission = await _getLocationPermission();
  if (permission == PermissionStatus.granted) {
    if (_onPositionChanged == null) {
      _onPositionChanged = _eventChannel
          .receiveBroadcastStream(
              Codec.encodeLocationOptions(locationOptions))
          .map<Position>(
              (element) => Position._fromMap(element.cast<String, double>()));
    }
    return _onPositionChanged;
  } else {
    _handleInvalidPermissions(permission);
  }
  return null;
}
So what happens here is:
We await the _getLocationPermission() method so that we can test if the user grants us permission to access to the location services on their device (Android or iOS);
If the user grants us permission we return a Stream<Position> which will update every time the device registers a location change.
I have the feeling we can also handle this without doing an await and returning a Future. Something along the lines of:
Manually create and return an instance of the Stream<Position> class;
Handle the logic of checking the permissions and calling the _eventChannel.receiveBroadcastStream in the then() method of the Future<PermissionStatus> returned from the _getLocationPermission() method (so we don't have to await it);
Copy the events sent on the stream from _eventChannel.receiveBroadcastStream onto the earlier created (and returned) stream.
Somehow this seems possible, but it also includes some overhead in managing the stream: making sure it closes and is cleaned up correctly during the lifecycle of the plugin or when the user unsubscribes, passing the events through to the _eventChannel, etc.
So I guess the question would be, what would be the best way to approach this situation?
You can write the code as an async* function, which will return a Stream and still allows await in the body:
Stream<Position> getPositionStream(
    [LocationOptions locationOptions = const LocationOptions()]) async* {
  PermissionStatus permission = await _getLocationPermission();
  if (permission == PermissionStatus.granted) {
    if (_onPositionChanged == null) {
      _onPositionChanged = _eventChannel
          .receiveBroadcastStream(
              Codec.encodeLocationOptions(locationOptions))
          .map<Position>(
              (element) => Position._fromMap(element.cast<String, double>()));
    }
    yield* _onPositionChanged;
  } else {
    _handleInvalidPermissions(permission);
  }
}
Alternatively, if you are using a non-async function, you can also use StreamCompleter from package:async.
It allows you to return a Stream now, even if you only get the real stream later. When that happens, you "complete" the StreamCompleter with the real stream, and the original stream will behave as if it was the real stream.

Firebase onDisconnect not 100% reliable now?

I've been using Firebase extensively and still face only one real issue: onDisconnect isn't 100% reliable in my experience.
If you close a computer without closing the browser window first, or kill the browser, the "garbage collector" sometimes gets your onDisconnect executed and sometimes doesn't.
My question is the following: I just don't use /.connected for now, I basically use a simple
userRef.set('status', 1);
userRef.onDisconnect().update({ 'status' : 0 });
Is there anything wrong with this approach? Do we agree that the update parameters are passed to the server at the time the line is executed, and not at window unload?
NB: I happen to try to keep a multi-window status, using the following approach to keep the status at 1 if another window is closed:
userRef.child('status').on('value', function(snap) {
if (snap.val() != 1) {
userRef.set('status', 1);
}
});
I don't see how this could be related, but...
MY SOLUTION: In fact, I had just missed the part where you learn that onDisconnect is only triggered once. To get a persistent onDisconnect, you need to implement basic persistence.
Helpers.onConnected = function(callback) {
  var connectedRef = lm.newFirebase('.info/connected');
  var fn = connectedRef.on('value', function(snap) {
    if (snap.val() === true) {
      if (callback) callback();
    }
  });
  var returned = {};
  returned.cancel = function() {
    connectedRef.off('value', fn);
  };
  return returned;
};
Simple use case:
this._onConnected = lm.helpers.onConnected(function() {
  this.firebase.onDisconnect().update({ 'tu': 0 });
}.bind(this));
And then to cancel:
if (this._onConnected) this._onConnected.cancel();
this.firebase.onDisconnect().cancel();
You should always call the onDisconnect() operation BEFORE you call the set() operation. That way if the connection is lost between the two you don't end up with zombie data.
Also note that in the case where the network connection is not cleanly killed, you may have to wait for a TCP timeout before we're able to detect the user as gone and trigger disconnect cleanup. The cleanup will occur, but it may take a few minutes.
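A minimal sketch of that ordering rule, with `goOnline` as a hypothetical helper; `ref` stands in for a Realtime Database ref (anything with the compat-API shape `set` / `onDisconnect().update`):

```javascript
// Register the server-side cleanup BEFORE writing the "online" flag, so a
// connection drop between the two writes can't leave zombie data behind.
async function goOnline(ref) {
  // 1) Queue the disconnect cleanup first ...
  await ref.onDisconnect().update({ status: 0 });
  // 2) ... and only then mark the user as online.
  await ref.set({ status: 1 });
}
```

If the order were reversed and the connection died in between, the server would hold `status: 1` with no cleanup handler registered for it.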
After a lot of digging around, I found this issue which was actually happening in my case and I think for most of the others also who are coming to this page for a solution.
So the problem is that Firebase checks its access token twice: 1. when the onDisconnect is enqueued, and 2. when the onDisconnect is applied.
Firebase doesn't proactively refresh tokens when a tab is not visible. If the page stays inactive for longer than the access token's lifetime and is then closed without focusing that tab, Firebase will not apply the onDisconnect because of the expired access token.
Solutions to this:
You can get a new access token on an interval while the tab is hidden, like this:
let intervalId;
// intervalDuration is up to you; it should be shorter than the token lifetime.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    if (intervalId) {
      window.clearInterval(intervalId);
      intervalId = undefined;
    }
  } else {
    firebaseApp.auth().currentUser.getIdToken(true);
    intervalId = setInterval(() => {
      firebaseApp.auth().currentUser.getIdToken(true);
    }, intervalDuration);
  }
});
Or you can disconnect the database manually with firebase.database().goOffline() whenever the tab's visibility changes from "visible".
Expanding on the comment:
This is why we provide the /.info/connected endpoint.
We recommend re-establishing the disconnect handler every time the connection comes back online
I followed the above and got it fixed:
const userRef = Firebase.database().ref('/users/<user-id>')
Firebase.database()
  .ref('.info/connected')
  .on('value', async snap => {
    if (snap.val() === true) {
      await userRef.onDisconnect().remove();
      await userRef.update({ status: 'online' })
    } else {
      await userRef.remove()
    }
  });
For reference, my previous code was:
const userRef = Firebase.database().ref('/users/<user-id>')
userRef.onDisconnect().remove();
await userRef.update({ status : 'online' })
The issue with this is that the onDisconnect may not work when you go offline and come back online multiple times.

Is there a simple way to simulate lag with Meteor?

Is there a way to simulate lag with Meteor? Perhaps something that would delay all calls by, say, 300ms?
You can do it in publish using:
Meteor._sleepForMs(5000); // sleeps for 5 seconds
I guess I'm a bit late for the party, but here's a better solution:
There are basically two parts to this question. One is, how to delay Meteor WebSocket (SockJS) writes and one is how to delay HTTP traffic (connect). You'll need to add both of the following snippets to your server-side code in order to delay all traffic sent from the Meteor server.
WebSocket
The hard part was overwriting the WebSocket write to delay it with a setTimeout:
(function () {
  // Set the delay you want
  var timeout = 3000
  // stream_server abstracts SockJS; the DDP server talks to it.
  var streamServer = Meteor.server.stream_server
  // The connection event listener
  var standardConnect = streamServer.server._events.connection
  // Overwrite the default event handler
  streamServer.server._events.connection = function (socket) {
    // Overwrite the writes to the socket
    var write = socket.write
    socket.write = function () {
      var self = this
      var args = arguments
      // Add a delay
      setTimeout(function () {
        // Call the normal write method with the arguments passed to this call
        write.apply(self, args)
      }, timeout)
    }
    // Call the normal handler after overwriting the socket.write function
    standardConnect.apply(this, arguments)
  }
})()
HTTP
With connect it's pretty straightforward:
// Add a simple connect handler, which calls the next handler after a delay
WebApp.rawConnectHandlers.use(function (req, res, next) {
  return setTimeout(next, timeout)
})
Not sure about all calls, but you can use Futures to add a lag on the server; that way you can see latency compensation in action.
In a Meteor method, for example, you can:
Meteor.methods({
  post: function(post) {
    post.title = post.title + (this.isSimulation ? '(client)' : '(server)');
    // wait for 5 seconds on the server only
    if (! this.isSimulation) {
      var Future = Npm.require('fibers/future');
      var future = new Future();
      Meteor.setTimeout(function() {
        future.return();
      }, 5 * 1000); // 5 seconds
      future.wait();
    }
    var postId = Posts.insert(post);
    return postId;
  }
});
This will show the post being inserted with (client) appended to the end, and then 5 seconds later the client will get the update from the server and the post's title will end with (server).
If you want to simulate lag in subscriptions, you can do the following:
Meteor.publish('collection', function(params) {
  Meteor._sleepForMs(2000); // Sleep for 2 seconds
  return CollectionX.find(params.query, params.projection);
});
