For authenticated API calls I'm generating a Firebase ID token each time, as in the code below. But minting the token sometimes takes up to 2-3 seconds, which adds up to a slow UX. Are there any workarounds, perhaps storing the token for a longer period or even indefinitely? Am I missing something obvious?
Thanks!
async fetchFromAPIAuthenticatedTimeout(url, params) {
  this.fetchStampA = (new Date()).getTime()
  try {
    const token = await firebase.auth().currentUser.getIdToken(true)
    this.fetchStampB = (new Date()).getTime()
    // (assumes `params` already carries the token, e.g. in an Authorization header)
    const response = await fetch(url, params)
    if (response.status !== 200) {
      throw Error("api-error");
    }
    let json = null;
    try {
      json = await response.json();
    } catch (error) {
      throw Error("api-error")
    }
    this.fetchStampC = (new Date()).getTime()
    console.log(">> token", this.fetchStampB - this.fetchStampA)
    console.log(">> fetch", this.fetchStampC - this.fetchStampB)
    return json.result
  } catch (error) {
    throw Error(error.message)
  }
}
// Prints:
// >> token 2198
// >> fetch 319
Per the reference doc, getIdToken() will automatically refresh the token if it has expired, so there's no need to force a refresh on each call. Removing the force-refresh flag will prevent the round trip and speed up the requests considerably, and it will also keep you out of quota trouble, which you will run into as your app scales.
UPDATE: This edge case has been fixed. Tokens now refresh a short time before they expire, so there's never a need to force a refresh for this use case. Note that there may still be an edge case where, if you send the request seconds before expiration and there is latency, the server may not call verifyIdToken() before the expiration occurs. So you may want to implement retry logic, or check the expiration time (see IdTokenResult#expirationTime) and force a refresh only if it's extremely close to expiry. But certainly not on each request.
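As a sketch of that last suggestion (a hypothetical helper against the v8 web SDK; the 30-second safety margin is an arbitrary choice, not a documented value):

async function getFreshIdToken(thresholdMs = 30 * 1000) {
  const user = firebase.auth().currentUser;
  // No `true` argument: getIdTokenResult() returns the cached token,
  // refreshing only if it has already expired.
  const result = await user.getIdTokenResult();
  const msToExpiry = new Date(result.expirationTime).getTime() - Date.now();
  if (msToExpiry > thresholdMs) {
    return result.token;
  }
  // Extremely close to expiry: force a refresh so a slow request can't
  // arrive at the server with a just-expired token.
  return user.getIdToken(true);
}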
Related
I've followed the Firestore documentation on transactions, and I think I have it all sorted correctly, but in testing I'm noticing that my documents sometimes don't get updated properly. It's possible that multiple versions of the document could be submitted to the function in a very short interval, but I'm only ever interested in keeping the most recent version.
My general logic is this:
New/Updated document is sent to cloud function
Check if document already exists in Firestore, and if not, add it.
If it does exist, check whether it is "newer" than the instance in Firestore; if it is, update it.
Otherwise, don't do anything.
Here is the code from my function that attempts to accomplish this. I would love some feedback on whether this is the correct/best way to do it:
const ocsFlight = req.body;
const procFlight = processOcsFlightEvent(ocsFlight);
try {
  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);
  const originalFlight = await ocsFlightRef.get();
  if (!originalFlight.exists) {
    const response = await ocsFlightRef.set(procFlight);
    console.log("Record Added: ", JSON.stringify(procFlight));
    res.status(201).json(response); // 201 - Created
    return;
  }
  await db.runTransaction(async (t) => {
    const doc = await t.get(ocsFlightRef);
    const flightDoc = doc.data();
    if (flightDoc.recordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      console.log("Record Updated: ", JSON.stringify(procFlight));
      res.status(200).json("Record Updated");
      return;
    }
    console.log("Record isn't newer, nothing changed.");
    console.log("Same Flight:", JSON.stringify(procFlight));
    res.status(200).json("Record isn't newer, nothing done.");
    return;
  });
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500).json(error.message);
}
The Bugs
First, you are trusting the value of req.body to be of the correct shape. If you don't already have type assertions in processOcsFlightEvent that mirror your security rules for /collection/someFlightId, you should add them (a sketch follows). This is important because any database operations from the Admin SDKs bypass your security rules.
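As an illustration only (the field list and types here are guesses based on the fields used in the question, not the real schema), such an assertion might look like:

// Hypothetical shape check for the incoming flight event; adjust the checks
// to mirror whatever your security rules enforce for /collection/{fltId}.
function processOcsFlightEvent(body) {
  if (typeof body !== "object" || body === null) {
    throw new Error("invalid-body");
  }
  const { fltId, recordModified } = body;
  if (typeof fltId !== "string" || fltId.length === 0) {
    throw new Error("invalid-fltId");
  }
  if (typeof recordModified !== "number") {
    throw new Error("invalid-recordModified");
  }
  // ...assert any remaining fields, then return the cleaned object
  return { ...body, fltId, recordModified };
}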
The next bug is sending a response from your function inside the transaction. Once you send a response back to the client, your function is marked inactive: resources are severely throttled and any outstanding network requests may not complete or may crash. Because a transaction may be retried a handful of times if a database collision is detected, you should only respond to the client once the transaction has properly completed.
You use set() to write the new flight to Firestore. This can lead to trouble when working with transactions, as a set() operation will cancel any pending transactions at that location. If two function instances are fighting over the same flight ID, the wrong data can end up written to the database.
In your current code, you return the result of the ocsFlightRef.set() operation to the client as the body of the HTTP 201 Created response. Because DocumentReference#set() resolves to a WriteResult object, you'd need to serialize it properly to return it, and even then I don't think it would be useful, since you don't use it for the other response types. An HTTP 201 Created response normally conveys where the resource was written via the Location header with no body, but here we'll pass the path in the body instead. If you start using multiple database instances, including the relevant database may also be useful.
Fixing
The correct way to achieve the desired result is to do the entire read -> check -> write process inside a transaction, and respond to the client only once the transaction has completed.
So that we can send the appropriate response to the client, we use the transaction's return value to pass data out of it. We'll pass out the type of change we made ("created" | "updated" | "aborted") and the recordModified value of what is now stored in the database, and return these along with the resource's path and an appropriate message.
In the case of an error, we return a message to show the user as message, and the error's Firebase error code (if available) or its general message as the error property.
// if not using express to wrangle requests, assert the correct method
if (req.method !== "POST") {
  console.log(`Denied ${req.method} request`);
  res.status(405) // 405 - Method Not Allowed
    .set("Allow", "POST")
    .end();
  return;
}

const ocsFlight = req.body;

try {
  // process AND type check `ocsFlight`
  const procFlight = processOcsFlightEvent(ocsFlight);

  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);

  const { changeType, recordModified } = await db.runTransaction(async (t) => {
    const flightDoc = await t.get(ocsFlightRef);

    if (!flightDoc.exists) {
      t.set(ocsFlightRef, procFlight);
      return {
        changeType: "created",
        recordModified: procFlight.recordModified
      };
    }

    // only parse the field we need rather than everything
    const storedRecordModified = flightDoc.get('recordModified');

    if (storedRecordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      return {
        changeType: "updated",
        recordModified: procFlight.recordModified
      };
    }

    return {
      changeType: "aborted",
      recordModified: storedRecordModified
    };
  });

  switch (changeType) {
    case "updated":
      console.log("Record updated: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Updated",
        recordModified,
        changeType
      });
      return;
    case "created":
      console.log("Record added: ", JSON.stringify(procFlight));
      res.status(201).json({ // 201 - Created
        path: ocsFlightRef.path,
        message: "Created",
        recordModified,
        changeType
      });
      return;
    case "aborted":
      console.log("Outdated record discarded: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Record isn't newer, nothing done.",
        recordModified,
        changeType
      });
      return;
    default:
      throw new Error("Unexpected value for 'changeType': " + changeType);
  }
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500) // 500 - Internal Server Error
    .json({
      message: "Something went wrong",
      // if available, prefer a Firebase error code
      error: error.code || error.message
    });
}
References
Cloud Firestore Transactions
Cloud Firestore Node SDK Reference
HTTP Event Cloud Functions
I am trying to implement an auto-logout feature after x minutes of inactivity in Flutter, using Firebase with email authentication.
I have searched online, but whatever I've found is not for Flutter.
Any help will be greatly appreciated, thank you!
You can use an interceptor on every API instance like this, customizing the onRequest method.
The idea is: save a timestamp whenever an API call occurs. Then, whenever another call occurs, check the duration between now and the last saved time.
If the duration is longer than, say, 5 minutes, call your logout method; otherwise, continue the request.
Here is a snippet to make it clearer:
Future<Dio> getApiClient() async {
  _dio.interceptors.clear();
  _dio.interceptors
      .add(InterceptorsWrapper(onRequest: (RequestOptions options) async {
    // Do something before the request is sent
    var pref = await SharedPreferences.getInstance();
    var timeNow = DateTime.now().millisecondsSinceEpoch;
    var lastHitApi = pref.getInt(LAST_HIT_API) ?? timeNow; // null on first run
    var delay = timeNow - lastHitApi;
    pref.setInt(LAST_HIT_API, timeNow);
    if (delay > DELAY_MAX) {
      // do logout here
    }
    return options;
  }, onResponse: (Response response) {
    // Do something with response data
    return response; // continue
  }, onError: (DioError error) async {
    // Do something with the response error
    return error;
  }));
  _dio.options.baseUrl = baseUrl;
  return _dio;
}
Edit: I guess this one is preferable.
Set the timeout duration and call the logout function:
Timer(Duration(seconds: 5), () => logOut());
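A single Timer like this fires once and never resets on activity, though. A minimal sketch of a resettable version (the onTimeout callback, e.g. your logOut(), and the 5-minute duration are assumptions, not part of the original answer) wraps the app and restarts the countdown on any tap:

import 'dart:async';
import 'package:flutter/widgets.dart';

class InactivityWatcher extends StatefulWidget {
  final Widget child;
  final VoidCallback onTimeout; // e.g. your logOut()

  const InactivityWatcher({Key key, this.child, this.onTimeout})
      : super(key: key);

  @override
  _InactivityWatcherState createState() => _InactivityWatcherState();
}

class _InactivityWatcherState extends State<InactivityWatcher> {
  Timer _timer;

  void _restartTimer() {
    _timer?.cancel();
    _timer = Timer(const Duration(minutes: 5), widget.onTimeout);
  }

  @override
  void initState() {
    super.initState();
    _restartTimer();
  }

  @override
  void dispose() {
    _timer?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    // Any pointer-down anywhere in the subtree counts as activity.
    return Listener(
      behavior: HitTestBehavior.translucent,
      onPointerDown: (_) => _restartTimer(),
      child: widget.child,
    );
  }
}

Wrapping your MaterialApp's home (or the whole app) in InactivityWatcher(onTimeout: logOut, child: ...) then restarts the countdown on every interaction.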
We currently have a method that returns a Future<Stream<Position>>, just because internally we have to await the result of a method returning a Future before we can call another method that returns the Stream<Position> we are actually interested in. Here is the code:
Future<Stream<Position>> getPositionStream(
    [LocationOptions locationOptions = const LocationOptions()]) async {
  PermissionStatus permission = await _getLocationPermission();
  if (permission == PermissionStatus.granted) {
    if (_onPositionChanged == null) {
      _onPositionChanged = _eventChannel
          .receiveBroadcastStream(
              Codec.encodeLocationOptions(locationOptions))
          .map<Position>(
              (element) => Position._fromMap(element.cast<String, double>()));
    }
    return _onPositionChanged;
  } else {
    _handleInvalidPermissions(permission);
  }
  return null;
}
So what happens here is:
We await the _getLocationPermission() method so that we can test whether the user grants us permission to access the location services on their device (Android or iOS);
If the user grants us permission we return a Stream<Position> which will update every time the device registers a location change.
I have the feeling we can also handle this without doing an await and returning a Future. Something along the lines of:
Manually create and return an instance of the Stream<Position> class;
Handle the logic of checking the permissions and calling the _eventChannel.receiveBroadcastStream in the then() method of the Future<PermissionStatus> returned from the _getLocationPermission() method (so we don't have to await it);
Copy the events sent on the stream from _eventChannel.receiveBroadcastStream onto the earlier created (and returned) stream.
Somehow this seems possible, but it also involves some overhead: managing the stream, making sure it closes and is cleaned up correctly during the lifecycle of the plugin or when the user unsubscribes, passing the events through to the _eventChannel, etc.
So I guess the question would be, what would be the best way to approach this situation?
You can write the code as an async* function, which will return a Stream and still allows await in the body:
Stream<Position> getPositionStream(
    [LocationOptions locationOptions = const LocationOptions()]) async* {
  PermissionStatus permission = await _getLocationPermission();
  if (permission == PermissionStatus.granted) {
    if (_onPositionChanged == null) {
      _onPositionChanged = _eventChannel
          .receiveBroadcastStream(
              Codec.encodeLocationOptions(locationOptions))
          .map<Position>(
              (element) => Position._fromMap(element.cast<String, double>()));
    }
    yield* _onPositionChanged;
  } else {
    _handleInvalidPermissions(permission);
  }
}
Alternatively, if you are using a non-async function, you can also use StreamCompleter from package:async.
It allows you to return a Stream now, even if you only get the real stream later. When that happens, you "complete" the StreamCompleter with the real stream, and the original stream behaves as if it were the real stream.
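A minimal sketch of that approach, reusing the private members from the question's code (so the same assumptions apply):

import 'package:async/async.dart' show StreamCompleter;

Stream<Position> getPositionStream(
    [LocationOptions locationOptions = const LocationOptions()]) {
  final completer = StreamCompleter<Position>();
  _getLocationPermission().then((permission) {
    if (permission == PermissionStatus.granted) {
      _onPositionChanged ??= _eventChannel
          .receiveBroadcastStream(Codec.encodeLocationOptions(locationOptions))
          .map<Position>(
              (element) => Position._fromMap(element.cast<String, double>()));
      // Swap in the real stream; listeners see its events transparently.
      completer.setSourceStream(_onPositionChanged);
    } else {
      _handleInvalidPermissions(permission);
      completer.setEmpty(); // close the returned stream without any events
    }
  });
  return completer.stream;
}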
Stopping a watch channel is not working, though it's not responding with an error, even after allowing for propagation overnight. I'm still receiving 5 notifications for one CalendarList change. Sometimes 6. Sometimes 3. It's sporadic. We're also receiving a second round of notifications for the same action after 8 seconds. Sometimes 6 seconds. Sometimes a third set with a random count. Also sporadic. I received a total of 10 unique messages for a single calendar created via web browser.
You can perform any number of watch requests on a specific calendar resource. Google will always return the same calendar resource ID for the same calendar, but the UUID you generate in each request will be different, and because of that you will receive a separate notification for each watch request (channel) you've made. One way to stop all notifications from a specific calendar resource is to listen for notifications, pull "x-goog-channel-id" and "x-goog-resource-id" out of the notification headers, and use them in a Channels.stop request:
{
  "id": string,
  "resourceId": string
}
Every time you perform a watch request, you should persist the data from the response and check whether the UUID or resource ID already exists; if it does, don't perform another watch request for that resource ID (unless you do want multiple notifications). A sketch of this check follows the snippets below.
e.g.
app.post("/calendar/listen", async function (req, res) {
var pushNotification = req.headers;
res.writeHead(200, {
'Content-Type': 'text/html'
});
res.end("Post recieved");
var userData = await dynamoDB.getSignInData(pushNotification["x-goog-channel-token"]).catch(function (err) {
console.log("Promise rejected: " + err);
});
if (!userData) {
console.log("User data not found in the database");
} else {
if (!userData.calendar) {
console.log("Calendar token not found in the user data object, can't perform Calendar API calls");
} else {
oauth2client.credentials = userData.calendar;
await calendarManager.stopWatching(oauth2client, pushNotification["x-goog-channel-id"], pushNotification["x-goog-resource-id"])
}
}
};
calendarManager.js
module.exports.stopWatching = function (oauth2client, channelId, resourceId) {
  return new Promise(function (resolve, reject) {
    calendar.channels.stop({
      auth: oauth2client,
      resource: {
        id: channelId,
        resourceId: resourceId
      }
    }, async function (err, response) {
      if (err) {
        console.log('The API returned an error: ' + err);
        return reject(err);
      } else {
        console.log("Stopped watching channel " + channelId);
        await dynamoDB.deleteWatchData(channelId);
        resolve(response);
      }
    });
  });
};
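For the persistence check described above, a minimal sketch (the dynamoDB.getWatchData and saveWatchData helpers are hypothetical, assumed to exist alongside deleteWatchData; the uuid package supplies the channel ID; a watch on events is shown, but calendarList works the same way):

const { v4: uuidv4 } = require("uuid");

// Only open a new channel if we don't already hold a live one for this calendar.
module.exports.watchOnce = function (oauth2client, calendarId, address) {
  return dynamoDB.getWatchData(calendarId).then(function (existing) {
    if (existing && Number(existing.expiration) > Date.now()) {
      console.log("Channel already active for " + calendarId + ", skipping watch");
      return existing;
    }
    return new Promise(function (resolve, reject) {
      calendar.events.watch({
        auth: oauth2client,
        calendarId: calendarId,
        resource: {
          id: uuidv4(),      // fresh channel UUID
          type: "web_hook",
          address: address   // your HTTPS push endpoint
        }
      }, async function (err, response) {
        if (err) return reject(err);
        var channel = response.data || response; // recent clients wrap the body in `.data`
        // Persist so later calls (and the stop handler) can find this channel.
        await dynamoDB.saveWatchData(calendarId, {
          channelId: channel.id,
          resourceId: channel.resourceId,
          expiration: channel.expiration
        });
        resolve(channel);
      });
    });
  });
};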
Not a Google expert, but I recently implemented this in my application, so I'll try to answer some of your questions for future readers:
It's sporadic
That's because you have created more than one channel watching events.
We're also receiving a second round of notifications for the same action after 8 seconds
Google doesn't say anything about the maximum delay for sending a push notification.
Suggestions:
CREATE:
When you create a new channel, always save the channel_id and channel_resource in your database.
DELETE:
When you want to delete a channel, just use the stop API endpoint with the channel data saved in your database.
RENEW:
As you have noticed, channels expire, so you need to renew them once in a while. To do that, create a cron job on your server that STOPs all previous channels and creates new ones (see the sketch below).
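A minimal sketch of such a renewal job, reusing the stopWatching and watchOnce helpers shown earlier (the node-cron package, the listAllWatchData helper, and the pushEndpoint variable are assumptions):

const cron = require("node-cron");

// Every day at 03:00: stop each stored channel, then open a fresh one.
cron.schedule("0 3 * * *", async function () {
  const channels = await dynamoDB.listAllWatchData(); // hypothetical helper
  for (const ch of channels) {
    try {
      await calendarManager.stopWatching(oauth2client, ch.channelId, ch.resourceId);
    } catch (err) {
      console.log("Stop failed (channel may have expired already): " + err);
    }
    await calendarManager.watchOnce(oauth2client, ch.calendarId, pushEndpoint);
  }
});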
Comment: Whenever something goes wrong, read the error message returned by the Google Calendar API. Most of the time it tells you what is wrong.
Use Channels.stop, which is mentioned in the docs. Supply the following data in your request body:
{
  "id": string,
  "resourceId": string
}
id is the channel ID you specified when you created your watch request; the same goes for the resource ID.
Read this SO thread and this github forum for additional reference.
I've been using Firebase extensively and still face only one real issue: onDisconnect isn't 100% reliable in my experience.
If you close a computer without closing the window first, or kill the browser, the "garbage collector" sometimes gets your onDisconnect executed, and sometimes it doesn't.
My question is the following: I just don't use /.info/connected for now; I basically use a simple
userRef.child('status').set(1);
userRef.onDisconnect().update({ 'status': 0 });
Is there anything wrong with this approach? Do we agree that the update parameters are passed to the server at the time the line is executed, and not at window unload?
NB: I'm trying to keep a multi-window status, using the following approach to keep the status at 1 if another window is closed:
userRef.child('status').on('value', function(snap) {
  if (snap.val() != 1) {
    userRef.child('status').set(1);
  }
});
I don't see how this could be related, but...
MY SOLUTION: In fact, I had just missed the part where you learn that onDisconnect is only triggered once. To get a persistent onDisconnect, you need to implement basic persistence, re-arming it whenever the connection is re-established.
Helpers.onConnected = function(callback) {
  var connectedRef = lm.newFirebase('.info/connected');
  var fn = connectedRef.on('value', function(snap) {
    if (snap.val() === true) {
      if (callback) callback();
    }
  });
  var returned = {};
  returned.cancel = function() {
    connectedRef.off('value', fn);
  };
  return returned;
};
Simple use case:
this._onConnected = lm.helpers.onConnected(function() {
  this.firebase.onDisconnect().update({ 'tu': 0 });
}.bind(this));
And then to cancel:
if (this._onConnected) this._onConnected.cancel();
this.firebase.onDisconnect().cancel();
You should always call the onDisconnect() operation BEFORE you call the set() operation. That way if the connection is lost between the two you don't end up with zombie data.
Also note that in the case where the network connection is not cleanly killed, you may have to wait for a TCP timeout before we're able to detect the user as gone and trigger disconnect cleanup. The cleanup will occur, but it may take a few minutes.
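Applied to the question's snippet, the safe ordering is (a sketch using the same userRef):

// Arm the cleanup first; if the connection dies between these two lines,
// the server already knows to clear the status, so no zombie data is left.
userRef.onDisconnect().update({ 'status': 0 });
userRef.child('status').set(1);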
After a lot of digging around, I found this issue, which was actually happening in my case, and I think for most of the others who are coming to this page for a solution.
The problem is that Firebase checks its access token twice: 1. when the onDisconnect is enqueued, and 2. when the onDisconnect is applied.
Firebase doesn't proactively refresh tokens when a tab is not visible. If the page is inactive for longer than the access token's expiry and is closed without focusing on that tab, Firebase will not honor the onDisconnect because of the expired access token.
Solutions to this:
You can get a new access token by setting some sort of interval, like this:
let intervalId;
const intervalDuration = 30 * 60 * 1000; // e.g. every 30 minutes (ID tokens last about an hour)

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    if (intervalId) {
      window.clearInterval(intervalId);
      intervalId = undefined;
    }
  } else {
    firebaseApp.auth().currentUser.getIdToken(true);
    intervalId = setInterval(() => {
      firebaseApp.auth().currentUser.getIdToken(true);
    }, intervalDuration);
  }
});
Or you can disconnect from the database manually with firebase.database().goOffline() whenever the tab's visibility changes from "visible".
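A minimal sketch of that alternative (cleanly closing the connection lets the server apply onDisconnect handlers right away, rather than waiting for a TCP timeout):

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    firebase.database().goOnline();  // reconnect and re-arm on return
  } else {
    firebase.database().goOffline(); // clean disconnect; onDisconnect fires now
  }
});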
Expanding on the comment:
This is why we provide the /.info/connected endpoint.
We recommend re-establishing the disconnect handler every time the connection comes back online
I followed the above and got it fixed:
const userRef = Firebase.database().ref('/users/<user-id>');

Firebase.database()
  .ref('.info/connected')
  .on('value', async snap => {
    if (snap.val() === true) {
      await userRef.onDisconnect().remove();
      await userRef.update({ status: 'online' });
    } else {
      await userRef.remove();
    }
  });
For reference, my previous code was:
const userRef = Firebase.database().ref('/users/<user-id>');
userRef.onDisconnect().remove();
await userRef.update({ status: 'online' });
The issue with this is that the onDisconnect may not work when you go offline and come back online multiple times.