I'm having a hard time wrapping my brain around a pattern I'm trying to implement, so I'm hoping the Stack Overflow community can help me work through a solution.
Currently I use redux-thunk along with superagent to handle calls to my API and sync it all up with Redux.
An example of this might look like:
export const getUser = (id) => {
  return (dispatch) => {
    const deferred = new Promise((resolve, reject) => {
      const call = () => {
        API.get(`/users/${id}`)
          .then((response) => response.body)
          .then((response) => {
            if (response.message === 'User found') {
              serializeUser(response.data).then((response) => {
                resolve(response);
              });
            } else {
              reject('not found');
            }
          })
          .catch((err) => {
            handleCatch(err, dispatch).then(call).catch(reject);
          });
      };
      call();
    });
    return deferred;
  };
};
In the case where the server comes back with a 200 and some data, I continue on, putting the data into the store and rendering the page, or whatever my application does.
In the case where I receive an error, I have attempted to write a function that intercepts it and determines whether to show an error on the page or, for a 401 from our API, attempt a token refresh and then retry the original call...
import { refreshToken } from '../actions/authentication';

export default (err, dispatch) => {
  const deferred = new Promise((resolve, reject) => {
    if (err.status === 401) {
      dispatch(refreshToken()).then(resolve).catch(reject);
    } else {
      reject(err);
    }
  });
  return deferred;
};
This works; however, I have to add it to every call, and it doesn't account for concurrent calls, which shouldn't fire at all while a refresh is in progress.
In my research on this topic I've seen suggestions that redux-saga might work, but I haven't been able to wrap my brain around how to make it do what I need.
Basically, I need something like a queue that all my API requests go into, perhaps debounced so concurrent requests are pushed to the end and the calls stack up until a timeout fires. When the first call gets a 401, the queue pauses until the token refresh comes back: if it succeeds, the queue continues; if it fails, every future request in the queue is cancelled and the user is sent back to the login page.
The thing I'd be worried about here: if the first call in the stack takes a long time, I don't want the other calls to have to wait on it, because that would increase the perceived loading time for the user.
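The narrower version I can picture is replacing my handleCatch with a single-flight refresh, so concurrent 401s all wait on one refresh instead of each firing their own. A rough sketch; refreshPromise is a hypothetical module-level variable:

import { refreshToken } from '../actions/authentication';

let refreshPromise = null; // hypothetical: shared across all callers

export default (err, dispatch) => {
  if (err.status !== 401) return Promise.reject(err);
  if (!refreshPromise) {
    // the first 401 starts the refresh; later ones reuse the same promise
    refreshPromise = dispatch(refreshToken())
      .then((result) => { refreshPromise = null; return result; })
      .catch((refreshErr) => { refreshPromise = null; throw refreshErr; });
  }
  return refreshPromise; // every concurrent 401 waits on the same refresh
};

But that still doesn't give me the pause/cancel semantics of a real queue.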
Is there a better way to handle keeping tokens refreshed?
I have written a handler function inside my Next.js pages/api folder:
handler(req, res) {}
I am using @influxdata/influxdb-client as mentioned in the documentation, roughly like this:
from(queryAPI.rows(query))
  .pipe(...)
  .subscribe({
    next(value) { results.push(value); },
    complete() { console.log(results); res.status(200).json(results); }
  });
I get all the query values once the observable is completed, and it works most of the time.
I push the intermediate results in the next part of the subscriber, and try to send the results back to the client in the complete part. I want the request handler to wait until I have all my values from the InfluxDB query in the complete part, so I can send them back to the client.
But the issue is that the handler function will not wait until the observable is completed. The handler returns before the observer completes, and I get the error: "API resolved without sending a response..."
I get all the values only when the observer is completed.
I don't know how to handle the scenario.
How can I make the handler function wait until the observable is completed?
I found the solution for this myself.
I used new Promise() with await, put my observable inside the promise, and resolved the promise in the complete handler of the subscription.
The code looks like the following:
import { from } from 'rxjs';
import { map } from 'rxjs/operators';

export default async function handler(req, res) {
  const results = [];
  await new Promise((resolve, reject) => {
    from(queryAPI.rows(query))
      .pipe(map(({ values, tableMeta }) => tableMeta.toObject(values)))
      .subscribe({
        next: (object) => { results.push(object); },
        complete: () => { resolve(results); },
        error: (err) => { reject(err); },
      });
  });
  res.status(200).send(results);
}
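As an aside, if your version of @influxdata/influxdb-client exposes queryApi.collectRows(), the same handler can be written without the manual promise. A sketch, assuming the same query and queryAPI as above:

export default async function handler(req, res) {
  try {
    // collectRows buffers all rows and resolves once the query completes
    const results = await queryAPI.collectRows(query);
    res.status(200).json(results);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
}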
I've made a Firebase Cloud Function which adds a claim to a user indicating that they have paid (it sets paid to true for the user):
const functions = require("firebase-functions");
const admin = require("firebase-admin");

exports.addPaidClaim = functions.https.onCall(async (data, context) => {
  // add custom claim (paid)
  return admin.auth().setCustomUserClaims(data.uid, {
    paid: true,
  }).then(() => {
    return {
      message: `Success! ${data.email} has paid for the course`,
    };
  }).catch((err) => {
    return err;
  });
});
However, when I run this function, I receive the following error: "Unhandled Rejection (RangeError): Maximum call stack size exceeded". I really don't understand why this is happening. Does somebody see what keeps getting recalled, causing the function to never end?
Asynchronous operations need to return a promise, as stated in the documentation. Cloud Functions therefore tries to serialize the data contained in the promise you return, then send it in JSON format to the client. I believe your setCustomUserClaims call does not resolve with any object that can be treated as the answer to the promise, so the process stays in a waiting loop that throws the RangeError.
To avoid this error I can think of two different options:
Add a paid parameter so you can send a JSON response (and remove the setCustomUserClaims call if there isn't any need to change the user's access control, because custom claims are not designed to store additional data).
Insert a promise that resolves and sends any needed information to the client. Something like:
return new Promise(function(resolve, reject) {
  request({
    url: URL,
    method: "POST",
    json: true,
    body: queryJSON // A JSON variable I've built previously
  }, function (error, response, body) {
    if (error) {
      reject(error);
    } else {
      resolve(body);
    }
  });
});
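For completeness, the first option applied to the original callable might look like the sketch below, using async/await so the function always resolves with a plain, serializable object (the HttpsError rethrow is my addition):

exports.addPaidClaim = functions.https.onCall(async (data, context) => {
  try {
    await admin.auth().setCustomUserClaims(data.uid, { paid: true });
    // resolve with plain JSON, including the paid flag for the client
    return { message: `Success! ${data.email} has paid for the course`, paid: true };
  } catch (err) {
    // surface the failure instead of returning the raw error object
    throw new functions.https.HttpsError("internal", err.message);
  }
});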
I'm using rxjs v5.4.3, redux-observable v0.16.0.
In my application, I'd like to achieve the following:
A user has an auth token, and a refresh token to regenerate the auth token.
The user makes requests with the auth token (by emitting a REQUEST action).
If a request fails, request a regenerated auth token using the refresh token.
If refreshing succeeds, emit a TOKEN_REFRESHED action to update the auth token, and do not emit REQUEST_FAILURE.
If refreshing fails, emit REQUEST_FAILURE.
After refreshing (and updating the auth token reducer), retry the request using the refreshed auth token.
If the request succeeds, emit REQUEST_SUCCESS; if it fails, emit REQUEST_FAILURE.
I'd like to achieve something like:
const fetchEpic = (action$: ActionsObservable<Action>, store: Store<IRootState>) => action$
  .ofAction(actions.fetchPost)
  .mergeMap(({ payload: { postId } }) => {
    const { authToken, refreshToken } = store.getState().auth;
    return api.fetchPost({ postId, authToken }) // this returns Observable<ResponseJSON>
      .map(res => actions.fetchSuccess({ res })) // if success, just emit success-action with the response
      .catch(err => {
        if (isAuthTokenExpiredError(err)) {
          return api.reAuthenticate({ refreshToken })
            .map(res => actions.refreshTokenSuccess({ authToken: res.authToken }))
            .catch(err => Observable.of(actions.fetchFailure({ err })));
          // and retry fetchPost after re-authenticate!
        }
        return Observable.of(actions.fetchFailure({ err }));
      });
  });
Is there any solution?
There are many ways to do it, but I would recommend splitting off the reauthentication into its own epic to make it easier to maintain/test/reuse.
Here's what that might look like:
const reAuthenticateEpic = (action$, store) =>
  action$.ofType(actions.reAuthenticate)
    .switchMap(() => {
      const { refreshToken } = store.getState().auth;
      return api.reAuthenticate({ refreshToken })
        .map(res => actions.refreshTokenSuccess({ authToken: res.authToken }))
        .catch(err => Observable.of(
          actions.refreshTokenFailure({ err })
        ));
    });
We'll also want to use something like Observable.defer so that each time we retry, we look up the latest version of the authToken:
Observable.defer(() => {
  const { authToken } = store.getState().auth;
  return api.fetchPost({ postId, authToken });
})
When we catch errors in fetchEpic and detect isAuthTokenExpiredError, we return an Observable chain that:
Starts listening for a single refreshTokenSuccess, signalling we can retry.
Just in case the reauthing itself fails, listens for that with .takeUntil(action$.ofType(actions.refreshTokenFailure)) so that we aren't waiting around forever; you might want to handle this case differently, your call.
mergeMaps it to the original source, which is the second argument of the catch callback. The "source" is the Observable chain before the catch, and since Observables are lazy, when we receive the refreshTokenSuccess action it will resubscribe to that chain, effectively "retrying" it.
Merges the above chain with an Observable of a reAuthenticate action. This is used to kick off the actual reauth.
To summarize: the Observable chain we return from catch will first start listening for refreshTokenSuccess, then it emits reAuthenticate, and when (and if) we receive refreshTokenSuccess we "retry" the source, i.e. our api.fetchPost() chain above the catch that we wrapped in Observable.defer. If refreshTokenFailure is emitted before we receive refreshTokenSuccess, we give up entirely.
const fetchEpic = (action$, store) =>
  action$.ofType(actions.fetchPost)
    .mergeMap(({ payload: { postId } }) =>
      Observable.defer(() => {
        const { authToken } = store.getState().auth;
        return api.fetchPost({ postId, authToken });
      })
        .map(res => actions.fetchSuccess({ res }))
        .catch((err, source) => {
          if (isAuthTokenExpiredError(err)) {
            // Start listening for refreshTokenSuccess, then kick off the reauth
            return action$.ofType(actions.refreshTokenSuccess)
              .takeUntil(action$.ofType(actions.refreshTokenFailure))
              .take(1)
              .mergeMapTo(source) // same as .mergeMap(() => source)
              .merge(
                Observable.of(actions.reAuthenticate())
              );
          } else {
            return Observable.of(actions.fetchFailure({ err }));
          }
        })
    );
These examples are untested, so I may have made some minor mistakes, but you hopefully get the gist. There's also probably a more elegant way to do this, but this will at least unblock you. (Others are more than welcome to edit this answer if they can decrease the complexity.)
Side notes
This creates the slight potential for infinite retries, which can cause nasty issues both in the user's browser and on your servers. It might be a good idea to only retry a set number of times, and/or to put some sort of delay between retries. In practice this might not be worth worrying about; you'll know best.
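For instance, the deferred api.fetchPost chain above could be wrapped in a retryWhen that caps and spaces out retries for non-auth errors. A sketch; the cap of two extra attempts and the 1000 ms delay are arbitrary, and isAuthTokenExpiredError is reused from the question:

Observable.defer(() => {
  const { authToken } = store.getState().auth;
  return api.fetchPost({ postId, authToken });
})
  .retryWhen(errors => errors
    .scan((count, err) => {
      // let auth errors fall through to the catch-based reauth above,
      // and give up on anything else after two extra attempts
      if (isAuthTokenExpiredError(err) || count >= 2) throw err;
      return count + 1;
    }, 0)
    .delay(1000)) // wait a second before each resubscription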
You (or someone else reading this later) may be tempted to use .startWith(actions.reAuthenticate()) instead of the merge, but be mindful that startWith is just shorthand for a concat, not a merge, which means it would synchronously emit the action before we have started to listen for a success one. Usually that isn't an issue, since HTTP requests are async, but it has caused people bugs before.
I have the following code (simplified for this post) - assume an initial call to onStart().
Running this works fine. If I lose the internet connection I get the net::ERR_INTERNET_DISCONNECTED error (as expected) but the polling stops.
Clearly I am not handling any errors here, as that is where I'm getting stuck. I'm not clear on where I should handle those errors, and how. Do I need to call startPolling() again?
I need the polling to continue even if there is no internet connection, so that on re-connection data is updated. Any advice please?
onStart() {
  this.startPolling().subscribe(data => {
    // do something with the data
  });
}

startPolling(): Observable<any> {
  return Observable
    .interval(10000)
    .flatMap(() => this.getData());
}

getData() {
  var url = `http://someurl.com/api`;
  return this.http.get(url)
    .map(response => {
      return response.json();
    });
}
Thanks in advance.
If you know the error happens because of this.http.get(url), then you can add the catch() operator, which lets you subscribe to another Observable in place of the source Observable that sent the error notification.
getData() {
  var url = `http://someurl.com/api`;
  return this.http.get(url)
    .catch(err => Observable.empty())
    .map(response => {
      return response.json();
    });
}
This will simply ignore the error and won't emit anything.
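Note that the placement of that catch matters: it sits on the inner Observable returned from this.http.get(), so each failed poll is replaced by an empty Observable and the outer interval keeps ticking. For contrast, catching on the outer chain would end the polling for good; a sketch of what not to do:

startPolling(): Observable<any> {
  return Observable
    .interval(10000)
    .flatMap(() => this.getData())
    // Don't do this: the first error swaps the entire interval chain
    // for an empty Observable, so polling never resumes.
    .catch(err => Observable.empty());
}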
I've been using Firebase extensively and still face only one real issue: onDisconnect isn't 100% reliable in my experience.
If you close a computer without closing the window first, or kill the browser, the "garbage collector" sometimes gets your onDisconnect executed, and sometimes it doesn't.
My question is the following: I just don't use /.info/connected for now; I basically use a simple
userRef.child('status').set(1);
userRef.onDisconnect().update({ 'status': 0 });
Is there anything wrong with this approach? Do we agree that the update parameters are passed to the server at the time the line is executed, and not before window unload?
NB: I'm also trying to keep a multi-window status, using the following approach to keep the status at 1 if another window is closed:
userRef.child('status').on('value', function(snap) {
  if (snap.val() != 1) {
    userRef.child('status').set(1);
  }
});
I don't see how this could be related, but...
MY SOLUTION: In fact, I had just missed the part where you learn that onDisconnect is only triggered once. To get a persistent onDisconnect, you need to implement basic persistence.
Helpers.onConnected = function(callback) {
  var connectedRef = lm.newFirebase('.info/connected');
  var fn = connectedRef.on('value', function(snap) {
    if (snap.val() === true) {
      if (callback) callback();
    }
  });
  var returned = {};
  returned.cancel = function() {
    connectedRef.off('value', fn);
  };
  return returned;
};
Simple use case:
this._onConnected = lm.helpers.onConnected(function() {
  this.firebase.onDisconnect().update({ 'tu': 0 });
}.bind(this));
And then to cancel:
if (this._onConnected) this._onConnected.cancel();
this.firebase.onDisconnect().cancel();
You should always call the onDisconnect() operation BEFORE you call the set() operation. That way, if the connection is lost between the two, you don't end up with zombie data.
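Applied to the snippet from the question, that ordering would look like this sketch (targeting the status child directly so both writes hit the same path):

// Queue the server-side cleanup first...
userRef.child('status').onDisconnect().set(0);
// ...then mark the user online, so a connection lost between
// the two lines can't leave a stale status behind.
userRef.child('status').set(1);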
Also note that in the case where the network connection is not cleanly killed, you may have to wait for a TCP timeout before we're able to detect the user as gone and trigger disconnect cleanup. The cleanup will occur, but it may take a few minutes.
After a lot of digging around, I found this issue, which was actually happening in my case and, I think, for most of the others coming to this page for a solution.
So the problem is that Firebase checks its accessToken twice: 1. when the onDisconnect is enqueued, and 2. when the onDisconnect is applied.
Firebase doesn't proactively refresh tokens when a tab is not visible. If the page stays inactive for longer than the expiry of the accessToken and is then closed without focusing that tab, Firebase will not allow the onDisconnect because of the expired accessToken.
Solutions to this:
You can get a new accessToken by setting some sort of interval, like this:
let intervalId;

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    if (intervalId) {
      window.clearInterval(intervalId);
      intervalId = undefined;
    }
  } else {
    firebaseApp.auth().currentUser.getIdToken(true); // force-refresh immediately
    intervalId = setInterval(() => {
      firebaseApp.auth().currentUser.getIdToken(true);
    }, intervalDuration); // pick something shorter than the token expiry
  }
});
Or you can disconnect the database manually with firebase.database().goOffline() whenever tab visibility changes from "visible".
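A sketch of that second option, reusing the visibilitychange listener from above:

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    firebase.database().goOnline(); // resume the realtime connection
  } else {
    // disconnect cleanly while the access token is still valid,
    // so the server can apply the onDisconnect right away
    firebase.database().goOffline();
  }
});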
Expanding on the comment:
This is why we provide the /.info/connected endpoint.
We recommend re-establishing the disconnect handler every time the connection comes back online.
I followed the above and got it fixed:
const userRef = Firebase.database().ref('/users/<user-id>');

Firebase.database()
  .ref('.info/connected')
  .on('value', async snap => {
    if (snap.val() === true) {
      // re-arm the onDisconnect on every (re)connection
      await userRef.onDisconnect().remove();
      await userRef.update({ status: 'online' });
    } else {
      await userRef.remove();
    }
  });
For reference, my previous code was:
const userRef = Firebase.database().ref('/users/<user-id>');
userRef.onDisconnect().remove();
await userRef.update({ status: 'online' });
The issue with this is that the onDisconnect may not work when you go offline and come back online multiple times, because the handler is only armed once rather than on every reconnection.