Prevent refetching data when resetting redux state on a reroute with next-router - redux

I currently have this piece of code:
const handleClick = async () => {
  dispatch(resetFilters());
  if (router.pathname !== '/') {
    await router.push('/');
  }
};
Where resetFilters() is a function to reset all the state in a slice.
My problem is that wherever I place this dispatch (before or after the reroute), it causes data to be fetched twice (since what data is fetched depends on the state).
If I place it before, I fetch data based on the reset state on the page I'm rerouting away from (which I won't use).
If I place it after, I fetch data based on the old state on the page I'm rerouting to, which then has to be fetched again with the reset state.
I saw that react-router-redux has a LOCATION_CHANGE action which seems to solve my problem.
Is there an equivalent version for next-router?
I.e. I need something which allows me to update redux state and redirect with next-router in an atomic step.
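For reference (this is not part of the original question, just a sketch of one possible direction): the closest analogue to LOCATION_CHANGE with the Next.js pages router is subscribing to router.events and dispatching from there, e.g. from a component mounted in _app. The store/slice import paths below are assumptions, and whether this fully avoids the double fetch still depends on when the page being left unmounts its data hooks.
// _app-level sketch: dispatch the reset when navigation to '/' starts,
// roughly mimicking react-router-redux's LOCATION_CHANGE.
import { useEffect } from 'react';
import { useRouter } from 'next/router';
import { useDispatch } from 'react-redux';
import { resetFilters } from '../store/filtersSlice'; // hypothetical path

function RouteChangeReset() {
  const router = useRouter();
  const dispatch = useDispatch();

  useEffect(() => {
    const handler = (url) => {
      if (url === '/') dispatch(resetFilters());
    };
    router.events.on('routeChangeStart', handler);
    return () => router.events.off('routeChangeStart', handler);
  }, [router.events, dispatch]);

  return null;
}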

Related

RTK Query - Delete cached data upon cacheEntryAdded

Currently we have an api endpoint that requests a single 'Group' via ID.
We have a WebSocket subscription set up, and in the onCacheEntryAdded definition, we handle cases where that Group is updated, or deleted.
When we receive an update message from the websocket, we trigger the following:
updateCachedData((draft) => {
  draft = response;
});
Which updates the entry, as expected.
However, what is the approach we should use if we want to remove the entry entirely? Upon 'delete' messages from the websocket, I would assume I could simply set draft as undefined, but that doesn't seem to be the case.
updateCachedData((draft) => {
  draft = response;
});
actually does not update anything here in the first place.
updateCachedData has the same rules as produce from immer or normal createSlice case reducers: you can modify the object in state (or draft in this case), but you cannot reassign the variable itself. If you want to do that, you have to write
updateCachedData((draft) => {
  return response;
});
instead.
In the same fashion, you can write
updateCachedData((draft) => {
  return null;
});
too, but that will not remove the full cache entry; it will only set the data to null. (undefined won't work!)
The cache entry will only be removed once no component is using it any more (i.e. no mounted component subscribes to it via useQuery) - and then it will be removed automatically after 60 seconds.
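Pulling the above together, here is a minimal sketch of how the onCacheEntryAdded handler under discussion could handle both the update and the 'delete' messages via updateCachedData. The WebSocket URL and the message shape are assumptions, not from the original question.
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Sketch: a single-Group endpoint whose cache is kept in sync over a WebSocket.
// Assumed message shape: { type: 'updated' | 'deleted', data: Group }.
export const groupApi = createApi({
  reducerPath: 'groupApi',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (build) => ({
    getGroup: build.query({
      query: (id) => `group/${id}`,
      async onCacheEntryAdded(id, { updateCachedData, cacheDataLoaded, cacheEntryRemoved }) {
        const ws = new WebSocket('wss://example.com/groups'); // hypothetical URL
        try {
          await cacheDataLoaded; // don't touch the cache before the initial fetch resolves
          ws.addEventListener('message', (event) => {
            const msg = JSON.parse(event.data);
            if (msg.data?.id !== id) return;
            if (msg.type === 'updated') {
              updateCachedData(() => msg.data); // return a value, don't reassign draft
            } else if (msg.type === 'deleted') {
              updateCachedData(() => null); // the entry itself is only dropped once unused
            }
          });
        } catch {
          // cacheDataLoaded rejects if the entry was removed before data arrived
        }
        await cacheEntryRemoved;
        ws.close();
      },
    }),
  }),
});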

Is it possible to destroy firestore listeners soon if client is not connected? [duplicate]

Is there any way to pause a firestore listener without removing it?
I have multiple Firebase listeners, some dependent on others, that change or start other listeners on data change. Let's say my first listener starts a second listener in its onSnapshot callback. The first listener is started in useEffect. Under certain conditions I may not want to change the second listener, so I need to discard the data-change update from the first listener.
If the condition is met (a button click), I discard data changes on the first listener for a few moments. Currently I'm doing this using a boolean with useRef. My React app is working fine with dependent listeners like this. I could remove the listener, but I do not want to remove and recreate it.
I was wondering if there is a pausing mechanism or method available for any listener. I think it would save a tiny read cost if there were such a method, because I'm not using the data sent to onSnapshot.
Code example:
useEffect(() => {
  let firstListener, secondListener;

  // onSnapshot returns an unsubscribe function synchronously, so no async/await is needed
  function ListenerFunc(p) {
    secondListener = firestore
      .collection("test")
      .doc(p)
      .onSnapshot((doc) => {
        //console.log("Current data: ", doc.data());
        // Need to discard unwanted changes here.
        // pauser.current is set to true on button click for ~2 seconds,
        // then flipped back to false.
        if (pauser.current) {
          console.log("paused for a moment.");
          return;
        }
        // ...update.
      });
  }

  firstListener = firestore
    .collection("test")
    .doc("tab")
    .onSnapshot((doc) => {
      //console.log("Current data: ", doc.data());
      const p = doc.data().p; // get variable p
      ListenerFunc(p);
    });

  // cleanup: detach both listeners on unmount
  return () => {
    if (firstListener) firstListener();
    if (secondListener) secondListener();
  };
}, []);
Unfortunately this is not possible. If you need to stop listening for changes, even temporarily, you have to detach your listener and attach a new one when you want to start listening again; there is no pause mechanism for listeners.
You could open a Feature Request in Google's Issue Tracker if you'd like, so that the product team can consider it, but given that this has already been proposed in this GitHub Feature Request for the iOS SDK and was rejected, I don't see this changing anytime soon.
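For what it's worth, a minimal sketch of the detach/re-attach approach described above, using the same namespaced API as the question's code (the helper names and document path are made up for illustration):
// Sketch: "pausing" by detaching, since no pause API exists.
let unsubscribeSecond = null;

function attachSecondListener(p, onUpdate) {
  unsubscribeSecond = firestore
    .collection("test")
    .doc(p)
    .onSnapshot((doc) => onUpdate(doc.data()));
}

function pauseSecondListener() {
  if (unsubscribeSecond) {
    unsubscribeSecond(); // detaches the listener; no further snapshots are delivered
    unsubscribeSecond = null;
  }
}

// To "resume", simply attach a fresh listener:
// attachSecondListener(p, handleUpdate);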

How to properly handle simultaneous persistence actions in Redux?

A React application using Redux. I have a combined reducer consisting of appStateReducer and contestReducer. Each of these two takes care of some part of the application data.
When an action is performed, I want not only the respective state to be changed, but I also want to persistently save the new state, so that if the user reloads the application page in the browser, the state is preserved.
My idea is to add a third reducer that takes care only of save and load actions (each of the two sub-states separately).
Save and load will use IndexedDB, through the localbase package. All of the db actions (add, get, update, delete) appear to be synchronous, i.e. there seems to be no real need to implement asynchronous actions. UPDATE: this is wrong, it is asynchronous; some basic examples just ignore that.
I am not sure how to handle the problem properly.
I will need a database connection object, a singleton, initialized once after the page is loaded, which should be shared by all save/load actions regardless of which part of the state is to be stored or loaded. That would lead to a separate reducer working only with the db object. If I do this, the db reducer would have to have access to all the other sub-state, which is normally not the case in Redux.
Or, I could implement save and load actions in each reducer separately, which is not a big deal, actually. But how do I make the global db object accessible to the reducers?
It is a React application written in TypeScript, and all components are implemented as classes.
You already have access to all data if you are using middleware (a thunk), for example:
export const requestPost = (id) => (dispatch, getState) => {
  // You can keep a "bank" of posts and check whether the data already exists
  const postState = getState().bank.posts.data;
  const found = postState?.find((post) => post.id === id);
  if (found) {
    dispatch({ type: SUCCESS.POST, data: found });
  } else {
    dispatch({ type: REQUEST.POST });
    API.get(`/post/v2?id=${id}`)
      .then((res) => dispatch({ type: SUCCESS.POST, data: res.data[0] }))
      .catch((err) => errorHandler(err, FAILURE.POST));
  }
};
Just make a reducer for saving data to the DB (or somewhere else) and read it back at startup.
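As a side note (my addition, not from the original answer), the "global db object" question is often solved with a middleware rather than a reducer: the middleware can close over the connection, see every action, and read the whole state, so reducers stay pure. A rough sketch, assuming localbase's collection/doc API and hypothetical slice keys:
import Localbase from 'localbase';

// One shared connection, created once at module load.
const db = new Localbase('appDb');

// Persist selected slices after every action; reducers never touch the db.
const persistedSlices = ['appState', 'contest']; // hypothetical slice keys

export const persistenceMiddleware = (store) => (next) => (action) => {
  const result = next(action);
  const state = store.getState();
  persistedSlices.forEach((key) => {
    // set() returns a promise; writes are fire-and-forget here
    db.collection('redux').doc(key).set(state[key]);
  });
  return result;
};

// At startup, read the saved slices back and dispatch hydrate actions:
export async function loadPersistedState(store) {
  for (const key of persistedSlices) {
    const saved = await db.collection('redux').doc(key).get();
    if (saved) store.dispatch({ type: `HYDRATE_${key.toUpperCase()}`, data: saved });
  }
}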

async reducer with react-router seems to have multiple stores?

I have an async reducer structure. On any given page, I inject the page's reducer with:
export const injectReducer = (store, { key, reducer }) => {
  // store.asyncReducers already initialized to {}
  // makeRootReducer just returns combineReducers({...store.asyncReducers})
  store.asyncReducers[key] = reducer
  store.replaceReducer(makeRootReducer(store.asyncReducers))
}
I use react-router 3 and the plain routes definitions on any given page. I use require.ensure for async handling of routes within getComponent of the plain route definition:
export default (store) => ({
  path: 'counter',
  getComponent(nextState, cb) {
    require.ensure([], (require) => {
      const Counter = require('./containers/Counter').default
      const reducer = require('./modules/counter').default
      injectReducer(store, { key: 'counter', reducer })
      cb(null, Counter)
    }, 'Counter')
  }
})
My problem is that if I use dispatch followed by browserHistory.push, I would expect the state to be updated before going to the new page. What happens, however, is that there appear to be 2 separate stores. For example, when navigating between pages, the value of the counter from the previous page seems to be preserved despite being on the same key. What is going on here???
Sample repo of my problem. You can git clone, npm install, npm start and go to localhost:3000/counter. If you click Double (Async) it doubles the counter and then goes to /otherPage. If you then click Half (Async) it brings you back to /counter. However, the value of the counter is the value from doubling, not from halving. Also, importantly, pulling up Redux DevTools and navigating between the pages seems to show the counter changing for ALL prior data, almost as if the entire store was replaced, yet the prior values are preserved.
After much investigation I have discovered that in development Redux DevTools will recompute the entire state history on a new page. Injecting a new reducer whose initialState differs on the new page therefore produces 2 different results. There are not 2 stores or any caches; it is just Redux DevTools recomputing the entire state history with different initialStates.
Hence, in my scenario, dispatch was updating the state on my first page. Then browserHistory pushes to the second page, and the second page recomputes the entire state history. However, the second page has a reducer that is missing the action handler from the first page, so when the history is replayed, the state doesn't change for that last action.
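To illustrate the mechanism (this snippet is mine, not from the sample repo): a reducer returns its state unchanged for action types it doesn't know, so replaying the same recorded history against a page that injected a different reducer yields a different final state.
// Page A's reducer knows DOUBLE; page B's reducer does not.
const counterA = (state = 1, action) =>
  action.type === 'DOUBLE' ? state * 2 : state;

const counterB = (state = 1, action) =>
  action.type === 'HALF' ? state / 2 : state;

// History recorded on page A: [DOUBLE]
// Replayed by Redux DevTools on page A: 1 -> 2
// Replayed by Redux DevTools on page B: 1 -> 1 (DOUBLE falls through unchanged)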

Meteor GroundDB granularity for offline/online syncing

Let's say that two users make changes to the same document while offline, but in different sections of the document. If user 2 goes back online after user 1, will the changes made by user 1 be lost?
In my database, each row contains a JS object, and one property of this object is an array. This array is bound to a series of check-boxes on the interface. What I would like is that if two users change those check-boxes, the latest change is kept for each check-box individually, based on the time when the change was made, not the time when the syncing occurred. Is GroundDB the appropriate tool to achieve this? Is there any means to add an event handler in which I can add some logic that would be triggered when syncing occurs, and that would take care of the merging?
The short answer is "yes": none of the Ground DB versions have conflict resolution, since the logic is custom depending on the desired behaviour of conflict resolution, e.g. whether you want to automate it or involve the user.
The old Ground DB simply relied on Meteor's conflict resolution (latest data to the server wins). I'm guessing you can see some issues with that, depending on the order in which clients come back online.
Ground DB II doesn't have method resume; it's more or less just a way to cache data offline. It observes an observable source.
I guess you could create a middleware observer for GDB II - one that checks the local data before doing the update and updates the client and/or calls the server to update the server data. This way you would have a way to handle conflicts.
I seem to remember writing some code that supported "deletedAt"/"updatedAt" for some types of conflict handling, but again, a conflict handler should be custom for the most part (opening the door for reusable conflict handlers might be useful).
Knowing when data is removed can be especially tricky if you don't "soft" delete via something like a "deletedAt" entity.
The "rc" branch is currently grounddb-caching-2016, version "2.0.0-rc.4".
I was thinking about something like:
(mind it's not tested, written directly in SO)
// Create the grounded collection
foo = new Ground.Collection('test');
// Make it observe a source (it's aware of createdAt/updatedAt and
// removedAt entities)
foo.observeSource(bar.find());
bar.find() returns a cursor with an observe function; our middleware should do the same. Let's create a createMiddleware helper for it:
function createMiddleware(source, middleware) {
  const cursor = (typeof (source || {}).observe === 'function') ? source : source.find();
  return {
    observe: function(observerHandle) {
      // Note: Meteor cursors name the update callback `changed`, not `updated`
      const sourceObserverHandle = cursor.observe({
        added: doc => {
          middleware.added.call(observerHandle, doc);
        },
        changed: (doc, oldDoc) => {
          middleware.updated.call(observerHandle, doc, oldDoc);
        },
        removed: doc => {
          middleware.removed.call(observerHandle, doc);
        },
      });
      // Return stop handle
      return sourceObserverHandle;
    }
  };
}
Usage:
foo = new Ground.Collection('test');
foo.observeSource(createMiddleware(bar.find(), {
  added: function(doc) {
    // just pass it through
    this.added(doc);
  },
  updated: function(doc, oldDoc) {
    const fooDoc = foo.findOne(doc._id);
    // Example of a simple conflict handler:
    if (fooDoc && doc.updatedAt < fooDoc.updatedAt) {
      // Seems like the foo doc is newer? Let's update the server...
      // (we'll just use the regular bar, since that's the Meteor
      // collection and foo is the grounded data)
      bar.update(doc._id, fooDoc);
    } else {
      // pass through
      this.updated(doc, oldDoc);
    }
  },
  removed: function(doc) {
    // again, just pass through for now
    this.removed(doc);
  }
}));
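Continuing the sketch, the "deletedAt" idea from earlier could slot into the same middleware: if deletes are soft (i.e. an update that sets deletedAt rather than an actual removal), the updated handler can weigh that timestamp against local edits. This is my own untested variation, with the field names assumed:
updated: function(doc, oldDoc) {
  const fooDoc = foo.findOne(doc._id);
  // A "soft" delete arrives as an update carrying deletedAt, so it can be
  // compared against the locally grounded doc's updatedAt:
  if (doc.deletedAt && fooDoc && fooDoc.updatedAt > doc.deletedAt) {
    // The local edit is newer than the delete - push the local doc back
    bar.update(doc._id, fooDoc);
  } else {
    this.updated(doc, oldDoc);
  }
}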
