Ngrx: Large amounts of data cause app to slow down - ngrx

I have an app that loads some images with metadata. A single folder can be quite large (~100-142Mb) once loaded into memory. Previously we were using a plain old JavaScript object to manage the state of the app and everything worked fine, but I'd like to gain the benefits of ngrx's state management.
I discovered ngrx and it seemed like a smarter option for state management. However, when I add these items to the state, the app hangs while adding the images to the store, and performance then degrades when accessing individual (and unrelated) flags from the store (e.g. a UI flag like "drawer is open").
1) Here "directories" is a Map<string, Directory> object that is saved to the Store (~100-120Mb). Directory is a complex object with many nested values. Once the images are loaded and added to the store, it a) hangs and then b) everything else (e.g. changing a UI flag) slows down.
return {
  ...state,
  loadedDirectories: directories,
  filesLoading: false,
};
2) The directories are then later accessed from the store.
this.store
  .pipe(select(fromReducer.getLoadedDirectories))
  .subscribe(loadedDirectories => {
    this._directoryData = loadedDirectories;
  });
The selector looks like this:
export interface ImageLoaderState {
  loadedDirectories: Map<string, Directory>;
  filesLoading: boolean;
  errorMessage: string;
}

export class AppState {
  imageLoader: fromImageLoader.ImageLoaderState;
}

export const combinedReducers = {
  imageLoader: fromImageLoader.imageLoaderReducer,
  // ... more reducers here ...
};

// Select image loader state.
export const selectImageLoaderState = (state: AppState) => state.imageLoader;

export const getLoadedDirectories = createSelector(
  selectImageLoaderState,
  (state: fromImageLoader.ImageLoaderState) => state.loadedDirectories
);
Using Angular 8 and the following versions of ngrx:
"#ngrx/effects": "^8.4.0",
"#ngrx/store": "^8.4.0",
"#ngrx/store-devtools": "^8.4.0",
Are there any better practices, e.g. adding each image to the store one at a time?

The ngrx store is meant for application state and is not well suited to use as a document store.
Please see:
https://github.com/btroncone/ngrx-store-localstorage/issues/39
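As a rough illustration of that split (my own sketch, not from the linked issue; ImageCacheService and loadedDirectoryKeys are made-up names, while Directory is the type from the question), the heavy objects could live in a plain service while the store keeps only lightweight keys and flags:

import { Injectable } from '@angular/core';

// Hypothetical sketch: the heavy Directory objects live in a plain
// service, outside the ngrx store.
@Injectable({ providedIn: 'root' })
export class ImageCacheService {
  private cache = new Map<string, Directory>();

  set(key: string, dir: Directory): void {
    this.cache.set(key, dir);
  }

  get(key: string): Directory | undefined {
    return this.cache.get(key);
  }
}

// The ngrx state then tracks only keys and flags:
export interface ImageLoaderState {
  loadedDirectoryKeys: string[];
  filesLoading: boolean;
  errorMessage: string;
}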

One issue I see is how you create your new state. You mention that you do the following:
return {
  ...state,
  loadedDirectories: directories,
  filesLoading: false,
};
It looks like you first build an object with a huge number of key-value pairs, then repeat that work each time you set the loadedDirectories property again. I'm uncertain about the performance cost of the spread operator on very large objects, so I would suggest you focus on creating this property once. This might help:
Does spread operator affect performance?
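For context (my own illustration, not from the linked question): the spread copies only the top-level references, so its cost grows with the number of top-level keys rather than with the byte size of the values behind them.

const state = {
  loadedDirectories: new Map<string, Directory>(), // may reference ~100Mb of data
  filesLoading: true,
  errorMessage: '',
};

// Shallow copy: three property slots are copied, not the Map's contents.
const next = { ...state, filesLoading: false };

console.log(next.loadedDirectories === state.loadedDirectories); // true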

Related

Proper way of using Redux and RTKQ in NextJs with code-splitting

This is a topic that's been discussed a lot in GitHub issues, and by now I've noticed two main opinions: it's not possible, or it should not be done at all.
The argument on both sides is that Redux is not meant for this and that the .replaceReducer function exists only for hot-reloading purposes (even though the Redux docs themselves mention it as a possibility for code-splitting).
The goal
Anyway, what I would like to achieve (ideally) is a system that only sends the relevant slices and the relevant Redux code for a specific route in NextJs. And (even more ideally) when navigating between pages, the store should just get extended, not re-created.
My initial approach
My first idea was to implement a recipe from the link above, attaching and exposing the injectReducer function onto my store during the store setup:
const store = configureStore({
  reducer: {
    globals,
    [rtkqApi.reducerPath]: rtkqApi.reducer
  },
  middleware: (getDefaultMiddleware) => getDefaultMiddleware().concat(rtkqApi.middleware)
});

store.dynamicReducers = {};

store.injectDynamicReducer = (name, reducer) => {
  if (Object.keys(store.dynamicReducers).includes(name)) {
    return;
  }

  store.dynamicReducers[name] = reducer;
  store.replaceReducer(
    combineReducers({
      globals,
      [rtkqApi.reducerPath]: rtkqApi.reducer,
      ...store.dynamicReducers
    })
  );
};

const makeStore = () => store;

export const wrapper = createWrapper(makeStore);

export const injectReducer = (sliceName, reducer) => store.injectDynamicReducer(sliceName, reducer);
So basically every page would have a globalsSlice, containing the user info and some other global data, and a Redux Toolkit Query API slice (which would then be code-split using the RTKQ injectEndpoints functionality).
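For reference, the endpoint side of that split looks roughly like this (a sketch against the standard RTKQ injectEndpoints API; getSomeData and its URL are placeholders):

const somePageApi = rtkqApi.injectEndpoints({
  endpoints: (builder) => ({
    // Placeholder endpoint; only pages importing this file pull it in.
    getSomeData: builder.query({
      query: (id) => `/some-data/${id}`,
    }),
  }),
});

export const { useGetSomeDataQuery } = somePageApi;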
With this setup, each page that wants to inject its own custom slice (reducer) would do something like this:
const SomePage = () => {
  const someData = useSelector(somePageSliceSelectors.selectSomeData);

  return (
    <Fragment>
      <Head>
        <title>Some Page</title>
      </Head>
    </Fragment>
  );
};

export default SomePage;

injectReducer('somePageSlice', somePageReducer);
export const getServerSideProps = wrapper.getServerSideProps((store) => async (context) => {
  // Whatever necessary logic we need
});
Initially this seemed to work fine, but then I realized that next-redux-wrapper calls the makeStore factory on every request while I'm manipulating and mutating a global store object, so there has to be something wrong with this, i.e. a race condition that I haven't been able to trigger in testing. Another problem occurs when using Redux Toolkit Query. For example, if I need to get a cookie from the original request (the one that NextJs receives) and then re-send it to another API endpoint handled by Redux Toolkit Query, I need to extract the cookie from the request context, to which I don't have access unless I do something like this:
export const makeStore = (ctx) => {
  return configureStore({
    reducer: ...,
    middleware: (getDefaultMiddleware) =>
      getDefaultMiddleware({
        thunk: {
          extraArgument: ctx,
        },
      }).concat(...),
  });
};
which further implies that I should definitely not be mutating the global store object.
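(For illustration, a per-request version of the setup above could look like the sketch below; it is untested and simply restates my earlier snippets with the store and the injection helper created inside the factory.)

const makeStore = (ctx) => {
  const store = configureStore({
    reducer: {
      globals,
      [rtkqApi.reducerPath]: rtkqApi.reducer,
    },
    middleware: (getDefaultMiddleware) =>
      getDefaultMiddleware({ thunk: { extraArgument: ctx } }).concat(rtkqApi.middleware),
  });

  // Per-request bookkeeping instead of module-level mutation.
  store.dynamicReducers = {};
  store.injectDynamicReducer = (name, reducer) => {
    if (store.dynamicReducers[name]) {
      return;
    }
    store.dynamicReducers[name] = reducer;
    store.replaceReducer(
      combineReducers({
        globals,
        [rtkqApi.reducerPath]: rtkqApi.reducer,
        ...store.dynamicReducers,
      })
    );
  };

  return store;
};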
So then I thought: alright, instead of manipulating the global store, I could try doing it in GSSP:
export const getServerSideProps = wrapper.getServerSideProps((store) => async (context) => {
  store.injectDynamicReducer('somePageSlice', somePageReducer);
});
But no luck here: the slice does not get loaded and the state does not get constructed. My guess is that the Provider in _app gets rendered before this runs, but I'm not sure.
In conclusion, I'd like to know whether anyone has tried and succeeded in implementing Redux code splitting with RTK, RTKQ and NextJs. I would also like to ask an additional question: is it even necessary? That is, if I didn't code-split at all and sent all slices on every request, how much of a performance impact would that have? Also, since I'm not sure exactly how the NextJs bundler works and how code chunking is done: if a certain page receives a slice it doesn't use at all, will it only receive its initial state, or all of its logic (all the selectors, reducers and actions)? If it's only the initial state, then maybe this isn't so bad, since initial states are just empty objects.
I hope I've presented the problem clearly enough, as it is a very complex problem, but feel free to ask follow up questions if something doesn't make sense.
Thanks in advance.

Memory Leak with Redux Toolkit's createListenerMiddleware

I use Redux Toolkit, and in particular the new listener API, to perform tasks similar to what I could do with Redux-Saga.
Unfortunately, for a few days now I've been stuck with a memory leak whose cause I can't find.
I have reproduced a minimal example of the code that produces this memory leak, link to the example : https://github.com/MrSquaare/rtk-memory-leak
To observe this memory leak:
I use Chromium's DevTools memory tool
I trigger a garbage collection
I take a heap memory snapshot
I dispatch entity/load (via the UI button)
I take several heap memory snapshots every 2-3 seconds
Using the comparison tool, I notice the array allocation size growing without bound
After dispatching entity/unload and taking another heap snapshot, the allocations disappear...
Has anyone observed similar behavior? Or does anyone have an idea of the cause? Thanks!
EDIT 1:
I made an example with only the listener middleware (only-middleware branch) and compared different approaches:
With forkApi.pause: significant leaks, especially of the generated entities
Without forkApi.pause: I call api.dispatch directly; no more leaks of the generated entities, some leaks of other kinds, but maybe those are normal (I am not qualified enough to say)
Without api.dispatch: I call the entity-generating function directly; same result as with api.dispatch
It seems the leak is related to forkApi.pause, but again I am not qualified enough to know the real cause...
It's probably the promises.forEach. Every 1000ms you create a bunch of new promises and schedule work for them, but you never wait for the last batch of those promises to finish, so they accumulate.
Replace the promises.forEach with an await Promise.all(promises.map(...)) and see what that does.
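A minimal sketch of that change (hypothetical, since the original loop from the linked repo isn't reproduced here; handleEntity is a placeholder):

// Before: each tick fires a batch of promises without awaiting them,
// so unsettled batches pile up.
promises.forEach((promise) => promise.then(handleEntity));

// After: the whole batch is awaited, so the next tick only starts
// once the previous batch has settled.
await Promise.all(promises.map((promise) => promise.then(handleEntity)));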
After reading your solution more closely, I believe you can do this with fewer problems by sticking more to the reducer and less to the listenerMiddleware.
I would suggest these changes:
export const entitySlice = createSlice({
  name: "entity",
  initialState: entityAdapter.getInitialState({ acceptingEntities: false }),
  reducers: {
    upsertOne: (state, action) => {
      entityAdapter.upsertOne(state, action.payload);
    },
    removeAll: (state) => {
      entityAdapter.removeAll(state);
    },
    load(state) { state.acceptingEntities = true },
    unload(state) { state.acceptingEntities = false },
  },
  extraReducers: builder => {
    builder.addCase(getEntity.fulfilled, (state, action) => {
      if (!state.acceptingEntities) return;
      // assumes the thunk resolves with the full entity ({ id, data, ... })
      // and selectors created via entityAdapter.getSelectors(), which take
      // the slice state directly
      const prevEntity = entitySelectors.selectById(state, action.payload.id);
      entityAdapter.upsertOne(
        state,
        prevEntity
          ? { ...action.payload, data: mergeEntityData(prevEntity.data, action.payload.data) }
          : action.payload
      );
    });
  }
});
and
entityMiddleware.startListening({
  actionCreator: entitySlice.actions.load,
  effect: async (action, listenerApi) => {
    const task = listenerApi.fork(async (forkApi) => {
      while (!forkApi.signal.aborted) {
        for (const id of entityIds) {
          listenerApi.dispatch(getEntity(id));
        }
        // forkApi.delay rejects when the task is cancelled
        await forkApi.delay(1000);
      }
    });
    await listenerApi.condition(entitySlice.actions.unload.match);
    task.cancel();
  },
});
Generally:
Logic like calculating a new value belongs in the reducer, not outside of it. Doing it outside always risks race conditions, and in the reducer you always have all the info available (also, there's no risk of hogging memory by holding stale value references).
Dispatching another action directly after a thunk only adds more workload: after every reducer run, every selector reruns and your UI might rerender. Just go for an extraReducer from the start.
I just added a boolean acceptingEntities to indicate whether updates should currently take place.
This massively reduces the complexity of your listener.
It may be related to the use of Promise.race(): https://github.com/nodejs/node/issues/17469. I filed https://github.com/reduxjs/redux-toolkit/issues/3020 for us to look at this further.

How to properly handle simultaneous persistence actions in Redux?

A React application using Redux. I have a combined reducer consisting of appStateReducer and contestReducer. Each of these two takes care of some part of the application data.
When an action is performed, I want not only the respective state to change, but also to persist the new state, so that if the user reloads the application page in the browser, the state is preserved.
My idea is to add a third reducer to take care only of the save and load actions (for each of the two sub-states separately).
Save and load will use IndexedDB, through the localbase package. All of the DB actions (add, get, update, delete) appear to be synchronous, i.e. there seems to be no real need for asynchronous actions. UPDATE: this is wrong; it is asynchronous, some basic examples just ignore that.
I am not sure how to handle the problem properly.
I will need a database connection object, a singleton initialized once after the page is loaded, which should be shared by all save/load actions regardless of which part of the state is to be stored or loaded. That suggests a separate reducer working only with the DB object, but then the DB reducer would have to have access to all the other sub-states, which is normally not the case in Redux.
Or I could implement save and load actions in each reducer separately; not a big deal, actually. But how do I make the global DB object accessible to the reducers?
It is a React application written in TypeScript, and all components are implemented as classes.
You already have access to all the data if you are using middleware. Example:
export const requestPost = (id) => (dispatch, getState) => {
  // You can keep a bank for posts and check whether the data already exists
  const postState = getState().bank.posts.data;
  const found = postState?.find((post) => post.id === id);

  if (found) {
    dispatch({ type: SUCCESS.POST, data: found });
  } else {
    dispatch({ type: REQUEST.POST });
    API.get(`/post/v2?id=${id}`)
      .then((res) => dispatch({ type: SUCCESS.POST, data: res.data[0] }))
      .catch((err) => errorHandler(err, FAILURE.POST));
  }
};
Just make a reducer (or middleware) for saving data to the DB or somewhere else, and read it back at startup.
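A minimal sketch of that idea (my own illustration; it assumes localbase's collection().doc().set()/get() API, and the 'state'/'root' names are placeholders):

import Localbase from 'localbase';

// Singleton connection, created once when the module loads.
const db = new Localbase('app');

// Middleware that persists the whole state after every action.
export const persistenceMiddleware = (store) => (next) => (action) => {
  const result = next(action);
  // localbase operations return promises; fire-and-forget here.
  db.collection('state').doc('root').set(store.getState());
  return result;
};

// At startup, read the saved state back and pass it to the store
// as the preloaded state:
// db.collection('state').doc('root').get().then((saved) => { /* ... */ });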

async reducer with react-router seems to have multiple stores?

I have an async reducer structure. On any given page, I inject the page's reducer with:
export const injectReducer = (store, { key, reducer }) => {
  // store.asyncReducers already initialized to {}
  // makeRootReducer just returns combineReducers({...store.asyncReducers})
  store.asyncReducers[key] = reducer;
  store.replaceReducer(makeRootReducer(store.asyncReducers));
};
I use react-router 3 with plain route definitions on every page, and require.ensure for async handling of routes within getComponent of the plain route definition:
export default (store) => ({
  path: 'counter',
  getComponent(nextState, cb) {
    require.ensure([], (require) => {
      const Counter = require('./containers/Counter').default;
      const reducer = require('./modules/counter').default;

      injectReducer(store, { key: 'counter', reducer });
      cb(null, Counter);
    }, 'Counter');
  }
});
My problem is that if I use dispatch followed by browserHistory.push, I would expect the state to be updated before going to the new page. What happens, however, is that there appear to be two separate stores. For example, when navigating between pages, the value of the counter from the previous page seems to be preserved despite being on the same key. What is going on here???
Sample repo of my problem: you can git clone, npm install, npm start and go to localhost:3000/counter. If you click Double (Async), it doubles the counter and then goes to /otherPage. If you then click Half (Async), it brings you back to /counter. However, the value of the counter is the value from doubling, not from halving. Also, importantly, pulling up Redux DevTools and navigating between the pages seems to show the counter changing for ALL data before, almost as if the entire store was replaced, yet the prior values are preserved.
After much investigation, I have discovered that in development Redux DevTools recomputes the entire state history on a new page. Injecting a new reducer whose initialState differs on the new page therefore yields two different results. There are not two stores or any caches; it is just Redux DevTools recomputing the entire state history with different initialStates.
Hence, in my scenario, dispatch was updating the state on my first page. Then browserHistory pushes to the second page, which recomputes the entire state history. However, the second page's reducer set is missing the action handler from the first page, so when the history is recomputed, the state doesn't change for that last action.
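To illustrate (my own minimal example, not code from the sample repo): during a DevTools replay, a reducer without a handler for a past action simply leaves its slice unchanged.

// The counter module as loaded on the second page:
const counter = (state = 0, action) => {
  switch (action.type) {
    case 'HALF':
      return state / 2;
    // No 'DOUBLE' case here, so replaying the first page's 'DOUBLE'
    // action recomputes to an unchanged counter value.
    default:
      return state;
  }
};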

Meteor GroundDB granularity for offline/online syncing

Let's say two users make changes to the same document while offline, but in different sections of the document. If user 2 comes back online after user 1, will the changes made by user 1 be lost?
In my database, each row contains a JS object, and one property of this object is an array. This array is bound to a series of check-boxes in the interface. What I would like is that if two users change those check-boxes, the latest change is kept for each check-box individually, based on the time when the change was made, not the time when the syncing occurred. Is GroundDB the appropriate tool to achieve this? Is there any means of adding an event handler in which I can put logic that is triggered when syncing occurs and that takes care of the merging?
The short answer is "yes": none of the Ground DB versions have conflict resolution, since that logic is custom and depends on the desired conflict-resolution behaviour, e.g. whether you want to automate it or involve the user.
The old Ground DB simply relied on Meteor's conflict resolution (latest data to the server wins). I'm guessing you can see some issues with that, depending on the order in which clients come back online.
Ground DB II doesn't have method resume; it's more or less just a way to cache data offline. It observes an observable source.
I guess you could create a middleware observer for GDB II, one that checks the local data before doing the update and then updates the client and/or calls the server to update the server data. That would give you a way to handle conflicts.
I seem to remember writing some code that supported "deletedAt"/"updatedAt" for some types of conflict handling, but again, a conflict handler should be custom for the most part. (Opening the door to reusable conflict handlers might be useful.)
Knowing when data was removed, in particular, can be tricky if you don't "soft" delete via something like a "deletedAt" entity.
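For instance (my own tiny sketch, using a plain Meteor-style update; the field name is just an example), a soft delete could look like:

// Mark the document as deleted instead of removing it, so offline
// peers can later compare timestamps during conflict handling.
bar.update(doc._id, { $set: { deletedAt: new Date() } });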
The "rc" branch is currently grounddb-caching-2016 version "2.0.0-rc.4",
I was thinking about something like this (mind, it's not tested, written directly in SO):
// Create the grounded collection
foo = new Ground.Collection('test');

// Make it observe a source (it's aware of createdAt/updatedAt and
// removedAt entities)
foo.observeSource(bar.find());
bar.find() returns a cursor with an observe function; our middleware should do the same. Let's create a createMiddleWare helper for it:
function createMiddleWare(source, middleware) {
  const cursor = (typeof (source || {}).observe === 'function') ? source : source.find();

  return {
    observe: function(observerHandle) {
      const sourceObserverHandle = cursor.observe({
        added: doc => {
          middleware.added.call(observerHandle, doc);
        },
        updated: (doc, oldDoc) => {
          middleware.updated.call(observerHandle, doc, oldDoc);
        },
        removed: doc => {
          middleware.removed.call(observerHandle, doc);
        },
      });

      // Return stop handle
      return sourceObserverHandle;
    }
  };
}
Usage:
foo = new Ground.Collection('test');

foo.observeSource(createMiddleWare(bar.find(), {
  added: function(doc) {
    // just pass it through
    this.added(doc);
  },
  updated: function(doc, oldDoc) {
    const fooDoc = foo.findOne(doc._id);

    // Example of a simple conflict handler:
    if (fooDoc && doc.updatedAt < fooDoc.updatedAt) {
      // Seems like the foo doc is newer? Let's update the server...
      // (we'll just use the regular bar, since that's the Meteor
      // collection and foo is the grounded data)
      bar.update(doc._id, fooDoc);
    } else {
      // pass through
      this.updated(doc, oldDoc);
    }
  },
  removed: function(doc) {
    // again just pass through for now
    this.removed(doc);
  }
}));
