I have been looking into the Free monad because I have read that a use case for it is to do logging in a side-effect free way.
I am attempting to do this in JavaScript with the Monet library.
However, the documentation is lacking and I do not understand the Free monad well enough to figure it out on my own (I have been trying).
I have looked into the Haskell implementation, but I do not read Haskell well and the methods do not appear to be named the same, so I am having trouble.
Any chance someone can give me a simple example of how the Free monad works in either JavaScript or pseudo code that matches the above library?
I feel like if I can see a complete example, I will understand better.
Here are the unit tests from the Monet library: https://github.com/monet/monet.js/blob/develop/test/free-spec.js
But they don't help me much (because they are tests).
Doing logging in a side-effect free way is usually done with the Writer Monad:
const compose = (f, g) => value => f(g(value));
const Writer = ({ log = [], value }) => ({
  flatMap: func => {
    const mapped = func(value);
    return Writer({
      log: log.concat(mapped.log),
      value: mapped.value
    });
  },
  log,
  value
});
Writer.of = value => Writer({ value });
Writer.log = entry => value => Writer({ log: [entry], value });
const { log, value } = Writer.of(-42.5)
  .flatMap(compose(Writer.of, Math.abs))
  .flatMap(Writer.log(`abs`))
  .flatMap(compose(Writer.of, Math.floor))
  .flatMap(Writer.log(`floor`));

console.log({ log, value }); // { log: ['abs', 'floor'], value: 42 }
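Since the question asked about the Free monad specifically, here is a hand-rolled minimal sketch of the idea (this is not Monet's API; the `Pure`/`Suspend` names and the `run` interpreter are my own illustration). The point is that the program is built as pure data and the side effect only happens when an interpreter runs it:

```javascript
// Free is either Pure(value) or Suspend(functor), where the functor
// carries an instruction plus a continuation.
const Pure = value => ({ type: 'Pure', value });
const Suspend = functor => ({ type: 'Suspend', functor });

const flatMap = (free, f) =>
  free.type === 'Pure'
    ? f(free.value)
    : Suspend({ ...free.functor, next: x => flatMap(free.functor.next(x), f) });

// A tiny logging "instruction set":
const log = message => Suspend({ tag: 'Log', message, next: () => Pure(undefined) });

// A program built purely from data -- no side effects happen here:
const program = flatMap(log('first'), () => flatMap(log('second'), () => Pure(42)));

// An interpreter that actually performs the effects:
const run = free => {
  while (free.type === 'Suspend') {
    console.log(free.functor.message); // the only side effect, isolated here
    free = free.functor.next();
  }
  return free.value;
};

run(program); // logs "first", "second"; returns 42
```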
This is a topic that's been discussed a lot in GitHub issues, and by now I've noticed two main opinions: it's not possible, or it should not be done at all.
The argument for both sides is that redux is not meant for it, that the .replaceReducer function is only meant for the purposes of hot-reloading (even though redux itself mentions it as a possibility for code-splitting).
The goal
Anyway, what I would like to achieve (ideally) is a system that only sends the relevant slices and relevant redux code for a specific route in NextJs. And (even more ideally) when navigating between pages the store should just get extended and not re-created.
My initial approach
My first idea was to implement a recipe from the link above, attaching and exposing the injectReducer function onto my store during the store setup:
const store = configureStore({
  reducer: {
    globals,
    [rtkqApi.reducerPath]: rtkqApi.reducer
  },
  middleware: (getDefaultMiddleware) => getDefaultMiddleware().concat(rtkqApi.middleware)
});

store.dynamicReducers = {};

store.injectDynamicReducer = (name, reducer) => {
  if (Object.keys(store.dynamicReducers).includes(name)) {
    return;
  }
  store.dynamicReducers[name] = reducer;
  store.replaceReducer(
    combineReducers({
      globals,
      [rtkqApi.reducerPath]: rtkqApi.reducer,
      ...store.dynamicReducers
    })
  );
};

const makeStore = () => store;

export const wrapper = createWrapper(makeStore);

export const injectReducer = (sliceName, reducer) => store.injectDynamicReducer(sliceName, reducer);
So basically every page would have a globalsSlice, containing the user info and some other global data, and Redux Toolkit Query API slice (which would then be code-split using RTKQ injectEndpoints functionality).
With this setup, each page that wants to inject its own custom slice (reducer) would do something like this:
const SomePage = () => {
  const someData = useSelector(somePageSliceSelectors.selectSomeData);

  return (
    <Fragment>
      <Head>
        <title>Some Page</title>
      </Head>
    </Fragment>
  );
};

export default SomePage;

injectReducer('somePageSlice', somePageReducer);
export const getServerSideProps = wrapper.getServerSideProps((store) => async (context) => {
// Whatever necessary logic we need
});
Initially this seemed to work fine, but then I realized that next-redux-wrapper works by calling the makeStore factory on every request, while I'm manipulating and mutating a global store object. There has to be something wrong with this, i.e. a race condition that I haven't been able to trigger by testing.

Another problem occurs when using Redux Toolkit Query. For example, if I need to get a cookie from the original request (the one that NextJs receives) and then re-send it to another API endpoint that is handled by Redux Toolkit Query, I would need to extract the cookie from the request context, to which I don't have access unless I do something like this:
export const makeStore = (ctx) => {
  return configureStore({
    reducer: ...,
    middleware: (getDefaultMiddleware) =>
      getDefaultMiddleware({
        thunk: {
          extraArgument: ctx,
        },
      }).concat(...),
  });
};
which further implies that I should definitely not be mutating the global store object.
So then I thought alright, instead of manipulating the global store I could try doing it in GSSP:
export const getServerSideProps = wrapper.getServerSideProps((store) => async (context) => {
  store.injectDynamicReducer('somePageSlice', somePageReducer);
});
But no luck here: the slice does not get loaded and the state does not get constructed. My guess is that the Provider in _app gets rendered before this, but I'm not sure.
In conclusion, I'd like to know whether anyone has tried and succeeded in implementing redux code splitting using RTK, RTKQ and NextJs. I would also like to ask an additional question: is it necessary? What I mean is, if I were not to code-split at all and sent all slices on every request, how much of a performance impact would that have? Also, since I'm not sure exactly how the NextJs bundler works and how code chunking is done: if a certain page receives a slice it doesn't use at all, will it only receive its initial state, or all of its logic (all the selectors, reducers and actions)? If it only receives the initial state, then maybe this isn't so bad, since initial states are just empty objects.
I hope I've presented the problem clearly enough, as it is a very complex problem, but feel free to ask follow up questions if something doesn't make sense.
Thanks in advance.
I use Redux Toolkit, and in particular the new listener api, to perform tasks similar to what I could do with Redux-Saga.
Unfortunately, for the past few days I've been stuck with a memory leak and I can't find the cause.
I have reproduced a minimal example of the code that produces this memory leak, link to the example : https://github.com/MrSquaare/rtk-memory-leak
To observe this memory leak:

1. I use Chromium and the DevTools memory tool
2. I trigger a garbage collection
3. I take a heap snapshot
4. I dispatch entity/load (via the UI button)
5. I take several heap snapshots every 2-3 seconds
6. Using the comparison tool, I notice that the array allocation size grows indefinitely

And after dispatching entity/unload and then taking another heap snapshot, we can observe that the allocations disappear...
Has anyone observed similar behavior? Or does anyone have an idea of the cause? Thanks!
EDIT 1:
I made an example with only the listener middleware (only-middleware branch) and compared different approaches:

- With forkApi.pause: significant leaks, especially of the generated entities
- Without forkApi.pause: I use api.dispatch directly; no more leaks of the generated entities, some leaks of other kinds, but maybe those are normal (I am not qualified enough to say)
- Without api.dispatch: I call the function that generates an entity directly; same result as with api.dispatch

It seems that the leak is related to forkApi.pause, but again I am not qualified enough to know the real cause...
It's probably the promises.forEach. Every 1000ms, you create a bunch of new promises and schedule things for them. You never wait for the last batch of those promises to finish, so they accumulate.
Replace the promises.forEach with await Promise.all(promises.map(...)) and see what that does.
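A hedged sketch of the difference (the real effect lives in the linked repo; the names here are illustrative, not taken from it):

```javascript
// Simulated per-tick batch of async work:
const work = ids => ids.map(id => Promise.resolve(id * 2));

// Leaky pattern: fire-and-forget. The surrounding loop never waits for the
// batch, so pending promises can pile up tick after tick.
const leaky = ids => { work(ids).forEach(p => p.then(() => {})); };

// Fixed pattern: the caller awaits the whole batch before the next tick.
const fixed = async ids => Promise.all(work(ids));

fixed([1, 2, 3]).then(result => {
  // result is [2, 4, 6]; only now should the next delay tick be scheduled
});
```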
After reading your solution more closely, I believe you can do this with fewer problems by sticking more to the reducer and less to the listenerMiddleware.
I would suggest these changes:
export const entitySlice = createSlice({
  name: "entity",
  initialState: entityAdapter.getInitialState({ acceptingEntities: false }),
  reducers: {
    upsertOne: (state, action) => {
      entityAdapter.upsertOne(state, action.payload);
    },
    removeAll: (state) => {
      entityAdapter.removeAll(state);
    },
    load(state) { state.acceptingEntities = true },
    unload(state) { state.acceptingEntities = false },
  },
  extraReducers: builder => {
    builder.addCase(getEntity.fulfilled, (state, action) => {
      if (!state.acceptingEntities) return;
      const id = action.meta.arg; // the id the getEntity thunk was called with
      const prevEntity = state.entities[id];
      entityAdapter.upsertOne(state,
        prevEntity
          ? mergeEntityData(prevEntity.data, action.payload.data)
          : action.payload.data
      );
    });
  }
});
and
entityMiddleware.startListening({
  actionCreator: entitySlice.actions.load,
  effect: async (action, listenerApi) => {
    const task = listenerApi.fork(async (forkApi) => {
      while (!forkApi.signal.aborted) {
        for (const id of entityIds) {
          listenerApi.dispatch(getEntity(id));
        }
        await forkApi.delay(1000);
      }
    });

    await listenerApi.condition(entitySlice.actions.unload.match);
    task.cancel();
  },
});
Generally:

- Logic like calculating a new value belongs in the reducer, not outside. Doing it outside always carries the risk of race conditions, while in the reducer you always have all the info available (also, no risk of hogging memory by holding stale value references).
- Dispatching another action directly after a thunk only adds more workload: after every reducer run, every selector reruns and your UI might rerender. Just go for an extraReducer from the start.
- I just added a boolean acceptingEntities to indicate whether updates should currently take place or not.
- This massively reduces the complexity of your listener.
It may be related to use of Promise.race(): https://github.com/nodejs/node/issues/17469 . Filed https://github.com/reduxjs/redux-toolkit/issues/3020 for us to look at this further.
I'm bad at introductions, so let's just cut right to the chase:
I am getting an observable from a redux store by using ngRedux.select. Immediately after I use a pipe with distinctUntilChanged with a custom compare function. However, inside this function the old value is always null, no matter how often it is being called.
Also, when I put the distinctUntilChanged at a later point in the pipe, after some parsing (which was the original plan), it will again always return the same value: the first one that came through the observable. Somehow I feel I am completely misunderstanding this operator, but I am using it the way the documentation suggests.
Here's the complete code I am using:
this.ngRedux.select((store: RootDomain) => getValue(() => {
  const localContext = store.core.context[contextUuid];
  const globalContext = store.core.context[GLOBAL_CONTEXT_KEY];
  const context = Object.assign({}, globalContext, localContext);

  return context[key];
})).pipe(
  distinctUntilChanged((x, y) => {
    console.log('OLD: ', x);
    console.log('NEW: ', y);

    return isEqual(x, y);
  })
);
I also tried simply returning true all the time to see if anything would change (of course it didn't). x will always log as null here. Please help me understand this operator!
I'm using the Nodejs library for talking to Jira called jira-connector. I can get all of the boards on my jira instance by calling
jira.board.getAllBoards({ type: "scrum" })
  .then(boards => { /* ...not important stuff... */ });
the return set looks something like the following:
{
  maxResults: 50,
  startAt: 0,
  isLast: false,
  values: [ { id: ... } ]
}
Then, while isLast === false, I keep calling like so:
jira.board.getAllBoards({ type: "scrum", startAt: XXX })
until isLast is true. Then I can organize all of my returns from the promises and be done with it.
I'm trying to reason out how I can get all of the data across pages with Ramda; I have a feeling it's possible, I just can't seem to sort out the how of it.
Any help? Is this possible using Ramda?
Here's my Rx attempt to make this better:
const allBoards = [];
const pagedCalls = new Subject();

pagedCalls.subscribe(value => {
  jira.board.getAllBoards({ type: "scrum", startAt: value })
    .then(boards => {
      console.log('calling: ' + value);
      allBoards.push(boards.values);
      if (boards.isLast) {
        pagedCalls.complete();
      } else {
        pagedCalls.next(boards.startAt + 50);
      }
    });
});

pagedCalls.next(0);
Seems pretty terrible. Here's the simplest solution I have so far with a do/while loop:
let returnResult = [];
let result;
let startAt = -50;

do {
  result = await jira.board.getAllBoards({ type: "scrum", startAt: startAt += 50 });
  returnResult.push(result.values); // there's an array of results under the values prop
} while (!result.isLast);
Many of the interactions with Jira use this model and I am trying to avoid writing this kind of loop every time I make a call.
I had to do something similar today, calling the Gitlab API repeatedly until I had retrieved the entire folder/file structure of the project. I did it with a recursive call inside a .then, and it seems to work all right. I have not tried to convert the code to handle your case.
Here's what I wrote, if it will help:
const getAll = (project, perPage = 10, page = 1, res = []) =>
  fetch(`https://gitlab.com/api/v4/projects/${encodeURIComponent(project)}/repository/tree?recursive=true&per_page=${perPage}&page=${page}`)
    .then(resp => resp.json())
    .then(xs => xs.length < perPage
      ? res.concat(xs)
      : getAll(project, perPage, page + 1, res.concat(xs))
    );

getAll('gitlab-examples/nodejs')
  .then(console.log)
  .catch(console.warn);
The technique is pretty simple: Our function accepts whatever parameters are necessary to be able to fetch a particular page and an additional one to hold the results, defaulting it to an empty array. We make the asynchronous call to fetch the page, and in the then, we use the result to see if we need to make another call. If we do, we call the function again, passing in the other parameters needed, the incremented page number, and the merge of the current results and the ones just received. If we don't need to make another call, then we just return that merged list.
Here, the repository contains 21 files and folders. Calling for ten at a time, we make three fetches and when the third one is complete, we resolve our returned Promise with that list of 21 items.
This recursive method definitely feels more functional than your versions above. There is no assignment except for the parameter defaulting, and nothing is mutated along the way.
I think it should be relatively easy to adapt this to your needs.
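For what it's worth, here is a hedged adaptation of the same recursion to the response shape shown in the question (isLast / maxResults / values). `getBoardsPage` below stands in for jira.board.getAllBoards so the sketch stays self-contained; it is an assumption, not the library's API:

```javascript
// Pages through any fetcher returning { isLast, maxResults, values },
// accumulating the values of every page into one array.
const getAllPages = (getBoardsPage) => {
  const go = (startAt, res) =>
    getBoardsPage({ type: 'scrum', startAt })
      .then(page => page.isLast
        ? res.concat(page.values)
        : go(startAt + page.maxResults, res.concat(page.values)));
  return go(0, []);
};

// With the real client this would be called as:
// getAllPages(opts => jira.board.getAllBoards(opts)).then(allBoards => ...);
```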
Here is a way to get all the boards using rubico:
import { pipe, fork, switchCase, get } from 'rubico'
const getAllBoards = boards => pipe([
fork({
type: () => 'scrum',
startAt: get('startAt'),
}),
jira.board.getAllBoards,
switchCase([
get('isLast'),
response => boards.concat(response.values),
response => getAllBoards(boards.concat(response.values))({
startAt: response.startAt + response.values.length,
})
]),
])
getAllBoards([])({ startAt: 0 }) // => [...boards]
getAllBoards will recursively get more boards and append to boards until isLast is true, then it will return the aggregated boards.
I have this question in my head, not sure if it is valid or not. Below is an example of a redux middleware that console logs the store:
const logger = store => next => action => {
  console.log('dispatching', action);
  let result = next(action);
  console.log('next state', store.getState());
  return result;
};
I can see it's using currying, so redux is calling it as logger(store)(store.dispatch)(action) (correct me if I am wrong). My question is: why do we curry here instead of just
(store, next, action) => { // do the rest }
Thanks for any suggestions. I am slowly moving into functional programming too, trying to get into the rhythm of it.
I think redux wants to provide three hooks to developers.
We can split the call chain logger(store)(next)(action) into
let haveStoreAndDispatch = logger(store);
let haveNext = haveStoreAndDispatch(next);
let haveAction = haveNext(action);
Then we get three hook functions.
In the haveStoreAndDispatch callback, the store has been created.
In the haveNext callback, we have received the next middleware.
In the haveAction callback, we can do something with the previous middleware's result action.
The haveStoreAndDispatch and haveNext callbacks are each called only once, in applyMiddleware(...middlewares)(createStore).
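To see why the first two layers run only once, here is a simplified sketch of what applyMiddleware does (an illustration based on redux's documented behavior; subscription wiring and error checks are omitted), together with a minimal stand-in for createStore:

```javascript
// Minimal stand-in for redux's createStore, just enough to demonstrate.
const createStore = reducer => {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: action => { state = reducer(state, action); return action; },
  };
};

// Simplified applyMiddleware: layers 1 and 2 run once, at setup time.
const applyMiddleware = (...middlewares) => createStore => reducer => {
  const store = createStore(reducer);
  let dispatch = store.dispatch;
  const middlewareAPI = {
    getState: store.getState,
    dispatch: action => dispatch(action),
  };
  // Layer 1: each middleware receives the store API exactly once.
  const chain = middlewares.map(mw => mw(middlewareAPI));
  // Layer 2: each receives its `next` exactly once as the chain is composed.
  dispatch = chain.reduceRight((next, mw) => mw(next), store.dispatch);
  // Layer 3 (action => ...) is the only part that runs on every dispatch.
  return { ...store, dispatch };
};

// The logger from the question plugs in unchanged:
const logger = store => next => action => {
  console.log('dispatching', action);
  let result = next(action);
  console.log('next state', store.getState());
  return result;
};

const counter = (state = 0, action) => action.type === 'inc' ? state + 1 : state;
const store = applyMiddleware(logger)(createStore)(counter);
store.dispatch({ type: 'inc' }); // logs the action, then "next state 1"
```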