I just want to understand: does the usage of .map() inside a reducer make it non-pure? I clearly understand that pure functions are functions that always return a predictable (let's say, roughly, "the same") result. But I think that using .map() inside a reducer makes the result unpredictable, because the ternary operation lets it go down one of two paths, which is the way of a non-pure function. So, please look at my reducer code and tell me whether I'm wrong or not.
Thank you! )
// .. reducer
[SELECT_CDS]: (state, action) => ({
  ...state,
  crimesByType: state.crimesByType.map((crime, i) =>
    i === 0
      ? {
          ...crime,
          additionalInfo: {
            ...crime.additionalInfo,
            CDsLeft: true
          }
        }
      : crime
  )
})
A reducer should be a pure function, meaning that if the reducer is called twice with the same input, the output should also be the same.
In your case the reducer is pure: even though your map() and ternary operator produce different results for different elements of the array, the final result will always be the same as long as the original array and the action are the same.
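For example (a minimal sketch; selectCds is a hypothetical standalone version of the [SELECT_CDS] handler above, and the state shape is assumed for illustration), calling it twice with the same input gives structurally equal output:

// selectCds: the same update logic as the [SELECT_CDS] handler, as a plain function
const selectCds = (state, action) => ({
  ...state,
  crimesByType: state.crimesByType.map((crime, i) =>
    i === 0
      ? { ...crime, additionalInfo: { ...crime.additionalInfo, CDsLeft: true } }
      : crime
  )
});

// Assumed sample input
const state = {
  crimesByType: [
    { name: 'burglary', additionalInfo: {} },
    { name: 'theft', additionalInfo: {} }
  ]
};
const action = { type: 'SELECT_CDS' };

// Same input, same output: the ternary always takes the same branch for the
// same index, so both results are structurally identical.
console.log(
  JSON.stringify(selectCds(state, action)) === JSON.stringify(selectCds(state, action))
); // true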
In the Redux Style Guide, it is strongly recommended to Put as Much Logic as Possible in Reducers:
Wherever possible, try to put as much of the logic for calculating a new state into the appropriate reducer, rather than in the code that prepares and dispatches the action (like a click handler).
What I'm not sure of is whether thunks also count as "the code that prepares and dispatches the action". Besides, we've also been (mis?)using thunks to grab data from other slices of state.
Hypothetically simplified code snippet of such thunk:
const addX = x => (dispatch, getState) => {
  const { data, view } = getState();
  const { y } = view; // <-- here accessing data from `view` state.
  const yy = doSomeLogicWith(y);
  const z = doSomeMoreLogicWith(yy);
  dispatch({ type: 'data/xAdded', payload: { x, z } });
};
Is this actually considered to be an anti-pattern in Redux? If so, what are the cons of doing this?
Yes, a thunk would qualify as "the code that dispatches the action" for this case. So, what the rule is recommending here is that if possible, the action would just contain y, and the function calls to doSomeLogicWith(y) and doSomeMoreLogicWith(yy) would ideally exist within the reducer instead.
Having said that, it's totally fine for a thunk to extract pieces of data from the state and include that in the action, and it's not wrong for a thunk to do some pre-processing of data before dispatching the action.
The style guide rule is just saying that, given a choice between running a particular piece of logic in a reducer or outside a reducer, prefer to do it in the reducer if at all possible.
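As a rough sketch of what that could look like, reusing the hypothetical names from the question (doSomeLogicWith and doSomeMoreLogicWith are the question's placeholders, and the items array is an assumed state shape): the action carries only the raw inputs, and the derivation of z moves into the reducer.

// Thunk: only reads what it needs from state and dispatches the raw inputs.
const addX = x => (dispatch, getState) => {
  const { view } = getState();
  dispatch({ type: 'data/xAdded', payload: { x, y: view.y } });
};

// Reducer: doSomeLogicWith / doSomeMoreLogicWith now run here,
// next to the state they help compute.
function dataReducer(state = { items: [] }, action) {
  switch (action.type) {
    case 'data/xAdded': {
      const { x, y } = action.payload;
      const z = doSomeMoreLogicWith(doSomeLogicWith(y));
      return { ...state, items: [...state.items, { x, z }] };
    }
    default:
      return state;
  }
}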
Given the following (and assuming we cannot change the state's structure):
StoreModule.forRoot({
  a: aReducer,
  b: {
    b1: b1Reducer,
    b2: b2Reducer
  }
});
and b1Reducer is dependent on the value of a (for example because it contains something like user info).
What is the most idiomatic way to access (read-only) a in b1Reducer?
The solution I came up with is using @ngrx/effects to dispatch another action carrying a, which can then be used in the reducer:
@Effect()
augmentAction$ = this.action$
  .ofType(Actions.Action1)
  .withLatestFrom(this.store$)
  .switchMap(([action, state]: [Action, AppState]) => {
    const a = state.a;
    return [new Actions.Action2(a)];
  });
This works, but it becomes hard to manage if a is used in many reducers, because almost every action then needs to be re-dispatched. Is there a better way to handle this?
Is there any inconvenience at all if I design my reducers so that, instead of reading only their slice of state, they have access to the full state tree?
So instead of writing this:
function reducer(state = {}, action) {
  return {
    a: doSomethingWithA(state.a, action),
    b: processB(state.b, action),
    c: c(state.c, action)
  }
}
I would pass the whole state and destructure it inside the doSomethingWithA, processB, or c reducers separately:
function reducer(state = {}, action) {
  return {
    a: doSomethingWithA(state, action), // calc next state based on a
    b: processB(state, action),         // calc next state based on b
    c: c(state, action)                 // calc next state based on a, b and c
  }
}
Would I be using more RAM? Is there any performance downside? I understand that in JavaScript objects are passed by reference, which is why we should return a new object when we want to update the state, or use Immutable.js to enforce immutability, so... again, would there be any inconvenience at all?
No, there's nothing wrong with that. Part of the reason for writing update logic as individual functions instead of separate Flux "stores" is that it gives you explicit control over chains of dependencies. If the logic for updating state.b depends on having state.a updated first, you can do that.
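For instance, a minimal sketch of a hand-written root reducer where b's update sees the already-updated a (the extra third argument to processB is an assumption, and updateC is renamed from the question's c to avoid shadowing):

// Hand-written root reducer: explicit ordering instead of combineReducers.
function rootReducer(state = {}, action) {
  // Update a first...
  const a = doSomethingWithA(state.a, action);
  // ...then let b's update logic read the freshly updated a.
  const b = processB(state.b, action, a);
  // c only needs its own slice here.
  const c = updateC(state.c, action);
  return { a, b, c };
}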
You may want to read through the Structuring Reducers section in the Redux docs, particularly the Beyond combineReducers topic. It discusses other various reducer structures besides the typical combineReducers approach. I also give some examples of this kind of structure in my blog post Practical Redux, Part 7: Form Change Handling, Data Editing, and Feature Reducers.
Looking at the real-world example, I see this setting up the api middleware:
export default store => next => action => {
  const callAPI = action[CALL_API]
  if (typeof callAPI === 'undefined') {
    return next(action)
  }
  // ...rest of the middleware omitted
}
What exactly is happening here? I see that configureStore is importing whatever that is and passing it to applyMiddleware from redux, but what does this kind of statement mean in js?
I assume it's exporting an anonymous function that returns a function that returns a function? Just tried this:
var a = b => c => d => {
  console.log('a:', a);
  console.log('b:', b);
  console.log('c:', c);
  console.log('d:', d);
};
a(5)(6)(7);
// outputs b: 5, c: 6, and d: 7
Function Specialization
The arrow function notation simplifies currying in JavaScript.
Here it's just a way to do partial application: it lets you bind arguments to the function at different times, using closures instead of Function.prototype.bind.
When you call applyMiddleware during Store creation, Redux specializes your middleware with the Store it's being applied to.
It then becomes a new specialized function that still expects the remaining two arguments, applied one at a time:
next => action
Here next is the next middleware that will be called on the Action (just like in Express, which popularized the concept for request handling).
Timeline
The important thing here is that all these function specializations are done at different times.
store can be bound during Store creation.
next can be bound once the middleware knows which Store it's attached to, so also during Store creation, but it could be updated later.
action is known only when you effectively dispatch an Action, which can happen any time.
The specialized middleware (the one that has been bound to the Store and already knows the next middleware function) is reusable, and will be called for each newly dispatched Action.
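To make that timeline concrete, here is a hedged sketch of the three stages using a trivial logging middleware (simplified: the real applyMiddleware composes several middlewares and passes a restricted store API, so fakeStore and fakeNext below are stand-ins, not the real Redux internals):

// A three-stage middleware, shaped like the api middleware above.
const logger = store => next => action => {
  console.log('dispatching', action.type, 'with state', store.getState());
  return next(action);
};

// Stand-ins so the sketch runs on its own.
const fakeStore = { getState: () => ({ count: 0 }) };
const fakeNext = action => action; // normally the next middleware or store.dispatch

const withStore = logger(fakeStore);         // 1. bound at Store creation
const dispatchWrapper = withStore(fakeNext); // 2. bound once `next` is known
dispatchWrapper({ type: 'example/ping' });   // 3. runs for every dispatched action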
Functional Programming
These concepts (currying and partial application) come from the Functional Programming world.
Redux relies heavily on this paradigm, and the most important thing in Redux is keeping side effects (especially mutations) out of the way.
Capturing the Store directly from the surrounding context, or pulling in a global Store via require, is a side effect, because your function would be bound to that Store as soon as it is declared.
Instead, Redux uses currying to permit a sort of dependency injection, and the result is a stateless function that can be reused and specialized at runtime.
This way your middleware is loosely coupled to the Store.
To understand this clearly you first need to know how middleware works in Redux, so go through that first.
If you are still confused even after going through the documentation, don't worry, it's a bit complicated; try reading it once again :). I understood it properly after 2-3 reads.
Now, the one you mentioned in your question is curried ES6 syntax. If you try to convert it to vanilla JavaScript, it comes to something like the below (with export default written as module.exports):
module.exports = function (store) {
  return function (next) {
    return function (action) {
      var callAPI = action[CALL_API];
      if (typeof callAPI === 'undefined') {
        return next(action);
      }
    };
  };
};
So as you can see, it's nothing but a chaining of functions.
What is a good practice for handling iteration through an Immutable.js Map object? This works:
{stocks.map((stock, key) => {
  return (<h3>{key}</h3>)
})}
but gives the warning in the console "warning.js:45 Warning: Using Maps as children is not yet fully supported. It is an experimental feature that might be removed. Convert it to a sequence / iterable of keyed ReactElements instead."
This has been discussed before, and this link suggests some strategies https://github.com/facebook/immutable-js/issues/667 but they seem clunky to me. Like:
posts.entrySeq().map(o =>
  <Post value={o[1]} key={o[0]} />
)
works, but feels clunky. Is there a more natural way of doing this?
Since you asked this question, a better solution has been posted on the GitHub issue you reference. @vinnymac suggests:
posts.entrySeq().map(([key, value]) =>
  <Post key={key} value={value} />
)
This works well because entrySeq() returns a Seq of [key, value] tuples, which you can then destructure in the parameters of the .map() callback.
Edit: I see now that you are only asking for the keys. In that case, use keySeq() if you want to stay with Immutable.js's map(), or keys() if you'd rather convert to an array and use the native ES6 map().
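A minimal sketch of the keySeq() variant (assuming the same stocks Map from the question):

{stocks.keySeq().map(key => (
  <h3 key={key}>{key}</h3>
))}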
Why not stocks.keys()? Since it returns an ES6 iterator, you'll need to convert it to an array before you can map over it: Array.from(stocks.keys())
let zoo = Immutable.fromJS({ 'dog': 1, 'cat': 2 })
Array.from(zoo.keys()).map((name, index) => <Animal name={name} key={index} />)
Notice that I avoided using key as a variable name and instead passed the index value as the key prop to the child component; React needs references to dynamically created components so it can handle them correctly within its virtual DOM. Read more about React's Dynamic Children.
Using the Immutable Map's reduce method is a more direct approach. Since React expects an array of children, setting the initial value to an empty array and pushing JSX into it solves the issue. This works for an Immutable List as well.
{
  stocks.reduce((jsxArray, stock, key) => {
    jsxArray.push(
      <h3 key={key}>{key}</h3>
    )
    return jsxArray;
  }, [])
}