Using functions in observables vs creating many actions/epics - redux

I'm just starting to use redux-observable and I'm having trouble deciding between multiple ways to do the same thing. Here is a contrived example of two ways to execute some logic in response to actions:
Method A (epics):
function epic1(action$) {
  return action$.ofType(FOO)
    .map(action => something(action.payload))
    // carry the result along so epic2 has a payload to read
    .map(result => ({ type: BAR, payload: result }));
}
function epic2(action$) {
  return action$.ofType(BAR)
    .map(action => something(action.payload));
    // potentially returning actions for a third epic, etc.
}
Method B (functions):
function helperFunction(result) {
  // something that returns an action eventually
}
function singleEpic(action$) {
  return action$.ofType(FOO)
    .map(action => something(action.payload))
    .map(helperFunction);
}
You can imagine each pattern scaling up, with the functions becoming realistically complex. Is there a lot of overhead to Method A? Do the actions go through the entire redux loop before arriving at epic2, and has that been noticeable in anyone's experience?
So far I've wanted to make as many things epics as possible, because they've ended up being really small and simple, but I'm not sure about the costs.

I can offer my experience from a project using redux-saga; I think the performance characteristics and design trade-offs are quite similar.
It is likely that there will be some performance hit with Method A. However, unless something is really wrong, it should not be a big consideration for your average SPA. YMMV if you're developing games or something else performance-intensive. From this point of view, creating fewer sagas seems like a micro-optimisation.
We were leaning very much towards something similar to what you describe as method B. We would typically create factories for families of similar sagas, pretty much like in the following example:
const factory = ({ actionType, mapper, actionCreator }) => (action$) =>
  action$.ofType(actionType)
    .map(action => mapper(action.payload))
    .map(actionCreator);

const epic = factory({
  actionType: 'FOO',
  mapper: payload => payload + 1,
  actionCreator: newPayload => ({ type: 'BAR', payload: newPayload })
});
It probably does not make all that much sense in this contrived example, because the cognitive overhead of the abstraction is higher than what you're getting from it, but you get the idea. We employed a very similar strategy for reducers, too.
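For the reducer side, a minimal sketch of what an analogous factory might look like (hypothetical code, not from that project; makeSetterReducer is an invented name):

const makeSetterReducer = (actionType, key) => (state = {}, action) =>
  action.type === actionType
    ? { ...state, [key]: action.payload } // copy state, override one field
    : state;

const nameReducer = makeSetterReducer('SET_NAME', 'name');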
However, when possible, I would warn against creating waterfalls of actions like you describe with epic1, epic2, etc. I believe that if you have too many of those implicit interdependencies in your code, it will eventually become more difficult to debug. Though it's probably nowhere near as hard as chains of two-way bindings in some frameworks of the past (Ember, for example).
So in general, my recommendation would be: create as many little sagas/epics as you like, and write small helpers to construct them. But prefer longer, more explicit epics for complex asynchronous flows (as opposed to having too many collaborating epics).

Method A will be slower because each action goes through the whole redux chain of reducers and epics (epics are essentially middleware). How slow that is, and how much it matters, has to be weighed against what you find more maintainable and clearer to understand.
I think that if you have a FOO action that should perform some helper logic and eventually kick off further work, you don't need three separate epics. It makes much more sense to me not to dispatch subsequent actions if you can avoid it.
However, if you do need to merge epics, then creating one epic that handles the whole 'waterfall' flow would make sense. For this contrived example, I would definitely not use Method A. If you just need to run some side logic, call the function directly. There is no need to walk upstream just so you can ride the river back down to where you already are.
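For illustration, here is a minimal sketch of how the FOO waterfall could collapse into one explicit epic, written in the same RxJS 5 dot-chaining style as the question (something, somethingElse, and BAZ are placeholders):

function waterfallEpic(action$) {
  return action$.ofType(FOO)
    // run the whole chain of helpers in one place...
    .map(action => something(action.payload))
    .map(result => somethingElse(result))
    // ...and emit only the final action, instead of relaying through BAR
    .map(finalResult => ({ type: BAZ, payload: finalResult }));
}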

Related

Redux: can I mutate state in reducer and pass new object of the same

I know that we are not supposed to mutate the state because the app re-renders based on whether the reference changed between the previous state and the next state, but what if I do something like this:
const reducerFunction = (state, action) => {
  state.value = action.value; // mutates the old state...
  return { ...state };        // ...then returns a new reference
};
Here I am returning a new reference, so is there anything that could go wrong because of the state mutation?
This will work, but you are mutating the old state and then creating a new one and returning it. This is not recommended. In some cases it may be an issue, for example if you need undo functionality.
Two problems that I can think of.
You would lose (or mess up) the undo-history feature of redux. This comes in really handy sometimes, especially when you are dealing with lots of data.
You are assuming synchronous execution of the code. Redux (and JS in general) makes no such guarantee. In an application where you update the store with anything computer-generated (practically anything that is not user input) and read it back shortly afterwards, this would mess up the state and you would have a race condition.
Generally, it is a good idea to follow the implementation guidelines so that your code runs predictably. Sometimes libraries add error checking for non-standard usage precisely because it might break the code, and it can even become a security issue. I do the same all the time when I write APIs.
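For comparison, a minimal sketch of the non-mutating version of the same reducer:

const reducerFunction = (state, action) => ({
  ...state,            // copy the old state untouched
  value: action.value, // override only the changed field
});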

Redux saga, rx-observable. With vanilla fetch calls, why the need over thunks?

I have been reading about sagas, their intent, and their usage. BUT - I have two questions that I'd really like some closure on, and then more of an opinion question.
When using sagas for a simple API call, the boilerplate seems very excessive. If I had 20 API calls, how would that be any more manageable than using thunks? Plus, I keep hearing the idea of "side effects", but I'm unsure how that plays into it all.
I read some blogs that used a pattern to dynamically generate the sagas and reduce boilerplate, but couldn't you do that with thunks too? Also, any examples would be great.
Are sagas still useful when dealing with very simple POST or GET calls?
Any opinions on redux-saga vs. redux-observable?
Thanks!
Disclaimer: I am one of the authors of redux-observable, so my opinions of both redux-saga and redux-observable are tainted with bias
Since you used the term Saga (instead of Epic) I'll assume you're asking in the context of redux-saga (not redux-observable).
In redux-saga, the effects you perform, e.g. an AJAX request, aren't actually handled directly inside your generator sagas. Instead, the helpers you use create Plain Old JavaScript Objects which represent the intent of the effect. You yield these objects, and the redux-saga middleware itself performs the effect internally, hidden from you, providing the result back to your yield, as in yourSaga.next(response).
Some like this because your saga generators are truly pure. Because it uses generators to support multiple effects, it is easy to test without mocks: you just assert that the effects it yielded are the ones you expected. Personally, I found that in practice this seems far cooler than it really is: many times you end up effectively recreating everything the saga does in your test. You are then testing that the implementation of the saga is correct, not testing its behavior. Many don't care about (or even notice) this, but I did; I imagine some even prefer it. This is called "effects as data". FWIW, redux-observable does not use this "effects as data" model, which is the most fundamental difference between it and redux-saga.
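To make "effects as data" concrete, here is a minimal sketch assuming redux-saga's call/put helpers (fetchUser and the action shapes are invented for the example):

import assert from 'assert';
import { call, put } from 'redux-saga/effects';

// hypothetical API function
const fetchUser = id => fetch(`/users/${id}`).then(res => res.json());

function* userSaga(action) {
  // `call` yields a plain object describing the request;
  // the middleware performs the actual AJAX call.
  const user = yield call(fetchUser, action.payload.id);
  yield put({ type: 'USER_LOADED', payload: user });
}

// The test asserts on the yielded effect descriptions, with no mocks or I/O,
// but notice how it mirrors the saga's implementation step by step:
const gen = userSaga({ payload: { id: 42 } });
assert.deepEqual(gen.next().value, call(fetchUser, 42));
assert.deepEqual(
  gen.next({ name: 'Ada' }).value,
  put({ type: 'USER_LOADED', payload: { name: 'Ada' } })
);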
Tying this back to how they compare with redux-thunk, the biggest differences are: time-based operations (e.g. debouncing sequential actions) are impractical with redux-thunk alone without major hacks; speaking of debouncing, it doesn't come with any utilities at all, so you're on your own for handling debouncing and other common effects; and testing is much, much harder.
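For example, debouncing is a single operator with redux-observable, sketched here in the question's RxJS 5 style (SEARCH and SEARCH_DEBOUNCED are placeholder action types):

function searchEpic(action$) {
  return action$.ofType(SEARCH)
    .debounceTime(300) // the time-based part that is impractical with a bare thunk
    .map(action => ({ type: SEARCH_DEBOUNCED, payload: action.payload }));
}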
These are mostly opinions, however. Certainly, very successful applications can be (and have been) built with redux-thunk; https://m.twitter.com comes to mind.
I think it wouldn't be controversial to say that redux-thunk is significantly easier to learn and use for simple request->response AJAX calls, without needing cancellation, etc. In fact, I often recommend users unfamiliar with RxJS use redux-thunk for the simple stuff and only lean on redux-observable for the more complex stuff, so they can remain productive and learn as they go. There's definitely a place for academic "correctness" and beautiful code, but for most people's jobs, shipit™ should be #1 priority. Users don't care how correct our code is, only that it exists and [mostly] works.
Regarding opinions on redux-saga vs. redux-observable, I'm biased because I'm one of the authors of redux-observable, but I summarized some of my thoughts in a previous SO post: Why use Redux-Observable over Redux-Saga? tl;dr they have a similar overall pattern, but redux-saga uses "effects as data" whereas redux-observable uses real effects with RxJS. There are pros and cons to both; the primary pro of using RxJS is that it's a skill that is vastly useful for things other than redux-observable, and one that will almost certainly outlive redux-observable/redux-saga, so it is highly transferable.

Integrating both synchronous and asynchronous libraries

Can synchronous and asynchronous functions be integrated behind one call/interface whilst maintaining static typing? If possible, can it remain neutral with respect to inheritance, i.e. not wrapping sync methods in async or vice versa (though this might be the best way)?
I've been reading around and see it's generally recommended to keep these separate (http://www.tagwith.com/question_61011_pattern-for-writing-synchronous-and-asynchronous-methods-in-libraries-and-keepin and Maintain both synchronous and asynchronous implementations). However, the reason I want to do this is that I'm creating a behaviour tree framework for the Dart language and am finding it hard to mix both sync and async 'nodes' together to iterate through. It seems these might need to be kept separate, meaning nodes that would suit a sync approach would have to be async, or the opposite, if they are to live within the same 'tree'.
I'm looking for a solution particularly for Dart lang, although I know this is firmly in the territory of general programming concepts. I'm open to this not being able to be achieved, but worth a shot.
Thank you for reading.
You can of course use sync and async functions together. What you can't do is go back to sync execution after calling an async function.
Maintaining both sync and async methods is, in my opinion, mostly a waste of time. Sometimes sync versions are convenient so that you don't have to make an async call for some simple operation, but in general async is an integral part of Dart. If you want to use Dart, you have to get used to it.
With the new async/await feature you can write code that uses async functions almost exactly the way you would if only sync functions were involved.

Whether to use TPL or async/await

There is an existing third-party REST API which accepts one set of input and returns the output for it. (Think of it as Bing's geocoding service, which accepts an address and returns location details.)
My need is to call this API multiple times (say 500-1000) for a single ASP.NET request, and each call may take close to 500ms to return.
I can think of three approaches for doing this. I need your input on which could be the best approach, keeping speed as the criterion.
1. Using HTTP requests in a for loop
Write a simple for loop and, for each input, call the REST API and add the output to the result. This could by far be the slowest, but there is no overhead of threads or context switching.
2. Using async and await
Use async and await to call the REST API. It could be efficient, as the thread continues to do other activities while waiting for the REST call to return. The problem I am facing is that, per the recommendations, I should be using await all the way up to the topmost caller, which is not possible in my case. Not following this may lead to deadlocks in ASP.NET, as mentioned here: http://msdn.microsoft.com/en-us/magazine/jj991977.aspx
3. Using the Task Parallel Library
Use a Parallel.ForEach with the synchronous API to invoke the server in parallel, and use a ConcurrentDictionary to hold the results. But this may result in thread overhead.
Also, let me know if there is any better way to handle this. I understand people might suggest measuring the performance of each approach, but I would like to understand how people have solved this problem before.
The best solution is to use async and await, but in that case you will have to take it async all the way up the call stack to the controller action.
The for loop keeps it all sequential and synchronous, so it would definitely be the slowest solution. Parallel will block multiple threads per request, which will negatively impact your scalability.
Since the operation is I/O-based (calling a REST API), async is the most natural fit and should provide the best overall system performance of these options.
First, I think it's worth considering some issues that you didn't mention in your question:
500-1000 API calls sounds like quite a lot. Isn't there a way to avoid them? Doesn't the API have some kind of bulk query functionality? Or couldn't you download their database and query it locally? (More open organizations like Wikimedia or Stack Exchange often support this; more closed ones like Microsoft or Google usually don't.)
If those options are not available, then at least consider some kind of caching, if that makes sense for you.
The number of concurrent requests to the same server that ASP.NET allows is only 10 by default. If you want to make more concurrent requests, you will need to set ServicePointManager.DefaultConnectionLimit.
Making this many requests could be considered abuse by the service provider and could lead to blocking of your IP. Make sure the provider is okay with this kind of usage.
Now, to your actual question: I think that the best option is to use async-await, even if you can't use it all the way. You can avoid deadlocks either by using ConfigureAwait(false) at every await (which is the correct solution) or by using something like Task.Run(() => /* your async code here */).Wait() to escape the ASP.NET context (which is the simple solution).
Using something like Parallel.ForEach() is not great, because it unnecessarily wastes ThreadPool threads.
If you go with async, you should probably also consider throttling. A simple way to achieve that is by using SemaphoreSlim.

Functional Programming and Mock Objects

I was recently watching a webcast on Clojure. In it, while discussing the FP nature of Clojure, the presenter made a comment that went something like (I hope I don't misrepresent him) "Mock objects are mocking you".
I heard a similar comment a while back in a webcast when Microsoft's Reactive Framework was starting to appear. It went something like "Mock objects are for those who don't know math".
Now I know that both comments are jokes/tongue-in-cheek etc. (and probably badly paraphrased), but underlying them is obviously something conceptual which I don't understand, as I haven't really made the shift to the FP paradigm.
So, I would be grateful if someone could explain whether FP does in fact render mocking redundant and if so how.
In pure FP you have referentially transparent functions that compute the same output every time you call them with the same input. All the state a function needs must therefore be explicitly passed in as parameters and out as results; there are no stateful objects "hidden behind" the function you call. This, however, is exactly what your mock objects usually do: simulate some external, hidden state or behavior that your subject under test relies on.
In other words:
OO: Your objects combine related state and behavior.
Pure FP: State is something you pass between functions that by themselves are stateless and only rely on other stateless functions.
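A small sketch of what that means for testing (names invented for the example):

import assert from 'assert';

// All "state" comes in as arguments and goes out as the return value...
const addItem = (cart, item) => ({
  ...cart,
  items: [...cart.items, item],
  total: cart.total + item.price,
});

// ...so a test needs only inputs and an expected output, no mock objects:
assert.deepEqual(
  addItem({ items: [], total: 0 }, { name: 'book', price: 10 }),
  { items: [{ name: 'book', price: 10 }], total: 10 }
);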
I think the important thing is the idea of using tests to help you structure your code. Mocks are really about deferring decisions you don't want to take now (and a widely misunderstood technique). Instead of object state, consider partial functions. You can write a function that defers part of its behaviour to another function that's passed in. In a unit test, that can be a fake implementation that lets you focus on just the code in hand. Later, you compose your new code with a real implementation to build the system.
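A sketch of the "partial function instead of mock object" idea in JavaScript (makeGreeter and fetchName are hypothetical names):

import assert from 'assert';

// The behaviour we would otherwise mock is just a function parameter:
const makeGreeter = fetchName => async userId =>
  `Hello, ${await fetchName(userId)}!`;

// In a unit test, pass a trivial fake implementation...
const greet = makeGreeter(async () => 'Ada');
greet(1).then(msg => assert.equal(msg, 'Hello, Ada!'));

// ...in production, compose with the real one:
// const greet = makeGreeter(id => fetch(`/users/${id}/name`).then(res => res.text()));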
Actually, when we were developing the idea of Mocks, I always thought of Mocks this way. The object part was incidental.
