Integrating both synchronous and asynchronous libraries

Can synchronous and asynchronous functions be integrated into one call/interface whilst maintaining static typing? If possible, can it remain neutral with inheritance, i.e. not wrapping sync methods in async or vice versa (though this might be the best way).
I've been reading around and see it's generally recommended to keep these separate (http://www.tagwith.com/question_61011_pattern-for-writing-synchronous-and-asynchronous-methods-in-libraries-and-keepin and Maintain both synchronous and asynchronous implementations). However, the reason I want to do this is that I'm creating a behaviour tree framework for the Dart language and am finding it hard to mix both sync and async 'nodes' together to iterate through. It seems these might need to be kept separate, meaning nodes that would suit a sync approach would have to be async, or the opposite, if they are to be within the same 'tree'.
I'm looking for a solution particularly for Dart, although I know this is firmly in the territory of general programming concepts. I'm open to this not being achievable, but it's worth a shot.
Thank you for reading.

You can of course use sync and async functions together. What you can't do is go back to sync execution after calling an async function.
Maintaining both sync and async methods is, in my opinion, mostly a waste of time. Sometimes sync versions are convenient so you don't have to invoke an async call for some simple operation, but in general async is an integral part of Dart. If you want to use Dart, you have to get used to it.
With the new async/await feature you can write code that uses async functions almost the same as when only sync functions are used.
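As an illustration, here is a minimal Dart sketch of one way to keep sync and async nodes behind a single interface (the Node/tick names are invented, not from the question): every node exposes a Future-returning tick(), synchronous nodes just return immediately, and awaiting an already-completed Future is cheap.

abstract class Node {
  Future<bool> tick();
}

class SyncNode implements Node {
  @override
  Future<bool> tick() async => true; // purely synchronous work, wrapped in a completed Future
}

class AsyncNode implements Node {
  @override
  Future<bool> tick() async {
    await Future.delayed(const Duration(milliseconds: 10)); // e.g. real I/O
    return true;
  }
}

Future<void> runTree(List<Node> nodes) async {
  for (final node in nodes) {
    if (!await node.tick()) break; // sync and async nodes iterate uniformly
  }
}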

Related

Can someone clarify the use cases between Task and Async in F# 6?

I have read this article, which covers the updates in F# 6: https://www.compositional-it.com/news-blog/task-vs-async/
There is a chapter about the introduction of the task expression and its similarity and differences with async.
My understanding is that task is a block in which we can make async calls but which is executed in the regular flow, whereas the async block is the one we know: it creates expressions that will be executed later, on demand.
I use async quite extensively since I'm working with an app doing a lot of data streaming and database writing, so everything is pretty much async.
The only use I can see for the task expression is to interface with an async API where I want the async block to run right now.
Essentially, replacing:
async {
do! callMyAsyncOnlyApi ()
} |> Async.RunSynchronously
with
task {
do! callMyAsyncOnlyApi ()
}
which would run instantly and release the CPU while waiting.
I must be missing something :D
The article provides an excellent summary in the conclusion section:
Personally, I reach for async {} by default unless I am dealing with external Task based libraries or I face a specific performance issue (which hasn't happened yet!). I find it fits the functional mindset better and is easier to use.
The reality is however that a great deal of libraries have been written with Task in mind, and it is fantastic that F# now has first class, out of the box support for the pattern.
I do not think there is much to add:
F# always had async and async is a reasonable default
There are a number of advantages of async - it supports cancellation and uses the "generator" model which is more composable (and more F# friendly)
.NET uses Task<T> more prominently, so if you are interfacing with a lot of .NET libraries (as your question suggests), then you may choose task
task can be more efficient if you have CPU-bound and not IO-bound code - so in some cases, you may choose that for performance reasons
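To make the contrast concrete, here is a minimal sketch of the two builders side by side and the interop between them (the URL and function names are invented for illustration):

open System.Net.Http

let http = new HttpClient()

// async {} builds a cold computation; nothing runs until it is explicitly started.
let fetchAsync (url: string) = async {
    let! body = http.GetStringAsync(url) |> Async.AwaitTask
    return body.Length
}

// task {} builds a hot Task<int> that starts running immediately, which is
// convenient when working with Task-based .NET libraries.
let fetchTask (url: string) = task {
    let! body = http.GetStringAsync(url)
    return body.Length
}

// Interop in both directions:
let asTask  = fetchAsync "https://example.com" |> Async.StartAsTask
let asAsync = fetchTask "https://example.com" |> Async.AwaitTask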

What is the difference between futures::select! and tokio::select?

I'm using Tokio and I want to receive requests from two different mpsc queues. select! seems like the way to go, but I'm not sure what the difference is between futures::select! and tokio::select!. Under which circumstances should you use one over the other?
tokio::select! was built out of experiences with futures::select!, but improves a bit on it to make it more ergonomic. E.g. the futures-rs version of select! requires Futures to implement FusedFuture, whereas Tokio's version no longer requires this.
Instead of this, Tokio's version supports preconditions in the macro to cover the same use-cases.
The PR in the tokio repo elaborates a bit more on this.
This change was also proposed for the futures-rs version, but has not been implemented there so far.
If you already have Tokio included in your project, then using Tokio's version seems preferable. But if you have not and do not want to add an additional dependency, then the futures-rs version will cover most use-cases too in a nearly identical fashion. The main difference is that some Futures might need to be converted into FusedFutures through the FutureExt::fuse() extension method.
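For illustration, here is a minimal sketch of the question's scenario, selecting over two mpsc receivers with tokio::select! (channel names and messages are invented):

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx_a, mut rx_a) = mpsc::channel::<String>(16);
    let (tx_b, mut rx_b) = mpsc::channel::<String>(16);

    // Stand-ins for real producers; in a real application these would be other tasks.
    tokio::spawn(async move { tx_a.send("request on A".into()).await.unwrap() });
    tokio::spawn(async move { tx_b.send("request on B".into()).await.unwrap() });

    loop {
        // recv() borrows the receivers mutably, so nothing is consumed across iterations
        // and no FusedFuture/fuse() is required, unlike with futures::select!.
        tokio::select! {
            Some(msg) = rx_a.recv() => println!("from A: {msg}"),
            Some(msg) = rx_b.recv() => println!("from B: {msg}"),
            else => break, // both channels closed and drained
        }
    }
}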
To complement Matthias247's answer, a related big difference is that futures::select! takes futures in branch expressions by mutable reference, so uncompleted futures can be re-used in a loop.
tokio::select!, on the other hand, consumes passed futures. To get behavior similar to futures::select! you need to explicitly pass a reference (e.g. &mut future) and pin it if necessary (e.g. if it comes from an async fn). The Tokio docs have a section on this: Resuming an async operation.
This thread has an in-depth explanation of why Tokio decided not to use FusedFuture.
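As a sketch of that resuming pattern (do_work and the timings are invented), the future is pinned once with tokio::pin! and then polled by mutable reference on every loop iteration:

use tokio::time::{sleep, Duration};

async fn do_work() -> u32 {
    // Stand-in for a long-running async operation.
    sleep(Duration::from_millis(50)).await;
    42
}

#[tokio::main]
async fn main() {
    // An `async fn` future is not Unpin, so pin it before polling it by reference.
    let work = do_work();
    tokio::pin!(work);

    let mut ticks = 0u32;
    loop {
        tokio::select! {
            result = &mut work => {
                println!("work finished with {result} after {ticks} ticks");
                break;
            }
            _ = sleep(Duration::from_millis(10)) => {
                ticks += 1; // keep doing other things while `work` is still pending
            }
        }
    }
}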

Redux saga, rx-observable. With vanilla fetch calls, why the need over thunks?

I have been reading about sagas, their intent, and their usage. BUT - I have two questions that I'd really like some closure on, and then more of an opinion question.
When using Sagas for a simple API call, the boilerplate seems very excessive. If I had 20 API calls, how is that less unwieldy than using thunks? Plus, I keep hearing the idea of "side effects" - but I'm unsure how that plays into it all.
I read some blogs that used a pattern that was able to dynamically generate the Sagas to reduce boilerplate - but couldn't you do that with thunks too? Also, any examples would be great.
Are Sagas still useful when literally dealing with very simple POST or GET calls?
Any opinions on redux-saga vs. redux-observable?
thanks!
Disclaimer: I am one of the authors of redux-observable, so my opinions of both redux-saga and redux-observable are tainted with bias
Since you used the term Saga (instead of Epic) I'll assume you're asking in the context of redux-saga (not redux-observable).
In redux-saga, the effects you do, e.g. an AJAX request, aren't actually directly handled inside your generator sagas. Instead, the helpers you use are creating Plain Old JavaScript Objects which represent the effect intent, which you yield and then the redux-saga middleware itself performs the effect internally, hidden from you, providing the results back to your yield, like yourSaga.next(response).
Some like this because your saga generators are truly pure. Because it uses generators to support multiple effects, it makes it easy to test without mocks, because you just assert that the effects it yielded are expected. Personally, I found that in practice this seems far cooler than it really is: many times you end up effectively recreating everything the saga does in your test. You are now testing that the implementation of the saga is correct, not testing the behavior of the saga. Many don't care about (or even notice) this, but I did. I imagine some even prefer it. This is called "effects as data". FWIW, redux-observable does not use this "effects as data" model, which is the most fundamental difference between it and redux-saga.
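As a minimal sketch of what "effects as data" looks like in practice (api.fetchUser, the action types, and the test values are invented for illustration):

import assert from 'assert';
import { call, put } from 'redux-saga/effects';

// Hypothetical API module; any function returning a Promise works the same way.
const api = {
  fetchUser: (id) => fetch(`/users/${id}`).then((res) => res.json()),
};

// The saga never performs the request itself: call() and put() return plain
// objects describing the intended effect, and the middleware executes them.
export function* fetchUserSaga(action) {
  const user = yield call(api.fetchUser, action.payload.id);
  yield put({ type: 'USER_FETCH_SUCCEEDED', user });
}

// A test asserts on the yielded effect descriptions (no HTTP mocking needed),
// but notice how it restates the saga's implementation step by step.
const gen = fetchUserSaga({ payload: { id: 1 } });
assert.deepStrictEqual(gen.next().value, call(api.fetchUser, 1));
assert.deepStrictEqual(
  gen.next({ name: 'Ada' }).value,
  put({ type: 'USER_FETCH_SUCCEEDED', user: { name: 'Ada' } })
);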
Tying this back to how they compare to redux-thunk, the biggest differences are: time-based operations (e.g. debouncing sequential actions) are impractical with redux-thunk alone without major hacks; redux-thunk doesn't come with any utilities at all, so you're on your own for handling debouncing and other common effects; and testing is much, much harder.
These are mostly opinions, however. Certainly very successful applications can (and do) use redux-thunk. https://m.twitter.com comes to mind.
I think it wouldn't be controversial to say that redux-thunk is significantly easier to learn and use for simple request->response AJAX calls, without needing cancellation, etc. In fact, I often recommend users unfamiliar with RxJS use redux-thunk for the simple stuff and only lean on redux-observable for the more complex stuff, so they can remain productive and learn as they go. There's definitely a place for academic "correctness" and beautiful code, but for most people's jobs, shipit™ should be #1 priority. Users don't care how correct our code is, only that it exists and [mostly] works.
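For comparison, here is a minimal redux-thunk version of such a simple request->response flow (the endpoint and action types are again invented):

// Thunk action creator: the async function runs directly, no effect descriptors.
export const fetchUser = (id) => async (dispatch) => {
  dispatch({ type: 'USER_FETCH_REQUESTED' });
  try {
    const res = await fetch(`/users/${id}`);
    const user = await res.json();
    dispatch({ type: 'USER_FETCH_SUCCEEDED', user });
  } catch (error) {
    dispatch({ type: 'USER_FETCH_FAILED', error });
  }
};

// Usage (assuming the store was created with the thunk middleware):
// store.dispatch(fetchUser(1));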
Regarding opinions on redux-saga vs. redux-observable, I'm biased because I'm one of the authors of redux-observable, but I summarized some of my thoughts in a previous SO post: Why use Redux-Observable over Redux-Saga? tl;dr they have a similar overall pattern, but redux-saga uses "effects as data" whereas redux-observable uses real effects with RxJS. There are pros and cons to both; the primary pro of using RxJS is that it is a skill that is vastly useful for things other than redux-observable and will almost certainly outlive redux-observable/redux-saga, so the skill is highly transferable.

Whether to use TPL or async/await

There is an existing third-party REST API available which accepts one set of inputs and returns the corresponding output. (Think of it as Bing's geocoding service, which accepts an address and returns location details.)
My need is to call this API multiple times (say 500-1000) for a single ASP.NET request, and each call may take close to 500ms to return.
I could think of three approaches on how to do this. I need your input on which could be the best possible approach, keeping speed as the criterion.
1. Using Http Request in a for loop
Write a simple for loop and for each input call the REST API and add the output to the result. This by far could be the slowest. But there is no overhead of threads or context switching.
2. Using async and await
Use async and await mechanisms to call the REST API. It could be efficient as the thread continues to do other activities while waiting for the REST call to return. The problem I am facing is that, as per recommendations, I should be using await all the way up to the topmost caller, which is not possible in my case. Not following that may lead to deadlocks in ASP.NET, as mentioned here: http://msdn.microsoft.com/en-us/magazine/jj991977.aspx
3. Using Task Parallel Library
Use Parallel.ForEach with the synchronous API to invoke the server in parallel and a ConcurrentDictionary to hold the results. But this may result in thread overhead.
Also, let me know if there is any other better way to handle this. I understand people might suggest tracking performance for each approach, but I would like to understand how people have solved this problem before.
The best solution is to use async and await, but in that case you will have to take it async all the way up the call stack to the controller action.
The for loop keeps it all sequential and synchronous, so it would definitely be the slowest solution. Parallel will block multiple threads per request, which will negatively impact your scalability.
Since the operation is I/O-based (calling a REST API), async is the most natural fit and should provide the best overall system performance of these options.
First, I think it's worth considering some issues that you didn't mention in your question:
500-1000 API calls sounds like quite a lot. Isn't there a way to avoid that? Doesn't the API have some kind of bulk query functionality? Or can't you download their database and query it locally? (The more open organizations like Wikimedia or Stack Exchange often support this, the more closed ones like Microsoft or Google usually don't.)
If those options are not available, then at least consider some kind of caching, if that makes sense for you.
The number of concurrent requests allowed to the same server in ASP.NET is only 10 by default. If you want to make more concurrent requests, you will need to set ServicePointManager.DefaultConnectionLimit.
Making this many requests could be considered abuse by the service provider and could lead to blocking of your IP. Make sure the provider is okay with this kind of usage.
Now, to your actual question: I think that the best option is to use async-await, even if you can't use it all the way. You can avoid deadlocks either by using ConfigureAwait(false) at every await (which is the correct solution) or by using something like Task.Run(() => /* your async code here */).Wait() to escape the ASP.NET context (which is the simple solution).
Using something like Parallel.ForEach() is not great, because it unnecessarily wastes ThreadPool threads.
If you go with async, you should probably also consider throttling. A simple way to achieve that is by using SemaphoreSlim.
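As a rough sketch of that combination (the endpoint URL and type names are invented), going async all the way, throttling with SemaphoreSlim, and using ConfigureAwait(false) on every await:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class GeocodingClient
{
    static readonly HttpClient Http = new HttpClient();
    static readonly SemaphoreSlim Throttle = new SemaphoreSlim(20); // at most 20 calls in flight

    public static async Task<string[]> GeocodeAllAsync(IEnumerable<string> addresses)
    {
        var tasks = addresses.Select(async address =>
        {
            await Throttle.WaitAsync().ConfigureAwait(false);
            try
            {
                // Hypothetical endpoint; substitute the real third-party REST API call.
                var url = "https://api.example.com/geocode?q=" + Uri.EscapeDataString(address);
                return await Http.GetStringAsync(url).ConfigureAwait(false);
            }
            finally
            {
                Throttle.Release();
            }
        });

        return await Task.WhenAll(tasks).ConfigureAwait(false);
    }
}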

Why shouldn't I use F# asynchronous workflows for parallelism?

I have been learning F# recently, being particularly interested in its ease of exploiting data parallelism. The data |> Array.map |> Async.Parallel |> Async.RunSynchronously idiom seems very easy to understand and straightforward to use and get real value from.
So why is it that async is not really intended for this? Donald Syme himself says that PLINQ and Futures are probably a better choice. And other answers I've read here agree with that, as well as recommending TPL. (PLINQ doesn't seem too much different to the above built-in functions, as long as you're using the F# PowerPack to get the PSeq functions.)
F# and functional languages make a lot of sense for this, and some applications have achieved great success with async parallelism.
So why shouldn't I use async to execute parallel data processes? What am I going to lose by writing parallel async code instead of using PLINQ or TPL?
So why shouldn't I use async to execute parallel data processes?
If you have a tiny number of completely independent non-async tasks and lots of cores, then there is nothing wrong with using async to achieve parallelism. However, if your tasks are dependent in any way, if you have more tasks than cores, or if you push the use of async too far into the code, then you will be leaving a lot of performance on the table and could do a lot better by choosing a more appropriate foundation for parallel programming.
Note that your example can be written even more elegantly using the TPL from F# though:
Array.Parallel.map f xs
What am I going to lose by writing parallel async code instead of using PLINQ or TPL?
You lose the ability to write cache-oblivious code and, consequently, will suffer from lots of cache misses and, therefore, all cores stalling while waiting for shared memory, which means poor scalability on a multicore machine.
The TPL is built upon the idea that child tasks should execute on the same core as their parent with a high probability and, therefore, will benefit from reusing the same data because it will be hot in the local CPU cache. There is no such assurance with async.
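To make the comparison concrete, here is a small sketch (expensiveTransform is a stand-in for real CPU-bound work) of the async-based idiom from the question next to the TPL-backed Array.Parallel.map:

let expensiveTransform (x: int) = x * x   // stand-in for real CPU-bound work

let xs = [| 1 .. 100_000 |]

// async-based fan-out: one Async per element, all scheduled through Async.Parallel
let viaAsync =
    xs
    |> Array.map (fun x -> async { return expensiveTransform x })
    |> Async.Parallel
    |> Async.RunSynchronously

// TPL-backed data parallelism: partitions the array and keeps related work on the same core
let viaTpl = Array.Parallel.map expensiveTransform xs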
I wrote an article that re-implements one C# TPL sample using both Task and Async, which also has some comments on the difference between the two. You can find it here and there is also a more advanced async-based version.
Here is a quote from the first article that compares the two options:
The choice between the two possible implementations depends on many factors. Asynchronous workflows were designed specifically for F#, so they more naturally fit with the language. They offer better performance for I/O bound tasks and provide more convenient exception handling. Moreover, the sequential syntax is quite convenient. On the other hand, tasks are optimized for CPU bound calculations and make it easier to access the result of calculation from other places of the application without explicit caching.
I always figured it's what TPL, PLINQ, etc. give you over and above what Async does. (Cancellation mechanisms are the first that come to mind.) This question has some better answers.
This article hints at a slight performance advantage to TPL, but probably not enough to be significant.

Resources