Are coroutines just syntactic sugar around completion handlers? - asynchronous

Are coroutines just syntactic sugar around completion blocks, with completion blocks being created under the hood? Or is the concept of coroutines much more complex and broad than just a compiler trick, aka syntactic sugar?

It's not just syntactic sugar, not at all. Coroutines do not block threads; they just suspend execution, and thus they encourage non-blocking concurrent programming.
Coroutines do not rely on features of the operating system or the JVM (e.g. they are not mapped to native threads). Instead, coroutines, and suspend functions in particular, are transformed by the compiler into a state machine capable of handling suspensions in general and passing suspended coroutines around while keeping their state. This is enabled by continuations, which the compiler adds as a parameter to each and every suspending function; this technique is called "continuation-passing style".
For details please have a look at https://github.com/Kotlin/kotlin-coroutines/blob/master/kotlin-coroutines-informal.md
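As a rough illustration of that continuation-passing-style transformation, here is a hand-written sketch of the shape the compiler produces; the function name is made up and the real generated state machine is considerably more involved:

import kotlin.coroutines.Continuation

// A hypothetical suspending function, as you would write it.
suspend fun fetchUser(id: Int): String =
    "user-$id"   // imagine real suspension points (network calls etc.) here

// Roughly the shape the compiler lowers it to: `suspend` disappears and a
// Continuation parameter is appended (continuation-passing style). The return
// type becomes Any? because a call can either return a result directly or a
// special COROUTINE_SUSPENDED marker. Heavily simplified sketch.
fun fetchUserCps(id: Int, completion: Continuation<String>): Any? {
    // The real body is a generated state machine that stores locals and a
    // label inside a continuation object and resumes through `completion`
    // after each suspension point.
    return "user-$id"
}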

No, coroutines are not syntactic sugar. You can think of coroutines as functions that can interact with their caller. When you call a normal function, say foo, you pass control to foo and have to wait until foo either completes or throws an exception. Coroutines are functions that can pass control back to the caller, and the caller can decide whether the coroutine should continue, and when and how it should continue. This makes it possible to implement things that are special language constructs in other languages:
Generators (aka the yield keyword) like in C# and JavaScript. The caller continues execution of the coroutine when the user wants a new value from the iterator. The coroutine passes control back to the caller by calling the yield() function, which also passes a value to the caller.
Async/await like in C# and JavaScript. The caller continues execution of the coroutine when a Future (similar to a Task or Promise) gets resolved. The coroutine passes control back to the caller by calling the await() function. The caller passes a value to the coroutine when the Future gets resolved, and the coroutine observes this value as the result of the await() call.
Goroutines/channels in Go.
Unlike C#, JavaScript or Go, Kotlin does not implement any of these features with special syntax. Instead, Kotlin provides only the suspend fun syntax, and then you can implement these features yourself (or get existing ones from the corresponding library, kotlinx.coroutines).
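For instance, the generator case from the list above can be built on plain suspend functions using the standard library's sequence builder; a minimal sketch:

// Generator-style iteration built on suspend functions: the standard library's
// `sequence` builder suspends at each `yield` and resumes only when the caller
// asks the iterator for the next value.
fun fibonacci(): Sequence<Long> = sequence {
    var a = 0L
    var b = 1L
    while (true) {
        yield(a)                 // suspend here and hand `a` back to the caller
        val next = a + b
        a = b
        b = next
    }
}

fun main() {
    println(fibonacci().take(10).toList())  // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
}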

Coroutines are syntactic sugar around asynchronous procedures, or, better said, actors, which are repeatable asynchronous procedures.

Here is an interesting article that describes implementing async/await in their system: http://joeduffyblog.com/2015/11/19/asynchronous-everything/
Along with this, we added the await and async keywords. A method could be marked async:
async int Foo() { ... }
All this meant was that it was allowed to await inside of it:
async int Bar() {
    int x = await Foo();
    ...
    return x * x;
}
Originally this was merely syntactic sugar for all the callback goop above, like it is in C#. Eventually, however, we went way beyond this, in the name of performance, and added lightweight coroutines and linked stacks.
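To tie this back to the original question about completion handlers: in Kotlin the same relationship is visible whenever a callback-based API is bridged into a coroutine with suspendCoroutine. A rough sketch, where loadUserAsync stands in for some hypothetical completion-handler API:

import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine

// Hypothetical callback-based ("completion handler") API.
fun loadUserAsync(id: Int, onResult: (String) -> Unit, onError: (Throwable) -> Unit) {
    // imagine this dispatches work somewhere and invokes one of the handlers later
    onResult("user-$id")
}

// Bridging it into the coroutine world: suspendCoroutine captures the current
// continuation, and the callbacks simply resume it. Code calling loadUser reads
// as straight-line code even though the underlying API is callback-based.
suspend fun loadUser(id: Int): String = suspendCoroutine { continuation ->
    loadUserAsync(
        id,
        onResult = { user -> continuation.resume(user) },
        onError = { error -> continuation.resumeWithException(error) }
    )
}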

Related

Is asynchronous a kind of concurrency

I already know that when calling an asynchronous method, e.g. myAsync(), the caller, e.g. Caller(), can continue executing without waiting for it to finish. But on the other hand, myAsync() is also executing.
public void Caller() {
    myAsync();        // running
    doSomething();    // running
}
The code after myAsync() in Caller() will execute at the same time as myAsync(). So could this situation be considered a kind of concurrency?
Update: I prefer to use JavaScript and C#.
That very much depends on the concurrency model of your programming language.
If your language allows you to define methods that "implicitly" run in parallel, then of course calling myAsync() would use some kind of "concurrency mechanism" (for example a thread) to do whatever that method is supposed to do.
In that sense, the answer to your question is yes. But it might be important to point out that many "common" programming languages (such as Java) would only "work" in such a context if myAsync() itself created some thread and then ran "something" on that thread.
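A minimal JVM-flavoured sketch of that situation in Kotlin (myAsync and doSomething mirror the hypothetical functions in the question): myAsync() hands its work to a newly created thread and returns immediately, so the caller keeps running concurrently with it:

import kotlin.concurrent.thread

// Hypothetical async method: starts its work on another thread and returns at once.
fun myAsync() {
    thread {
        println("myAsync working on ${Thread.currentThread().name}")
    }
}

fun doSomething() {
    println("doSomething running on ${Thread.currentThread().name}")
}

fun caller() {
    myAsync()      // returns right away; its work continues on its own thread
    doSomething()  // runs concurrently with myAsync's work
}

fun main() = caller()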

Tornado and concurrent.futures.Executor

I'm learning about async and Tornado and struggling. First off, is it possible to use an Executor class in Tornado?
In the example below I'm creating a websocket, and when receiving a message I want to run check() as another process in the background. This is a contrived example just for my own learning. Neither INSIDE nor AFTER gets printed. Why do we need async-specific packages like Motor if we have this Executor class?
Also, in all the examples of Tornado I've seen, the @gen.coroutine decorators are always used on classes that extend tornado.web.RequestHandler; in my example I'm using a tornado.websocket.WebSocketHandler. Can @gen.coroutine be used inside this class as well?
Finally, can anyone recommend a book or in-depth tutorial on this subject? I bought "Introduction to Tornado", however it's a little outdated because it uses tornado.gen.engine.
import concurrent.futures
import time

import tornado.web
import tornado.websocket

def check(msg):
    time.sleep(10)
    return msg

class SessionHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        pass

    def on_close(self):
        pass

    # not sure if I needed this decorator or not?
    @tornado.web.asynchronous
    def on_message(self, message):
        print("INSIDE")
        with concurrent.futures.ProcessPoolExecutor() as executor:
            f = executor.submit(check, "a")
            result = yield f
        print("AFTER")
In order to use yield, you must use @tornado.gen.coroutine (or @gen.engine). @tornado.web.asynchronous is unrelated to the use of yield, and is generally only used with callback-based handlers (@asynchronous only works in regular handlers, not websockets). Change the decorator and you should see your print statements run.
Why would someone write an asynchronous library like Motor instead of using an executor like this? For performance. A thread or process pool is much more expensive (mainly in terms of memory) than doing the same thing asynchronously. There's nothing wrong with using an executor when you need a library that has no asynchronous counterpart, but it's better to use an asynchronous version if it's available (and if the performance matters enough, to write an asynchronous version when you need one).
Also note that ProcessPoolExecutor can be tricky: all arguments submitted to the executor must be picklable, and this can be expensive for large objects. I would recommend a ThreadPoolExecutor in most cases where you just want to wrap a synchronous library.

Integrating both synchronous and asynchronous libraries

Can synchronous and asynchronous functions be integrated into one call/interface whilst maintaining static typing? If possible, can it remain neutral with inheritance, i.e. not wrapping sync methods in async or vice versa (though this might be the best way).
I've been reading around and see it's generally recommended to keep these separate (http://www.tagwith.com/question_61011_pattern-for-writing-synchronous-and-asynchronous-methods-in-libraries-and-keepin and Maintain both synchronous and asynchronous implementations). However, the reason I want to do this is that I'm creating a behaviour tree framework for the Dart language and am finding it hard to mix both sync and async 'nodes' together to iterate through. It seems these might need to be kept separate, meaning nodes that would suit a sync approach would have to be async, or the opposite, if they are to be within the same 'tree'.
I'm looking for a solution particularly for Dart, although I know this is firmly in the territory of general programming concepts. I'm open to the possibility that this can't be achieved, but it's worth a shot.
Thank you for reading.
You can of course use sync and async functions together. What you can't do is go back to sync execution after a call to an async function.
Maintaining both sync and async methods is, in my opinion, mostly a waste of time. Sometimes sync versions are convenient so that you don't have to invoke an async call for some simple operation, but in general async is an integral part of Dart. If you want to use Dart, you have to get used to it.
With the new async/await feature you can write code that uses async functions almost the same way as code that uses only sync functions.

Usage of F# async workflows

As this question is huge, I will give my view on it so that you can simply tell me whether I am right or not, and if not, where to correct me. If my view is superficial, please present an overview of F# async usage. In my understanding, to write an async program you need to put async code into an "async" block like async { expression }, and use "let!" or "use!" to bind names to primitives; then you need to use a method like "Async.Run" to run this async expression. In addition, you can use exception handling to deal with exceptions, and cancellation to cancel when necessary. I also know there are several primitives defined in the F# core libraries, and F# extensions for I/O operations. I just need to make sure of the relation between these things. If you think my view on async workflows is superficial, please give an overview of usage like what I have mentioned above. Thank you very much!
This question is huge, so at best, I can highlight some ideas and point you to learning resources and examples.
The description in the question isn't wrong (though there is no Async.Run function). But the main point about Asyncs is how they execute and why the way they execute is useful.
An async block defines a piece of code that becomes an Async<'T> object, which can be seen as a computation that can be executed at a later time. The Async returns an object of type 'T when its execution has completed -- if it has neither failed nor been cancelled.
let!, do! and use! are used inside of an async block to run another Async and, in the cases of let! and use!, bind its result to a name inside the current async. Unlike for example normal let, which simply binds any value to a name, the versions with an exclamation mark explicitly "import" the result of another async.
When an Async depends on another and waits for its result, such as with a let! binding, it normally does not block a thread. Asyncs utilize the .NET thread pool for easy parallel execution, and after an Async completes that another Async depends on, a continuation runs the remainder of the dependent Async.
The Async functions offer many ready-made ways to run Asyncs, such as Async.Start, which is a simple dispatch of an Async with no result, Async.RunSynchronously, which runs the Async and returns its result as if it were a normal function, Async.Parallel, which combines a sequence of Asyncs into one that executes them in parallel, or Async.StartAsTask, which runs an Async as an independent task. Further methods allow composition of Asyncs in terms of cancellation, or explicit control over continuation after an exception or cancellation.
Asyncs are very useful where waiting times are included: otherwise blocking calls can use Asyncs to not block execution, for example in I/O bound functions.
The best introductions to F# Asyncs I know are written, or co-written, by Don Syme, the lead designer of F#:
The chapter Reactive, Asynchronous, and Parallel Programming in the book Expert F#
A blog post with examples of asynchronous agents
The blog post introducing Asyncs in late 2007

What's the relationship between the async/await pattern and continuations?

I'm wondering what's the relationship between the async/await pattern (as known from Scala, F#, C#, etc.) and continuations:
Is the async/await pattern a limited subset of full-blown continuations? (If true, how are continuations more expressive?)
Are continuations just one possible implementation technique for async/await? (If true, what other implementation approaches exist?)
Or are async/await and continuations just orthogonal concepts where the only commonality is that they both enable some abstraction of control flow/data flow?
I would say that the relation between the two is this: async-await is a technique programming languages use so that you can write code that looks synchronous (e.g. no explicit continuation delegates), but that is actually executed asynchronously. This is achieved by creating an object that represents the current state of execution of the function and registering that as the continuation of the awaited operation.
In short, async-await uses continuations.
Is the async/await pattern a limited subset of full-blown continuations? (If true, how are continuations more expressive?)
You could say that. Continuations are a more general concept, async-await just uses them to achieve asynchrony.
For example, in continuation-passing style programming, you could implement exception handling by having two continuations for each operation: one for the success case and one for the failure case. This use of continuations has nothing to do with async-await (you would have to write each continuation explicitly, probably as a lambda).
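A small sketch of that two-continuation style (the function and parameter names are illustrative, not taken from any particular library):

// Continuation-passing style with two explicit continuations, one for the
// success case and one for the failure case: error handling without throwing.
fun divideCps(
    a: Int,
    b: Int,
    onSuccess: (Int) -> Unit,
    onFailure: (Throwable) -> Unit
) {
    if (b == 0) {
        onFailure(ArithmeticException("division by zero"))
    } else {
        onSuccess(a / b)
    }
}

fun main() {
    divideCps(10, 2, { println("result: $it") }, { println("failed: $it") })
    divideCps(1, 0, { println("result: $it") }, { println("failed: $it") })
}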
Are continuations just one possible implementation technique for async/await? (If true, what other implementation approaches exist?)
I'd say that the concept of continuation is pretty central to async-await.
The core idea behind async-await is to stop executing the function for now and resume it at a later time. For that, you need some kind of object that can be used to do the resuming, which is exactly what a continuation is.
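To make that concrete, here is a minimal Kotlin sketch using only the standard-library coroutine primitives: awaiting captures the current continuation as an object, and some other piece of code later decides when to resume it (the Signal class is purely illustrative and ignores thread-safety and dispatching):

import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.resume
import kotlin.coroutines.startCoroutine
import kotlin.coroutines.suspendCoroutine

// A tiny "resume me later" primitive: await() stores the current continuation,
// and fire() resumes it with a value at some later point.
class Signal<T> {
    private var waiter: Continuation<T>? = null

    suspend fun await(): T = suspendCoroutine { cont -> waiter = cont }

    fun fire(value: T) {
        waiter?.resume(value)
        waiter = null
    }
}

fun main() {
    val signal = Signal<String>()

    // startCoroutine runs the suspending block until its first suspension point.
    val block: suspend () -> Unit = {
        println("waiting...")
        val answer = signal.await()   // execution stops here, captured as an object
        println("resumed with: $answer")
    }
    block.startCoroutine(Continuation(EmptyCoroutineContext) { /* coroutine finished */ })

    // Later, something else decides when and how the coroutine continues.
    signal.fire("hello")
}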
