iPhone - GCD sending async operations to a serial dispatch queue

Suppose I am making an async connection to a web service, which by definition, since it is async, runs on a separate thread from the main thread.
Now let's say I put this job, or block of code, on a serial dispatch queue. Since a serial dispatch queue doesn't process more than one job at a time, but the job I am sending is already async, would the queue consider the job done once the call that starts the async work returns? Or would it actually wait for the async work to finish before processing the next job?
What about a concurrent queue: would the thread servicing the concurrent queue spawn yet another thread to process the async operation?
EDIT: I realize my question is not really clear, so here it is again:
If I am using the same serial dispatch queue, and I dispatch_async a block of code that itself performs an async operation, for example an NSURLConnection initWithRequest:delegate: that runs async, will the serial queue consider the block completed as soon as the async call is made, and will that async call spawn yet another thread? Or will the queue still wait for job 1, which is already async, to finish before processing the second job?

When you dispatch to a serial queue, each dispatched block is processed one after the other. So if your first block takes a long time to run, the second block will not start until the long-running first block has finished. Note that a block counts as finished as soon as it returns: if the block merely starts an asynchronous operation such as an NSURLConnection and then returns, the queue treats it as done and moves on; it does not wait for the connection's delegate callbacks.
If you enqueue with dispatch_async, the new block is simply put at the end of the queue; dispatch_async returns immediately and you can carry on. But the block won't execute until all previous blocks have finished.
dispatch_sync, on the other hand, waits until the block gets its turn to execute and finishes. So in your case, dispatch_sync would block until the long-running first block has finished.
If you dispatch to a concurrent queue, the second block gets to run on a new thread, so the first block does not prevent the second from running.
You could also create two queues and dedicate them to different tasks, for example one queue only for your web-service work and another for other tasks. It depends on how these operations relate to each other: which may run in parallel and which must run one after another.
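Here is a minimal sketch in C with libdispatch (the queue label and timings are illustrative; compile with clang -fblocks on macOS) showing that a serial queue serializes only the blocks themselves, not the async work a block starts:

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

    dispatch_async(serial, ^{
        // Job 1: starts async work on another queue, then returns immediately.
        // As far as the serial queue is concerned, job 1 is now finished.
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
            sleep(3); // stand-in for a slow network request
            printf("async work started by job 1 is done\n");
        });
        printf("job 1's block returned\n");
    });

    dispatch_async(serial, ^{
        // Job 2 runs as soon as job 1's *block* returns; it does not wait
        // for the network-style work job 1 started.
        printf("job 2 running\n");
    });

    dispatch_main(); // park the main thread so the queues can run (never returns)
}

"job 2 running" prints before the simulated network work completes, because job 1's block returned as soon as it dispatched that work.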

Related

With gRPC can I have multiple RPC calls in progress over a single connection?

I'm having trouble getting multiple RPC calls to operate over a single connection. Server and client are both operating asynchronously using a completion queue.
I fire off a streaming call (getData), which sends one reply per second for 10 seconds. I wait a couple of seconds, then try to fire off a getVersion call (a unary call), and it doesn't come back until the getData call completes. Examination of the server shows that the getVersion call never hit the server until getData finished.
And if I try to start multiple calls while the first getData is running, they all run once the first getData finishes. In fact, they then run in parallel: for instance, if I fire off multiple getData calls, I can see all of them running in parallel after the first (blocking) getData finishes.
It's like you can queue up all you want, but once something is in progress you can't get a new call started on that channel?
Is it supposed to behave this way? It doesn't seem like the correct behavior, but my experience with gRPC is somewhat limited.
The problem was a bug in the way I was waiting for the next time to send data. I was blocking things that I shouldn't have been.
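For anyone hitting the same symptom, here is a hedged sketch of the shape of such a fix, assuming a C++ async server; StreamState and ScheduleNextWrite are illustrative names, not from the question's code. The idea is to pace a stream with grpc::Alarm rather than sleeping on the thread that drains the CompletionQueue, since blocking that thread stalls every RPC multiplexed on the queue:

#include <grpcpp/alarm.h>
#include <grpcpp/grpcpp.h>
#include <chrono>

// Per-stream state driven by the completion-queue loop. When the alarm
// fires, cq->Next() returns `this` as the tag and the state machine can
// perform the next Write without ever having slept on the loop thread.
struct StreamState {
    grpc::Alarm alarm;

    void ScheduleNextWrite(grpc::ServerCompletionQueue* cq) {
        // WRONG: sleeping here (std::this_thread::sleep_for) would stall
        // every call multiplexed on this completion queue, which is the
        // kind of blocking that serializes the RPCs.
        // RIGHT: keep the loop spinning and let the alarm wake us later.
        alarm.Set(cq, std::chrono::system_clock::now() + std::chrono::seconds(1),
                  this);
    }
};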

How does this.unblock work in Meteor?

The docs say:
Call inside a method invocation. Allow subsequent method from this client to begin running in a new fiber.
On the server, methods from a given client run one at a time. The N+1th invocation from a client won't start until the Nth invocation returns. However, you can change this by calling this.unblock. This will allow the N+1th invocation to start running in a new fiber.
How can new code start running in a new fiber if Node runs in a single thread? Does it only unblock when we get to an I/O request, but no unblock would happen if we were running a long computation?
Fibers are an abstraction layer on top of Node's Event Loop. They change how we write code that interacts with the Event Loop, but they do not change how Node works. Meteor, among other things, is sort of an API to Fibers.
Each client request in Meteor creates a new fiber. Meteor methods called by the client will, by default, queue up behind each other. This is likely the default behavior because of an assumption that you want Mongo up to date for all clients before continuing execution. However, if your clients do not need to work with the latest up-to-date globals or data, you can use this.unblock() to put each of these client requests into Node's Event Loop without waiting for the previous one to complete. We are, however, still constrained to Node's Event Loop.
So this.unblock() works by allowing all client requests to that method to enter the Event Loop (non-IO-blocking, callback-based execution). However, as Node is still a single-threaded application, CPU-intensive operations will block the callbacks in the Event Loop. That is why Node is not a good choice for CPU-intensive work, and that doesn't change with Meteor or Meteor's interaction with Fibers and the Event Loop.
A simple analogy: the Event Loop, our single Node thread, is a highway. Each car on the highway is a complex event-driven function that will eventually exit the highway when its callbacks complete. Fibers let us more easily control who gets on the highway and when. Meteor methods allow one car on the highway at a time by default, but when you use this.unblock() properly you allow multiple cars on the highway. However, a CPU-intensive operation on any fiber will cause a traffic jam; IO and network work will not.
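A minimal sketch of the difference, assuming Meteor's email package; the method names and addresses are made up:

Meteor.methods({
  sendEmail(to, body) {
    // Without this.unblock(), a second method call from the same client
    // would queue up behind this slow, IO-bound call.
    this.unblock();
    // While this network IO is in flight, the fiber yields and other
    // methods from the same client are free to run.
    Email.send({
      to,
      from: "noreply@example.com", // illustrative address
      subject: "Hi",
      text: body,
    });
  },

  quickPing() {
    // With sendEmail unblocked, this returns immediately even if the same
    // client called sendEmail first.
    return "pong";
  },
});

Without the this.unblock() call, quickPing from the same client would wait for sendEmail's network IO to finish.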

ASP.NET and async - how it works?

I know this is a common question, but I've read a kiloton of articles and still feel confused. By now I think it would have been better not to read them at all )).
So, how does ASP.NET work (considering only threads)?
An HTTP request is served by a thread from the thread pool.
While the request is being processed, this thread is busy, because the request is processed on exactly this thread.
When request processing finishes, the thread returns to the thread pool and the server sends a response.
Is this described behaviour right?
What really happens when I start a new task inside an ASP.NET MVC controller?
public ActionResult Index()
{
    var task1 = Task.Factory.StartNew(() => DoSomeHeavyWork());
    return View();
}

private static async Task DoSomeHeavyWork()
{
    await Task.Delay(5000);
}
The controller action starts to execute on the thread that processes the current request, T1.
The thread pool allocates another thread (T2) for task1.
task1 starts "immediately" on T2.
The View result returns "immediately".
ASP.NET does some work, the server sends a response, T1 returns to the thread pool, and T2 is still alive.
Some time later, when DoSomeHeavyWork has finished, T2 is returned to the thread pool.
Is it correct?
Now let's look at an async action:
public async Task<ActionResult> Index()
{
    await DoSomeHeavyWork();
    return View();
}
I understand the difference from the previous code sample, but not the process. In this example the behaviour is the following:
The action starts to execute on the thread that processes the current request, T1.
DoSomeHeavyWork "immediately" returns a task; let's call it "task1" too.
T1 returns to the thread pool.
After DoSomeHeavyWork finishes, the Index action continues to execute.
After the Index action has executed, the server sends a response.
Please explain what happens between points 2 and 5. The questions are:
Is DoSomeHeavyWork processed inside task1, or where (where is it "awaited")? I think this is the key question.
Which thread will continue to process the request after the await: any new one from the thread pool, right?
The request takes a thread from the thread pool, but the response will not be sent until DoSomeHeavyWork has finished, and it doesn't matter on which thread this method executes. In other words, for a single request with a single concrete task (DoSomeHeavyWork) there is no benefit to using async. Is that correct?
If the previous statement is correct, then I don't understand how async can improve performance for multiple requests with the same single task. I'll try to explain. Let's assume the thread pool has 50 threads available to handle requests. A single request is processed by at least one thread from the pool, and if the request starts other tasks, their threads are taken from the pool too: e.g., a request takes one thread for itself, starts 5 tasks in parallel and waits on all of them, so the pool has 50 - 1 - 5 = 44 free threads left to handle incoming requests. That is parallelism: we can improve performance for a single request, but we reduce the number of requests that can be processed. So with respect to request processing in ASP.NET, I suppose that only a task that somehow uses an IO completion thread can achieve the goal of async (TAP). But how does an IO completion thread call back a thread-pool thread in that case?
Is this described behaviour right?
Yes.
Is it correct?
Yes.
Is DoSomeHeavyWork processed inside task1, or where (where is it "awaited")? I think this is the key question.
In the code above, DoSomeHeavyWork asynchronously waits for Task.Delay to complete. No new thread is spun up for that: when the delay completes, the continuation runs on a thread-pool thread, though there is no guarantee it will be the same thread that started the method.
Which thread will continue to process the request after the await?
Because we're talking about ASP.NET, it will be an arbitrary thread-pool thread, with the HttpContext marshaled onto it. If this were a WinForms or WPF app, you'd be back on the UI thread right after the await, provided you didn't use ConfigureAwait(false).
The request takes a thread from the thread pool, but the response will not be sent until DoSomeHeavyWork has finished, and it doesn't matter on which thread this method executes. In other words, for a single request with a single concrete task (DoSomeHeavyWork) there is no benefit to using async. Is that correct?
In this particular case, you won't see the benefits of async. async shines when you have concurrent requests hitting the server and a lot of them are doing IO-bound work. When you use async while hitting the database, for example, the thread-pool thread is freed for the duration of the query, allowing the same thread to process other requests in the meantime.
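For illustration, a hedged sketch of such an IO-bound action (not from the question's code; FetchPage and the URL are made up, and it assumes System.Net.Http in an MVC controller):

public async Task<ActionResult> FetchPage()
{
    using (var client = new HttpClient())
    {
        // While the HTTP request is in flight, no thread-pool thread is
        // tied up by this action; the thread serves other requests.
        string body = await client.GetStringAsync("http://example.com/");
        // A (possibly different) pool thread resumes the action here.
        return Content(body);
    }
}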
But how does an IO completion thread call back a thread-pool thread in that case?
You have to separate parallelism from concurrency. If you need computation power for doing CPU-bound work in parallel, async isn't the tool that will make it happen. On the other hand, if you have lots of concurrent IO-bound operations, like hitting a database for CRUD operations, you can benefit from async by freeing the thread while the IO operation executes. That's the major point of async.
The thread pool has a dedicated pool of IO completion threads, as well as worker threads, both of which you can view by invoking ThreadPool.GetAvailableThreads. With IO-bound operations, the thread that picks up the completion callback is usually an IO completion thread, not a worker thread. The two are separate pools.
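A small console sketch (not ASP.NET-specific) that shows the two separate pools ThreadPool.GetAvailableThreads reports:

using System;
using System.Threading;

class PoolInfo
{
    static void Main()
    {
        // Worker threads and IO completion threads are reported separately
        // because they really are two different pools.
        ThreadPool.GetAvailableThreads(out int workers, out int io);
        Console.WriteLine($"available worker threads:        {workers}");
        Console.WriteLine($"available IO completion threads: {io}");
    }
}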

How is asynchronous callback implemented?

How do different languages implement asynchronous callbacks?
For example, in C++ one needs a "monitor thread" to start a std::async without waiting on it; if it is started on the main thread, the main thread has to wait for the callback:
std::thread{[]{ std::async(callback_function).get(); }}.detach();
vs.
std::async(callback_function).get(); // the main thread has to wait
What about asynchronous callbacks in JavaScript? In JS callbacks are used massively... How does V8 implement them? Does V8 create a lot of threads to listen for them and execute each callback when it gets a message? Or does it use one thread that keeps polling all the callbacks?
For example,
setInterval(function(){},1000);
setInterval(function(){},2000);
Does V8 create two threads and monitor each callback's state, or does it have some kind of pool to monitor all the callbacks?
V8 does not implement asynchronous functions with callbacks (including setInterval). The engine simply provides a way to execute JavaScript code.
As a V8 embedder, you can create a setInterval JavaScript function linked to a native C++ function that does what you want, for example creating a thread or scheduling some job. At that point it is your responsibility to call the provided callback when necessary. Only one thread at a time can use the V8 engine (a V8 isolate instance) to execute code, which means synchronization is required if a callback needs to be called from another thread. V8 provides a locking mechanism if you need this.
Another, more common approach to this problem is to create a queue of functions for V8 to execute and use an infinite queue-processing loop to execute the code on one thread. This is basically an event loop. This way you don't need an execution lock; instead, another thread pushes the callback function onto the queue.
So it depends on the browser/Node.js/other embedder how they implement it.
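A minimal sketch in plain C++ (no V8 involved; all names are illustrative) of that "queue of functions plus one processing loop" pattern:

#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

int main() {
    std::queue<std::function<void()>> tasks; // callbacks waiting to run
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Producer: stands in for a timer or IO thread posting callbacks.
    std::thread producer([&] {
        for (int i = 0; i < 3; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            {
                std::lock_guard<std::mutex> lock(m);
                tasks.push([i] { std::cout << "callback " << i << "\n"; });
            }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lock(m); done = true; }
        cv.notify_one();
    });

    // The "event loop": the only thread that ever executes callbacks,
    // so the callback bodies themselves need no execution lock.
    while (true) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return !tasks.empty() || done; });
        if (tasks.empty() && done) break;
        auto task = std::move(tasks.front());
        tasks.pop();
        lock.unlock();
        task(); // run outside the lock
    }
    producer.join();
}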
TL;DR: To implement an asynchronous callback is basically to allow the control flow to proceed without blocking on the callback. Until the callback function is finally called, the control flow is free to execute anything that does not depend on the callback's result; e.g., the caller can proceed as if the callback function had already returned, or the caller may yield control to other functions.
Since the question is about implementation in general rather than in a specific language, my answer tries to be general enough to cover the commonalities.
Different languages have different implementations of asynchronous callbacks, but the principles are the same. The key is to decouple the control flow from the code being executed. These correspond to the execution context (like a thread of control with a runtime stack) and the executed task. Traditionally, execution context and executed task are associated 1:1. With asynchronous callbacks, they are decoupled.
1. The principles
To decouple the control flow from the code, it helps to think of every asynchronous callback as a conditional task. When the code registers an asynchronous callback, it effectively installs the task's condition in the system. The callback function is then invoked when the condition is satisfied. To support this, a condition-monitoring mechanism and a task scheduler are needed, so that:
The programmer does not need to track the callback's condition;
Before the condition is satisfied, the program may proceed to execute other code that does not depend on the callback's result, without blocking on the condition;
Once the condition is satisfied, the callback is guaranteed to execute. The programmer does not need to schedule its execution;
After the callback is executed, its result is accessible to the caller.
2. Implementation for Portability
For example, if your code needs to process data from a network connection, you do not need to write code that checks the connection state. You only register a callback that will be invoked once data is available for processing. The dirty work of connection checking is left to the language implementation, which is known to be tricky, especially when we talk about scalability and portability.
The language implementation may employ asynchronous IO, non-blocking IO, a thread pool, or whatever technique to check the network state for you; once the data is ready, the callback function is scheduled to execute. The control flow of your code then appears to go directly from callback registration to callback execution, because the language hides the intermediate steps. This is the portability story.
3. Implementation for Scalability
Hiding the dirty work is only part of the story. The other part is that your code itself does not need to block waiting for the task condition. It makes no sense to wait for one connection's data when you have lots of simultaneous network connections and some of them may already have data ready. The control flow of your code can simply register the callback and then move on to other tasks (e.g., the callbacks whose conditions have already been satisfied), knowing that the registered callbacks will be executed anyway when their data become available.
If satisfying the callback's condition does not involve much CPU (e.g., waiting for a timer, or for data from the network), and the callback function itself is lightweight, then a single CPU (or a single thread) can process lots of callbacks concurrently, such as processing incoming network requests. Here the control flow may look like it jumps from one callback to another. This is the scalability story.
4. Implementation for Parallelism
Sometimes the callbacks are not pending on a non-blocking IO condition but on blocking operations such as page faults; or the callbacks do not rely on any condition at all and are pure computation logic. In that case, an asynchronous callback does not save you CPU waiting time (because there is no idle waiting). But since an asynchronous callback implies that the callback function can execute in parallel with the caller or with other callbacks (subject to data-sharing and synchronization constraints), the language implementation can dispatch the callback tasks to different threads, achieving the benefits of parallelism if the platform has more than one hardware thread context. It still improves scalability.
5. Implementation for Productivity
Productivity with asynchronous callbacks may suffer when the code has to deal with chained callbacks, i.e., callbacks that register other callbacks recursively, known as callback hell. There are ways out.
The semantics of an asynchronous callback can be examined so as to replace the hopelessly nested callbacks with other language constructs. Basically, there are two different views of a callback (a sketch of both styles follows at the end of this answer):
From the data-flow point of view: asynchronous callback = event + task. Registering a callback essentially generates an event that will be emitted when the task condition is satisfied. In this view, chained callbacks are just events whose processing triggers further event emission. This maps naturally onto event-driven programming, where task execution is driven by events. Promises and Observables may also be regarded as event-driven concepts. When multiple events are ready concurrently, their associated tasks can be executed concurrently as well.
From the control-flow point of view: registering a callback yields control to other code, and the callback's execution simply resumes the control flow once its condition is satisfied. In this view, chained asynchronous callbacks are just resumable functions. Multiple callbacks can be written one after another in the traditional "synchronous" way, with a yield operation (or await) in between. It effectively becomes a coroutine.
I haven't discussed the implementation of data passing between the asynchronous callback and its caller, but that is usually not difficult when using shared memory, where the caller and the callback can share data. Actually, Golang's channels can also be considered along the lines of yield/await, but with a focus on data passing.
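Here is the sketch promised above: a small JavaScript example (step1/step2 are made-up functions) showing the same chained work written with nested callbacks and then as a resumable function with await:

// Callback chaining ("callback hell" in miniature):
function step1(cb) { setTimeout(() => cb(1), 10); }
function step2(x, cb) { setTimeout(() => cb(x + 1), 10); }

step1((a) => {
  step2(a, (b) => {
    console.log("callback style:", b);
  });
});

// The control-flow view: registration yields, completion resumes.
const step1P = () => new Promise((resolve) => step1(resolve));
const step2P = (x) => new Promise((resolve) => step2(x, resolve));

(async () => {
  const a = await step1P();
  const b = await step2P(a);
  console.log("await style:", b);
})();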
The callbacks that are passed to browser APIs, like setTimeout, are pushed onto the same browser queue when the API has done its job.
The engine checks this queue when the stack is empty and pushes the next callback onto the JS stack for execution.
You don't have to monitor the progress of the API calls: you asked for a job to be done, and the browser will put your callback in the queue when it's done.
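A tiny sketch of that queue behaviour: a timer callback cannot run until the stack is empty, so a busy loop delays it well past its nominal 0 ms.

setTimeout(() => console.log("callback ran"), 0);

const start = Date.now();
while (Date.now() - start < 500) {
  // busy-wait: the stack stays occupied for ~500 ms
}
console.log("stack finally empty"); // printed first; the callback runs after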

Async: Why AsyncDownloadString?

Alright... I'm getting a bit confused here. The async monad allows you to use let!, which starts the computation of the given async method and suspends until the result is available... that's all fine, I do understand that.
Now what I don't understand is why they made an extension for the WebClient class named AsyncDownloadString. Couldn't you just wrap the normal DownloadString inside an async block? I'm pretty sure I'm missing an important point here, since my testing shows that DownloadString wrapped inside an async block still blocks the thread.
There is an important difference between the two:
The DownloadString method is synchronous - the thread that calls the method will be blocked until the whole string is downloaded (i.e. until the entire content is transferred over the internet).
On the other hand, AsyncDownloadString doesn't block the thread for long. It asks the operating system to start the download and then releases the thread. When the operating system receives some data, it picks a thread from the thread pool, the thread stores the data in a buffer, and the thread is released again. When all the data has been downloaded, the method reads it from the buffer and resumes the rest of the asynchronous workflow.
In the first case, the thread is blocked for the entire download. In the second case, threads are busy only for very short periods of time (when processing received data, not while waiting for the server).
Internally, the AsyncDownloadString method is just a wrapper for DownloadStringAsync, so you can also find more information in the MSDN documentation.
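A minimal F# sketch contrasting the two inside async workflows (the URL is illustrative): wrapping the synchronous DownloadString in async { } still holds a thread for the whole download; only AsyncDownloadString releases it.

open System
open System.Net

let url = "http://example.com/" // illustrative URL

let blocking = async {
    use wc = new WebClient()
    // Synchronous call: the thread is held for the whole download,
    // even though we are inside an async block.
    let html = wc.DownloadString(url)
    return html.Length
}

let nonBlocking = async {
    use wc = new WebClient()
    // let! suspends the workflow without holding a thread while the
    // operating system performs the download.
    let! html = wc.AsyncDownloadString(Uri(url))
    return html.Length
}

[ blocking; nonBlocking ]
|> Async.Parallel
|> Async.RunSynchronously
|> printfn "%A"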
The important point is that async programming is about operations that are not CPU bound, i.e. those that are IO bound. Such IO-bound operations are performed on IO threads (using the overlapped IO feature of the operating system). This implies that even if you wrap some factorial function inside an async block and run it inside another async block using a let! binding, you won't get any benefit out of it, because it will still run on a CPU-bound thread. The main purpose of async programming is to avoid occupying a CPU thread with something that is IO-bound in nature, so that the CPU thread can be used for other work while the IO completes.
If you look at the various IO classes in .NET, like File, Socket, etc., they all have blocking as well as non-blocking read and write operations. The blocking operations hold a CPU thread until the IO is done, whereas the non-blocking operations use the overlapped IO API calls to perform the operation.
F#'s Async provides ways to build async blocks out of these non-blocking APIs of File, Socket, etc. In your case, calling DownloadString blocks a CPU thread because it uses the blocking API of the underlying class, whereas AsyncDownloadString uses the non-blocking, overlapped-IO-based API call.
