Can tasks be executed asynchronously on a serial queue?

I am trying to understand the basic functionality of serial queues and concurrent queues in GCD.
Can we perform synchronous operations on a concurrent queue? As I understand it, synchronous means executing tasks one after another, but how is that possible on a concurrent queue, which executes tasks in parallel? It seems contradictory to me.
Similarly, how can we perform asynchronous operations on a serial queue? A serial queue performs tasks one after another, so how can they be executed concurrently?
If anyone can explain with the help of an image, that would make it very clear.

You asked:
Can we perform synchronous operations on a concurrent queue? As I understand it, synchronous means executing tasks one after another, but how is that possible on a concurrent queue, which executes tasks in parallel?
OK, let’s consider terminology before answering your question:
What is a “synchronous operation”? It is one that will block its respective thread during that operation. But a concurrent queue can use multiple threads to perform these individual synchronous operations on that same queue at the same time, each running on its own thread.
Let us use a practical example: consider a synchronous operation that might be an algorithm to process an image (e.g., resize it or convert a color image to black-and-white). When you perform this operation, it generally ties up the respective thread until the operation is done.
So, given that example, yes, you certainly can (and we often do) perform multiple synchronous operations in parallel. Using our prior example, you might have four images that you want to process concurrently. So you might instantiate a concurrent queue, add these four operations to that queue, and they will be processed in parallel, each on its own "worker thread".
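To make that concrete, here is a minimal sketch against the C libdispatch API (the same GCD that Swift's DispatchQueue wraps); process_image is a hypothetical stand-in for your synchronous image algorithm:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

// Hypothetical synchronous operation: it ties up its thread until done.
static void process_image(size_t index) {
    printf("processing image %zu\n", index);
}

int main(void) {
    dispatch_queue_t queue =
        dispatch_queue_create("com.example.images", DISPATCH_QUEUE_CONCURRENT);
    dispatch_group_t group = dispatch_group_create();

    // Four synchronous operations, submitted to a concurrent queue: each
    // one blocks its own worker thread, but they all run in parallel.
    for (size_t i = 0; i < 4; i++) {
        dispatch_group_async(group, queue, ^{ process_image(i); });
    }

    // Wait until all four have finished.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}
```

Each block is synchronous from the point of view of its own worker thread, yet the four of them run in parallel because the queue is concurrent.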
You then ask:
Similarly, how can we perform asynchronous operations on a serial queue? A serial queue performs tasks one after another, so how can they be executed concurrently?
This depends a little upon what you mean by “operation”. Are you talking about a Swift Operation (or Objective-C NSOperation) on an “operation queue”? Or are you using the term “operation” a little more generally as it applies to GCD and dispatch queues?
The reason I ask is that in the world of GCD (aka "dispatch queues"), you simply do not "perform an asynchronous operation on a serial queue". You can start asynchronous tasks from a serial queue, but the definition of "asynchronous" means that the current thread does not wait for the task to finish (which generally means that, often behind the scenes, another queue/thread is doing the work).
A good example of that is starting a series of network requests from a serial queue. Hidden inside NSURLSession/URLSession are its own queues/threads that manage these multiple network requests concurrently. If you do not want these requests to run concurrently, some sleight of hand is required to take an API designed for concurrent operation and make it behave sequentially, one request after the other.
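For illustration, one common sleight of hand, sketched here against the C libdispatch API with a hypothetical callback-based start_async function, is to wrap each asynchronous task in a block that parks its worker thread on a semaphore until the completion handler fires:

```c
#include <dispatch/dispatch.h>

// Hypothetical callback-based asynchronous API (e.g., a network request
// wrapper) that invokes `completion` when the work is done.
extern void start_async(void (^completion)(void));

// Enqueue asynchronous tasks on a serial queue so that they run strictly
// one after another, even though each task is itself asynchronous.
void enqueue_sequential(dispatch_queue_t serial) {
    dispatch_async(serial, ^{
        dispatch_semaphore_t done = dispatch_semaphore_create(0);
        start_async(^{ dispatch_semaphore_signal(done); });
        // Park this worker thread until the async task completes, so the
        // serial queue cannot move on to the next enqueued block.
        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
        dispatch_release(done);   // not needed under ARC/Objective-C
    });
}
```

Note that this deliberately ties up a worker thread for the duration of each task, which is precisely why the operation-queue pattern described next (or async-await) is usually preferred.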
This is where operation queues come into play, as they have the concept of custom Operation/NSOperation subclasses, in which you can define an operation that wraps an asynchronous task such that the operation does not "complete" until the asynchronous task is done. It uses KVO to notify the queue when the operation is executing, is finished, etc. In that scenario, you can define a serial operation queue (i.e., one with a maxConcurrentOperationCount of 1), add a series of your own asynchronous operation subclass instances to that queue, and it will run them sequentially, one after the other. But using operation queues with asynchronous operations can be a little complicated; if that's really what you are trying to do, we can point you to some examples. In the interest of full disclosure, though, this operation queue pattern is used less frequently nowadays, and you will often see other patterns, such as Combine or the new async-await API, used to achieve similar results.
So, we can’t answer this latter question without a little more detail of what precisely you mean by “asynchronous operation on serial queue”. Give us a practical example of what you mean (and what API you are using).


How to evenly balance processing many simultaneous tasks?

PROBLEM
Our PROCESSING SERVICE is serving UI, API, and internal clients and listening for commands from Kafka.
A few API clients might create a lot of generation tasks (one task is N messages) in a short time. With Kafka, we can't control how commands are distributed, because each command goes to a partition that is consumed by one processing instance (aka worker). Thus, UI requests could be kept waiting too long while API requests are being processed.
In an ideal implementation, we would handle all tasks evenly, regardless of their size, with the capacity of the processing service distributed among all active tasks. Even if the cluster is heavily loaded, we could be sure that a newly arrived task will start processing almost immediately, at least before the processing of all other tasks ends.
SOLUTION
Instead, we want an architecture with separate queues per combination of customer and endpoint. This gives us much better isolation, as well as the ability to dynamically adjust throughput on a per-customer basis.
On the producer side:
a task comes in from the client
immediately create a queue for this task
send all of the task's messages to this queue
On the consumer side:
one process constantly updates the list of queues
other processes iterate over this list and consume, for example, one message from each queue (see the sketch after this list)
scale consumers as needed
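As a hedged sketch of that consumer loop in C, where queue_t, queue_pop, and process are hypothetical stand-ins for whatever queue client is in use:

```c
#include <stdbool.h>
#include <stddef.h>

// Hypothetical per-task queue client: pop returns false if the queue is
// currently empty; process handles a single message.
typedef struct queue queue_t;
extern bool queue_pop(queue_t *q, void **msg_out);
extern void process(void *msg);

// Fairness loop: take at most one message from each active queue per
// pass, so one huge task cannot starve the small ones. A real consumer
// would also block or back off when every queue is empty, and refresh
// the queue list as tasks come and go.
void consume_fairly(queue_t **queues, size_t nqueues) {
    for (;;) {
        for (size_t i = 0; i < nqueues; i++) {
            void *msg;
            if (queue_pop(queues[i], &msg)) {
                process(msg);   // one message, then move to the next queue
            }
        }
    }
}
```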
QUESTION
Is there any common solution to such a problem, using RabbitMQ or any other tooling? Historically, we use Kafka on this project, so an approach that uses it would be amazing, but we can use any technology for the solution.
Why not use Spark to execute the messages within the task? What I'm thinking is that each worker creates a Spark context that then parallelizes the messages. The function that is mapped can be based on which Kafka topic the user is consuming. I suspect, however, that your queues might have tasks containing a mixture of messages: UI, API calls, etc. This will result in a more complex mapping function. If you're not using a standalone cluster and are using YARN or something similar, you can change the queueing method that the Spark master is using.
As I understand the problem, you want to create per-customer request isolation using dynamically allocated queues, which would allow each customer's tasks to be executed independently. The problem looks similar to the head-of-line blocking issue in networking.
Dynamically allocating queues is difficult. It can also lead to an explosion in the number of queues, which can be a burden on the infrastructure. Moreover, some queues could be empty or very lightly loaded. RabbitMQ won't help here; it is a queue with a different protocol than Kafka.
One alternative is to use a custom partitioner in Kafka that can look at the partition load and balance the tasks accordingly. This works if the tasks are independent in nature and no state store is maintained in the worker.
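A rough illustration of the idea in C; the in_flight counters are a hypothetical local view of per-partition load, which in a real deployment would have to come from a shared, reasonably fresh metrics source:

```c
#include <stdint.h>

// Hypothetical load-aware partition picker: route the next task to the
// partition with the fewest in-flight messages. in_flight[] is assumed
// to be maintained elsewhere.
int32_t pick_partition(const uint64_t *in_flight, int32_t partition_cnt) {
    int32_t best = 0;
    for (int32_t p = 1; p < partition_cnt; p++) {
        if (in_flight[p] < in_flight[best]) {
            best = p;
        }
    }
    return best;
}
```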
The other alternative is to load-balance at the customer level: you select a dedicated set of predefined queues for a set of customers, so customers with certain IDs are served by a given set of queues. The downside is that some queues may carry less load than others. This solution is similar to Virtual Output Queueing in networking.
My understanding is that partitioning the messages does not by itself ensure an even load balance. I think you should avoid overengineering with custom machinery on top of the Kafka partitioner, and instead think about a good partitioning key that will allow you to use Kafka efficiently.

async-await advantages when we have enough threads

I understand that .NET knows to use multiple threads for multiple requests.
So, if our service probably won't receive more requests than the number of threads our server can produce (which looks like a huge number), the only reason I can see to use async is for a single request that performs multiple blocking operations which could be done in parallel.
Am I right?
Another advantage may be that serving multiple requests with the same thread is cheaper than using multiple threads. How significant is this difference?
(Note: our service has no UI. I saw that there is a single thread for that, but it isn't relevant here.)
Thanks!
Am I right?
No. Handling multiple independent blocking operations is the job of concurrent APIs anyway (though they sometimes need synchronization, such as a lock or mutex, to maintain object state and avoid race conditions). The purpose of async-await is to schedule I/O operations, like file reads/writes and calls to a remote service or database, which don't need a thread while in flight: on Windows they are queued on I/O completion ports rather than occupying a thread.
Benefits of Async-Await:
It doesn't start an I/O operation on a separate thread. A thread is a costly resource in terms of memory and allocation, and it would do little more than wait for the I/O call to come back. Separate threads should be used for compute-bound operations, not I/O-bound ones (see the C sketch after this list).
It frees up the UI / caller thread, keeping it fully responsive to carry out other tasks and operations.
It is the evolution of the Asynchronous Programming Model (BeginXX/EndXX), which was fairly complex to understand and implement.
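That first point is an operating-system fact rather than a .NET-specific one. As a rough C illustration of the same principle, using POSIX aio instead of Windows I/O completion ports (data.bin is a hypothetical input file), the read below is in flight while no application thread blocks on it:

```c
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    int fd = open("data.bin", O_RDONLY);   // hypothetical input file
    if (fd < 0) return 1;

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;

    aio_read(&cb);   // kick off the read; no application thread waits on it

    /* ... free to do other useful work here, as when awaiting a Task ... */

    // Sleep until the read completes (link with -lrt on Linux).
    const struct aiocb *const list[] = { &cb };
    aio_suspend(list, 1, NULL);
    printf("read %zd bytes\n", aio_return(&cb));

    close(fd);
    return 0;
}
```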
Another advantage may be that serve multiple requests with same thread is cheaper than use multiple threads. How significant is this difference?
It's a good strategy, depending on the kind of request from the caller: if the requests are compute-bound, it's better to invoke a parallel API and finish them fast; if they are I/O-bound, there's async-await. The main costs of multiple threads are resource allocation and context switching, which need to be factored in. On the other hand, multiple threads efficiently utilize the processor cores, which are fairly underutilized in current-day systems; most of the time the processor is lying idle.

OpenCL clEnqueueReadBuffer During Kernel Execution?

Can queued kernels continue to execute while an OpenCL clEnqueueReadBuffer operation is occurring?
In other words, is clEnqueueReadBuffer a blocking operation on the device?
From a host API point of view, clEnqueueReadBuffer can be blocking or not, depending on whether you set the blocking_read parameter to CL_TRUE or CL_FALSE.
If you set it to not block, then the read just gets queued, and you should use an event (or a subsequent blocking call) to determine when it has finished (i.e., before you access the memory you are reading into).
If you set it to block, the call won't return until the read is done and the memory being read into holds the correct data. Also (and answering your actual question), any operations you queued prior to the clEnqueueReadBuffer will all have to finish before the read starts (see the exception note below).
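A short sketch of both forms against the standard OpenCL C API; queue, buf, host, and nbytes are assumed to have been set up by the caller:

```c
#include <CL/cl.h>

// Assumes queue, buf, and the host buffer were created/allocated elsewhere.
void read_results(cl_command_queue queue, cl_mem buf,
                  void *host, size_t nbytes) {
    // Non-blocking form: the read is merely queued; use the returned
    // event to find out when it is safe to touch `host`.
    cl_event done;
    clEnqueueReadBuffer(queue, buf, CL_FALSE, 0, nbytes, host,
                        0, NULL, &done);
    /* ... enqueue or prepare other work here ... */
    clWaitForEvents(1, &done);
    clReleaseEvent(done);

    // Blocking form: equivalent to the non-blocking form plus a wait.
    // The call does not return until `host` holds the data.
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, nbytes, host,
                        0, NULL, NULL);
}
```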
All clEnqueue* API calls are asynchronous, but some have "blocking" parameters you can set. Using one is equivalent to using the non-blocking version and then calling clFinish: the command queue is flushed to the device, and your host thread doesn't continue until the work has finished. Of course, it is hard to keep the GPU busy this way, since the device is then left with no queued work; but if you queue up new work fast enough, you can still keep it reasonably busy.
This all assumes a single, in-order command queue. If your command queue is out-of-order and your device supports out-of-order queues then enqueued items can execute in any order that doesn't violate the event_wait_list parameters you provided. Likewise, you can have multiple command queues, which can again be executed in any order that doesn't violate the event_wait_list parameters you provided. Typically, they are used to overlap memory transfers and compute, and to keep multiple compute units busy. Out-of-order command queues and multiple command queues are both advanced OpenCL concepts and shouldn't be attempted until you fully understand and have experience with in-order command queues.
Clarification added later, after DarkZeros pointed out the "on the device" part of the OP's question: my answer above is from the host-thread API point of view. On the device, with an in-order command queue, all downstream commands are blocked by the current command. With an out-of-order queue, they are only blocked by the event_wait_list. However, out-of-order command queues are not well supported in today's drivers. With multiple command queues, in theory commands are only blocked by prior commands (if in-order) and the event_wait_list. In reality, there are sometimes vendor-specific rules that prevent the free flow of potentially non-blocked commands that you might like. This is often because the multiple OpenCL command queues get mapped to device-side memory and compute queues, and get executed in-order there. So depending on the order in which you add commands to your multiple command queues, they might get interleaved in such a way that they block in sub-optimal ways. The best solution I'm aware of is either to be careful about the order you enqueue in (based on knowledge of this implementation detail), or to use one queue for memory and one for compute, which matches the device-side queueing.
If overlapping memory transfers and compute is your goal, AMD and NVIDIA both provide examples of how to do it, and, for GPUs that support multiple concurrent compute operations, how to do that too. The NVIDIA examples are hard to get hold of, but they are out there (from the CUDA 4 days).

Is all asynchronous I/O ultimately implemented with polling?

I had thought that asynchronous I/O always has a callback form. But recently I discovered that some low-level implementations use polling-style APIs:
kqueue
libpq
This leads me to think that maybe all (or most) asynchronous I/O (whether on files, sockets, Mach ports, etc.) is ultimately implemented in a kind of polling manner, and that the callback form is just an abstraction provided by higher-level APIs.
This could be a silly question, but I don't know how most asynchronous I/O is actually implemented at a low level. I have only used the system-level notification mechanisms, and when I look at kqueue (which is such a mechanism), it's a polling-style API!
How should I understand asynchronous I/O at a low level? How is the high-level asynchronous notification produced from a low-level polling system (if that is actually what happens)?
At the lowest hardware level (or at least, the lowest level worth looking at), asynchronous operations truly are asynchronous in modern operating systems.
For example, when you read a file from the disk, the operating system translates your call to read into a series of disk operations (seek to location, read blocks X through Y, etc.). On most modern OSes, these commands get written either to special registers or to special locations in main memory, and the disk controller is informed that there are operations pending. The operating system then goes about its business, and when the disk controller has completed all of the operations assigned to it, it triggers an interrupt, causing the thread that requested the read to pick up where it left off.
Regardless of what type of low-level asynchronous operation you're looking at (disk I/O, network I/O, mouse and keyboard input, etc.), ultimately, there is some stage at which a command is dispatched to hardware, and the "callback" as it were is not executed until the hardware reaches out and informs the OS that it's done, usually in the form of an interrupt.
That's not to say that there aren't some asynchronous operations implemented using polling. One trivial (but naive and costly) way to implement any blocking operation asynchronously is just to spawn a thread that waits for the operation to complete (perhaps polling in a tight loop), and then call the callback when it's finished. Generally speaking, though, common asynchronous operations at the OS level are truly asynchronous.
It's also worth mentioning that just because an API is blocking doesn't mean it's polling: you can put a blocking API on an asynchronous operation, and a non-blocking API on a synchronous operation. With things like select and kqueue, for example, the thread actually just goes to sleep until something interesting happens. That "something interesting" usually comes in the form of an interrupt, which is taken as an indication that the operating system should wake up the relevant threads to continue work. The thread doesn't just sit there in a tight loop waiting for something to happen.
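As a small C illustration, a thread that calls select with a NULL timeout simply sleeps inside the kernel until the socket becomes ready; nothing spins:

```c
#include <sys/select.h>

// Block until sockfd is readable. The thread sleeps inside select();
// the kernel wakes it when an (interrupt-driven) event arrives, so no
// CPU is burned in a polling loop.
int wait_readable(int sockfd) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);
    // NULL timeout: sleep indefinitely until sockfd becomes ready.
    return select(sockfd + 1, &readfds, NULL, NULL, NULL);
}
```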
There really is no way to tell whether a system uses polling or "real" callbacks (like interrupts) just from its API, but yes, there are asynchronous APIs that are truly backed by asynchronous operations.

Event Loop vs Multithread blocking IO

I was reading a comment about server architecture.
http://news.ycombinator.com/item?id=520077
In this comment, the person says 3 things:
The event loop, time and again, has been shown to truly shine for a high number of low activity connections.
In comparison, a blocking IO model with threads or processes has been shown, time and again, to cut down latency on a per-request basis compared to an event loop.
On a lightly loaded system the difference is indistinguishable. Under load, most event loops choose to slow down, most blocking models choose to shed load.
Are any of these true?
There is also another article here, titled "Why Events Are A Bad Idea (for High-concurrency Servers)":
http://www.usenix.org/events/hotos03/tech/vonbehren.html
Typically, if the application is expected to handle millions of connections, you can combine the multi-threaded paradigm with the event-based one.
First, spawn N threads, where N == the number of cores/processors on your machine. Each thread will have a list of asynchronous sockets that it's supposed to handle.
Then, for each new connection from the acceptor, "load-balance" the new socket onto the thread with the fewest sockets.
Within each thread, use an event-based model for all the sockets, so that each thread can actually handle multiple sockets "simultaneously."
With this approach (a sketch follows),
You never spawn a million threads. You just have as many as your system can handle.
You utilize the event-based model on multiple cores, as opposed to a single core.
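Here is a minimal, Linux-flavored sketch of that design using epoll and pthreads; handle_readable is a hypothetical per-socket handler, and the socket-count bookkeeping is left unsynchronized for brevity:

```c
#include <pthread.h>
#include <sys/epoll.h>

#define NTHREADS 4   // assume N == number of cores

// One event loop per thread; the acceptor hands each new socket to the
// loop currently watching the fewest sockets.
struct loop { int epfd; int nsockets; };
static struct loop loops[NTHREADS];

extern void handle_readable(int fd);   // hypothetical per-socket handler

static void *run_loop(void *arg) {
    struct loop *l = arg;
    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(l->epfd, events, 64, -1);
        for (int i = 0; i < n; i++)
            handle_readable(events[i].data.fd);
    }
    return NULL;
}

void start_loops(void) {
    for (int i = 0; i < NTHREADS; i++) {
        loops[i].epfd = epoll_create1(0);
        pthread_t t;
        pthread_create(&t, NULL, run_loop, &loops[i]);
        pthread_detach(t);
    }
}

// Called by the acceptor for each new connection ("load balancing").
void add_socket(int fd) {
    int best = 0;
    for (int i = 1; i < NTHREADS; i++)
        if (loops[i].nsockets < loops[best].nsockets)
            best = i;

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(loops[best].epfd, EPOLL_CTL_ADD, fd, &ev);
    loops[best].nsockets++;   // unsynchronized here, for brevity
}
```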
Not sure what you mean by "low activity", but I believe the major factor is how much you actually need to do to handle each request. Assuming a single-threaded event loop, no other clients get their requests handled while you handle the current one. If handling each request takes a lot of work ("a lot" meaning significant CPU and/or time), and assuming your machine can actually multitask efficiently (i.e., taking time doesn't mean waiting on a shared resource, as on a single-CPU machine or similar), you would get better performance by multitasking. Multitasking could be a multithreaded blocking model, but it could also be a single-tasking event loop that collects incoming requests, farms them out to a multithreaded worker pool that handles them in turn (through multitasking), and sends back a response as soon as possible.
I don't believe slow connections with the clients matter that much, as I would expect the OS to handle them efficiently outside of your app (assuming you don't block the event loop for multiple round trips with the client that initiated the request), but I haven't tested this myself.
