Why do we refer to parallel programming as asynchronous programming? [duplicate]

This question already has answers here:
What is the difference between concurrency, parallelism and asynchronous methods?
In English, the word synchronous means "happening at the same time," while the word asynchronous means the opposite (i.e., "not simultaneous or concurrent in time; not synchronous").
Why do all references refer to parallel programming as asynchronous programming, instead of synchronous programming, like this one?
And why do they all use the keyword async (an abbreviation of asynchronous) instead of sync?
For example:
If I have two consecutive methods, Method1() and Method2(), then Method2() will not start executing until Method1() finishes processing, which we call sequential processing.
If both Method1() and Method2() are marked with the async keyword, this means Method2() will start processing without waiting for Method1() to finish.
So I could describe this as parallel calling, concurrent calling, synchronous calling, or anything else indicating that they run together without waiting.
Naming the second scenario asynchronous gives the impression that they are not processing in parallel.
This is confusing, isn't it?
I am not a native English speaker; am I missing something in the English language or in the parallel programming concept?

Parallel programming implies concurrently executing activities. Today, two kinds of activities are used: threads and asynchronous procedures (coroutines are a special kind of asynchronous procedure). Both kinds of activities can coexist in the same program. If most or all activities are threads, the program is called multithreaded. If most or all activities are asynchronous procedures, the program is called asynchronous. And if the program consists of a single thread, it is called synchronous. The funny thing is that when that single thread executes asynchronous procedures (as, for example, the GUI thread in Java/Swing or Android does), the program is synchronous and asynchronous at the same time!
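To make that last point concrete, here is a minimal sketch using Python's asyncio (chosen because Python code appears later on this page): one OS thread, two concurrently executing asynchronous procedures.

import asyncio

async def tick(name, delay):
    for i in range(3):
        await asyncio.sleep(delay)   # suspension point: the coroutine cedes the thread
        print(f"{name}: tick {i}")

async def main():
    # Two concurrent activities, one OS thread: an asynchronous program
    # that is also, by the thread count, synchronous.
    await asyncio.gather(tick("fast", 0.1), tick("slow", 0.25))

asyncio.run(main())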

Related

Can the operation time.sleep(seconds) be considered asynchronous I/O?

When we talk about Python's asyncio library, and asynchronous programming generally, I always think of running "concurrent" I/O operations only, at the thread level, for optimized CPU use.
The asyncio library has the function asyncio.sleep(seconds), but what disturbs me is that sleep isn't an I/O operation: it is done at the kernel level with the CPU hardware, without any external device that could be counted as I/O [my definition of I/O is any hardware except the CPU and RAM].
So why does the asyncio library (Asynchronous I/O) call this operation an asynchronous I/O operation?
This is not a network interface controller we send requests to, or a hard disk. I have no problem with making every operation we can "concurrent" at the thread level. However, the "I/O" at the end of the library's name makes me feel that this isn't the proper terminology. I would be happy for a clarification.
One more related question: does the term asynchronous programming refer to "concurrent" I/O operations only, or to every operation at the thread level, including CPU operations like x = x + 1? (I guess the latter can be done "concurrently" at the thread level, but it would be unnecessary.)
Link:
https://docs.python.org/3/library/asyncio.html
Code snippet:
import asyncio

async def main():
    print('Hello ...')
    await asyncio.sleep(1)   # suspends main() without blocking the thread
    print('... World!')

asyncio.run(main())
Paraphrasing Wikipedia, "Asynchronous programming" generally refers to the occurrence of events outside of the main program flow and ways of handling such events. As such, asynchronous operations are not necessarily I/O ones.
These asynchronous events are generally handled at the hardware or OS level and it is important to understand that at this level almost anything is asynchronous: jobs are put into queues and scheduled by the OS, then they are regularly polled for completion by the OS which then notifies the main application that the job is done.
Such asynchronous events include:
Network requests (multiplexed and polled by the OS),
Timers (managed by hardware timers and interrupts),
Communication with various external devices such as keyboards (hardware interrupts),
Communication with internal devices such as the GPU (jobs are committed to command queues),
etc.
The purpose of the asyncio library is to allow asynchronous programs to be expressed in a more "structured" and linear way. As such, it wraps many common asynchronous operations, such as I/O and timers, into async-await equivalents. asyncio is thus not restricted to asynchronous I/O operations, and one could implement an asyncio async-await interface to support a GPU, for example.
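As a sketch of that last idea, here is how one might wrap an arbitrary callback-based asynchronous source in an async-await interface with asyncio; start_fake_job is a hypothetical stand-in, not a real API.

import asyncio

def start_fake_job(on_done):
    # Hypothetical stand-in for any callback-based asynchronous source
    # (a timer, a device driver, a GPU command queue completing a job).
    loop = asyncio.get_running_loop()
    loop.call_later(0.5, on_done, "job result")

async def run_job():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    start_fake_job(fut.set_result)   # complete the future when the callback fires
    return await fut                 # suspend here; no thread is blocked waiting

print(asyncio.run(run_job()))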

Can tasks be executed asynchronously on a serial queue?

I am trying to understand the basic functionality of serial queues and concurrent queues in GCD.
Can we perform synchronous operations on a concurrent queue? As I understand it, synchronous means executing tasks one after another, but how is that possible on a concurrent queue, which executes tasks in parallel? It seems contradictory to me.
Similarly, how can we perform asynchronous operations on a serial queue? A serial queue performs tasks one after another, so how can they be executed concurrently?
If anyone can explain with the help of an image, that would make it very clear.
You asked:
Can we perform synchronous operations on a concurrent queue? As I understand it, synchronous means executing tasks one after another, but how is that possible on a concurrent queue, which executes tasks in parallel?
OK, let’s consider terminology before answering your question:
What is a “synchronous operation”? It is one that will block its respective thread during that operation. But a concurrent queue can use multiple threads to perform these individual synchronous operations on that same queue at the same time, each running on its own thread.
Let us use a practical example: Consider a synchronous operation that might be an algorithm to process an image (e.g. resize it or convert a color image to black-and-white). When you perform this operation, it will generally tie up the respective thread until the operation is done.
So, given that example, yes, you certainly can (and we often do) perform multiple synchronous operations in parallel. Using our prior example, you might have 4 images that you want to process concurrently. So you might instantiate a concurrent queue and add these four operations to that queue, and they will be processed in parallel, each on its own "worker thread".
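The answer above is about Swift and GCD, but the same shape can be sketched in Python (the language of the code elsewhere on this page): a thread pool plays the role of the concurrent queue, and each submitted function is a synchronous operation that ties up its own worker thread. The process_image body is a hypothetical stand-in.

from concurrent.futures import ThreadPoolExecutor
import time

def process_image(name):
    # A synchronous operation: it blocks its worker thread until done.
    time.sleep(1)                    # stand-in for resizing/converting an image
    return f"{name} processed"

# Four synchronous operations on a "concurrent queue" of four workers:
# they run in parallel, each on its own thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(process_image, ["img1", "img2", "img3", "img4"]):
        print(result)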
You then ask:
Similarly, how can we perform asynchronous operations on a serial queue? A serial queue performs tasks one after another, so how can they be executed concurrently?
This depends a little upon what you mean by “operation”. Are you talking about a Swift Operation (or Objective-C NSOperation) on an “operation queue”? Or are you using the term “operation” a little more generally as it applies to GCD and dispatch queues?
The reason I ask, is that in the world of GCD (aka “dispatch queues”), you simply do not “perform an asynchronous operation on a serial queue”. You start asynchronous tasks from a serial queue, but the definition of “asynchronous” means that the current thread does not wait for the task to finish (which generally means that, often behind the scenes, another queue/thread is doing the work).
A good example of that would be when you start a series of network requests from a serial queue. Hidden inside NSURLSession/URLSession are its own queues/threads that manage these multiple network requests concurrently. If you do not want the requests to run concurrently, some sleight of hand is required to take an API designed for concurrent operation and make it behave sequentially, one request after another.
This is where operation queues come into play, as they do have the concept of custom Operation/NSOperation subclasses, in which you can define an operation to wrap an asynchronous task, such that the operation does not “complete” until the asynchronous task is done. It uses KVO to notify the queue when the operation is executing, is finished, etc. In that scenario, you can define a serial operation queue (i.e., one with a maxConcurrentOperationCount of 1), add a series of your own asynchronous operation subclass instances to that queue, and it can run them sequentially, one after the other. But using operation queues with asynchronous operations can be a little complicated. If that’s really what you are trying to do, we can point you to some examples. But, in the interest of full disclosure, this operation queue pattern is used less frequently nowadays, and you will often see other patterns such as Combine, or the new async-await API, to achieve similar results.
So, we can’t answer this latter question without a little more detail of what precisely you mean by “asynchronous operation on serial queue”. Give us a practical example of what you mean (and what API you are using).
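For the "asynchronous operations run one after another" case, here is a minimal language-neutral sketch in Python/asyncio rather than an operation-queue example: a single worker drains a queue, awaiting each asynchronous task to completion before starting the next. The URLs and delay are hypothetical.

import asyncio

async def fetch(url):
    await asyncio.sleep(0.3)         # stand-in for an asynchronous network request
    return f"response from {url}"

async def serial_worker(queue):
    # A "serial queue" of asynchronous operations: one at a time,
    # each awaited to completion before the next begins.
    while not queue.empty():
        print(await fetch(queue.get_nowait()))

async def main():
    queue = asyncio.Queue()
    for url in ["a.example", "b.example", "c.example"]:
        queue.put_nowait(url)
    await serial_worker(queue)

asyncio.run(main())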

async await advantages when we have enough threads

I understand that .NET knows to use multiple threads for multiple requests.
So, if our service probably won't get more requests than the number of threads our server can produce (which looks like a huge number), the only reason I can see to use async is for a single request that does multiple blocking operations which could be done in parallel.
Am I right?
Another advantage may be that serving multiple requests with the same thread is cheaper than using multiple threads. How significant is this difference?
(Note: our service has no UI. I saw that there is a single thread for that, but it isn't relevant here.)
Thanks!
Am I right?
No. Doing multiple independent blocking operations is the job of concurrent APIs anyway (though sometimes they need synchronization, like a lock or mutex, to maintain object state and avoid race conditions). The point of async-await is to schedule I/O operations, like file reads/writes or calls to a remote service or database, which don't need a thread at all: the OS queues them on I/O completion ports and notifies the program when they finish.
Benefits of async-await:
It doesn't start an I/O operation on a separate thread. A thread is a costly resource, in terms of memory and allocation, and would do little more than wait for the I/O call to come back; separate threads should be used for compute-bound operations, not I/O-bound ones.
It frees up the UI/caller thread, keeping it fully responsive and able to carry out other tasks/operations.
It is the evolution of the older asynchronous programming model (BeginXX/EndXX), which was fairly complex to understand and implement.
Another advantage may be that serving multiple requests with the same thread is cheaper than using multiple threads. How significant is this difference?
It's a good strategy, depending on the kind of request from the caller. If requests are compute-bound, it's better to invoke a parallel API and finish them fast; if they are I/O-bound, there's async-await. The only issue with multiple threads is resource allocation and context switching, which need to be factored in. On the other hand, multiple threads efficiently utilize the processor cores, which are fairly underutilized in current systems; most of the time the processor is lying idle.
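Sketching that split in Python (the answer itself is about .NET; this is only an analogy): compute-bound work goes to a pool of real parallel workers, while I/O-bound work is awaited without holding a thread. The workloads here are hypothetical.

import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # Compute-bound: keeps a core busy.
    return sum(i * i for i in range(n))

async def fetch(i):
    # I/O-bound: nothing to do but wait; no thread is held.
    await asyncio.sleep(0.2)
    return i

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        compute = [loop.run_in_executor(pool, crunch, 1_000_000) for _ in range(4)]
        io = [fetch(i) for i in range(4)]
        print(await asyncio.gather(*compute, *io))

if __name__ == "__main__":           # required for process pools on spawn platforms
    asyncio.run(main())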

How does an async task interrupt the main thread (from the main thread itself)?

I can't seem to find this specific implementation detail, or even a pointer to where in an OS book to find this.
Basically, the main thread schedules an async task (to be run later) on itself. So... when does it run?
Does it wait for the run loop to finish? Or does it just randomly interrupt the run loop in the middle of some function?
I understand the registers will be the same (unless it's a separate thread), but not really what happens to the instruction pointer and the stack, if anything.
Thank you
In C# the task is scheduled to run on the current SynchronizationContext. The context basically has a queue of tasks which it schedules to run on the threads it is associated with; in a GUI app there is only one such thread, so the task is scheduled to run there.
The GUI thread is not interrupted; it executes the task once it has finished all the other tasks preceding it in the queue.
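A rough Python analogue of that queueing behavior (not C#'s SynchronizationContext itself, just the same single-threaded event-loop idea): the queued callback cannot interrupt the running handler; it runs only after the handler returns and the loop drains its queue.

import asyncio

def handler(loop):
    # Queue work on the same single-threaded loop; it will not
    # interrupt this function mid-execution.
    loop.call_soon(lambda: print("2: queued callback runs later"))
    print("1: handler finishes first")

async def main():
    handler(asyncio.get_running_loop())
    await asyncio.sleep(0)           # yield so the loop can drain its queue

asyncio.run(main())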
The threads of a process all share the same address space, but not the same CPU registers. How thread scheduling is done depends on the programming language and the OS. Usually there are explicit scheduling points, such as returning from a system call, blocking while awaiting I/O completion, or the boundaries between p-code instructions for interpreted languages. Some OS implementations also reschedule based on how long a thread has run, for time-based scheduling. Often languages include a function that explicitly offers the CPU to any other thread or process by transferring control to the process or thread scheduler component of the OS.
The act of switching from one thread or process to another is known as a context switch, and it is carefully tuned code, because it is often done thousands of times per second. This can make the code difficult to follow.
The best explanation of this I've ever seen is the classic http://www.amazon.com/The-Design-UNIX-Operating-System/dp/0132017997.

Cooperative Multitasking system

I'm trying to get my head around the concept of a cooperative multitasking system and exactly how it works in a single-threaded application.
My understanding is that it is a "form of multitasking in which multiple tasks execute by voluntarily ceding control to other tasks at programmer-defined points within each task."
So if you have a list of tasks and one task is executing, how do you decide to pass execution to another task? And when you give execution back to a previous task, how do you resume from where you were previously?
I find this a bit confusing because I don't understand how this can be achieved without a multithreaded application.
Any advice would be very helpful :)
Thanks
In your specific scenario, where a single process (or thread of execution) uses cooperative multitasking, you can use something like Windows' fibers or the POSIX setcontext family of functions. I will use the term fiber here.
Basically when one fiber is finished executing a chunk of work and wants to voluntarily allow other fibers to run (hence the "cooperative" term), it either manually switches to the other fiber's context or more typically it performs some kind of yield() or scheduler() call that jumps into the scheduler's context, then the scheduler finds a new fiber to run and switches to that fiber's context.
What do we mean by context here? Basically the stack and registers. There is nothing magic about the stack, it's just a block of memory the stack pointer happens to point to. There is also nothing magic about the program counter, it just points to the next instruction to execute. Switching contexts simply saves the current registers somewhere, changes the stack pointer to a different chunk of memory, updates the program counter to a different stream of instructions, copies that context's saved registers into the CPU, then does a jump. Bam, you're now executing different instructions with a different stack. Often the context switch code is written in assembly that is invoked in a way that doesn't modify the current stack or it backs out the changes, in either case it leaves no traces on the stack or in registers so when code resumes execution it has no idea anything happened. (Again, the theme: we assume that method calls fiddle with registers, push arguments to the stack, move the stack pointer, etc but that is just the C calling convention. Nothing requires you to maintain a stack at all or to have any particular method call leave any traces of itself on the stack).
Since each stack is separate, you don't have some continuous chain of seemingly random method calls eventually overflowing the stack (which might be the result if you naively tried to implement this scheme using standard C methods that continuously called each other). You could implement this manually, with each fiber keeping a state machine of where it is in its work and periodically returning to the calling dispatcher's method, but why bother when actual fiber/coroutine support is widely available?
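Since coroutine/fiber support really is widely available, here is a minimal sketch of explicit context switching using Python's third-party greenlet package (an assumption on my part; the mechanisms described above are Windows fibers and POSIX setcontext). Each task voluntarily switches to the other, and each resumes exactly where it left off.

from greenlet import greenlet       # third-party: pip install greenlet

def task_a():
    print("A: step 1")
    gb.switch()                      # voluntarily cede control to task B
    print("A: step 2")
    gb.switch()

def task_b():
    print("B: step 1")
    ga.switch()                      # cede control back to task A
    print("B: step 2")               # when B returns, control goes to the main greenlet

ga = greenlet(task_a)
gb = greenlet(task_b)
ga.switch()                          # prints: A: step 1, B: step 1, A: step 2, B: step 2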
Also remember that cooperative multitasking is orthogonal to processes, protected memory, address spaces, etc. Witness Mac OS 9 or Windows 3.x. They supported the idea of separate processes. But when you yielded, the context was changed to the OS context, allowing the OS scheduler to run, which then potentially selected another process to switch to. In theory you could have a fully protected virtual memory OS that still used cooperative multitasking. In those systems, if an errant process never yielded, the OS scheduler never ran, so all other processes in the system were frozen. **
The next natural question is what makes something pre-emptive... The answer is that the OS schedules an interrupt timer with the CPU to stop the currently executing task and switch back to the OS scheduler's context regardless of whether the current task cares to release the CPU or not, thus "pre-empting" it.
If the OS uses CPU privilege levels, the (kernel configured) timer is not cancelable by lower level (user mode) code, though in theory if the OS didn't use such protections an errant task could mask off or cancel the interrupt timer and hijack the CPU. There are some other scenarios like IO calls where the scheduler can be invoked outside the timer, and the scheduler may decide no other process has higher priority and return control to the same process without a switch... And in reality most OSes don't do a real context switch here because that's expensive, the scheduler code runs inside the context of whatever process was executing, so it has to be very careful not to step on the stack, to save register states, etc.
** You might ask why not just fire a timer if yield isn't called within a certain period of time. The answer lies in multi-threaded synchronization. In a cooperative system, you don't have to bother taking locks, worry about re-entrance, etc because you only yield when things are in a known good state. If this mythical timer fires, you have now potentially corrupted the state of the program that was interrupted. If programs have to be written to handle this, congrats... You now have a half-assed pre-emptive multitasking system. Might as well just do it right! And if you are changing things anyway, may as well add threads, protected memory, etc. That's pretty much the history of the major OSes right there.
The basic idea behind cooperative multitasking is trust - that each subtask will relinquish control, of its own accord, in a timely fashion, to avoid starving other tasks of processor time. This is why tasks in a cooperative multitasking system need to be tested extremely thoroughly, and in some cases certified for use.
I don't claim to be an expert, but I imagine cooperative tasks could be implemented as state machines, where passing control to the task would cause it to run for the absolute minimal amount of time it needs to make any kind of progress. For example, a file reader might read the next few bytes of a file, a parser might parse the next line of a document, or a sensor controller might take a single reading, before returning control back to a cooperative scheduler, which would check for task completion.
Each task would have to keep its internal state on the heap (at object level), rather than on the stack frame (at function level) like a conventional blocking function or thread.
And unlike conventional multitasking, which relies on a hardware timer to trigger a context switch, cooperative multitasking relies on the code to be written in such a way that each step of each long-running task is guaranteed to finish in an acceptably small amount of time.
The tasks will execute an explicit wait, pause, or yield operation which makes the call to the dispatcher. There may be different operations for waiting on I/O to complete or explicitly yielding in a heavy computation. In an application task's main loop, it could have a wait_for_event call instead of busy polling. This would suspend the task until it has input to process.
There may also be a time-out mechanism for catching runaway tasks, but it is not the primary means of switching (or else it wouldn't be cooperative).
One way to think of cooperative multitasking is to split a task into steps (or states). Each task keeps track of the next step it needs to execute. When it's the task's turn, it executes only that one step and returns. That way, in the main loop of your program you are simply calling each task in order, and because each task only takes up a small amount of time to complete a single step, we end up with a system which allows all of the tasks to share cpu time (ie. cooperate).
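That step-based scheme is easy to sketch with Python generators, where yield is the voluntary cession point and each next() resumes a task exactly where it stopped (the tasks here are hypothetical):

def file_reader():
    for chunk in ["hea", "der", " body"]:
        print(f"reader: got {chunk!r}")
        yield                        # cede control after one small step

def sensor():
    for reading in [21.0, 21.4, 21.9]:
        print(f"sensor: {reading}")
        yield

# Cooperative round-robin dispatcher: one step of each task per turn.
tasks = [file_reader(), sensor()]
while tasks:
    for task in list(tasks):
        try:
            next(task)               # resume where the task left off
        except StopIteration:
            tasks.remove(task)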
