Which wait mechanism is Polly using?

Polly has several retry facilities, for example WaitAndRetryForever. I looked in the documentation but couldn't find what exactly is used to make the thread wait until the next retry. I guess Polly uses System.Timers for this, or is it something completely different? Thanks for any help.

Asynchronous executions (fooAsyncPolicy.ExecuteAsync(...)) wait with Task.Delay(...), freeing the thread the caller was using while the delay occurs.
Synchronous executions (fooSyncPolicy.Execute(...)) wait between retries in a cancellable thread-blocking manner. This means that, for the synchronous (a):
action();
compared to the synchronous (b):
policy.Execute(action);
the following three things all hold:
(1) both (a) and (b) block progress from continuing (subsequent code does not run) until the statement has completed;
(2) (b) executes action on the same thread that (a) originally would have;
(3) (b) expresses exceptions (if the Policy does not intervene) in the same way, or as similar a way as possible, as (a) originally would have.
These semantics (1) (2) (3) are intentional, to keep code executed synchronously through Polly as similar as possible in semantics/behaviour to the same code executed without Polly, so that surrounding code needs little adjustment.
Anticipating a follow-up question: wouldn't it be possible to write the synchronous Polly form, Policy.Handle<T>().WaitAndRetry(...).Execute(action), so that it didn't block a thread while waiting before retrying? Yes, but no solution has been found that is preferable to letting the caller control the transition to TPL Tasks or async/await and then using Polly's ExecuteAsync(...).
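To make the contrast concrete, here is a minimal sketch using the classic (pre-v8) Polly syntax; CallFlakyService and CallFlakyServiceAsync are hypothetical placeholders for your own operations, and the retry counts/delays are arbitrary:

using System;
using System.Threading.Tasks;
using Polly;

// Synchronous: Execute runs the action on the caller's thread, and the
// waits between retries block that thread (in a cancellable way).
var syncPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetry(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));
syncPolicy.Execute(() => CallFlakyService());

// Asynchronous: the waits between retries use Task.Delay, so no thread
// is blocked while the delay elapses.
var asyncPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));
await asyncPolicy.ExecuteAsync(() => CallFlakyServiceAsync());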

MPI - Equivalent of MPI_SENDRECV with asynchronous functions

I know that MPI_SENDRECV allows one to avoid the problem of deadlocks (which we can hit when we use the classic MPI_SEND and MPI_RECV functions).
I would like to know if MPI_SENDRECV(sent_to_process_1, receive_from_process_0) is equivalent to:
MPI_ISEND(sent_to_process_1, request1)
MPI_IRECV(receive_from_process_0, request2)
MPI_WAIT(request1)
MPI_WAIT(request2)
with the asynchronous MPI_ISEND and MPI_IRECV functions?
From what I have seen, MPI_ISEND and MPI_IRECV create a fork (i.e. 2 processes). So if I follow this logic, the first call of MPI_ISEND generates 2 processes. One does the communication and the other calls MPI_RECV, which forks itself into 2 processes.
But once the communication of first MPI_ISEND is finished, does the second process call MPI_IRECV again? With this logic, the above equivalent doesn't seem to be valid...
Maybe I should change to this:
MPI_ISEND(sent_to_process_1, request1)
MPI_WAIT(request1)
MPI_IRECV(receive_from_process_0, request2)
MPI_WAIT(request2)
But I think that this could also create deadlocks.
Could anyone give me another solution using MPI_ISEND, MPI_IRECV and MPI_WAIT that gets the same behaviour as MPI_SENDRECV?
There are some dangerous lines of thought in the question and other answers. When you start a non-blocking MPI operation, the MPI library doesn't create a new process/thread/etc. You're thinking of something more like a parallel region of OpenMP, I believe, where new threads/tasks are created to do some work.
In MPI, starting a non-blocking operation is like telling the MPI library that you have some things that you'd like to get done whenever MPI gets a chance to do them. There are lots of equally valid options for when they are actually completed:
It could be that they all get done later when you call a blocking completion function (like MPI_WAIT or MPI_WAITALL). These functions guarantee that when the blocking completion call is done, all of the requests that you passed in as arguments are finished (in your case, the MPI_ISEND and the MPI_IRECV). Regardless of when the operations actually take place (see next few bullets), you as an application can't consider them done until they are actually marked as completed by a function like MPI_WAIT or MPI_TEST.
The operations could get done "in the background" during another MPI operation. For instance, if you do something like the code below:
MPI_Isend(..., MPI_COMM_WORLD, &req[0]);
MPI_Irecv(..., MPI_COMM_WORLD, &req[1]);
MPI_Barrier(MPI_COMM_WORLD);
MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
The MPI_ISEND and the MPI_IRECV would probably actually do the data transfers in the background, during the MPI_BARRIER. This is because, as an application, you are transferring "control" of your application to the MPI library during the MPI_BARRIER call. This lets the library make progress on any ongoing MPI operation that it wants. Most likely, when the MPI_BARRIER is complete, so are most of the operations that were started before it.
Some MPI libraries allow you to specify that you want a "progress thread". This tells the MPI library to start up another thread (note that thread != process) in the background that will actually do the MPI operations for you while your application continues in the main thread.
Remember that all of these in the end require that you actually call MPI_WAIT or MPI_TEST or some other function like them to ensure that your operation is actually complete; none of these options spawn new threads or processes to do the work for you when you call your nonblocking functions. Those calls really just put the operations on a list of things to do (which, in reality, is how most MPI libraries implement them).
The best way to think of how MPI_SENDRECV is implemented is to do two non-blocking calls with one completion function:
MPI_Isend(..., MPI_COMM_WORLD, &req[0]);
MPI_Irecv(..., MPI_COMM_WORLD, &req[1]);
MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
How I usually do this on node i communicating with node i+1:
mpi_isend(send_to_process_iPlus1, requests(1))
mpi_irecv(recv_from_process_iPlus1, requests(2))
...
mpi_waitall(2, requests)
You can see how ordering your commands this way with non-blocking communication allows you (during the ... above) to perform any computation that does not rely on the send/recv buffers while the communication proceeds. Overlapping computation with communication is often crucial for maximizing performance.
mpi_send_recv on the other hand (while avoiding any deadlock issues) is still a blocking operation. Thus, your program must remain in that routine during the entire send/recv process.
Final points: you can initiate more than 2 requests and wait on all of them in the same way, using the same structure as for dealing with 2 requests. For instance, it's quite easy to start communication with node i-1 as well and wait on all 4 of the requests. Using mpi_sendrecv you must always have a paired send and receive; what if you only want to send?

A MailboxProcessor that operates with a LIFO logic

I am learning about F# agents (MailboxProcessor).
I am dealing with a rather unconventional problem.
I have one agent (dataSource) which is a source of streaming data. The data has to be processed by an array of agents (dataProcessor). We can consider dataProcessor as some sort of tracking device.
Data may flow in faster than the speed with which the dataProcessor may be able to process its input.
It is OK to have some delay. However, I have to ensure that the agent stays on top of its work and does not get buried under obsolete observations.
I am exploring ways to deal with this problem.
The first idea is to implement a stack (LIFO) in dataSource. dataSource would send over the latest observation available when dataProcessor becomes available to receive and process the data. This solution may work, but it may get complicated, as dataProcessor may need to be blocked and re-activated and communicate its status to dataSource, leading to a two-way communication problem. This problem may boil down to a blocking queue in the producer-consumer problem, but I am not sure...
The second idea is to have dataProcessor take care of message sorting. In this architecture, dataSource will simply post updates to dataProcessor's queue. dataProcessor will use Scan to fetch the latest data available in its queue. This may be the way to go. However, I am not sure if, in the current design of MailboxProcessor, it is possible to clear a queue of messages, deleting the older obsolete ones. Furthermore, here, it is written that:
Unfortunately, the TryScan function in the current version of F# is broken in two ways. Firstly, the whole point is to specify a timeout but the implementation does not actually honor it. Specifically, irrelevant messages reset the timer. Secondly, as with the other Scan function, the message queue is examined under a lock that prevents any other threads from posting for the duration of the scan, which can be an arbitrarily long time. Consequently, the TryScan function itself tends to lock-up concurrent systems and can even introduce deadlocks because the caller's code is evaluated inside the lock (e.g. posting from the function argument to Scan or TryScan can deadlock the agent when the code under the lock blocks waiting to acquire the lock it is already under).
Having the latest observation bounced back may be a problem.
The author of this post, @Jon Harrop, suggests that
I managed to architect around it and the resulting architecture was actually better. In essence, I eagerly Receive all messages and filter using my own local queue.
This idea is surely worth exploring but, before starting to play around with code, I would welcome some inputs on how I could structure my solution.
Thank you.
Sounds like you might need a destructive-scan version of the mailbox processor. I implemented this with TPL Dataflow in a blog series that you might be interested in.
My blog is currently down for maintenance but I can point you to the posts in markdown format.
Part1
Part2
Part3
You can also check out the code on github
I also wrote about the issues with scan in my lurking horror post
Hope that helps...
tl;dr I would try this: take the Mailbox implementation from FSharp.Actor or Zach Bray's blog post, replace the ConcurrentQueue with a ConcurrentStack (plus add some bounded-capacity logic), and use this changed agent as a dispatcher to pass messages from dataSource to an army of dataProcessors implemented as ordinary MBPs or Actors.
tl;dr2 If workers are a scarce and slow resource and we need to process the message that is the latest at the moment when a worker is ready, then it all boils down to an agent with a stack instead of a queue (with some bounded-capacity logic) plus a BlockingQueue of workers. The dispatcher dequeues a ready worker, then pops a message from the stack and sends this message to the worker. After the job is done, the worker enqueues itself back to the queue when it becomes ready (e.g. before let! msg = inbox.Receive()). The dispatcher's consumer thread then blocks until any worker is ready, while the producer thread keeps the bounded stack updated. (A bounded stack could be done with an array + offset + size inside a lock; the one below is an overly complex alternative.)
Details
MailboxProcessor is designed to have only one consumer. This is even commented in the source code of MBP here (search for the word 'DRAGONS' :) ).
If you post your data to an MBP then only one thread can take it from the internal queue or stack.
In your particular use case I would use ConcurrentStack directly or, better, wrapped in a BlockingCollection (a C# sketch follows the diagram below):
It will allow many concurrent consumers
It is very fast and thread safe
BlockingCollection has a BoundedCapacity property that allows you to limit the size of the collection. It throws on Add, but you can catch the exception or use TryAdd. If A is the main stack and B is a standby: TryAdd to A; on false, Add to B and swap the two with Interlocked.Exchange; then process the needed messages in A, clear it, and make it the new standby (or use three stacks if processing A could take longer than B takes to fill up again). This way you do not block and do not lose any messages, but can discard unneeded ones in a controlled way.
BlockingCollection has methods like AddToAny/TakeFromAny, which work on arrays of BlockingCollections. This could help, e.g.:
dataSource produces messages to a BlockingCollection with ConcurrentStack implementation (BCCS)
another thread consumes messages from the BCCS and sends them to an array of processing BCCSs. You said that there is a lot of data, so you may sacrifice one thread to do the blocking and dispatching of your messages indefinitely
each processing agent has its own BCCS, or is implemented as an Agent/Actor/MBP to which the dispatcher posts messages. In your case you need to send a message to only one processorAgent, so you may store the processing agents in a circular buffer to always dispatch a message to the least recently used processor.
Something like this:

       (data stream produces 'T)
                  |
        [dispatcher's BCCS]
                  |
(a dispatcher thread consumes 'T and pushes to processors,
 manages capacity of the BCCS and the LRU queue)
      |                                     |
[processor1's BCCS/Actor/MBP]  ...  [processorN's BCCS/Actor/MBP]
      |                                     |
  (process)                             (process)
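Here is a minimal C# sketch of the bounded-LIFO-plus-dispatcher idea (not the author's code; the capacity, element type and dispatch step are illustrative assumptions):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A bounded BlockingCollection backed by a ConcurrentStack gives LIFO
// semantics: consuming always yields the most recently pushed observation.
var latest = new BlockingCollection<int>(new ConcurrentStack<int>(), 1000);

// Producer (dataSource): TryAdd returns false when the stack is full,
// so a fast producer can discard (or divert to a standby stack) instead
// of blocking.
var producer = Task.Run(() =>
{
    for (var i = 0; i < 1_000_000; i++)
        if (!latest.TryAdd(i)) { /* full: discard or swap stacks */ }
    latest.CompleteAdding();
});

// Dispatcher: blocks until an observation is available, then hands the
// newest one to a worker (a real version would pop a ready worker from
// the LRU queue here, as described above).
var dispatcher = Task.Run(() =>
{
    foreach (var newest in latest.GetConsumingEnumerable())
    {
        // dispatch(newest) to a free dataProcessor
    }
});

Task.WaitAll(producer, dispatcher);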
Instead of ConcurrentStack, you may want to read about the heap data structure. If you need your latest messages by some property of the messages, e.g. timestamp, rather than by the order in which they arrive on the stack (e.g. if there could be delays in transit and arrival order <> creation order), you can get the latest message by using a heap.
If you still need Agent semantics/API, you could read several sources in addition to Dave's links, and somehow adapt an implementation to multiple concurrent consumers:
An interesting article by Zach Bray on an efficient Actors implementation. There you do need to replace (under the comment // Might want to schedule this call on another thread.) the line execute true with a line like async { execute true } |> Async.Start, because otherwise the producing thread will become the consuming thread, which is not good for a single fast producer. However, for a dispatcher like the one described above this is exactly what is needed.
The FSharp.Actor (aka Fakka) development branch and the F# MBP source code (first link above) could be very useful for implementation details. The FSharp.Actor library has been frozen for several months, but there is some activity in the dev branch.
Don't miss the discussion about Fakka in Google Groups in this context.
I have a somewhat similar use case, and for the last two days I have researched everything I could find on F# Agents/Actors. This answer is a kind of TODO for myself to try these ideas, half of which were born while writing it.
The simplest solution is to greedily eat all messages in the inbox when one arrives and discard all but the most recent. Easily done using TryReceive:
let rec readLatestLoop oldMsg =
  async { let! newMsg = inbox.TryReceive 0
          match newMsg with
          | None -> return oldMsg
          | Some newMsg -> return! readLatestLoop newMsg }

let readLatest() =
  async { let! msg = inbox.Receive()
          return! readLatestLoop msg }
When faced with the same problem, I architected a more sophisticated and efficient solution I called cancellable streaming, described in an F# Journal article here. The idea is to start processing messages and then cancel that processing if they are superseded. This significantly improves concurrency if significant processing is being done.

How to control the number of threads when executing an Asynchronous Activity in WF 4

I am creating a workflow in WF 4 where I have a ParallelForEach activity that iterates over a collection of items. For each item in the collection, I execute a custom asynchronous activity to process multiple items in parallel.
The above solution works for me, but I am concerned about the number of threads used, since each asynchronous activity instance is executed on its own thread. Is there a way to configure/control the number of threads that get launched when executing the ParallelForEach activity in the above described mechanism?
"...since each Asynchronous activity instance is getting executed on its own thread." Who says? Certainly not the docs.
ParallelForEach enumerates its values and schedules the Body for every value it enumerates on. It only schedules the Body. How the body executes depends on whether the Body goes idle.
If the Body does not go idle, the iterations execute in reverse order, because the scheduled activities are handled as a stack: the last scheduled activity executes first.
For example, if you have a collection of {1,2,3,4} in ParallelForEach and use a WriteLine as the Body to write the value out, you get 4, 3, 2, 1 printed in the console. This is because WriteLine does not go idle, so after the 4 WriteLine activities got scheduled, they executed using stack behavior (first in, last out).
Parallelism of execution occurs only when an activity creates a bookmark and goes idle. Even then, two activities aren't actually executing at the same time; one or more have just stopped executing, allowing others to run in order. Understandably confusing, given the name, but that's it.
In any event, when you're relying on the framework to parallelize for you, don't worry about how many threads they're using. They probably have everything under control. Until you know they don't.
Will is correct: ParallelForEach does not require a new thread for each branch. If you are doing blocking I/O, that code should occur in an AsyncCodeActivity so that you aren't unnecessarily blocking. If you want CPU-bound work to run in parallel with other activities, you will either need to wrap it in an AsyncCodeActivity or use InvokeMethod { RunAsynchronously = true }, in which case the framework will take care of running the work on a background thread.
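As a minimal sketch of the blocking-I/O case (DownloadItem is a hypothetical placeholder; handing a Task back as the IAsyncResult is one common way to implement BeginExecute/EndExecute):

using System;
using System.Activities;
using System.Threading.Tasks;

public sealed class DownloadItemActivity : AsyncCodeActivity
{
    public InArgument<string> Url { get; set; }

    protected override IAsyncResult BeginExecute(
        AsyncCodeActivityContext context, AsyncCallback callback, object state)
    {
        string url = Url.Get(context);
        // Run the blocking work off the workflow thread; Task implements
        // IAsyncResult, so it can be handed straight back to WF.
        Task task = Task.Factory.StartNew(_ => DownloadItem(url), state);
        if (callback != null)
            task.ContinueWith(t => callback(t));
        return task;
    }

    protected override void EndExecute(
        AsyncCodeActivityContext context, IAsyncResult result)
    {
        ((Task)result).Wait(); // propagates any exception into the workflow
    }

    private static void DownloadItem(string url) { /* blocking I/O here */ }
}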
The SynchronizationContext extensibility point is intended for cases where you have a particular existing threading model that you need WF to integrate with. Prime examples of this include ASP.NET's threading environment, and Windows Presentation Foundation/WinForms (e.g. if you wanted an activity to interact with the UI thread correctly).

Why are Asynchronous processes not called Synchronous?

So I'm a little confused by this terminology.
Everyone refers to "Asynchronous" computing as running different processes on separate threads, which gives the illusion that these processes are running at the same time.
This is not the definition of the word asynchronous.
a⋅syn⋅chro⋅nous
–adjective
1. not occurring at the same time.
2. (of a computer or other electrical machine) having each operation started only after the preceding operation is completed.
What am I not understanding here?
It means that the two threads are not running in sync, that is, they are not both running on the same timeline.
I think it's a case of computer scientists being too clever about their use of words.
Synchronisation, in this context, would suggest that both threads start and end at the same time. Asynchrony in this sense, means both threads are free to start, execute and end as they require.
The word "synchronous" implies that a function call will be synchronized with some other event.
Asynchronous implies that no such synchronization occurs.
It seems like the definition that you have there should really be the definition for "concurrent," or something. That definition looks wrong.
PS:
Here is the wiktionary definition:
asynchronous
Not synchronous; occurring at different times.
(computing, of a request or a message) allowing the client to continue during processing.
Which just so happens to be the exact opposite of what you posted.
I believe that the term was first used for synchronous vs. asynchronous communication. There synchronous means that the two communicating parts have a common clock signal that they run by, so they run in parallel. Asynchronous communication instead has a ready signal, so one part asks for data and gets a signal back when it's available.
The term was then adapted to processes, but as there are obvious differences, some aspects of the term work differently. For a single-threaded process, the natural way to request that something be done is to make a synchronous call that transfers control to the subprocess; control is returned when it's done, and the process continues.
An asynchronous call works just like asynchronous communication in the sense that you send a request for something to be done, and the process doing it returns a signal when it's done. The difference in the usage of the terms is that for processes it's in the asynchronous case that the processes run in parallel, while for communication it is the synchronous case that runs in parallel.
So "computer or electrical machine" is really a too wide scope for making a correct definition of the term, as it's used in slightly different ways for different techniques.
I would guess it's because they are not synchronized ;)
In other words... if one process gets stopped, killed, or is waiting for something, the other will carry on
I think there's a slant that is slightly different to most of the answers here.
Asynchronous means "not happening at the same time".
In the specific case of threading:
Synchronous means "execute this code now".
Asynchronous means "enqueue this work on a different thread that will be executed at some indeterminate time in the future"
This usually allows you to "do two things at once" because of reasons like:
one thread is just waiting (e.g. for data to arrive on a serial port) so is asleep
You have multiple processors, so the two threads can run concurrently.
However, even with 128 processor cores, the case is the same: the work will be executed "at some time in the future" (if perhaps the very near future) rather than "now".
Your second definition is more helpful here:
2. [...] having each operation started only after the preceding operation is completed.
When you make an asynchronous call, that call might not be completed before the next operation is started. When the call is synchronous, it will be.
It really means that an asynchronous event is happening independently of other events, whereas a synchronous event would be happening dependent on other events.
It's like: flammable, inflammable (which mean the same thing).
Seriously -- it's just one of those quirks of the English language. It doesn't really make sense. You can try to explain it, but it would be just as easy to justify the reverse meanings.
Many of the answers here are not correct. "In-dependently" has a prefix that says NOT dependently, just like "a-synchronous", but "dependent" and "synchronous" do not mean the same thing! :D
So three dependent persons would wait for an order, because they are dependent on the order; but they wait, so they are not synchronous.
In English and any other language with common roots for a, syn and chrono (Italian: asincrono; Spanish: asincrónico; French: asynchrone; Greek: a = not, syn = together, chronos = time) it means exactly the opposite.
The terminology is UTTERLY counter-intuitive. Async functions ARE synchronous in the dictionary sense: they happen at the same time, and that's their power. They DO NOT wait, they DO NOT depend, they DO NOT hold the user waiting - but all those NOTs refer to anything but synchronicity :)
The only answer that is possibly right is the CLOCK one, although it is still confusing. My personal interpretation is this story:
"A professor has an office, and he makes SYNCHRONOUS CALLS for students to come. He says out loud in the main university hall: 'Hey guys who wants to talk to me should come at 10 in the morning tomorrow.', or simply puts a sign saying the same stuff.
RESULT: at 10 in the morning you see a long queue. People had the same time so they came in in the same moment and they got "piled up in the process".
So the professor thinks it would be nice for students not to waste time in the queue (and do synchronous operations, that is, do parallel stuff in their lives at the same time, and that's where the confusion comes).
He decides students can substitute him in making ASYNCHRONOUS CALLS, that is, every time a student ends talking with him, the students may, e.g., call another student saying the professor is free to talk, in a room where students may do whatever they like in the meantime. So every student does not have a single SYNCHRONOUS CALL (10 in the morning, the same time for all) but they have 10, 10.10, 10.18, 10.27.. etc. according to the needed time for each discussion in the professor office."
Is that the meaning of having the same clock, @Guffa?

Asynchronous vs synchronous execution. What is the difference? [closed]

What is the difference between asynchronous and synchronous execution?
When you execute something synchronously, you wait for it to finish before moving on to another task. When you execute something asynchronously, you can move on to another task before it finishes.
In the context of operating systems, this corresponds to executing a process or task on a "thread." A thread is a series of commands (a block of code) that exist as a unit of work. The operating system runs a given thread on a processor core. However, a processor core can only execute a single thread at once. It has no concept of running multiple threads simultaneously. The operating system can provide the illusion of running multiple threads at once by running each thread for a small slice of time (such as 1ms), and continuously switching between threads.
Now, if you introduce multiple processor cores into the mix, then threads CAN execute at the same time. The operating system can allocate time to one thread on the first processor core, then allocate the same block of time to another thread on a different processor core. All of this is about allowing the operating system to manage the completion of your task while you can go on in your code and do other things.
Asynchronous programming is a complicated topic because of the semantics of how things tie together when you can do them at the same time. There are numerous articles and books on the subject; have a look!
Synchronous/Asynchronous HAS NOTHING TO DO WITH MULTI-THREADING.
Synchronous or Synchronized means "connected", or "dependent" in some way. In other words, two synchronous tasks must be aware of one another, and one task must execute in some way that is dependent on the other, such as wait to start until the other task has completed.
Asynchronous means they are totally independent and neither one must consider the other in any way, either in the initiation or in execution.
Synchronous (one thread):
1 thread -> |<---A---->||<----B---------->||<------C----->|
Synchronous (multi-threaded):
thread A -> |<---A---->|
                        \
thread B ------------>   ->|<----B---------->|
                                              \
thread C ---------------------------------->   ->|<------C----->|
Asynchronous (one thread):
          A-Start ------------------------------------------ A-End
            | B-Start -----------------------------------------|--- B-End
            |   |    C-Start ------------------- C-End         |     |
            |   |      |                           |           |     |
            V   V      V                           V           V     V
1 thread->|<-A-|<--B---|<-C-|-A-|-C-|--A--|-B-|--C-->|---A---->|--B-->|
Asynchronous (multi-threaded):
thread A -> |<---A---->|
thread B -----> |<----B---------->|
thread C ---------> |<------C--------->|
Start and end points of tasks A, B, C are represented by <, > characters.
CPU time slices are represented by vertical bars |.
Technically, the concept of synchronous/asynchronous really does not have anything to do with threads. Although, in general, it is unusual to find asynchronous tasks running on the same thread, it is possible, (see below for examples) and it is common to find two or more tasks executing synchronously on separate threads... No, the concept of synchronous/asynchronous has to do solely with whether or not a second or subsequent task can be initiated before the other (first) task has completed, or whether it must wait. That is all. What thread (or threads), or processes, or CPUs, or indeed, what hardware, the task[s] are executed on is not relevant. Indeed, to make this point I have edited the graphics to show this.
ASYNCHRONOUS EXAMPLE:
In solving many engineering problems, the software is designed to split up the overall problem into multiple individual tasks and then execute them asynchronously. Inverting a matrix, or a finite element analysis problem, are good examples. In computing, sorting a list is an example. The quicksort routine, for example, splits the list into two lists and performs a quicksort on each of them, calling itself (quicksort) recursively. In both of the above examples, the two tasks can be (and often are) executed asynchronously. They do not need to be on separate threads. Even a machine with one CPU and only one thread of execution can be coded to initiate processing of a second task before the first one has completed. The only criterion is that the results of one task are not necessary as inputs to the other task. As long as the start and end times of the tasks overlap (possible only if the output of neither is needed as input to the other), they are being executed asynchronously, no matter how many threads are in use.
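In .NET terms, the quicksort point might look like this sketch (QuickSort, items, lo, hi and pivot are illustrative placeholders; the two recursive sorts share no data, so neither has to wait for the other):

// The halves are independent, so both sorts can be in flight at once;
// how many threads actually run them is a separate question.
Task left  = Task.Run(() => QuickSort(items, lo, pivot - 1));
Task right = Task.Run(() => QuickSort(items, pivot + 1, hi));
Task.WaitAll(left, right); // both halves done before the result is used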
SYNCHRONOUS EXAMPLE:
Any process consisting of multiple tasks where the tasks must be executed in sequence, but one must be executed on another machine (fetch and/or update data, get a stock quote from a financial service, etc.). If it's on a separate machine, it is on a separate thread, whether synchronous or asynchronous.
In simpler terms:
SYNCHRONOUS
You are in a queue to get a movie ticket. You cannot get one until everybody in front of you gets one, and the same applies to the people queued behind you.
ASYNCHRONOUS
You are in a restaurant with many other people. You order your food. Other people can also order their food, they don't have to wait for your food to be cooked and served to you before they can order.
In the kitchen restaurant workers are continuously cooking, serving, and taking orders.
People will get their food served as soon as it is cooked.
Simple Explanation via analogy
(story & pics given to help you remember).
Synchronous Execution
My boss is a busy man. He tells me to write code. I tell him: fine. I get started, and he's watching me like a vulture, standing behind me, over my shoulder. I'm like, "Dude, WTF: why don't you go and do something while I finish this?"
He's like: "No, I'm waiting right here until you finish." This is synchronous.
Asynchronous Execution
The boss tells me to do it, and rather than waiting right there for my work, the boss goes off and does other tasks. When I finish my job I simply report to my boss and say: "I'm DONE!" This is Asynchronous Execution.
(Take my advice: NEVER work with the boss behind you.)
Synchronous execution means the execution happens in a single series. A->B->C->D. If you are calling those routines, A will run, then finish, then B will start, then finish, then C will start, etc.
With Asynchronous execution, you begin a routine, and let it run in the background while you start your next, then at some point, say "wait for this to finish". It's more like:
Start A->B->C->D->Wait for A to finish
The advantage is that you can execute B, C, and/or D while A is still running (in the background, on a separate thread), so you can take better advantage of your resources and have fewer "hangs" or "waits".
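In C# terms that pattern is roughly the following (DoAAsync and the other method names are illustrative):

Task a = DoAAsync();  // A starts and runs in the background
DoB();                // B, C and D execute while A is still in flight
DoC();
DoD();
await a;              // "wait for this to finish"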
In a nutshell, synchronization refers to two or more processes' start and end points, NOT their executions. In this example, Process A's endpoint is synchronized with Process B's start point:
SYNCHRONOUS
   |--------A--------|
                     |--------B--------|
Asynchronous processes, on the other hand, do not have their start and endpoints synchronized:
ASYNCHRONOUS
   |--------A--------|
          |--------B--------|
Where Process A overlaps Process B, they're running concurrently or synchronously (dictionary definition), hence the confusion.
UPDATE: Charles Bretana improved his answer, so this answer is now just a simple (potentially oversimplified) mnemonic.
Synchronous means that the caller waits for the response or completion, asynchronous that the caller continues and a response comes later (if applicable).
As an example:
static void Main(string[] args)
{
    Console.WriteLine("Before call");
    doSomething();
    Console.WriteLine("After call");
}

private static void doSomething()
{
    Console.WriteLine("In call");
}
This will always output:
Before call
In call
After call
But if we were to make doSomething() asynchronous (multiple ways to do it), then the output could become:
Before call
After call
In call
Because the method making the asynchronous call would immediately continue with the next line of code. I say "could", because the order of execution can't be guaranteed with async operations. It could also execute as the original, depending on thread timings, etc.
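For example, one simple way to make the call asynchronous (an illustrative sketch; async Main requires C# 7.1+) is to wrap the work in a Task and not wait for it before the next line runs:

static async Task Main(string[] args)
{
    Console.WriteLine("Before call");
    Task call = Task.Run(() => doSomething()); // starts, does not block
    Console.WriteLine("After call");
    await call; // ensure the call finishes before the program exits
}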
Sync vs Async
Sync and async operations are about the execution order of the next task in relation to the current task.
Let's take a look at an example where Task 2 is the current task and Task 3 is the next task. A task is an atomic operation: a method call in a stack (a method frame).
Synchronous
Implies that tasks will be executed one by one. The next task is started only after the current task is finished. Task 3 is not started until Task 2 is finished.
Single Thread + Sync - Sequential
Usual execution.
Pseudocode:
main() {
    task1()
    task2()
    task3()
}
Multi Thread + Sync - Parallel
Blocked.
Blocked means that a thread is just waiting (although it could do something useful, e.g. Java ExecutorService[About] and Future[About]). Pseudocode:
main() {
    task1()
    Future future = ExecutorService.submit(task2())
    future.get() //<- blocking operation
    task3()
}
Asynchronous
Implies that a task returns control immediately, with a promise to execute the code and notify about the result later (e.g. via a callback or a future). Task 3 is executed even if Task 2 is not finished. async callback, completion handler[About]
Single Thread + Async - Concurrent
A Callback Queue (Message Queue) and an Event Loop (Run Loop, Looper) are used. The Event Loop checks whether the Thread Stack is empty, and if so, it pushes the first item from the Callback Queue onto the Thread Stack and repeats these steps again. Simple examples are a button click, a posted event...
Pseudocode:
main() {
    task1()
    ThreadMain.handler.post(task2());
    task3()
}
Multi Thread + Async - Concurrent and Parallel
Non-blocking.
For example, when you need to make some calculations on another thread without blocking. Pseudocode:
main() {
    task1()
    new Thread(task2()).start();
    //or
    Future future = ExecutorService.submit(task2())
    task3()
}
You are able to use the result of Task 2 via the blocking method get() or via an async callback through the event loop.
For example, in the mobile world, where we have a UI/main thread and we need to download something, we have several options:
sync block - block the UI thread and wait until the download is done. The UI is not responsive.
async callback - create a new thread with an async callback to update the UI (it is not possible to access the UI from a non-UI thread). Callback hell.
async coroutine[About] - an async task with sync syntax. It allows mixing the downloading task (a suspend function) with UI tasks.
[iOS sync/async], [Android sync/async]
[Parallel vs Concurrent]
I think this is a bit of a roundabout explanation, but it still clarifies things with a real-life example.
Small Example:
Let's say playing an audio file involves three steps:
Get the compressed song from the hard disk.
Decompress the audio.
Play the uncompressed audio.
If your audio player does steps 1, 2, 3 sequentially for every song, then it is synchronous. You will have to wait for some time before hearing the song, until it has actually been fetched and decompressed.
If your audio player does steps 1, 2, 3 independently of each other, then it is asynchronous, i.e. while playing audio 1 (step 3), it fetches audio 3 from the hard disk in parallel (step 1) and decompresses audio 2 in parallel (step 2).
You will end up hearing the songs without waiting much for fetching and decompressing.
I created a gif to explain this; hope it helps:
Look, line 3 is asynchronous and the others are synchronous.
All lines before line 3 must wait until the previous line finishes its work, but because line 3 is asynchronous, the next line (line 4) doesn't wait for line 3. Line 5, however, must wait for line 4 to finish its work, and line 6 must wait for line 5, and 7 for 6, because lines 4, 5, 6 and 7 are not asynchronous.
Simply said, asynchronous execution is doing stuff in the background.
For example, if you want to download a file from the internet, you might use a synchronous function to do that, but it will block your thread until the file has finished downloading. This can make your application unresponsive to any user input.
Instead, you could download the file in the background using an asynchronous method. In this case the download function returns immediately and program execution continues normally. All the download operations are done in the background, and your program will be notified when they're finished.
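As a small C# sketch of that download scenario (the URL and file name are placeholders; HttpClient and File.WriteAllBytesAsync are standard .NET APIs):

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

using var http = new HttpClient();
// Starting the download returns immediately; the work proceeds in the background.
Task<byte[]> download = http.GetByteArrayAsync("https://example.com/file.zip");
// ...the thread is free here: update the UI, handle input, etc....
byte[] data = await download;                    // completion point
await File.WriteAllBytesAsync("file.zip", data);
Console.WriteLine("Download finished");          // runs only after the await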
As a really simple example,
SYNCHRONOUS
Imagine 3 school students instructed to run a relay race on a road.
The 1st student runs her given distance, stops, and passes the baton to the 2nd. No one else has started to run.
1------>
2.
3.
When the 2nd student retrieves the baton, she starts to run her given distance.
1.
2------>
3.
The 2nd student's shoelace comes untied. She has stopped and is tying it up again. Because of this, the 2nd's end time is extended and the 3rd's starting time is delayed.
1.
--2.--->
3.
This pattern continues until the 3rd retrieves the baton from the 2nd and finishes the race.
ASYNCHRONOUS
Just imagine 10 random people walking on the same road.
They're not in a queue, of course; they are just randomly walking in different places on the road, at different paces.
The 2nd person's shoelace comes untied. She stops to tie it up again.
But nobody is waiting for her to get it tied. Everyone else is still walking the same way they did before, at that same pace of theirs.
10--> 9-->
8--> 7--> 6-->
5--> 4-->
1--> 2. 3-->
Synchronous basically means that you can only execute one thing at a time. Asynchronous means that you can execute multiple things at a time, and you don't have to finish executing the current thing in order to move on to the next one.
When executing a sequence like: a>b>c>d>, if we get a failure in the middle of execution like:
a
b
c
fail
Then we re-start from the beginning:
a
b
c
d
...this is synchronous.
If, however, we have the same sequence to execute: a>b>c>d>, and we have a failure in the middle:
a
b
c
fail
...but instead of restarting from the beginning, we re-start from the point of failure:
c
d
...this is known as asynchronous.
An example of instructions for making a breakfast:
Pour a cup of coffee.
Heat a pan, then fry two eggs.
Fry three slices of bacon.
Toast two pieces of bread.
Add butter and jam to the toast.
Pour a glass of orange juice.
If you have experience with cooking, you'd execute those instructions asynchronously. You'd start warming the pan for eggs, then start the bacon. You'd put the bread in the toaster, then start the eggs. At each step of the process, you'd start a task, then turn your attention to tasks that are ready for your attention.
Cooking breakfast is a good example of asynchronous work that isn't parallel. One person (or thread) can handle all these tasks. Continuing the breakfast analogy, one person can make breakfast asynchronously by starting the next task before the first task completes. The cooking progresses whether or not someone is watching it. As soon as you start warming the pan for the eggs, you can begin frying the bacon. Once the bacon starts, you can put the bread into the toaster.
For a parallel algorithm, you'd need multiple cooks (or threads). One would make the eggs, one the bacon, and so on. Each one would be focused on just that one task. Each cook (or thread) would be blocked synchronously waiting for the bacon to be ready to flip, or the toast to pop.
(emphasis mine)
From Asynchronous programming concepts
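In C# the breakfast might be sketched like this (the method names and return types are illustrative, in the spirit of that article):

// One cook (thread): each task is started before the previous finishes,
// and attention returns to whichever task is ready next.
Task<Eggs>  eggs  = FryEggsAsync(2);
Task<Bacon> bacon = FryBaconAsync(3);
Task<Toast> toast = ToastBreadAsync(2);
await Task.WhenAll(eggs, bacon, toast); // asynchronous, but a single cook
Console.WriteLine("Breakfast is ready!");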
A synchronous operation does its work before returning to the caller.
An asynchronous operation does (most or all of) its work after returning to the caller.
You are confusing synchronous with parallel vs series. Synchronous means all at the same time. Synchronized means related to each other, which can mean in series or at a fixed interval. While the program is doing all, it is running in series. Get a dictionary... this is why we have unsweet tea. You have tea or sweetened tea.
A different English definition of Synchronize is here:
Coordinate; combine.
I think that is a better definition than "happening at the same time". That one is also a definition, but I don't think it is the one that fits the way the word is used in Computer Science.
So an asynchronous task is not coordinated with other tasks, whereas a synchronous task IS coordinated with other tasks, so one finishes before another starts.
How that is achieved is a different question.
I think a good way to think of it is a classic relay race:
Synchronous: processes are like members of the same team; they won't execute until they receive the baton (the end of the execution of the previous process/runner), and yet they are all acting in sync with each other.
Asynchronous: processes are like members of different teams on the same relay race track; they will run and stop, async with each other, but within the same race (the overall program execution).
Does it make sense?
Synchronous means queue-style execution: tasks will be executed one by one. Suppose there is only one vehicle that needs to be shared among friends to reach their destinations; it will be shared one by one.
In the asynchronous case, each friend can rent a vehicle and reach their destination independently.
In regards to the "at the same time" definition of synchronous execution (which is sometimes confusing), here's a good way to understand it:
Synchronous Execution: all the tasks within a block of code are executed at the same time.
Asynchronous Execution: the tasks within a block of code are not all executed at the same time.
Yes, synchronous literally means at the same time: it means doing work all together. Multiple humans/objects in the world can do multiple things at the same time, but if we look at computers, synchronous means the processes work together, i.e. the processes are dependent on one another's return values, and that's why they get executed one after another in proper sequence. Asynchronous means the processes don't work together; they may work at the same time (if they are on multiple threads), but they work independently.