Asynchronous vs synchronous execution. What is the difference? [closed]

What is the difference between asynchronous and synchronous execution?

When you execute something synchronously, you wait for it to finish before moving on to another task. When you execute something asynchronously, you can move on to another task before it finishes.
In the context of operating systems, this corresponds to executing a process or task on a "thread." A thread is a series of commands (a block of code) that exists as a unit of work. The operating system runs a given thread on a processor core. However, a processor core can only execute a single thread at once. It has no concept of running multiple threads simultaneously. The operating system can provide the illusion of running multiple threads at once by running each thread for a small slice of time (such as 1 ms) and continuously switching between threads.
Now, if you introduce multiple processor cores into the mix, then threads CAN execute at the same time. The operating system can allocate time to one thread on the first processor core, then allocate the same block of time to another thread on a different processor core. All of this is about allowing the operating system to manage the completion of your task while you can go on in your code and do other things.
Asynchronous programming is a complicated topic because of the semantics of how things tie together when you can do them at the same time. There are numerous articles and books on the subject; have a look!
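To make the distinction concrete, here is a minimal C# sketch of both styles (DoWork is just a placeholder for any unit of work):

using System;
using System.Threading.Tasks;

class Demo
{
    static void DoWork() => Console.WriteLine("working...");

    static async Task Main()
    {
        // Synchronous: wait for DoWork to finish before moving on.
        DoWork();
        Console.WriteLine("runs only after DoWork is done");

        // Asynchronous: DoWork is scheduled in the background and we move on.
        Task work = Task.Run(DoWork);
        Console.WriteLine("runs without waiting for DoWork");
        await work; // rejoin when we actually need it finished
    }
}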

Synchronous/Asynchronous HAS NOTHING TO DO WITH MULTI-THREADING.
Synchronous or Synchronized means "connected", or "dependent" in some way. In other words, two synchronous tasks must be aware of one another, and one task must execute in some way that is dependent on the other, such as waiting to start until the other task has completed.
Asynchronous means they are totally independent and neither one must consider the other in any way, either in the initiation or in execution.
Synchronous (one thread):
1 thread -> |<---A---->||<----B---------->||<------C----->|
Synchronous (multi-threaded):
thread A -> |<---A---->|
                        \
thread B -------------->  ->|<----B---------->|
                                               \
thread C ------------------------------------->  ->|<------C----->|
Asynchronous (one thread):
A-Start ------------------------------------------ A-End
       | B-Start -----------------------------------------|--- B-End
       |    |      C-Start ------------------- C-End      |     |
       |    |       |                             |        |     |
       V    V       V                             V        V     V
1 thread->|<-A-|<--B---|<-C-|-A-|-C-|--A--|-B-|--C-->|---A---->|--B-->|
Asynchronous (multi-Threaded):
thread A -> |<---A---->|
thread B -----> |<----B---------->|
thread C ---------> |<------C--------->|
Start and end points of tasks A, B, C represented by <, > characters.
CPU time slices represented by vertical bars |
Technically, the concept of synchronous/asynchronous really does not have anything to do with threads. Although, in general, it is unusual to find asynchronous tasks running on the same thread, it is possible (see below for examples), and it is common to find two or more tasks executing synchronously on separate threads... No, the concept of synchronous/asynchronous has to do solely with whether or not a second or subsequent task can be initiated before the other (first) task has completed, or whether it must wait. That is all. What thread (or threads), or processes, or CPUs, or indeed, what hardware, the task[s] are executed on is not relevant. Indeed, to make this point I have edited the graphics to show this.
ASYNCHRONOUS EXAMPLE:
In solving many engineering problems, the software is designed to split up the overall problem into multiple individual tasks and then execute them asynchronously. Inverting a matrix, or a finite element analysis problem, are good examples. In computing, sorting a list is an example. The quicksort routine, for example, splits the list into two lists and performs a quicksort on each of them, calling itself (quicksort) recursively. In both of the above examples, the two tasks can be (and often were) executed asynchronously. They do not need to be on separate threads. Even a machine with one CPU and only one thread of execution can be coded to initiate processing of a second task before the first one has completed. The only criterion is that the results of one task are not necessary as inputs to the other task. As long as the start and end times of the tasks overlap (possible only if the output of neither is needed as input to the other), they are being executed asynchronously, no matter how many threads are in use.
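As a rough C# illustration of that idea, two independent halves of a sort can be initiated before either finishes (Sorted, firstHalf, and secondHalf are hypothetical stand-ins):

// Two independent sub-problems started before either finishes.
// They are asynchronous with respect to each other because their
// start and end times overlap; neither needs the other's output.
Task<int[]> left  = Task.Run(() => Sorted(firstHalf));
Task<int[]> right = Task.Run(() => Sorted(secondHalf));
int[][] halves = await Task.WhenAll(left, right);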
SYNCHRONOUS EXAMPLE:
Any process consisting of multiple tasks where the tasks must be executed in sequence, but one must be executed on another machine (Fetch and/or update data, get a stock quote from financial service, etc.). If it's on a separate machine it is on a separate thread, whether synchronous or asynchronous.

In simpler terms:
SYNCHRONOUS
You are in a queue to get a movie ticket. You cannot get one until everybody in front of you gets one, and the same applies to the people queued behind you.
ASYNCHRONOUS
You are in a restaurant with many other people. You order your food. Other people can also order their food; they don't have to wait for your food to be cooked and served to you before they can order.
In the kitchen, restaurant workers are continuously cooking, serving, and taking orders.
People will get their food served as soon as it is cooked.

Simple Explanation via analogy
(a story given to help you remember).
Synchronous Execution
My boss is a busy man. He tells me to write code. I tell him: Fine. I get started and he's watching me like a vulture, standing behind me, off my shoulder. I'm like "Dude, WTF: why don't you go and do something while I finish this?"
He's like: "No, I'm waiting right here until you finish." This is synchronous.
Asynchronous Execution
The boss tells me to do it, and rather than waiting right there for my work, the boss goes off and does other tasks. When I finish my job I simply report to my boss and say: "I'm DONE!" This is Asynchronous Execution.
(Take my advice: NEVER work with the boss behind you.)

Synchronous execution means the execution happens in a single series. A->B->C->D. If you are calling those routines, A will run, then finish, then B will start, then finish, then C will start, etc.
With Asynchronous execution, you begin a routine, and let it run in the background while you start your next, then at some point, say "wait for this to finish". It's more like:
Start A->B->C->D->Wait for A to finish
The advantage is that you can execute B, C, and/or D while A is still running (in the background, on a separate thread), so you can take better advantage of your resources and have fewer "hangs" or "waits".
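In C# terms, that pattern might look like this sketch (A, B, C, D are placeholder methods):

Task a = Task.Run(A);  // start A in the background
B();                   // B, C, and D run while A is still going
C();
D();
await a;               // "wait for A to finish"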

In a nutshell, synchronization refers to two or more processes' start and end points, NOT their executions. In this example, Process A's endpoint is synchronized with Process B's start point:
SYNCHRONOUS
|--------A--------|
                  |--------B--------|
Asynchronous processes, on the other hand, do not have their start and endpoints synchronized:
ASYNCHRONOUS
|--------A--------|
        |--------B--------|
Where Process A overlaps Process B, they're running concurrently or synchronously (dictionary definition), hence the confusion.
UPDATE: Charles Bretana improved his answer, so this answer is now just a simple (potentially oversimplified) mnemonic.

Synchronous means that the caller waits for the response or completion, asynchronous that the caller continues and a response comes later (if applicable).
As an example:
static void Main(string[] args)
{
    Console.WriteLine("Before call");
    doSomething();
    Console.WriteLine("After call");
}

private static void doSomething()
{
    Console.WriteLine("In call");
}
This will always output:
Before call
In call
After call
But if we were to make doSomething() asynchronous (multiple ways to do it), then the output could become:
Before call
After call
In call
Because the method making the asynchronous call would immediately continue with the next line of code. I say "could", because order of execution can't be guaranteed with async operations. It could also execute as the original, depending on thread timings, etc.
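For instance, one of those "multiple ways" could be wrapping the call in a Task, roughly like this sketch:

static async Task Main(string[] args)
{
    Console.WriteLine("Before call");
    Task call = Task.Run(doSomething);  // doSomething now runs asynchronously
    Console.WriteLine("After call");    // may print before "In call"
    await call;                         // don't exit before it completes
}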

Sync vs Async
Sync and async operations are about the execution order of the next task in relation to the current task.
Let's take a look at an example where Task 2 is the current task and Task 3 is the next task. A task is an atomic operation - a method call in a stack (method frame).
Synchronous
Implies that tasks will be executed one by one. The next task is started only after the current task is finished. Task 3 is not started until Task 2 is finished.
Single Thread + Sync - Sequential
Usual execution.
Pseudocode:
main() {
    task1()
    task2()
    task3()
}
Multi Thread + Sync - Parallel
Blocked.
Blocked means that a thread is just waiting (although it could do something useful; e.g. a Java ExecutorService and Future). Pseudocode:
main() {
    task1()
    Future future = ExecutorService.submit(task2())
    future.get() //<- blocking operation
    task3()
}
Asynchronous
Implies that the task returns control immediately, with a promise to execute the code and notify about the result later (e.g. via a callback or future). Task 3 is executed even if Task 2 is not finished. (async callback, completion handler)
Single Thread + Async - Concurrent
A Callback Queue (Message Queue) and an Event Loop (Run Loop, Looper) are used. The Event Loop checks whether the thread stack is empty; if it is, it pushes the first item from the Callback Queue onto the thread stack, and repeats these steps again. Simple examples are a button click or a post event...
Pseudocode:
main() {
    task1()
    ThreadMain.handler.post(task2());
    task3()
}
Multi Thread + Async - Concurrent and Parallel
Non-blocking.
For example, when you need to make some calculations on another thread without blocking. Pseudocode:
main() {
    task1()
    new Thread(task2()).start();
    //or
    Future future = ExecutorService.submit(task2())
    task3()
}
You can use the result of Task 2 via the blocking method get() or via an async callback through a loop.
For example, in the mobile world, where we have a UI/main thread and we need to download something, we have several options:
sync block - block the UI thread and wait until the download is done. The UI is not responsive.
async callback - create a new thread with an async callback to update the UI (it is not possible to access the UI from a non-UI thread). Callback hell.
async coroutine - an async task with sync syntax. It allows mixing the downloading task (a suspend function) with UI tasks.
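A small C# sketch of the blocking-get vs. async-callback choice described above (Compute, UseResult, and Task3 are hypothetical stand-ins):

// Blocked: the current thread waits for Task 2's result before Task 3.
Task<int> task2 = Task.Run(() => Compute());
int result = task2.Result;  // blocking wait, like future.get()
Task3();

// Non-blocking: register a continuation and go straight to Task 3.
Task.Run(() => Compute())
    .ContinueWith(t => UseResult(t.Result));
Task3();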

I think this is a bit of a round-about explanation, but it still clarifies things using a real-life example.
Small Example:
Let's say playing an audio file involves three steps:
Getting the compressed song from the hard disk.
Decompressing the audio.
Playing the uncompressed audio.
If your audio player does steps 1, 2, 3 sequentially for every song, then it is synchronous. You will have to wait some time to hear a song, until it actually gets fetched and decompressed.
If your audio player does steps 1, 2, 3 independently of each other, then it is asynchronous, i.e.
while playing audio 1 (step 3), it fetches audio 3 from the hard disk in parallel (step 1) and decompresses audio 2 in parallel (step 2).
You will end up hearing the songs without waiting much for fetch and decompress.
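A hedged C# sketch of that overlap (FetchAsync, Decompress, and PlayAsync are hypothetical helpers):

// While song n plays, song n+1 is already being fetched and decompressed.
Task playing = PlayAsync(songs[n]);
Task<byte[]> nextPcm = FetchAsync(n + 1)
    .ContinueWith(t => Decompress(t.Result));
await Task.WhenAll(playing, nextPcm);  // little or no gap between songs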

I created a gif to explain this; hope it helps:
Look: line 3 is asynchronous and the others are synchronous.
All lines before line 3 must wait until the previous line finishes its work. But because line 3 is asynchronous, the next line (line 4) doesn't wait for line 3. Line 5, however, waits for line 4 to finish its work, line 6 waits for line 5, and 7 for 6, because lines 4, 5, 6, and 7 are not asynchronous.

Simply said, asynchronous execution is doing stuff in the background.
For example, if you want to download a file from the internet, you might use a synchronous function to do that, but it will block your thread until the file finishes downloading. This can make your application unresponsive to any user input.
Instead, you could download the file in the background using an asynchronous method. In this case, the download function returns immediately and program execution continues normally. All the download operations are done in the background and your program will be notified when it's finished.
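With .NET's HttpClient, for example, that might look roughly like this sketch (the URL is a placeholder):

using var http = new HttpClient();

// Returns immediately with a Task; the download continues in the background.
Task<byte[]> download = http.GetByteArrayAsync("https://example.com/file.zip");

DoOtherWork();  // stand-in for whatever keeps the app responsive

byte[] data = await download;  // collect the result once it's ready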

As a really simple example,
SYNCHRONOUS
Imagine 3 school students instructed to run a relay race on a road.
1st student runs her given distance, stops and passes the baton to the 2nd. No one else has started to run.
1------>
2.
3.
When the 2nd student retrieves the baton, she starts to run her given distance.
1.
2------>
3.
The 2nd student's shoelace has come untied. She has stopped and is tying it up again. Because of this, the 2nd's end time has been extended and the 3rd's starting time has been delayed.
1.
--2.--->
3.
This pattern continues on till the 3rd retrieves the baton from 2nd and finishes the race.
ASYNCHRONOUS
Just imagine 10 random people walking on the same road.
They're not in a queue, of course; they're just walking at different places on the road, at different paces.
The 2nd person's shoelace has come untied. She has stopped to tie it up again.
But nobody is waiting for her to get it tied up. Everyone else is still walking the same way they did before, at that same pace of theirs.
10--> 9-->
8--> 7--> 6-->
5--> 4-->
1--> 2. 3-->

Synchronous basically means that you can only execute one thing at a time. Asynchronous means that you can execute multiple things at a time and you don't have to finish executing the current thing in order to move on to next one.

When executing a sequence like: a>b>c>d>, if we get a failure in the middle of execution like:
a
b
c
fail
Then we re-start from the beginning:
a
b
c
d
this is synchronous
If, however, we have the same sequence to execute: a>b>c>d>, and we have a failure in the middle:
a
b
c
fail
...but instead of restarting from the beginning, we re-start from the point of failure:
c
d
...this is known as asynchronous.

An example of instructions for making a breakfast:
Pour a cup of coffee.
Heat a pan, then fry two eggs.
Fry three slices of bacon.
Toast two pieces of bread.
Add butter and jam to the toast.
Pour a glass of orange juice.
If you have experience with cooking, you'd execute those instructions asynchronously. You'd start warming the pan for eggs, then start the bacon. You'd put the bread in the toaster, then start the eggs. At each step of the process, you'd start a task, then turn your attention to tasks that are ready for your attention.
Cooking breakfast is a good example of asynchronous work that isn't parallel. One person (or thread) can handle all these tasks. Continuing the breakfast analogy, one person can make breakfast asynchronously by starting the next task before the first task completes. The cooking progresses whether or not someone is watching it. As soon as you start warming the pan for the eggs, you can begin frying the bacon. Once the bacon starts, you can put the bread into the toaster.
For a parallel algorithm, you'd need multiple cooks (or threads). One would make the eggs, one the bacon, and so on. Each one would be focused on just that one task. Each cook (or thread) would be blocked synchronously waiting for the bacon to be ready to flip, or the toast to pop.
(emphasis mine)
From Asynchronous programming concepts
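In C#, that asynchronous-but-single-cook breakfast reads roughly like this sketch (the *Async cooking helpers and food types are hypothetical):

Coffee cup = PourCoffee();                   // quick synchronous step
Task<Egg>   eggsTask  = FryEggsAsync(2);     // start the eggs...
Task<Bacon> baconTask = FryBaconAsync(3);    // ...start the bacon...
Task<Toast> toastTask = ToastBreadAsync(2);  // ...start the toast

// One cook, three tasks "cooking" at once; rejoin as each finishes.
Toast toast = await toastTask;
ApplyButter(toast);
ApplyJam(toast);
await Task.WhenAll(eggsTask, baconTask);
PourOJ();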

A synchronous operation does its work before returning to the caller.
An asynchronous operation does (most or all of) its work after returning to the caller.

You are confusing synchronous with parallel vs series. Synchronous means all at the same time. Synchronized means related to each other, which can mean in series or at a fixed interval. While the program is doing all, it is running in series. Get a dictionary... this is why we have unsweet tea. You have tea or sweetened tea.

A different English dictionary definition of Synchronize is:
Coordinate; combine.
I think that is a better definition than of "Happening at the same time". That one is also a definition, but I don't think it is the one that fits the way it is used in Computer Science.
So an asynchronous task is not coordinated with other tasks, whereas a synchronous task IS coordinated with other tasks, so one finishes before another starts.
How that is achieved is a different question.

I think a good way to think of it is a classic running Relay Race
Synchronous: Processes are like members of the same team; they won't execute until they receive the baton (the end of the execution of the previous process/runner), and yet they are all acting in sync with each other.
Asynchronous: Processes are like members of different teams on the same relay race track; they will run and stop, async with each other, but within the same race (the overall program execution).
Does it make sense?

Synchronous means queue-style execution: tasks will be executed one by one. Suppose there is only one vehicle that needs to be shared among friends to reach their destination; it will be shared one by one.
In the asynchronous case, each friend can get a rented vehicle and reach their destination.

In regards to the "at the same time" definition of synchronous execution (which is sometimes confusing), here's a good way to understand it:
Synchronous Execution: All tasks within a block of code are all executed at the same time.
Asynchronous Execution: All tasks within a block of code are not all executed at the same time.

Yes, synchronous means at the same time, literally - it means doing work all together. Multiple humans/objects in the world can do multiple things at the same time, but if we look at computers, synchronous means the processes work together: the processes depend on one another's return values, and that's why they get executed one after another in proper sequence. Asynchronous means the processes don't work together; they may work at the same time (if they are on multiple threads), but they work independently.

Related

Which wait mechanism is Polly using

Polly has several retry functionalities, for example WaitAndRetryForever. I looked in the documentation but couldn't find what exactly is used to make the thread wait until the next retry. I guess Polly uses System.Timers for this, or is it something completely different? Thanks for any collaboration.
Asynchronous executions (fooAsyncPolicy.ExecuteAsync(...)) wait with Task.Delay(...), freeing the thread the caller was using while the delay occurs.
Synchronous executions (fooSyncPolicy.Execute(...)) wait between retries in a cancellable thread-blocking manner. This means that, for the synchronous (a):
action();
compared to the synchronous (b):
policy.Execute(action);
the following three things all hold:
1. both (a) and (b) block progress from continuing (subsequent code does not run) until the statement has completed;
2. (b) executes action on the same thread that (a) originally would have;
3. (b) expresses exceptions (if Policy operation does not intervene) in the same/similar-as-possible way that (a) originally would have.
These semantics (1) (2) (3) are intentional, to keep synchronously executing code with Polly as similar in semantics/behaviour (surrounding code needs little adjustment) as executing code without Polly.
Anticipating a follow-up question: Wouldn't it be possible to write the synchronous Polly: Policy.Handle<T>().WaitAndRetry(...).Execute(action) so that it didn't block a thread while waiting before retrying?: Yes, but no solution has been found that is preferable to letting the caller control transitions to TPL Tasks or async/await and then using Polly's ExecuteAsync(...).
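For reference, the two styles discussed here look roughly like this with Polly (the exception type, retry count, and delays are arbitrary):

// Synchronous: blocks the calling thread between retries (cancellable).
var retry = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetry(3, attempt => TimeSpan.FromSeconds(attempt));
retry.Execute(action);

// Asynchronous: waits with Task.Delay, freeing the thread between retries.
var retryAsync = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(attempt));
await retryAsync.ExecuteAsync(actionAsync);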

A MailboxProcessor that operates with a LIFO logic

I am learning about F# agents (MailboxProcessor).
I am dealing with a rather unconventional problem.
I have one agent (dataSource) which is a source of streaming data. The data has to be processed by an array of agents (dataProcessor). We can consider dataProcessor as some sort of tracking device.
Data may flow in faster than the speed with which the dataProcessor may be able to process its input.
It is OK to have some delay. However, I have to ensure that the agent stays on top of its work and does not get piled under obsolete observations.
I am exploring ways to deal with this problem.
The first idea is to implement a stack (LIFO) in dataSource. dataSource would send over the latest observation available when dataProcessor becomes available to receive and process the data. This solution may work, but it may get complicated, as dataProcessor may need to be blocked and re-activated and communicate its status to dataSource, leading to a two-way communication problem. This problem may boil down to a blocking queue in the consumer-producer problem, but I am not sure...
The second idea is to have dataProcessor take care of message sorting. In this architecture, dataSource will simply post updates in dataProcessor's queue. dataProcessor will use Scan to fetch the latest data available in its queue. This may be the way to go. However, I am not sure if in the current design of MailboxProcessor it is possible to clear a queue of messages, deleting the older obsolete ones. Furthermore, it is written here that:
Unfortunately, the TryScan function in the current version of F# is
broken in two ways. Firstly, the whole point is to specify a timeout
but the implementation does not actually honor it. Specifically,
irrelevant messages reset the timer. Secondly, as with the other Scan
function, the message queue is examined under a lock that prevents any
other threads from posting for the duration of the scan, which can be
an arbitrarily long time. Consequently, the TryScan function itself
tends to lock-up concurrent systems and can even introduce deadlocks
because the caller's code is evaluated inside the lock (e.g. posting
from the function argument to Scan or TryScan can deadlock the agent
when the code under the lock blocks waiting to acquire the lock it is
already under).
Having the latest observation bounced back may be a problem.
The author of this post, Jon Harrop, suggests that
I managed to architect around it and the resulting architecture was actually better. In essence, I eagerly Receive all messages and filter using my own local queue.
This idea is surely worth exploring but, before starting to play around with code, I would welcome some inputs on how I could structure my solution.
Thank you.
Sounds like you might need a destructive-scan version of the mailbox processor. I implemented this with TPL Dataflow in a blog series that you might be interested in.
My blog is currently down for maintenance but I can point you to the posts in markdown format.
Part1
Part2
Part3
You can also check out the code on github
I also wrote about the issues with scan in my lurking horror post
Hope that helps...
tl;dr I would try this: take the Mailbox implementation from FSharp.Actor or Zach Bray's blog post, replace the ConcurrentQueue by a ConcurrentStack (plus add some bounded-capacity logic), and use this changed agent as a dispatcher to pass messages from dataSource to an army of dataProcessors implemented as ordinary MBPs or Actors.
tl;dr2 If workers are a scarce and slow resource and we need to process the message that is the latest at the moment a worker is ready, then it all boils down to an agent with a stack instead of a queue (with some bounded-capacity logic) plus a BlockingQueue of workers. The dispatcher dequeues a ready worker, then pops a message from the stack and sends this message to the worker. After the job is done, the worker enqueues itself to the queue when it becomes ready (e.g. before let! msg = inbox.Receive()). The dispatcher's consumer thread then blocks until any worker is ready, while the producer thread keeps the bounded stack updated. (A bounded stack could be done with an array + offset + size inside a lock; the one below is too complex.)
Details
MailBoxProcessor is designed to have only one consumer. This is even commented in the source code of MBP here (search for the word 'DRAGONS' :) )
If you post your data to an MBP then only one thread could take it from the internal queue or stack.
In your particular use case I would use a ConcurrentStack directly or, better, wrapped in a BlockingCollection:
It will allow many concurrent consumers
It is very fast and thread safe
BlockingCollection has a BoundedCapacity property that allows you to limit the size of a collection. It throws on Add, but you could catch the exception or use TryAdd. If A is the main stack and B is a standby: TryAdd to A; on false, Add to B and swap the two with Interlocked.Exchange; then process the needed messages in A, clear it, and make it the new standby. Use three stacks if processing A could take longer than B takes to fill again. This way you do not block and do not lose any messages, but you can discard unneeded ones in a controlled way (a small C# sketch of this bounded-LIFO idea follows the diagram below).
BlockingCollection has methods like AddToAny/TakeFromAny, which work on arrays of BlockingCollections. This could help, e.g.:
dataSource produces messages to a BlockingCollection with a ConcurrentStack implementation (BCCS)
another thread consumes messages from the BCCS and sends them to an array of processing BCCSs. You said that there is a lot of data. You may sacrifice one thread to be blocking and dispatching your messages indefinitely
each processing agent has its own BCCS, or is implemented as an Agent/Actor/MBP to which the dispatcher posts messages. In your case you need to send a message to only one processorAgent, so you may store processing agents in a circular buffer to always dispatch a message to the least recently used processor.
Something like this:
                (data stream produces 'T)
                          |
                 [dispatcher's BCCS]
                          |
(a dispatcher thread consumes 'T and pushes to processors, manages capacity of BCCS and LRU queue)
              |                            |
[processor1's BCCS/Actor/MBP] ... [processorN's BCCS/Actor/MBP]
              |                            |
          (process)                    (process)
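A minimal C# sketch of the bounded-LIFO idea from the list above (Observation, the capacity, and the standby collection are arbitrary):

using System.Collections.Concurrent;

// LIFO buffer: a BlockingCollection backed by a ConcurrentStack,
// so Take() always pops the most recent observation first.
var latestFirst = new BlockingCollection<Observation>(
    new ConcurrentStack<Observation>(), 1000);

// Producer (dataSource): TryAdd returns false instead of blocking when full.
if (!latestFirst.TryAdd(obs))
    standby.Add(obs);  // divert to the standby stack described above

// Consumer (dispatcher): blocks until something is available.
Observation newest = latestFirst.Take();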
Instead of a ConcurrentStack, you may want to read about the heap data structure. If you need your latest messages by some property of the messages, e.g. a timestamp, rather than by the order in which they arrive on the stack (e.g. if there could be delays in transit and arrival order <> creation order), you can get the latest message by using a heap.
If you still need Agents semantics/API, you could read several sources in addition to Dave's links, and somehow adapt the implementation to multiple concurrent consumers:
An interesting article by Zach Bray on an efficient Actors implementation. There you do need to replace (under the comment // Might want to schedule this call on another thread.) the line execute true by a line async { execute true } |> Async.Start or similar, because otherwise the producing thread will be the consuming thread - not good for a single fast producer. However, for a dispatcher like the one described above this is exactly what is needed.
The FSharp.Actor (aka Fakka) development branch and the FSharp MBP source code (first link above) could be very useful for implementation details. The FSharp.Actor library has been in a freeze for several months, but there is some activity in the dev branch.
Don't miss the discussion about Fakka in Google Groups in this context.
I have a somewhat similar use case, and for the last two days I have researched everything I could find on F# Agents/Actors. This answer is a kind of TODO for myself to try these ideas, half of which were born while writing it.
The simplest solution is to greedily eat all messages in the inbox when one arrives and discard all but the most recent. Easily done using TryReceive:
let rec readLatestLoop oldMsg =
    async { let! newMsg = inbox.TryReceive 0
            match newMsg with
            | None -> return oldMsg
            | Some newMsg -> return! readLatestLoop newMsg }

let readLatest() =
    async { let! msg = inbox.Receive()
            return! readLatestLoop msg }
When faced with the same problem, I architected a more sophisticated and efficient solution I called cancellable streaming, described in an F# Journal article. The idea is to start processing messages and then cancel that processing if they are superseded. This significantly improves concurrency if significant processing is being done.

How to control the number of threads when executing an Asynchronous Activity in WF 4

I am creating a workflow in WF 4, where I have a ParallelForeach activity that iterates over a collection of items. For each item in the collection, I execute a custom Asynchronous activity to process multiple items in parallel.
The above solution works for me, but I am concerned about the number of threads used since each Asynchronous activity instance is executed on its own thread. Is there a way to configure/control the number of threads that get launched when executing the parallelForeach activity in the above described mechanism?
"since each Asynchronous activity instance is getting executed on its own thread" - Who says? Certainly not the docs.
ParallelForEach enumerates its values and schedules the Body for every value it enumerates on. It only schedules the Body. How the body executes depends on whether the Body goes idle.
If the Body does not go idle, it executes in reverse order, because the scheduled activities are handled as a stack: the last scheduled activity executes first.
For example, if you have a collection of {1,2,3,4} in ParallelForEach and use a WriteLine as the body to write the value out, you get 4, 3, 2, 1 printed in the console. This is because WriteLine does not go idle, so after the 4 WriteLine activities got scheduled, they executed using stack behavior (first in, last out).
The Parallelism of execution occurs only when an Activity creates a bookmark and goes idle. Even then, two activities aren't actually executing at the same time--one or more have just stopped executing, allowing others to run in order. Understandably confusing, given the name, but that's it.
In any event, when you're relying on the framework to parallelize for you, don't worry about how many threads they're using. They probably have everything under control. Until you know they don't.
Will is correct, ParallelForEach does not require a new thread for each branch. If you are doing blocking I/O, that code should occur in an AsyncCodeActivity so that you aren't unnecessarily blocking. If you want CPU-bound work to run in parallel with other activities, you will either need to wrap it in an AsyncCodeActivity or use InvokeMethod { RunAsynchronously = true }, in which case the framework will take care of running the work on a background thread.
The SynchronizationContext extensibility point is intended for cases where you have a particular existing threading model that you need WF to integrate with. Prime examples of this include ASP.NET's threading environment, and Windows Presentation Foundation/WinForms (e.g. if you wanted an activity to work correctly).
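For completeness, the canonical AsyncCodeActivity pattern mentioned above looks roughly like this sketch (downloading a string is just an arbitrary example of blocking I/O):

using System;
using System.Activities;
using System.Net;

// Blocking I/O moved off the workflow thread via the Begin/End pattern.
public sealed class DownloadString : AsyncCodeActivity<string>
{
    public InArgument<string> Url { get; set; }

    protected override IAsyncResult BeginExecute(
        AsyncCodeActivityContext context, AsyncCallback callback, object state)
    {
        string url = Url.Get(context);
        Func<string> work = () => new WebClient().DownloadString(url);
        context.UserState = work;                  // stash for EndExecute
        return work.BeginInvoke(callback, state);  // runs on a pool thread
    }

    protected override string EndExecute(
        AsyncCodeActivityContext context, IAsyncResult result)
    {
        return ((Func<string>)context.UserState).EndInvoke(result);
    }
}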

asynchronous and non-blocking calls? also between blocking and synchronous

What is the difference between asynchronous and non-blocking calls? Also between blocking and synchronous calls (with examples please)?
In many circumstances they are different names for the same thing, but in some contexts they are quite different. So it depends. Terminology is not applied in a totally consistent way across the whole software industry.
For example in the classic sockets API, a non-blocking socket is one that simply returns immediately with a special "would block" error message, whereas a blocking socket would have blocked. You have to use a separate function such as select or poll to find out when is a good time to retry.
But asynchronous sockets (as supported by Windows sockets), or the asynchronous IO pattern used in .NET, are more convenient. You call a method to start an operation, and the framework calls you back when it's done. Even here, there are basic differences. Asynchronous Win32 sockets "marshal" their results onto a specific GUI thread by passing Window messages, whereas .NET asynchronous IO is free-threaded (you don't know what thread your callback will be called on).
So they don't always mean the same thing. To distil the socket example, we could say:
Blocking and synchronous mean the same thing: you call the API, it hangs up the thread until it has some kind of answer and returns it to you.
Non-blocking means that if an answer can't be returned rapidly, the API returns immediately with an error and does nothing else. So there must be some related way to query whether the API is ready to be called (that is, to simulate a wait in an efficient way, to avoid manual polling in a tight loop).
Asynchronous means that the API always returns immediately, having started a "background" effort to fulfil your request, so there must be some related way to obtain the result.
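To ground those three definitions in code, here is a hedged .NET sketch of the classic non-blocking socket case (socket is a connected System.Net.Sockets.Socket, buffer a byte[]):

socket.Blocking = false;  // put the socket in non-blocking mode
try
{
    int n = socket.Receive(buffer);  // returns immediately if data is ready
}
catch (SocketException e) when (e.SocketErrorCode == SocketError.WouldBlock)
{
    // Nothing to read yet: the call did not block, it just said "would block".
    // Use Socket.Poll or Socket.Select to learn when a retry is worthwhile.
}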
Synchronous/asynchronous describes the relation between two modules.
Blocking/non-blocking describes the situation of one module.
An example:
Module X: "I".
Module Y: "bookstore".
X asks Y: do you have a book named "c++ primer"?
blocking: before Y answers X, X keeps waiting there for the answer. Now X (one module) is blocking. Are X and Y two threads, two processes, or one thread or one process? We DON'T know.
non-blocking: before Y answers X, X just leaves and does other things. X may come back every two minutes to check if Y has finished its job? Or X won't come back until Y calls him? We don't know. We only know that X can do other things before Y finishes its job. Here X (one module) is non-blocking. Are X and Y two threads, two processes, or one process? We DON'T know. BUT we are sure that X and Y couldn't be one thread.
synchronous: before Y answers X, X keeps waiting there for the answer. It means that X can't continue until Y finishes its job. Now we say: X and Y (two modules) are synchronous. Are X and Y two threads, two processes, or one thread or one process? We DON'T know.
asynchronous: before Y answers X, X leaves and X can do other jobs. X won't come back until Y calls him. Now we say: X and Y (two modules) are asynchronous. Are X and Y two threads, two processes, or one process? We DON'T know. BUT we are sure that X and Y couldn't be one thread.
Please pay attention to the two bold sentences above. Why does the bold sentence in (2) contain two cases, whereas the bold sentence in (4) contains only one case? This is key to the difference between non-blocking and asynchronous.
Let me try to explain the four words in another way:
blocking: OMG, I'm frozen! I can't move! I have to wait for that specific event to happen. If that happens, I would be saved!
non-blocking: I was told that I had to wait for that specific event to happen. OK, I understand and I promise that I would wait for that. But while waiting, I can still do some other things, I'm not frozen, I'm still alive, I can jump, I can walk, I can sing a song etc.
synchronous: My mom is gonna cook, she sends me to buy some meat. I just said to my mom: We are synchronous! I'm so sorry but you have to wait even if I might need 100 years to get some meat back...
asynchronous: We will make a pizza; we need tomato and cheese. Now I say: Let's go shopping. I'll buy some tomatoes and you will buy some cheese. We needn't wait for each other because we are asynchronous.
Here is a typical example about non-blocking & synchronous:
// thread X
while (true)
{
    msg = recv(Y, NON_BLOCKING_FLAG);
    if (msg is not empty)
    {
        break;
    }
    else
    {
        sleep(2000); // 2 sec
    }
}
// thread Y
// prepare the book for X
send(X, book);
You can see that this design is non-blocking (you could say that most of the time this loop does something nonsensical, but in the CPU's eyes, X is running, which means that X is non-blocking; if you want, you can replace sleep(2000) with any other code), whereas X and Y (two modules) are synchronous, because X can't continue to do any other things (X can't jump out of the loop) until it gets the book from Y.
Normally in this case, making X blocking is much better, because non-blocking wastes resources on a useless loop. But this example is good for helping you understand the fact: non-blocking doesn't mean asynchronous.
The four words easily confuse us; what we should remember is that the four words serve the design of architecture. Learning how to design a good architecture is the only way to distinguish them.
For example, we may design such a kind of architecture:
// Module X = Module X1 + Module X2
// Module X1
while (true)
{
    msg = recv(many_other_modules, NON_BLOCKING_FLAG);
    if (msg is not null)
    {
        if (msg == "done")
        {
            break;
        }
        // create a thread to process msg
    }
    else
    {
        sleep(2000); // 2 sec
    }
}

// Module X2
broadcast("I got the book from Y");

// Module Y
// prepare the book for X
send(X, book);
In the example here, we can say that
X1 is non-blocking
X1 and X2 are synchronous
X and Y are asynchronous
If you need, you can also describe those threads created in X1 with the four words.
One more time: the four words serve the design of architecture. So what we need is to make a proper architecture, instead of distinguishing the four words like a language lawyer. If you get some cases where you can't distinguish the four words very clearly, you should forget about the four words and use your own words to describe your architecture.
So the more important things are: when do we use synchronous instead of asynchronous? When do we use blocking instead of non-blocking? Is making X1 blocking better than non-blocking? Is making X and Y synchronous better than asynchronous? Why is Nginx non-blocking? Why is Apache blocking? These questions are what you must figure out.
To make a good choice, you must analyze your needs and test the performance of different architectures. There is no single architecture that is suitable for all needs.
Asynchronous refers to something done in parallel, say, in another thread.
Non-blocking often refers to polling, i.e. checking whether given condition holds (socket is readable, device has more data, etc.)
Synchronous is defined as happening at the same time (in predictable timing, or in predictable ordering).
Asynchronous is defined as not happening at the same time. (with unpredictable timing or with unpredictable ordering).
This is what causes the first confusion, which is that asynchronous is some sort of synchronization scheme, and yes, it is used to mean that, but in actuality it describes processes that are happening unpredictably with regard to when or in what order they run. And such events often need to be synchronized in order to make them behave correctly, where multiple synchronization schemes exist to do so: one of them is called blocking, another is called non-blocking, and yet another is confusingly called asynchronous.
So you see, the whole problem is about finding a way to synchronize an asynchronous behavior, because you've got some operation that needs the response of another before it can begin. Thus it's a coordination problem, how will you know that you can now start that operation?
The simplest solution is known as blocking.
Blocking is when you simply choose to wait for the other thing to be done and return you a response before moving on to the operation that needed it.
So if you need to put butter on toast, you first need to toast the bread. The way you'd coordinate them is that you'd first toast the bread, then stare endlessly at the toaster until it pops the toast, and then you'd proceed to put butter on it.
It's the simplest solution, and works very well. There's no real reason not to use it, unless you happen to also have other things you need to be doing which don't require coordination with the operations. For example, doing some dishes. Why wait idle staring at the toaster constantly for the toast to pop, when you know it'll take a bit of time, and you could wash a whole dish while it finishes?
That's where two other solutions known respectively as non-blocking and asynchronous come into play.
Non-blocking is when you choose to do other unrelated things while you wait for the operation to be done. Checking back on the availability of the response as you see fit.
So instead of looking at the toaster for it to pop. You go and wash a whole dish. And then you peek at the toaster to see if the toasts have popped. If they haven't, you go wash another dish, checking back at the toaster between each dish. When you see the toasts have popped, you stop washing the dishes, and instead you take the toast and move on to putting butter on them.
Having to constantly check on the toasts can be annoying though, imagine the toaster is in another room. In between dishes you waste your time going to that other room to check on the toast.
Here comes asynchronous.
Asynchronous is when you choose to do other unrelated things while you wait for the operation to be done. Instead of checking on it yourself, though, you delegate the work of checking to something else, which could be the operation itself or a watcher, and you have that thing notify and possibly interrupt you when the response is available so you can proceed to the other operation that needed it.
It's weird terminology. It doesn't make a whole lot of sense, since all these solutions are ways to create synchronous coordination of dependent tasks. That's why I prefer to call it evented.
So for this one, you decide to upgrade your toaster so it beeps when the toasts are done. You happen to be constantly listening, even while you are doing dishes. On hearing the beep, you queue up in your memory that as soon as you are done washing your current dish, you'll stop and go put the butter on the toast. Or you could choose to interrupt the washing of the current dish, and deal with the toast right away.
If you have trouble hearing the beep, you can have your partner watch the toaster for you, and come tell you when the toast is ready. Your partner can itself choose any of the above three strategies to coordinate its task of watching the toaster and telling you when they are ready.
On a final note, it's good to understand that while non-blocking and async (or what I prefer to call evented) do allow you to do other things while you wait, you don't have to. You can choose to constantly loop on checking the status of a non-blocking call, doing nothing else. That's often worse than blocking though (like looking at the toaster, then away, then back at it until it's done), so a lot of non-blocking APIs allow you to transition into a blocking mode from them. For evented, you can just wait idle until you are notified. The downside in that case is that adding the notification was complex and potentially costly to begin with. You had to buy a new toaster with beep functionality, or convince your partner to watch it for you.
And one more thing: you need to realize the trade-offs all three provide. One is not obviously better than the others. Think of my example. If your toaster is so fast that you won't have time to wash a dish, or even begin washing one, getting started on something else is just a waste of time and effort; blocking will do. Similarly, if washing a dish takes 10 times longer than the toasting, you have to ask yourself what's more important to get done. The toast might get cold and hard by that time, not worth it; blocking will also do. Or you should pick faster things to do while you wait. There's more, obviously, but my answer is already pretty long; my point is you need to think about all that, and the complexities of implementing each, to decide if it's worth it, and if it'll actually improve your throughput or performance.
Edit:
Even though this is already long, I also want it to be complete, so I'll add two more points.
There also commonly exists a fourth model known as multiplexed. This is when while you wait for one task, you start another, and while you wait for both, you start one more, and so on, until you've got many tasks all started and then, you wait idle, but on all of them. So as soon as any is done, you can proceed with handling its response, and then go back to waiting for the others. It's known as multiplexed, because while you wait, you need to check each task one after the other to see if they are done, ad vitam, until one is. It's a bit of an extension on top of normal non-blocking.
In our example it would be like starting the toaster, then the dishwasher, then the microwave, etc. And then waiting on any of them. Where you'd check the toaster to see if it's done, if not, you'd check the dishwasher, if not, the microwave, and around again.
Even though I believe it to be a big mistake, synchronous is often used to mean one thing at a time. And asynchronous many things at a time. Thus you'll see synchronous blocking and non-blocking used to refer to blocking and non-blocking. And asynchronous blocking and non-blocking used to refer to multiplexed and evented.
I don't really understand how we got there. But when it comes to IO and Computation, synchronous and asynchronous often refer to what is better known as non-overlapped and overlapped. That is, asynchronous means that IO and Computation are overlapped, aka, happening concurrently. While synchronous means they are not, thus happening sequentially. For synchronous non-blocking, that would mean you don't start other IO or Computation, you just busy wait and simulate a blocking call. I wish people stopped misusing synchronous and asynchronous like that. So I'm not encouraging it.
Edit2:
I think a lot of people got a bit confused by my definition of synchronous and asynchronous. Let me try and be a bit more clear.
Synchronous is defined as happening with predictable timing and/or ordering. That means you know when something will start and end.
Asynchronous is defined as not happening with predictable timing and/or ordering. That means you don't know when something will start and end.
Both of those can be happening in parallel or concurrently, or they can be happening sequentially. But in the synchronous case, you know exactly when things will happen, while in the asynchronous case you're not sure exactly when things will happen, but you can still put some coordination in place that at least guarantees some things will happen only after others have happened (by synchronizing some parts of it).
Thus when you have asynchronous processes, asynchronous programming lets you place some order guarantees so that some things happen in the right sequence, even though you don't know when things will start and end.
Here's an example: we need to do A then B, and C can happen at any time. In a sequential but asynchronous model you can have:
A -> B -> C
or
A -> C -> B
or
C -> A -> B
Every time you run the program, you could get a different one of those, seemingly at random. Now this is still sequential, nothing is parallel or concurrent, but you don't know when things will start and end, except you have made it so B always happens after A.
If you add concurrency only (no parallelism), you can also get things like:
A<start> -> C<start> -> A<end> -> C<end> -> B<start> -> B<end>
or
C<start> -> A<start> -> C<end> -> A<end> -> B<start> -> B<end>
or
A<start> -> A<end> -> B<start> -> C<start> -> B<end> -> C<end>
etc...
Once again, you don't really know when things will start and end, but you have made it so B is coordinated to always start after A ends. That's not necessarily immediately after A ends; it's at some unknown time after A ends, and C could happen in between, fully or partially.
And if you add parallelism, now you have things like:
A<start> -> A<end> -> B<start> -> B<end> ->
C<start> -> C<keeps going> -> C<keeps going> -> C<end>
or
A<start> -> A<end> -> B<start> -> B<end>
C<start> -> C<keeps going> -> C<end>
etc...
Now if we look at the synchronous case, in a sequential setting you would have:
A -> B -> C
And this is the order always: each time you run the program, you get A, then B, and then C. Even though C, conceptually from the requirements, can happen at any time, in a synchronous model you still define exactly when it will start and end. Of course, you could specify it like:
C -> A -> B
instead, but since it is synchronous, this order will be the ordering every time the program is run, unless you change the code again to change the order explicitly.
Now if you add concurrency to a synchronous model you can get:
C<start> -> A<start> -> C<end> -> A<end> -> B<start> -> B<end>
And once again, this would be the order no matter how many times you ran the program. And similarly, you could explicitly change it in your code, but it would be consistent across program executions.
Finally, if you add parallelism as well to a synchronous model you get:
A<start> -> A<end> -> B<start> -> B<end>
C<start> -> C<end>
Once again, this would be the case on every program run. An important aspect here is that to make it fully synchronous this way, B must start after both A and C end. If C is an operation that can complete faster or slower, say depending on the CPU power of the machine or other performance considerations, to make it synchronous you still need to make B wait for it to end; otherwise you get asynchronous behavior again, where not all timings are deterministic.
You'll get this kind of synchronous thing a lot in coordinating CPU operations with the CPU clock, and you have to make sure that you can complete each operation in time for the next clock cycle, otherwise you need to delay everything by one more clock to give room for this one to finish, if you don't, you mess up your synchronous behavior, and if things depended on that order they'd break.
Finally, lots of systems have synchronous and asynchronous behavior mixed in, so if you have any kind of inherently unpredictable events, like when a user will click a button, or when a remote API will return a response, but you need things to have guaranteed ordering, you will basically need a way to synchronize the asynchronous behavior so it guarantees order and timing as needed. Some strategies to synchronize those are what I talked about previously: you have blocking, non-blocking, async, multiplexed, etc. See the emphasis on "async"; this is what I mean by the word being confusing. Somebody decided to call a strategy to synchronize asynchronous processes "async". This then wrongly made people think that asynchronous meant concurrent and synchronous meant sequential, or that somehow blocking was the opposite of asynchronous, whereas, as I just explained, synchronous and asynchronous in reality are a different concept that relates to the timing of things as being in sync (in time with each other, either on some shared clock or in a predictable order) or out of sync (not on some shared clock or in an unpredictable order). Whereas asynchronous programming is a strategy to synchronize two events that are themselves asynchronous (happening at an unpredictable time and/or order), and for which we need to add some guarantees of when they might happen, or at least in what order.
So we're left with two things using the word "asynchronous" in them:
Asynchronous processes: processes that we don't know at what time they will start and end, and thus in what order they would end up running.
Asynchronous programming: a style of programming that lets you synchronize two asynchronous processes using callbacks or watchers that interrupt the executor in order to let them know something is done, so that you can add predictable ordering between the processes.
A nonblocking call returns immediately with whatever data are available: the full number of bytes requested, fewer, or none at all.
An asynchronous call requests a transfer that will be performed in its entirety but will complete at some future time.
Putting this question in the context of NIO and NIO.2 in Java 7, async IO is one step more advanced than non-blocking.
With Java NIO non-blocking calls, one would set all channels (SocketChannel, ServerSocketChannel, FileChannel, etc.) as such by calling AbstractSelectableChannel.configureBlocking(false).
After those IO calls return, however, you will likely still need to control the checks, such as if and when to read/write again, etc.
For instance,
while (!isDataEnough()) {
    socketchannel.read(inputBuffer);
    // do something else and then read again
}
With the asynchronous API in Java 7, these controls can be made in more versatile ways.
One of the 2 ways is to use CompletionHandler. Notice that both read calls are non-blocking.
asyncsocket.read(inputBuffer, 60, TimeUnit.SECONDS /* 60 secs for timeout */,
    new CompletionHandler<Integer, Object>() {
        public void completed(Integer result, Object attachment) {...}
        public void failed(Throwable e, Object attachment) {...}
    });
As you can probably see from the multitude of different (and often mutually exclusive) answers, it depends on who you ask. In some arenas, the terms are synonymous. Or they might each refer to two similar concepts:
One interpretation is that the call will do something in the background essentially unsupervised in order to allow the program to not be held up by a lengthy process that it does not need to control. Playing audio might be an example - a program could call a function to play (say) an mp3, and from that point on could continue on to other things while leaving it to the OS to manage the process of rendering the audio on the sound hardware.
The alternative interpretation is that the call will do something that the program will need to monitor, but will allow most of the process to occur in the background only notifying the program at critical points in the process. For example, asynchronous file IO might be an example - the program supplies a buffer to the operating system to write to file, and the OS only notifies the program when the operation is complete or an error occurs.
In either case, the intention is to allow the program to not be blocked waiting for a slow process to complete - how the program is expected to respond is the only real difference. Which term refers to which also changes from programmer to programmer, language to language, or platform to platform. Or the terms may refer to completely different concepts (such as the use of synchronous/asynchronous in relation to thread programming).
Sorry, but I don't believe there is a single right answer that is globally true.
Blocking call: Control returns only when the call completes.
Non-blocking call: Control returns immediately. Later, the OS somehow notifies the process that the call is complete.
Synchronous program: A program which uses Blocking calls. In order not to freeze during the call it must have 2 or more threads (that's why it's called Synchronous - threads are running synchronously).
Asynchronous program: A program which uses Non blocking calls. It can have only 1 thread and still remain interactive.
Non-blocking: This function won't wait while on the stack.
Asynchronous: Work may continue on behalf of the function call after that call has left the stack
Synchronous means one starts after the other's result, in a sequence.
Asynchronous means they start together, and no sequence is guaranteed for the results.
Blocking means something that causes an obstruction to perform the next step.
Non-blocking means something that keeps running without waiting for anything, overcoming the obstruction.
Blocking e.g.: I knock on the door and wait till they open it. (I am idle here)
Non-blocking e.g.: I knock on the door; if they open it instantly, I greet them, go inside, etc. If they do not open instantly, I go to the next house and knock on it. (I am doing something or the other, not idle)
Synchronous e.g.: I will go out only if it rains. (dependency exists)
Asynchronous e.g.: I will go out. It can rain. (independent events, doesn't matter when they occur)
Synchronous or Asynchronous, both can be blocking or non-blocking and vice versa
The blocking models require the initiating application to block when the I/O has started. This means that it isn't possible to overlap processing and I/O at the same time. The synchronous non-blocking model allows overlap of processing and I/O, but it requires that the application check the status of the I/O on a recurring basis. This leaves asynchronous non-blocking I/O, which permits overlap of processing and I/O, including notification of I/O completion.
To put it simply,
function sum(a,b){
    return a+b;
}
is non-blocking, while asynchronous is used to execute a blocking task and then return its response.
             synchronous                              asynchronous
block        Blocking I/O must be synchronous I/O,    does not exist
             because it has to execute in order
             (though synchronous I/O need not be
             blocking I/O)
non-block    polling / multiplexing                   parallel execution,
                                                      such as a signal trigger
block/non-block describes the behavior of the initiating entity itself: what the entity does while waiting for I/O completion.
synchronous/asynchronous describes the relationship between the I/O-initiating entity and the I/O executor (the operating system, for example): whether the two entities can execute in parallel.
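To make the two non-block rows concrete, here is a self-contained sketch; startWork and its 50 ms timer are a made-up stand-in for an OS-level I/O executor:

    // Hypothetical executor: the "I/O" finishes after ~50 ms and either
    // sets a flag (for polling) or invokes a completion callback.
    function startWork(onDone) {
      const state = { done: false, result: null };
      setTimeout(() => {
        state.done = true;
        state.result = 42;                  // made-up result value
        if (onDone) onDone(state.result);
      }, 50);
      return state;
    }

    // Non-block + synchronous: initiate, then keep checking (polling).
    const handle = startWork();
    const poll = setInterval(() => {
      if (handle.done) {
        console.log('polled result:', handle.result);
        clearInterval(poll);
      }
      // ...the caller could overlap other processing between checks
    }, 10);

    // Non-block + asynchronous: initiate and be notified on completion.
    startWork(result => console.log('notified result:', result));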
They differ in spelling only. There is no difference in what they refer to. To be technical, you could say they differ in emphasis: non-blocking refers to control flow (it doesn't block), while asynchronous refers to when the event/data is handled (not synchronously).
Blocking: control returns to the invoking process only after processing of the primitive (sync or async) completes.
Non-blocking: control returns to the process immediately after invocation.

Why are Asynchronous processes not called Synchronous?

So I'm a little confused by this terminology.
Everyone refers to "Asynchronous" computing as running different processes on seperate threads, which gives the illusion that these processes are running at the same time.
This is not the definition of the word asynchronous.
a⋅syn⋅chro⋅nous
–adjective
1. not occurring at the same time.
2. (of a computer or other electrical machine) having each operation started only after the preceding operation is completed.
What am I not understanding here?
It means that the two threads are not running in sync, that is, they are not both running on the same timeline.
I think it's a case of computer scientists being too clever about their use of words.
Synchronisation, in this context, would suggest that both threads start and end at the same time. Asynchrony in this sense, means both threads are free to start, execute and end as they require.
The word "synchronous" implies that a function call will be synchronized with some other event.
Asynchronous implies that no such synchronization occurs.
It seems like the definition that you have there should really be the definition for "concurrent," or something. That definition looks wrong.
PS:
Here is the wiktionary definition:
asynchronous
Not synchronous; occurring at different times.
(computing, of a request or a message) allowing the client to continue during processing.
Which just so happens to be the exact opposite of what you posted.
I believe that the term was first used for synchronous vs. asynchronous communication. There synchronous means that the two communicating parts have a common clock signal that they run by, so they run in parallel. Asynchronous communication instead has a ready signal, so one part asks for data and gets a signal back when it's available.
The term was then adapted to processes, but as there are obvious differences, some aspects of the term work differently. For a single-threaded process, the natural way to request that something be done is to make a synchronous call that transfers control to the subprocess; control is returned when it's done, and the process continues.
An asynchronous call works just like asynchronous communication in the sense that you send a request for something to be done, and the process doing it returns a signal when it's done. The difference in usage is that for processes it is the asynchronous processing in which the processes run in parallel, while for communication it is the synchronous communication that runs in parallel.
So "computer or electrical machine" is really too wide a scope for a correct definition of the term, as it is used in slightly different ways for different techniques.
I would guess it's because they are not synchronized ;)
In other words... if one process gets stopped, killed, or is waiting for something, the other will carry on
I think there's a slant that is slightly different to most of the answers here.
Asynchronous means "not happening at the same time".
In the specific case of threading:
Synchronous means "execute this code now".
Asynchronous means "enqueue this work on a different thread that will be executed at some indeterminate time in the future"
This usually allows you to "do two things at once" because of reasons like:
one thread is just waiting (e.g. for data to arrive on a serial port) so is asleep
You have multiple processors, so the two threads can run concurrently.
However, even with 128 processor cores, the case is the same: the work will be executed "at some time in the future" (if perhaps the very near future) rather than "now".
Your second definition is more helpful here:
2. [...] having each operation started only after the preceding operation is completed.
When you make an asynchronous call, that call might not be completed before the next operation is started. When the call is synchronous, it will be.
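A tiny JavaScript illustration of that last point:

    console.log('1: synchronous - execute this code now');

    setTimeout(() => {
      // Enqueued to run at some indeterminate (if very near) future time.
      console.log('3: asynchronous - runs after the current code finishes');
    }, 0);

    console.log('2: starts before the asynchronous work has completed');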
It really means that an asynchronous event happens independently of other events, whereas a synchronous event depends on other events.
It's like: flammable and inflammable (which mean the same thing).
Seriously -- it's just one of those quirks of the English language. It doesn't really make sense. You can try to explain it, but it would be just as easy to justify the reverse meanings.
Many of the answers here are not correct. IN-dependent has a prefix that says NOT dependent, just like A-synchronous, but the meanings of "dependent" and "synchronous" are not the same! :D
So three dependent persons would wait for an order, because they depend on the order; but while they wait, they are not synchronous.
In English, and in any other language sharing the common roots a, syn, and chrono (Italian: asincrono; Spanish: asincrónico; French: asynchrone; Greek: a = not, syn = together, chronos = time), it means exactly the opposite.
The terminology is UTTERLY counter-intuitive. Async functions ARE synchronous: they happen at the same time, and that's their power. They DO NOT wait, they DO NOT depend, they DO NOT hold the user waiting, but all those NOTs refer to anything but synchronicity :)
The only answer possibly right is the CLOCK one, although it is still confusing. My personal interpretation is this story:
"A professor has an office, and he makes SYNCHRONOUS CALLS for students to come. He announces in the main university hall: 'Anyone who wants to talk to me should come at 10 tomorrow morning', or simply puts up a sign saying the same thing.
RESULT: at 10 in the morning you see a long queue. Everyone had the same time, so they arrived at the same moment and got "piled up in the process".
So the professor thinks it would be nice for students not to waste time in the queue (and to do synchronous operations, that is, do parallel stuff in their lives at the same time - and that's where the confusion comes from).
He decides students can substitute for him in making ASYNCHRONOUS CALLS: every time a student finishes talking with him, that student may, e.g., call in another student to say the professor is free, from a room where students may do whatever they like in the meantime. So the students do not share a single SYNCHRONOUS CALL (10 in the morning, the same time for all) but get 10:00, 10:10, 10:18, 10:27, etc., according to the time needed for each discussion in the professor's office."
Is that the meaning of having the same clock, @Guffa?
