Autosys Job Queue

I'm trying to set up[1] an AutoSys job configuration that has a "funnel" job-queue behavior, or, as I call it, a 'waterdrops' pattern: each job executes in sequence after a given time interval, and a local job failure does not cascade into a sequence failure.
[1] Ask for it to be set up, actually, as I do not control the AutoSys machine.
Constraints
I have an arbitrary number N of jobs (all executing on success of job A).
For this discussion, let's say three (B1, B2, B3).
Real production numbers might go upward of 100 jobs.
These jobs won't all be created at the same time, so adding a new job should be as painless as possible.
None of them should execute simultaneously.
This is not actually a direct problem for our machine.
But it has a side effect on a remote client machine: the jobs include file transfers, which are trigger-listened to on the client machine, and the client doesn't handle concurrent transfers well.
Adapting the client machine's behavior is, unfortunately, not possible.
Failure of one job should be meaningless to the other jobs.
There should be a regular delay between each job.
This is a soft requirement in that, our jobs being batch scripts, we can always append or prepend a sleep command.
I'd rather have a more elegant solution, however, especially if the delay is centralised: a single parameter that could be set to greater values, should the need arise.
State of my research
Legend
A(s) : Success status of job
A(d) : Done status of job
Solution 1 : Unfailing sequence
This is the current "we should pick this solution" solution.
A(s) --(delay D)--> B1(d) --(delay D)--> B2(d) --(delay D)--> B3 ...
Pros :
Less bookkeeping than solution 2
Cons :
Bookkeeping of the (current) tailing job
The sequence doesn't survive a job being put ON HOLD (ON ICE is fine).
Solution 2 : Stairway parallelism
A(s) ==(delay D)==> B1
A(s) ==(delay D x2)==> B2
A(s) ==(delay D x3)==> B3
...
Pros :
Jobs can be put ON HOLD without consequence.
Cons :
Bookkeeping to know "who runs when" (and what delay to give the next job)
N jobs triggered at the same time
An underlying race condition is created
++ Risk of job executions overlapping, especially if small delays accumulate
Solution 3 : The Miracle Box ?
I have read a bit about job boxes, but the specific details elude me.
           -----------------
A(s) ====> |  B1, B2, B3   |
           -----------------
Can we limit the number of concurrent executions of jobs in a box (i.e. a box-local max_load, if I understand that parameter correctly)?
Pros :
Adding jobs would be painless
Little to no bookkeeping (the box name, to add new jobs - and it's constant)
Jobs can be put ON HOLD without consequence (unless I'm mistaken)
Cons :
I'm half-convinced it can't be done (but that's why I'm asking you :) )
... any other problem I have failed to foresee
My questions to SO
Is Solution 3 a possibility, and if so, what are the specific commands and parameters for implementing it?
Am I correct in favoring Solution 1 over Solution 2 otherwise[2]?
An alternative solution fitting in the constraints is of course more than welcome!
Thanks in advance,
Best regards
PS: By the way, is all of this one giant race-condition manager for the remote machine's failure-prone behavior?
Yes, it is.
[2] I'm aware this skirts toward the "subjective" side of the question-rejection rules, but I'm asking it with regard to the solutions' correctness against my (arguably) objective constraints.

I would suggest you do the following:
Put all the jobs (B1, B2, B3) in a box job B.
Create another job (say M1) which runs on success of A. This job calls a shell/perl script (say forcejobs.sh).
The shell script gets a list of all the jobs in B and starts a loop with a sleep interval of the delay period. Inside the loop it force-starts the jobs one by one, one per delay period.
So the outline of the script would be:
get all the jobs in B
for each job:
    force-start the job
    sleep for the delay interval
At the end of the loop, when all jobs have been force-started, you can use a loop to keep checking the status of the jobs. Once all jobs are SU/FA (or whatever terminal state you care about), you can end the script, send the result to yourself/stdout, and finish job M1.
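For illustration, here is a rough Python sketch of that script. sendevent and autorep are the real AutoSys utilities, but the hard-coded job list and the status-column parsing are assumptions to adapt to your site:

#!/usr/bin/env python
# Rough sketch of the forcejobs idea; job list and report parsing are
# assumptions, not a definitive implementation.
import subprocess
import time

JOBS = ["B1", "B2", "B3"]   # in practice, derive this from the contents of box B
DELAY = 60                  # the centralised delay, in seconds

# "Waterdrop" phase: force-start each job, one per delay period.
for job in JOBS:
    subprocess.run(["sendevent", "-E", "FORCE_STARTJOB", "-J", job], check=True)
    time.sleep(DELAY)

def job_status(job):
    # Assumption: in the autorep report, the line starting with the job
    # name carries the status (SU, FA, RU, ...) in its fourth field.
    report = subprocess.run(["autorep", "-J", job],
                            capture_output=True, text=True).stdout
    for line in report.splitlines():
        fields = line.split()
        if fields and fields[0] == job and len(fields) > 3:
            return fields[3]
    return "??"

# Monitoring phase: wait until every job reaches a terminal state.
while True:
    states = {job: job_status(job) for job in JOBS}
    if all(s in ("SU", "FA") for s in states.values()):
        print("all jobs finished:", states)
        break
    time.sleep(30)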

Related

Kill job if execution time less than 2 min

I have a chain with several jobs, and sometimes a certain job that normally takes about 2 hours finishes in less than 2 minutes.
What I would like to do is to kill this job if it ends in less than 2 minutes so that the chain won't proceed.
Is this possible?
Thanks
Well, you don't really want to kill anything, do you? If you do, see BMC's note (including video) on using ctmkilljob.
In this case your next job is dependent on two things: the predecessor job being OK, and the duration of the predecessor job. Add another input condition to your next job (in addition to the existing condition) to represent the >2 minutes duration.
On the job that needs to run for more than 2 minutes, add a Notify when the Exectime exceeds 2 mins (or 60 mins or whatever you decide the threshold is) and get it to shout to an entry in your shout destination table.
On the Control-M Server, create a new program entry in your shout destination table and create a small script referenced by the shout. The script should use the ctmcontb utility to create a new ODAT condition that your next job is waiting on. If you have a look at the BMC help note for ctmkilljob (and just substitute in ctmcontb) you'll see how to do this.
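As a sketch, the shout-referenced script could boil down to a single ctmcontb call. The condition name and ODAT format below are illustrative; check the ctmcontb syntax for your Control-M version:

#!/usr/bin/env python
# Hypothetical script invoked by the shout destination entry.
import subprocess
import sys
from datetime import date

# Illustrative condition name; the downstream job waits on this condition.
cond = sys.argv[1] if len(sys.argv) > 1 else "JOB_X_RAN_OVER_2MIN"
odat = date.today().strftime("%m%d")   # assumption: MMDD order-date format

# Add the ODAT condition via the ctmcontb utility.
subprocess.run(["ctmcontb", "-ADD", cond, odat], check=True)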

Intel MPI distributed memory: building a wall out of M*N blocks using q<M processors

Imagine I have M independent jobs, each with N steps. The jobs are independent of each other, but the steps of each job must be serial. In other words, J(i,j) may be started only after J(i,j-1) is finished (i is the job index and j the step). This is isomorphic to building a wall M blocks wide and N blocks high.
Each block of work should be executed exactly once. The time it takes to do one block of work on one CPU (all of the same order of magnitude) differs between blocks and is not known in advance.
The simple way of doing this with MPI is to assign blocks of work to processors and wait until all of them finish their blocks before making the next assignment. This way we can ensure that priorities are enforced, but there will be a lot of waiting time.
Is there a more efficient way of doing this? I mean: when a processor finishes its block, could it - using some kind of environment variable or shared memory - decide which block to do next, without waiting for the other processors to finish their blocks and make a collective decision through communication?
You have M jobs with N steps each. You also have a set of worker processes of size W, somewhere between 2 and M.
If W is close to M, the best you can do is simply assign them 1:1. If one worker finishes early that's fine.
If W is much smaller than M, and N is also fairly large, here is an idea:
Estimate some average or typical time for one step to complete. Call this T. You can adjust this estimate as you go in case you have a very poor estimator at the outset.
Divide your M jobs evenly in number among the workers, and start them. Tell the workers to run as many steps of their assigned jobs as possible before a timeout, say T*N/K (for a tuning constant K; see below). Overrunning the timeout slightly to finish the current job is allowed, to ensure forward progress.
Have the workers communicate to each other which steps they completed.
Repeat, dividing the jobs evenly again taking into account how complete each one is (e.g. two 50% complete jobs count the same as one 0% complete job).
The idea is to give all the workers enough time to complete roughly 1/K of the total work each time. If no job takes much more than K*T, this will be quite efficient.
It's up to you to find a reasonable K. Maybe try 10.
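A toy, single-process simulation of that schedule might look like this (no real MPI here; T, K, and the per-step costs are invented):

import random

# Toy simulation of the re-dividing schedule above (no real MPI involved).
M, N, W, K = 12, 8, 3, 10     # jobs, steps per job, workers, tuning knob K
T = 1.0                       # estimated (typical) time per step
remaining = [N] * M           # steps left in each job

while any(remaining):
    # Re-divide the unfinished jobs evenly among the workers each round
    # (a simplification of "take into account how complete each one is").
    todo = [j for j in range(M) if remaining[j]]
    shares = [todo[w::W] for w in range(W)]
    budget = T * N / K        # per-round time budget for each worker
    for jobs in shares:
        spent = 0.0
        for j in jobs:
            # Steps of one job stay serial: only this worker holds job j.
            while remaining[j] and spent < budget:
                spent += random.uniform(0.5, 1.5) * T  # true cost unknown
                remaining[j] -= 1
    # (real workers would now exchange progress, e.g. with an allgather)

print("wall complete")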
Here's an idea, IDK if it's good:
Maintain one shared variable: n = the progress of the farthest-behind task, i.e. the lowest step number that any of the M tasks has completed. It starts out at 0, because all tasks start at the first step, and it stays at 0 until every task has completed at least 1 step.
When a processor finishes a step of a job, check the step number it's currently on against n. If n < current_job_step - 4, switch tasks, because the one we're working on is too far ahead of the farthest-behind one.
I picked 4 to give a balance between too much switching vs. having too much serial work in only a couple tasks. Adjust as necessary, and maybe make it adaptive as you near the end.
Switching tasks without having two threads both grab the same work unit is non-trivial unless you have a scheduler thread that makes all the decisions. If this is on a single shared-memory machine, you could use locking to protect a priority queue.
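On a single shared-memory machine, the lock-protected queue version of this idea can be sketched with Python threads standing in for processors (step costs are simulated; always handing out the farthest-behind job makes the explicit "4 steps ahead" check unnecessary):

import heapq
import random
import threading
import time

# W threads stand in for processors; a lock-protected heap always hands
# out the farthest-behind job, so no job gets too far ahead.
M, N, WORKERS = 8, 5, 3
heap = [(0, j) for j in range(M)]   # (steps completed, job id)
heapq.heapify(heap)
lock = threading.Lock()

def worker():
    while True:
        with lock:                           # the scheduling decision is atomic
            if not heap:
                return
            done, job = heapq.heappop(heap)  # least-advanced job available
        time.sleep(random.uniform(0.01, 0.05))   # J(job, done): unknown cost
        done += 1
        if done < N:
            with lock:
                heapq.heappush(heap, (done, job))

threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("wall complete: all", M * N, "blocks laid")

Steps of one job stay serial because a job sits in the heap only while no worker holds it.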

F# task parallelism under Mono doesn't "appear" to execute in parallel

I have the following dummy code to test out TPL in F#. (Mono 4.5, Xamarin studio, quad core MacBook Pro)
To my surprise, all the processes are done on the same thread. There is no parallelism at all.
open System
open System.Threading
open System.Threading.Tasks

let doWork (num:int) (taskId:int) : unit =
    for i in 1 .. num do
        Thread.Sleep(10)
        for j in 1 .. 1000 do
            ()
        Console.WriteLine(String.Format("Task {0} loop: {1}, thread id {2}", taskId, i, Thread.CurrentThread.ManagedThreadId))

[<EntryPoint>]
let main argv =
    let t2 = Task.Factory.StartNew(fun () -> doWork 10 2)
    //printfn "launched t2"
    Console.WriteLine("launched t2")
    let t1 = Task.Factory.StartNew(fun () -> doWork 8 1)
    Console.WriteLine("launched t1")
    let t3 = Task.Factory.StartNew(fun () -> doWork 10 3)
    Console.WriteLine("launched t3")
    let t4 = Task.Factory.StartNew(fun () -> doWork 5 4)
    Console.WriteLine("launched t4")
    Task.WaitAll(t1, t2, t3, t4)
    0 // return an integer exit code
However, if I increase the thread sleep time from 10 to 100ms, I can see a little parallelism.
What have I done wrong? What does this mean? I did consider the possibility that the CPU finishes the work before the TPL can start the task on a new thread, but that doesn't make sense to me. I can make the inner dummy loop for j in 1 .. 1000 do () run 1000 times more iterations; the result is the same: no parallelism (with Thread.Sleep set to 10 ms).
The same code in C#, on the other hand, produces the desired result: all tasks print their messages in a mixed order (rather than sequentially).
Update:
As suggested I changed the inner loop to do some 'actual' thing but the result is still execution on the single thread
Update 2:
I don't quite understand Luaan's comments, but I just did a test on a friend's PC, and with the same code, parallelism works (without the thread sleep). It looks like something to do with Mono. But can Luaan explain what I should expect from the TPL again? If I have tasks that I want to perform in parallel, taking advantage of a multicore CPU, isn't the TPL the way to go?
Update 3:
I have tried out #FyodorSoikin's suggestion again with dummy code that won't be optimized away. Unfortunately, the workload is still not able to make Mono's TPL use multiple threads. Currently the only way I can get Mono's TPL to allocate multiple threads is to force a sleep on the existing thread for more than 20 ms. I am not qualified enough to assert that Mono is wrong, but I can confirm that the same code (same benchmark workload) behaves differently under Mono and Windows.
It looks like the Sleeps are ignored completely - see how the Task 2 loop is printed even before the next task is launched; that's just silly - if the thread waited for 10 ms, there's no way for that to happen.
I'd assume that the cause might be the timer resolution in the OS. Sleep is far from accurate - it might very well be that Mono (or Mac OS) decides that since it can't reliably make you run again in 10 ms, the best choice is to simply let you run right now. This is not how it works on Windows - there you're guaranteed to lose control as long as you don't Sleep(0); you'll always sleep at least as long as you asked for. It seems that on Mono / Mac OS the idea is the reverse - the OS tries to let you sleep at most the amount of time you specified. If you want to sleep for less time than the timer precision allows, too bad - no sleep.
But even if they are not ignored, there's still not a lot of pressure on the thread pool to give you more threads. You're only blocking for less than 100 ms, for four tasks in a row - that's not nearly enough for the pool to start creating new threads to handle the requests (on MS.NET, new threads are only spun up after not having any free threads for 200 ms, IIRC). You're simply not doing enough work for it to be worth spinning up new threads!
The point you might be missing is that Task.Factory.StartNew is not actually starting any new threads, ever. Instead, it schedules the associated task on the default task scheduler - which just puts it in the thread pool queue, as a task to execute "at earliest convenience", basically. If there's one free thread in the pool, the first task starts running there almost immediately. The second will run when there's another thread free, etc. Only if the thread usage is "bad" (i.e. the threads are "blocked" - they're not doing any CPU work, but they're not free either) will the thread pool spawn new threads.
If you look at the IL output from this program, you'll see that the inner loop is optimized away, because it doesn't have any side effects, and its return value is completely ignored.
To make it count, put something non-optimizable there, and also make it heavier: 1000 empty cycles are hardly noticeable compared to the cost of spinning up a new task.
For example:
open System.Diagnostics // needed for Debug.WriteLine

let doWork (num:int) (taskId:int) : unit =
    for i in 1 .. num do
        Thread.Sleep(10)
        for j in 1 .. 1000 do
            Debug.WriteLine("x")
        Console.WriteLine(String.Format("Task {0} loop: {1}, thread id {2}", taskId, i, Thread.CurrentThread.ManagedThreadId))
Update:
Adding a pure function, such as your fact, is no good. The compiler is perfectly able to see that fact has no side effects and that you duly ignore its return value, and therefore it is perfectly within its rights to optimize it away. You need to do something that the compiler doesn't know how to optimize away, such as the Debug.WriteLine above.

Setting a new label on all nodes takes too long in a huge graph

I'm working on a graph containing about 50 million nodes and 40 million relationships.
I need to update every node.
I'm trying to set a new label to these nodes, but it's taking too long.
The label applies to all 50 million nodes, so the operation never ends.
After some research, I found out that Neo4j treats this operation as a single transaction (I don't know whether optimistic or not), keeping the changes uncommitted until the end (which will never happen in this fashion).
I'm currently using Neo4j 2.1.4, which has a feature called "USING PERIODIC COMMIT" (already present in earlier versions). Unfortunately, this feature is coupled to the "LOAD CSV" feature and is not available for every Cypher command.
The cypher is quite simple:
match n set n:Person;
I decided to use a workaround and do some sort of block update, as follows:
match n
where not n:Person
with n
limit 500000
set n:Person;
It's ugly, but I couldn't come up with a better solution yet.
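One way to drive those blocks automatically - a sketch against the Neo4j 2.x transactional HTTP endpoint; the URL and the absence of auth reflect a default 2.1 install:

import json
import urllib.request

# Commit one LIMIT-sized batch per request until no unlabeled nodes remain.
URL = "http://localhost:7474/db/data/transaction/commit"
CYPHER = ("match n where not n:Person "
          "with n limit 500000 set n:Person return count(n)")

while True:
    payload = json.dumps({"statements": [{"statement": CYPHER}]}).encode()
    request = urllib.request.Request(
        URL, payload, {"Content-Type": "application/json"})
    reply = json.load(urllib.request.urlopen(request))
    labelled = reply["results"][0]["data"][0]["row"][0]
    print("labelled", labelled, "nodes in this batch")
    if labelled == 0:
        break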
Here are some of my configuration settings:
== neo4j.properties =========
neostore.nodestore.db.mapped_memory=250M
neostore.relationshipstore.db.mapped_memory=500M
neostore.propertystore.db.mapped_memory=900M
neostore.propertystore.db.strings.mapped_memory=1300M
neostore.propertystore.db.arrays.mapped_memory=1300M
keep_logical_logs=false
node_auto_indexing=true
node_keys_indexable=name_autocomplete,document
relationship_auto_indexing=true
relationship_keys_indexable=role
execution_guard_enabled=true
cache_type=weak
=============================
== neo4j-server.properties ==
org.neo4j.server.webserver.limit.executiontime=20000
org.neo4j.server.webserver.maxthreads=200
=============================
The hardware spec is:
RAM: 24GB
PROC: Intel(R) Xeon(R) X5650 @ 2.67GHz, 32 cores
HDD1: 1.2TB
In this environment, each block update of 500,000 nodes took from 200 to 400 seconds. I think this is because every node satisfies the query at the start, but as the updates take place, more nodes need to be scanned to find the unlabeled ones (but again, that's a hunch).
So what's the best course of action whenever an operation needs to touch every node in the graph?
Any help towards a better solution to this will be appreciated!
Thanks in advance.
The most performant way to achieve this is to use the batch inserter API. You might use the following recipe:
take a look at http://localhost:7474/webadmin and note the "node count". In fact it's not the number of nodes; it's rather the highest node id in use - we'll need that later on.
make sure to cleanly shut down your graph database.
take a backup copy of your graph.db directory.
write a short program in Java/Groovy/(whatever JVM language you prefer...) that performs the following tasks:
open your graph.db folder using the batch inserter API
in a loop from 0..<node count> (from the step above), check whether the node with the given id exists; if so, grab its current labels, amend the list with the new label, and use setNodeLabels to write it back
make sure you shut down the batch inserter cleanly
start up your Neo4j instance again

Asynchronous vs synchronous execution. What is the difference? [closed]

What is the difference between asynchronous and synchronous execution?
When you execute something synchronously, you wait for it to finish before moving on to another task. When you execute something asynchronously, you can move on to another task before it finishes.
In the context of operating systems, this corresponds to executing a process or task on a "thread." A thread is a series of commands (a block of code) that exist as a unit of work. The operating system runs a given thread on a processor core. However, a processor core can only execute a single thread at once. It has no concept of running multiple threads simultaneously. The operating system can provide the illusion of running multiple threads at once by running each thread for a small slice of time (such as 1ms), and continuously switching between threads.
Now, if you introduce multiple processor cores into the mix, then threads CAN execute at the same time. The operating system can allocate time to one thread on the first processor core, then allocate the same block of time to another thread on a different processor core. All of this is about allowing the operating system to manage the completion of your task while you can go on in your code and do other things.
Asynchronous programming is a complicated topic because of the semantics of how things tie together when you can do them at the same time. There are numerous articles and books on the subject; have a look!
Synchronous/Asynchronous HAS NOTHING TO DO WITH MULTI-THREADING.
Synchronous or Synchronized means "connected", or "dependent" in some way. In other words, two synchronous tasks must be aware of one another, and one task must execute in some way that is dependent on the other, such as wait to start until the other task has completed.
Asynchronous means they are totally independent and neither one must consider the other in any way, either in the initiation or in execution.
Synchronous (one thread):
1 thread -> |<---A---->||<----B---------->||<------C----->|
Synchronous (multi-threaded):
thread A -> |<---A---->|
                       \
thread B ------------>  ->|<----B---------->|
                                            \
thread C ---------------------------------->  ->|<------C----->|
Asynchronous (one thread):
A-Start ------------------------------------------ A-End
       | B-Start -----------------------------------------|--- B-End
       |       |    C-Start ------------------- C-End     |      |
       |       |       |                           |      |      |
       V       V       V                           V      V      V
1 thread->|<-A-|<--B---|<-C-|-A-|-C-|--A--|-B-|--C-->|---A---->|--B-->|
Asynchronous (multi-Threaded):
thread A -> |<---A---->|
thread B -----> |<----B---------->|
thread C ---------> |<------C--------->|
Start and end points of tasks A, B, C represented by <, > characters.
CPU time slices represented by vertical bars |
Technically, the concept of synchronous/asynchronous really does not have anything to do with threads. Although, in general, it is unusual to find asynchronous tasks running on the same thread, it is possible (see below for examples), and it is common to find two or more tasks executing synchronously on separate threads. No, the concept of synchronous/asynchronous has to do solely with whether or not a second or subsequent task can be initiated before the other (first) task has completed, or whether it must wait. That is all. What thread (or threads), processes, CPUs, or indeed what hardware the task[s] are executed on is not relevant. Indeed, to make this point I have edited the graphics to show this.
ASYNCHRONOUS EXAMPLE:
In solving many engineering problems, the software is designed to split up the overall problem into multiple individual tasks and then execute them asynchronously. Inverting a matrix, or a finite element analysis problem, are good examples. In computing, sorting a list is an example. The quicksort routine, for example, splits the list into two lists and performs a quicksort on each of them, calling itself (quicksort) recursively. In both of the above examples, the two tasks can (and often were) executed asynchronously. They do not need to be on separate threads. Even a machine with one CPU and only one thread of execution can be coded to initiate processing of a second task before the first one has completed. The only criterion is that the results of one task are not necessary as inputs to the other task. As long as the start and end times of the tasks overlap, (possible only if the output of neither is needed as inputs to the other), they are being executed asynchronously, no matter how many threads are in use.
SYNCHRONOUS EXAMPLE:
Any process consisting of multiple tasks where the tasks must be executed in sequence, but one must be executed on another machine (Fetch and/or update data, get a stock quote from financial service, etc.). If it's on a separate machine it is on a separate thread, whether synchronous or asynchronous.
In simpler terms:
SYNCHRONOUS
You are in a queue to get a movie ticket. You cannot get one until everybody in front of you gets one, and the same applies to the people queued behind you.
ASYNCHRONOUS
You are in a restaurant with many other people. You order your food. Other people can also order their food, they don't have to wait for your food to be cooked and served to you before they can order.
In the kitchen restaurant workers are continuously cooking, serving, and taking orders.
People will get their food served as soon as it is cooked.
Simple Explanation via analogy
(story & pics given to help you remember).
Synchronous Execution
My boss is a busy man. He tells me to write code. I tell him: Fine. I get started and he's watching me like a vulture, standing behind me, off my shoulder. I'm like "Dude, WTF: why don't you go and do something while I finish this?"
he's like: "No, I'm waiting right here until you finish." This is synchronous.
Asynchronous Execution
The boss tells me to do it, and rather than waiting right there for my work, the boss goes off and does other tasks. When I finish my job I simply report to my boss and say: "I'm DONE!" This is Asynchronous Execution.
(Take my advice: NEVER work with the boss behind you.)
Synchronous execution means the execution happens in a single series. A->B->C->D. If you are calling those routines, A will run, then finish, then B will start, then finish, then C will start, etc.
With Asynchronous execution, you begin a routine, and let it run in the background while you start your next, then at some point, say "wait for this to finish". It's more like:
Start A->B->C->D->Wait for A to finish
The advantage is that you can execute B, C, and or D while A is still running (in the background, on a separate thread), so you can take better advantage of your resources and have fewer "hangs" or "waits".
In a nutshell, synchronization refers to two or more processes' start and end points, NOT their executions. In this example, Process A's endpoint is synchronized with Process B's start point:
SYNCHRONOUS
|--------A--------|
                  |--------B--------|
Asynchronous processes, on the other hand, do not have their start and endpoints synchronized:
ASYNCHRONOUS
|--------A--------|
          |--------B--------|
Where Process A overlaps Process B, they're running concurrently or synchronously (dictionary definition), hence the confusion.
UPDATE: Charles Bretana improved his answer, so this answer is now just a simple (potentially oversimplified) mnemonic.
Synchronous means that the caller waits for the response or completion, asynchronous that the caller continues and a response comes later (if applicable).
As an example:
static void Main(string[] args)
{
    Console.WriteLine("Before call");
    doSomething();
    Console.WriteLine("After call");
}

private static void doSomething()
{
    Console.WriteLine("In call");
}
This will always output:
Before call
In call
After call
But if we were to make doSomething() asynchronous (multiple ways to do it), then the output could become:
Before call
After call
In call
Because the method making the asynchronous call would immediately continue to the next line of code. I say "could" because the order of execution can't be guaranteed with async operations. It could also execute in the original order, depending on thread timings, etc.
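The same reordering can be sketched in Python with asyncio - one of the many ways to make the call asynchronous (names are illustrative):

import asyncio

async def do_something():
    print("In call")

async def main():
    print("Before call")
    task = asyncio.create_task(do_something())  # schedule it, don't wait yet
    print("After call")
    await task                                  # now wait for it to finish

asyncio.run(main())

Here "In call" prints last because the scheduled task only gets to run once main suspends at the await.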
Sync vs Async
Sync and async operations are about the execution order of the next task in relation to the current task.
Let's take a look at an example where Task 2 is the current task and Task 3 is the next task. A task here is an atomic operation - a method call in a stack (method frame).
Synchronous
Implies that tasks will be executed one by one. The next task is started only after the current task is finished. Task 3 is not started until Task 2 is finished.
Single Thread + Sync - Sequential
Usual execution.
Pseudocode:
main() {
    task1()
    task2()
    task3()
}
Multi Thread + Sync - Parallel
Blocked.
Blocked means that a thread is just waiting (although it could do something useful; e.g. Java ExecutorService[About] and Future[About]). Pseudocode:
main() {
    task1()
    Future future = ExecutorService.submit(task2())
    future.get() //<- blocking operation
    task3()
}
Asynchronous
Implies that a task returns control immediately, with a promise to execute the code and notify about the result later (e.g. via a callback or future). Task 3 is executed even if Task 2 is not finished. async callback, completion handler[About]
Single Thread + Async - Concurrent
A Callback Queue (Message Queue) and an Event Loop (Run Loop, Looper) are used. The event loop checks whether the thread stack is empty; if so, it pushes the first item from the callback queue onto the thread stack, and repeats these steps. Simple examples are a button click, a post event...
Pseudocode:
main() {
    task1()
    ThreadMain.handler.post(task2());
    task3()
}
Multi Thread + Async - Concurrent and Parallel
Non-blocking.
For example, when you need to do some calculations on another thread without blocking. Pseudocode:
main() {
    task1()
    new Thread(task2()).start();
    //or
    Future future = ExecutorService.submit(task2())
    task3()
}
You can use the result of Task 2 either with the blocking get() method or with an async callback, through a loop.
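In Python terms, the blocking variant might be sketched like this (Future.result() playing the role of get()):

from concurrent.futures import ThreadPoolExecutor

def task1(): print("task1")
def task2(): print("task2")
def task3(): print("task3")

# Multi Thread + Sync: hand task2 to another thread, then block on its
# result before moving on to task3.
with ThreadPoolExecutor() as pool:
    task1()
    future = pool.submit(task2)
    future.result()   # <- blocking operation, the analogue of get()
    task3()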
For example, in the mobile world, where we have a UI/main thread and we need to download something, we have several options:
sync block - block the UI thread and wait until the download is done. The UI is not responsive.
async callback - create a new thread with an async callback to update the UI (it is not possible to access the UI from a non-UI thread). Callback hell.
async coroutine[About] - an async task with sync syntax. It allows mixing the downloading task (a suspend function) with UI tasks.
[iOS sync/async], [Android sync/async]
[Parallel vs Concurrent]
I think this is a bit of a roundabout explanation, but it still clarifies things using a real-life example.
Small Example:
Let's say playing an audio file involves three steps:
Getting the compressed song from the hard disk.
Decompressing the audio.
Playing the uncompressed audio.
If your audio player does steps 1, 2, 3 sequentially for every song, then it is synchronous. You will have to wait for some time to hear the song, until it has actually been fetched and decompressed.
If your audio player does steps 1, 2, 3 independently of each other, then it is asynchronous, i.e.
while playing audio 1 (step 3), it fetches audio 3 from the hard disk in parallel (step 1) and decompresses audio 2 in parallel (step 2).
You will end up hearing the songs without waiting much for fetching and decompressing.
I created a gif to explain this; I hope it's helpful:
Look: line 3 is asynchronous and the others are synchronous.
All lines before line 3 wait until the previous line finishes its work, but because line 3 is asynchronous, the next line (line 4) doesn't wait for line 3. Line 5, however, waits for line 4 to finish its work, line 6 waits for line 5, and 7 for 6, because lines 4, 5, 6, and 7 are not asynchronous.
Simply said, asynchronous execution is doing stuff in the background.
For example, if you want to download a file from the internet, you might use a synchronous function to do that, but it will block your thread until the file has finished downloading. This can make your application unresponsive to any user input.
Instead, you could download the file in the background using an asynchronous method. In this case the download function returns immediately and program execution continues normally. All the download operations are done in the background, and your program is notified when they're finished.
As a really simple example,
SYNCHRONOUS
Imagine 3 school students instructed to run a relay race on a road.
The 1st student runs her given distance, stops, and passes the baton to the 2nd. No one else has started to run.
1------>
2.
3.
When the 2nd student receives the baton, she starts to run her given distance.
1.
2------>
3.
The 2nd student's shoelace has come untied. She has stopped and is tying it up again. Because of this, the 2nd's end time has been extended and the 3rd's start time has been delayed.
1.
--2.--->
3.
This pattern continues until the 3rd receives the baton from the 2nd and finishes the race.
ASYNCHRONOUS
Just imagine 10 random people walking on the same road.
They're not in a queue, of course; they're just walking at different places on the road, at different paces.
The 2nd person's shoelace has come untied. She has stopped to tie it up again.
But nobody is waiting for her to get it tied. Everyone else is still walking the same way they did before, at that same pace of theirs.
10--> 9-->
8--> 7--> 6-->
5--> 4-->
1--> 2. 3-->
Synchronous basically means that you can only execute one thing at a time. Asynchronous means that you can execute multiple things at a time, and you don't have to finish executing the current thing in order to move on to the next one.
When executing a sequence like a>b>c>d>, if we get a failure in the middle of execution, like:
a
b
c
fail
then we restart from the beginning:
a
b
c
d
This is synchronous.
If, however, we have the same sequence to execute, a>b>c>d>, and we have a failure in the middle:
a
b
c
fail
...but instead of restarting from the beginning, we restart from the point of failure:
c
d
...this is known as asynchronous.
An example of instructions for making a breakfast:
Pour a cup of coffee.
Heat a pan, then fry two eggs.
Fry three slices of bacon.
Toast two pieces of bread.
Add butter and jam to the toast.
Pour a glass of orange juice.
If you have experience with cooking, you'd execute those instructions asynchronously. You'd start warming the pan for eggs, then start the bacon. You'd put the bread in the toaster, then start the eggs. At each step of the process, you'd start a task, then turn your attention to tasks that are ready for your attention.
Cooking breakfast is a good example of asynchronous work that isn't parallel. One person (or thread) can handle all these tasks. Continuing the breakfast analogy, one person can make breakfast asynchronously by starting the next task before the first task completes. The cooking progresses whether or not someone is watching it. As soon as you start warming the pan for the eggs, you can begin frying the bacon. Once the bacon starts, you can put the bread into the toaster.
For a parallel algorithm, you'd need multiple cooks (or threads). One would make the eggs, one the bacon, and so on. Each one would be focused on just that one task. Each cook (or thread) would be blocked synchronously waiting for the bacon to be ready to flip, or the toast to pop.
(emphasis mine)
From Asynchronous programming concepts
A synchronous operation does its work before returning to the caller.
An asynchronous operation does (most or all of) its work after returning to the caller.
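Returning to the breakfast: a minimal single-cook sketch in Python's asyncio (the dishes and timings are made up) could look like:

import asyncio

async def fry_eggs():
    await asyncio.sleep(3)    # pan time; the cook is free meanwhile
    return "eggs"

async def fry_bacon():
    await asyncio.sleep(5)
    return "bacon"

async def toast_bread():
    await asyncio.sleep(2)
    return "toast"

async def main():
    # One cook (one thread) starts everything, then tends whatever is ready.
    dishes = await asyncio.gather(fry_eggs(), fry_bacon(), toast_bread())
    print("breakfast:", dishes)

asyncio.run(main())

One thread runs all three coroutines; the awaits are the moments the cook turns their attention elsewhere.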
You are confusing synchronous with parallel vs series. Synchronous means all at the same time. Synchronized means related to each other, which can mean in series or at a fixed interval. While the program is doing all the tasks, it is running in series. Get a dictionary... this is why we have unsweetened tea. You have tea or sweetened tea.
A different English definition of synchronize is here:
Coordinate; combine.
I think that is a better definition than "happening at the same time". That one is also a definition, but I don't think it is the one that fits the way the word is used in computer science.
So an asynchronous task is not coordinated with other tasks, whereas a synchronous task IS coordinated with other tasks, so one finishes before another starts.
How that is achieved is a different question.
I think a good way to think of it is a classic relay race:
Synchronous: Processes are like members of the same team; they won't execute until they receive the baton (the end of the execution of the previous process/runner), and yet they are all acting in sync with each other.
Asynchronous: Processes are like members of different teams on the same relay race track; they will run and stop, async with each other, but within the same race (the overall program execution).
Does it make sense?
Synchronous means queue-style execution: tasks will be executed one by one. Suppose there is only one vehicle that needs to be shared among friends to reach their destinations; it will be shared one by one.
In the asynchronous case, each friend can rent a vehicle and reach their own destination.
In regards to the "at the same time" definition of synchronous execution (which is sometimes confusing), here's a good way to understand it:
Synchronous Execution: All tasks within a block of code are all executed at the same time.
Asynchronous Execution: All tasks within a block of code are not all executed at the same time.
Yes, synchronous literally means at the same time: doing work all together. Multiple humans/objects in the world can do multiple things at the same time, but if we look at computers, synchronous means the processes work together - that is, the processes depend on one another's return values, and that's why they get executed one after another in a proper sequence. Asynchronous means the processes don't work together; they may work at the same time (if on multiple threads), but they work independently.
