How does rendezvous work? - ada

I am studying for an exam and am having a difficult time understanding Rendezvous.
Here's an example I'm looking at:
While(1) {
  select {
    when a == TRUE :
      accept A() {f1; b=FALSE}
    when b == TRUE :
      accept B() {f2; a=FALSE}
    else {a=true; b=true}
  }
}
The following calls arrive in the given order:
A(), B(), B(), A(), A(), B()
In which order will the calls be accepted? And can caller of A or B starve?
I'd really appreciate any help. Thanks in advance.

That's not Ada. At all.
For some guidance on tasking with actual Ada read Chapter 14 of Ada Distilled.
And to be honest, if you didn't recognize your example as not Ada, you probably ought to start with Chapter 1.

Going with the logic of your problem rather than the syntax, I think the answer is 'it all depends'.
The task that's running this loop (call it Server) is in a busy loop (most times round the loop, it'll end up setting A := True; B := True;). This could use up all your CPU and starve the other tasks out.
Assuming that doesn't happen and you have 2 client tasks A_Caller and B_Caller which have higher priority than Server and call their entries relatively infrequently, then you might get
A_Caller and B_Caller both run and call their respective entries.
Server enters the select and finds both guards open and the entries called; it chooses to accept A (it could have chosen B). The guard B is closed.
Next time round the loop, guard A is open but there is no caller; guard B is closed; the else part is chosen, so guard B is opened.
Next time round the loop, Server accepts B; guard A is closed.
Next time round the loop, guard B is open but there is no caller; the else part is chosen, so guard A is opened.
Next time round the loop, both guards are open but there is no caller, so the else part is chosen again and away we spin.
Obviously the exact sequence followed will depend on when the entry calls arrive with respect to Server's loop. Assuming the entry calls in your question happen a second apart, the entries will be accepted in the order called.
I don't think you can get starvation unless one of the callers is running sufficiently frequently that it's called its entry again before Server has got back round its loop. It would be hard to get this to happen under a general-purpose OS.
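If it helps to follow the guard logic, here is a small Python simulation of the loop above (hypothetical code, not Ada and not real tasking), with one call arriving per trip round the loop:

from collections import deque

a = b = True                                      # guard variables
pending = deque()                                 # entry calls waiting to be accepted
arrivals = deque(["A", "B", "B", "A", "A", "B"])  # calls arrive one per iteration
accepted = []

while arrivals or pending:
    if arrivals:
        pending.append(arrivals.popleft())        # one new call arrives this time round

    if a and "A" in pending:                      # guard open and a caller waiting: accept A
        pending.remove("A")
        accepted.append("A")                      # body runs f1
        b = False
    elif b and "B" in pending:                    # guard open and a caller waiting: accept B
        pending.remove("B")
        accepted.append("B")                      # body runs f2
        a = False
    else:                                         # else part: reopen both guards
        a = b = True

print(accepted)                                   # ['A', 'B', 'B', 'A', 'A', 'B'] for this arrival pattern

For this arrival pattern the calls are accepted in the order they arrive, which matches the reasoning above.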

This does not look like Ada syntax. Perhaps you could repost with Ada code?
Meanwhile I refer you to: http://en.wikibooks.org/wiki/Ada_Programming/Tasking#Rendezvous


Awesome WM - os.execute vs awful.spawn

I was wondering if there is any difference between Lua's
os.execute('<command>')
and awesome's
awful.spawn('<command>')
I noticed that lain suggests using os.execute; is there any reason for that, or is it a matter of personal taste?
Do not ever use os.execute or io.popen. They are blocking functions and cause very bad performance and a noticeable increase in input (mouse/keyboard) and client repaint latency. Using these functions is an anti-pattern and will only make everything worse.
See the warning in the documentation https://awesomewm.org/apidoc/libraries/awful.spawn.html
Now to really answer the question, you have to understand that Awesome is single-threaded. That means it only does a single thing at a time. If you call os.execute("sleep 10"), your mouse will keep moving but your computer will otherwise be frozen for 10 seconds. For commands that execute fast enough, it may be possible to miss this, but keep in mind that, due to the latency/frequency rules, any command that takes 33 milliseconds to execute will drop up to 2 frames in a 60fps video game (and will at least drop one). If you have many commands per second, they add up and wreck your system performance.
But Awesome isn't doomed to be slow. It may not have multiple threads, but it has coroutines and it has callbacks. This means things that take time to execute can still do so in the background (using C threads or an external process + sockets). When they are done, they can notify the Awesome thread. This works perfectly and avoids blocking Awesome.
Now this brings us to the next part. If I do:
-- Do **NOT** do this.
os.execute("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all*")
The label will display foo. But If I do:
-- Assumes /tmp/foo.txt does not exist
awful.spawn.with_shell("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all*")
Then the label will be empty. awful.spawn and awful.spawn.with_shell will not block, and thus the io.popen will execute way before sleep 1 finishes. This is why we have async functions to execute shell commands. There are many variants with different characteristics and complexity. awful.spawn.easy_async is the most common, as it is good enough for the general "I want to execute a command and do something with the output when it finishes" case.
awful.spawn.easy_async_with_shell("sleep 1; echo foo > /tmp/foo.txt", function()
    awful.spawn.easy_async_with_shell("cat /tmp/foo.txt", function(out)
        mylabel.text = out
    end)
end)
In this variant, Awesome will not block. Again, like the other spawn functions, you cannot add code outside of the callback function to use the result of the command. That code would be executed before the command is finished, so the result won't yet be available.
There are also coroutines, which remove the need for callbacks, but they are currently hard to use within Awesome and would also be confusing to explain.

How to replace WAIT, to avoid lock errors and to free memory

I have a question today about the usage of WAIT.
I work with an internal source code quality team, in charge of reviewing our code and approving it. Unfortunately, the usage of the WAIT UP TO x SECONDS instruction is now forbidden and not negotiable.
Issue
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK'.
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK_2'.
If I execute this pseudo-code, I will have an error (locked object) because I use shared objects (I use function modules without sync/async mode).
I can resolve this issue by the usage of WAIT.
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK'.
WAIT UP TO 5 SECONDS. " or 1, 2, 3, 4, 5, ... seconds <------------
CALL FUNCTION 'ANY_FUNCTION_WITH_LOCK_2'.
With this method (and standard functions), the system will stop and wait for a specific time.
But sometimes the system needs 1 second... or more. We can't know the exact time needed.
And if we execute this code inside a loop over a large number of objects, the system can wait for an extremely long time, until a memory dump occurs.
(Impacted functions are related to VL32N, QA11, ... and their objects)
Need
The need is: how to replace the WAIT instruction?
We need to find a solution/function that has the same behavior as WAIT UP TO, but that will not impact memory (or will impact it less): no dumps, no excessive consumption of resources, ...
In fact we need something like COMMIT WORK AND WAIT, but waiting on the result of a function rather than on the database.
Solutions?
Use a loop with a timestamp comparison, and use ENQUEUE_READ to get the list of locked objects and check whether the needed object is in this list, for up to X seconds.
It seems that this solution needs the same level of resources as WAIT.
ENQUE_SLEEP seems to have the same behavior as WAIT regarding memory (How to make an abap program pause?)
Refactor all the code already done, and use synchronous functions.
Anything else? Any ideas? Is it even possible?
Thanks in advance :)
Why not just put a check for the lock in between the two function modules? You could put that inside of a loop and exit the loop as soon as the lock is cleared from FM 1.
I use ENQUE_SLEEP when I want to wait for a specified amount of time and then recheck something. For example you could wait 5 seconds and then check for the existence of the locks. If the objects are no longer locked, then proceed. If the locks are still there, sleep again. To avoid an infinite loop, you must have some limit on the number of times you are willing to sleep before you give up and log some kind of an error.
The problem with WAIT is it triggers an implicit commit. ENQUE_SLEEP will not do that.
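Language aside, the bounded-retry idea from the answers above looks roughly like this (a hypothetical Python sketch; is_object_locked stands in for an ENQUEUE_READ-style check and is not a real API):

import time

MAX_ATTEMPTS = 10        # give up after this many checks instead of waiting forever
POLL_INTERVAL = 0.5      # seconds to sleep between checks

def is_object_locked(key):
    """Hypothetical stand-in for an ENQUEUE_READ-style lock check."""
    raise NotImplementedError

def wait_for_lock_release(key):
    for _ in range(MAX_ATTEMPTS):
        if not is_object_locked(key):
            return True              # lock released: safe to call the next function
        time.sleep(POLL_INTERVAL)    # sleep instead of busy-waiting
    return False                     # still locked: log an error rather than looping forever

The key point is the attempt limit: the caller gets a clear failure to log instead of an unbounded wait.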

What is the difference between asynchronous and non-blocking calls? Also between blocking and synchronous

What is the difference between asynchronous and non-blocking calls? Also between blocking and synchronous calls (with examples please)?
In many circumstances they are different names for the same thing, but in some contexts they are quite different. So it depends. Terminology is not applied in a totally consistent way across the whole software industry.
For example in the classic sockets API, a non-blocking socket is one that simply returns immediately with a special "would block" error message, whereas a blocking socket would have blocked. You have to use a separate function such as select or poll to find out when is a good time to retry.
But asynchronous sockets (as supported by Windows sockets), or the asynchronous IO pattern used in .NET, are more convenient. You call a method to start an operation, and the framework calls you back when it's done. Even here, there are basic differences. Asynchronous Win32 sockets "marshal" their results onto a specific GUI thread by passing Window messages, whereas .NET asynchronous IO is free-threaded (you don't know what thread your callback will be called on).
So they don't always mean the same thing. To distil the socket example, we could say:
Blocking and synchronous mean the same thing: you call the API, it hangs up the thread until it has some kind of answer and returns it to you.
Non-blocking means that if an answer can't be returned rapidly, the API returns immediately with an error and does nothing else. So there must be some related way to query whether the API is ready to be called (that is, to simulate a wait in an efficient way, to avoid manual polling in a tight loop).
Asynchronous means that the API always returns immediately, having started a "background" effort to fulfil your request, so there must be some related way to obtain the result.
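To make the non-blocking case concrete, here is a minimal Python sketch (hypothetical host, error handling omitted): the calls return immediately, and select is the "related way to query" when it is worth retrying.

import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(False)                      # non-blocking mode: calls never hang the thread

try:
    sock.connect(("example.com", 80))        # returns immediately...
except BlockingIOError:
    pass                                     # ...instead of blocking while the connect completes

# select() waits efficiently until the socket is actually ready, avoiding a busy retry loop.
_, writable, _ = select.select([], [sock], [], 5.0)
if writable:
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

readable, _, _ = select.select([sock], [], [], 5.0)
if readable:
    print(sock.recv(4096))                   # data was ready, so this read returns at once
sock.close()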
synchronous / asynchronous is to describe the relation between two modules.
blocking / non-blocking is to describe the situation of one module.
An example:
Module X: "I".
Module Y: "bookstore".
X asks Y: do you have a book named "c++ primer"?
blocking: before Y answers X, X keeps waiting there for the answer. Now X (one module) is blocking. X and Y are two threads or two processes or one thread or one process? we DON'T know.
non-blocking: before Y answers X, X just leaves and does other things. X may come back every two minutes to check whether Y has finished its job? Or X won't come back until Y calls him? We don't know. We only know that X can do other things before Y finishes its job. Here X (one module) is non-blocking. X and Y are two threads or two processes or one process? We DON'T know. BUT we are sure that X and Y couldn't be one thread.
synchronous: before Y answers X, X keeps waiting there for the answer. It means that X can't continue until Y finishes its job. Now we say: X and Y (two modules) are synchronous. X and Y are two threads or two processes or one thread or one process? we DON'T know.
asynchronous: before Y answers X, X leaves and can do other jobs. X won't come back until Y calls him. Now we say: X and Y (two modules) are asynchronous. X and Y are two threads or two processes or one process? We DON'T know. BUT we are sure that X and Y couldn't be one thread.
Please pay attention to how X finds out that Y is done in cases 2) and 4) above. Why does case 2) (non-blocking) contain two possibilities (X may come back to check, or X won't come back until Y calls him) whereas case 4) (asynchronous) contains only one (X won't come back until Y calls him)? This is a key difference between non-blocking and asynchronous.
Let me try to explain the four words another way:
blocking: OMG, I'm frozen! I can't move! I have to wait for that specific event to happen. If that happens, I would be saved!
non-blocking: I was told that I had to wait for that specific event to happen. OK, I understand and I promise that I would wait for that. But while waiting, I can still do some other things, I'm not frozen, I'm still alive, I can jump, I can walk, I can sing a song etc.
synchronous: My mom is gonna cook, she sends me to buy some meat. I just said to my mom: We are synchronous! I'm so sorry but you have to wait even if I might need 100 years to get some meat back...
asynchronous: We will make a pizza; we need tomatoes and cheese. Now I say: let's go shopping. I'll buy some tomatoes and you will buy some cheese. We needn't wait for each other because we are asynchronous.
Here is a typical example about non-blocking & synchronous:
// thread X
while (true)
{
    msg = recv(Y, NON_BLOCKING_FLAG);
    if (msg is not empty)
    {
        break;
    }
    else
    {
        sleep(2000); // 2 sec
    }
}

// thread Y
// prepare the book for X
send(X, book);
You can see that this design is non-blocking (you could say that most of the time this loop does nothing useful, but in the CPU's eyes X is running, which means that X is non-blocking; if you want, you can replace sleep(2000) with any other code), whereas X and Y (two modules) are synchronous, because X can't continue to do anything else (X can't jump out of the loop) until it gets the book from Y.
Normally in this case, making X blocking is much better, because non-blocking wastes resources on a pointless loop. But this example is good to help you understand the fact: non-blocking doesn't mean asynchronous.
The four words easily confuse us. What we should remember is that they serve the design of the architecture; learning how to design a good architecture is the only way to distinguish them.
For example, we may design such a kind of architecture:
// Module X = Module X1 + Module X2

// Module X1
while (true)
{
    msg = recv(many_other_modules, NON_BLOCKING_FLAG);
    if (msg is not null)
    {
        if (msg == "done")
        {
            break;
        }
        // create a thread to process msg
    }
    else
    {
        sleep(2000); // 2 sec
    }
}

// Module X2
broadcast("I got the book from Y");

// Module Y
// prepare the book for X
send(X, book);
In the example here, we can say that
X1 is non-blocking
X1 and X2 are synchronous
X and Y are asynchronous
If you need, you can also describe those threads created in X1 with the four words.
One more time: the four words serve the design of the architecture. So what we need is to make a proper architecture, instead of distinguishing the four words like a language lawyer. If you get some cases where you can't distinguish the four words very clearly, you should forget about the four words and use your own words to describe your architecture.
So the more important things are: when do we use synchronous instead of asynchronous? when do we use blocking instead of non-blocking? Is making X1 blocking better than non-blocking? Is making X and Y synchronous better than asynchronous? Why is Nginx non-blocking? Why is Apache blocking? These questions are what you must figure out.
To make a good choice, you must analyze your needs and test the performance of different architectures. There is no single architecture that is suitable for all kinds of needs.
Asynchronous refers to something done in parallel, say in another thread.
Non-blocking often refers to polling, i.e. checking whether given condition holds (socket is readable, device has more data, etc.)
Synchronous is defined as happening at the same time (in predictable timing, or in predictable ordering).
Asynchronous is defined as not happening at the same time. (with unpredictable timing or with unpredictable ordering).
This is what causes the first confusion, which is that asynchronous is some sort of synchronization scheme, and yes it is used to mean that, but in actuality it describes processes that are happening unpredictably with regards to when or in what order they run. And such events often need to be synchronized in order to make them behave correctly, where multiple synchronization schemes exists to do so, one of those called blocking, another called non-blocking, and yet another one confusingly called asynchronous.
So you see, the whole problem is about finding a way to synchronize an asynchronous behavior, because you've got some operation that needs the response of another before it can begin. Thus it's a coordination problem, how will you know that you can now start that operation?
The simplest solution is known as blocking.
Blocking is when you simply choose to wait for the other thing to be done and return you a response before moving on to the operation that needed it.
So say you need to put butter on toast, and thus you first need to toast the bread. The way you'd coordinate them is that you'd first toast the bread, then stare endlessly at the toaster until it pops the toast, and then you'd proceed to put butter on it.
It's the simplest solution, and works very well. There's no real reason not to use it, unless you happen to also have other things you need to be doing which don't require coordination with the operations. For example, doing some dishes. Why wait idle staring at the toaster constantly for the toast to pop, when you know it'll take a bit of time, and you could wash a whole dish while it finishes?
That's where two other solutions known respectively as non-blocking and asynchronous come into play.
Non-blocking is when you choose to do other unrelated things while you wait for the operation to be done. Checking back on the availability of the response as you see fit.
So instead of looking at the toaster for it to pop. You go and wash a whole dish. And then you peek at the toaster to see if the toasts have popped. If they haven't, you go wash another dish, checking back at the toaster between each dish. When you see the toasts have popped, you stop washing the dishes, and instead you take the toast and move on to putting butter on them.
Having to constantly check on the toasts can be annoying though, imagine the toaster is in another room. In between dishes you waste your time going to that other room to check on the toast.
Here comes asynchronous.
Asynchronous is when you choose to do other unrelated things while you wait for the operation to be done. Instead of checking on it though, you delegate the work of checking to something else, which could be the operation itself or a watcher, and you have that thing notify and possibly interrupt you when the response is available so you can proceed to the other operation that needed it.
It's a weird terminology. It doesn't make a whole lot of sense, since all these solutions are ways to create synchronous coordination of dependent tasks. That's why I prefer to call it evented.
So for this one, you decide to upgrade your toaster so it beeps when the toasts are done. You happen to be constantly listening, even while you are doing dishes. On hearing the beep, you queue up in your memory that as soon as you are done washing your current dish, you'll stop and go put the butter on the toast. Or you could choose to interrupt the washing of the current dish, and deal with the toast right away.
If you have trouble hearing the beep, you can have your partner watch the toaster for you, and come tell you when the toast is ready. Your partner can itself choose any of the above three strategies to coordinate its task of watching the toaster and telling you when they are ready.
On a final note, it's good to understand that while non-blocking and async (or what I prefer to call evented) do allow you to do other things while you wait, you don't have to. You can choose to constantly loop on checking the status of a non-blocking call, doing nothing else. That's often worse than blocking though (like looking at the toaster, then away, then back at it until it's done), so a lot of non-blocking APIs allow you to transition into a blocking mode from it. For evented, you can just wait idle until you are notified. The downside in that case is that adding the notification was complex and potentially costly to begin with. You had to buy a new toaster with beep functionality, or convince your partner to watch it for you.
And one more thing: you need to realize the trade-offs all three provide. One is not obviously better than the others. Think of my example. If your toaster is so fast that you won't have time to wash a dish, or even begin washing one, then getting started on something else is just a waste of time and effort; blocking will do. Similarly, if washing a dish takes 10 times longer than the toasting, you have to ask yourself what's more important to get done: the toast might get cold and hard by that time, so it's not worth it and blocking will also do, or you should pick faster things to do while you wait. There's more, obviously, but my answer is already pretty long; my point is you need to think about all that, and the complexities of implementing each, to decide whether it's worth it and whether it'll actually improve your throughput or performance.
Edit:
Even though this is already long, I also want it to be complete, so I'll add two more points.
There also commonly exists a fourth model known as multiplexed. This is when while you wait for one task, you start another, and while you wait for both, you start one more, and so on, until you've got many tasks all started and then, you wait idle, but on all of them. So as soon as any is done, you can proceed with handling its response, and then go back to waiting for the others. It's known as multiplexed, because while you wait, you need to check each task one after the other to see if they are done, ad vitam, until one is. It's a bit of an extension on top of normal non-blocking.
In our example it would be like starting the toaster, then the dishwasher, then the microwave, etc. And then waiting on any of them. Where you'd check the toaster to see if it's done, if not, you'd check the dishwasher, if not, the microwave, and around again.
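In code, the multiplexed model is more or less what select/poll-style APIs give you. Here is a hypothetical Python sketch using the selectors module (placeholder hosts, no error handling; each socket stands in for the toaster, dishwasher, and microwave):

import selectors
import socket

sel = selectors.DefaultSelector()

for host in ("example.com", "example.org", "example.net"):   # placeholder hosts
    s = socket.create_connection((host, 80))
    s.setblocking(False)
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    sel.register(s, selectors.EVENT_READ, data=host)          # one "task" per socket

pending = 3
while pending:
    for key, _ in sel.select():              # wait idle until *any* of them is ready
        response = key.fileobj.recv(4096)    # handle whichever finished first
        print(key.data, len(response), "bytes")
        sel.unregister(key.fileobj)
        key.fileobj.close()
        pending -= 1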
Even though I believe it to be a big mistake, synchronous is often used to mean one thing at a time. And asynchronous many things at a time. Thus you'll see synchronous blocking and non-blocking used to refer to blocking and non-blocking. And asynchronous blocking and non-blocking used to refer to multiplexed and evented.
I don't really understand how we got there. But when it comes to IO and Computation, synchronous and asynchronous often refer to what is better known as non-overlapped and overlapped. That is, asynchronous means that IO and Computation are overlapped, aka, happening concurrently. While synchronous means they are not, thus happening sequentially. For synchronous non-blocking, that would mean you don't start other IO or Computation, you just busy wait and simulate a blocking call. I wish people stopped misusing synchronous and asynchronous like that. So I'm not encouraging it.
Edit2:
I think a lot of people got a bit confused by my definition of synchronous and asynchronous. Let me try and be a bit more clear.
Synchronous is defined as happening with predictable timing and/or ordering. That means you know when something will start and end.
Asynchronous is defined as not happening with predictable timing and/or ordering. That means you don't know when something will start and end.
Both of those can be happening in parallel or concurrently, or they can be happening sequentially. But in the synchronous case, you know exactly when things will happen, while in the asynchronous case you're not sure exactly when things will happen, but you can still put some coordination in place that at least guarantees some things will happen only after others have happened (by synchronizing some parts of it).
Thus when you have asynchronous processes, asynchronous programming lets you place some order guarantees so that some things happen in the right sequence, even though you don't know when things will start and end.
Here's an example, if we need to do A then B and C can happen at any time. In a sequential but asynchronous model you can have:
A -> B -> C
or
A -> C -> B
or
C -> A -> B
Every time you run the program, you could get a different one of those, seemingly at random. Now this is still sequential, nothing is parallel or concurrent, but you don't know when things will start and end, except you have made it so B always happens after A.
If you add concurrency only (no parallelism), you can also get things like:
A<start> -> C<start> -> A<end> -> C<end> -> B<start> -> B<end>
or
C<start> -> A<start> -> C<end> -> A<end> -> B<start> -> B<end>
or
A<start> -> A<end> -> B<start> -> C<start> -> B<end> -> C<end>
etc...
Once again, you don't really know when things will start and end, but you have made it so B is coordinated to always start after A ends, but that's not necessarily immediately after A ends, it's at some unknown time after A ends, and B could happen in-between fully or partially.
And if you add parallelism, now you have things like:
A<start> -> A<end> -> B<start> -> B<end> ->
C<start> -> C<keeps going> -> C<keeps going> -> C<end>
or
A<start> -> A<end> -> B<start> -> B<end>
C<start> -> C<keeps going> -> C<end>
etc...
Now if we look at the synchronous case, in a sequential setting you would have:
A -> B -> C
And this is always the order: each time you run the program, you get A, then B, and then C. Even though C, conceptually from the requirements, can happen at any time, in a synchronous model you still define exactly when it will start and end. Of course, you could specify it like:
C -> A -> B
instead, but since it is synchronous, this order will be the ordering every time the program is run, unless you change the code again to change the order explicitly.
Now if you add concurrency to a synchronous model you can get:
C<start> -> A<start> -> C<end> -> A<end> -> B<start> -> B<end>
And once again, this would be the order no matter how many time you ran the program. And similarly, you could explicitly change it in your code, but it would be consistent across program execution.
Finally, if you add parallelism as well to a synchronous model you get:
A<start> -> A<end> -> B<start> -> B<end>
C<start> -> C<end>
Once again, this would be the case on every program run. An important aspect here is that to make it fully synchronous this way, it means B must start after both A and C ends. If C is an operation that can complete faster or slower say depending on the CPU power of the machine, or other performance consideration, to make it synchronous you still need to make it so B waits for it to end, otherwise you get an asynchronous behavior again, where not all timings are deterministic.
You'll get this kind of synchronous thing a lot in coordinating CPU operations with the CPU clock, and you have to make sure that you can complete each operation in time for the next clock cycle, otherwise you need to delay everything by one more clock to give room for this one to finish, if you don't, you mess up your synchronous behavior, and if things depended on that order they'd break.
Finally, lots of systems have synchronous and asynchronous behavior mixed in. So if you have any kind of inherently unpredictable events, like when a user will click a button or when a remote API will return a response, but you need things to have guaranteed ordering, you will basically need a way to synchronize the asynchronous behavior so it guarantees order and timing as needed. Some strategies to synchronize those are what I talked about previously: you have blocking, non-blocking, async, multiplexed, etc. Note the emphasis on "async"; this is what I mean by the word being confusing. Somebody decided to call a strategy to synchronize asynchronous processes "async". This then wrongly made people think that asynchronous meant concurrent and synchronous meant sequential, or that somehow blocking was the opposite of asynchronous, whereas, as I just explained, synchronous and asynchronous in reality are a different concept that relates to the timing of things as being in sync (in time with each other, either on some shared clock or in a predictable order) or out of sync (not on some shared clock, or in an unpredictable order). Asynchronous programming, on the other hand, is a strategy to synchronize two events that are themselves asynchronous (happening at an unpredictable time and/or order), and for which we need to add some guarantees of when they might happen, or at least in what order.
So we're left with two things using the word "asynchronous" in them:
Asynchronous processes: processes that we don't know at what time they will start and end, and thus in what order they would end up running.
Asynchronous programming: a style of programming that lets you synchronize two asynchronous processes using callbacks or watchers that interrupt the executor in order to let them know something is done, so that you can add predictable ordering between the processes.
A nonblocking call returns immediately with whatever data are available: the full number of bytes requested, fewer, or none at all.
An asynchronous call requests a transfer that will be performed in its entirety but will complete at some future time.
Putting this question in the context of NIO and NIO.2 in java 7, async IO is one step more advanced than non-blocking.
With java NIO non-blocking calls, one would set all channels (SocketChannel, ServerSocketChannel, FileChannel, etc) as such by calling AbstractSelectableChannel.configureBlocking(false).
After those IO calls return, however, you will likely still need to control the checks such as if and when to read/write again, etc.
For instance,
while (!isDataEnough()) {
    socketchannel.read(inputBuffer);
    // do something else and then read again
}
With the asynchronous api in java 7, these controls can be made in more versatile ways.
One of the 2 ways is to use CompletionHandler. Notice that both read calls are non-blocking.
asyncsocket.read(inputBuffer, 60, TimeUnit.SECONDS /* 60 secs for timeout */, null /* attachment */,
    new CompletionHandler<Integer, Object>() {
        public void completed(Integer result, Object attachment) {...}
        public void failed(Throwable e, Object attachment) {...}
    });
As you can probably see from the multitude of different (and often mutually exclusive) answers, it depends on who you ask. In some arenas, the terms are synonymous. Or they might each refer to two similar concepts:
One interpretation is that the call will do something in the background essentially unsupervised in order to allow the program to not be held up by a lengthy process that it does not need to control. Playing audio might be an example - a program could call a function to play (say) an mp3, and from that point on could continue on to other things while leaving it to the OS to manage the process of rendering the audio on the sound hardware.
The alternative interpretation is that the call will do something that the program will need to monitor, but will allow most of the process to occur in the background only notifying the program at critical points in the process. For example, asynchronous file IO might be an example - the program supplies a buffer to the operating system to write to file, and the OS only notifies the program when the operation is complete or an error occurs.
In either case, the intention is to allow the program to not be blocked waiting for a slow process to complete - how the program is expected to respond is the only real difference. Which term refers to which also changes from programmer to programmer, language to language, or platform to platform. Or the terms may refer to completely different concepts (such as the use of synchronous/asynchronous in relation to thread programming).
Sorry, but I don't believe there is a single right answer that is globally true.
Blocking call: Control returns only when the call completes.
Non blocking call: Control returns immediately. Later OS somehow notifies the process that the call is complete.
Synchronous program: A program which uses Blocking calls. In order not to freeze during the call it must have 2 or more threads (that's why it's called Synchronous - threads are running synchronously).
Asynchronous program: A program which uses Non blocking calls. It can have only 1 thread and still remain interactive.
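As a rough illustration of that last point, here is a hedged Python sketch: a single-threaded program that stays responsive because all of its waits are non-blocking and hand control back to an event loop.

import asyncio

# One thread, two activities: neither blocks the other, because awaiting a
# non-blocking operation yields control to the event loop.
async def heartbeat():
    for i in range(3):
        print("still responsive...", i)
        await asyncio.sleep(1)        # non-blocking wait

async def slow_work():
    await asyncio.sleep(2.5)          # stands in for slow I/O
    print("slow work finished")

async def main():
    await asyncio.gather(heartbeat(), slow_work())

asyncio.run(main())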
Non-blocking: This function won't wait while on the stack.
Asynchronous: Work may continue on behalf of the function call after that call has left the stack
Synchronous means to start one after the other's result, in a sequence.
Asynchronous means start together, no sequence is guaranteed on the result
Blocking means something that causes an obstruction to perform the next step.
Non-blocking means something that keeps running without waiting for anything, overcoming the obstruction.
Blocking eg: I knock on the door and wait till they open it. ( I am idle here )
Non-Blocking eg: I knock on the door, if they open it instantly, I greet them, go inside, etc. If they do not open instantly, I go to the next house and knock on it. ( I am doing something or the other, not idle )
Synchronous eg: I will go out only if it rains. ( dependency exists )
Asynchronous eg: I will go out. It can rain. ( independent events, doesn't matter when they occur )
Synchronous or Asynchronous, both can be blocking or non-blocking and vice versa
The blocking models require the initiating application to block when the I/O has started. This means that it isn't possible to overlap processing and I/O at the same time. The synchronous non-blocking model allows overlap of processing and I/O, but it requires that the application check the status of the I/O on a recurring basis. This leaves asynchronous non-blocking I/O, which permits overlap of processing and I/O, including notification of I/O completion.
To put it simply,
function sum(a,b){
    return a+b;
}
is non-blocking, while asynchronous is used to execute a blocking task and then return its response.
block + synchronous: blocking I/O must be synchronous I/O, because it has to be executed in order (but synchronous I/O might not be blocking I/O).
block + asynchronous: does not exist.
non-block + synchronous: polling / multiplexing.
non-block + asynchronous: parallel execution, such as a signal trigger.
block/non-block describes the behavior of the initiating entity itself; it means what the entity does while waiting for I/O completion.
synchronous/asynchronous describes the behavior between the I/O-initiating entity and the I/O executor (the operating system, for example); it means whether these two entities can execute in parallel.
They differ in spelling only. There is no difference in what they refer to. To be technical, you could say they differ in emphasis. Non-blocking refers to control flow (it doesn't block). Asynchronous refers to when the event/data is handled (not synchronously).
Blocking: control returns to the invoking process after processing of the primitive (sync or async) completes
Non-blocking: control returns to the process immediately after invocation

Should I care about thread safety of a static int (4 bytes) variable in ASP .NET

I have the feeling that I should not care about thread-safe accessing / writing to a
public static int MyVar = 12;
in ASP .NET.
I read/write to this variable from various user threads. Let's suppose this variable will store the numbers of clicks on a certain button/link.
My theory is that no thread can read/write to this variable at the same time. It's just a simple variable of 4 bytes.
I do care about thread safety, but only for reference objects and List instances or other types that take more cycles to read/update.
Am I wrong with my presumption?
EDIT
I understand this depends on my scenario, but that wasn't the point of the question. The question is: is it right that thread-safe code can be written with a (static int) variable without using the lock keyword?
It is my problem to write correct code. The answer seems to be: yes, if you write correct and simple code, not too complicated, you can create thread-safe functions without the need for the lock keyword.
If one thread simply sets the value and another thread reads the value, then a lock is not necessary; the read and write are atomic. But if multiple threads might be updating it and are also reading it to do the update (e.g., increment), then you definitely do need some kind of synchronization. If only one thread is ever going to update it even for an increment, then I would argue that no synchronization is necessary.
Edit (three years later) It might also be desirable to add the volatile keyword to the declaration to ensure that reads of the value always get the latest value (assuming that matters in the application).
The concept of thread 'safety' is too vague to be meaningful unfortunately. If you're asking whether you can read and write to it from multiple threads without the program crashing during the operation, the answer is almost certainly yes. If you're also asking if the variable is guaranteed to either be the old value or the new value without ever storing any broken intermediate values, the answer for this data type is again almost certainly yes.
But if your question is "will my program work correctly if I access this from multiple threads", then the answer depends entirely on what your program is doing. For example, if you run the following pseudo code in 2 threads repeatedly in most programming languages, eventually you'll hit the assertion.
if MyVar >= 1:
    MyVar = MyVar - 1
assert MyVar >= 0
Primitives like int are thread-safe in the sense that reads/writes are atomic. But as with most any type, it's left to you to do proper checking with more complex operations. For example, if (x > 0) x--; would be problematic in a multi-threaded scenario because x might change in between the if condition check and decrement.
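The same check-then-act / read-modify-write hazard can be reproduced in any language; here is a hypothetical Python illustration (the sleep only exists to make the unlucky interleaving reliable):

import threading
import time

counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    value = counter            # read...
    time.sleep(0.001)          # ...another thread can run here...
    counter = value + 1        # ...write: may overwrite someone else's update

def safe_increment():
    global counter
    with lock:                 # the read-modify-write happens as one unit
        value = counter
        time.sleep(0.001)
        counter = value + 1

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                 # usually much less than 10: updates were lost

counter = 0
threads = [threading.Thread(target=safe_increment) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                 # always 10: the lock makes the update atomic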
A simple read or write on a field of 32 bits or less is always atomic. But it is up to you to make sure that your read/write code as a whole is thread safe.
Check out this post: http://msdn.microsoft.com/en-us/magazine/cc163929.aspx
It explains why you need to synchronize access to the integers in this scenario
Try Interlocked.Increment() or Interlocked.Add() and you'll be right. Your code complexity will be the same but you truly won't have to worry. If you're not worried about losing a few clicks in your counter, you can continue as you are.
Reading or writing integers is atomic. However, reading and then writing is not atomic. So, if you have one thread that writes and many that read, you may be able to get away without locks.
However, even though the operations are atomic, there are still potential multi-threading issues. In order for one thread to be guaranteed that another thread can see values it writes, you need a memory barrier. Otherwise, the compiler can optimize the code so that the variable stays in a register (or even optimize the operation away completely), so changes would be invisible from one thread to another.
You can establish a memory barrier explicitly (volatile or Thread.MemoryBarrier), or with the Interlocked class -- or with the lock statement (Monitor).

I just don't get continuations!

What are they and what are they good for?
I do not have a CS degree and my background is VB6 -> ASP -> ASP.NET/C#. Can anyone explain it in a clear and concise manner?
Imagine if every single line in your program was a separate function. Each accepts, as a parameter, the next line/function to execute.
Using this model, you can "pause" execution at any line and continue it later. You can also do inventive things like temporarily hop up the execution stack to retrieve a value, or save the current execution state to a database to retrieve later.
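As a rough sketch of that idea (hypothetical Python, not any particular framework): each step takes "the rest of the program" as a function, so it can run it immediately, save it, or call it later.

# Each step receives its continuation (the rest of the program) as a function.
saved = {}

def step1(next_step):
    print("step 1")
    # Instead of running the rest of the program now, "pause" by saving it.
    saved["resume"] = lambda: next_step(42)

def step2(value):
    print("step 2 got", value)

step1(step2)         # prints "step 1", then pauses
saved["resume"]()    # later: resumes the saved continuation, prints "step 2 got 42"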
You probably understand them better than you think you did.
Exceptions are an example of "upward-only" continuations. They allow code deep down the stack to call up to an exception handler to indicate a problem.
Python example:
class SomeException(Exception):
    pass

def broken_function():
    raise SomeException()  # go back up the stack
    # stuff that won't be evaluated

try:
    broken_function()
except SomeException:
    # jump to here
    pass
Generators are examples of "downward-only" continuations. They allow code to reenter a loop, for example, to create new values.
Python example:
def sequence_generator(i=1):
    while True:
        yield i  # "return" this value, and come back here for the next
        i = i + 1

g = sequence_generator()
while True:
    print(next(g))
In both cases, these had to be added to the language specifically whereas in a language with continuations, the programmer can create these things where they're not available.
A heads up, this example is not concise nor exceptionally clear. This is a demonstration of a powerful application of continuations. As a VB/ASP/C# programmer, you may not be familiar with the concept of a system stack or saving state, so the goal of this answer is a demonstration and not an explanation.
Continuations are extremely versatile and are a way to save execution state and resume it later. Here is a small example of a cooperative multithreading environment using continuations in Scheme:
(Assume that the operations enqueue and dequeue work as expected on a global queue not defined here)
(define (fork)
  (display "forking\n")
  (call-with-current-continuation
   (lambda (cc)
     (enqueue (lambda ()
                (cc #f)))
     (cc #t))))

(define (context-switch)
  (display "context switching\n")
  (call-with-current-continuation
   (lambda (cc)
     (enqueue
      (lambda ()
        (cc 'nothing)))
     ((dequeue)))))

(define (end-process)
  (display "ending process\n")
  (let ((proc (dequeue)))
    (if (eq? proc 'queue-empty)
        (display "all processes terminated\n")
        (proc))))
This provides three verbs that a function can use - fork, context-switch, and end-process. The fork operation forks the thread and returns #t in one instance and #f in another. The context-switch operation switches between threads, and end-process terminates a thread.
Here's an example of their use:
(define (test-cs)
  (display "entering test\n")
  (cond
    ((fork) (cond
              ((fork) (display "process 1\n")
                      (context-switch)
                      (display "process 1 again\n"))
              (else (display "process 2\n")
                    (end-process)
                    (display "you shouldn't see this (2)"))))
    (else (cond ((fork) (display "process 3\n")
                        (display "process 3 again\n")
                        (context-switch))
                (else (display "process 4\n")))))
  (context-switch)
  (display "ending process\n")
  (end-process)
  (display "process ended (should only see this once)\n"))
The output should be
entering test
forking
forking
process 1
context switching
forking
process 3
process 3 again
context switching
process 2
ending process
process 1 again
context switching
process 4
context switching
context switching
ending process
ending process
ending process
ending process
ending process
ending process
all processes terminated
process ended (should only see this once)
Those that have studied forking and threading in a class are often given examples similar to this. The purpose of this post is to demonstrate that with continuations you can achieve similar results within a single thread by saving and restoring its state - its continuation - manually.
P.S. - I think I remember something similar to this in On Lisp, so if you'd like to see professional code you should check the book out.
One way to think of a continuation is as a processor stack. When you "call-with-current-continuation c" it calls your function "c", and the parameter passed to "c" is your current stack with all your automatic variables on it (represented as yet another function, call it "k"). Meanwhile the processor starts off creating a new stack. When you call "k" it executes a "return from subroutine" (RTS) instruction on the original stack, jumping you back in to the context of the original "call-with-current-continuation" ("call-cc" from now on) and allowing your program to continue as before. If you passed a parameter to "k" then this becomes the return value of the "call-cc".
From the point of view of your original stack, the "call-cc" looks like a normal function call. From the point of view of "c", your original stack looks like a function that never returns.
There is an old joke about a mathematician who captured a lion in a cage by climbing into the cage, locking it, and declaring himself to be outside the cage while everything else (including the lion) was inside it. Continuations are a bit like the cage, and "c" is a bit like the mathematician. Your main program thinks that "c" is inside it, while "c" believes that your main program is inside "k".
You can create arbitrary flow-of-control structures using continuations. For instance you can create a threading library. "yield" uses "call-cc" to put the current continuation on a queue and then jumps in to the one on the head of the queue. A semaphore also has its own queue of suspended continuations, and a thread is rescheduled by taking it off the semaphore queue and putting it on to the main queue.
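Python has no call/cc, but a loose analogue of that "yield puts the current continuation on a queue" idea can be sketched with generators, whose paused state plays the role of a saved continuation (a hypothetical sketch, not real continuations):

from collections import deque

ready = deque()               # the scheduler queue described above

def spawn(gen):
    ready.append(gen)

def run():
    while ready:
        task = ready.popleft()
        try:
            next(task)        # resume the task's saved "continuation"
            ready.append(task)  # it yielded: put it back on the queue
        except StopIteration:
            pass              # task finished

def worker(name, steps):
    for i in range(steps):
        print(name, "step", i)
        yield                 # like "yield" in the text: give up the CPU

spawn(worker("A", 3))
spawn(worker("B", 2))
run()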
Basically, a continuation is the ability for a function to stop execution and then pick back up where it left off at a later point in time. In C#, you can do this using the yield keyword. I can go into more detail if you wish, but you wanted a concise explanation. ;-)
I'm still getting "used" to continuations, but one way to think about them that I find useful is as abstractions of the Program Counter (PC) concept. A PC "points" to the next instruction to execute in memory, but of course that instruction (and pretty much every instruction) points, implicitly or explicitly, to the instruction that follows, as well as to whatever instructions should service interrupts. (Even a NOOP instruction implicitly does a JUMP to the next instruction in memory. But if an interrupt occurs, that'll usually involve a JUMP to some other instruction in memory.)
Each potentially "live" point in a program in memory to which control might jump at any given point is, in a sense, an active continuation. Other points that can be reached are potentially active continuations, but, more to the point, they are continuations that are potentially "calculated" (dynamically, perhaps) as a result of reaching one or more of the currently active continuations.
This seems a bit out of place in traditional introductions to continuations, in which all pending threads of execution are explicitly represented as continuations into static code; but it takes into account the fact that, on general-purpose computers, the PC points to an instruction sequence that might potentially change the contents of memory representing a portion of that instruction sequence, thus essentially creating a new (or modified, if you will) continuation on the fly, one that doesn't really exist as of the activations of continuations preceding that creation/modification.
So continuation can be viewed as a high-level model of the PC, which is why it conceptually subsumes ordinary procedure call/return (just as ancient iron did procedure call/return via low-level JUMP, aka GOTO, instructions plus recording of the PC on call and restoring of it on return) as well as exceptions, threads, coroutines, etc.
So just as the PC points to computations to happen in "the future", a continuation does the same thing, but at a higher, more-abstract level. The PC implicitly refers to memory plus all the memory locations and registers "bound" to whatever values, while a continuation represents the future via the language-appropriate abstractions.
Of course, while there might typically be just one PC per computer (core processor), there are in fact many "active" PC-ish entities, as alluded to above. The interrupt vector contains a bunch, the stack a bunch more, certain registers might contain some, etc. They are "activated" when their values are loaded into the hardware PC, but continuations are abstractions of the concept, not PCs or their precise equivalent (there's no inherent concept of a "master" continuation, though we often think and code in those terms to keep things fairly simple).
In essence, a continuation is a representation of "what to do next when invoked", and, as such, can be (and, in some languages and in continuation-passing-style programs, often is) a first-class object that is instantiated, passed around, and discarded just like most any other data type, and much like how a classic computer treats memory locations vis-a-vis the PC -- as nearly interchangeable with ordinary integers.
In C#, you have access to two continuations. One, accessed through return, lets a method continue from where it was called. The other, accessed through throw, lets a method continue at the nearest matching catch.
Some languages let you treat these statements as first-class values, so you can assign them and pass them around in variables. What this means is that you can stash the value of return or of throw and call them later when you're really ready to return or throw.
Continuation callback = return;
callMeLater(callback);
This can be handy in lots of situations. One example is like the one above, where you want to pause the work you're doing and resume it later when something happens (like getting a web request, or something).
I'm using them in a couple of projects I'm working on. In one, I'm using them so I can suspend the program while I'm waiting for IO over the network, then resume it later. In the other, I'm writing a programming language where I give the user access to continuations-as-values so they can write return and throw for themselves - or any other control flow, like while loops - without me needing to do it for them.
Think of threads. A thread can be run, and you can get the result of its computation. A continuation is a thread that you can copy, so you can run the same computation twice.
Continuations have had renewed interest with web programming because they nicely mirror the pause/resume character of web requests. A server can construct a continuation representing a user's session and resume it if and when the user continues the session.
