Does the ZooKeeper asynchronous API watcher callback arrive before the completion callback?

I used the ZooKeeper asynchronous API with the C client to monitor my cluster, using functions such as zoo_awget(). I ran into some confusing behavior after my app had been running for a while (the ZooKeeper cluster was unstable and a new leader may have been elected during that time).
The problem is:
The watcher of the asynchronous API is called before the completion callback.
I have searched the official ZooKeeper docs, and the following passage seems to hint that it is possible for the asynchronous API to notify the watcher before the completion callback. Am I right?
Synchronous calls may not return in the correct order. For example, assume a client does the following processing: issues an asynchronous read of node /a with watch set to true, and then in the completion callback of the read it does a synchronous read of /a. (Maybe not good practice, but not illegal either, and it makes for a simple example.)
Note that if there is a change to /a between the asynchronous read and the synchronous read, the client library will receive the watch event saying /a changed before the response for the synchronous read, but because the completion callback is blocking the event queue, the synchronous read will return with the new value of /a before the watch event is processed.

Related

What's the difference between the synchronous, asynchronous, and callback styles for a gRPC C++ server?

I have read the tutorial and googled this question, but I am still confused. Here is my understanding of their differences, but I'm not sure whether it's right.
The callback style is also a synchronous style.
With the synchronous and callback styles, gRPC C++ itself manages the request/response queues and the threading model, but the asynchronous style lets the user provide their own thread management. Am I right?
The sync and callback styles are not multi-threaded. Am I right?
The synchronous and callback styles have the same performance, but the asynchronous style can achieve very high performance?

Async programming in .NET Core

I was reading Microsoft's documentation, specifically the async programming article, and I didn't understand this section, where it explains what happens to the server's threads when using async code.
Because it (the server) uses async and await, each of its threads is freed up when the I/O-bound work starts, rather than when it finishes.
Could anyone explain what it means for the threads to be freed up when the I/O starts?
Here is the article: https://learn.microsoft.com/en-us/dotnet/standard/async-in-depth
When ASP.NET gets an HTTP request, it takes a thread from the thread pool and uses that to execute the handler for that request (e.g., a specific controller action).
For synchronous actions, the thread stays assigned to that HTTP request until the action completes. For asynchronous actions, the await in the action method may cause the thread to return an incomplete task to the ASP.NET runtime. In this case, ASP.NET will free up the thread to handle other requests while the I/O is in flight.
For further reading, see the difference between synchronous and asynchronous request handling, and how asynchronous work doesn't require a thread at all times.
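To make "freed up at the await" concrete, here is a rough sketch of the same non-blocking shape. It is a TypeScript/Node analogue rather than ASP.NET code, so "the thread returns to the pool" becomes "the single event-loop thread is free to handle other requests"; queryDatabase and the port are made-up names for illustration.

```typescript
import { createServer } from "node:http";

// Stand-in for a real database or HTTP call; the timer simulates I/O latency.
function queryDatabase(sql: string): Promise<string[]> {
  return new Promise((resolve) =>
    setTimeout(() => resolve([`rows for: ${sql}`]), 100)
  );
}

const server = createServer(async (_req, res) => {
  // The handler suspends at this await. Nothing sits here waiting;
  // the event loop is free to accept and process other requests.
  const rows = await queryDatabase("SELECT * FROM employees");

  // Execution resumes here once the I/O completes.
  res.setHeader("content-type", "application/json");
  res.end(JSON.stringify(rows));
});

server.listen(8080);
```

The ASP.NET version behaves analogously: the await returns an incomplete Task to the runtime, the request thread goes back to the thread pool, and a pool thread picks the method back up when the I/O finishes.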
When your application makes a call to an external resource like a database or an HTTP service, the thread that initiated the call needs to wait.
In the synchronous approach, it waits idly until it gets a response.
In the asynchronous approach, the thread is released as soon as the app makes the external call.
Here is an article about how it happens:
https://medium.com/@karol.rossa/asynchronous-programming-73b4f1988cc6
And here is a performance comparison between the async and sync approaches:
https://medium.com/@karol.rossa/asynchronous-performance-1be01a71925d
Here's an analogy for you: have you ever ordered at a restaurant with a large group and had someone not be ready to order when the waiter came to them? Did they bring in a different waiter to wait for him or did the waiter just come back to him after he took other people's orders?
The fact that the waiter is allowed to come back to him later means that he's freed up immediately after calling on him rather than having to wait around until he's ready.
Asynchronous I/O works the same way. When you do a web service call, for example, the slowest part (from the perspective of the client at least) is waiting for the result to come back: most of the delay is introduced by the network (and the other server), during which time the client thread would otherwise have nothing to do but wait. Async allows the client to do other things in the background.

WebAssembly blocks the web worker thread too

This is related to the previous question WebAssembly in async code
Basically, that question is about the problem of the WebAssembly blocking the main thread, and the answer to the question is to move the WebAssembly code to a web worker. That works.
The problem now is that the WebAssembly code blocks onmessage() on the worker.
My long-running WebAssembly code has functions like play(), pause(), stop(), etc. play() periodically checks a pause flag and a stop flag to determine whether it should return. pause() and stop() are used to set those flags.
The JavaScript main thread calls postMessage() to send a message to the worker, which in turn calls play().
Since onmessage() is blocked, the worker has no chance to receive further messages to do pause() or stop() until play() has completed. That defeats the very purpose of pause/stop.
It seems this simple play/pause/stop use case cannot be supported with WebAssembly.
Any comments or suggestions?
By the way, that use case is well supported by the defunct Google PNaCl.
Thanks.
In short: web workers do not ignore messages, even if the web worker thread is blocked.
All browser events, including web worker postMessage()/onmessage() events, are queued. This is the fundamental execution model of JavaScript (onmessage() is handled in JS even if you use WebAssembly). Have a look at "Concurrency model and Event Loop" on MDN for further detail.
So what is going to happen in your case is this: while onmessage() is blocked, the events from the main thread's postMessage() calls are queued automatically. When a single onmessage() job finishes in the worker thread, the worker checks its event queue for messages posted in the meantime and handles them if there are any. So you don't need to worry about losing messages, unless an onmessage() job takes something like 10 seconds and hundreds of events pile up in the queue.
This is how asynchronous execution is done everywhere in the browser.
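A minimal sketch of that queuing behaviour (the file names are hypothetical, and a busy loop stands in for the blocking WebAssembly call): the "pause" message is not lost, it is simply delivered only after the long job returns.

```typescript
// worker.ts
self.onmessage = (e: MessageEvent<string>) => {
  if (e.data === "play") {
    const end = Date.now() + 5_000;
    while (Date.now() < end) {
      // long-running, blocking work (stands in for the WebAssembly play())
    }
    console.log("play() finished");
  } else {
    // Runs only after the blocking handler above has returned: queued, not lost.
    console.log("received:", e.data);
  }
};
```

```typescript
// main.ts
const worker = new Worker("worker.js");
worker.postMessage("play");
worker.postMessage("pause"); // sits in the worker's event queue for ~5 seconds
```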
Considering you are targeting recent browsers (you are already using WebAssembly), you can most likely rely on SharedArrayBuffer and Atomics. Have a look at the answers to "Is it possible to pause/resume a web worker externally?"; in your case the flag checks will need to be handled inside the WebAssembly code (the Atomics.wait part). A sketch of the idea follows.
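Here is a minimal TypeScript sketch of that approach, assuming cross-origin isolation is enabled (a requirement for SharedArrayBuffer) and using made-up file names. The flags live in a SharedArrayBuffer, so the main thread can flip them with Atomics.store even while the worker is busy inside its loop.

```typescript
// main.ts
const PAUSE = 0, STOP = 1;
const flags = new Int32Array(new SharedArrayBuffer(8)); // [PAUSE, STOP]
const worker = new Worker("worker.js");

worker.postMessage({ cmd: "init", flags });
worker.postMessage({ cmd: "play" });

// Called from UI handlers; no postMessage is needed, so these work even
// while the worker is blocked inside play().
function pause(): void { Atomics.store(flags, PAUSE, 1); }
function resume(): void { Atomics.store(flags, PAUSE, 0); Atomics.notify(flags, PAUSE); }
function stop(): void {
  Atomics.store(flags, STOP, 1);
  Atomics.store(flags, PAUSE, 0);
  Atomics.notify(flags, PAUSE); // wake the worker if it is currently paused
}
```

```typescript
// worker.ts
const PAUSE = 0, STOP = 1;
let flags: Int32Array;

self.onmessage = (e: MessageEvent<{ cmd: string; flags?: Int32Array }>) => {
  if (e.data.cmd === "init" && e.data.flags) {
    flags = e.data.flags;
  } else if (e.data.cmd === "play") {
    while (Atomics.load(flags, STOP) === 0) {
      if (Atomics.load(flags, PAUSE) === 1) {
        Atomics.wait(flags, PAUSE, 1); // sleep until the main thread resumes or stops
        continue;
      }
      // ... do one small chunk of the long-running work here ...
    }
    console.log("play() returned because stop was requested");
  }
};
```

In the real application the same check would sit inside the compiled play() loop: either the flag lives in the module's own shared linear memory so the WebAssembly code can read it directly, or play() calls a small imported JS helper that performs the Atomics.load shown above.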

Web API 2 - are all REST requests asynchronous?

Do I need to do anything to make all requests asynchronous or are they automatically handled that way?
I ran some tests and it appears that each request comes in on its own thread, but I figure better to ask as I might have tested wrong.
Update: (I have a bad habit of not explaining fully - sorry.) Here's my concern. A client browser makes a REST request to my server at http://data.domain/com/employee_database/?query=state:Colorado. That comes in to the appropriate method in the controller. That method queries the database and returns an object, which is then turned into a JSON structure and returned to the calling app.
Now let's say 10,000 clients all make a similar query to the same server. So I have 10,000 requests coming in at once. Will my controller method be called simultaneously in 10,000 distinct threads? Or must the first request return before the second request is called?
I'm not asking about the code in my handler method having asynchronous components. For my case the request becomes a single SQL query so the code has nothing that can be handled asynchronously. And until I get the requested data, I can't return from the method.
No, REST is not async by default; the requests are handled synchronously. However, your web server (IIS) has a max-threads setting that controls how many requests can be worked on at the same time, and it maintains a queue of received requests. So a request goes into the queue, and if a thread is available it gets executed; otherwise the request waits in the IIS queue until a thread is available.
I think you should be using async IO operations, such as async database calls, in your case. Yes, in Web API every request gets its own thread, but threads can run out if there are many concurrent requests. Threads also use memory, so if your API gets hit by too many requests it may put pressure on your system.
The benefit of async over sync is that you use your system resources wisely. Instead of blocking the thread while it waits for the database call to complete, as the sync implementation does, async frees the thread to handle more requests or whatever other work needs a thread. Once the IO (database) call completes, another thread picks up from there and continues the implementation. Async can also make your API more responsive under load when your IO operations take a long time to complete.
To be honest, your question is not very clear. If you are making an HTTP GET using HttpClient, say with the GetAsync method, the request is fired and you can do whatever you want on your thread until you get the response back. So that request is asynchronous.
If you are asking about the server side, which handles this request (assuming it is ASP.NET Web API), then whether it is asynchronous or not is up to how you implemented your Web API. If your action method does three things, say 1, 2, and 3, one after the other, synchronously in blocking mode, the same thread is going to service the request.
On the other hand, say #2 above is a call to a web service over HTTP. If you use HttpClient and make that call asynchronously, you can get into a situation where one request is serviced by more than one thread. For that to happen, you must have made the HTTP call from your action method asynchronously and used the async keyword. In that case, when you await inside the action method, your action method execution returns, the thread servicing your request is free to service some other request, and ultimately, when the response is available, the same or some other thread continues from where it left off.
Long, boring answer perhaps, but difficult to explain just through words by typing, I guess. Hope you get some clarity.
UPDATE:
Your action method will execute in parallel on 10,000 threads (ideally). I say ideally because a CLR thread pool with 10,000 threads is not typical and probably impractical as well. There are physical limits as well as limits imposed by the framework, but I guess the answer to your question is that the requests will be serviced in parallel. The correct term here is 'parallel', not 'async'.
Whether it is sync or async is your choice. You choose by the way you write your action. If you return a Task, and also use async IO under the hood, it is async. In other cases it is synchronous.
Don't feel tempted to slap async on your action and use Task.Run. That is async-over-sync (a known anti-pattern). It must be truly async all the way down to the OS kernel.
No framework can make sync IO automatically async, so it cannot happen under the hood. Async IO is callback-based, which is a significant change in programming model.
This does not answer what you should do of course. That would be a new question.
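To illustrate the async-over-sync trap in code, here is a TypeScript/Node analogue (not ASP.NET, with node:fs standing in for the IO): wrapping a blocking call in a promise or task does not make the IO itself asynchronous, it only changes where the blocking happens.

```typescript
import { readFileSync } from "node:fs";
import { readFile } from "node:fs/promises";

// Async-over-sync: the signature looks async, but readFileSync still blocks
// the event loop while the disk IO is in flight. Nothing has been freed up.
function fakeAsyncRead(path: string): Promise<Buffer> {
  return Promise.resolve().then(() => readFileSync(path));
}

// Truly async: the read is handed off to the OS, and the caller is free
// until the completion callback fires and resolves the promise.
function realAsyncRead(path: string): Promise<Buffer> {
  return readFile(path);
}
```

In the Web API case the equivalent advice is the one above: return a Task from the action and use genuinely async IO (an async database driver, HttpClient.GetAsync, and so on) rather than wrapping synchronous calls in Task.Run.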

Are ASMX web services, WCF services, or ASPX pages async by default?

I got myself involved in a bet; our dispute is about async web services and the other things I mentioned above.
I think that, logically, a web service is synchronous by default; the other party said that is not correct.
Who is right and who is wrong? Can anyone explain it to me?
Thanks in advance.
All of them are synchronous by default, but you can write all of them asynchronously and you can call all of them asynchronously. You should always distinguish between a synchronous/asynchronous call and synchronous/asynchronous execution.
Calls
Synchronous - the client calls the service/page and blocks until the service/page returns the response.
Asynchronous - the client calls the service/page and can continue with its work. The client is usually notified by some event (or it can poll for the result) that the response has arrived. In ASPX this is the typical callback or AJAX call.
Execution:
Synchronous - the service/page receives the call and processes it. All external processing (file access, calling other services, calling the database) is done synchronously, and the service/page blocks the executing thread for the whole duration of the request processing.
Asynchronous - the service/page receives the call, prepares the external processing, and executes it asynchronously. The processing thread is returned to the thread pool and can serve other requests in the meantime. Once the external processing ends, the service/page execution is scheduled again, receives a thread from the thread pool, finishes execution, and returns the response. This is usually only needed on high-traffic pages/services with intensive external communication.
These two types of asynchronous processing are completely independent. You can have asynchronous calls to synchronous services, and any other combination.
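A small sketch of the call-side distinction, in TypeScript rather than ASP.NET (the URL is a placeholder): the client makes an asynchronous call to an endpoint that may well execute synchronously on the server, which is exactly the "asynchronous call to a synchronous service" combination described above.

```typescript
// Asynchronous *call*: fire the request, keep working, pick up the result later.
async function callServiceAsynchronously(): Promise<void> {
  const pending = fetch("https://example.test/service"); // request is in flight
  doOtherLocalWork();                                     // the caller is not blocked
  const response = await pending;                         // collect the response when ready
  console.log("service answered with status", response.status);
}

// Synchronous-style *call*: the calling code does not continue until the
// response arrives (in JS the thread itself is still not blocked, but the
// caller makes no progress in the meantime).
async function callServiceSynchronously(): Promise<void> {
  const response = await fetch("https://example.test/service"); // wait right here
  console.log("service answered with status", response.status);
}

function doOtherLocalWork(): void {
  // placeholder for whatever the client does while the request is in flight
}
```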
