async/await with HTTP method - asynchronous

If I introduce fetch for HTTP communication into code that was written without any asynchronous processing in mind, does every function that touches a fetch call, even slightly (in the extreme case, all the way up to the main function), have to become async/await (asynchronous processing)?
Or can the scope of the asynchronous processing be limited?

It is normal to go async all the way.
If you use a hexagonal / ports-and-adapters architecture, you can (sometimes) extract the I/O operations into "ports" and keep your core business logic synchronous. But the composition of the synchronous business logic and the asynchronous ports is asynchronous, so your main entry points are almost always asynchronous.
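A minimal sketch of what that looks like with fetch (the endpoint, field names, and functions below are made up for illustration): the core logic stays a plain synchronous function, the port doing the I/O is async, and the asynchrony bubbles all the way up to the entry point.

// Pure, synchronous business logic: no I/O, easy to test.
function applyDiscount(order, rate) {
  return { ...order, total: order.total * (1 - rate) };
}

// The "port" does the HTTP I/O, so it is async.
async function fetchOrder(id) {
  const response = await fetch(`/api/orders/${id}`); // fetch returns a Promise
  return response.json();
}

// Composing the synchronous core with the async port is itself async...
async function discountedOrder(id) {
  const order = await fetchOrder(id);
  return applyDiscount(order, 0.1); // ...but the core logic stays synchronous
}

// ...and so is the entry point that calls it.
async function main() {
  console.log(await discountedOrder(42));
}
main();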

Related

What's the difference between the synchronous, asynchronous, and callback styles for a gRPC C++ server?

I have read the tutorial and googled this question, but I still have some confusion. Here is my understanding of their differences, but I'm not sure whether it's right:
The callback style is also a synchronous style.
With the synchronous and callback styles, gRPC C++ itself manages the request/response queues and the threading model, but the asynchronous style lets the user provide the thread management. Am I right?
The sync and callback styles are not multi-threaded. Am I right?
Synchronous and callback have the same performance, but the asynchronous style can achieve very high performance?

Synchronous vs Asynchronous in Microservice pattern

What is the meaning of synchronous and asynchronous in general?
What are the uses of synchronous and asynchronous communication in microservices? When should I use synchronous, and when asynchronous?
Please explain with an example; thanks in advance.
Under synchronous communication, the connection between components stays live the whole time. An example would be a service making a GET/POST call and waiting for the response before proceeding to the immediate next step.
Asynchronous meaning one component does not wait for the other components to react. An example would be a service publishing message to a Kafka topic. The service which creates the event does not know when the clients will consume it.
I would start by thinking about the application's end-user use case to decide when to use which.
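A rough sketch of the difference (the service URL is invented, and producer stands in for whichever messaging client you use; the call shape is only loosely modelled on typical Kafka client libraries):

// Synchronous: the caller waits for the other service's response before moving on.
async function getCustomer(id) {
  const res = await fetch(`http://customer-service/customers/${id}`); // blocked (awaiting) until it answers
  return res.json();
}

// Asynchronous: the caller publishes an event and moves on; consumers react whenever they read it.
async function publishOrderPlaced(order, producer) {
  await producer.send({ topic: 'orders', messages: [{ value: JSON.stringify(order) }] });
  // nothing downstream is awaited here; the publisher does not know when consumers will process it
}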

Async calls using HTTPClient vs Direct calling methods asynchronously using Tasks for a synchronous service

I have a scenario in my existing application where, on the click of a Save button, a JavaScript function is called. This JavaScript function internally makes 4-5 asynchronous calls to web services. For various reasons we now have big JavaScript files with a lot of business logic, and we are also facing performance issues in the application. To reduce the number of XHR calls we make to the server, we thought of consolidating these calls on the server side and making just a single call from our JavaScript.
On the server side we are using async/await to make these calls asynchronous. So we have created a wrapper service with one method which now calls the different service methods using the SendAsync method exposed by HTTPClient.
Our underlying services are all synchronous, and to achieve asynchronous behaviour we used HTTPClient. We measured performance and it shows a considerable gain.
But one of our colleagues pointed out that we will actually have the overhead of serialization and deserialization, and that we are now originating additional web service calls from the server which will ultimately run synchronously. So why not call the methods directly instead of making new HTTP calls?
Now our methods are all synchronous, and to make them asynchronous we will have to use Tasks, which will again be overhead.
Both approaches carry overhead, but we see making new HTTP requests using async/await as more in line with the microservices concept.
There is a debate and I would like to know others' thoughts.
My two cents:
The approach of aggregating the information on the server side is good.
From my point of view, the use of HTTPClient internally on the server side is a solution only if you want to connect to a legacy service that you cannot integrate directly. HTTPClient is simple to use and robust, but it's technically a lot more overhead than using a Task (think of error handling, serialisation, testing, network/socket resources).
A Task is also nice since it allows proper cancellation, which HTTPClient cannot achieve (HTTPClient can only close the socket; the other end could still block resources).
On top of the general resource aspect, the use of Futures makes the Task a perfect match:
https://msdn.microsoft.com/en-us/library/ff963556.aspx
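For what it's worth, the trade-off can be sketched language-agnostically (the original debate is about C# HttpClient vs Task; the JavaScript below, with made-up endpoint and function names, only illustrates the shape):

// Option 1: the aggregator loops back over HTTP, paying for serialization, sockets and HTTP error handling.
async function getReportViaHttp() {
  const res = await fetch('http://localhost/api/report');
  return res.json();
}

// Option 2: the aggregator calls the in-process implementation directly, avoiding that overhead.
async function getReportInProcess() {
  return buildReport(); // same business logic, no extra network round trip
}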

How is asynchronous callback implemented?

How do all the languages implement asynchronous callbacks?
For example, in C++ one needs a "monitor thread" to start a std::async. If it is started in the main thread, it has to wait for the callback.
std::thread t{[] { std::async(callback_function).get(); }}; t.detach(); // detached thread waits for the result
vs.
std::async(callback_function).get(); // the main thread has to wait here
What about asynchronous callbacks in JavaScript? In JS callbacks are used massively... How does V8 implement them? Does V8 create a lot of threads to listen on them and execute a callback when it gets a message? Or does it use one thread to listen for all the callbacks and keep polling?
For example,
setInterval(function(){},1000);
setInterval(function(){},2000);
Does V8 create two threads and monitor each callback's state, or does it have some kind of pool to monitor all the callbacks?
V8 itself does not implement callback-based asynchronous functions such as setInterval. The engine simply provides a way to execute JavaScript code.
As a V8 embedder you can create a setInterval JavaScript function linked to your own native C++ function that does whatever you want, for example create a thread or schedule some job. At that point it is your responsibility to call the provided callback when necessary. Only one thread at a time can use the V8 engine (a V8 isolate instance) to execute code, which means synchronization is required if a callback needs to be called from another thread. V8 provides a locking mechanism if you need this.
Another, more common approach is to create a queue of functions for V8 to execute and use an infinite queue-processing loop to execute code on one thread. This is basically an event loop. This way you don't need an execution lock; instead, another thread pushes the callback function onto the queue.
So how it is implemented is up to the embedder: a browser, Node.js, or something else.
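As a toy sketch of the queue-plus-processing-loop idea (this is not how V8 or any real embedder is written; it only illustrates the mechanism, and it reuses the host's own setInterval just to drive the tick):

const queue = [];  // callbacks whose condition has been met and that are ready to run
const timers = []; // conditions being monitored; here the only condition is "enough time has passed"

function myInterval(fn, ms) {               // a stand-in for an embedder-provided setInterval
  timers.push({ fn, ms, next: Date.now() + ms });
}

function tick() {
  const now = Date.now();
  for (const t of timers) {                 // condition monitoring
    if (now >= t.next) {
      queue.push(t.fn);                     // condition met: schedule the callback
      t.next = now + t.ms;
    }
  }
  while (queue.length) queue.shift()();     // run everything that is ready, one at a time, on one thread
}

setInterval(tick, 10);                      // drive the toy loop with the host's real timer

myInterval(() => console.log('1s'), 1000);
myInterval(() => console.log('2s'), 2000);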
TL;DR: To implement an asynchronous callback is basically to allow the control flow to proceed without blocking on the callback. Before the callback function is finally called, the control flow is free to execute anything that does not depend on the callback's result; e.g., the caller can proceed as if the callback function had already returned, or the caller may yield control to other functions.
Since the question is about general implementation rather than a specific language, my answer tries to be general enough to cover the commonalities across implementations.
Different languages have different implementations of asynchronous callbacks, but the principles are the same. The key is to decouple the control flow from the code being executed. These correspond to the execution context (like a thread of control with a runtime stack) and the executed task. Traditionally the execution context and the executed task are associated 1:1; with asynchronous callbacks, they are decoupled.
1. The principles
To decouple the control flow from the code, it is helpful to think of every asynchronous callback as a conditional task. When the code registers an asynchronous callback, it virtually installs the task's condition in the system. The callback function is then invoked when the condition is satisfied. To support this, a condition monitoring mechanism and a task scheduler are needed, so that,
The programmer does not need to track the callback's condition;
Before the condition is satisfied, the program may proceed to execute other code that does not depend on the callback's result, without blocking on the condition;
Once the condition is satisfied, the callback is guaranteed to execute. The programmer does not need to schedule its execution;
After the callback is executed, its result is accessible to the caller.
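Those four properties are essentially what a JavaScript Promise packages up; a tiny sketch (the URL and the helper function are made up):

function doUnrelatedWork() { console.log('doing other work meanwhile'); } // placeholder for unrelated code

const pending = fetch('/api/data')        // (1) the runtime, not the programmer, tracks the condition
  .then(res => res.json());               // (3) the callback is scheduled automatically once it is met

doUnrelatedWork();                        // (2) the caller keeps running without blocking

pending.then(data => console.log(data));  // (4) the callback's result is accessible to the caller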
2. Implementation for Portability
For example, if your code needs to process the data from a network connection, you do not need to write the code that checks the connection state. You only register a callback that will be invoked once the data is available for processing. The dirty work of connection checking is left to the language implementation, which is known to be tricky, especially when we talk about scalability and portability.
The language implementation may employ asynchronous I/O, non-blocking I/O, a thread pool, or whatever technique to check the network state for you, and once the data is ready, the callback function is scheduled to execute. Here the control flow of your code looks as if it goes directly from the callback registration to the callback execution, because the language hides the intermediate steps. This is the portability story.
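With Node, for example, you just register callbacks on a socket and never write the connection-checking yourself; a small sketch with the net module (a real API; example.com is only a placeholder host):

const net = require('net');

const socket = net.connect(80, 'example.com', () => {
  socket.write('GET / HTTP/1.0\r\nHost: example.com\r\n\r\n');
});

socket.on('data', chunk => {              // invoked only once data is actually available
  console.log('received', chunk.length, 'bytes');
});
socket.on('end', () => console.log('connection closed'));
// The runtime's own machinery (epoll/kqueue/IOCP, thread pools, ...) does the state checking for you.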
3. Implementation for Scalability
To hide the dirty work is only part of the whole story. The other part is that your code itself does not need to block waiting for the task's condition. It does not make sense to wait for one connection's data when you have lots of network connections open simultaneously and some of them may already have data ready. The control flow of your code can simply register the callback and then move on to other tasks (e.g., the callbacks whose conditions have already been satisfied), knowing that the registered callbacks will be executed anyway when their data are available.
If satisfying the callback's condition does not involve much CPU (e.g., waiting for a timer, or waiting for data from the network), and the callback function itself is lightweight, then a single CPU (or a single thread) is able to process lots of callbacks concurrently, for example when processing incoming network requests. Here the control flow may look like jumping from one callback to another. This is the scalability story.
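Node's built-in HTTP server is the classic illustration of this (real API, trivial handler): one thread serves many connections because each callback runs only when its request is actually ready.

const http = require('http');

http.createServer((req, res) => { // runs whenever *any* connection has a request ready
  res.end('hello');               // the work per callback is light, so one thread can serve many clients
}).listen(8080);
// No thread ever sits idle waiting on a single connection; control "jumps" between ready callbacks.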
4. Implementation for Parallelism
Sometimes the callbacks are not pending on a non-blocking I/O condition but on blocking operations such as a page fault; or the callbacks do not rely on any condition at all but are pure computation. In this case, an asynchronous callback does not save you CPU waiting time (because there is no idle waiting). But since an asynchronous callback implies that the callback function can be executed in parallel with the caller or with other callbacks (subject to certain data-sharing and synchronization constraints), the language implementation can dispatch the callback tasks to different threads, achieving the benefits of parallelism if the platform has more than one hardware thread context. It still improves scalability.
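In a JavaScript runtime that role is played by worker threads; a small sketch with Node's worker_threads module (a real module, though the worker file name below is made up and its contents are not shown):

const { Worker } = require('worker_threads');

function runHeavyTask(input) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./heavy-task.js', { workerData: input }); // runs on another thread
    worker.on('message', resolve);   // the "callback" fires when the parallel computation finishes
    worker.on('error', reject);
  });
}

runHeavyTask(42).then(result => console.log(result)); // the caller keeps going in the meantime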
5. Implementation for Productivity
Productivity with asynchronous callbacks may suffer when the code needs to deal with chained callbacks, i.e., when callbacks register other callbacks recursively, known as callback hell. There are ways to rescue it.
The semantics of an asynchronous callback can be exploited to replace the hopeless nested callbacks with other language constructs. Basically there are two different views of callbacks:
From the data-flow point of view: asynchronous callback = event + task. Registering a callback essentially generates an event that will be emitted when the task's condition is satisfied. In this view, chained callbacks are just events whose processing triggers the emission of further events. This can be implemented naturally in event-driven programming, where task execution is driven by events. Promises and Observables may also be regarded as event-driven concepts. When multiple events are ready concurrently, their associated tasks can be executed concurrently as well.
From the control-flow point of view: registering a callback yields control to other code, and the callback's execution simply resumes the control flow once its condition is satisfied. In this view, chained asynchronous callbacks are just resumable functions. Multiple callbacks can be written one after another in the traditional "synchronous" way, with a yield operation (or await) in between. It effectively becomes a coroutine.
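Concretely, the two views look like this in JavaScript (a toy example: getUser, getOrders, and getTotal are hypothetical, and the await version assumes Promise-returning variants of them):

// Nested callbacks: each step registers the next one ("callback hell").
getUser(id, user => {
  getOrders(user, orders => {
    getTotal(orders, total => {
      console.log(total);
    });
  });
});

// The control-flow view: the same chain as a resumable function with await.
async function printTotal(id) {
  const user = await getUser(id);     // execution suspends here and resumes when the value is ready
  const orders = await getOrders(user);
  const total = await getTotal(orders);
  console.log(total);
}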
I haven't discussed the implementation of data passing between the asynchronous callback and its caller, but that is usually not difficult when using shared memory, where the caller and the callback can share data. Actually, Go's channels can also be seen as being in the same vein as yield/await, but with a focus on data passing.
The callbacks that are passed to browser APIs, like setTimeout, are pushed into the same browser queue when the API has done its job.
The engine can check this queue when the stack is empty and push the next callback into the JS stack for execution.
You don't have to monitor the progress of the API calls: you asked for a job to be done, and your callback will be put in the queue when it's done.

Which is better in this case - sync or async web service?

I'm setting up a web service in Axis2 whose job it will be to take a bunch of XML and put it onto a queue to be processed later. I understand it's possible to set up a client to invoke a synchronous web service asynchronously by using an "invokeNonBlocking" operation on the "Call" instance. (ref http://onjava.com/pub/a/onjava/2005/07/27/axis2.html?page=4)
So, my question is, is there any advantage to using an asynchronous web service in this case? It seems redundant because 1) the client isn't blocked and 2) the service has to accept and write the XML to the queue regardless of whether it's synchronous or asynchronous.
In my opinion, asynchronous is the appropriate way to go. A couple of things to consider:
Do you have multiple clients accessing this service at any given moment?
How often is this process occurring?
It does take a little more effort to implement the async methods. But I guarantee, in the end you will be much happier with the result. For one, you don't have to manage threading. Your primary concern might just be the volatility of the data in the queue (i.e., race/deadlock conditions).
A "sync call" seems appropriate, I agree.
If the request from the client isn't time-consuming, then I don't see the advantage either in making the call asynchronous. From what I understand of the situation in question here, the web service will perform its "processing" of the request some time in the future.
If, on the contrary, the request had required a time-consuming process, then an async call would have been appropriate.
After ruminating some more about it, I'm thinking that the service should be asynchronous. The reason is that it would put the task of writing the data to the queue into a separate thread, thus lessening the chances of a timeout. It makes the process more complicated, but if I can avoid a timeout, then it's got to be done.
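The general "accept fast, enqueue, process later" shape, sketched with plain Node rather than Axis2 (the in-memory array stands in for a real queue, and handleXml is a made-up placeholder for the later processing):

const http = require('http');

const queue = [];                                // stand-in for a real message queue

function handleXml(xml) {                        // placeholder for the later processing step
  console.log('processing', xml.length, 'bytes of XML');
}

http.createServer((req, res) => {
  let body = '';
  req.on('data', chunk => { body += chunk; });
  req.on('end', () => {
    queue.push(body);                            // enqueue the XML and return immediately
    res.writeHead(202);                          // 202 Accepted: the real work happens later
    res.end();
  });
}).listen(8080);

setInterval(() => {                              // a separate loop drains the queue, so the HTTP
  while (queue.length) handleXml(queue.shift()); // handler itself never risks a long-running timeout
}, 1000);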

Resources