Track status of multiple async requests when using Spring @Async

I am using Spring Boot for developing services in my application.
I have a scenario wherein a request submitted to the back-end would take some time to complete.
To avoid making the client wait, I want to return a response immediately with a message that the request has been accepted, while the request keeps processing in a background thread.
I see Spring provides the @Async annotation, which runs a method on a separate thread from the main request thread, and using it I am able to offload the processing to a separate thread.
What I want to do is, when I return the initial "accepted" response, also provide the client with a tracking key/token which the client can later use to check the status of the request.
Since there can be multiple clients accessing the service, there should be a way of uniquely identifying each client's request from another.
I see there is no such feature in Spring @Async or Future which can return a tracking id as such.
One possibility I see is to put the returned Future in the HttpSession and later use it to check the status. But I prefer not to use the HttpSession and want my services to be stateless.
Is there any way/approach by which I can accomplish this requirement?
Thanks,
BS

Generate the key before calling the Async method, and pass it to the method:
String key = generateUniqueKey();
callAsyncMethod(key);
return key;
The @Async method will have to persist the status of the execution somewhere (let's call it the dataStore). When the client asks for the status using the key, you look it up in the dataStore and return it.
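As a minimal sketch of that idea in Spring, assuming an in-memory ConcurrentHashMap stands in for the dataStore and @EnableAsync is present on a configuration class (the class names, endpoints, and status values here are made up for illustration):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.*;

@Service
class ProcessingService {

    // key -> status; swap for a database table if status must survive restarts
    private final Map<String, String> statusStore = new ConcurrentHashMap<>();

    @Async // runs on Spring's task executor, not the request thread
    public void process(String key, String payload) {
        statusStore.put(key, "IN_PROGRESS");
        try {
            // ... long-running work here ...
            statusStore.put(key, "COMPLETED");
        } catch (Exception e) {
            statusStore.put(key, "FAILED");
        }
    }

    public String statusOf(String key) {
        return statusStore.getOrDefault(key, "UNKNOWN");
    }
}

@RestController
class ProcessingController {

    private final ProcessingService service;

    ProcessingController(ProcessingService service) {
        this.service = service;
    }

    @PostMapping("/requests")
    public String submit(@RequestBody String payload) {
        String key = UUID.randomUUID().toString(); // tracking token returned to the client
        service.process(key, payload);             // returns immediately because of @Async
        return key;
    }

    @GetMapping("/requests/{key}/status")
    public String status(@PathVariable String key) {
        return service.statusOf(key);
    }
}

If the service runs on more than one instance, the map would need to be replaced by a shared store (a database table, Redis, etc.) so that any instance can answer the status query.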

Related

How Async/Await handle multiple calls at same API end point from multiple user at same time

I am working on an application where I have used async/await for every endpoint. My question here is how async/await handles multiple requests at the same time on the same API endpoint. For example, if I have an endpoint to save a user record and that endpoint has been hit by two different users at the same time, what will happen? Can somebody explain?
Here is the example code, where I am registering a restaurant using async/await:
[HttpPost]
[Route("RegisterRestaurant")]
public async Task<IActionResult> RegisterRestaurant([FromBody] RegisterRequestDTO registerModel)
{
    var response = await _uow.UserRepository.RegisterRestaurant(
        new DTO.ResponseDTO.GenericResponseDTO<RegisterRequestDTO> { Data = registerModel });
    return Ok(response);
}
Everything is fine with this code. But what will happen if it is hit by multiple users from multiple places at the same time?
This actually has to do with how ASP.NET works, not async/await.
When new requests come in, ASP.NET takes a thread from the thread pool, constructs any necessary instances (e.g., an instance of your controller), and executes the handler for that request. So, each request is independent by default, unless your code explicitly uses static or singleton instances.
async/await does allow threads to be returned to the ASP.NET thread pool more quickly while a request is awaiting I/O, but the core mechanism by which ASP.NET handles requests is unchanged.
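The same caveat applies on the Spring side of this topic: controller and service beans are singletons by default, so any mutable instance field is shared across concurrent requests. A hypothetical sketch of the pitfall (names invented for illustration):

import java.util.concurrent.atomic.AtomicLong;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class HitCounterController {

    private long unsafeCount = 0;                           // shared, unsynchronized: lost updates under load
    private final AtomicLong safeCount = new AtomicLong();  // thread-safe alternative

    @PostMapping("/hits")
    public long hit() {
        unsafeCount++;                      // two simultaneous requests can read the same old value
        return safeCount.incrementAndGet(); // always counts correctly
    }
}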
Whichever user hits this action method first will register their restaurant; meanwhile, the second user's request will wait until the first user's await completes.

rest API return result from callback request of another endpoint

I want to stand up an endpoint /foo which is a synchronous endpoint for clients, but the response depends on a callback /foo_callback being called on the app as a result of the request to the synchronous endpoint.
To elaborate on the workflow:
Flow diagram
I haven't decided on a technology to use, so ideally I would look for a recommendation.
At a high level, what I am thinking of is starting an async thread in the request handler and checking a singleton map for an update to see if the server has responded, but I am wondering if there is a better way.
I don't have control over the client and cannot really use WebSockets or long polling.
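A rough sketch of the approach described above (a shared correlation map that the callback completes), assuming Spring MVC; the endpoint shapes, correlation-id handling, and the 30-second timeout are all illustrative:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import org.springframework.web.bind.annotation.*;

@RestController
class FooController {

    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    @PostMapping("/foo")
    public String foo() throws Exception {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> result = new CompletableFuture<>();
        pending.put(correlationId, result);
        try {
            // forward correlationId to the downstream system here, then block
            // until /foo_callback completes the future (or give up after 30 seconds)
            return result.get(30, TimeUnit.SECONDS);
        } finally {
            pending.remove(correlationId);
        }
    }

    @PostMapping("/foo_callback")
    public void fooCallback(@RequestParam String correlationId, @RequestBody String payload) {
        CompletableFuture<String> result = pending.get(correlationId);
        if (result != null) {
            result.complete(payload);
        }
    }
}

Blocking the request thread like this only scales so far; Spring MVC's DeferredResult (or an async servlet response) lets you hold the client connection open without tying up a thread, while keeping the same correlation-map idea.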

Pact Request That Depends on the Response from A Previous Request

I am using the Pact framework to test some APIs from a service. I have one API that initiates some back-end execution. Let's call it request A; the response returns a unique execution ID. The second API (request B) sends the execution ID returned from request A to pull the execution status. How do I set up the Pact test in this case? The problem is the execution ID, which is generated dynamically. I know a provider can inject some provider state to the consumer, so potentially the execution ID could be injected. But I am not sure how to make the injection from the provider side. It requires access to the response from request A and injecting the execution ID (with the provider state callback, perhaps) for the second request.
You need to have a lot of control over what is happening in your provider for Pact to work for you.
Each interaction is verified individually (and in some frameworks, in a random order), and all state should be cleared between interactions, so you need to use provider states to set up any data that would have been created by the initial request. As for the execution IDs, you could use a different implementation of the ID-generating code that you use only for Pact tests.
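One way to make that last suggestion concrete, as a rough sketch (the interface and class names are invented): hide execution-id generation behind a small abstraction and wire in a fixed implementation only when the provider runs under Pact verification, e.g. via a test profile.

import java.util.UUID;

// Production code depends on this abstraction rather than calling UUID directly.
interface ExecutionIdGenerator {
    String nextId();
}

// Normal implementation used at runtime.
class RandomExecutionIdGenerator implements ExecutionIdGenerator {
    @Override
    public String nextId() {
        return UUID.randomUUID().toString();
    }
}

// Deterministic implementation registered only for Pact provider verification,
// so the execution id expected by the second interaction is predictable.
class FixedExecutionIdGenerator implements ExecutionIdGenerator {
    private final String fixedId;

    FixedExecutionIdGenerator(String fixedId) {
        this.fixedId = fixedId;
    }

    @Override
    public String nextId() {
        return fixedId;
    }
}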

Async communication between spring boot micro services

I am new to Spring Boot and have created 2 microservices.
They need to communicate with each other both synchronously and asynchronously.
For synchronous communication, I can use the RestTemplate.
But how do I do the asynchronous calling?
My requirement for asynchronous is:
Let's say I am querying for something from one microservice. Fetching the queried data will take some time because the query covers a large amount of data.
In this case, I need to save the request into some transaction table and return a response with a transactionId and a callBackAPI. After some time, if I call the callBackAPI with the transactionId, I should be able to get the previously queried data.
Please help me with this.
Thanks.
Two solutions:
Async call from your client:
Spring provides an asynchronous version of the RestTemplate:
AsyncRestTemplate
With this solution your client is asynchronous; you don't need to store the data in a table with the transaction id.
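A minimal sketch of that client-side style (note that AsyncRestTemplate is deprecated since Spring 5 in favour of WebClient; the URL and class name here are placeholders):

import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.client.AsyncRestTemplate;

class AsyncClientExample {

    void fetchAsynchronously() {
        AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate();
        ListenableFuture<ResponseEntity<String>> future =
                asyncRestTemplate.getForEntity("http://other-service/big-query", String.class);

        future.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
            @Override
            public void onSuccess(ResponseEntity<String> response) {
                // handle the data when it eventually arrives
            }

            @Override
            public void onFailure(Throwable ex) {
                // handle the failure
            }
        });
        // the calling thread is free to continue immediately
    }
}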
Make your endpoint asynchronous (if you don't need the response):
Spring lets you create asynchronous methods (services) that you can call from your RestController. With this solution you can do what you described in the question (create and store a transaction id that is returned directly to the client, and start the async job).

Web API 2 - are all REST requests asynchronous?

Do I need to do anything to make all requests asynchronous or are they automatically handled that way?
I ran some tests and it appears that each request comes in on its own thread, but I figure better to ask as I might have tested wrong.
Update: (I have a bad habit of not explaining fully - sorry) Here's my concern. A client browser makes a REST request to my server at http://data.domain/com/employee_database/?query=state:Colorado. That comes in to the appropriate method in the controller. That method queries the database and returns an object, which is then turned into a JSON structure and returned to the calling app.
Now let's say 10,000 clients all make a similar query to the same server. So I have 10,000 requests coming in at once. Will my controller method be called simultaneously in 10,000 distinct threads? Or must the first request return before the second request is called?
I'm not asking about the code in my handler method having asynchronous components. For my case the request becomes a single SQL query so the code has nothing that can be handled asynchronously. And until I get the requested data, I can't return from the method.
No, REST is not async by default; the requests are handled synchronously. However, your web server (IIS) has a setting for the maximum number of threads that can work at the same time, and it maintains a queue of received requests. So a request goes into the queue, and if a thread is available it gets executed; otherwise the request waits in the IIS queue until a thread is available.
I think you should be using async IO operations, such as the database calls in your case. Yes, in Web API every request gets its own thread, but threads can run out if there are many concurrent requests. Threads also use memory, so if your API gets hit by too many requests it may put pressure on your system.
The benefit of using async over sync is that you use your system resources wisely. Instead of blocking the thread while it waits for the database call to complete, as in the sync implementation, async frees the thread to handle more requests or assigns it to whatever other work needs a thread. Once the IO (database) call completes, another thread will take it from there and continue with the rest of the work. Async will also make your API run faster if your IO operations take longer to complete.
To be honest, your question is not very clear. If you are making an HTTP GET using HttpClient, say with the GetAsync method, the request is fired and you can do whatever you want on your thread until the response comes back; that request is asynchronous.

If you are asking about the server side that handles the request (assuming it is ASP.NET Web API), then whether it is asynchronous or not is up to how you implemented your Web API. If your action method does three things, say 1, 2, and 3, one after the other synchronously in blocking mode, the same thread services the whole request.

On the other hand, say #2 above is a call to a web service over HTTP. If you use HttpClient and make that call asynchronously with the async keyword, you can get into a situation where one request is serviced by more than one thread: when you await inside the action method, the thread servicing your request is freed to service some other request, and when the response is available the same or some other thread continues from where it left off. Long, boring answer perhaps, but difficult to explain just by typing. Hope you get some clarity.
UPDATE:
Your action method will execute in parallel in 10,000 threads (ideally). I say ideally because a CLR thread pool with 10,000 threads is not typical, and probably impractical as well. There are physical limits, as well as limits imposed by the framework, but I guess the answer to your question is that the requests will be serviced in parallel. The correct term here is 'parallel', not 'async'.
Whether it is sync or async is your choice; you choose by the way you write your action. If you return a Task and also use async IO under the hood, it is async. In other cases it is synchronous.
Don't feel tempted to slap async on your action and use Task.Run. That is async-over-sync (a known anti-pattern). It must be truly async all the way down to the OS kernel.
No framework can make sync IO automatically async, so it cannot happen under the hood. Async IO is callback-based, which is a fundamental change in the programming model.
This does not answer what you should do of course. That would be a new question.
