Let's imagine there is a server that, when it receives a request with a car model, queries all known car dealers looking for the cheapest offer and responds with the price (using whatever protocol). This action takes a while.
In the classic blocking request/response server model, I do
request = "audi a8"                        // prepare a request, and one line later have the response
response = server.findCheapestCar(request) // takes 20 seconds
I don't want to block my client's main thread for 20 seconds, so I would rather have this executed asynchronously. My understanding of something being asynchronous is that I can pass some sort of object to it and carry on with my work. Once the server is ready with the response, it will notify the object I passed -> the classic callback pattern.
This approach would require the libraries to match - both the client and the server need to know the object. But I want my asynchronous server built on Netty to be able to handle requests from various clients (C++/Python and others).
Where does the asynchrony of Netty come from? What do I need on the client side to benefit from it?
Where does the asynchrony of Netty come from?
Netty adopted the principle of event loops, which you may know from a language like JavaScript. This allows Netty to work fully asynchronously. (For more information about event loops and the basic underlying principle, I would recommend this video about the event loop in JavaScript.)
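As a toy illustration of the principle only (this is not Netty's actual implementation): an event loop is essentially a single thread draining a queue of tasks, so callers hand off work and continue immediately instead of blocking.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy event loop: one thread runs queued tasks one at a time, in order.
public class ToyEventLoop {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    public void start() {
        Thread loop = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    tasks.take().run();   // process the next event
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        loop.setDaemon(true);
        loop.start();
    }

    // Callers enqueue work and return immediately -- the essence of "asynchronous".
    public void submit(Runnable task) {
        tasks.add(task);
    }
}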
What do I need on the client side to benefit from the asynchrony?
One common pattern is to correlate requests and responses with an id:
Client sends a request (containing the payload and a request id = a client-side incrementing integer)
Server processes the request, for 50 seconds say
Server sends a response (containing the payload and the same request id the client sent in its request)
Client receives the response and looks up the request id (if the client is able to find the request id and its underlying callback, it will invoke it)
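As a minimal sketch of the client-side bookkeeping those steps imply (the class and method names here are illustrative, and the wire protocol is left abstract):

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Correlates responses to requests via the incrementing request id.
public class PendingRequests {
    private final AtomicInteger nextId = new AtomicInteger();
    private final Map<Integer, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when sending: register a callback slot, return the id to put on the wire.
    public int register(CompletableFuture<String> callback) {
        int id = nextId.incrementAndGet();
        pending.put(id, callback);
        return id;
    }

    // Called when a response arrives: look up the id and invoke its callback.
    public void complete(int requestId, String payload) {
        CompletableFuture<String> callback = pending.remove(requestId);
        if (callback != null) {
            callback.complete(payload);
        }
        // else: unknown or already-completed id -- ignore or log
    }
}

Because only the integer id travels on the wire, the server never needs to know anything about the client's callback object, which is what lets C++/Python/Java clients all talk to the same asynchronous server.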
Hope that helped
Related
I want to stand up an endpoint /foo which is synchronous for clients, but whose response depends on a callback /foo_callback being called on the app as a result of the request to the synchronous endpoint.
To elaborate on the workflow:
[flow diagram]
I haven't decided on a technology to use, so ideally I'm looking for a recommendation.
At a high level, what I am thinking of is starting an async thread in the request handler and checking a singleton map for an update to see whether the server has responded, but I am wondering if there is a better way.
I don't have control over the client and cannot really use WebSockets or long polling.
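Since the technology is still open, here is a framework-agnostic sketch of the idea described above: rather than polling a singleton map, the /foo handler can park on a CompletableFuture registered under a correlation id, and the /foo_callback handler completes it. All names below are hypothetical:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class CallbackBridge {
    private final Map<String, CompletableFuture<String>> waiters = new ConcurrentHashMap<>();

    // Invoked by the /foo handler: trigger the downstream work, then block until
    // /foo_callback arrives or the timeout expires. The client sees a normal
    // synchronous response either way.
    public String handleFoo(String correlationId) throws Exception {
        CompletableFuture<String> result = new CompletableFuture<>();
        waiters.put(correlationId, result);
        try {
            // sendDownstreamRequest(correlationId);  // hypothetical: whatever triggers the callback
            return result.get(30, TimeUnit.SECONDS);
        } finally {
            waiters.remove(correlationId);
        }
    }

    // Invoked by the /foo_callback handler when the callback request arrives.
    public void handleFooCallback(String correlationId, String payload) {
        CompletableFuture<String> waiter = waiters.get(correlationId);
        if (waiter != null) {
            waiter.complete(payload);
        }
    }
}

Compared with polling a map, this wakes the blocked request the moment the callback lands, and the timeout gives you a natural place to return an error response.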
For the last week I've been doing a lot of research on the microservice architecture pattern and its requirements and constraints.
The majority of resources suggest using event buses/message brokers (asynchronous communication) to communicate between microservices rather than REST API endpoints.
Synchronous calls would result in higher response times and may cause cascading failures if a particular microservice in the chain fails.
Question:
Let's say the user requests a particular functionality or page on a website/mobile app, which then needs to fetch data from multiple microservices and use their respective functionalities to produce the desired outcome. But to achieve the desired outcome (the response to the client), ALL the services need to do their work before the backend sends the response back to the client (website/mobile app).
But if we use asynchronous service requests - meaning the calling service doesn't wait for a response and would send its own response back to the client without getting the data from the asynchronously called service - the outcome might not be complete if an asynchronously called service doesn't respond in time (the service is unavailable or there are network issues). This would mean that the backend sends an incomplete response back to the client, which is not acceptable.
How can I deal with this issue, or did I get the concept wrong?
I'm thankful for every answer
If it's absolutely essential that a request gets a full response (i.e. that the request is synchronous), that's a strong argument in favor of the service stitching together synchronous requests and responses (and potentially needing to handle rollback in cases of partial success etc.).
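As a sketch of that stitching approach (the service calls below are illustrative stand-ins, not a real API): the backend can stay synchronous toward the client while still fanning out the downstream requests in parallel and bounding the total wait.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Aggregator {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    public String buildResponse() throws Exception {
        // Fan out to the dependent services in parallel rather than serially.
        CompletableFuture<String> user   = CompletableFuture.supplyAsync(this::fetchUser, pool);
        CompletableFuture<String> orders = CompletableFuture.supplyAsync(this::fetchOrders, pool);
        CompletableFuture<String> prices = CompletableFuture.supplyAsync(this::fetchPrices, pool);

        // The response needs ALL parts, so wait for every future, but bound the
        // total wait so one slow service cannot hang the client indefinitely.
        CompletableFuture.allOf(user, orders, prices).get(2, TimeUnit.SECONDS);
        return user.join() + orders.join() + prices.join();
    }

    // Hypothetical downstream calls -- stand-ins for real REST clients.
    private String fetchUser()   { return "user";   }
    private String fetchOrders() { return "orders"; }
    private String fetchPrices() { return "prices"; }
}

The bounded wait keeps one slow dependency from stalling the client, which addresses part of the response-time and cascading-failure concern even when the call remains synchronous end to end.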
Many requests don't fall into that pattern, though. For instance, a response might well be interpretable as "we've received your request and the operation will be performed. You can track the progress of your operation by using this request ID"; such an approach fits well with asynchronous messaging.
How can I implement an asynchronous callback scenario in Apigee?
For example, I need to call a host, and the host may take some time to produce a response. Once the response is ready, it needs to be delivered to the caller/client.
Thanks in Advance
Regards
I can't claim that it is a standard way of doing this, but here is a design:
Assumption: the target host must support registering a callback URL.
When the client calls the Apigee proxy, the Apigee proxy in the middle can generate a unique callback URL and send it to the target as a parameter when making the API request. In the meantime it would have to block the client (and start polling an internal store).
The callback URL would itself be a proxy in Apigee that receives the response from the target side and then updates an entry in the Apigee persistence store, which is being polled by the first proxy.
If the callback happens within, say, x seconds, then the Apigee proxy can send the response back to the client. If it does not happen within that time, it can send back some error.
To implement this, you can use a Key Value Map or a Caching policy in Apigee for the transient persistence store, and for blocking the client and polling the persistence store use Java or JavaScript policies.
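As a rough sketch of the block-and-poll step (the TransientStore interface below is a hypothetical stand-in for the Key Value Map/cache policies, not Apigee's real policy API):

import java.util.concurrent.TimeoutException;

// Hypothetical store interface standing in for Apigee's KVM/cache.
interface TransientStore {
    String get(String key);   // returns null until the callback proxy writes the response
}

public class CallbackPoller {
    // Poll the store until the callback response appears or the deadline passes.
    public static String awaitResponse(TransientStore store, String key, long timeoutMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            String response = store.get(key);
            if (response != null) {
                return response;        // callback arrived within x seconds
            }
            Thread.sleep(200);          // back off between polls
        }
        throw new TimeoutException("No callback within " + timeoutMillis + " ms");
    }
}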
Take a look at https://github.com/apigee/api-platform-samples/tree/master/sample-proxies/async-callout and see if that helps. This sample makes the requests to the target, stores the response handles in the JS "session", goes away to do other things, and then retrieves the handles from the "session" and checks the responses.
This is the issue I'm encountering, which is design- and implementation-related:
I have a REST web service that accepts POST requests. Nothing special about it. It currently responds synchronously.
However, this web service is going to initiate a background process that may take some long time.
I do not want this service to respond 30 minutes later.
Instead, it should immediately return an ack response to the client, and nothing more (even after 30 minutes, there will be no more information to send).
How do I implement such behavior with Jersey?
I read the page https://jersey.java.net/nonav/documentation/2.0/async.html#d0e6914.
Though it was interesting reading, I did not find a way to send only an ACK-typed response (something like an HTTP 200 code).
Maybe I am confusing asynchronous processing with the behavior I want to implement.
I just understood that I could create a new Thread within my @POST method to handle the background process and immediately return the ACK response.
But does this newly created thread live on after the response has been sent back to the client?
How would you implement this WS ?
I hope you will help me clarifying this point.
I think the Jersey 2 Asynchronous Server API you linked would still hold the client connection until the processing completes. The asynchronous processing is really internal to Jersey and does not affect the client experience.
If you want to return an ACK, you can use a regular Jersey method, delegate the work to another thread and then return immediately. I'd recommend HTTP 202 for this use case.
You may create a Thread to do so just like in the Jersey 2 example and it would survive the execution of the Jersey resource method invocation:
@POST
public Response asyncPost(String data) {
    new Thread(...).start();  // hand the long-running work to another thread (fill in the Runnable)
    return Response.status(Response.Status.ACCEPTED).build();  // immediate 202 ACK
}
This being said, creating threads is generally not recommended within app servers.
If you're using EE7, I'd recommend you look at JSR-236 http://docs.oracle.com/javaee/7/api/javax/enterprise/concurrent/package-summary.html
If you're using EE6, you can consider sending a message to a queue to be processed by a Message-Driven Beans (MDB) in the background.
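For the EE7 route, here is a sketch of the same endpoint using a container-managed executor instead of a raw Thread (the resource path and method names are illustrative):

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/jobs")
public class JobResource {

    // Container-managed thread pool (JSR-236); safer than new Thread() in an app server.
    @Resource
    private ManagedExecutorService executor;

    @POST
    public Response submit(final String data) {
        executor.submit(() -> process(data));            // background work outlives the request
        return Response.status(Response.Status.ACCEPTED) // immediate 202 ACK
                       .build();
    }

    private void process(String data) {
        // long-running background work goes here
    }
}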
How can I write an HTTP server in Tornado that supports persistent connections?
I mean one that is able to receive many requests and answer them without closing the connection.
How does this actually work asynchronously?
I just want to know how to write a handler that handles a persistent connection.
How would it actually work?
I have a handler like this:
class MainHandler(RequestHandler):
    count = 0

    @asynchronous
    def post(self):
        # get the content type from the request headers
        content_type = self.request.headers.get('Content-Type')
        if content_type not in ACCEPTED_CONTENT:
            raise HTTPError(403, 'Incorrect content type')
        text = self.request.body
        self.count += 1
        command = CommandObject(text, self.count,
                                callback=self.async_callback(self.on_response))
        command.execute()

    def on_response(self, response):
        if response.error:
            raise HTTPError(500)
        body = response.body
        self.write(body)
        self.flush()
execute calls the callback when it finishes.
Is my assumption right that, with things set up that way, post will be called many times, and that for one connection, count will increase with each HTTP request from the client? But that each connection will have its own separate count value?
I don't think that your assumption is correct. My understanding of the way the Tornado server works is that each request from the client will produce a new RequestHandler. The purpose of the @tornado.web.asynchronous decorator is to prevent the server from automatically closing the connection when your handler function (post, get, etc.) returns. But at the end of the day, I think there is just one response for each request.
I don't believe additional requests from the client will go to the same instance of the RequestHandler class. Instead, my understanding is that Tornado is set up to allow for the long-polling paradigm. Here is an example of the flow of communications:
Client makes a POST request to the Tornado server
Tornado server checks to see if a response is ready, if not you could add the RequestHandler to some kind of stack or queue (depending on your application architecture)
Server comes up with a response (maybe another user added a message to the queue that needs to be distributed to open connections, etc.) and distributes the response back to the RequestHandler and then calls the finish() function to close the connection
Client makes another POST request to repeat the process
I think if you want to implement true persistent connections you'll want to look into tornado.websocket (http://www.tornadoweb.org/documentation/websocket.html). I haven't experimented with that module yet so I'm afraid I can't give any input there.
Best of luck!
The Tornado web framework actually does come with its own server implementation, which supports persistent connections, so there should be no need to write your own server. There is a section in the documentation on how to use it in production (behind nginx).
From the source of the tornado.web module, you can see that a new handler is always instantiated; I don't think there is any way you can have handlers reused.