For the last week I've been doing a lot of research on the microservice architecture pattern and its requirements and constraints.
The majority of resources suggest using event buses/message brokers (asynchronous communication) to communicate between microservices rather than using REST API endpoints.
Synchronous calls result in higher response times and can cause cascading failures if a particular microservice in the chain fails.
Question:
Let's say the user requests a particular functionality or page on a website/mobile app which then needs to fetch data from multiple microservices and use their respective functionalities to provide the desired outcome. But to achieve the desired outcome (the response to the client), ALL the services need to do their work before the backend sends the response back to the client (website/mobile app).
But if we use asynchronous service requests - which means the calling service doesn't wait for a response and would send its own response back to the client without getting the data from the asynchronously called service - the outcome might not be complete if an asynchronously called service doesn't respond in time (service is unavailable or network issues). This would mean that the backend sends an incomplete response back to the client, which is not acceptable.
How can I deal with this issue or did I get the concept wrong?
I'm thankful for every answer
If it's absolutely essential that a request gets a full response (i.e. that the request is synchronous), that's a strong argument in favor of the service stitching together synchronous requests and responses (and potentially needing to handle rollback in cases of partial success etc.).
Many requests don't fall into that pattern, though. For instance, a response might well be interpretable as "we've received your request and the operation will be performed. You can track the progress of your operation by using this request ID"; such an approach fits well with asynchronous messaging.
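For illustration only, here is a minimal sketch of that "we've received your request, track it by this ID" style of response, using Spring MVC. The OrderQueue interface, endpoint paths, and header value are my own assumptions, not anything from the original post.

```java
import java.util.UUID;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical wrapper around whatever broker client you use.
interface OrderQueue {
    void publish(String requestId, String payload);
}

@RestController
class OrderController {
    private final OrderQueue queue;

    OrderController(OrderQueue queue) {
        this.queue = queue;
    }

    @PostMapping("/orders")
    ResponseEntity<String> submit(@RequestBody String order) {
        String requestId = UUID.randomUUID().toString();
        queue.publish(requestId, order);   // fire-and-forget: hand the work to the broker
        // 202 Accepted: the operation will be performed; the client can track it via the returned id.
        return ResponseEntity.accepted()
                .header("Location", "/orders/status/" + requestId)
                .body(requestId);
    }
}
```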
Related
I am stuck on a problem while using Kafka in a microservice architecture. I am not able to understand how a microservice handling HTTP requests will be able to send a response to the user. I want to take data from an HTTP request and publish it to a topic named A; then another validator service will validate it and publish it to another topic named B. I want to build the HTTP response from the data consumed from topic B.
In my experience, one option is to respond immediately with 202 Accepted, or embed a blocking validator library into your API so you can properly return a 400 Bad Request.
Future GET calls are then required to read eventually consistent data that might come back from any consumer. E.g. if the consumer process is a Kafka Connect sink that writes to some database, or a Kafka Streams/KSQL table, future API queries will return data from there. Your original client may need to make periodic HTTP calls until that data is available.
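As a rough sketch of that flow (not a definitive implementation), assuming Spring's KafkaTemplate and a read store that a downstream consumer of topic B populates. The topic names come from the question; the endpoints, the in-memory map standing in for the read store, and the ids are my assumptions.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ValidationController {
    private final KafkaTemplate<String, String> kafka;
    // Stands in for the database/table that a consumer of topic B writes to.
    private final Map<String, String> readStore = new ConcurrentHashMap<>();

    ValidationController(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    @PostMapping("/data")
    ResponseEntity<String> accept(@RequestBody String payload) {
        String id = UUID.randomUUID().toString();
        kafka.send("A", id, payload);               // publish to topic A for the validator service
        return ResponseEntity.accepted().body(id);  // 202: result not available yet
    }

    @GetMapping("/data/{id}")
    ResponseEntity<String> result(@PathVariable String id) {
        String processed = readStore.get(id);       // eventually consistent read
        return processed == null
                ? ResponseEntity.notFound().build() // client polls again later
                : ResponseEntity.ok(processed);
    }
}
```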
Let's imagine I have a REST API with an endpoint /api/status. When this endpoint is accessed, the API sends a message to a message queue requesting the status of some other service.
Then in reply, the service sends a message with its status to a queue on which the REST API listens. So it's a single message to request the status and a single reply message.
My question is: Is there a design pattern for converting the asynchronous nature of this approach to a synchronous one in the API? In other words: Is there a pattern that the GetStatus(...) method in the pseudo code below can implement to synchronize the getting of the status with communication over multiple message queues or even pub/sub systems?
var statusRequestMsg = "get_status";
var statusResponseMsg = GetStatus(statusRequestMsg);
I know how to solve this in code, but I was curious whether there is a design pattern that introduces a common approach.
I googled a lot in search of that, but the only thing that I found was a very technical explanation of an approach to do that in this article:
A Communication Model to Integrate the Request-Response and the Publish-Subscribe Paradigms into Ubiquitous Systems
Please note that I understand that this is not the perfect API design and that there are better ways to implement the example. I've created the above example to help me illustrate my question. Also I understand that some AMQP impl. (like RabbitMQ) provide a way to synchronize MQ communication to request/response style.
Thanks in advance.
Microsoft calls it the Asynchronous Request-Reply pattern and uses a solution that polls over HTTP:
https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply
I imagine it should be possible to avoid polling by subscribing to updates for a key. For example, it's possible to subscribe to updates to a single key in Redis with keyspace notifications. (The page mentions two caveats: that "all the events delivered during the time the client [is] disconnected are lost" and that "events' notifications are not broadcasted to all nodes".)
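A minimal sketch of that idea (my own, not from the linked page), using the Jedis client: listen for keyspace notifications on keys matching an assumed pattern result:* instead of polling. Note that keyspace notifications must be enabled, and that a subscribed connection cannot issue GET, so reading the value needs a second connection.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class ResultListener {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Keyspace notifications are off by default; enable them (can also be set in redis.conf).
            jedis.configSet("notify-keyspace-events", "KEA");

            // Blocks and fires onPMessage whenever a key matching result:* is modified.
            jedis.psubscribe(new JedisPubSub() {
                @Override
                public void onPMessage(String pattern, String channel, String event) {
                    // channel looks like "__keyspace@0__:result:<correlationId>", event e.g. "set"
                    if ("set".equals(event)) {
                        String key = channel.substring(channel.indexOf(':') + 1);
                        System.out.println("Result ready for " + key);
                        // fetch the value on a separate (non-subscribed) connection
                    }
                }
            }, "__keyspace@0__:result:*");
        }
    }
}
```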
Have you considered something like this:
1) Request comes in
2) Create a correlation id
3) Send the correlation id to the other service as part of the message sent via the queue
4) Begin polling for that id in some data store (say Redis)
5) Time elapses...
6) The other service sends the correlation id back to the originating service, along with the result of the request, in a message sent via the queue
7) A worker reading that queue sets the value of the correlation id in the data store to the result of the asynchronous request
8) Polling discovers the result and returns it as the response to the request
Would that work?
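Just to make the polling side of those steps concrete, here is a rough sketch assuming Redis via the Jedis client; sendToQueue(...), the key prefix result: and the timeouts are placeholders of my own, not part of the answer above.

```java
import java.util.UUID;
import redis.clients.jedis.Jedis;

public class CorrelatedRequest {
    public String callAndWait(String payload) throws InterruptedException {
        String correlationId = UUID.randomUUID().toString();
        sendToQueue(payload, correlationId);          // step 3: send the message with the correlation id

        try (Jedis jedis = new Jedis("localhost", 6379)) {
            long deadline = System.currentTimeMillis() + 10_000;
            while (System.currentTimeMillis() < deadline) {
                // step 7 happens elsewhere: a worker writes the result under this key
                String result = jedis.get("result:" + correlationId);
                if (result != null) {
                    return result;                    // step 8: polling discovered the result
                }
                Thread.sleep(200);                    // back off between polls
            }
        }
        throw new IllegalStateException("No result for " + correlationId + " within the timeout");
    }

    private void sendToQueue(String payload, String correlationId) {
        // placeholder: publish to your queue with the correlation id as a message property
    }
}
```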
I've been reading about microservices, and have found a lot of interesting advice in Jonas Bonér's Reactive Microservices Architecture (available to download free here). He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
I've been trying to think how asynchronous response messages sent back from microservices should best be routed back to the waiting client. To me the most obvious way would be to record something like a request id in all messages sent when processing the request, and then copy this id into response messages sent by the services. The public API would block when processing the request, collecting all expected response messages which have the matching id, before finally sending the response to the client.
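To illustrate what I mean (just a sketch, with names of my own choosing): the API registers a future keyed by the request id, and the message listener completes it when the matching response arrives; for multiple expected responses you would aggregate them before completing.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class ResponseCorrelator {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called by the synchronous API: block until the matching response arrives (or time out).
    public String awaitResponse(String requestId, long timeoutMs) throws Exception {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(requestId, future);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(requestId);
        }
    }

    // Called by the asynchronous message listener when a response carrying that id comes back.
    public void onResponse(String requestId, String body) {
        CompletableFuture<String> future = pending.get(requestId);
        if (future != null) {
            future.complete(body);
        }
    }
}
```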
Am I on the right lines here? Are there better approaches? Do any frameworks take the work of doing this routing away from the developer (I'm looking at Spring Cloud Streams etc, but others would be interesting too)?
He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
When dealing with client-to-backend communication you have a couple of types of operations, and they should be handled separately (look at the idea of CQS):
State-changing operations: these should be one-way, fire-and-forget, using messaging (it can be the client calling an HTTP API and the API dispatching the message).
Read operations: synchronous (request/response) operations (using an HTTP API); this does not involve any messaging whatsoever.
Does that make sense?
I am working on prototyping a new web service for my company and we are considering Apache Camel as our integration framework. Here is a quick run-down of the high-level architecture:
- IBM WebSphere MQ as the queuing solution
1) we receive http request
2) asynchronously persist this request
3a) do some processing on the request
3b) send to another tier for further processing
4) asynchronously update the request record in DB
5) respond to caller
What I want to do is:
When an HTTP request comes in, put it on a queue to be processed and wait n seconds. If the web handler doesn't get a response in n seconds, reply to the caller with a custom message.
Once the request is on the processing queue, a Camel route listening on that queue picks it up. When it pulls a message from the queue, it puts a copy of the request on a different queue to be persisted asynchronously, does some processing on the request, then sends it to another queue for further processing and waits for a response. Then it puts it back on the persist queue to be asynchronously updated.
Then it responds to the web listener, and the web listener responds to the web caller.
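For what it's worth, here is a minimal Camel sketch of that flow, assuming a JMS component named "jms" backed by the WebSphere MQ connection factory; queue names, endpoint URIs, and timeouts are illustrative only.

```java
import org.apache.camel.ExchangeTimedOutException;
import org.apache.camel.builder.RouteBuilder;

public class RequestFlowRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // If the backend does not reply within the timeout, answer with a custom message instead.
        onException(ExchangeTimedOutException.class)
            .handled(true)
            .setBody(constant("Your request was accepted but is still being processed."));

        // HTTP request in -> put it on the processing queue and wait up to n seconds for a reply.
        from("jetty:http://0.0.0.0:8080/service")
            .to("jms:queue:PROCESS.REQUEST?exchangePattern=InOut&requestTimeout=5000");

        // Processing tier: persist a copy asynchronously, process, hand off to the next tier,
        // then asynchronously update the record. The route's final body becomes the reply.
        from("jms:queue:PROCESS.REQUEST")
            .wireTap("jms:queue:PERSIST")
            .process(exchange -> { /* do some processing on the request */ })
            .to("jms:queue:TIER2?exchangePattern=InOut&requestTimeout=5000")
            .wireTap("jms:queue:PERSIST");
    }
}
```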
I am reading everything I can about Apache Camel and there is a lot of information out there. I might be on a little bit of information overload, and any help on the following concerns would be greatly appreciated:
1) If the web listeners use an InOut exchange (with the first processing tier) without a replyTo queue defined, Camel will create a temporary queue for the response. What happens if this request times out? I understand I can set a requestTimeout on the exchange and, if it times out, catch that exception and set a custom message. But will that temporary queue be cleaned up? Or will temporary queues build up over time as requests time out?
2) When it comes to scaling the processing tiers (adding more instances of those same routes on different machines), is it customary that, if the instance that picks up the response (using a fixed reply-to queue) is different from the instance that picked up the request, all the information about the original request is inside the message, so there is no need to share data across instances (unless of course there is data that is shared, like aggregates and such)?
Any other tips and tricks when building a system like this would be very helpful.
Thanks!
I would say this solution is too complicated, and there are too many areas that are hard both in terms of maintenance and complexity. There are too many steps mixing async and sync communication.
Why not simplify the solution to the following steps:
A synchronous HTTP request comes in
Put a message on MQ with a reply-to header
The message is picked up and sent to the backend
If a reply is not received within a given time, the transaction is terminated.
The reply-to queue is removed.
The requestor is notified.
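As a sketch of those steps with Spring's JmsTemplate (queue names, timeout, and the notification message are my assumptions): sendAndReceive creates a temporary reply-to destination, sends the message, and blocks for the reply up to the configured receive timeout.

```java
import javax.jms.Message;
import javax.jms.TextMessage;
import org.springframework.jms.core.JmsTemplate;

public class BackendClient {
    private final JmsTemplate jmsTemplate;

    public BackendClient(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
        // If no reply arrives within 5 seconds, receive returns null and we can notify the requestor.
        this.jmsTemplate.setReceiveTimeout(5000);
    }

    public String call(String payload) throws Exception {
        // Sends with a temporary reply-to destination and waits for the correlated reply.
        Message reply = jmsTemplate.sendAndReceive("PROCESS.REQUEST",
                session -> session.createTextMessage(payload));
        if (reply == null) {
            return "Request timed out; please try again later.";  // requestor is notified
        }
        return ((TextMessage) reply).getText();
    }
}
```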
I've recently been investigating Spring Integration and AMQP (RabbitMQ), as I need to make two applications (middleware and backend) communicate with an async approach, so that the middleware doesn't block when receiving client calls.
I first followed the simpler approach of implementing this in a synchronous way, meaning that I have a gateway interface and an outbound gateway (with requiresReply=true) on the middleware, and then an inbound gateway and a service activator on the backend. This initial approach works well (I've used Spring Integration XML config).
Now I need clarification on the approach to follow to make this work in an async way.
By looking at RabbitMQ Tutorial 6, it's better to work with a callback queue and a correlationId, and from what I understood, this would be similar to calling Spring RabbitTemplate's convertAndSend() and then receive(), instead of convertSendAndReceive() (which blocks until the response is received).
I've checked the Spring Integration docs, where it says I need to change the gateway interface on the middleware so that it returns a Future or ListenableFuture.
Async Gateway
Once that's done, I also looked at the documentation for the outbound gateway, where it says that it can work together with the RabbitTemplate to manage the correlationID and replyTo message attributes.
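For context, this is roughly what I understand the async gateway to look like (a sketch only; the channel name and method are mine): the gateway method returns a Future, which Spring Integration completes when the correlated reply arrives, so the caller can do other work and later call get() with a timeout.

```java
import java.util.concurrent.Future;
import org.springframework.integration.annotation.MessagingGateway;

@MessagingGateway(defaultRequestChannel = "toBackendChannel")
public interface BackendGateway {
    // Returning Future makes the gateway asynchronous; the framework fulfils it with the reply.
    Future<String> process(String request);
}
```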
My questions are:
In order to make this work with an async approach, should I keep working with outbound/inbound gateways, instead of outbound/inbound message converters?
In case of following the outbound/inbound message converters approach (which sounds to me similar to what the RabbitMQ tutorial shows), how do I associate the Future on the gateway interface with the result coming back from the inbound channel adapter?
To be honest, you don't provide the original business requirement. It may be that there is really no reason to deal with this async hand-off, because you have a #Gateway as an entry point which is thread-free, and even if it is blocked waiting for the reply it doesn't impact other threads that may perform a similar sendAndReceive operation. In most cases it is really enough to do everything within the same requestor thread and not lose performance by shifting to a shared ThreadPoolExecutor.
Right, the Future allows you to free the caller a bit, so it is ready to accept new requests on the same thread.
Since it is a MessagingGateway and you want to have a reply anyway, there is a hook associated with the request: the TemporaryReplyChannel header. That's why the <outbound-gateway> works properly: it places its blocking reply on that channel for the gateway to return (or for FutureTask#set()).
I'd say we can achieve the same TemporaryReplyChannel gain with your async reply requirement.
You should use an inbound/outbound channel adapter pair.
Before sending the message to the <int-amqp:outbound-channel-adapter>, you should apply <header-channels-to-string> in a <header-enricher>.
The server side may stay the same: <int-amqp:inbound-gateway>.
You should use a fixed replyQueue as a header for the messages sent through the <int-amqp:outbound-channel-adapter>.
The <int-amqp:inbound-channel-adapter> should be configured for that fixed replyQueue.
Both the <int-amqp:outbound-channel-adapter> on the client side and the <int-amqp:inbound-gateway> must be configured with mapped-request-headers="*" to allow the reply-channel header to be propagated to the server and vice versa.
The <int-amqp:inbound-channel-adapter> on the client side will just send the reply to the reply-channel, as it is done for the <int-amqp:outbound-gateway>.
You may need to take care of the correlationId manually, since the <int-amqp:inbound-gateway> may require it to produce a reply properly.
Well, something like that...
HTH
Feel free to ask more questions. Or correct me if I misunderstood your question.