Need to validate approach: Spring Integration + AMQP + asynchronous communication

I've recently been investigating Spring Integration and AMQP (RabbitMQ), as I need two applications (a middleware and a backend) to communicate asynchronously, so that the middleware doesn't block when receiving client calls.
I first followed the simpler approach of implementing this synchronously, meaning that I have a gateway interface and an outbound gateway (with requiresReply=true) on the middleware, and then an inbound gateway and a service activator on the backend. This initial approach works well (I've used Spring Integration XML config).
Now I need clarification on the approach to follow to make this work in an async way.
Looking at RabbitMQ Tutorial 6, it seems better to work with a callback queue and a correlationId; from what I understood, this would be similar to calling Spring RabbitTemplate's convertAndSend() and then receive(), instead of convertSendAndReceive() (which blocks until the response is received).
I've checked the Spring Integration docs, which say I need to change the gateway interface on the middleware so that it returns a Future or ListenableFuture:
Async Gateway
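For reference, the change would look roughly like this on the middleware side (a minimal sketch; the interface and method names are placeholders, not my actual code):

import java.util.concurrent.Future;

// Same contract as the synchronous gateway, but the return type is wrapped in a Future,
// so the calling thread gets control back immediately and can collect the reply later.
public interface BackendGateway {

    Future<String> send(String request);
}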
Once that's done, I also looked at the documentation for the outbound gateway, which says it can work together with the RabbitTemplate to manage the correlationId and replyTo message attributes.
My questions are:
In order to make this work asynchronously, should I keep working with outbound/inbound gateways, or should I switch to outbound/inbound channel adapters?
If I follow the outbound/inbound channel adapter approach (which sounds similar to what the RabbitMQ tutorial shows), how do I associate the Future on the gateway interface with the result coming back through the inbound channel adapter?

To be honest, you don't describe the original business requirement. There may really be no reason to deal with this async hand-off, because you have a messaging gateway as an entry point, and even if one call blocks waiting for its reply, it doesn't impact other threads that may perform similar sendAndReceive operations. In most cases it is enough to do everything within the same requestor thread and not lose performance by shifting work to a shared ThreadPoolExecutor.
Right, returning a Future frees the caller a bit, so it is ready to accept new requests on the same thread.
Since it is a MessagingGateway and you want a reply anyway, there is a hook associated with the request: the TemporaryReplyChannel header. That's why the <outbound-gateway> works properly: it places its reply on that channel for the gateway to return (or for FutureTask#set()).
I'd say we can achieve the same TemporaryReplyChannel benefit with your async reply requirement.
You should use an inbound/outbound channel adapter pair.
Before sending the message to the <int-amqp:outbound-channel-adapter>, apply <header-channels-to-string> via a <header-enricher>.
The server side may stay the same - an <int-amqp:inbound-gateway>.
You should set a fixed replyQueue as a header on the messages you send through the <int-amqp:outbound-channel-adapter>.
The <int-amqp:inbound-channel-adapter> on the client side should be configured for that fixed replyQueue.
Both the <int-amqp:outbound-channel-adapter> on the client side and the <int-amqp:inbound-gateway> must be configured with mapped-request-headers="*" to propagate that reply-channel header to the server and vice versa.
The <int-amqp:inbound-channel-adapter> on the client side will just send the reply to the reply-channel, exactly as happens for the <int-amqp:outbound-gateway>.
You may need to take care of the correlationId manually, since the <int-amqp:inbound-gateway> may require it to produce the reply properly.
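Something like this on the client side, as a rough (untested) sketch - the channel, queue and bean names here are placeholders, not your actual config:

<int:gateway id="backendGateway"
             service-interface="com.example.BackendGateway"
             default-request-channel="toBackendChannel"/>

<!-- Convert the replyChannel header to a String and set the fixed reply queue as amqp_replyTo -->
<int:header-enricher input-channel="toBackendChannel" output-channel="amqpOutChannel">
    <int:header name="amqp_replyTo" value="backend.replies"/>
    <int:header-channels-to-string/>
</int:header-enricher>

<!-- Default exchange: the routing key is the name of the request queue on the server side -->
<int-amqp:outbound-channel-adapter channel="amqpOutChannel"
                                   routing-key="backend.requests"
                                   amqp-template="amqpTemplate"
                                   mapped-request-headers="*"/>

<!-- Listens on the fixed reply queue and hands replies to the bridge below -->
<int-amqp:inbound-channel-adapter channel="repliesChannel"
                                  queue-names="backend.replies"
                                  connection-factory="connectionFactory"
                                  mapped-request-headers="*"/>

<!-- A bridge with no output-channel sends the message to its replyChannel header,
     which is how the reply finds its way back to the gateway's Future -->
<int:bridge input-channel="repliesChannel"/>

<int:channel id="toBackendChannel"/>
<int:channel id="amqpOutChannel"/>
<int:channel id="repliesChannel"/>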
Well, something like that...
HTH
Feel free to ask more questions. Or correct me if I misunderstood your question.

Related

Asynchronous communication between Microservices

For the last week I've been doing a lot of research on the microservice architecture pattern and its requirements and constraints.
The majority of resources suggest using event buses/message brokers (asynchronous communication) to communicate between microservices rather than REST API endpoints.
Synchronous calls result in higher response times and may cause cascading failures if a particular microservice in the chain fails.
Question:
Let's say the user requests a particular functionality or page on a website/mobile app, which then needs to fetch data from multiple microservices and use their respective functionalities to provide the desired outcome. To achieve that outcome (the response to the client), ALL the services need to do their work before the backend sends the response back to the client (website/mobile app).
But if we use asynchronous service requests - meaning the calling service doesn't wait for a response and would send its own response back to the client without getting the data from the asynchronously called service - the outcome might not be complete if an asynchronously called service doesn't respond in time (service unavailable or network issues). This would mean that the backend sends an incomplete response back to the client, which is not acceptable.
How can I deal with this issue, or did I get the concept wrong?
I'm thankful for every answer.
If it's absolutely essential that a request gets a full response (i.e. that the request is synchronous), that's a strong argument in favor of the service stitching together synchronous requests and responses (and potentially needing to handle rollback in cases of partial success etc.).
Many requests don't fall into that pattern, though. For instance, a response might well be interpretable as "we've received your request and the operation will be performed. You can track the progress of your operation by using this request ID"; such an approach fits well with asynchronous messaging.
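As a rough sketch of that second style (Spring MVC here just for illustration; OrderPublisher and the paths are made-up names): the endpoint records the request, hands it to the broker, and immediately returns an id the client can use to track progress.

import java.util.UUID;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Made-up abstraction standing in for whatever publishes the request onto the message broker.
interface OrderPublisher {
    void publish(String requestId, String payload);
}

@RestController
public class OrderController {

    private final OrderPublisher publisher;

    public OrderController(OrderPublisher publisher) {
        this.publisher = publisher;
    }

    @PostMapping("/orders")
    public ResponseEntity<String> submitOrder(@RequestBody String order) {
        String requestId = UUID.randomUUID().toString();
        publisher.publish(requestId, order); // fire-and-forget onto the message broker
        // 202 Accepted: "we've received your request"; progress can be tracked via the id
        return ResponseEntity.accepted()
                .header("Location", "/orders/" + requestId)
                .body(requestId);
    }
}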

Is there a pattern for synchronizing message queue communication to request/response manually?

Let's imagine I have a REST API with an endpoint /api/status. When this endpoint is accessed, the API sends a message to a message queue requesting the status of some other service.
Then in reply, the service sends a message with its status to a queue on which the REST API listens, so it's a single message to request the status and a single reply message.
My question is: is there a design pattern for converting the asynchronous nature of this approach to a synchronous one in the API? In other words: is there a pattern that the GetStatus(...) method in the pseudo code below can implement to synchronize getting the status with communication over multiple message queues or even pub/sub systems?
var statusRequestMsg = "get_status";
var statusResponseMsg = GetStatus(statusRequestMsg);
I know how to solve this in code but I was curious whether there is a design pattern that introduces a common approach.
I googled a lot in search of one, but the only thing I found was a very technical explanation of an approach to do that in this article:
A Communication Model to Integrate the Request-Response and the Publish-Subscribe Paradigms into Ubiquitous Systems
Please note that I understand this is not the perfect API design and that there are better ways to implement the example. I've created the above example to help illustrate my question. I also understand that some AMQP implementations (like RabbitMQ) provide a way to synchronize MQ communication to a request/response style.
Thanks in advance.
Microsoft calls it the Asynchronous Request-Reply pattern and uses a solution that polls over HTTP:
https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply
I imagine it should be possible to avoid polling by subscribing to updates for a key. For example, it's possible to subscribe to updates to a single key in Redis with keyspace notifications (the page mentions two caveats: that "all the events delivered during the time the client [is] disconnected are lost" and that "events' notifications are not broadcasted to all nodes").
Have you considered something like this:
Request comes in
Create a correlation id
Send correlation id to other service as part of message sent via queue
Begin polling for that id in some data store (say Redis)
Time elapses...
Send correlation id back to originating service along with result of request in a message sent via queue
Worker reading queue sets value of correlation id in data store to result of asynchronous request
Polling discovers the result and returns it as the response to the request
Would that work?
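In code, that flow could be sketched roughly like this (Java, with a ConcurrentHashMap standing in for the shared data store and the actual queue publish stubbed out - all names are made up):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class StatusClient {

    // Stands in for the shared data store (Redis, etc.).
    private final Map<String, String> resultStore = new ConcurrentHashMap<>();

    // Steps 2-4: create a correlation id, send it with the request, then poll the store.
    public String getStatus(long timeoutMillis) throws InterruptedException {
        String correlationId = UUID.randomUUID().toString();
        sendStatusRequest(correlationId); // message onto the request queue

        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            String result = resultStore.remove(correlationId); // step 8: polling discovers the result
            if (result != null) {
                return result;
            }
            TimeUnit.MILLISECONDS.sleep(50);
        }
        throw new IllegalStateException("No reply for " + correlationId + " within " + timeoutMillis + " ms");
    }

    // Step 7: the worker reading the reply queue stores the result under the correlation id.
    public void onReply(String correlationId, String result) {
        resultStore.put(correlationId, result);
    }

    private void sendStatusRequest(String correlationId) {
        // placeholder for the actual queue publish (RabbitMQ, etc.)
    }
}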

Do I need a Redis ConnectionMultiplexer for each pub/sub subscription?

I have a .NET Core Web API setup where I expose an endpoint that is basically a forever-frame. I am constrained by an API contract that forces me to expose it as such.
That forever-frame pushes data that is received from a Redis pub/sub channel. I will have multiple listeners on this endpoint, and they should basically be individual subscribers to the same channel.
I use StackExchange.Redis.
There is one thing I cannot wrap my head around, and that is how to use the ConnectionMultiplexer in this scenario. Everywhere I read about it I am told to have one global ConnectionMultiplexer. But if I do that, won't I unsubscribe all subscribers when one of them leaves and shuts down its subscription to the channel that they are all listening to?
If I don't, I'm sure I will run into a memory leak.
A global ConnectionMultiplexer keeps the number of connections to Redis at a minimum, but I don't see any way to avoid it here.
Is there something I have misunderstood?
Always use the same instance of ConnectionMultiplexer, or you will lose the benefits of using a multiplexer.
I had a similar issue where calling unsubscribe on a channel caused all subscribers to be unsubscribed too.
If you look at the ISubscriber interface, there are two ways to subscribe to a channel:
void Subscribe(RedisChannel channel, Action<RedisChannel, RedisValue> handler, CommandFlags flags = CommandFlags.None);
ChannelMessageQueue Subscribe(RedisChannel channel, CommandFlags flags = CommandFlags.None);
I took the second one and it solved my problem.

Does gRPC resend messages?

A question related to the idempotence of server-side code, or the necessity of it, either for gRPC in general or specifically for the Java implementation.
Is it possible that when we send a message once from a client, it is handled twice by our service implementation?
Maybe this would be related to retries when the service seems unavailable, or could it be configured by some policy?
Right now, when you send a message from a client it will be seen (at most) once by the server. If you have idempotent methods, you will soon be able to specify a policy for automatic retries (design doc) but these retry attempts will not be enabled by default. You do not have to worry about the gRPC library sending your RPCs more than once unless you have this configured in your retry policy.
According to grpc-java/5724, the retry logic has already been implemented. The OP in that issue does it using a Map, which is not type safe. A better way would be as follows:
import io.grpc.netty.NettyChannelBuilder;

// enableRetry() turns retries on; maxRetryAttempts(3) caps the attempts a retry policy may configure per RPC.
NettyChannelBuilder builder = NettyChannelBuilder.forAddress(host, port)
        .enableRetry()
        .maxRetryAttempts(3);
There are other retry configurations available on the NettyChannelBuilder.
There's also an example here, although it's pretty hard to find.

How should a synchronous public API be integrated with message-based services?

I've been reading about microservices, and have found a lot of interesting advice in Jonas Bonér's Reactive Microservices Architecture (available to download free here). He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
I've been trying to work out how asynchronous response messages sent back from microservices should best be routed to the waiting client. To me the most obvious way would be to record something like a request id in all messages sent while processing the request, and then copy this id into the response messages sent by the services. The public API would block while processing the request, collecting all expected response messages with the matching id, before finally sending the response to the client.
Am I on the right lines here? Are there better approaches? Do any frameworks take the work of doing this routing away from the developer (I'm looking at Spring Cloud Streams etc, but others would be interesting too)?
He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
When dealing with client-to-backend communication you can have a couple of types of operations, and they should be handled separately (look at the idea of CQS):
State-changing operations: these should be one-way, fire-and-forget, using messaging (it can be the client calling an HTTP API and the API dispatching the message).
Read operations: synchronous (request/response) operations (using an HTTP API); this does not involve any messaging whatsoever.
Does that make sense?

Resources