RabbitMQ synchronous messaging pros and cons

As we all know, a message bus like RabbitMQ is mainly meant for asynchronous messaging, so the standard approach is fire-and-forget: publish something on the bus and don't worry about who will process the published message, or when. But I'm thinking about a recent discussion in our development team about synchronous processing of messages: the case would be to publish a message to the service bus and, as the publisher, wait for some subscriber to process the message and return the results to me, so it looks more like a request-response model. One con I can already see is degraded performance in this model. What are your thoughts? When should async be used and when sync? What are the tradeoffs?

Synchronous messaging is possible but impacts scalability. If a publisher has to wait for its recipients to respond, then it will be limited in how much it can achieve at any given time.
However, you can achieve request-response using asynchronous messaging. In RabbitMQ, you do this by means of the Remote Procedure Call (RPC) pattern.
To put it simply, your publisher publishes a message, but doesn't wait for the response; it can continue doing other stuff in the meantime. The publisher does keep track of it though, by putting a CorrelationId on the message, and storing it locally. The message eventually reaches a consumer, who processes it and responds back to the publisher on a different queue. The reply has the same CorrelationId. When the publisher receives the reply, it can then mark that particular call (via the CorrelationId) as processed.
If you want, you can also do other things with the CorrelationId, such as timing out messages for which no reply has been received after, e.g., 30 seconds.
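For illustration only, here is a minimal sketch of that RPC flow using the RabbitMQ Java client; the host, the rpc_requests queue name and the 30-second timeout are assumptions made for the example, not anything mandated by the pattern.

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import java.util.concurrent.*;

    // RPC-style publisher: publish with a CorrelationId and a reply-to queue, keep going,
    // and match the reply back to the pending call when it arrives.
    public class RpcPublisher {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumption: broker runs locally

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                channel.queueDeclare("rpc_requests", false, false, false, null); // assumed request queue
                String replyQueue = channel.queueDeclare().getQueue();           // exclusive reply queue

                // Pending calls keyed by CorrelationId.
                ConcurrentMap<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

                channel.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
                    CompletableFuture<String> call = pending.remove(delivery.getProperties().getCorrelationId());
                    if (call != null) {
                        call.complete(new String(delivery.getBody(), StandardCharsets.UTF_8));
                    }
                }, consumerTag -> { });

                // Publish the request without blocking.
                String correlationId = UUID.randomUUID().toString();
                CompletableFuture<String> reply = new CompletableFuture<>();
                pending.put(correlationId, reply);

                AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                        .correlationId(correlationId)
                        .replyTo(replyQueue)
                        .build();
                channel.basicPublish("", "rpc_requests", props, "do some work".getBytes(StandardCharsets.UTF_8));

                // ...free to do other work here...

                // Optional timeout handling for calls that never get a reply.
                try {
                    System.out.println("Reply: " + reply.get(30, TimeUnit.SECONDS));
                } catch (TimeoutException e) {
                    pending.remove(correlationId);
                    System.out.println("No reply within 30s for " + correlationId);
                }
            }
        }
    }

The consumer side is the mirror image: it reads the request, does the work, and publishes the result to the replyTo queue with the same CorrelationId.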

Related

In asynchronous messaging, is client-broker communication synchronous?

While discussing asynchronous messaging on page 67 of the Microservices Patterns book by Chris Richardson (2019), the author writes:
Synchronous—The client expects a timely response from the service and might even block while it waits.
Asynchronous - The client doesn’t block, and the response, if any, isn’t necessarily sent immediately
Given that, it seems that moving from "synchronous" to "asynchronous" communication actually just swaps one synchronous service (e.g., Service A) for a different synchronous service (e.g., a listening port on a message broker such as ActiveMQ, Kafka, IBM MQ, AWS Kinesis, etc.).
That's because the client, presumably, must still block (or at least tie up a thread or connection from a pool) while communicating with the broker instead of communicating directly with Service A, especially since the client probably expects a broker response (e.g., SUCCESS) for reliability purposes.
Is that analysis correct?
Yes, your analysis is correct.
In your case, the broker's client library provides the asynchronous functionality to the caller code (Service A, for example), which means it doesn't block Service A's thread until the operation is finished; instead it lets you provide a callback that will be invoked (with the results of the async operation) when it completes.
Now the question is: who will invoke that callback? Well, some code from the broker's client library, which runs on a thread that presumably does some periodic checks to see if the operation is finished (or any other logic that will eventually emit this result).
So yes, there has to be some background thread that does some synchronous work to grab those results.
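As a concrete illustration of that point, the RabbitMQ Java client's publisher confirms work exactly this way: the caller registers callbacks and moves on, and the client library invokes them later from its own connection thread once the broker acknowledges the publish. A minimal sketch, with the queue name assumed:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.nio.charset.StandardCharsets;

    // The caller thread publishes and returns immediately; the broker client library
    // later invokes the registered callbacks from its own I/O thread.
    public class ConfirmPublisher {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumption: broker runs locally

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                channel.queueDeclare("work", true, false, false, null); // assumed queue name
                channel.confirmSelect(); // enable publisher confirms on this channel

                channel.addConfirmListener(
                        (deliveryTag, multiple) -> // ack: runs on the library's thread, not the caller's
                                System.out.println("Broker confirmed publish " + deliveryTag),
                        (deliveryTag, multiple) -> // nack: same background thread
                                System.out.println("Broker rejected publish " + deliveryTag));

                channel.basicPublish("", "work", null, "payload".getBytes(StandardCharsets.UTF_8));
                System.out.println("Published; the caller is not blocked waiting for the confirm.");

                Thread.sleep(1000); // keep the process alive long enough to see the callback fire
            }
        }
    }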

Multiple consumers on a single JMS queue

A JMS queue has two consumers: a synchronous and an asynchronous Java application process, both waiting for responses.
1) The synchronous application sends a request and waits up to 60 seconds for the response, matched on the JMS correlation ID.
2) The asynchronous thread is constantly listening on the same queue.
In this scenario, when a response arrives on the queue within 60 seconds, I would expect the load to be distributed across both the synchronous and the asynchronous application. However, for some unknown reason almost all the response messages are consumed by the synchronous process; only in some cases are messages picked up by the asynchronous process.
Are there any factors that could cause the synchronous application to pick up almost all the messages?
There is usually no guarantee that the load will be distributed evenly, especially between a synchronous and an asynchronous consumer. The synchronous consumer has to poll, wait, poll, wait, while the asynchronous consumer is probably waiting on the socket in a separate thread until a message arrives and then calls your callback. So the asynchronous consumer will almost always be there first.
Any chance you can change to topics and discard the messages you don't want? Or change your synchronous consumer to be asynchronous? Another alternative would be to build a small "async" gateway in front of your synchronous consumer: a little application that consumes asynchronously and copies each message received to a second queue where the synchronous consumer picks it up. Depending on your JMS provider, it might support this type of "JMS bridge" already - what are you using?
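To make the two consumption styles concrete, here is a hedged JMS sketch of both consumers on the same queue; the connection setup and the selector string are assumptions, and the point is simply receive(timeout) versus setMessageListener:

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // The two consumption styles from the question, side by side.
    public class TwoConsumers {

        // Synchronous: block for up to 60 seconds for the reply matching a correlation ID.
        static Message receiveSync(Connection connection, Queue queue, String correlationId) throws JMSException {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue, "JMSCorrelationID = '" + correlationId + "'");
            return consumer.receive(60_000); // returns null if nothing arrives within 60s
        }

        // Asynchronous: register a listener; the provider's own thread pushes messages to it.
        static void listenAsync(Connection connection, Queue queue) throws JMSException {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(message -> {
                try {
                    System.out.println("Async consumer got " + message.getJMSMessageID());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
        }
    }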

Synchronous consumption on Node-RED & MQTT

I am working on an orchestration system using Node-RED and MQTT.
I have decided to decouple event acquisition from processing. The main objective is to push events onto a queue quickly and process them as soon as possible, in near real time.
The system operates like this:
I receive an event on an HTTP REST API,
Push this event to an MQTT topic,
On another flow, listen for and read events from the MQTT topic,
Launch several actions/processes from this event (taking up to 5-10 seconds).
But I am facing an issue: if I receive two related events too quickly, the second event can change the processing of the first. To solve this, I would like to synchronize my event consumption/processing in order to keep them ordered.
MQTT QoS 2 messages will be delivered in order. How can I simply implement a synchronization paradigm in Node-RED? Is it possible to stop the MQTT client from listening while an event is being processed?
No, you can't turn the MQTT client off.
And no, there is no concept of synchronisation, mainly because Node.js apps are single threaded, so two things can't actually happen at once; tasks simply yield when they reach something I/O-bound.
I'm not sure you actually gain anything by receiving it via HTTP and then re-consuming it via MQTT.
If you want to queue the incoming events up, you could use the delay node to rate-limit the input to something you are sure the processing can manage. The rate limit option has two modes: one that drops messages and one that queues them.
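Outside Node-RED, the underlying idea of queueing events and processing them strictly one at a time can be sketched with the Eclipse Paho Java MQTT client; the broker URL, topic and single worker thread below are assumptions for illustration, not something Node-RED itself provides:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Incoming MQTT messages are appended to an in-memory queue as fast as they arrive;
    // a single worker thread drains it, so events are processed one at a time, in order.
    public class OrderedConsumer {
        public static void main(String[] args) throws Exception {
            BlockingQueue<MqttMessage> events = new LinkedBlockingQueue<>();

            MqttClient client = new MqttClient("tcp://localhost:1883", "ordered-consumer"); // assumed broker
            client.connect();
            client.subscribe("events/#", 2, (topic, message) -> events.put(message)); // enqueue quickly

            // Single worker: the next event is not taken until the previous one has finished.
            Thread worker = new Thread(() -> {
                while (true) {
                    try {
                        process(events.take()); // may take 5-10 seconds per event
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
            worker.start();
        }

        static void process(MqttMessage message) {
            System.out.println("Processing " + new String(message.getPayload()));
        }
    }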
Place a simple message queue on the output of the MQTT client and get it to release the next message by sending it trigger=true (or you might be able to get it to release messages at a set rate). I am still looking into these.
https://flows.nodered.org/node/node-red-contrib-simple-message-queue
Here is my proposal to solve the problem: https://flows.nodered.org/flow/3003f194750b0dec19502e31ca234847.
Sync req/res REST API with async workers based on MQTT
The example implements one REST API endpoint, one inject node for making requests to that endpoint, and two workers doing their jobs asynchronously.
It's still in progress, so expect changes. Feel free to contact me and chat about this or other solutions for orchestration with Node-RED.

Reliable WCF service with MSMQ + order processing web application: one-way call delivery

I am trying to implement a reliable WCF service with MSMQ based on this architecture (http://www.devx.com/enterprise/Article/39015).
A message may be lost if the queue is not available (even a cluster doesn't provide zero downtime).
Take a look at this simple order processing workflow:
1. A user enters credit card details and makes a payment.
2. The application receives a success result from the payment gateway.
3. The application sends a message as a "fire and forget"/"one way" call to a backend service over the WCF MSMQ binding.
4. The user is redirected to the "success" page.
5. The message is stored in a REMOTE transactional queue (Windows cluster).
6. The backend service dequeues and processes the message, completes the complex order processing workflow and, as a result, sends an email confirmation to the user.
Everything looks fine, as expected.
What I cannot understand is how we can guarantee that all "one way" calls will be delivered to the queue.
Duplex communication is not an option, because the user should be redirected to the result web page ASAP.
Imagine the case where a user receives the "success" page with wording like "… Your payment was made, your order has started processing, and you will receive email notifications later …" but the message itself is lost.
How can durability be implemented for step 3?
One of the possible solutions I can see is:
3a. Create a database record with the transaction details marked as uncompleted, just to have some record of the transaction. This record can be used as a starting point for processing the lost message in case the message never makes it into the queue.
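A minimal sketch of that 3a idea follows, written against a generic JDBC table and a placeholder transport since MSMQ has no Java API; the order_outbox table, its columns and the BackendQueue interface are all assumptions made for the example. The point is simply: persist a pending record first, then fire the one-way call, then mark it enqueued, and let a recovery job re-send anything that stays pending.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.UUID;

    // Persist a 'pending' record before the fire-and-forget send, so a recovery job
    // can detect and re-send orders whose message never reached the queue.
    public class OrderSubmitter {

        // Placeholder for the actual transport (WCF/MSMQ in the question); assumed, not a real API.
        interface BackendQueue {
            void send(String orderId);
        }

        void submitOrder(Connection db, BackendQueue queue, String paymentReference) throws Exception {
            String orderId = UUID.randomUUID().toString();

            // 1. Durable record first, marked as not yet enqueued (assumed schema).
            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT INTO order_outbox (order_id, payment_ref, status) VALUES (?, ?, 'PENDING')")) {
                ps.setString(1, orderId);
                ps.setString(2, paymentReference);
                ps.executeUpdate();
            }

            // 2. Fire-and-forget/one-way send to the backend service.
            queue.send(orderId);

            // 3. Mark as enqueued; a periodic recovery job re-sends anything still PENDING.
            try (PreparedStatement ps = db.prepareStatement(
                    "UPDATE order_outbox SET status = 'ENQUEUED' WHERE order_id = ?")) {
                ps.setString(1, orderId);
                ps.executeUpdate();
            }
        }
    }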
I read this post
The main thing to understand about transactional MSMQ is that there are three distinct transactions involved in a transactional send to a remote queue.
The sender writes the message to a local queue.
The queue manager on the sender's machine transmits the message across the wire to the queue manager on the recipient's machine.
The receiver service processes the queue message and then removes the message from the queue.
But it doesn't solve the described issue - as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.
Actually this is not correct. MSMQ always sends to a remote queue via a local queue, regardless of whether you are using WCF or not.
If you send a message to a remote queue and then look in Message Queuing in Server Management, you will see under Outbound queues that a queue has been created with the address of the remote queue. This is a temporary queue which is automatically created for you. If the remote queue were for some reason unavailable, the message would sit in the local queue until it became available, and then it would be transmitted.
So durability is provided by the three distinct transactions:
transactionally write the message locally
transactionally transmit the message
transactionally receive and process the message
There are instances where you may drop messages, for example if your message processing happens outside the scope of the dequeue transaction. There are also instances where it is not possible to know whether processing was successful (e.g. a back-end web service call times out), and of course you could have a badly formed message which will never be processed successfully. But in all cases it should be possible to design for these.
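The question is about WCF/MSMQ, but the "keep processing inside the scope of the dequeue transaction" point can be illustrated in JMS terms as a rough analogy (not the MSMQ API): receive and process within one transacted session and commit only after processing succeeds, so a failure puts the message back on the queue.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // The dequeue and the processing share one local transaction: if processing fails,
    // the rollback returns the message to the queue instead of silently dropping it.
    public class TransactionalWorker {

        void drainOnce(Connection connection, Queue queue) throws JMSException {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(queue);

            Message message = consumer.receive(5_000);
            if (message == null) {
                return; // nothing waiting
            }
            try {
                process(message);   // processing happens inside the transaction scope
                session.commit();   // only now is the message really removed from the queue
            } catch (Exception e) {
                session.rollback(); // message goes back and will be redelivered
            }
        }

        void process(Message message) throws JMSException {
            System.out.println("Handling " + message.getJMSMessageID());
        }
    }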
If you're using public queues in a clustered environment then I think there may be more scope for failure, as clustering MSMQ introduces complexity (I have not really used it, so I don't know), so try to avoid it if possible.

MVC3 AsyncController - Can we send heartbeat data to the client?

In order to overcome the (apparent) 4 minute idle connection timeout on the Azure load balancer, it seems necessary to send some data down the pipe to the client every now and again to keep the connection from being regarded as idle.
Our controller is set up as an AsyncController, and it fires several different asynchronous methods on other objects, all of which are set up to use IO Completion Ports. Thus, we return from our method immediately, and when the completion packet is processed, IIS hooks back up to the original request so that we can render our View.
Is there any way to periodically send a few bytes down the wire in this case? In a "classic" situation, we could have executed the method and then just spun while we waited, sending data every few seconds until the asynchronous method was complete. But, in this situation, the IIS thread is freed to go do other business, and we hook back up to it in our completion callback. What to do? Is this possible?
While your particular case is Windows Azure specific (the 4 minute load balancer timeout), the question is really about how IIS / ASP.NET works. Anyway, I don't think it is possible to send "ping-backs" to the client while in an AsyncController/AsyncPage. That is the whole idea of AsyncPages/Controllers: IIS puts the socket aside, leaving the thread free to serve other requests, and only comes back once you bring the outstanding operations to zero with AsyncManager.OutstandingOperations.Decrement(). Only then is control given back to send the final response to the client. And once you are at the point of sending the response, there is no turning back.
I would rather question the architectural approach: why do you think someone would wait 4 minutes to get a response (even with a nice animated "please wait")? A lot of things may happen during that time, from a browser crash, through an internet disruption, to a total power loss at the client. If you are doing real Azure, why not just send tasks to a Worker Role via a queue (Azure Storage Queues or Service Bus Queues)? The other option for such long-running tasks is to use SignalR and a fully AJAXed solution, where you communicate the status of the long-running operation via SignalR.
UPDATE 1 due to comments
In addition to the approach suggested by #knightpfhor, this can also be achieved with queues. The requestor creates a task with some unique ID and sends it to a "task submission" queue. It then "listens" (or polls at regular/irregular intervals) on a "task completion" queue for a message with the given task ID.
Either way, I don't see a reason to keep the client connected for the whole duration of the long-running task. There are a number of ways to decouple such communication.
