RabbitMQ .NET Core client: handle multiple messages in parallel (not one by one)

Let's say I have one publisher and 2 consumers.
Each consumer should consume 5 messages at a time (in parallel).
(one exchange, bound to one queue, direct mode)
Publisher produces messages (1,2,3,...14,15)
Consumer A consumes (1,3,5,7,9)
Consumer B consumes (2,4,6,8,10)
Consumer A finished processing message 1 and receives message 11
... etc
How can I achieve this behaviour?
I realized that the consumer's Received event is only fired after the previous message has been processed.
When reading the rabbitmq docs, this seemed exactly what I need:
https://www.rabbitmq.com/consumer-prefetch.html
but obviously that setting has no impact on the above mentioned behaviour (messages are still processed serially).
Any ideas?

setting prefetch, messages are still processed serially
That's because messages on a single channel are processed serially. So you have two options:
consume on a single channel and spawn a task/thread per message to handle it.
open multiple consumer channels, and process each message on its own channel's thread.
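A minimal sketch of the first option, written in Java for illustration (the class name, the in-memory BlockingQueue standing in for the channel, and the squaring "work" are all hypothetical stand-ins, not the RabbitMQ client API). The key point is that the delivery loop only dispatches each message to a pool and immediately takes the next one, so handlers run in parallel up to the pool size, which plays the role of the prefetch count:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class ParallelConsumerSketch {
    // Stand-in for the single channel: deliveries arrive one by one, in order.
    private final BlockingQueue<Integer> channel = new LinkedBlockingQueue<>();
    private final ExecutorService workers;

    public ParallelConsumerSketch(int parallelism) {
        // Plays the role of the basic.qos prefetch count:
        // at most this many messages are being processed at once.
        workers = Executors.newFixedThreadPool(parallelism);
    }

    public void publish(int message) {
        channel.add(message);
    }

    // The "Received" handler loop: take each delivery off the channel thread
    // immediately, hand the actual work to the pool, then fetch the next one.
    public List<Integer> consume(int count) {
        BlockingQueue<Integer> processed = new LinkedBlockingQueue<>();
        try {
            for (int i = 0; i < count; i++) {
                int message = channel.take();
                workers.submit(() -> processed.add(process(message)));
            }
            workers.shutdown();
            workers.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.stream().sorted().collect(Collectors.toList());
    }

    private int process(int message) {
        return message * message; // placeholder for real work; ack here on success
    }
}
```

With the real client the structure is the same: set the prefetch count, have the Received handler do nothing but submit the work, and ack from the worker once processing succeeds.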

Related

Multiple consumer on single JMS queue

A JMS queue has 2 consumers: a synchronous and an asynchronous Java application process, both waiting for responses.
1) The synchronous application sends a request and waits up to 60 seconds for the response, matching on the JMS correlation ID.
2) The asynchronous thread is constantly listening on the same queue.
In this scenario, when the response is received on the queue within 60 seconds, I would expect the load to be distributed across both the synchronous and asynchronous applications. However, for some unknown reason almost all the response messages are consumed by the synchronous process, and only in some cases are messages picked up by the asynchronous process.
Are there any factors that could cause the synchronous application to pick up almost all the messages?
There is usually no guarantee that the load will be distributed evenly, especially between a synchronous and an async consumer. The synchronous consumer has to poll, wait, poll, wait, while the async consumer is probably waiting on the socket in a separate thread until a message arrives and then calls your callback. So the async consumer will almost always be there first.
Any chance you can change to Topics and discard messages you don't want? Or change your sync consumer to be async? Another alternative would be to build a small 'async' gateway in front of your synchronous consumer: a little application that consumes asynchronously and then copies each message received to a second queue where the sync consumer picks it up. Depending on your JMS provider it might support this type of 'JMS bridge' already - what are you using?
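The 'async gateway' idea above can be sketched like this (an assumption-laden illustration: in-memory BlockingQueues stand in for the JMS queues, a plain thread stands in for an async MessageListener, and the class and method names are made up):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncGatewaySketch {
    // Starts an async listener that copies every message from the shared reply
    // queue to a private queue that only the synchronous consumer reads.
    public static Thread startBridge(BlockingQueue<String> source, BlockingQueue<String> target) {
        Thread bridge = new Thread(() -> {
            try {
                while (true) {
                    target.put(source.take()); // like an async onMessage + resend
                }
            } catch (InterruptedException e) {
                // interrupted = shutdown requested
            }
        });
        bridge.setDaemon(true);
        bridge.start();
        return bridge;
    }

    // What the synchronous consumer does: a plain blocking poll with a timeout,
    // now against a queue it does not have to share with the async consumer.
    public static String receive(BlockingQueue<String> queue, long timeoutMs) {
        try {
            return queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            return null;
        }
    }
}
```

Because the gateway owns the original queue exclusively, the sync and async consumers no longer compete for the same messages.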

RabbitMQ synchronous messaging pros and cons

As we all know, a message bus like RabbitMQ is mainly meant for asynchronous messaging, so the standard approach is fire-and-forget: publish something on the bus and don't worry about who will process the published message or when. But I'm thinking about a recent talk in our development team about synchronous processing of messages: the case would be to publish a message to the service bus and, as the publisher, wait for any subscriber to process the message and return results to me - so it looks more like a request-response model. One con I can already see is degraded performance in this model. What are your thoughts? When to use async and when sync? What are the tradeoffs?
Synchronous messaging is possible but impacts scalability. If a publisher has to wait for its recipients to respond, then it will be limited in how much it can achieve at any given time.
However, you can achieve request-response using asynchronous messaging. In RabbitMQ, you do this by means of the Remote Procedure Call (RPC) pattern.
To put it simply, your publisher publishes a message, but doesn't wait for the response; it can continue doing other stuff in the meantime. The publisher does keep track of it though, by putting a CorrelationId on the message, and storing it locally. The message eventually reaches a consumer, who processes it and responds back to the publisher on a different queue. The reply has the same CorrelationId. When the publisher receives the reply, it can then mark that particular call (via the CorrelationId) as processed.
If you want, you can also do other things with the CorrelationId, such as timing out those messages for which we haven't received a reply after e.g. 30 seconds.
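The CorrelationId bookkeeping described above can be sketched as follows (Java used for illustration; the class name, the map-of-futures representation, and the 30-second figure are illustrative choices, with the actual publish and reply-queue plumbing elided):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class RpcCorrelationSketch {
    // Local store of outstanding calls, keyed by CorrelationId.
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Publisher side: remember the CorrelationId, publish, and carry on.
    // The returned future completes when (or if) the matching reply arrives.
    public CompletableFuture<String> call(String correlationId, String request) {
        CompletableFuture<String> reply = new CompletableFuture<>();
        reply.orTimeout(30, TimeUnit.SECONDS); // give up on replies that never come
        pending.put(correlationId, reply);
        // here `request` plus `correlationId` would be published to the request queue
        return reply;
    }

    // Reply-queue consumer: match the reply back to its stored CorrelationId.
    public void onReply(String correlationId, String reply) {
        CompletableFuture<String> waiting = pending.remove(correlationId);
        if (waiting != null) {
            waiting.complete(reply); // mark this particular call as processed
        }
    }
}
```

The publisher never blocks inside `call`; anyone who actually needs the result can wait on the future, while everyone else keeps working.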

Reliable WCF Service with MSMQ + Order processing web application. One way calls delivery

I am trying to implement Reliable WCF Service with MSMQ based on this architecture (http://www.devx.com/enterprise/Article/39015)
A message may be lost if the queue is not available (even a cluster doesn't provide zero downtime).
Take a look at the simple order processing workflow
A user enters credit card details and makes a payment
Application receives a success result from payment gateway
Application sends a message as a “fire and forget”/”one way” call to a backend service via the WCF MSMQ binding
The user will be redirected on the “success” page
Message is stored in a REMOTE transactional queue (windows cluster)
The backend service dequeues and processes the message, completes the complex order-processing workflow and, as a result, sends an email confirmation to the user
Everything works as expected.
What I cannot understand is how we can guarantee that all “one way” calls will be delivered to the queue.
Duplex communication is not an option, because the user should be redirected to the result web page ASAP.
Imagine the case where a user receives the “success” page with wording like “… Your payment was made, your order has started processing, and you will receive email notifications later …” but the message itself is lost.
How can durability be implemented for step 3?
One of the possible solutions that I can see is
3a. Create a database record with the transaction details marked as uncompleted, just to have some record of the transaction. This record can be used as a starting point for reprocessing the lost message in case the message is never saved in the queue.
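Step 3a is essentially an outbox/reconciliation pattern, and can be sketched like this (Java for illustration; the class name, the in-memory map standing in for the database table, and the order-ID strings are all hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class OutboxSketch {
    enum Status { PENDING, SENT }

    // Stand-in for the database table from step 3a.
    private final Map<String, Status> records = new ConcurrentHashMap<>();

    // Before the one-way send: persist the transaction as uncompleted.
    public void recordPending(String orderId) {
        records.put(orderId, Status.PENDING);
    }

    // Only after the send is known to have reached the queue.
    public void markSent(String orderId) {
        records.put(orderId, Status.SENT);
    }

    // A scheduled job re-sends these: any order still PENDING represents a
    // message that may have been lost before reaching the queue.
    public List<String> unsent() {
        return records.entrySet().stream()
                .filter(e -> e.getValue() == Status.PENDING)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }
}
```

The database write and the send would ideally share a transaction; failing that, the reconciliation job closes the gap for messages that were recorded but never enqueued.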
I read this post
The main thing to understand about transactional MSMQ is that there
are three distinct transactions involved in a transactional send to a
remote queue.
The sender writes the message to a local queue.
The queue manager on the sender's machine transmits the message across the wire to the queue manager on the recipient's machine
The receiver service processes the queue message and then removes the message from the queue.
But it doesn’t solve the described issue - as far as I know, WCF netMsmqBinding doesn’t use a local queue to send messages to a remote one.
But it doesn’t solve the described issue - as far as I know, WCF netMsmqBinding doesn’t use a local queue to send messages to a remote one.
Actually this is not correct. MSMQ always sends to a remote queue via local queue, regardless of whether you are using WCF or not.
If you send a message to a remote queue and then look in Message Queuing in Server Management, you will see under Outbound queues that a queue has been created with the address of the remote queue. This is a temporary queue which is created for you automatically. If the remote queue was for some reason unavailable, the message would sit in the local queue until it became available, and then it would be transmitted.
So durability is provided by these three distinct transactions:
transactionally write message locally
transactionally transmit message
transactionally receive and process message
There are instances where you may drop messages - for example, if your message processing happens outside the scope of the dequeue transaction - and also instances where it is not possible to know whether the processing was successful (e.g. a back-end web service call times out). And of course you could have a badly formed message which will never be processed successfully. But in all these cases it should be possible to design around the problem.
If you're using public queues in a clustered environment then I think there may be more scope for failure, as clustering MSMQ introduces complexity (I have not really used it, so I don't know), so try to avoid it if possible.

MVC3 AsyncController - Can we send heartbeat data to the client?

In order to overcome the (apparent) 4 minute idle connection timeout on the Azure load balancer, it seems necessary to send some data down the pipe to the client every now and again to keep the connection from being regarded as idle.
Our controller is set up as an AsyncController, and it fires several different asynchronous methods on other objects, all of which are set up to use IO Completion Ports. Thus, we return from our method immediately, and when the completion packet is processed, IIS hooks back up to the original request so that we can render our View.
Is there any way to periodically send a few bytes down the wire in this case? In a "classic" situation, we could have executed the method and then just spun while we waited, sending data every few seconds until the asynchronous method was complete. But, in this situation, the IIS thread is freed to go do other business, and we hook back up to it in our completion callback. What to do? Is this possible?
While your particular case concerns a Windows Azure specific (the 4-minute timeout of the load balancers), the question is really about IIS / ASP.NET. Anyway, I don't think it is possible to send "ping-backs" to the client while in an AsyncController/AsyncPage. This is the whole idea of AsyncPages/Controllers: IIS leaves the socket aside, freeing the thread to serve other requests, and comes back only when you bring the outstanding operations count to zero with AsyncManager.OutstandingOperations.Decrement(). Only then is control given back to send the final response to the client. And once you are at the point of sending the response, there is no turning back.
I would rather question the architectural approach: why do you think someone would wait 4 minutes to get a response (even with a nice animated "please wait")? A lot of things can happen during this time - from a browser crash, through internet disruption, to total power loss at the client. If you are doing real Azure, why not just send tasks to a Worker Role via a queue (Azure Storage Queues or Service Bus Queues)? The other option for such long-running tasks is to use SignalR and a fully AJAXed solution, where you communicate the status of the long-running operation via SignalR.
UPDATE 1 due to comments
In addition to the approach suggested by knightpfhor, this can also be achieved with queues. The requestor creates a task with a unique ID and sends it to a "task submission" queue. It then "listens" (or polls at regular/irregular intervals) on a "task completion" queue for a message with the given task ID.
Either way, I don't see a reason for keeping the client connected for the whole duration of the long-running task. There are a number of ways to decouple such communication.
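The submission/completion queue pair can be sketched as follows (Java for illustration; in-memory BlockingQueues stand in for the Azure Storage/Service Bus queues, and the class, method names, and "done:" result format are invented for the sketch):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueuedTaskSketch {
    final BlockingQueue<String> submissionQueue = new LinkedBlockingQueue<>();
    final BlockingQueue<String[]> completionQueue = new LinkedBlockingQueue<>();

    // Requestor: fire the task with a unique ID and return immediately.
    public void submit(String taskId) {
        submissionQueue.add(taskId);
    }

    // Worker-role stand-in: take one task, do the work, post the result
    // to the completion queue tagged with the same task ID.
    public void runWorkerOnce() {
        try {
            String taskId = submissionQueue.take();
            String result = "done:" + taskId; // the long-running work happens here
            completionQueue.put(new String[] { taskId, result });
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Requestor, later: poll the completion queue for our task ID.
    public String pollForResult(String taskId, long timeoutMs) {
        try {
            String[] message = completionQueue.poll(timeoutMs, TimeUnit.MILLISECONDS);
            return (message != null && message[0].equals(taskId)) ? message[1] : null;
        } catch (InterruptedException e) {
            return null;
        }
    }
}
```

The client's HTTP request returns as soon as `submit` completes; the browser (or a later request) polls for the result, so no connection has to survive the 4-minute idle timeout.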

BizTalk Zombies - any way to explicitly REMOVE a subscription from within a BizTalk orchestration

Background:
We make use of a lot of aggregation, singleton and multiton orchestrations, similar to Seroter's Round Robin technique described here (BizTalk 2009).
All of these orchestration types have fairly arbitrary exit or continuation points (for aggregations), usually defined by a timer - i.e. if an orch hasn't received any more messages within X minutes then proceed with the batching, and if Y more minutes elapse with no more messages then quit. (We also exit our Single / N-Tons due to concerns about degraded performance after large numbers of messages have been subscribed to by the singleton over a period.)
As much as we've tried to mitigate against zombies, e.g. by starting any continuation processing in an asynchronously refactored orchestration, there is always a point of weakness where a 'well'-timed message could cause a zombie (i.e. receiving more incoming messages correlated to the 'already completed' shapes of an orchestration).
If a message causes a zombie on one of the subscriptions, the message does not appear to be propagated to OTHER subscribers either (i.e. orchs totally decoupled from the 'zombie-causing' orchestration) - that is, the zombie-causing message is not processed at all.
Question
So I would be very interested to see if anyone has another way, programmatic or otherwise, to explicitly remove a correlated subscription from a running orchestration once the orchestration has 'progressed' beyond the point where it is interested in this correlated message. (Such a new message would then typically start a new orchestration with its own correlations, etc.)
At this point we would consider even a hack solution such as a reflected BizTalk API call or direct SQL delete against the MsgBoxDB.
No, you can't explicitly remove the subscription in an Orchestration.
The subscription will be removed as the Orchestration tears itself down, but a message arriving at that exact instant will still be routed to the Orchestration; the Orchestration will end without processing it, and that's your zombie.
Microsoft Article about Zombies http://msdn.microsoft.com/en-us/library/bb203853.aspx
I once also had to implement a receive, debatch, aggregate, send pattern: receiving enveloped messages from multiple senders, debatching them, and aggregating by intended recipient (based on two rules - number of messages or time delay, whichever occurred first).
This scenario was ripe for zombies, and when I read about them I designed it so they would not occur. This was for BizTalk 2004.
I debatched the messages and inserted them into a database. A stored procedure, polled by a receive port, worked out whether there was a batch to send; if there was, it triggered an orchestration that would take that message and route it dynamically.
Since neither orchestration had to wait for another message, both could end gracefully and there would be no zombies.
