How can I inspect the error queue in Rebus?

I have an input queue which runs fine. Sometimes a message ends up on the error queue.
Now I want to be able to inspect these messages and maybe forward them to the input queue again if I know that a particular message will pass.
How do I start with inspecting the error queue? Are there any best practices?
I can't just do a .CreateBus().Start(), because that would trigger the normal message handlers.

The way you inspect queues and the options you get depend on the chosen transport.
If you're using Rebus with MSMQ, the easiest way to inspect your queues (input queues, error queues, MSMQ dead-letter queues) and retry delivery of failed messages is to fire up Rebus Snoop. Rebus has a ReturnToSourceQueue CLI tool for MSMQ as well.
If you're using Azure Service Bus, I can recommend Paolo Salvatori's Service Bus Explorer which I've used a little bit myself on a few projects.
With RabbitMQ, I usually use RabbitMQ's built-in web management plugin to inspect queues, and then Rebus comes with a ReturnToSourceQueue CLI tool for RabbitMQ as well.
If you're using SQL Server, I can recommend firing up SQL Server Management Studio and getting your SQL-fu on ;)
If you want to code something that does some kind of automatic forwarding or handling of failed messages, I can recommend using Rebus' transport implementations (i.e. MsmqMessageQueue (along with MsmqUtil), RabbitMqMessageQueue, AzureServiceBusMessageQueue, etc.) to handle the receiving and sending of raw transport messages. It's an approach I've used several times myself, e.g. to implement crude second-level retry mechanisms and the forwarding and archival of failed messages.
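For illustration, here is a minimal sketch of that idea using plain System.Messaging rather than a specific Rebus transport class (the exact Rebus transport APIs differ between versions). The queue paths are placeholders, and you should check which header your Rebus version uses to record the original source queue before forwarding blindly:

    using System;
    using System.Messaging;

    class ErrorQueueForwarder
    {
        static void Main()
        {
            // Placeholder queue paths - adjust to your own setup
            using (var errorQueue = new MessageQueue(@".\private$\myService.error"))
            using (var inputQueue = new MessageQueue(@".\private$\myService.input"))
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();

                // Take one message off the error queue (times out if it is empty)
                var message = errorQueue.Receive(TimeSpan.FromSeconds(5), tx);

                // ...inspect message.Body / headers here and decide whether to retry...

                // Forward it to the input queue in the same transaction, so the
                // message is never lost if anything fails in between
                inputQueue.Send(message, tx);

                tx.Commit();
            }
        }
    }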

Related

Rebus retry policy when RabbitMQ is temporarily down

I have a dockerized microservice architecture where I am using Rebus with RabbitMQ as message bus.
One container is running RabbitMQ. Other containers are running services that communicate with each other via Rebus/RabbitMQ.
I want my solution to be resilient to container restarts so if for example the RabbitMQ container restarts I expect the other services to be unaffected by that.
I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus in the sending service, and that they are delivered when the RabbitMQ connection is restored.
To verify that I run this test scenario:
Service A sends a message to service B via Rebus and RabbitMQ. That works fine.
I stop the RabbitMQ container.
Service A sends a message to service B via Rebus and RabbitMQ. That fails because RabbitMQ is unavailable.
I start the RabbitMQ container again.
I can see that Rebus in my services automatically reconnects to RabbitMQ when it is up again. That is as expected.
Now that the RabbitMQ connection is restored I would expect that Rebus sends the pending message from Service A to service B, but it does not.
Is this not expected behaviour of Rebus? If not, can I enable this feature?
I have read this topic https://github.com/rebus-org/Rebus/wiki/Automatic-retries-and-error-handling
and tried to configure Rebus like this:
    Configure.With(...)
        .Options(b => b.SimpleRetryStrategy(maxDeliveryAttempts: 10))
        .(...)
but with no luck.
The "delivery attempts" you're configuring is how you configure how many Rebus should try to consume a received message before giving up (i.e. moving it to the error queue).
If Rebus loses its connection to the broker, it will not be able to receive anything for the entire duration of the outage, so stopping RabbitMQ should effectively pause all message processing (possibly with some exceptions in all messages being handled at the instant where RabbitMQ goes away).
Since no Rebus handlers will be running then, while RabbitMQ is down, you will have to deal with outgoing messages sent from other places, e.g. like messages sent/published from a web request.
(...) I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus (...)
...but Rebus cannot queue anything up, because RabbitMQ is down(*).
The natural thing for Rebus to do in this situation is to give you, the caller, the responsibility of deciding what to do about the problem.
In .NET, that usually means throwing an exception back at you. 🙂
This leaves you with the option of
performing some alternative action, or
retrying some more times, or
whatever makes sense in that particular situation
A simple approach to building some resilience into your system in this case would be to use something like Polly to retry sending outgoing messages a few times in cases where the send could fail.
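For example, a minimal sketch (assuming Rebus' IBus and a Polly v7-style API; the retry count and delays are arbitrary):

    using System;
    using System.Threading.Tasks;
    using Polly;
    using Rebus.Bus;

    public class ResilientSender
    {
        readonly IBus _bus;

        public ResilientSender(IBus bus) => _bus = bus;

        // Retries the send up to 5 times with exponential back-off before
        // letting the exception bubble up to the caller
        public Task SendAsync(object message) => Policy
            .Handle<Exception>() // ideally narrow this to the transport's connection exceptions
            .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)))
            .ExecuteAsync(() => _bus.Send(message));
    }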
I hope that makes sense. Please let me know if anything needs to be elaborated on. 🙂
(*) Of course Rebus could have "cheated" and queued outgoing messages up in memory, but that would make it very hard for you to write resilient code, because you would not know whether an outgoing message had been safely delivered to the broker, or whether it was just sitting in memory waiting to be saved somewhere.

SQL to resume suspended messages in order

We have an upcoming deploy for a system that processes a lot of messages through BizTalk. Since those messages are cumulative updates they need to be queued up during the deployment outage then processed in order when the deploy is finished. Since there may be a large number of them it’s difficult to do this manually.
One possible solution is to leave the send port stopped and let the messages suspend. We can then resume them in order when the deployment is completed.
Is it possible to run a SQL script (or a tool) against the BizTalk messagebox database that will resume suspended messages, for a specific port, in order of receipt?
If you have an ordered requirement (you either do or don't), then the Send Port should be marked for Ordered Delivery.
If so, then when you Start a Stopped Send Port, the messages will be processed in the same order they were submitted.
If you stop the port (but leave it subscribed) and start it again afterwards, it should resume the messages itself; if not, it is simple enough to go into the Administration Console and batch-resume them, or to script the resume as sketched below.
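If you'd rather script the batch resume, one common approach is to go through BizTalk's WMI class MSBTS_ServiceInstance. This is a hedged sketch: ServiceStatus = 4 is assumed to mean "suspended (resumable)", and a ServiceName condition can be added to target one port's instances. Note that a bulk WMI resume does not itself guarantee order; the Ordered Delivery setting on the port is what preserves it.

    using System.Management;

    class ResumeSuspended
    {
        static void Main()
        {
            var scope = new ManagementScope(@"root\MicrosoftBizTalkServer");
            // 4 = suspended (resumable) in the BizTalk WMI schema; append e.g.
            // "AND ServiceName = '...'" to target a specific port or orchestration
            var query = new SelectQuery("MSBTS_ServiceInstance", "ServiceStatus = 4");

            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject instance in searcher.Get())
                {
                    instance.InvokeMethod("Resume", null); // resume each suspended instance
                }
            }
        }
    }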
However, if the response messages of the send port are also subscribed to by running orchestrations, you will not be able to un-deploy the orchestrations until they have all completed, so stopping the send port would not work in this scenario.
Sometimes, if the initiating port is a one-way receive, an option is to stop the receive location and let everything complete. You can then stop the application, redeploy and restart it, and the send port will pick up all the waiting messages to process.
If the above is not possible, you may want to look at doing a side-by-side deployment, where you increment the version numbers of all the assemblies in the solution so that both versions can be deployed at the same time; you can then allow the old version to finish running while the new version processes any new messages.
The better option is to send messages to MSMQ; usually there is no extra coding required for this. You can just route messages to MSMQ using the MSMQ adapter and then, after the deployment, receive them in order, since the MSMQ adapter allows ordered receive. Just make sure you do a small test in your QA environment before doing it in production.

Rebus HTTP gateway and MSMQ health state

Let's say we have
Client node with HTTP gateway outbound service
Server node with HTTP gateway inbound service
I am considering the situation where MSMQ itself stops for some reason on the client node. In the current implementation, the Rebus HTTP gateway will catch the exception.
What do you think about the idea that, instead of just being caught, the MessageQueueException could also be sent to the server node and put on the error queue? (The name of the error queue could be gathered from the headers.)
That way, without additional infrastructure, the server would know that the client has a problem, so someone could react.
UPDATE:
I guessed that the problems described in the answer would be raised. I should have explained my scenario in more depth :) Sorry about that. Here it is:
I'm going to modify the HTTP gateway so that the InboundService can do both - send and receive messages. The OutboundService would then be the only one to initiate connections (periodically, e.g. once per 5 minutes) in order to get new messages from the server and send its own messages to the server. That is because the client node is not considered a server but one of many clients behind a NAT.
Indeed, the server itself is not interested in client health, but I thought that instead of creating a separate alerting service on the client side, which would duplicate the HTTP gateway code, the HTTP gateway itself could do this, since having both sides running is quite within the HTTP gateway's business.
What if the client can't reach the server at all?
Since MSMQ would be dead, I thought about using an in-process, standalone, persistent queue object like this one: http://ayende.com/blog/4540/building-a-managed-persistent-transactional-queue (just an example implementation; I'm not sure what kind of license it has) to aggregate exceptions on the client side until the server is reachable.
And how often will the client notify the server that it has experienced an error?
I'm not sure about that part - I thought it could be tied to the scheduled time of message synchronization, like once per 5 minutes, but what about the case where there is no scheduled time, just like in the current implementation (a while(true) loop)? Maybe it could simply be set by config?
I like to have a consistent strategy about handling errors which usually involves plain old NLog logging
Since the client nodes will be on the Internet behind a NAT, standard monitoring techniques won't work. I thought about using a queue as the NLog transport, but since MSMQ would be dead, that wouldn't work.
I also thought about using HTTP as the NLog transport, but on the server side that would require a queue (not strictly, but I would like to store the entries in a queue), so we are back to the service bus and the HTTP gateway... that kind of NLog transport would be a de facto clone of the HTTP gateway.
UPDATE 2: HTTP as an NLog transport (by transport I mean target) would also require a client-side queue, as described in the "What if the client can't reach the server at all?" section. It would be a clone of the HTTP gateway embedded into NLog. Madness :)
The whole point is that the client is unreliable, so I want to have all the information about the client on the server side and log it there.
UPDATE 3:
An alternative solution could be to create a separate service which would nevertheless be part of the HTTP gateway (e.g. OutboundAlertService). Then three goals would be fulfilled:
shared sending loop code
no additional server infrastructure required
no negative impact on OutboundService (no complexity of adding in-process queue to it)
It wouldn't take exceptions from the OutboundService; instead, it would check MSMQ periodically itself.
Yet another alternative would be to simply use a queue other than MSMQ as the NLog target, but that's an ugly overkill.
Regarding your scenario, my initial thought is that it should never be the server's problem that a client has a problem, so I probably wouldn't send a message to the server when the client fails.
As I see it, there would be multiple problems/obstacles/challenges with that approach, e.g. what if the client can't reach the server at all? And how often will the client notify the server that it has experienced an error?
Of course I don't know the details of your setup, so it's hard to give specific advice, but in general I like to have a consistent strategy for handling errors, which usually involves plain old NLog logging and configuring the WARN and ERROR levels to go to the Windows Event Log.
This allows for setting up various tools (e.g. System Center Operations Manager or similar) to monitor all of your machines' event logs and raise error flags when something goes wrong.
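As an illustration of that strategy, here is a sketch of the setup done programmatically with NLog (the same rules are usually put in NLog.config instead; the event source name is a placeholder):

    using NLog;
    using NLog.Config;
    using NLog.Targets;

    class LoggingSetup
    {
        public static void Configure()
        {
            var config = new LoggingConfiguration();

            // Route WARN and above to the Windows Event Log, so monitoring
            // tools can pick the entries up from there
            var eventLog = new EventLogTarget
            {
                Name = "eventlog",
                Source = "MyClientService", // placeholder event source
                Log = "Application"
            };
            config.AddRule(LogLevel.Warn, LogLevel.Fatal, eventLog);

            LogManager.Configuration = config;
        }
    }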
I hope I've said something you can use :)
UPDATE
After thinking about it some more, I'm beginning to understand your problem, and I would prefer a solution where the client lets the HTTP listener at the other end know that it's having a problem, and the HTTP listener could then (maybe?) log that as an error.
Another option is that the HTTP listener at the other end could have an event, ReceivedClientError or something, that one could attach to in order to do whatever is right in the given situation.
In your case, you might put a message in an error queue. I would just avoid putting anything in the error queue as a general solution, because I think it confuses the purpose of the error queue: the "thing" in the error queue wouldn't be a message, and as such it would not be retryable etc.
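To make that concrete, here is a purely hypothetical sketch of what such a hook could look like; neither ReceivedClientError nor ClientErrorEventArgs exists in the gateway, the names are made up for illustration:

    using System;

    public class ClientErrorEventArgs : EventArgs
    {
        public string ClientId { get; set; }
        public string ErrorDetails { get; set; }
    }

    public class InboundService
    {
        // Raised when a client reports a problem; subscribers decide what
        // to do with it (log it, alert someone, etc.)
        public event EventHandler<ClientErrorEventArgs> ReceivedClientError;

        protected void OnReceivedClientError(string clientId, string details)
        {
            ReceivedClientError?.Invoke(this, new ClientErrorEventArgs
            {
                ClientId = clientId,
                ErrorDetails = details
            });
        }
    }

An attached handler could then log the report as an error or, as discussed above, decide to put something on a queue.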

Reliable WCF Service with MSMQ + Order processing web application. One way calls delivery

I am trying to implement a reliable WCF service with MSMQ based on this architecture (http://www.devx.com/enterprise/Article/39015).
A message may be lost if the queue is not available (even a cluster doesn't provide zero downtime).
Take a look at this simple order-processing workflow:
A user enters credit card details and makes a payment
Application receives a success result from payment gateway
Application sends a message as a "fire and forget"/"one way" call to a backend service via the WCF MSMQ binding
The user will be redirected on the “success” page
Message is stored in a REMOTE transactional queue (windows cluster)
The backend service dequeues and processes the message, completes the complex order-processing workflow and, as a result, sends an email confirmation to the user
Everything works as expected.
What I cannot understand is how we can guarantee that all "one way" calls will be delivered to the queue.
Duplex communication is not an option, because the user should be redirected to the result web page ASAP.
Imagine the case where a user receives the "success" page with text like "... Your payment was made, your order has started processing, and you will receive email notifications later ..." but the message itself is lost.
How can durability be implemented for step 3?
One possible solution that I can see is:
3a. Create a database record with the transaction details, marked as uncompleted, just to have some record of the transaction. This record can be used as a starting point for processing the lost message, in case the message never makes it into the queue.
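As an illustration of 3a, a sketch that writes the durable record before the one-way call is made; the table and column names are invented for the example:

    using System;
    using System.Data.SqlClient;

    static class PendingOrders
    {
        // Written BEFORE the fire-and-forget send; a background job can later
        // resubmit orders still marked 'Uncompleted' whose message never arrived
        public static void MarkUncompleted(string connectionString, Guid orderId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO PendingOrders (OrderId, Status, CreatedUtc) " +
                "VALUES (@id, 'Uncompleted', SYSUTCDATETIME())", conn))
            {
                cmd.Parameters.AddWithValue("@id", orderId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }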
I read this post
The main thing to understand about transactional MSMQ is that there are three distinct transactions involved in a transactional send to a remote queue.
The sender writes the message to a local queue.
The queue manager on the sender's machine transmits the message across the wire to the queue manager on the recipient machine.
The receiver service processes the queue message and then removes the message from the queue.
But it doesn't solve the described issue - as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.
But it doesn't solve the described issue - as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.
Actually, this is not correct. MSMQ always sends to a remote queue via a local queue, regardless of whether you are using WCF or not.
If you send a message to a remote queue and then look in Message Queuing in Server Manager, you will see under Outbound Queues that a queue has been created with the address of the remote queue. This is a temporary queue which is automatically created for you. If the remote queue is for some reason unavailable, the message will sit in the local queue until the remote queue becomes available, and then it will be transmitted.
So durability is provided by the three transactional phases:
transactionally write message locally
transactionally transmit message
transactionally receive and process message
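For reference, a minimal sketch of the first phase with System.Messaging; once Commit() returns, the local MSMQ service owns delivery to the remote queue and the sending application no longer needs to be running (the format name is a placeholder):

    using System.Messaging;

    class TransactionalSend
    {
        static void Main()
        {
            using (var queue = new MessageQueue(@"FormatName:DIRECT=OS:remotehost\private$\orders"))
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                // Written to a local outgoing queue first; MSMQ transmits it
                // to the remote queue asynchronously and reliably
                queue.Send("order payload", tx);
                tx.Commit(); // if Commit is never reached, the message is discarded
            }
        }
    }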
There are instances where you may drop messages, for example if your message processing happens outside the scope of the dequeue transaction, and instances where it is not possible to know whether the processing was successful (e.g. a back-end web service call times out). Of course, you could also have a badly formed message which will never be processed successfully, but in all cases it should be possible to design for these.
If you're using public queues in a clustered environment, then I think there may be more scope for failure, as clustering MSMQ introduces complexity (I have not really used it, so I don't know), so try to avoid it if possible.

Resume BizTalk dehydrated orchestration

How can I resume a dehydrated orchestration?
The orchestration in question should have been retrieving messages from an MSMQ queue,
but the user ID permission wasn't set on the queue, so the BizTalk box wasn't able to read from it.
I corrected the permissions, but the only options are Terminate and Suspend?
If the orchestration attempted to start and failed on the MSMQ receive, it's essentially hung and has not removed a message from the queue. I'd terminate it. The orchestration should clear and pick up the new messages. Does your orchestration implement a singleton pattern, or are you using ordered delivery on the receive? Either of those makes things a little more complicated.
Shouldn't you be restarting the BizTalk host instance for MSMQ?
Dehydrated means the orchestration is still waiting for something. I guess in your case you must be waiting for a correlated message from MSMQ. If you restart the receive host instance, it will try to reconnect all the connections (MSMQ, SQL, etc.) managed by that host instance. Then all messages will flow through to the orchestrations.
update 1:
Check the relevant receive location. Maybe it got disabled by BizTalk due to the permission problem. You will have to enable it manually.
update 0:
You don't have to resume a dehydrated orchestration. It's not the orchestration that reads from the queue but the MSMQ adapter. When an MSMQ message arrives, the receive location will route it into the MessageBox. If the said orchestration has a subscription (receive port) that matches the MSMQ message, it will be resumed by the BizTalk engine.
Can you suspend, then resume?
It's been a couple of years since I did BizTalk. Quirks like this were annoying. Even worse when there are 250k dehydrated instances and you need to script a restart for them. Ugh.
I feel for you.
BizTalk's ability to resume depends on where and how the failure occurred, and on whether it can replay any part of the operation; in most cases, when failing in an orchestration, some coding pattern needs to be used to allow it to resume.
