Accepting commands and raising events from a service - Rebus

We are using Rebus as the message bus over a RabbitMQ message broker to enable event-driven communication between microservices.
Using bus.Send(command), service A sends a command over a specific queue, which service B listens on. We are using type-based routing.
During the workflow of the command, service B needs to emit events for changes in status (PerformingA, PerformedA, etc.). One of the handlers for an event will be in service B itself (that is, it will listen to a specific event and call another API).
To achieve this, do I need to have 3 instances of Rebus in service B? One for subscribing to the command from service A, another for raising events, and a third to handle the event?

Do I need to have 3 instances of Rebus in service B? One for subscribing to the command from service A, another for raising events, and a third to handle the event?
No 😃 you only need one Rebus instance in service B.
One Rebus endpoint (with one input queue) is enough to:
...receive the command (you already know that 😊)
...subscribe to an event (e.g. await bus.Subscribe<YourEvent>();)
...publish the event (e.g. await bus.Publish(new YourEvent(...));)
...receive the event (because you subscribed to it, creating a binding from a topic named after the YourEvent type to service B's input queue)
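
To make that concrete, here is a minimal sketch of such a single-endpoint setup. The queue name, connection string, and message types are placeholders, not anything from the question:

using Rebus.Activation;
using Rebus.Config;

// One activator and one bus with a single input queue for service B
var activator = new BuiltinHandlerActivator();

// Handles the command sent by service A (service A maps the command type to
// this queue on its own bus, e.g. .Routing(r => r.TypeBased().Map<PerformWork>("service-b")))
activator.Handle<PerformWork>(async (b, message) =>
{
    await b.Publish(new PerformingA()); // raise a status event
    // ... perform the actual work ...
    await b.Publish(new PerformedA());
});

// Handles an event that service B itself subscribed to, e.g. to call another API
activator.Handle<PerformedA>(async message =>
{
    // call the other API here
});

var bus = Configure.With(activator)
    .Transport(t => t.UseRabbitMq("amqp://localhost", "service-b"))
    .Start();

// Creates the binding from the PerformedA topic to service B's input queue
await bus.Subscribe<PerformedA>();

record PerformWork();
record PerformingA();
record PerformedA();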

Related

Can NServiceBus have two endpoints with different handlers to receive different RabbitMQ events?

We have two NServiceBus endpoints in a .NET Core 3.1 service using RabbitMQ as the transport. We're using two endpoints since we're subscribing to different events from different RabbitMQ instances. We have event handlers created for the specific events from each of the RabbitMQ instances.
When we create the first endpoint, we can go into RabbitMQ admin and see the bindings for our exchanges being set up correctly:
From:
  Event1
  nsb.delay-delivery
    ||
  (This exchange)
    ||
To:
  Queue1
The problem is that the second endpoint isn't getting the same exchange binding setup. The code is the same for the creation of the endpoints, so we're not sure what's going wrong or how to debug it.
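
Roughly, the endpoint creation code looks like this (a sketch with placeholder names and connection strings; the exact transport API depends on the NServiceBus.RabbitMQ version in use):

using NServiceBus;

// Sketch of one of the two endpoints; the second one is configured the same
// way with its own name and its own RabbitMQ connection string.
var endpointConfiguration = new EndpointConfiguration("Endpoint2");

var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
transport.UseConventionalRoutingTopology(); // creates exchanges/bindings per event type
transport.ConnectionString("host=rabbit-instance-2"); // placeholder

endpointConfiguration.SendFailedMessagesTo("error");

var endpointInstance = await Endpoint.Start(endpointConfiguration);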
This is what the second exchange looks like:
From:
  nsb.delay-delivery
    ||
  (This exchange)
    ||
To:
  Queue2
Missing from the binding hierarchy is Event2. If we look at the exchange for Event2, there is no routing set up to our Queue2.
We've compared the *-configuration.txt files that are written to the .diagnostics directory of our service, but we see no difference between the logs for the two endpoints. When testing, our service does receive events from Queue1 in our handlers. We never see our handlers triggered for Queue2.
Has anyone successfully set up a service that can handle events from two different RabbitMQ services with two different endpoints?
What's the best way to go about debugging this? The NServiceBus documentation is a little limited on this part.

Axon4 - Re-queue failed messages

In the scenario below, what would be the behavior of Axon?
The command bus received the command.
It creates an event.
However, the messaging infrastructure is down (say, Kafka).
Does Axon have re-queuing capability for events, or is there any other alternative to handle this scenario?
If you're using Axon, you know it differentiates between Command, Event and Query messages. I'd suggest being specific in your question about which message type you want to retry.
However, I am going to assume it's about events, as you mention Kafka.
If this is the case, I'd highly recommend reading the reference guide on the matter, as it states how you can decouple Kafka publication from actual event storage in Axon.
Simply put, use a TrackingEventProcessor as the means to publish events to Kafka, as this ensures a dedicated thread is used for publication instead of the same thread that stores the event. Additionally, a TrackingEventProcessor can be replayed and can thus "re-process" events.
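
Axon itself is Java, but the idea behind a tracking processor is language-neutral. Here is a rough sketch (written in C# with entirely hypothetical types; this is not the Axon API) of why decoupling storage from publication handles a Kafka outage:

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical stand-ins for the event store and the Kafka producer.
record StoredEvent(long Position, string Payload);
interface IEventStore { IEnumerable<StoredEvent> ReadFrom(long position); }
interface IKafkaProducer { void Publish(string payload); } // throws while Kafka is down

class TrackingPublisher
{
    // Runs on its own thread. The event was already durably stored by the
    // command-handling thread, so a Kafka outage only delays publication.
    public void Run(IEventStore store, IKafkaProducer producer)
    {
        long token = 0; // tracking token: position of the last published event
        while (true)
        {
            try
            {
                foreach (var evt in store.ReadFrom(token))
                {
                    producer.Publish(evt.Payload);
                    token = evt.Position; // advance only after a successful publish
                }
            }
            catch (Exception)
            {
                Thread.Sleep(1000); // retry from the same token once Kafka is back
            }
        }
    }
}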

How to create a command by consuming a message from a Kafka topic rather than through a REST API

I'm using Axon (version 3.3), which seamlessly supports Kafka with an annotation in the Spring Boot main class:
@SpringBootApplication(exclude = KafkaAutoConfiguration.class)
In our use case, the command-side microservice needs to pick messages up from a Kafka topic rather than exposing a REST API. It will store the event in the event store and then move it to another Kafka topic for the query-side microservice to consume.
Since KafkaAutoConfiguration is disabled, I cannot use the spring-kafka configuration to write a consumer. How can I consume a normal message in Axon?
I tried writing a normal Spring Kafka consumer, but since Kafka auto-configuration is disabled, the initial trigger for the command is not picked up from the Kafka topic.
I think I can help you out with this.
The Axon Kafka Extension is solely meant for Events.
Thus, it is not intended to dispatch Commands or Queries from one node to another.
This is very intentional, as Event messages have different routing needs as opposed to Command and Query messages.
Axon views Kafka as a fine fit for an Event Bus, and as such this is supported through the framework.
It is, however, not ideal for Command messages (which should always be routed to a single handler) or Query messages (which can be routed to a single handler, to several handlers, or via a subscription model).
Thus, if you'd want to "abuse" Kafka for different types of messages in conjunction with Axon, you will have to write your own component/service for it.
I would, however, stick to the messaging paradigm and separate these concerns.
To greatly simplify routing messages between Axon applications, I'd highly recommend trying out Axon Server.
Additionally, here you can hear/see Allard Buijze point out the different routing needs per message type (hence why Axon's Kafka Extension only deals with Event messages).
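
For what it's worth, such a bridge component is conceptually just a plain Kafka consumer that dispatches one command per record. Sketched here in C# with the Confluent.Kafka client and a hypothetical ICommandGateway and command type (in an Axon application the equivalent would be a Spring Kafka listener calling Axon's CommandGateway):

using System.Threading.Tasks;
using Confluent.Kafka;

// Hypothetical command gateway; in Axon this role is played by CommandGateway.
interface ICommandGateway { Task Send(object command); }
record CreateOrder(string Payload); // hypothetical command type

class KafkaCommandBridge
{
    // Consumes raw Kafka records and dispatches a command for each one,
    // keeping Kafka (event transport) separate from command routing.
    public async Task Run(ICommandGateway gateway)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092", // placeholder
            GroupId = "command-bridge",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("incoming-commands"); // placeholder topic

        while (true)
        {
            var result = consumer.Consume();
            await gateway.Send(new CreateOrder(result.Message.Value));
        }
    }
}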

How to build a presence system with the new Firebase Queue

I have a chat application using Firebase and Node.js where I keep track of presence by running a single worker thread on the server that monitors child_added and child_removed events on the Firebase presence channel and updates our presence database tables accordingly.
My question is this: now that Firebase Queue exists (https://www.firebase.com/blog/2015-05-15-introducing-firebase-queue.html),
can I use the queue to replace the worker thread that I have running on the server to monitor presence and child_added events? Looking at the current examples, it looks like I would create a reference to the queue on the client and then set onDisconnect and connect events to push into that queue from the client. However, I'd like to secure it a bit more and not rely on the client so much. I'd also like to have the queue process the event by archiving it to a third-party logging service, with credentials or details I wouldn't want to expose to the client.
Does this mean I would still need a server-side worker process? And if so, what benefit would Firebase Queue be in this use case?
Firebase Queue is not a hosted solution - you still need to run it on your own server.
The main advantage of using a queue over a single listener process is the ability to run multiple workers for the same tasks so there's not a single point of failure. Using the queue you'll know that the worker processes are synchronized such that only one worker will be processing a task at any one point in time, and if a worker dies during processing or takes too long, another worker will pick it up again once the task has timed out.
It sounds like you're trying to create some kind of audit trail for presence, but there's currently no way to report presence directly from the server: you'll need to rely on the client at some point. Your security rules can enforce that a write is a boolean to a specific location in your database, but they can't enforce that the client was in any particular presence state when writing it. Also note that there's no push or childByAutoId equivalent for onDisconnect handlers, so to push to a queue you'd have to do something like:
var ref = new Firebase(…); // reference to the queue's tasks location
var disconnectTask = {};
var pushId = ref.push().key(); // generates the ID client-side; no network traffic
disconnectTask[pushId] = { /* populate with task data here */ };
ref.onDisconnect().update(disconnectTask); // queued on the server, runs on disconnect
Note that the push ID will be generated client-side before the operation is sent to the server and so the task won't necessarily be in order when added to the queue.

How does the BizTalk Batching Service work?

I am working on a BizTalk EDI project and am struggling with the BizTalk Batching service: it will not subscribe to my published message in the message box.
I have created the party and agreement, and in the batch configuration I set the filter to something like:
EDI.ToBeBatched == True
and BTS.MessageType == MyMessageType
But BizTalk kept complaining that my message does not have a subscriber.
When I query the subscriptions in the Group Hub, I can find two instance subscriptions related to my batch, but neither of them has my customized filter condition.
Could someone show me how the batching service works? That is: when a message is published to the message box, how does the BizTalk Batching service know which batch it belongs to?
So, the documentation on this is pretty complete: http://msdn.microsoft.com/en-us/library/bb226413.aspx
It explains what that Filter is for and how to route messages to the Batching instance.
Your filter should NOT have the "EDI.ToBeBatched" property set to true. BizTalk will set it to true for you in the EDI receive pipeline when your specified filter conditions (at the party level) are met.
To be more specific, the "BatchMarker" component of the EDIReceive pipeline will set the subscription conditions necessary for the special batching orchestration instance (running in the BizTalk EDI Application) to subscribe to, batch, and deliver your EDI messages.
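
In other words, the filter you configure yourself should contain only the business condition, leaving EDI.ToBeBatched for BizTalk to set (MyMessageType here is the placeholder from the question):

BTS.MessageType == MyMessageType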
