Rebus priority on message

Is it possible using Rebus to set priority on messages?
The scenario is that we have a queue worker signing documents for different services; for some of them the user awaits the result, and for others the signed document is just stored for later use.
It would then be nice to prioritize the messages of the users awaiting a response. Is this possible?
We are using Rebus2 with Azure Service Bus.

Unfortunately, since Azure Service Bus does not natively support message prioritization, and implementing it with the priority queue pattern would be cumbersome, it cannot readily be done by setting a priority on a message.
A simple approach that works in the general case is to have separate Rebus instances for different priorities, where one particular instance could then be used to "fast-track" messages that need to overtake all the other messages.
The instances could have the exact same configuration, except they use different input queues. This way, the routing configuration (endpoint mappings) gets to determine which priority a message gets.
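As a rough sketch (the queue names, message types and connection-string lookup below are invented for illustration, and handler registration is omitted), the two worker instances and the sending side could be wired up like this:

// Sketch only: names below are made up, handlers are omitted.
using System;
using Rebus.Activation;
using Rebus.Config;
using Rebus.Routing.TypeBased;

var connectionString = Environment.GetEnvironmentVariable("ASB_CONNECTION_STRING");

// "Fast-track" worker instance with its own input queue
var fastWorker = Configure.With(new BuiltinHandlerActivator())
    .Transport(t => t.UseAzureServiceBus(connectionString, "signing-fast"))
    .Start();

// Identical configuration for the non-urgent work, only the input queue differs
var bulkWorker = Configure.With(new BuiltinHandlerActivator())
    .Transport(t => t.UseAzureServiceBus(connectionString, "signing-bulk"))
    .Start();

// On the sending side, the endpoint mappings decide which "priority" (i.e. which queue) a message gets
var sender = Configure.With(new BuiltinHandlerActivator())
    .Transport(t => t.UseAzureServiceBus(connectionString, "signing-client"))
    .Routing(r => r.TypeBased()
        .Map<SignDocumentForUser>("signing-fast")       // user is waiting for the result
        .Map<SignDocumentForArchive>("signing-bulk"))   // signed document is just stored
    .Start();

await sender.Send(new SignDocumentForUser());

// Hypothetical message types
class SignDocumentForUser { }
class SignDocumentForArchive { }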

Related

Microservice to Microservice Architecture using gRPC : .NET Core

So I have this microservice architecture where there is an API gateway and 2 microservices, i.e. Configuration.API and API-1. Configuration.API is mainly responsible for parsing the JSON request, accessing the DB and updating status tables, and also for fetching required data; it even adds more values to the JSON request and sends it to API-1. API-1 is responsible for just generating a report based on the JSON passed.
Yes, I could merge Configuration.API into API-1 and make it a single service/container, but the requirement is to not merge and to create two different components, i.e. one component purely for fetching the data and updating the status, while the other just generates the reports.
So here is the question:
Should I use gRPC for Configuration.API, or is there a better way to achieve this?
Thank you.
RPC is synchronous communication, so you need a strong reason to use it for service-to-service communication. It brings fast and performant communication to the table, but it also couples the services. If you insist on using RPC, it is better to use MassTransit to implement it in a less coupled way. However, in most cases asynchronous event-based communication is recommended to avoid coupling (in that case, look at the CAP theorem, sagas and circuit breakers).
Since you said
"the requirement is to not merge and to create two different components"
and that is your reason, and also based on the fact that
"also for fetching required data, it even adds more values to the JSON request and sends it to API-1",
I think the second one makes more sense. However, I can't understand why you would change the database position, since you said the configuration service is responsible for that.
If your report service needs to request huge amounts of data to generate a report, you have to think about the design. There is no more detail about your domain here, so there cannot be an absolute answer to this, but consider reducing the data at insertion or request time, doing some sort of pre-calculation if you can, and also caching responses.
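If you do go the RPC route via MassTransit, a minimal request/response sketch could look like the following (the contracts, names, and the assumption that the bus and consumer endpoint are already configured are illustrative, not part of the original question):

// Sketch only: contracts and names are hypothetical.
using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical contracts exchanged between Configuration.API and API-1
public record GenerateReport(Guid RequestId, string Payload);
public record ReportGenerated(Guid RequestId, string ReportUrl);

// API-1 side: consume the request and respond over the bus
public class GenerateReportConsumer : IConsumer<GenerateReport>
{
    public async Task Consume(ConsumeContext<GenerateReport> context)
    {
        // ... generate the report from context.Message.Payload ...
        await context.RespondAsync(new ReportGenerated(context.Message.RequestId, "https://reports.example/123"));
    }
}

// Configuration.API side: request/response over the bus instead of a direct gRPC call
// (bus configuration and consumer endpoint registration are omitted here)
public static class ReportClient
{
    public static async Task<ReportGenerated> RequestReportAsync(IBus bus, string payload)
    {
        var client = bus.CreateRequestClient<GenerateReport>();
        var response = await client.GetResponse<ReportGenerated>(new GenerateReport(Guid.NewGuid(), payload));
        return response.Message;
    }
}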

How to create command by consuming message from kafka topic rather than through Rest API

I'm using Axon version 3.3, which seamlessly supports Kafka, with an annotation in the Spring Boot main class:
@SpringBootApplication(exclude = KafkaAutoConfiguration.class)
In our use case, the command-side microservice needs to pick up messages from a Kafka topic rather than exposing a REST API. It will store the event in the event store and then move it to another Kafka topic for the query-side microservice to consume.
Since KafkaAutoConfiguration is disabled, I cannot use the spring-kafka configuration to write a consumer. How can I consume a normal message in Axon?
I tried writing a normal Spring Kafka consumer, but since KafkaAutoConfiguration is disabled, the initial trigger for the command is not picked up from the Kafka topic.
I think I can help you out with this.
The Axon Kafka Extension is solely meant for Events.
Thus, it is not intended to dispatch Commands or Queries from one node to another.
This is very intentional, as Event messages have different routing needs compared to Command and Query messages.
Axon views Kafka as a fine fit for an Event Bus, and as such this is supported through the framework.
It is however not ideal for Command messages (should be routed to a single handler, always) or Query messages (can be routed to a single handler, several handlers or have a subscription model).
Thus, if you'd want to "abuse" Kafka for different types of messages in conjunction with Axon, you will have to write your own component/service for it.
I would however stick to the messaging paradigm and separate these concerns.
To greatly simplify routing messages between Axon applications, I'd highly recommend trying out Axon Server.
Additionally, here you can hear/see Allard Buijze point out the different routing needs per message type (thus the reason why Axon's Kafka Extension only deals with Event messages).

Dealing with "saga not found" scenarios

Are there any mechanisms in Rebus to deal with messages that would normally be handled by a saga but for which there is no current saga that matches the correlation property? Out of the box, I believe those messages are just consumed by Rebus but there is no visibility as to what happens with them.
E.g., NServiceBus has the IHandleSagaNotFound interface to allow endpoints to deal with this scenario.
Unfortunately there's no way to handle that right now. As you've probably found out, a message is simply logged stating that Rebus could not find an existing saga data instance for the incoming message.

SignalR: Why choose Hub vs. Persistent Connection?

I've been searching and reading up on SignalR recently and, while I see a lot of explanation of the difference between Hubs and Persistent Connections, I haven't been able to get my head around the next level: why would I choose one approach over the other?
From what I see in the Connection and Hubs section it seems that Hubs provide a topic system overlaying the lower-level persistent connections.
From the highly up-voted comment below:
Partially correct. You can get topics or groups in persistent connections as well. The big difference is dispatching different types of messages. For example, you may have different kinds of messages and want to send different kinds of payloads. With persistent connections you have to embed the message type in the payload (see the Raw sample), but hubs give you the ability to do RPC over a connection (they let you call methods on the client from the server and on the server from the client). Another big thing is model binding. Hubs allow you to pass strongly typed parameters to methods.
The example used in the documentation uses a chat room metaphor, where users can join a specific room and then only get messages from other users in the same room. More generically, your code subscribes to a topic and then gets just the messages published to that topic. With persistent connections you'd get all messages.
You could easily build your own topic system on top of the persistent connections, but in this case the SignalR team did the work for you already.
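For reference, the chat-room/topic behaviour described above maps directly onto hub groups; a minimal sketch (the hub name is made up, assuming classic ASP.NET SignalR 2.x and the addNewMessageToPage client callback used below) could look like this:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical hub: clients join a "topic" (group) and only receive messages published to it
public class ChatHub : Hub
{
    public Task JoinRoom(string room)
    {
        return Groups.Add(Context.ConnectionId, room);
    }

    public Task Send(string room, string name, string message)
    {
        // only clients that joined this group get the call
        return Clients.Group(room).addNewMessageToPage(name, message);
    }
}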
The main difference is that you can't do RPC with PersistentConnection; you can only send raw data. So instead of sending messages from the server like this
Clients.All.addNewMessageToPage(name, message);
you'd have to send an object with Connection.Broadcast() or Connection.Send() and then the client would have to decide what to do with that. You could, for example, send an object like this:
Connection.Broadcast(new {
    method = "addNewMessageToPage",
    name = "Albert",
    message = "Hello"
});
And on the client, instead of simply defining
yourHub.client.addNewMessageToPage = function(name, message) {
    // things and stuff
};
you'd have to add a callback to handle all incoming messages:
function addNewMessageToPage(name, message) {
    // things and stuff
}

connection.received(function (data) {
    var method = data.method;
    window[method](data.name, data.message);
});
You'd have to do the same kind of dispatching on the server side in the OnReceived method. You'd also have to deserialize the data string there instead of receiving the strongly typed objects as you do with hub methods.
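As an illustration (not an official pattern; the connection class and message envelope are made up), that server-side dispatching in OnReceived might look roughly like this:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Newtonsoft.Json.Linq;

// Hypothetical raw connection: every incoming payload carries a "method" field
// that we dispatch on manually, plus whatever data that method needs
public class RawChatConnection : PersistentConnection
{
    protected override Task OnReceived(IRequest request, string connectionId, string data)
    {
        var payload = JObject.Parse(data);        // deserialize the raw JSON string ourselves
        var method = (string)payload["method"];   // poor man's routing

        switch (method)
        {
            case "addNewMessageToPage":
                // rebroadcast in the same "method + data" envelope so clients can dispatch on it
                return Connection.Broadcast(new
                {
                    method = "addNewMessageToPage",
                    name = (string)payload["name"],
                    message = (string)payload["message"]
                });
            default:
                return base.OnReceived(request, connectionId, data);
        }
    }
}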
There aren't many reasons to choose PersistentConnection over Hubs. One reason I'm aware of is that it is possible to send preserialized JSON via PersistentConnection, which you can't do using hubs. In certain situations, this might be a relevant performance benefit.
Apart from that, see this quote from the documentation:
Choosing a communication model
Most applications should use the Hubs API. The Connections API could be used in the following circumstances:
The format of the actual message sent needs to be specified.
The developer prefers to work with a messaging and dispatching model rather than a remote invocation model.
An existing application that uses a messaging model is being ported to use SignalR.
Depending on your message structure, you might also get small performance benefits from using PersistentConnection.
You may want to take a look at the SignalR samples, specifically this here.
There are two ways to use SignalR: you can access it at a low level by overriding its PersistentConnection class, which gives you a lot of control over it; or you can let SignalR do all of the heavy lifting for you, by using the high level ‘Hubs’.
PersistentConnection is a lower-level API; you can perform actions at more specific points in time, such as when the connection is opened or closed. In most applications the Hub is the best choice.
There are three major points to consider when comparing these two:
Message Format
Communication model
SignalR customization
With hubs, message formatting is basically handled for you, but with persistent connections the message is raw and has to be tokenized and parsed back and forth. If message size is important, also note that the payload of a persistent connection is much smaller than that of a hub.
When it comes to the communication model, persistent connections basically have a single function for sending and receiving messages, while hubs take a remote procedure call model with a unique function per requirement.
When it comes to customization, since persistent connections are more low level, they may give you more control.

Biztalk client defined subscription items

I am designing a BizTalk solution which requires client applications to subscribe to and receive only a certain subset of event messages, depending on their user permissions. Subscription will be done through topic- or content-based routing. The client will subscribe once and receive many messages until they choose to unsubscribe.
Client applications will number in the hundreds, and subscribed topics could change on a regular basis, so defining an individual send port from BizTalk for each receiver isn't a viable solution.
I have thought I could build an additional message broker service which holds the individual client subscriptions and distributes messages sent from a BizTalk port.
I have also seen that a recipient list pattern can be built using orchestrations. This appears to me to still follow a request-response pattern, though, and I am after a one-way subscribe message followed by many returned event messages.
My message broker solution seems to me to be doubling up on what BizTalk should be good at, so I imagine I am missing some important functionality somewhere. Has anyone tried such an application before and can give some pointers? Should I be investigating the ESB Toolkit as a solution? I have had a look on the net but nothing makes it very clear for this type of topic-subscription model.
Thanks,
Phil
Do take a look at the ESB Toolkit. You can use the itinerary functionality that it adds to BizTalk, either with one of the built-in resolvers (e.g., UDDI) or with your own custom resolver. This allows you to route messages based on configuration (stored in Business Rules or elsewhere).
You will find a developer-oriented overview video of the ESB Toolkit on MSDN, which is a decent introduction to the design process and tooling. There are several other helpful videos there as well.
Your specific scenario can be accomplished with a single itinerary, as described here. Use a receive pipeline with the ESB Dispatch Disassembler component, configure multiple resolvers, and for each resolver a new message is produced.
There are also two samples to look at:
The Itinerary On-Ramp Sample - builds a set of SOAP headers that contain the itinerary that you create in the test client, loads the specific message file from disk, appends the itinerary headers to the message, and submits it to the ESB through an Itinerary on-ramp for processing.
The Scatter-Gather Sample - Also appends SOAP headers containing the itinerary to the message, which is submitted to the ESB through an on-ramp for processing. A Broker orchestration analyzes the settings for its itinerary step, retrieves a collection of resolvers associated with the itinerary step, and for each of those resolvers resolves the service endpoint. After that, the orchestration activates the proper ServiceDispatcher orchestration instances to dispatch the outbound request messages.
You should also look at "How to: Route a Single Message to Multiple Recipients Using an Itinerary Routing Slip" or perhaps look into creating a custom itinerary message service (documentation is here).
