As you can see, there is a Headquarters as the root node and some branches as child nodes. There is a message of type Data, and I want to publish it based on the content of the Data object, for example:
if (data.value == xxxx) publish(data, Br1, Br2)
else if (data.value == yyyy) publish(data, Br3, Br4)
else if (data.value == zzzz) publish(data, Br5, Br6)
This is somewhat a customized version of the pub/sub pattern: I want to publish a message of type Data to only certain special subscribers, based on the content of the message.
Is there a solution in Rebus?
There are several solutions in Rebus :)
For your scenario, I can see two ways of solving it: 1) Use custom topics, or 2) Implement a real content-based router.
If it makes sense, you can model this pub/sub scenario with Rebus' topics API and let topics take care of the routing. This works if each of your data messages belongs to some category, which your subscribers can then subscribe to.
Compared to "real" topic-based queueing systems, like e.g. RabbitMQ, the topics API in Rebus is very crude. It doesn't allow for wildcards (*) or anything advanced like that – topics are just simple strings that you can subscribe to and then use as a pub/sub channel to have an event routed to multiple subscribers.
You can use it like this on the subscriber's end:
await bus.Advanced.Topics.Subscribe("department_a");
and then on the publisher's end:
var data = new Data(...);
await bus.Advanced.Topics.Publish("department_a", data);
If that doesn't cut it, you can insert a "real" content-based router, which is simply an endpoint to which you await bus.Send(eachDataMessage), and which in turn forwards the message to the relevant subscribers.
It can be done at two levels with Rebus, depending on your requirements. If it is enough to look at the message's headers, you should implement it as a "transport message forwarder", because that skips deserialization and provides a nice API for simply forwarding messages:
Configure.With(...)
    .Transport(t => t.UseMsmq("router"))
    .Routing(r =>
    {
        r.AddTransportMessageForwarder(async transportMessage =>
        {
            var headers = transportMessage.Headers;
            var subscribers = Decide(headers);
            return ForwardAction.ForwardTo(subscribers);
        });
    })
    .Start();
If you need to look at the actual message, you should just implement an ordinary message handler and then use the bus to forward the message:
public class Router : IHandleMessages<Data>
{
    readonly IBus _bus;

    public Router(IBus bus)
    {
        _bus = bus;
    }

    public async Task Handle(Data message)
    {
        var subscribers = Decide(message);

        foreach (var subscriber in subscribers)
        {
            await _bus.Advanced.TransportMessage.Forward(subscriber);
        }
    }
}
The custom-implemented router is the most flexible solution, as you can implement any logic you like, but as you can see it is slightly more involved.
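Note that Decide in the snippets above is not a Rebus API – it is a placeholder for your own routing logic. Going by the pseudocode in the question, it could be sketched along these lines (the queue names are made up for the example):

```csharp
// Hypothetical content-based routing logic: maps the message content to the
// input queue names of the subscribers that should receive it.
static IEnumerable<string> Decide(Data message)
{
    switch (message.Value)
    {
        case "xxxx": return new[] { "br1", "br2" };
        case "yyyy": return new[] { "br3", "br4" };
        default: return new[] { "br5", "br6" };
    }
}
```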
(*) Rebus doesn't allow for using wildcards in general, although it does pass topics directly on to RabbitMQ if you happen to be using that as the transport, which means that you can actually take full advantage of RabbitMQ's wildcards there (see this issue for some more info about that)
static void Main()
{
    using (var activator = new BuiltinHandlerActivator())
    {
        activator.Handle<Packet>(async (bus, packet) =>
        {
            string subscriber = "subscriberA";
            await bus.Advanced.TransportMessage.Forward(subscriber);
        });

        Configure.With(activator)
            .Logging(l => l.ColoredConsole(minLevel: LogLevel.Warn))
            .Transport(t => t.UseMsmq("router"))
            .Start();

        for (int i = 0; i < 10; i++)
        {
            activator.Bus.SendLocal(
                new Packet()
                {
                    ID = i,
                    Content = "content" + i.ToString(),
                    Sent = false,
                }).Wait();
        }
    }

    Console.ReadLine();
}
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    scope.EnlistRebus();
    Packet packet = ReadFromDB();
    activator.Bus.SendLocal(packet).Wait();
    scope.Complete();
}
activator.Handle<Packet>(async (bus, packet) =>
{
    string subscriber = "subscriberA";
    await bus.Advanced.TransportMessage.Forward(subscriber);
});
using (var activator = new BuiltinHandlerActivator())
{
    activator.Handle<Packet>(async message =>
    {
        string connectionString =
            "Data Source=.;Initial Catalog=Rebus;User ID=sa;Password=123456";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            string queryString = @"INSERT INTO CLIENTPACKET (ID, CONTENT, SENT) VALUES (@id, @content, @sent)";

            connection.Open();

            using (SqlCommand command = new SqlCommand(queryString, connection))
            {
                command.Parameters.Add(new SqlParameter("@id", message.ID));
                command.Parameters.Add(new SqlParameter("@content", message.Content));
                command.Parameters.Add(new SqlParameter("@sent", message.Sent));
                await command.ExecuteNonQueryAsync();
            }
        }
    });

    Configure.With(activator)
        .Logging(l => l.ColoredConsole(minLevel: LogLevel.Warn))
        .Transport(t => t.UseMsmq("subscriberA"))
        .Routing(r => r.TypeBased().MapAssemblyOf<Packet>("router"))
        .Options(o =>
        {
            TransactionOptions tranOp = new TransactionOptions();
            tranOp.IsolationLevel = IsolationLevel.ReadCommitted;
            o.HandleMessagesInsideTransactionScope(tranOp);
            o.SetNumberOfWorkers(2);
            o.SetMaxParallelism(2);
        })
        .Start();

    activator.Bus.Subscribe<Packet>().Wait();

    Console.WriteLine("Press ENTER to quit");
    Console.ReadLine();
}
I'm using MassTransit with RabbitMQ in a .NET 6 web application. My goal is to keep several instances of an application, running on different plants, in sync. The application needs to be able to publish and consume messages.
When a site publishes something, it is broadcast to all the sites' queues (including its own; the publishing site will simply discard the message).
To do this, I configured the MassTransit queue names with the plant's suffix, e.g. norm-queue-CV, norm-queue-MB. I also configured the consumer to bind to a generic fanout exchange name (norm-exchange).
Here my configuration extract:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMassTransit(x =>
    {
        x.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host(new Uri(_configuration["RabbitMQ:URI"] + _configuration["RabbitMQ:VirtualHost"]), $"ENG {_configuration["Application:PlantID"]} Producer", h =>
            {
                h.Username(_configuration["RabbitMQ:UserName"]);
                h.Password(_configuration["RabbitMQ:Password"]);
            });

            cfg.Publish<NormCreated>(x =>
            {
                x.Durable = true;
                x.AutoDelete = false;
                x.ExchangeType = "fanout"; // default, allows any valid exchange type
            });
        }));
    });

// consumer
var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri(_configuration["RabbitMQ:URI"] + _configuration["RabbitMQ:VirtualHost"]), $"ENG {_configuration["Application:PlantID"]} Consumer", h =>
    {
        h.Username(_configuration["RabbitMQ:UserName"]);
        h.Password(_configuration["RabbitMQ:Password"]);
    });

    cfg.ReceiveEndpoint($"norm-queue-{_configuration["Application:PlantID"]}", e =>
    {
        e.Consumer<NormConsumer>();
        e.UseConcurrencyLimit(1);
        e.UseMessageRetry(r => r.Intervals(100, 200, 500, 800, 1000));
        e.Bind<NormCreated>();
        e.Bind("norm-exchange");
    });
});

busControl.Start();
busControl.Start();
And here is how NormConsumer is defined:
public class NormConsumer : IConsumer<NormCreated>
{
    private readonly ILogger<NormConsumer>? logger;

    public NormConsumer()
    {
    }

    public NormConsumer(ILogger<NormConsumer> logger)
    {
        this.logger = logger;
    }

    public async Task Consume(ConsumeContext<NormCreated> context)
    {
        logger.LogInformation("Norm Submitted: {NormID}", context.Message.NormID);
        //await context.Publish<NormCreated>(new
        //{
        //    context.Message.OrderId
        //});
    }
}
Here are the queues that were automatically created. To me they look fine.
And here are the exchanges created. I was trying to get only one exchange (norm-exchange), but the other 2 are created as well.
My problem is first of all to understand whether my layout makes sense (I'm quite new to RabbitMQ/MassTransit).
Moreover, I'd like to override how exchanges are named, forcing these queues to use only one exchange: "norm-exchange". I tried to override it in the "producer" part, but wasn't able to do it.
RabbitMQ broker topology is covered extensively in RabbitMQ - The Details, and also in the documentation.
You do not need to call Bind in the receive endpoint; consumer message types are already bound for you. Remove both Bind statements, and any published messages will be routed by type to the receive endpoints.
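With both Bind calls removed, the receive endpoint from the question would reduce to something like this sketch (names taken from the question; the retry/concurrency settings are kept as-is):

```csharp
cfg.ReceiveEndpoint($"norm-queue-{_configuration["Application:PlantID"]}", e =>
{
    e.Consumer<NormConsumer>();
    e.UseConcurrencyLimit(1);
    e.UseMessageRetry(r => r.Intervals(100, 200, 500, 800, 1000));

    // No Bind calls needed: MassTransit automatically binds the consumer's
    // message types (here NormCreated) to the receive endpoint's exchange.
});
```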
I have a service that requests a URL and validates the server's SSL certificate. The code has been running smoothly with HttpWebRequest on the full .NET Framework, but now I want to migrate it to HttpClient and .NET Core. I can get the certificate like this (the approach is recommended in multiple blog posts and Stack Overflow answers):
X509Certificate2 cert = null;

var httpClient = new HttpClient(new HttpClientHandler
{
    ServerCertificateCustomValidationCallback = (request, certificate, chain, errors) =>
    {
        cert = certificate;
        return true;
    }
});
httpClient.GetAsync(...);
The issue here is that I constantly create new HttpClient instances, which isn't recommended. I want to move to HttpClientFactory, which is why I add the following to my setup code:
services
    .AddHttpClient("sslclient", x =>
    {
        ...
    })
    .ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
    {
        ServerCertificateCustomValidationCallback = (request, certificate, chain, errors) =>
        {
            return true;
        }
    });
The challenge now is that the code that creates the client no longer has access to ServerCertificateCustomValidationCallback:
var httpClient = httpClientFactory.CreateClient("sslclient");
Anyone know how to solve this?
Someone on Reddit suggested the following solution. Once the call to AddHttpClient has been made, it is no longer possible to modify the HttpClientHandler. It is possible to share a resource, though:
var certificates = new ConcurrentDictionary<string, X509Certificate2>();
services.AddSingleton(certificates);

services
    .AddHttpClient("sslclient", x =>
    {
        ...
    })
    .ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
    {
        ServerCertificateCustomValidationCallback = (request, certificate, chain, errors) =>
        {
            certificates.TryAdd(request.RequestUri.Host, new X509Certificate2(certificate));
            return true;
        }
    });
In the code making the HTTP request, you'd need to inject the certificates dictionary as well. Once the request has been made, you can check for a certificate in the dictionary:
var response = await httpClient.GetAsync(url);

if (certificates.ContainsKey(new Uri(url).Host))
{
    // Happy days!
}
I've worked through the unit testing examples in the SignalR 2 documentation, but now I'd like to test that my hub only notifies a single user in certain situations.
My hub code looks like this:
public class NotificationsHub : Hub
{
    public void RaiseAlert(string message)
    {
        Clients.All.RaiseAlert(message);
    }

    public void RaiseNotificationAlert(string userId)
    {
        if (userId == null)
        {
            // Notify all clients
            Clients.All.RaiseAlert("");
            return;
        }

        // Notify only the client for this userId
        Clients.User(userId).RaiseAlert("");
    }
}
My unit test for checking that all clients are notified looks like this (it's based on the Microsoft example):
[Test]
public void NotifiesAllUsersWhenNoUserIdSpecified()
{
    // Based on: https://learn.microsoft.com/vi-vn/aspnet/signalr/overview/testing-and-debugging/unit-testing-signalr-applications

    // Arrange
    // A fake of the client contract so we can verify calls
    var mockClients = new Mock<IClientContract>();
    mockClients.Setup(m => m.RaiseAlert(It.IsAny<string>())).Verifiable();

    // A mock of our SignalR hub's clients
    var mockClientConnCtx = new Mock<IHubCallerConnectionContext<dynamic>>();
    mockClientConnCtx.Setup(m => m.All).Returns(mockClients.Object);

    // Set the hub's connection context to the mock context
    var hub = new NotificationsHub
    {
        Clients = mockClientConnCtx.Object
    };

    // Act
    hub.RaiseNotificationAlert(null);

    // Assert
    mockClients.Verify(m => m.RaiseAlert(It.IsAny<string>()));
}
What I'm not sure about is how to change the collection of clients represented by the var mockClients = new Mock<IClientContract>() line into a group of individual clients, so that I can test that if I notify user 1, users 2 and 3 were not notified.
I found another question about how to unit test groups and one of the answers pointed to the unit tests for the SignalR codebase.
Looking at those examples I worked out that I needed to add mocking of calls to the User method of the mockClients. That ended up looking like this:
public interface IClientContract
{
    void RaiseAlert(string message);
}

[Test]
public void NotifiesOnlySpecifiedUserWhenUserIdSent()
{
    // Adapted from code here: https://github.com/SignalR/SignalR/blob/dev/tests/Microsoft.AspNet.SignalR.Tests/Server/Hubs/HubFacts.cs

    // Arrange
    // Set up the individual mock clients
    var client1 = new Mock<IClientContract>();
    var client2 = new Mock<IClientContract>();
    var client3 = new Mock<IClientContract>();

    client1.Setup(m => m.RaiseAlert(It.IsAny<string>())).Verifiable();
    client2.Setup(m => m.RaiseAlert(It.IsAny<string>())).Verifiable();
    client3.Setup(m => m.RaiseAlert(It.IsAny<string>())).Verifiable();

    // Set the connection context to return the mock clients
    var mockClients = new Mock<IHubCallerConnectionContext<dynamic>>();
    mockClients.Setup(m => m.User("1")).Returns(client1.Object);
    mockClients.Setup(m => m.User("2")).Returns(client2.Object);
    mockClients.Setup(m => m.User("3")).Returns(client3.Object);

    // Assign our mock context to our hub
    var hub = new NotificationsHub
    {
        Clients = mockClients.Object
    };

    // Act
    hub.RaiseNotificationAlert("2");

    // Assert
    client1.Verify(m => m.RaiseAlert(It.IsAny<string>()), Times.Never);
    client2.Verify(m => m.RaiseAlert(""), Times.Once);
    client3.Verify(m => m.RaiseAlert(It.IsAny<string>()), Times.Never);
}
I am looking into Rebus and want to use it with Azure Service Bus. Using it with regular queues was easy, but when I want to use a topic instead I can't get it to work.
Has anyone here done a setup using topics/subscriptions? This is what I have so far.
static void Main(string[] args)
{
    _bus1 = InitializeBus(System.Environment.MachineName);
    _bus2 = InitializeBus(System.Environment.MachineName + "_2");
    _bus3 = InitializeBus();

    Run();

    Console.WriteLine("Press Enter to exit!");
    Console.ReadLine();
}

private static void Run()
{
    try
    {
        _bus1.Handle<string>((b, c, m) => { Console.WriteLine(m); return Task.CompletedTask; });
        _bus2.Handle<string>((b, c, m) => { Console.WriteLine(m); return Task.CompletedTask; });

        _bus1.Bus.Subscribe<string>();
        _bus2.Bus.Subscribe<string>();

        _bus3.Bus.Publish("Publish test message");
    }
    catch (Exception ex)
    {
        throw;
    }
}

private static BuiltinHandlerActivator InitializeBus(string queueName = null)
{
    var activator = new BuiltinHandlerActivator();

    if (string.IsNullOrEmpty(queueName))
        Configure.With(activator)
            .Transport(t => t.UseAzureServiceBusAsOneWayClient(connectionString))
            .Options(o => { o.SetNumberOfWorkers(10); o.SetMaxParallelism(10); })
            .Start();
    else
        Configure.With(activator)
            .Transport(t => t.UseAzureServiceBus(connectionString, queueName).EnablePartitioning().DoNotCreateQueues())
            .Options(o => { o.SetNumberOfWorkers(10); o.SetMaxParallelism(10); })
            .Start();

    return activator;
}
First I create all the buses. I am using DoNotCreateQueues() since I don't want the queues to be created in the root as duplicates, but only under the topic as subscriptions.
Then I set up the buses, and the Publish works fine: one topic is created with 2 subscriptions under it, and there is 1 message in each of those subscriptions. But the messages are never collected.
If I remove the DoNotCreateQueues() call from the configuration the code works, but then 2 queues are created in the root together with the topic and its 2 subscriptions, and I can't have it like that.
Best Regards
Magnus
Rebus uses topics by creating a subscription for each topic you subscribe to, and then configuring the subscription to forward received messages to the input queue of the bus.
If the bus does not have an input queue with the expected name (either one created by Rebus, or one you created manually), things will not work.
The reason DoNotCreateQueues() exists is to allow expert users to configure their queue settings beyond what Rebus is capable of (and willing) to do. It requires pretty detailed knowledge of how Rebus expects your queue entities to be laid out, though, so I would recommend that almost everyone NOT create anything manually, and simply let Rebus set things up.
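To illustrate, a minimal setup that lets Rebus create everything itself could look like this sketch (assuming the same connectionString as in the question; the input queue name "subscriber1" is made up for the example):

```csharp
var activator = new BuiltinHandlerActivator();

activator.Handle<string>(async message => Console.WriteLine(message));

Configure.With(activator)
    // No DoNotCreateQueues(): Rebus creates the input queue itself, along
    // with the topic subscription that forwards published messages into it
    .Transport(t => t.UseAzureServiceBus(connectionString, "subscriber1"))
    .Start();

await activator.Bus.Subscribe<string>();
```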
Coming from the Meteor world, I'm curious how to replicate its cached pubsub/observer functionality. For a basic example, let's say I have a todo list where each todo has a userId, and I want to keep todos private to each userId (but a userId could exist on multiple connected devices, e.g. phone + desktop). I imagine I have to create some publish function that verifies the userId by the socketId from the sent request, then create a socket namespace specific to that query (since the query could include more than a userId constraint), and then register an emitter that only sends the changes to those socketIds that are verified to listen on the given namespace. Am I close? All my research just returns basic things like publishing to all connected users based on keywords. Any links to reading material would be great! Here's a first attempt, with the missing logic in comments...
export function sendTodosByUserId(io, userId) {
  // How to auth? By linking a client socketId to a user in a lookup table?
  connect()
    .then(conn => {
      r
        .table('todos')
        .filter(todos => todos("userId").eq(userId))
        .changes().run(conn, (err, cursor) => {
          cursor.each((err, change) => {
            // Do I emit a unique message? namespace? How do I handle 2 clients using the same userId?
            io.emit('TODO_CHANGE', change);
          });
        });
    });
}
You could implement some way of mapping new sockets to the correct user. For example, you could put the userId in an Express session and store the user's socket ids in a simple object.
var userSockets = {};

io.sockets.on('connection', function(socket) {
  var userId = socket.handshake.session.userId;

  if (userSockets[userId]) {
    userSockets[userId].push(socket.id);
  } else {
    userSockets[userId] = [socket.id];
    sendTodosByUserId(io, userId);
  }

  socket.on('disconnect', function() {
    var i = userSockets[userId].indexOf(socket.id);
    userSockets[userId].splice(i, 1);

    if (userSockets[userId].length === 0) {
      delete userSockets[userId];
    }
  });
});
In your sendTodosByUserId, you would just loop over the socket array belonging to the user, and emit to every socket.
var sockets = userSockets[userId];

for (var i = 0; i < sockets.length; ++i) {
  io.to(sockets[i]).emit('TODO_CHANGE', change);
}
Note that this will not work if you have multiple nodes the user can be connected to. In that case you might have to store the userSockets object in e.g. Redis.
Alternatively, you could just have your users join a room named e.g. user:<userId>, and emit to this on every todo-change.
io.sockets.on('connection', function(socket) {
  var userId = socket.handshake.session.userId;
  socket.join('user:' + userId);

  socket.on('disconnect', function() {
    console.log("Rooms are left automatically on disconnect");
  });
});
On todo change:
io.to('user:' + userId).emit('TODO_CHANGE', change);