I am implementing an ApiGateway-to-microservice communication protocol in my app with MassTransit and RabbitMQ. That protocol is meant to replace "traditional" REST API communication between the ApiGateway and microservices (I am talking about simple request-response here, not about any kind of events, sagas, etc.). So on the microservice side I have consumers (which respond to requests) and on the ApiGateway side I have request clients. Usually a microservice has, say, ~10 consumers (for example, OrderingMicroservice has consumers for the following requests: CreateOrder, UpdateOrder, GetOrderById, ListUserOrders, etc.). I am trying to figure out the best topology (MassTransit + RabbitMQ) for this scenario.
Here are my goals; at least, I think it should work like this:
A. Request messages (routed to the consumer queue) should be durable for a short time only (for example 20s), then removed from the consumer queue (with the request client receiving a timeout error) and not routed to any other queue. So when a microservice is temporarily down, or temporarily too busy to receive the next request from the queue, request messages should be kept in the queue for 20s and then disappear.
B. Since the RequestClient should time out after ~20s, response messages (routed to the client "response queue") should also be durable for a short amount of time (~20s), after which they can disappear. If the ApiGW is offline or too busy to receive the response, the response(s) should be discarded.
So basically I want to use MassTransit/RabbitMQ as a short-lived buffer between ApiGW and microservice(s).
// ApiGw MassTransit configuration
services.AddMassTransit(x =>
{
    x.SetKebabCaseEndpointNameFormatter();
    x.UsingRabbitMq((context, cfg) =>
    {
    });
    x.AddRequestClient<ICreateGroupPayload>();
});

// Service MassTransit configuration
services.AddMassTransit(x =>
{
    x.SetKebabCaseEndpointNameFormatter();
    var entryAssembly = Assembly.GetEntryAssembly();
    x.AddConsumers(entryAssembly);
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.ConfigureEndpoints(context);
    });
});

// Single consumer definition in service
public class CreateGroupActionDefinition : ConsumerDefinition<CreateGroupAction>
{
    public CreateGroupActionDefinition()
    {
        EndpointName = "group-service";
    }
}
This setup creates the following exchanges and queues:
exchange ICreateGroupPayload (fanout, durable) => bind exchange:group-service
exchange group-service (fanout, durable) => bind queue:group-service
exchange PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3 (fanout, autoDelete) => bind queue:PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
queue group-service (durable)
queue PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3 (x-expires: 60000)
When I terminate the ApiGw, the following exchanges/queues are removed from RabbitMQ within ~1 min:
exchange PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
queue PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
My questions are:
Should I use separate queues (endpoint names) for different consumers in a microservice? Or can I use the same queue (group-service, for example) for different consumers/message types?
How can I modify my configuration to set an expiration time on my consumer queues? Right now the queue is durable, but I want messages to be removed after ~20s. Also, I think such a queue should not be deleted when the consumer disconnects, because the client should still be able to send requests while the consumer is offline (but only for 20s).
How can I modify my configuration to set the expiration time on my request client response queue to 20s (currently it seems to be 60s by default)?
Maybe someone has other suggestions on how to adjust the topology to best fit this scenario? The aim is to have the setup as fast as possible, just for simple request-response plus short-term buffering for edge cases.
All the work is done by MassTransit, as you can understand from the request documentation. You can change the default request timeout from 30 seconds to 20 seconds when adding the request client to the container. There is also an .AddGenericRequestClient() method to automatically add request clients for whatever request type is needed.
You can also specify the request timeout for each request, and it will set the message TimeToLive to match that value. The responses should be sent with a TimeToLive as needed.
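For example, a minimal sketch of both options (ICreateGroupResult and the "my-group" value are hypothetical, used only for illustration):
// ApiGw side: a 20s default timeout when registering the request client;
// the request message TimeToLive is set to match the timeout.
x.AddRequestClient<ICreateGroupPayload>(RequestTimeout.After(s: 20));

// Or override the timeout (and therefore the TimeToLive) per request,
// using an IRequestClient<ICreateGroupPayload> resolved from the container.
var response = await client.GetResponse<ICreateGroupResult>(
    new { Name = "my-group" },
    timeout: RequestTimeout.After(s: 20));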
I'm using MassTransit (dotnet core) v6.3.1 with RabbitMQ v3. My case is sending requests from an API gateway to other services. Services consume by topics, and the gateway uses a different topic per request. I'm trying to use request/response with MassTransit, but the request client declares the exchange type as fanout, and I can't change the type. I want to use a different routingKey per request with request/response. How can I do this?
In the gateway I have used:
(startup.cs)
cfg.AddRequestClient<ISimpleRequest>();
(Custom Controller)
await client.GetResponse<ISimpleResponse>(new { Data="test request"});
In the other services (startup) I have used:
cfg.ReceiveEndpoint("TestGateway", ep =>
{
ep.Consumer(() => new SimpleConsumer(context));
});
(Custom Consumer)
await client.RespondAsync<ISimpleResponse>(new { Data="test response"});
Also, I tried to declare the exchange in RabbitMQ first, and then created the request from the clientFactory with the exchange URI. But I got an error like "...received 'fanout' but current is 'topic'."
There is a sample on using a direct exchange; topic exchanges are similar but support wildcard semantics. I'd suggest reviewing it to get more details on how to configure topology with RabbitMQ using MassTransit.
Sample
There is also documentation on how to set up routing keys with exchange types.
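As a rough sketch of that approach (the exchange name "simple-request", the routing-key formatter based on the message's Data property, and the binding key "test" are illustrative assumptions, not values from the sample):
// Gateway side: name the message exchange, make it a topic exchange instead of
// fanout, and derive the routing key from the message content.
cfg.Message<ISimpleRequest>(m => m.SetEntityName("simple-request"));
cfg.Publish<ISimpleRequest>(p => p.ExchangeType = ExchangeType.Topic);
cfg.Send<ISimpleRequest>(s => s.UseRoutingKeyFormatter(ctx => ctx.Message.Data));

// Service side: disable the default consume topology and bind the queue's
// exchange explicitly with the routing key this endpoint should receive.
cfg.ReceiveEndpoint("TestGateway", ep =>
{
    ep.ConfigureConsumeTopology = false;
    ep.Bind("simple-request", b =>
    {
        b.ExchangeType = ExchangeType.Topic;
        b.RoutingKey = "test";
    });
    ep.Consumer(() => new SimpleConsumer(context));
});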
I'm using the new Symfony Messenger Component 4.1 and RabbitMQ 3.6.10-1 to queue and asynchronously send email and SMS notifications from my Symfony 4.1 web application. My Messenger configuration (messenger.yaml) looks like this:
framework:
    messenger:
        transports:
            amqp: '%env(MESSENGER_TRANSPORT_DSN_NOTIFICATIONS)%'
        routing:
            'App\NotificationBundle\Entity\NotificationQueueEntry': amqp
When a new notification is to be sent, I queue it like this:
use Symfony\Component\Messenger\MessageBusInterface;
// ...
$notificationQueueEntry = new NotificationQueueEntry();
// [Set notification details such as recipients, subject, and message]
$this->messageBus->dispatch($notificationQueueEntry);
Then I start the consumer like this on the command line:
$ bin/console messenger:consume-messages
I have implemented a SendNotificationHandler service where the actual delivery happens. The service configuration:
App\NotificationBundle\MessageHandler\SendNotificationHandler:
    arguments:
        - '@App\NotificationBundle\Service\NotificationQueueService'
    tags: [ messenger.message_handler ]
And the class:
class SendNotificationHandler
{
public function __invoke(NotificationQueueEntry $entry): void
{
$this->notificationQueueService->sendNotification($entry);
}
}
Until this point, everything works smoothly and the notifications get delivered.
Now my question: It may happen that an email or SMS cannot be delivered due to a (temporary) network failure. In such a case, I would like my system to retry the delivery after a specified amount of time, up to a specified maximum number of retries. What is the way to go to achieve this?
I have read about Dead Letter Exchanges, however, I could not find any documentation or example on how to integrate this with the Symfony Messenger Component.
What you need to do is tell RabbitMQ that the message is rejected instead of acknowledged. By default the Messenger component will take care of this inside the AmqpReceiver. As you can see there, if you throw an exception that implements the RejectMessageExceptionInterface inside your handler, the message will automatically be rejected for you.
You could also "simulate" this behaviour with custom middleware. I created something like it in a small demo application. The mechanism consists of a middleware that wraps the (serialized) original message inside a new RetryMessage and sends it via a custom message bus to a different queue, used as a dead letter exchange. The handler for that message will then unpack the RetryMessage (getting the original message and deserializing it) and transmit it over the default bus:
See:
RetryMessage
RetryMiddleware
messenger.yaml
RetryMessageHandler
This is a basic setup which rejects the message and allows you to consume it again instantly(!). You probably want to add additional information such as headers for timestamps when delaying the consumption to improve on this. For this you should look at writing your own receiver, middleware and/or handler.
I've been wondering how to identify the current protocol: is it using websocket or polling?
-- in the client. (appended for clarity)
I've found valid information in the debug console:
Meteor.connection._stream.socket.protocol
and it seems to have one value among...
['websocket',
'xdr-streaming',
'xhr-streaming',
'iframe-eventsource',
'iframe-htmlfile',
'xdr-polling',
'xhr-polling',
'iframe-xhr-polling',
'jsonp-polling'];
Is there a more graceful way to identify the current protocol?
And what is the earliest point at which the protocol can be detected?
By the way, I need this in order to use a different DDP server when sticky sessions are needed, since AWS ELB doesn't support sticky sessions and websockets at the same time.
Meteor uses the DDP protocol. To connect to a different Meteor server and call its methods, use DDP.connect as follows.
import { DDP } from 'meteor/ddp-client'
DDP.connect(url)
Unfortunately, there is no graceful way to get the protocol. onConnection returns an object which has some info.
Meteor.onConnection(obj => {
    console.log(obj.httpHeaders['x-forwarded-proto']);
});
This returns 'ws' for websocket. This way of getting the protocol is not graceful!
Meteor.status() gives a reactive data source.
(https://docs.meteor.com/api/connections.html#DDP-connect)
if (Meteor.isClient) {
    Tracker.autorun(() => {
        const stat = Meteor.status();
        if (stat.status === 'connected') {
            console.log(Meteor.connection._stream.socket.protocol);
        }
    });
}
Something like that will give the current protocol on the client side.
What is the rationale behind the following exception when trying to Defer the sending of a message on a one-way client:
System.InvalidOperationException "Cannot use ourselves as timeout manager because we're a one-way client"
A one-way client is a Rebus client that is not capable of receiving messages, so it has no input queue.
The way await bus.Defer(...) works, is by sending a message with some special headers to a "timeout manager", which by default is the endpoint that defers the message.
But since a one-way client has no input queue, it has no place to send the deferred message to.
You can make a one-way client defer messages by configuring an external timeout manager like this:
Configure.With(...)
    .(...)
    .Options(o => o.UseExternalTimeoutManager(anotherQueue))
    .Start();
which will then cause the client to send the deferred message to that queue.
Moreover, you would have to manually set the rbs2-defer-recipient header to some other input queue, so that the timeout manager knows where to send the message when it is time to be consumed(*).
I hope that explains it :) please let me know if it is not clear.
*) This is actually not the case with Rebus 4, because bus.Defer uses the normal endpoint mappings to route messages.
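Putting those two pieces together, a minimal pre-Rebus-4 sketch of deferring from a one-way client (the destination queue name "my-service" is an assumption for illustration):
// The one-way client sends the deferred message to the external timeout manager,
// and the header tells the timeout manager where to deliver it when it is due.
var headers = new Dictionary<string, string>
{
    [Headers.DeferredRecipient] = "my-service"
};

await bus.Defer(TimeSpan.FromSeconds(30), new SomeMessage(), headers);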
If Rebus.AzureServiceBus is used, there is a simpler (or hackier) way to send delayed messages.
You have to specify two headers, rbs2-deferred-until and rbs2-defer-recipient, and call the Publish method as in the example.
var deferredUntil = DateTimeOffset.UtcNow.AddDays(1);
var headers = new Dictionary<string, string>();
headers.Add(Headers.DeferredUntil, deferredUntil.ToString("O", CultureInfo.InvariantCulture));
headers.Add(Headers.DeferredRecipient, @"Rebus requires this ¯\_(ツ)_/¯");
await bus.Publish(new SomeMessage(), headers);
Note: rbs2-defer-recipient is required by Rebus, so any dummy value is okay.
Be careful: this is a workaround, so it may stop working after a Rebus.AzureServiceBus update. It works for me in 5.0.1.
I have incorporated M2Mqtt in my ASP.NET MVC project and am facing a problem syncing subscribed information.
When more than one client publishes on a specific topic, a client can subscribe to them easily.
But suppose a publish happens while a client is down/offline; when it comes back online it only gets the last published message, not all published messages.
What should I do? Is this a problem with MQTT? How can a reconnected client get all published messages?
The M2Mqtt connection with the broker uses the syntax below:
public static MqttClient SmartHomeMQTT { get; set; }
SmartHomeMQTT = new MqttClient(brokerAddress, MqttSettings.MQTT_BROKER_DEFAULT_SSL_PORT, true, new X509Certificate(Resource.ca), null, MqttSslProtocols.TLSv1_2, client_RemoteCertificateValidationCallback);
SmartHomeMQTT.Connect("6ea592c5-4b2f-481a-bb0a-eccbe8579d14", "####", "####", false, 3600);
**Note:** The fourth parameter of the Connect method is set to false for the clean_session property, but it doesn't work.
To ensure that subscribers receive all messages, even ones that are published when they are offline (known as message persistence), you need to do a few things:
Make sure that 'Clean Session' is turned off in the subscribers
Ensure that each subscriber is using a unique Client ID
Use a QoS of 1 or 2
You don't say which MQTT server you are using, but you need to ensure that the server implementation supports it too.
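For example, a rough M2Mqtt sketch of those three points (the topic, client IDs and credentials are placeholders):
// Subscriber: unique client ID, clean session off, QoS 1 subscription.
var subscriber = new MqttClient(brokerAddress);
subscriber.Connect("subscriber-001", "####", "####", false, 3600);   // cleanSession = false
subscriber.Subscribe(new[] { "home/device1/status" },
                     new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });

// Publisher: publish with QoS 1 (or 2) so the broker stores the message
// for offline subscribers with a persistent session.
var publisher = new MqttClient(brokerAddress);
publisher.Connect("publisher-001");
publisher.Publish("home/device1/status",
                  Encoding.UTF8.GetBytes("on"),
                  MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE,
                  false);                                             // retain = false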