Using topic with request/response in MassTransit - .net-core

I'm using MassTransit v6.3.1 on .NET Core with RabbitMQ v3. My scenario is sending requests from an API gateway to other services. The services consume by topic, and the gateway uses a different topic per request. I'm trying to use request/response with MassTransit, but the request client declares the exchange type as fanout, and I can't change the type. I want to use a different routing key per request with request/response. How can I do this?
I have used in the gateway (Startup.cs):
cfg.AddRequestClient<ISimpleRequest>();
and in a controller:
await client.GetResponse<ISimpleResponse>(new { Data = "test request" });
I have used in the other services (Startup.cs):
cfg.ReceiveEndpoint("TestGateway", ep =>
{
    ep.Consumer(() => new SimpleConsumer(context));
});
and in the consumer:
await client.RespondAsync<ISimpleResponse>(new { Data = "test response" });
I also tried declaring the exchange in RabbitMQ first and then creating the request from the client factory with the exchange URI, but I got an error like "...received 'fanout' but current is 'topic'."

There is a sample on using a direct exchange; topic exchanges are similar but support wildcard semantics. I'd suggest reviewing it for details on how to configure topology with RabbitMQ using MassTransit.
Sample
There is also documentation on how to set up routing keys with exchange types.
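Not from the sample itself, but as a rough sketch of how a topic exchange with per-message routing keys might be configured: this assumes a message contract that carries a RoutingKey property, and the exchange name "simple-request" and binding pattern "requests.#" are illustrative only.

services.AddMassTransit(x =>
{
    x.UsingRabbitMq((context, cfg) =>
    {
        // Publish ISimpleRequest to a topic exchange instead of the default fanout
        cfg.Message<ISimpleRequest>(m => m.SetEntityName("simple-request"));
        cfg.Publish<ISimpleRequest>(p => p.ExchangeType = ExchangeType.Topic); // RabbitMQ.Client.ExchangeType
        cfg.Send<ISimpleRequest>(s =>
            s.UseRoutingKeyFormatter(ctx => ctx.Message.RoutingKey)); // RoutingKey property is an assumption

        cfg.ReceiveEndpoint("TestGateway", ep =>
        {
            // Replace the default fanout binding with a topic binding
            ep.ConfigureConsumeTopology = false;
            ep.Bind("simple-request", b =>
            {
                b.ExchangeType = ExchangeType.Topic;
                b.RoutingKey = "requests.#";
            });
            ep.Consumer(() => new SimpleConsumer(context));
        });
    });
});

With this, the request client publishes to the topic exchange, and the receive endpoint's wildcard binding controls which routing keys reach the queue.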

Related

Azure Service Bus interoperability between Apache Camel based producer and .NET consumer

We're trying to build an event-based integration between an Apache Camel based system, which produces messages into an Azure Service Bus topic, and a .NET based consumer of these messages.
The producer uses the AMQP interface of the Service Bus, while the .NET consumer uses the current Microsoft API in the Azure.Messaging.ServiceBus namespace.
When we try to access the body in a received message as follows:
private async Task ProcessMessagesAsync(ProcessMessageEventArgs args)
{
    string message = null;
    // raw gives access to the underlying AMQP message; the Body getter below
    // throws for non-data bodies
    var raw = args.Message.GetRawAmqpMessage();
    try
    {
        message = Encoding.UTF8.GetString(args.Message.Body);
    }
    catch (Exception e)
    {
        _logger.LogError(e, "Body not decoded: Message: {#message}", e.Message);
    }
    _logger.LogInformation("Body Type: {#bodytype}, Content-Type: {#contenttype}, Message: {#message}, Properties: {#properties}",
        raw.Body.BodyType, args.Message.ContentType, message, args.Message.ApplicationProperties);
    await args.CompleteMessageAsync(args.Message);
}
the following exception is raised:
Value cannot be retrieved using the Body property. Use GetRawAmqpMessage to access the underlying Amqp Message object.
System.NotSupportedException: Value cannot be retrieved using the Body property. Use GetRawAmqpMessage to access the underlying Amqp Message object.
   at Azure.Messaging.ServiceBus.Amqp.AmqpMessageExtensions.GetBody(AmqpAnnotatedMessage message)
   at Azure.Messaging.ServiceBus.ServiceBusReceivedMessage.get_Body()
When peeking the topic with Service Bus Explorer, the message looks strange:
#string3http://schemas.microsoft.com/2003/10/Serialization/�_{"metadata":{"version":"1.0.0","message_id":"AGHcNehoD-hK0pPJCSga9v9sXFwC","message_timestamp":"2022-01-10T13:34:32.778Z"},"data":{"source_timestamp":"2022-01-05T17:20:31.000","material":"101052"}}
When messages are sent to another topic with a .NET producer there's a plaintext JSON body in the topic, as expected.
Did anybody successfully build a solution with Azure Service Bus using the two mentioned frameworks, and what did the trick to make interoperability work? How can a Camel AMQP producer create messages with a BodyType of Data so that the body can be decoded by the .NET Service Bus client libraries without needing to use GetRawAmqpMessage?
I can't speak to what format Camel is using, but that error message indicates that the Body that you're trying to decode is not an AMQP data body, which is what the Service Bus client library uses and expects.
In order to read a body that is encoded as an AMQP value or sequence, you'll need to work with the data in AMQP format rather than by using the ServiceBusReceivedMessage convenience layer. To do so, you'll want to call GetRawAmqpMessage on the ServiceBusReceivedMessage, which will give you back an AmqpAnnotatedMessage.
The annotated message's Body property returns an AmqpMessageBody instance, which allows you to query the BodyType and retrieve the data in its native format using one of the TryGet methods on the AmqpMessageBody.
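For example, here is a minimal sketch of reading an AMQP value body inside the processor callback, assuming the producer sent a string payload as an AMQP value (AmqpAnnotatedMessage, AmqpMessageBody, and AmqpMessageBodyType live in the Azure.Core.Amqp namespace):

// Access the underlying AMQP message instead of the Body convenience property
AmqpAnnotatedMessage raw = args.Message.GetRawAmqpMessage();
if (raw.Body.BodyType == AmqpMessageBodyType.Value
    && raw.Body.TryGetValue(out object value))
{
    // For a text payload the boxed value is typically a string
    string text = value as string ?? value?.ToString();
    _logger.LogInformation("Decoded AMQP value body: {#message}", text);
}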
On our producer side an SAP Cloud Integration is used. When the Message Type parameter of its AMQP adapter is set to Binary, according to
https://help.sap.com/viewer/368c481cd6954bdfa5d0435479fd4eaf/Cloud/en-US/d5660c146a93483692335e9d79a8c58f.html,
the decoding of the body in the ServiceBusReceivedMessage works as expected and the BodyType is set to Data. This seems to correspond to the Apache Camel jmsMessageType being set to Bytes;
see https://camel.apache.org/components/3.14.x/amqp-component.html for details.
If Text is used on the producer side, the BodyType is set to Value as described above, which led to the problems with decoding the body.

WSO2 API Manager : Extracting and sending client identity to the backend host

So we have
API published in WSO2 API management
This is consumed by two consumers, A and B.
SETUP --
Consumers(A or B) ----OAuth + data ---> WSO2_APIM(Authorization done) ---> Backend host
Now we need to send the consumer's identity to the backend host.
For Ex -
Consumer B ----OAuth + data ----> WSO2_APIM(Authorization done) ---Header(client='B')---> Backend host
Please suggest how we can achieve this.
TIA
You have several options.
Use a custom sequence and add a header
You can extract data from the incoming request and, based on that, add a header (see the sketch after these links):
https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/api-gateway/message-mediation/passing-a-custom-authorization-token-to-the-backend/#passing-a-custom-authorization-token-to-the-backend
Enable backend JWT
https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/choreo-connect/passing-enduser-attributes-to-the-backend-via-choreo-connect/#passing-end-user-attributes-to-the-backend
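For the custom-sequence option, a minimal sketch of a Synapse mediation sequence that sets a transport header might look like the following. The context property expression ($ctx:api.ut.application.name) and the header name "client" are assumptions for illustration; the available context properties vary by API Manager version:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="add-client-header">
    <!-- scope="transport" makes the property an outgoing HTTP header;
         the expression here is an assumed APIM context property -->
    <property name="client"
              expression="$ctx:api.ut.application.name"
              scope="transport"/>
</sequence>

Attached to the API's in-flow, this would let the backend receive a client header identifying the consuming application.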

Best setup for ApiGateway-MicroService communication pattern in MassTransit and RabbitMQ

I am implementing an ApiGateway-microservice communication protocol in my app with MassTransit and RabbitMQ. The protocol is meant to replace "traditional" REST API communication between the API gateway and microservices (I am talking about simple request-response here, not any kind of events, sagas, etc.). So on the microservice side I have consumers (which respond to requests) and on the API gateway side I have request clients. Usually a microservice has ~10 consumers (for example, OrderingMicroservice has consumers for the following requests: CreateOrder, UpdateOrder, GetOrderById, ListUserOrders, etc.). I am trying to figure out the best topology (MassTransit + RabbitMQ) for this scenario.
Here are my goals, at least I think it should work like this:
A. Request messages (routed to the consumer queue) should be durable for a short time only (for example 20s) and then be removed from the consumer queue (the request client should receive a timeout error) without being routed to any other queue. So when a microservice is temporarily down, or temporarily too busy to receive the next request from the queue, request messages should be kept in the queue for 20s and then disappear.
B. Since the request client should time out after ~20s, response messages (routed to the client's response queue) should also be durable for a short amount of time (~20s) and then disappear. If the API gateway is offline or too busy to receive a response, the response(s) should be discarded.
So basically I want to use MassTransit/RabbitMQ as a short-lived buffer between ApiGW and microservice(s).
// ApiGw MassTransit configuration
services.AddMassTransit(x =>
{
    x.SetKebabCaseEndpointNameFormatter();
    x.UsingRabbitMq((context, cfg) =>
    {
    });
    x.AddRequestClient<ICreateGroupPayload>();
});

// Service MassTransit configuration
services.AddMassTransit(x =>
{
    x.SetKebabCaseEndpointNameFormatter();
    var entryAssembly = Assembly.GetEntryAssembly();
    x.AddConsumers(entryAssembly);
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.ConfigureEndpoints(context);
    });
});

// Single consumer definition in service
public class CreateGroupActionDefinition : ConsumerDefinition<CreateGroupAction>
{
    public CreateGroupActionDefinition()
    {
        EndpointName = "group-service";
    }
}
This setup creates the following exchanges and queues:
exchange ICreateGroupPayload (fanout, durable) => bind exchange:group-service
exchange group-service (fanout, durable) => bind queue:group-service
exchange PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3 (fanout, autoDelete) => bind queue:PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
queue group-service (durable)
queue PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3 (x-expires: 60000)
When I terminate the ApiGw, the following exchanges/queues are removed from RabbitMQ within ~1 min:
exchange PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
queue PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
My questions are:
Should I use separate queues (endpoint names) for different consumers in a microservice, or can I use the same queue (group-service, for example) for different consumers/message types?
How can I modify my configuration to set an expiration time on my consumer queues? Right now the messages are durable, but I want them to be removed after ~20s. Also, I think such a queue should not be deleted when the consumer disconnects, because the gateway should still be able to send requests while the consumer is offline (but only for 20s).
How can I modify my configuration to set the expiration time on my request client's response queue to 20s (currently it seems to default to 60s)?
Does anyone have other suggestions on how to adjust the topology to best fit this scenario? The aim is to make the setup as fast as possible for simple request-response, plus short-lived buffering for edge cases.
All the work is done by MassTransit, as you can understand from the request documentation. You can change the default request timeout from 30 seconds to 20 seconds when adding the request client to the container. There is also an .AddGenericRequestClient() method to automatically add request clients for whatever request type is needed.
You can also specify the request timeout for each request, and it will set the message TimeToLive to match that value. The responses are sent with a matching TimeToLive as needed.
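A minimal sketch of both options, reusing the ICreateGroupPayload client from the question (the IGroupCreated response contract is an assumption for illustration):

// Default request timeout of 20s instead of 30s; the message TimeToLive
// is set to match, so unconsumed requests expire in the queue
x.AddRequestClient<ICreateGroupPayload>(RequestTimeout.After(s: 20));

// Per-request override, if a single call needs a different timeout
var response = await client.GetResponse<IGroupCreated>(
    new { Name = "test" },
    timeout: RequestTimeout.After(s: 20));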

How to (asynchronously) consume a streaming end-point generated with servant's StreamGenerators?

The servant documentation describes how to create streaming endpoints:
type StreamAPI = "userStream" :> StreamGet NewlineFraming JSON (StreamGenerator User)
streamAPI :: Proxy StreamAPI
streamAPI = Proxy
streamUsers :: StreamGenerator User
Now the question is: how can a client (written in JavaScript, for instance) consume the end-point in an asynchronous fashion?
Note that in newer servant (0.15, IIRC) streaming was refactored. However, the question of how to consume streaming endpoints in JavaScript is independent of the backend implementation.
For example, if you use the Fetch API, you can use the reader API, which is well explained on MDN. Summarizing:
// Fetch the original image
fetch('./tortoise.png')
  // Retrieve its body as a ReadableStream
  .then(response => {
    const reader = response.body.getReader();
    return reader.read().then(function process({ done, value }) {
      if (done) return;  // stream finished
      console.log(`chunk: ${value.length} bytes`);
      return reader.read().then(process);  // read the next part
    });
  });

Each call to reader.read() resolves with the next part of the response, so you can process the stream as it arrives.

BizTalk 2016: How to use HTTP Send adapter with API token

I need to make calls to a REST API service via a BizTalk send adapter. The API simply uses a token in the header for authentication/authorization. I have tested this in a C# console app using HttpClient and it works fine:
// (wrapped in an async method for completeness; jobList is the response type)
private static async Task<jobList> GetJobsAsync()
{
    string apiUrl = "https://api.site.com/endpoint/<method>?";
    string dateFormat = "dateFormat = 2017-05-01T00:00:00";
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("token", "<token>");
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        string finalurl = apiUrl + dateFormat;
        HttpResponseMessage resp = await client.GetAsync(finalurl);
        if (resp.IsSuccessStatusCode)
        {
            string result = await resp.Content.ReadAsStringAsync();
            var rootresult = JsonConvert.DeserializeObject<jobList>(result);
            return rootresult;
        }
        return null;
    }
}
However, I want to use BizTalk to make the call and handle the response.
I have tried using the WCF-WebHttp adapter, selecting 'Transport' security (it is an HTTPS site, so security is required(?)) with no credential type specified, and placing the header with the token in the 'Messages' tab of the adapter configuration. This fails with the exception: System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
I have tried googling for this specific scenario and cannot find a solution. I did find this article with suggestions for OAuth handling, but I'm surprised that even with BizTalk 2016 I still have to create a custom assembly for something so simple.
Does anyone know how this might be done in the wcf-http send adapter?
Yes, you have to write a custom Endpoint Behaviour and add it to the send port. In fact, with the WCF-WebHttp adapter even Basic Auth doesn't work, so I'm currently writing an Endpoint Behaviour to address this.
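As a rough sketch of that approach, a WCF endpoint behaviour that injects a token header into outbound requests might look like the following. The class names and the "token" header are illustrative, and BizTalk additionally requires wrapping the behaviour in a BehaviorExtensionElement and registering the assembly so it can be selected on the send port:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class TokenHeaderInspector : IClientMessageInspector
{
    private readonly string _token;
    public TokenHeaderInspector(string token) { _token = token; }

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Reuse or create the HTTP property, then set the token header
        HttpRequestMessageProperty http;
        object existing;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out existing))
        {
            http = (HttpRequestMessageProperty)existing;
        }
        else
        {
            http = new HttpRequestMessageProperty();
            request.Properties.Add(HttpRequestMessageProperty.Name, http);
        }
        http.Headers["token"] = _token;
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

public class TokenHeaderBehavior : IEndpointBehavior
{
    private readonly string _token;
    public TokenHeaderBehavior(string token) { _token = token; }

    // Attach the inspector to the client runtime used by the send port
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new TokenHeaderInspector(_token));
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}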
One of the issues with OAuth is that there isn't one standard that everyone follows; so far I've had to write two different OAuth behaviours because providers implement things differently: one uses a secret and timestamp hashed together to obtain a token, the other uses Basic Auth to obtain a token. Also, one of them let you hold multiple tokens issued against the same credentials, whereas the other expires the old token straight away.
Another thing I've had to write a custom behaviour for is the TLS version an endpoint expects, as by default BizTalk 2013 R2 tries TLS 1.0 and then fails if the web site does not allow it.
You can give Microsoft feedback that you would like this feature by voting on Add support for OAuth 2.0 / OpenID Connect authentication.
Maybe someone will open source their solution. See Announcement: BizTalk Server embrace open source!
Figured it out. I should have used 'Certificate' for the client credential type.
I just had to:
Add the token in the Outbound HTTP Headers box in the Messages tab, and select 'Transport' security with 'Certificate' as the transport client credential type.
Download the certificate from the API's website via the browser (manually) and install it in the local server's certificate store.
Select that certificate and thumbprint in the corresponding fields in the adapter via the 'Browse' buttons (scrolling through the available certificates and selecting the API/website certificate I was trying to connect to).
I discovered this by accident when I had Fiddler running and set the adapter's proxy setting to the local Fiddler address (http://localhost:8888). Since Fiddler negotiates the TLS connection/certificate with the remote server (I enabled TLS 1.2 in Fiddler), messages got through while it was running, but not directly between the adapter and the remote API server (when Fiddler WASN'T running).
