NServiceBus IP Address - asp.net-core-webapi

I am new to NServiceBus and have recently started working with it. I am stuck on a point and need input from you guys. I have 2 ASP.NET Core Web API projects, and I want to use NServiceBus to send messages between them in some scenarios.
What I have found so far is that I can provide a name to EndpointConfiguration. What if one of my APIs is deployed on one server and the second on another server; how should my configuration look in that case?
I tried giving a URL instead of a name in EndpointConfiguration, but it gave me an exception.
Thanks in advance for your help

NServiceBus endpoints communicate over whatever messaging infrastructure your system is using. Endpoint names represent the queues messages are sent to. The messaging infrastructure is abstracted by what NServiceBus calls a Transport. You will need to decide on the transport you'd like to use (see the options here). Once you've decided what transport your solution will use, you could have a look at the samples for that specific transport to get an idea of how to set up your endpoints.
For example, if you decide to use Azure Service Bus as your transport, you could download and try the Send/Reply sample.
A good starting point could be the tutorials available on the documentation site here.
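For illustration, here is a rough sketch of what the sending side could look like with a broker-based transport (RabbitMQ is assumed here; the endpoint names, message type, and broker host are placeholders, and the exact configuration API differs between NServiceBus and transport package versions). The key point is that the endpoint name identifies a queue on a shared broker, so the physical server each API runs on does not matter; you never configure the other endpoint's IP address or URL:

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical command; with the default conventions, commands implement ICommand.
public class PlaceOrder : ICommand
{
    public string OrderId { get; set; }
}

public static class ServiceAHost
{
    public static async Task Run()
    {
        // "ServiceA", "ServiceB", and the broker host are placeholders.
        var endpointConfiguration = new EndpointConfiguration("ServiceA");

        // Any broker-based transport works; RabbitMQ is assumed here.
        var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
        transport.UseConventionalRoutingTopology();
        transport.ConnectionString("host=my-broker-host"); // a broker both servers can reach

        // Route by the logical endpoint name, not by the other server's IP or URL.
        var routing = transport.Routing();
        routing.RouteToEndpoint(typeof(PlaceOrder), "ServiceB");

        var endpointInstance = await Endpoint.Start(endpointConfiguration);
        await endpointInstance.Send(new PlaceOrder { OrderId = "123" });
    }
}
```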

Related

MassTransit Check if queue and exchange exist

I'm using .NET with MassTransit and RabbitMQ.
I was wondering if there's a way to check if a specific exchange and queue exist. I have 2 services that connect to the same RabbitMQ broker and are started at the same time. One service does all of the queue/exchange setup; I want the second service to run a while loop checking whether the queue/exchange exist before proceeding.
I was trying to look at the documentation to see if I could find some examples, but could not locate any. Could someone point me in the right direction?
Thanks.
MassTransit does not generally support this type of coupling behavior, since it ends up linking the consuming service to the producing service. There are plenty of other solutions in MassTransit to support your needs, such as sending messages directly to a queue using queue:name as the destination address.
As such, there aren't any built-in methods to check queue/exchange existence in MassTransit.
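As a rough sketch of that approach (the queue name, message contract, and class here are placeholders), the producer resolves a send endpoint by its short queue address and sends to it, without having to verify the consumer's topology first:

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Placeholder contract; use your own message type.
public record SubmitOrder(Guid OrderId);

public class OrderSubmitter
{
    private readonly IBus _bus;

    public OrderSubmitter(IBus bus) => _bus = bus;

    public async Task SubmitAsync(Guid orderId)
    {
        // The short "queue:name" address resolves against the bus host,
        // so the producer does not need to check whether the queue exists first.
        var endpoint = await _bus.GetSendEndpoint(new Uri("queue:order-queue"));
        await endpoint.Send(new SubmitOrder(orderId));
    }
}
```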

Difference between SignalR and Pusher

I want to create a web app using React as the front-end technology. A requirement for the app is that the server is able to update all the clients with information about changes (it does not have to be exact real time, but clients should be updated within no more than 10 seconds).
Solutions like clients requesting updates from the server every several seconds are out of the question.
Requirements:
1) The server should be implemented with either .NET or Node.js.
2) The connection MUST be secured via port 443 of IIS.
I read a bit about Microsoft's SignalR and about Pusher Channels, which seem to provide exactly the kind of service I require.
Could you please elaborate on what exactly the differences between them are? When should I choose each? Which of them has more community support? Which is easier to implement? Stuff like that...
Both SignalR and Pusher Channels ultimately use websockets to deliver messages to clients, so both should meet your requirement to deliver messages to clients in realtime.
1) Both offerings also meet your library support requirements:
SignalR supports .NET:
https://dotnet.microsoft.com/apps/aspnet/signalr
Pusher Channels has server support for both Node.js and .NET:
https://github.com/pusher/pusher-http-node
https://github.com/pusher/pusher-http-dotnet
2) Both offerings also meet your requirements for sending messages over TLS/WSS:
SignalR:
https://kimsereyblog.blogspot.com/2018/07/signalr-with-asp-net-core.html
Pusher Channels:
Securing Pusher's messages
In terms of the differences between them, it depends on your implementation. If you just run SignalR on your own IIS server, then it will be down to you to manage all of the websocket connections and all of the scaling challenges that come with them.
However, similar to how Channels works, SignalR also has a managed websocket service (Azure SignalR Service), so you do not need to manage the connections or scaling. You just make an API request with the message you want to send to either Channels or SignalR, and this message is then broadcast to the interested clients connected by websockets. In this scenario you do not manage the websocket connections yourself.
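To make the push model concrete, here is a minimal ASP.NET Core SignalR sketch (the hub, class, and method names are placeholders): clients connect to a hub, and any server-side component can broadcast to them through IHubContext, whether the connections are hosted on your own server or offloaded to the managed Azure SignalR Service.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients connect to this hub over websockets (falling back to other transports if needed).
public class UpdatesHub : Hub { }

// Any server-side component can push a message to all connected clients.
public class ChangeNotifier
{
    private readonly IHubContext<UpdatesHub> _hub;

    public ChangeNotifier(IHubContext<UpdatesHub> hub) => _hub = hub;

    public Task NotifyAsync(object change) =>
        _hub.Clients.All.SendAsync("change", change);
}
```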
However, in terms of pricing, Channels appears to be far more competitive (especially the free offering), so if you are looking at the managed offering, Channels looks to be a better value proposition:
https://azure.microsoft.com/en-gb/pricing/details/signalr-service/
https://pusher.com/channels/pricing
Both offerings look fairly similar in terms of implementation (assuming you are using the managed service). The complexity would increase if you implement SignalR on IIS yourself:
https://learn.microsoft.com/en-us/aspnet/core/signalr/scale?view=aspnetcore-2.2
In terms of support Pusher has a free application support offering:
https://support.pusher.com/hc/en-us
Hope this helps!
This presentation has some answers: A 10 Minute Guide to Choosing a Realtime Framework

What Should I Use (Notification/Events) To Send Data From Application Server To End Points (Devices) and vice versa Using KAA Middleware

As per the Kaa references, I understand that one should only use the Notification feature when it is required to send data from the server (external apps) to endpoints, and that Events are only used when there is a need for endpoint-to-endpoint communication (a kind of device binding requirement).
So, to achieve request/response functionality using Kaa, I need to implement a hybrid solution like one of the below.
1) In my server, I can run one Kaa SDK instance and use the Event feature for the request to the endpoint and the response from the endpoint.
OR
2) From my server, I use the Notification REST API for the request and get the response back through the data logger feature using any built-in appender, by configuring the "LogUploadStrategy" to upload every log record as soon as it is created.
Notes For Point 1
As per Andrew, a Solutions Architect for the Kaa IoT platform:
"You can always embed an SDK into a standalone application and host it on the same server where kaa-node is present. This application may receive REST API calls and forward them to particular endpoints via the Kaa events feature. However, this is useful for test purposes. I would not recommend this solution in production because it is hard to scale and has potential security issues."
Notes For Point 2
It satisfies the Kaa reference documentation as well as Andrew's suggestion, but only for the request; how can I achieve the response?
Questions For Point 1
1) What makes the application hard to scale, and what type of security issues does it face even though it uses RSA 2048 encryption for communication?
2) Can we embed more than one SDK in a standalone application and host it on the same server where kaa-node is present?
Questions For Point 2
3) If the device sends the notification response along with the telemetry data, can it increase latency or cause any other performance issues?
Common Questions
4) Which one is the better approach to achieve request/response functionality?
Any help or suggestion is really appreciated.
1) What makes the application hard to scale, and what type of security issues does it face even though it uses RSA 2048 encryption for communication?
It makes the endpoint on the server side a single point of failure and does not allow load balancing.
About the security issues, Andrew meant: this application may receive REST API calls, and this forces one to provide additional security for those REST API calls; it would be better to use your first hybrid solution with solely the Event feature.
2) Can we embed more than one SDK in a standalone application and host it on the same server where kaa-node is present?
No, you can't use more than one SDK in one application, but you can run a couple of instances on one machine, in different directories, in order to prevent collisions of the autogenerated security keys and other files.
3) If the device sends the notification response along with the telemetry data, can it increase latency or cause any other performance issues?
Of course, you will face some delays if you start sending data very frequently and in big portions on both sides. If you have a lot of devices that in total send a large amount of telemetry data, you can increase performance on the server side by starting Kaa in cluster mode or adding new nodes for processing requests.
4) Which one is the better approach to achieve request/response functionality?
The second hybrid solution: the data collection and notification features. This doesn't cause any problems with scale, and you can easily launch the Kaa server in cluster mode.

How does Hystrix communicate with Eureka?

I've seen a lot of examples of projects where both Hystrix and Eureka are used.
It would be nice if someone could explain to me how they both communicate.
Maybe it's a badly framed question, but I would like to know why both Hystrix and Eureka are used in these projects.
Eureka and Hystrix are two different services, both developed by Netflix.
Eureka provides some kind of elastic load balancer. It has a server part (where the clients get registered), and a client one. The clients register themselves in the server by sending heartbeats, and also get the registry information from the server in order to know where the services (other clients) it needs are located (as a client can also be a service). Have a look at the eureka wiki for a much better explanation.
Hystrix, on the other hand, is an implementation of the Circuit Breaker pattern (if you do not know what this is, buy the Release It book right now). It basically provides a way of controlling your "expensive" calls (normally to a remote system) by wrapping them. If the remote system is not available, or the calls are taking too long, Hystrix will deliver a "failure" (or a configured fallback) response immediately instead of keeping you blocked waiting for a response that will not come. The Hystrix page explains it much better.
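To illustrate the pattern itself, here is a minimal sketch in C# (deliberately not Hystrix's actual Java API, and simplified: no half-open state and not thread-safe). The wrapper counts failures, and once a threshold is reached it fails fast with the fallback for a while instead of calling the struggling remote system:

```csharp
using System;

// Minimal illustration of the Circuit Breaker pattern, not Hystrix itself.
public class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _failures;
    private DateTime _openedAt = DateTime.MinValue;

    public CircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public T Execute<T>(Func<T> expensiveCall, Func<T> fallback)
    {
        // While the circuit is open, fail fast with the fallback instead of calling out.
        if (_failures >= _failureThreshold && DateTime.UtcNow - _openedAt < _openDuration)
            return fallback();

        try
        {
            var result = expensiveCall();
            _failures = 0; // a success closes the circuit again
            return result;
        }
        catch (Exception)
        {
            _failures++;
            _openedAt = DateTime.UtcNow;
            return fallback();
        }
    }
}
```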

What are some SOA approaches or methodologies?

I've read of an ESB being used as a SOA approach. What are some other approaches?
This is a very broad question; you may want to narrow its focus.
If you are asking about approaches to use instead of an ESB, then you may consider direct access to services, instead of using a service bus.
This approach is often used with a directory or lookup service like UDDI to look up the service endpoint location.
When using an ESB, you send the message to the ESB, which is responsible for routing it to the service provider.
When using direct access, the client must know the address of the service provider in advance and sends the message directly to it.
When using a lookup service, you first query the address of the service provider (like using DNS to look up IP addresses), and using this address you send the message to the service provider.
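As a very rough sketch of that lookup-then-direct-send flow (the registry URL and service name are purely hypothetical, standing in for whatever directory your system uses):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class DirectServiceClient
{
    private static readonly HttpClient Http = new HttpClient();

    public async Task<string> CallOrderServiceAsync(string payload)
    {
        // 1. Ask the lookup service where the provider currently lives
        //    (analogous to a DNS query; "registry.local" is a placeholder).
        var address = await Http.GetStringAsync("http://registry.local/lookup/order-service");

        // 2. Send the message directly to the provider at that address.
        var response = await Http.PostAsync(address, new StringContent(payload));
        return await response.Content.ReadAsStringAsync();
    }
}
```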
Beyond addressing and routing, the ESB may provide other functions that you lose (or have to implement another way) if you use the direct access approach:
multicast routing - sending the request to more than one service provider
context based routing - deciding to which service provider we should send the request, based on the content of the request
central logging
central policy enforcement
load balancing / fault tolerance
format or protocol translation
buffering and asynchronous service invocation
First, ask yourself which SOA philosophy you are adhering to. If you are in the IBM camp, then there are four different products that provide ESB functionality. Each product is optimized for different scenarios, but basically each one does similar functions.
Think of it this way: SOA == a car. IBM is one manufacturer. Different products == different types of cars for different types of drivers.
