How can I implement SignalR Redundancy using a load balancer?

My current solution is this: I have a load balancer, and behind it I have implemented a backplane (SQL Server) to support scale-out. Along with that, I want redundancy as well. For example, if server 1 goes down, all the connections on that server should reconnect to the other servers.
Will the standard SignalR backplane solve this problem, or are there other good approaches?

If you have a backplane and a load balancer, it should work. If a node dies, clients will try to reconnect, and if the load balancer redirects them to a different node that uses the same backplane, they will be able to reconnect fine. One important thing in distributed scenarios: all nodes have to have the same machineKey, otherwise requests will be rejected because the node will not be able to decrypt the connection token.
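For reference, a minimal sketch of that web.config setting, assuming ASP.NET on the full framework (the key values are placeholders; generate your own and use identical keys on every node):

    <!-- web.config on EVERY node in the farm: identical keys let any node
         decrypt a connection token issued by any other node. -->
    <system.web>
      <machineKey validationKey="[generate: same long hex value on all nodes]"
                  decryptionKey="[generate: same hex value on all nodes]"
                  validation="HMACSHA256"
                  decryption="AES" />
    </system.web>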

Related

Correct way to get a gRPC client to communicate with one of many ECS instances of the gRPC service?

I have a gRPC client, not dockerised, and server application, which I want to dockerise.
What I don't understand is that gRPC first creates a connection with a server, which involves a handshake. So, if I want to deploy the dockerised server on ECS with multiple instances, how will the client switch from one to the other (e.g., if one gRPC server falls over)?
I know the AWS load balancer now works with HTTP/2, but I can't find information on how to handle the fact that the server might change after the client has already opened a connection to another one.
What is involved?
You don't necessarily need an in-line load balancer for this. By using a Round Robin client-side load balancing policy along with a DNS record that points to multiple backend instances, you should be able to get some level of redundancy.
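As a rough illustration with the .NET gRPC client (Grpc.Net.Client), assuming a DNS name such as my-service.internal that resolves to all backend task instances (the host name and port here are placeholders):

    using Grpc.Core;
    using Grpc.Net.Client;
    using Grpc.Net.Client.Configuration;

    // The "dns" scheme makes the channel resolve the host to all of its
    // A records; round_robin then spreads calls across every healthy
    // sub-channel, so an instance that falls over is skipped once its
    // connection drops and DNS re-resolution picks up replacements.
    var channel = GrpcChannel.ForAddress("dns:///my-service.internal:5000",
        new GrpcChannelOptions
        {
            Credentials = ChannelCredentials.Insecure, // plaintext for the sketch only
            ServiceConfig = new ServiceConfig
            {
                LoadBalancingConfigs = { new RoundRobinConfig() }
            }
        });
    // var client = new Greeter.GreeterClient(channel); // your generated stub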

Can I avoid using a SignalR backplane behind a load balancer?

I use SignalR to expose RabbitMQ messages to browsers. This works fine with one app instance, obviously. The question is whether it could work with multiple instances too, without a backplane. I understand that a SignalR client could be disconnected from pod A and connected back to pod B, but what exactly is the issue here? I am fine with losing some messages during reconnection. Is that the only issue? Is reconnection to pod B treated as a regular new connection, so that the client is simply subscribed again as it would be normally, without a reconnection? Or does the system no longer have the input parameters it had during the initial subscription, and therefore it cannot resubscribe without hints?
As long as all of your SignalR servers are getting the same data from RabbitMQ or getting only the data for the clients connected to them, you don't need a backplane.
You will need a backplane if you have one of the following:
Clients can communicate with one another.
Only one SignalR server is connected to RabbitMQ, but clients can connect to multiple SignalR servers.
SignalR servers are connected to different queues or getting different data from the same queue.
I have a similar setup with a database instead of RabbitMQ and need a backplane to either have only one of the SignalR servers access the database (and have data be sent to all clients) or to share the database load between servers (and have data be sent to all clients). This way, the server getting the data can have it sent to a client connected to a different server.
I am using SignalR for ASP.NET and the servers do not know who is subscribed to the other servers. All messages are sent over the backplane and each server determines if they apply to their connected clients. This works well with broadcasts for example or if the same user has multiple clients to make sure they all get the same data regardless of the server.
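If you do conclude you need one, wiring up the SQL Server backplane in ASP.NET SignalR is a one-line registration at startup; a sketch, with a placeholder connection string:

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Every node that registers the same database joins the same
            // backplane, so messages published on one node reach them all.
            var connectionString =
                "Server=.;Database=SignalRBackplane;Integrated Security=True";
            GlobalHost.DependencyResolver.UseSqlServer(connectionString);
            app.MapSignalR();
        }
    }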

SignalR OnConnected firing on a different server to the one it's actually connected to

I'm using SignalR and a web farm in IIS, currently with 3 servers, and requests are load balanced via ARR.
There are certain external events that I want to be processed by the server to which the client is connected. So I want to track which of the 3 servers the client is currently connected to.
I thought that I could do this using OnConnected and within that method store the MachineName against the ConnectionID in redis.
The problem is that OnConnected seems to get called on a different server to the one that the client is connected to.
Upon investigating, it seems that there are three calls: one to /negotiate, one to /connect and one to /start. The /connect call seems to be the websocket connection that is kept up for the duration; the others are just transient.
These three connections can happen on different servers, and it seems that the websocket connection can be to server A (so that's the server that the client's SignalR connection is going to), but the OnConnected gets fired on server B.
I was wondering if I'm overlooking something that will let me see which server the SignalR connection is actually connected to?
Thanks,
Will
If you are going to use a web farm, then you need to implement a backplane to track all of the messaging.
https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-in-signalr
Without a proper backplane implementation, it's impossible to do what you want to do.
I believe that is something you would have to save yourself. Assuming you are using a database for mapping users, you could have an additional field such as "LoggedInOn" and store the server host name or another identifier.
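A sketch of that idea in an ASP.NET SignalR hub (ConnectionDirectory is a hypothetical store, e.g. a Redis hash as the question suggests; note the caveat above that without session affinity, OnConnected may fire on a different node than the one holding the websocket):

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class FarmHub : Hub
    {
        public override Task OnConnected()
        {
            // Record which farm node accepted this connection.
            // ConnectionDirectory is a hypothetical repository.
            ConnectionDirectory.Set(Context.ConnectionId, Environment.MachineName);
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            ConnectionDirectory.Remove(Context.ConnectionId);
            return base.OnDisconnected(stopCalled);
        }
    }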
However, other than for whatever troubleshooting you are looking to do, the proper sending and receiving of messages should cross the backplane to all servers. This way, no matter which server clients are connected to, messages are received.
If you have external events as you say, once they complete and a message is ready to be sent back to a client, the backplane should push that to all servers.
If that's not happening, I would review the docs as Kelso Sharp stated.

Does the SignalR backplane share connections too?

Does the SignalR backplane share the connection information as well?
I mean, in the case of "longpolling", if the connect request goes to one server and the start request goes to another server, then it gives this error:
"The ConnectionId is in the incorrect format."
I believe this error occurs because the instance that this request goes to does not have any information about this connection id. I am using the SQL Server backplane but am still facing this problem.
We are not supposed to use sticky sessions in our production environment.
No, SignalR doesn't share any information regarding client connect/disconnect over the backplane (for example, server2 is not notified about new client connections on server1).
So the problem is somewhere else...
I found the problem. It was the machine key issue.
I had to explicitly add a machineKey in the web.config of my application.
Then it was able to unprotect the token generated by another instance of my application.
Now it's working fine.

Are all clients in a group assured to receive signalR calls when SignalR is scaled out behind load balancer?

I've been looking into a SignalR implementation incorporated with a load balancer, and have a few basic (if not simple-sounding) questions.
I must preface this by saying I've got zero (0) experience with load balancers.
We will have 2 servers sitting behind a load balancer.
The client is an ASP .Net application.
We've been told that the load balancer maintains session affinity.
Consider the following scenario:
Client1 & Client2 -- connect to GroupA--> Server1
Client3 & Client4 -- connect to GroupA--> Server2
1) Server1 makes a client call to GroupA - this assumes that Clients 1-4 will get the notification, correct?
2) How does the processing occur on this?
3) Is it a function of SignalR itself, or the load balancer?
4) When sending messages at the group level, do messages only get delivered to the client apps associated with the group on that specific server, or will messages get forwarded to all clients of that group?
Does anyone have any thoughts on this?
Thanks,
JB
I believe the scenario you're looking at requires a SignalR backplane to be set up.
Here's a relevant selection from the article, but you'll want to read the full thing to answer your specific questions:
Each server instance connects to the backplane through the bus. When a message is sent, it goes to the backplane, and the backplane sends it to every server. When a server gets a message from the backplane, it puts the message in its local cache. The server then delivers messages to clients from its local cache.
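In other words, with a backplane in place a group send issued on any node reaches the group's members on every node, so Clients 1-4 all get the notification no matter which server they connected through. A sketch (hub, method, and group names are illustrative):

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class GroupHub : Hub
    {
        public Task JoinGroupA()
        {
            // Membership is tracked on the node that owns the connection.
            return Groups.Add(Context.ConnectionId, "GroupA");
        }

        public void NotifyGroupA(string message)
        {
            // The send travels over the backplane, so members connected to
            // Server1 and Server2 alike receive it from their local node.
            Clients.Group("GroupA").notify(message);
        }
    }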
