I connect to my server through a load-balanced alias that points to two servers, 01 and 02, and round-robins connections, for argument's sake. I can connect to the hub without a problem, and I can even send data to the server, but when it tries to return something to the client, my client-side methods are never invoked. If I bypass the load balancer and use the server name explicitly, it always works just fine.
I'm even tracing it: I send the message back from the exact originating server with Clients.Client(clientId).completeJob(stuff), and that executes fine on the server, but if I ContinueWith, it never completes.
Oh, and the client is connected with server-sent events. Am I missing something, or is this just not supported?
Server-sent events establish a long-running connection, but unlike a WebSocket, that connection isn't bidirectional. It can only be used to push data to the client.
SignalR uses regular XHRs to send data from clients when the WebSocket transport is unavailable. This means the load balancer will likely route client-to-server hub method invocations to a different server than the one the client originally established its server-sent events connection with.
The server executing Clients.Client(clientId).completeJob(stuff) likely doesn't own the connection that would allow it to push a message to the specified client. (Though returning a value from a hub method on the server will send data back to the client via the same connection that invoked the method.)
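To make that distinction concrete, here is a minimal hub sketch (a classic ASP.NET SignalR 2 hub is assumed, and the hub and method names are placeholders rather than your actual code):

using Microsoft.AspNet.SignalR;

public class JobHub : Hub
{
    // Returning a value travels back over the same connection that invoked
    // the method, so it works even when each server acts on its own.
    public string CompleteJobInline(string stuff)
    {
        return "completed: " + stuff;
    }

    // Pushing with Clients.Client only reaches the client if this server owns
    // the client's persistent connection, or if a backplane forwards the message.
    public void CompleteJob(string stuff)
    {
        Clients.Client(Context.ConnectionId).completeJob(stuff);
    }
}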
SignalR can work behind a load balancer. It just requires a little more setup so all the SignalR servers can communicate with each other via a backplane such as Service Bus or Redis. This allows messages to get dispatched to the server that owns the server-to-client connection.
https://github.com/SignalR/SignalR/wiki/Azure-service-bus details how you can set up a Service Bus backplane on Azure.
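For reference, a minimal sketch of that kind of setup in an OWIN Startup class (this assumes the Microsoft.AspNet.SignalR.ServiceBus package; the connection string and topic prefix are placeholders):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every server in the farm registers the same backplane, so messages
        // get forwarded to whichever server owns the target connection.
        string connectionString = "Endpoint=sb://<your-namespace>.servicebus.windows.net/;...";
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "SignalRBackplane");

        app.MapSignalR();
    }
}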
I am planning to use the vaultTrack method to track changes to the state object. Once I capture the events at the client level, I am planning to store that data in an offline DB or invoke another API. Will there be any challenges in this implementation? As per my understanding, the RPC client library will be listening for state changes all the time, and it also handles incoming RPC calls from external parties. Will it slow down performance? How exactly does the vaultTrack method work internally?
Hi, I don't see any challenge in your implementation.
In Corda we use Apache Artemis for RPC communication. The Corda-RPC library must be included on the client side in order to connect to the server.
Internally, it works like this:
At startup, Artemis instances are created on the RPC client (client side) and the RPC server (within the Corda node), client and server queues are created, and sessions are established between the client and the server. The Corda-RPC library contains a client proxy, which translates RPC client calls into low-level Artemis messages and sends them to the server's Artemis instance. These RPC requests are stored on the server side in Artemis queues. The server-side consumer retrieves these messages, the appropriate RPC calls are made, and an acknowledgement is sent to the client. Once the method completes, a reply is sent back to the client: the reply is wrapped in an Artemis message and sent across by the server's Artemis to the client's Artemis. The client then consumes the reply from the client Artemis queue.
The client proxy within the Corda-RPC library abstracts away the above process. From a client perspective you only need to create the proxy instance and make the RPC calls.
I would urge you to use the Reconnecting Client. You can read more about this in a blog post I have written.
Also, please read the last part of the blog, which talks about how to handle reconnection/failover scenarios.
I'm using SignalR and a web farm in IIS, currently with 3 servers and requests are load balanced via ARR.
There are certain external events that happen which I want to be processed by the server the client is connected to. So I want to track which of the 3 servers the client is currently connected to.
I thought that I could do this using OnConnected and within that method store the MachineName against the ConnectionID in redis.
The problem is that OnConnected seems to get called on a different server to the one that the client is connected to.
Upon investigating, it seems that there are three calls: one to /negotiate, one to /connect, and one to /start. The /connect call seems to be the WebSocket connection that is kept up for the duration; the others are just transient.
These three connections can happen on different servers, and it seems that the websocket connection can be to server A (so that's the server that the client's SignalR connection is going to), but the OnConnected gets fired on server B.
I was wondering if I'm overlooking something that will let me see which server the SignalR connection is actually connected to?
Thanks,
Will
If you are going to use a web farm, then you need to implement a backplane to track all of the messaging.
https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-in-signalr
Without a proper backplane implementation it's impossible to do what you want to do.
I believe that is something you would have to save. Assuming you are using a database for mapping users, you could have an additional field such as "LoggedInOn" and store the server host name or other identifier.
However, other than for whatever troubleshooting you are looking to do, proper sending/receiving of messages should cross the backplane to all servers. This way, no matter which server the client is connected to, messages are received.
If you have external events as you say, once they complete and a message is ready to be sent back to a client, the backplane should push that to all servers.
If that's not happening I would review the docs as Kelso Sharp stated.
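If you do decide to record the server yourself as suggested above, a minimal sketch of that idea could look like this (StackExchange.Redis and a classic SignalR 2 hub are assumed; the hub name and key prefix are made up):

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using StackExchange.Redis;

public class TrackingHub : Hub
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");

    public override Task OnConnected()
    {
        // Record which machine handled OnConnected. As the question notes,
        // this is not guaranteed to be the server that ends up holding the
        // actual WebSocket connection.
        Redis.GetDatabase().StringSet(
            "signalr:server:" + Context.ConnectionId, Environment.MachineName);
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        Redis.GetDatabase().KeyDelete("signalr:server:" + Context.ConnectionId);
        return base.OnDisconnected(stopCalled);
    }
}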
I am fairly new to SignalR concepts. I have a scenario where load balancing is in place with two servers. The client request is taken by the load balancer and redirected to one of the servers based on load, and after redirection the connection from the client to the server is lost. An important point is that client requests are for different purposes, i.e. they call different methods on the hub. The server continues processing the request, and if during that time it detects any status change, it has to push a notification back to the clients. However, at that point the server won't know which client it has to respond to, because the load balancer doesn't store any information about it once the connection from the client to the server is lost. How do I handle this kind of scenario? Should I manually store the session ID and other details in a table?
I have gone through the scaleout options the SignalR team suggests for load balancing using a backplane (Azure Service Bus, Redis, and SQL Server). However, my scenario is a little different. Any help will be appreciated.
In the SignalR (server) hub I want to do a license check. If the check is negative, I want to block the connection in the hub's OnConnected. On hub start, the client should get the Task back as canceled, with a message (no valid licence).
When I return a Task with an AggregateException from OnConnected in the SignalR hub, the client ends up in a faulted state with a timeout exception.
How can I block the connection to the SignalR hub and give the client a message explaining why I have blocked it?
As far as I know, you can't just start or stop connections from the server; the client has to disconnect itself. If you want to use the hub for the license check, you need to have the client connect, send the license info, and have the server check it; if it is invalid, call $client.disconnect on the client.
The other option, as blorkfish mentions, is to allow clients to connect, add them to a list, and check that list when they call methods on the server.
I don't think that you should block the connection with an Exception. Your client would then not be able to tell if there was a genuine error in the SignalR connection.
Rather, send back a specific SignalR message saying that there is no license, and then manage the connection object on the server side.
Keep a list of licensed connections, and a list of unlicensed connections.
So instead of using Clients.All to broadcast, use Clients.Client("<client_connection_id>").
Hope this helps.
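To make that concrete, a minimal sketch along those lines could look like the following (a classic SignalR 2 hub is assumed; the method names, the client-side callbacks noValidLicense and receiveMessage, and the license check itself are placeholders):

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class LicensedHub : Hub
{
    // Connection ids that passed the license check. An in-memory set is the
    // simplest option on a single server; a web farm would need shared storage.
    private static readonly ConcurrentDictionary<string, byte> Licensed =
        new ConcurrentDictionary<string, byte>();

    public void RegisterLicense(string licenseKey)
    {
        if (IsValid(licenseKey))
        {
            Licensed[Context.ConnectionId] = 1;
        }
        else
        {
            // Tell the client why, instead of faulting the connection with an
            // exception; the client can then stop the connection itself.
            Clients.Client(Context.ConnectionId).noValidLicense("no valid licence");
        }
    }

    public void Broadcast(string message)
    {
        // Only licensed connections receive the broadcast.
        foreach (var connectionId in Licensed.Keys)
        {
            Clients.Client(connectionId).receiveMessage(message);
        }
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        byte removed;
        Licensed.TryRemove(Context.ConnectionId, out removed);
        return base.OnDisconnected(stopCalled);
    }

    private static bool IsValid(string licenseKey)
    {
        return !string.IsNullOrEmpty(licenseKey); // stand-in for the real check
    }
}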
We have a requirement wherein the server needs to push data to various clients, so we went with SSE (server-sent events). I went through the documentation but am still not clear on the concept. I have the following queries:
Scenario 1: Suppose there are 10 clients. All 10 clients send the initial request to the server, and 10 connections are established. When data arrives at the server, a message is pushed from server to client.
Query 1: Will the server maintain the IP addresses of all the clients? If yes, is there an API to check it?
Query 2: What will happen if all 10 client windows are closed? Will the server abort all the connections after a period of time?
Query 3: What will happen if the server is unable to send messages to a client because the client is unavailable, for example after a machine shutdown? Will the server abort the connections after a period of time for those clients it cannot reach?
Please clarify.
This depends on how you implement the server.
If you are using PHP as an Apache module, then each SSE connection creates a new PHP instance running in memory; each "server" is only serving one client at a time. Q1: yes, but that isn't your problem: you just echo messages to stdout. Q2/Q3: if the client closes the connection, for any reason, the PHP process will shut down when it detects this.
If you are using a multi-threaded server, e.g. http in node.js: Q1: the client IP is part of the socket abstraction, and you just send messages to the response object. Q2/Q3: as each client connection closes its socket, the request handler that was serving it will end. Once all 10 have closed, your server will still be running, but it won't be sending data to any clients.
One key idea to realize with SSE is that each client gets a dedicated socket. It is not a broadcast protocol where you push out one message and all clients get exactly the same message. Instead, you have to send the data to each client individually. But that also means you are free to send customized data to each client.
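To illustrate that per-client idea, here is a minimal SSE server sketch, written in C#/ASP.NET Core to match the rest of this page rather than PHP or node.js (the endpoint path, message contents, and five-second interval are all arbitrary):

using System.Collections.Concurrent;

var app = WebApplication.Create(args);

// One entry per connected client: each client has its own dedicated response stream.
var clients = new ConcurrentDictionary<Guid, HttpResponse>();

app.MapGet("/events", async (HttpContext ctx) =>
{
    ctx.Response.ContentType = "text/event-stream";
    var id = Guid.NewGuid();
    clients[id] = ctx.Response;
    try
    {
        // Hold the connection open; RequestAborted fires when the client goes
        // away for any reason (queries 2 and 3), and the entry is then dropped.
        await Task.Delay(Timeout.Infinite, ctx.RequestAborted);
    }
    catch (OperationCanceledException) { }
    finally
    {
        clients.TryRemove(id, out _);
    }
});

// Push data to every connected client, one stream at a time. There is no single
// broadcast socket, so each client could just as easily get customized data here.
_ = Task.Run(async () =>
{
    while (true)
    {
        foreach (var (id, response) in clients)
        {
            try
            {
                await response.WriteAsync($"data: update for {id}\n\n");
                await response.Body.FlushAsync();
            }
            catch
            {
                clients.TryRemove(id, out _); // unreachable client: drop it
            }
        }
        await Task.Delay(TimeSpan.FromSeconds(5));
    }
});

app.Run();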