I'm having issues with SignalR failing to complete its connection cycle when running in a load balanced environment. I’m exploring Redis as a means of addressing this, but want a quick sanity check that I’m not overlooking something obvious.
Symptoms –
Looking at the network traffic, I can see the negotiate and connect requests being made, via XHR and WebSockets respectively, which is what I'd expect. However, the start request fails with
Error occurred while subscribing to realtime feed. Error: Invalid start response: ''. Stopping the connection.
And an error message of ({"source":null,"context":{"readyState":4, "responseText":"","status":200, "statusText":"OK"}})
As expected, this occurs when the connect and start requests are handled by different servers. It also works 100% of the time in a non-load-balanced environment.
Is this something that a Redis backplane will fix? It seems to make sense, but most of the rationale I've seen for adding a backplane is around hub messages getting lost, not connections failing to be made, so I'm wondering if I'm overlooking something basic.
Thanks!
I know this is a little late, but I believe the backplane only lets you send messages between the user pools of the different servers; it doesn't have any effect on how connections are made or closed.
Yes, you are right. The backplane acts as a cache to pass messages to the other servers behind the load balancer. Connection handling on a load balancer is a different and tricky topic, for which I am also looking for an answer.
Related
I'd like to know if there's a way to communicate directly between two (or more) flask-socketio servers. I want to pass information between servers, and have clients connect to a single WebSocket server, which would have all the combined logic and data from the other servers.
I found this JavaScript example, Socket.IO Server to Server, where the solution was to use a socket.io-client to connect to another server.
I've looked through the Flask-SocketIO documentation, as well as other resources; however, it doesn't appear that Flask-SocketIO has a client component.
Any suggestions or ideas?
Flask-SocketIO 2.0 can (maybe) do what you want. This is explained in the Using Multiple Workers section of the documentation.
Basically, the servers are configured to connect to a shared message queue service (redis, for example), and then a load balancer in front of them assigns clients to any of the servers in the pool using sticky sessions. Broadcasting operations are coordinated automatically among the servers by passing messages on the queue.
As an additional feature, if you use this setup, you can have any process connect to the message queue to post messages for clients. For example, you can emit events to clients from a worker or other auxiliary process that is not a SocketIO server.
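To make that concrete, here is a minimal sketch of the setup, assuming a Redis server on localhost and the redis Python package installed (the 'chat' event name and the worker script are made up for illustration):

```python
# server.py - every server in the load-balanced pool runs this
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# All servers point at the same Redis instance, so broadcasts are
# coordinated across the whole pool.
socketio = SocketIO(app, message_queue='redis://localhost:6379/0')

@socketio.on('chat')
def handle_chat(data):
    # A plain emit() broadcasts; the message queue relays it to the
    # clients attached to the other servers too.
    socketio.emit('chat', data)

if __name__ == '__main__':
    socketio.run(app)

# worker.py - an auxiliary process that is not a SocketIO server
from flask_socketio import SocketIO

queue = SocketIO(message_queue='redis://localhost:6379/0')
queue.emit('chat', {'text': 'posted from outside the web servers'})
```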
From your question it is unclear if you were looking to implement something like this, or if you wanted the servers to communicate for a different reason. Sending custom messages on the queue is currently not supported, but your question gave me the idea that this might be useful for some scenarios.
As far as using a SocketIO client, as in the question you referenced, that should also work. You can use this Python package: https://pypi.python.org/pypi/socketIO-client. If you go this route, you can have a server act as a client to receive events or join rooms.
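If you go that route, the usage is roughly this (the host, port, and event names below are made up for illustration):

```python
from socketIO_client import SocketIO

def on_update(data):
    print('received from the other server:', data)

# One server process acting as a plain Socket.IO client of another server.
with SocketIO('other-server.example.com', 8000) as sio:
    sio.on('update', on_update)            # receive events from the remote server
    sio.emit('join', {'room': 'backend'})  # e.g. ask to join a room there
    sio.wait(seconds=10)                   # pump incoming events for a while
```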
Is using Sticky Sessions a supported scale out scenario? Has anyone deployed SignalR with Sticky Sessions and were there any unexpected issues?
We're investigating SignalR for a load-balanced, broadcast-based project (similar to a stock ticker) where message latency is an important factor. After reading through the Scale Out documentation, it seems that the backplane model could introduce significant latency in the messages, especially when message rates are high.
I've found some references implying that it would work with some side effects but not what the reliability and performance implications are.
Thanks!
If you use SignalR without a backplane, any client method invocation will only be able to reach clients connected directly to the server making the invocation.
This might be fine if you are only using Clients.Caller, since the caller should always come back to the same server given sticky sessions. It could become an issue if you are using Clients.All, Clients.Others, Clients.Client(connectionId), Clients.User(userName), Clients.Group(groupName), etc. In those cases, any client connected to a server other than the one executing the Clients... code will not receive the invocation, regardless of whether the client is connected to the same Hub, has the right connectionId, and so on.
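The underlying issue is easy to see in a toy model (Python purely for illustration; this is not SignalR API, just the shape of the problem): each server holds only its own connection table, so a broadcast issued on one server never reaches clients parked on another.

```python
# Two servers behind a load balancer, no backplane. Illustrative only.
class Server:
    def __init__(self, name):
        self.name = name
        self.connections = {}  # connection id -> client callback

    def connect(self, connection_id, callback):
        self.connections[connection_id] = callback

    def broadcast(self, message):
        # The moral equivalent of Clients.All: only THIS server's
        # connection table is consulted.
        for callback in self.connections.values():
            callback(message)

a, b = Server('A'), Server('B')
a.connect('client-1', lambda m: print('client-1 got:', m))
b.connect('client-2', lambda m: print('client-2 got:', m))

a.broadcast('tick')  # client-1 receives it; client-2 on server B never does
```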
I feel like I'm getting mixed reviews about SignalR's disconnection functionality, and I'm trying to figure out who is right (these packages move so fast that it's hard to tell what information is current; something you find online could be two months old and outdated).
I've seen many people set up pinging code to tell if a client is still connected. Yet I see others talking about the Disconnect() function that gets fired from the Hub when a client disconnects. Then I see some say the Disconnect() method isn't reliable?
Does anyone have the details on this as it stands today? Should I avoid the Disconnect() method because in some cases (which maybe I haven't run into yet) it isn't reliable? It's confusing trying to search for information when these things change so often, invalidating the older information you find on the web.
There might be a couple of edge cases where you don't get timely notifications, but in general it is reliable. We also raise disconnect events on the client, and we have keep-alive functionality which ensures that if the client doesn't hear from the server within a specified timeout, we will try to reconnect and ultimately disconnect if the reconnects fail. Therefore, you can take appropriate actions on the client.
You can read more about this here http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#connectionlifetime
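The keep-alive logic described above boils down to something like this sketch (illustrative Python, not SignalR's actual implementation; the timeout and retry values are invented):

```python
import time

KEEP_ALIVE_TIMEOUT = 30  # seconds of silence before acting (hypothetical value)
RECONNECT_ATTEMPTS = 3   # hypothetical retry budget

def check_connection(last_heard, reconnect):
    """Decide the client-side connection state from the last keep-alive."""
    if time.time() - last_heard < KEEP_ALIVE_TIMEOUT:
        return 'connected'
    # The server has gone quiet: try to reconnect a few times...
    for _ in range(RECONNECT_ATTEMPTS):
        if reconnect():
            return 'reconnected'
    # ...and ultimately raise the disconnected event so the app can react.
    return 'disconnected'
```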
I'm having a lot of trouble with an EC2 instance and I can't figure out what's going on. We're using it as a web server and it seems to work fine for single connection stuff - loading a simple page, RDP connection, ping etc. But as soon as a single client computer has more than one connection active with the server (a good example is if I try to browse the web site while I'm also logged into the server via RDP) the whole connection becomes incredibly unstable.
The biggest and most annoying consequence of this is that the ASP.NET site we're running consistently fails to load some pages, since those pages use more than one connection. This wasn't a problem until a few days ago, when we were forced to migrate to different hardware because ours was apparently being retired by Amazon. Ever since then it's been tricky like this. Is it possible that there's a kink in Amazon's network, and that it could potentially be resolved by stopping and starting the instance (and thus getting a different server)?
It turns out the problem was an underlying issue on Amazon's end. They investigated the issue and found a problem that they're correcting. I hope I haven't wasted too much StackOverflow brainpower with this dead-end of a question!
I'm building a client-server application and I am looking at adding failover to the client so that when a server is down it will try to connect to another available server. Are there any standards or specifications covering server failover? I'd rather adopt an existing standard than implement my own mechanism.
I don't think there is, or needs to be, any. It's pretty straightforward and all depends on how you connect to your server, but basically you need to keep sending pings/keepalives/heartbeats, whatever you want to call them, and when a failure occurs (or n failures in a row, if you want), flip a switch in your config.
Typically, the above would run as a separate service on the client machine. Alternatively, you could create a method execution handler that handles the execution of all server calls you make and, on communication failure, in your catch block, flips the switch in your config.
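Putting those two suggestions together, the separate heartbeat service might look roughly like this (Python sketch; the hosts, port, and thresholds are all hypothetical):

```python
import socket
import time

SERVERS = ['primary.example.com', 'backup.example.com']  # hypothetical hosts
PORT = 9000          # hypothetical service port
MAX_FAILURES = 3     # consecutive missed heartbeats before failing over
INTERVAL = 5         # seconds between heartbeats

def ping(host):
    """One heartbeat: succeeds if a TCP connection can be opened."""
    try:
        with socket.create_connection((host, PORT), timeout=2.0):
            return True
    except OSError:
        return False

def watch(active=0):
    """Runs as the client-side service; yields the index of the current server."""
    failures = 0
    while True:
        if ping(SERVERS[active]):
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                active = (active + 1) % len(SERVERS)  # the 'switch' in your config
                failures = 0
        yield active
        time.sleep(INTERVAL)
```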
Your question is very general. Here are some general answers:
Google for Fault Tolerant Computing
Google for High Availability Solutions
This is usually handled at either the load balancer or the server level. This isn't something you normally do in code at the client.
Typically, you multihome the servers, each having its own IP plus one that is shared between all of them. They also communicate with each other over TCP, using a heartbeat to know which is the active node in an active/passive cluster.
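In rough Python terms (purely illustrative; the peer address and port are invented, and real clustering software does this for you):

```python
import socket

PEER = ('10.0.0.2', 7000)  # hypothetical private address of the other node

def peer_alive():
    """The TCP heartbeat mentioned above."""
    try:
        with socket.create_connection(PEER, timeout=2.0):
            return True
    except OSError:
        return False

def my_role():
    # Stay passive while the peer answers; claim the shared IP
    # (become the active node) only when it stops responding.
    return 'passive' if peer_alive() else 'active'
```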
I can't tell what type of servers you have, but most Windows servers can do this natively.
You might consider asking the question at serverfault to see how to properly configure your servers to support this.