When will "OnReconnected" occur? - signalr

If "OnConnected" is raised when the 1st time we connect to our website, when will "OnReconnected" happen?
1) Suppose someone is connected to the network and suddenly the network isn't available and soon it recovers, so OnReconnected happens?
2) Any other special that will make OnReconnted happen?
Thanks!

The SignalR documentation on Understanding and Handling Connection Lifetime Events in SignalR should have all the information you need.
Generally speaking, OnReconnected will fire any time the SignalR client automatically reconnects to the SignalR server after it has lost its connection for any reason. These reasons can include network issues, the server restarting, and so on.
The SignalR client will stop attempting to automatically reconnect to the server if it is unable to do so within the DisconnectTimeout. If this happens and you want to reestablish a connection, you must manually restart the client by calling start() after it becomes disconnected. When you restart the client this way, OnConnected is called instead of OnReconnected and the client receives a new connection id.
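For illustration, here is a minimal SignalR 2.x hub sketch showing where each lifecycle method fires (the hub name and tracing are illustrative; in SignalR 1.x, OnDisconnected takes no parameter):
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class LifetimeHub : Hub
{
    // A brand-new connection (including a manual start() after a full disconnect):
    // the client receives a new connection id.
    public override Task OnConnected()
    {
        Trace.TraceInformation("Connected: " + Context.ConnectionId);
        return base.OnConnected();
    }

    // An automatic reconnect within the DisconnectTimeout:
    // the connection id is preserved.
    public override Task OnReconnected()
    {
        Trace.TraceInformation("Reconnected: " + Context.ConnectionId);
        return base.OnReconnected();
    }

    // The server has given up on the connection (or the client stopped it).
    public override Task OnDisconnected(bool stopCalled)
    {
        Trace.TraceInformation("Disconnected: " + Context.ConnectionId);
        return base.OnDisconnected(stopCalled);
    }
}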

Related

SignalR .NET 2.4.1 - Dead Connection Id Staying Alive

I'm witnessing an odd behavior with my SignalR client (Android). The OnDisconnected event is firing and the connection becomes dead, but my hub aborts the event and reissues the connection id as a new connection.
This seems to occur when the Android client goes into a slow state. It's messing up the status indicator on my UI, which shows that the user is still connected even though they have logged out. What's the best approach to handle this situation? Should I stop my hub and reconnect when my connection is slow? I thought about getting the connection id from the hub, but there's no way to tell whether the connection is alive or dead.
I had a similar situation (using the .NET client) and ended up implementing a heartbeat hub, where a simple message (a heartbeat) is sent to each client every 15 seconds. The clients have a timer that listens for the heartbeat and resets whenever a new one arrives.
If two heartbeats are missed, the client closes the connection, waits a few seconds, and opens a new one. There is no way to close a connection from the server. This should allow the server to get the close-connection message from the client and actually kill the connection.
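A rough server-side sketch of that heartbeat approach in SignalR 2.x (the hub name, broadcaster class, and client-side method name are assumptions; only the 15-second interval comes from the description above):
using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

// Empty hub that clients connect to just to receive the heartbeat broadcast.
public class HeartbeatHub : Hub
{
}

public static class HeartbeatBroadcaster
{
    private static Timer _timer;

    // Call once at application start-up.
    public static void Start()
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<HeartbeatHub>();

        // Broadcast a heartbeat to every client every 15 seconds.
        _timer = new Timer(_ => context.Clients.All.heartbeat(),
                           null,
                           TimeSpan.Zero,
                           TimeSpan.FromSeconds(15));
    }
}
On the client, a timer is reset whenever the heartbeat arrives; if two intervals pass without one, the client stops and restarts the connection as described above.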
Please be aware that the SignalR Java client is no longer maintained. https://github.com/SignalR/java-client#this-repository-is-obsolete-and-no-longer-used-or-maintained.

SignalR - Not calling OnReconnected using long polling

From what I have read a SignalR client should not miss any messages from the server while it's connected. This does not seem to be the case when using long polling.
I have a straightforward hub based application using SignalR 1.1.2. When using SSE, if the network cable is unplugged and plugged back in again within the timeout period, both the client and server are notified that a reconnect has occurred and, as far as I can tell, no messages are missed. When using long polling, this seems to happen:
When the connection is created ($.connection.hub.start()), the OnConnected method is called in the hub and the client goes into the connected state.
If I then unplug the network cable and pop it back in quickly, there is no call to OnDisconnected or OnConnected. No messages are missed. Any messages waiting on the server are subsequently sent to the client. OK so far.
If I unplug the network cable and let the long poll expire, I get a call to OnDisconnected. There is no state change on the client.
If I plug the network cable back in the client starts receiving messages again. There has been no notification on the client that it has been disconnected, but the client has missed some messages. There is no call to OnReconnected or OnConnected on the server.
Is this a bug? The behaviour seems very different between SSE and long polling.
Is there a recommended strategy to ensure that the client does not miss messages in this scenario? I could keep track of connection ids on the server and send periodic pings from the client - if I get a ping after an OnDisconnected I could send a message to tell the client to resync, but this doesn't seem like the right thing to do.
Any suggestions?
WebSockets, Server-Sent Events, and Forever Frame all utilize a client-side keep-alive, which is used to ensure client connectivity. However, Long Polling does not utilize the client-side keep-alive feature due to technical limitations, so it has no guarantee of connectivity for events such as pulling the network cable out.
When I say no guarantee, I simply mean that the Long Polling transport's connectivity can no longer be verified by SignalR itself; it instead relies on the browser to trigger the correct events on Long Polling's ajax connection (to which SignalR can then respond).
Keep in mind, though, that if the client does happen to regain connectivity with the server after pulling out the network cable, it will receive any messages that it missed during its downtime. So messages are not missed, they're just delayed.
Lastly, in the case that the server does not see the client for an extended period of time, the OnDisconnected event WILL be triggered. For this to happen in a situation such as pulling the network cable out, the server will first time out the current connection's request and then time out the connection itself. This means that you can still rely on the OnDisconnected event; it may just be delayed based on network conditions.
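For reference, the two timeouts that control how delayed that OnDisconnected call can be are configurable at start-up. A sketch showing the documented default values (set before the hubs are mapped):
using System;
using Microsoft.AspNet.SignalR;

public static class SignalRTimeouts
{
    public static void Configure()
    {
        // Long Polling only: how long a poll request is held open before the
        // server ends it and the client issues a new one (default 110 seconds).
        GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);

        // How long the server waits after losing the transport before raising
        // OnDisconnected, and how long the client keeps trying to reconnect
        // (default 30 seconds).
        GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
    }
}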
Soooo what you're seeing is 100% by design =)
Hope this helps!

Client Reconnection

My understanding of the (JavaScript) hub client is that if a connection is lost, it enters a 'Reconnecting...' phase in which it attempts to reconnect. If it can't do so, it enters a 'Disconnected' state, which is where it'll stay until asked to start again.
How long is the 'Reconnecting...' phase meant to last before it gives up? I've read 40 seconds before, but my client seems to take much less time - about 10, maybe less. [EDIT: Never mind this part, I had configured a 10-second disconnect timeout on the server as a test... and forgot. I understand this is set by the server during the negotiate. Makes sense!] ... I'd prefer to have the client continually retry until it is told to abort - can this be done, and would it cause issues?
Another question: during the 'Reconnecting...' phase, if I attempt to call a hub method (again, in JS) it never seems to complete. I'm using the returned Deferred to check for 'done' and 'fail' events, but neither seems to get called. Is this by design?
Thanks.
You can definitely have it continually reconnect.
Handle the disconnected event on the client and call connection.start:
$.connection.hub.disconnected(function() {
    setTimeout(function() {
        $.connection.hub.start();
    }, 5000); // Re-start connection after 5 seconds
});
The only issue this would cause is that client machines could end up triggering endless requests against a server that isn't there. This becomes even more troublesome when you introduce the mobile market into the situation (it drains the battery like crazy).
When you attempt to call a hub method while reconnecting, SignalR will try to send your command. Since there are two channels, one for receiving data and one for sending (for all transports except WebSockets), in some cases it can still be possible to send requests while you're offline. Therefore SignalR does not know that a request has failed until the browser tells it that the request could not be made successfully.
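One defensive pattern is to check the connection state before invoking and to observe the result of the call rather than assuming it completed. Here is a sketch with the .NET client (the URL, hub, and method names are illustrative); the JS client exposes the same state via $.connection.hub.state:
using System;
using Microsoft.AspNet.SignalR.Client;

class InvokeGuardExample
{
    static void Main()
    {
        var connection = new HubConnection("http://example.com/");
        var proxy = connection.CreateHubProxy("ChatHub");
        connection.Start().Wait();

        // Only send while actually connected; otherwise queue the call or report an error.
        if (connection.State == ConnectionState.Connected)
        {
            proxy.Invoke("Send", "hello").ContinueWith(t =>
            {
                if (t.IsFaulted)
                {
                    // The send failed (for example, the connection dropped mid-call).
                    Console.WriteLine("Invoke failed: " + t.Exception.GetBaseException().Message);
                }
            });
        }
    }
}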
Hope this helps!
I might have a clue... Touching the Web.config produces an app pool recycle, meaning that a new worker process will be created for new requests while the existing process continues for a while, until the remaining requests end or the timeout is reached. Requests that do not end within the timeout period are terminated.
The SignalR client reconnects to the new process while the long-running task is still running in the old process, so when, inside the long-running task, you call
GlobalHost.ConnectionManager.GetHubContext<ForceHub>();
you actually get a reference to the "old" hub while the client is connected to the "new" hub.
That's why the test performed by Wasp worked: he was making a new request to publish on the SignalR hub, and that request was processed in the newly created worker process.
You could try to configure a SignalR backplane (https://www.asp.net/signalr/overview/performance/scaleout-in-signalr); it's really easy to configure one using SQL Server (https://www.asp.net/signalr/overview/performance/scaleout-with-sql-server). The backplane should be capable of connecting the two worker processes, and hopefully you will get the notification on the client.
If this is the problem, notifications generated by new requests will work even without the backplane. Note that the real purpose of the backplane is to scale out SignalR, that is, to connect a farm of web servers to each other.
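A minimal sketch of the SQL Server backplane registration (the connection string is illustrative, and the Microsoft.AspNet.SignalR.SqlServer package is required); it goes in the OWIN Startup class before the hubs are mapped:
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Register the SQL Server backplane before mapping SignalR.
        GlobalHost.DependencyResolver.UseSqlServer(
            "Server=.;Database=SignalR;Integrated Security=True;");

        app.MapSignalR();
    }
}
With this in place, both worker processes publish through the same SQL Server tables, so clients connected to either process should receive the messages.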
Also keep in mind that running long-running tasks inside IIS is a hard task to achieve since, among other things, IIS does regular app pool recycles and has timeout limits on how long requests may execute. I recommend that you read the following post: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
“If you think you can just write a background task yourself, it's likely you'll get it wrong. I'm not impugning your skills, I'm just saying it's subtle. Plus, why should you have to?”
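One of the options that post covers (available from .NET 4.5.2 onwards) is HostingEnvironment.QueueBackgroundWorkItem, which registers the work with ASP.NET so that an app domain shutdown is delayed, though not prevented, while the work runs. A minimal sketch (the work itself is illustrative):
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public static class BackgroundWork
{
    public static void Enqueue()
    {
        // ASP.NET tracks this work item and delays app domain shutdown
        // (e.g. during an app pool recycle) while it is still running.
        HostingEnvironment.QueueBackgroundWorkItem(async (CancellationToken ct) =>
        {
            // ... the long-running work goes here ...
            await Task.Delay(TimeSpan.FromMinutes(1), ct);
        });
    }
}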
Hope this helps

SignalR - catching keep-alive failure/disconnects

So I have SignalR working fine, pushing my data to the client with no problems. I implemented my own keep-alive using an ajax call to keep the connection alive. But I have been reading, and there is this option that I am trying:
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(30);
But my issue is: if it fails to send the keep-alive, how do you capture that on the JavaScript end?
If this is the server pushing data to keep the connection alive, does that mean the client will never know if it has failed?
Or will the JavaScript client raise connection.error?
I want to be able to pull the client's network cable out and, after XX seconds, have it display a message saying the network connection has been lost. At the moment I have this working using my ajax call, but is this possible using the keepalive value?
This is already implemented for every transport except Long Polling.
By default, the JS client will go into reconnecting if it has missed two keep-alives.
If you want to tie into the reconnecting event you can do:
$.connection.hub.reconnecting(function() {
    // Your logic
});
If you want to tie into the event that indicates that the connection MAY go into reconnecting, you can do:
$.connection.hub.connectionSlow(function() {
    // Your logic
});
Keep in mind that, by default, the client will stop trying to reconnect after a given time and will shift into the disconnected state to avoid unnecessary reconnect events. If you want to ensure that your connection is ALWAYS connected, even if there's downtime, see my answer here: Client Reconnection

netty client + keep-alive=true

I'm confused about how to deal with lots of connections in netty (3.6.2.FINAL) with keep-alive=true.
I'm working on a netty client used as a server-side connector that makes HTTP calls to another service; it should always keep the connection open for performance (keep-alive=true).
The issue: there is a hard limit on the number of open channels, after which the client hangs when attempting to open a new channel. Why does it just hang with no exception? Is this a channel timeout setting?
I can't seem to understand how Netty manages connections overall within worker threads:
With a blocking write/read client ChannelHandler (HTTP request/response), how do you detect that the connection pool is empty?
The handler can receive ChannelEvent(s), but nothing about the overall count available in the connection pool (it's very non-deterministic anyway). And if the channel is not open, does it make sense for the handler to initiate opening a new channel given that it's running in a worker thread?
And if the connection pool is exhausted, how do you go and clean up some idle connections (within the handler)?
I had to completely rip apart my handler to get the blocking client call to work without hanging. The issue was mostly resolved by not holding onto a local channel reference within the handler.
Now we just pass a ConnectionInterface#openConnection() [returns a new ChannelFuture] into the shared custom ChannelHandler#call( ConnectionInterface connectionInterface, HttpRequest request ).
It's better to open the channel within the handler's call method and to pass that channel along, checking its state before channel.write(x): if !channel.isWritable(), recycle the channel (get a new one from the client connection, e.g. ConnectionInterface#openConnection()) and retry the write. There isn't even a need to close the channel (that gets handled in the pool).
Just ran it with 500 threads / 5000 requests and it succeeds fine.

Resources