SignalR - Not calling OnReconnected using long polling (ASP.NET)

From what I have read, a SignalR client should not miss any messages from the server while it's connected. This does not seem to be the case when using long polling.
I have a straightforward hub-based application using SignalR 1.1.2. When using SSE, if the network cable is unplugged and plugged back in again within the timeout period, both the client and server are notified that a reconnect has occurred and, as far as I can tell, no messages are missed. When using long polling, this seems to happen:
When the connection is created ($.connection.hub.start()) the OnConnected method is called in the hub and the client goes into connected state.
If I then unplug the network cable and pop it back in quickly, there is no call to OnDisconnected or OnConnected. No messages are missed. Any messages waiting on the server are subsequently sent to the client. OK so far.
If I unplug the network cable and let the long poll expire, I get a call to OnDisconnected. There is no state change on the client.
If I plug the network cable back in the client starts receiving messages again. There has been no notification on the client that it has been disconnected, but the client has missed some messages. There is no call to OnReconnected or OnConnected on the server.
Is this a bug? The behaviour seems very different between SSE and long polling.
Is there a recommended strategy to ensure that the client does not miss messages in this scenario? I could keep track of connection ids on the server and send periodic pings from the client - if I get a ping after an OnDisconnected I could send a message to tell the client to resync, but this doesn't seem like the right thing to do.
Any suggestions?

WebSockets, Server-Sent Events, and Forever Frame all utilize a client-side keep-alive which is used to ensure client connectivity. However, Long Polling does not utilize the client-side keep-alive feature due to technical limitations, and has no guarantee of connectivity for events such as pulling the network cable out.
When I say "no guarantee" I simply mean that Long Polling's connectivity can no longer be verified by SignalR itself; instead SignalR relies on the browser to trigger the correct events on Long Polling's AJAX connection, to which SignalR can then respond.
Keep in mind though, if the client does happen to regain connectivity with the server after pulling out the network cable, it will receive any messages that it missed during its downtime. So messages are not missed, they're just delayed.
Lastly, in the case that the server does not see the client for an extended period of time, the OnDisconnected event WILL be triggered. For this to happen in a situation such as pulling the network cable out, the server will first time out the current connection's request and then time out the connection itself. This means that you can still rely on the OnDisconnected event; it may just be delayed based on network conditions.
Soooo what you're seeing is 100% by design =)
Hope this helps!

Related

SignalR .NET 2.4.1 - Dead Connection Id Staying Alive

I'm witnessing an odd behavior with my SignalR client (Android). The OnDisconnected event fires and the connection becomes dead, but my hub aborts the event and reissues the connection id as a new connection.
This seems to occur when the Android client goes into a slow state. It's messing up the status indicator in my UI, which shows the user as still connected even though they have logged out. What's the best approach to handle this situation? Should I stop my hub and reconnect when my connection is slow? I thought about getting the connection id from the hub, but there's no way to tell whether the connection is alive or dead.
I had a similar situation (using the .NET client) and ended up having to implement a heartbeat hub where a simple message (heartbeat) is sent to each client every 15 seconds. The clients have a timer that listens for the heartbeat and resets when it gets a new one.
If two heartbeats are missed, the client closes the connection, waits a few seconds, and opens a new one. There is no way to close a connection from the server. This should allow the server to get the close connection message from the client and actually kill the connection.
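For reference, the client-side watchdog can be as simple as one rescheduled timer. Here's a rough Java sketch; the restartConnection callback is a placeholder for whatever stop/start logic your SignalR client uses, and nothing here is tied to a specific SignalR client API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Watches for heartbeats from the server; if two consecutive 15-second
// heartbeats are missed, the restart callback is invoked.
public class HeartbeatWatchdog {
    private static final long HEARTBEAT_INTERVAL_SECONDS = 15;
    private static final long MISSED_BEATS_BEFORE_RESTART = 2;

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Runnable restartConnection;   // placeholder: stop, wait a few seconds, start again
    private ScheduledFuture<?> pending;

    public HeartbeatWatchdog(Runnable restartConnection) {
        this.restartConnection = restartConnection;
    }

    // Call this from the client handler that receives the "heartbeat" hub message.
    public synchronized void onHeartbeat() {
        if (pending != null) {
            pending.cancel(false);              // reset the timer on every heartbeat
        }
        pending = scheduler.schedule(
                restartConnection,
                HEARTBEAT_INTERVAL_SECONDS * MISSED_BEATS_BEFORE_RESTART,
                TimeUnit.SECONDS);
    }
}
```

The Runnable you pass in would stop the connection, wait a few seconds, and call start() again, matching the close-then-reopen behavior described above.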
Please be aware that the SignalR Java client is no longer maintained. https://github.com/SignalR/java-client#this-repository-is-obsolete-and-no-longer-used-or-maintained.

gRPC call, channel, connection and HTTP/2 lifecycle

I read the gRPC Core concepts, architecture and lifecycle article, but it doesn't go into the depth I'd like to see. There is the RPC call, the gRPC channel, the gRPC connection (not described in the article), and the HTTP/2 connection (not described in the article).
I'm interested in knowing how these come together. For example, what happens to the channel when an RPC throws an exception? What happens to the gRPC connection when the channel is closed? When is the channel closed? When is the gRPC connection closed? Heartbeats? What if the deadline is exceeded?
Can anyone answer these questions, or point me to resources that can?
The connection is not a gRPC concept. It is not part of the normal API and is an implementation detail. This should be seen as fairly normal, like HTTP libraries providing details about HTTP exchanges but not exposing connections.
It is best to view RPCs and connections as two mostly-separate systems.
The only real guarantee is that "connections are managed by channels," for varying definitions of "managed." You must shut down channels when no longer used if you want connections and other resources to be freed. Other details are either an implementation detail or an advanced API detail.
There is no "gRPC connection." A "gRPC connection" would just be a standard "HTTP/2 connection." Except that is even an implementation detail of the transport in many gRPC implementations. That allows having alternative "connection" types like "inprocess" or QUIC (via Cronet, where there is not a classic "connection" at all).
It is the channel's job to hold all the connections and reconnect as necessary. It delegates part of that responsibility to load balancers and the load balancing APIs do have a concept of connections (subchannels). By not exposing connections to the application, load balancers have a lot of freedom to operate.
I'll note that gRPC C-core based implementations share connections across channels.
What happens to the channel when an RPC throws an exception?
The channel and connection are not impacted by a failed RPC. Note that connection-level failures typically cause RPCs to fail. But things like retries could allow the RPC to be re-sent on a new connection.
What happens to the gRPC connection when the channel is closed?
The connections are closed, eventually. Channel shutdown isn't instantaneous because existing RPCs can continue, and connection shutdown isn't instantaneous either. But once all RPCs complete the connections are closed. Although C-core won't shut down a connection until no channels are using it.
When is the channel closed?
Only when the user closes it.
When is the gRPC connection closed?
Lots of times. The client may close it when no longer needed. For example, let's say the server IP address changes and the client needs to connect to 1.1.1.2 instead of 1.1.1.1. A new connection will be created and new RPCs will go to the new IP address. The client may also close connections it thinks are dead (e.g., via keepalive timeouts).
Servers have a lot of say in when to close connections. They may close them simply because they are old, or because they have been idle, or because the server is overloaded. But those are simply use cases; the server can shut down a connection at will (see the server-side sketch after these questions).
What if the deadline is exceeded?
Deadline only applies to RPCs and doesn't impact the channel or a connection.
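To make the server side of that concrete: in grpc-java, for example, the Netty server builder exposes knobs for closing idle or old connections. A rough sketch, where the port and durations are arbitrary placeholder values rather than recommendations:

```java
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import java.util.concurrent.TimeUnit;

// Example server-side connection policy: close connections that have been idle
// for 5 minutes, and recycle connections older than 30 minutes (with a grace
// period so in-flight RPCs can finish first).
public final class ConnectionPolicyServer {
    public static void main(String[] args) throws Exception {
        Server server = NettyServerBuilder.forPort(50051)
                // services omitted for brevity; add your BindableService implementations here
                .maxConnectionIdle(5, TimeUnit.MINUTES)      // close connections with no active RPCs
                .maxConnectionAge(30, TimeUnit.MINUTES)      // send GOAWAY to connections that are too old
                .maxConnectionAgeGrace(1, TimeUnit.MINUTES)  // let running RPCs complete before closing
                .build()
                .start();
        server.awaitTermination();
    }
}
```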
I was actually waiting for Eric to answer this as he is the expert in this!
I also have been playing with gRPC for a while now, so I would like to add a few things here for beginners. Anyone more experienced, please feel free to edit!
A channel is an abstraction over a long-lived connection! The client application will create a channel on start-up. The channel can be reused/shared among multiple threads; it is thread-safe. One channel is enough (for most use cases) for multiple threads and multiplexing concurrent requests. It is the channel's responsibility to close / reconnect / keep the connection alive, etc. We as the users do not have to worry about this in general. The client application can close the channel any time it wants. Channel creation seems to be an expensive process, so we would not open/close one for every RPC.
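For example, in grpc-java that pattern looks roughly like this (host and port are placeholders): create the channel once, share it everywhere, and shut it down when the application exits.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

// One long-lived channel for the whole application, shared by every thread.
// Stubs (created from your generated *Grpc classes) are cheap; the channel is not.
public final class GrpcChannelHolder {
    private static final ManagedChannel CHANNEL =
            ManagedChannelBuilder.forAddress("localhost", 50051)
                    .usePlaintext()          // plaintext only for local testing
                    .build();

    public static ManagedChannel get() {
        return CHANNEL;                      // safe to share; ManagedChannel is thread-safe
    }

    // Call once at application shutdown; in-flight RPCs are allowed to finish.
    public static void close() throws InterruptedException {
        CHANNEL.shutdown().awaitTermination(10, TimeUnit.SECONDS);
    }
}
```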
When you use a gRPC load balancer/name resolver for a domain name and the name resolver resolves the domain to multiple IP addresses, a channel creates multiple subchannels, where each subchannel is an abstraction over a connection to one server. So a channel can also represent multiple connections!
Adding some points to note from Eric's comment:
"adding the default load balancer still only creates (approximately) one connection if the name resolver returns multiple addresses, as the default is pick_first. But if you change the load balancer to round_robin or virtually any other policy, then yes, there will be multiple connections in a channel. Even if a name resolver returns one address, the load balancer is free to create multiple connections (e.g., for higher throughput), but that's not common today."
An underlying connection can be closed at any time for any reason. For example, the remote server might be shutting down gracefully for scheduled maintenance, or a connection might have been idle for too long; in those cases the server can send a GOAWAY signal to the client, and the client might disconnect and reconnect to some other server. Or the server might crash (due to an OOM error, say); in that case the channel will detect the connection failure and will retry, establishing a new connection to some other server.
A channel can keep sending PING frames to the server to keep the connection alive. All of this is configurable via the channel builder.
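As a concrete grpc-java illustration of the two points above (the pick_first vs round_robin choice and the keepalive PINGs), the relevant channel-builder options look roughly like this; the target name and timing values are placeholders:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

public final class KeepAliveChannelFactory {
    // "example.service.local" is a placeholder that the name resolver may map to several addresses.
    public static ManagedChannel create() {
        return ManagedChannelBuilder.forTarget("dns:///example.service.local:50051")
                .defaultLoadBalancingPolicy("round_robin") // one subchannel (connection) per resolved address
                .keepAliveTime(30, TimeUnit.SECONDS)       // send HTTP/2 PING frames when the connection is quiet
                .keepAliveTimeout(10, TimeUnit.SECONDS)    // consider the connection dead if the PING isn't acked
                .keepAliveWithoutCalls(true)               // keep pinging even when no RPCs are active
                .build();
    }
}
```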
With the information above, let's look at your questions:
What happens to the channel when an RPC throws an exception?
Nothing happens to the channel. An unhandled exception on the server might fail the RPC on the client side, but the channel is still usable for further RPC calls.
What happens to the gRPC connection when the channel is closed?
The channel is an abstraction over the connection, so the connection will be closed too. (Again, there is no "gRPC connection" as such, as Eric mentioned; it would be an HTTP/2 connection.)
When is the channel closed?
Any time you want. But normally when the application shuts down.
When is the gRPC connection closed?
It is not our problem; the channel takes care of this.
Heart beats?
The channel sends PING frames periodically to keep the connection alive.
What if the deadline is exceeded?
It is something like a timeout on the client side. When the deadline is exceeded, the client might cancel the request. Once again, nothing happens to the channel. (It might trigger an exception on the server side, which I noticed a few times: "Received DATA frame for an unknown stream", https://github.com/grpc/grpc-java/issues/3548. It seems to have been fixed now.)
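To see what a deadline looks like from the client side, here's a rough grpc-java sketch. GreeterGrpc, HelloRequest, and HelloReply are the generated classes from the grpc-java hello-world example and stand in for your own stubs; the host, port, and deadlines are arbitrary:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Status;
import io.grpc.StatusRuntimeException;
import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;
import java.util.concurrent.TimeUnit;

public final class DeadlineExample {
    public static void main(String[] args) {
        ManagedChannel channel =
                ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
        HelloRequest request = HelloRequest.newBuilder().setName("world").build();
        try {
            // The deadline applies to this RPC only, not to the channel.
            HelloReply reply = stub.withDeadlineAfter(500, TimeUnit.MILLISECONDS).sayHello(request);
            System.out.println(reply.getMessage());
        } catch (StatusRuntimeException e) {
            if (e.getStatus().getCode() == Status.Code.DEADLINE_EXCEEDED) {
                System.out.println("RPC timed out; the channel is still fine.");
            }
        }
        // The same channel remains usable for further RPCs after the failure.
        HelloReply retry = stub.withDeadlineAfter(5, TimeUnit.SECONDS).sayHello(request);
        System.out.println(retry.getMessage());
        channel.shutdown();
    }
}
```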

When will "OnReconnected" occur?

If "OnConnected" is raised when the 1st time we connect to our website, when will "OnReconnected" happen?
1) Suppose someone is connected and the network suddenly becomes unavailable, then recovers soon after - does OnReconnected happen?
2) Is there anything else special that will make OnReconnected happen?
Thanks!
The SignalR documentation on Understanding and Handling Connection Lifetime Events in SignalR should have all the information you need.
Generally speaking, OnReconnected will fire any time the SignalR client automatically reconnects to the SignalR server after it has lost its connection for any reason. These reasons can include network issues, the server restarting, and so on.
The SignalR client will stop attempting to automatically reconnect to the server if it is unable to do so within the DisconnectTimeout. If this happens and you want to reestablish a connection, you will need to restart the client manually by calling start() after it becomes disconnected. When you restart the client this way, OnConnected will be called instead of OnReconnected and the client will receive a new connection id.

TCP keep-alive to determine if client disconnected in netty

I'm trying to determine if a client has closed a socket connection from netty. Is there a way to do this?
In the usual case, where a client closes the socket via close() and the TCP closing handshake finishes successfully, a channelInactive() (or channelClosed() in Netty 3) event will be triggered.
However, in an unusual case such as the client machine going offline due to a power outage or an unplugged LAN cable, it can take a long time to discover that the connection is actually down. To detect this situation, you have to send some message to the client periodically and expect a response within a certain amount of time. It's like a ping - you should define periodic ping and pong messages in your protocol which do practically nothing but check the health of the connection.
Alternatively, you can enable SO_KEEPALIVE, but the keepalive interval of this option is usually OS-dependent and I would not recommend using it.
To help a user implement this sort of behavior relatively easily, Netty provides ReadTimeoutHandler. Configure your pipeline so that ReadTimeoutHandler raises an exception when there's no inbound traffic for a certain amount of time, and close the connection on the exception in your exceptionCaught() handler method. If you are the party who is supposed to send a periodic ping message, use a timer (or IdleStateHandler) to send it.
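For example (Netty 4), a pipeline that pings the peer after a period of write inactivity and closes the channel after a longer period of read inactivity could be wired up roughly like this; the "PING\n" message and the timeouts are placeholders for whatever your own protocol defines:

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.util.CharsetUtil;

// Sketch: send a ping when we have written nothing for 30s,
// close the connection when we have read nothing for 60s.
public class KeepAliveInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                new IdleStateHandler(60, 30, 0),   // readerIdle, writerIdle, allIdle (seconds)
                new ChannelDuplexHandler() {
                    @Override
                    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                        if (evt instanceof IdleStateEvent) {
                            IdleState state = ((IdleStateEvent) evt).state();
                            if (state == IdleState.WRITER_IDLE) {
                                // We have been quiet; let the peer know we are still alive.
                                ctx.writeAndFlush(Unpooled.copiedBuffer("PING\n", CharsetUtil.UTF_8));
                            } else if (state == IdleState.READER_IDLE) {
                                // The peer has been quiet for too long; assume it is gone.
                                ctx.close();
                            }
                        } else {
                            super.userEventTriggered(ctx, evt);
                        }
                    }
                });
        // Your protocol decoders/encoders and business handlers go after this.
    }
}
```

The same effect can be had with ReadTimeoutHandler plus exceptionCaught(), as described above; IdleStateHandler simply covers both the "send a ping" and "give up and close" sides in one place.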
If you are writing a server, and netty is your client, then your server can detect a disconnect by calling select() or equivalent to detect when the socket is readable and then call recv(). If recv() returns 0 then the socket was closed gracefully by the client. If recv() returns -1 then check errno or equivalent for the actual error (with few exceptions, most errors should be treated as an ungraceful disconnect). The thing about unexpected disconnects is that they can take a long time for the OS to detect, so you would have to either enable TCP keep-alives, or require the client to send data to the server on a regular basis. If nothing is received from the client for a period of time then just assume the client is gone and close your end of the connection. If the client wants to, it can then reconnect.
If you read from a connection that has been closed by the peer you will get an end-of-stream indication of some kind, depending on the API. If you write to such a connection you will get an IOException: 'connection reset'. TCP doesn't provide any other way of detecting a closed connection.
TCP keep-alive (a) is off by default and (b) only operates every two hours by default when enabled. This probably isn't what you want. If you use it and you read or write after it has detected that the connection is broken, you will get the reset error above.
It depends on the protocol you use on top of Netty. If you design it to support ping-like messages, you can simply send those messages. Beyond that, Netty is only a fairly thin wrapper around TCP.
Also see this SO post, which describes isOpen() and related methods. That, however, does not solve the keep-alive problem.

How to determine if the client received the message using SignalR

If I send a message using SignalR, is it possible that the client does not receive the message? How can you verify whether any errors appeared in the communication? I am thinking of sending a message back to the server after the server notification was sent, but is there any better way?
Yes, it's possible that the client doesn't receive the message. SignalR keeps messages in memory for 30 seconds by default (you can tweak that or use a persistent message bus), so if the client isn't connected for whatever reason and this timeout passes, the client will miss the message. Note that if it reconnects within this period it receives all the messages it hasn't got yet, including those that were sent while it was disconnected.
I don't know if SignalR provides a way of telling you when a broadcast failed, so it might be safer to just send an acknowledgement back to the server.
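If you go the acknowledgement route, the server-side bookkeeping can be as small as a map of pending messages plus a resend timer. A transport-agnostic sketch (written in Java for brevity; the send callback, string payloads, and the 30-second resend window are placeholders, not SignalR API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;

// Tracks messages until the client acknowledges them by id,
// and resends anything still outstanding after a timeout.
public class PendingMessageTracker {
    private final Map<String, String> pending = new ConcurrentHashMap<>(); // messageId -> payload
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final BiConsumer<String, String> send; // placeholder: (messageId, payload) -> transport send

    public PendingMessageTracker(BiConsumer<String, String> send) {
        this.send = send;
    }

    public void sendTracked(String messageId, String payload) {
        pending.put(messageId, payload);
        send.accept(messageId, payload);
        // If no ack arrives within 30 seconds, send it one more time.
        scheduler.schedule(() -> {
            String unacked = pending.get(messageId);
            if (unacked != null) {
                send.accept(messageId, unacked);
            }
        }, 30, TimeUnit.SECONDS);
    }

    // Called when the client echoes the message id back.
    public void acknowledge(String messageId) {
        pending.remove(messageId);
    }
}
```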
As long as the client is connected, it will get the messages. You can subscribe to connection state changes in client-side code. In server-side code you can implement the IConnected and IDisconnect interfaces to handle the Connect, Disconnect, and Reconnect events.
