I have noticed that SignalR does not recognize disconnection events at the transport layer in some circumstances. If the transport is disconnected gracefully, either by dropping my VPN connection to the server or by issuing an ipconfig /release, the appropriate events are fired. If I am connected to the VPN and I either unplug the network cable or turn my wireless off, SignalR does not recognize that we are disconnected until the VPN connection times out. That is mostly acceptable because it does eventually realize it. However, if I am not using a VPN and I simply unplug the network cable or turn off the wireless, SignalR never recognizes that a disconnect has occurred.
This is using SignalR 2.0.1 over WebSockets in a non-CORS environment. If I enable logging in the hub, I see no events logged in the console after waiting for over 5 minutes. One thing that bothers me in the log during startup is:
SignalR: Now monitoring keep alive with a warning timeout of 1600000 and a connection lost timeout of 2400000.
Is this my problem? I have tried manually setting my GlobalHost.Configuration.DisconnectTimeout to 30 seconds but it does not change any behavior in this scenario, or alter that log statement. What else might I be overlooking here?
Edit: I noticed in Fiddler that my negotiate response has a disconnect timeout of 3600 and a keepalive of 2400, and that TryWebSockets is false. This particular server is Windows Server 2008 R2, which I do not believe supports WebSockets. Does that mean long polling would be used? I don't see any long-polling requests going out in Fiddler or the console.
The timeout settings were my problem. I'm not sure why I originally did not see a change in the logging output when I adjusted the setting, but I went back to a sample app and saw things change there and now things are behaving properly.
It should be noted that the default settings for SignalR produce the following statement in the logging output, and that these values are measured in milliseconds.
SignalR: Now monitoring keep alive with a warning timeout of 13333.333333333332 and a connection lost timeout of 20000
This is obvious when you read the information under Transport Disconnect Scenarios on the following page, which says: "The default keepalive timeout warning period is 2/3 of the keepalive timeout. The keepalive timeout is 20 seconds, so the warning occurs at about 13 seconds."
We'd like to configure our gRPC client to reconnect very quickly after a connection is lost. (I believe the default behavior is to attempt to reconnect after 20 seconds, backing off to 120 seconds between attempts.) After a review of available settings, we tried setting grpc.initial_reconnect_backoff_ms and grpc.min_reconnect_backoff_ms to 200. While that results in quick reconnects when a connection is lost, we sometimes see calls (from tests) fail with GRPC::Internal: 13:Completed without a response. Looking at logging from a TCP reverse proxy sitting between client and server, I see a connection lasting for just over 200 ms, then a second connection lasting longer. So it looks like the reconnect times are effectively serving as timeouts on connection attempts.
Is it possible to configure a gRPC client so that it will begin attempting a reconnect very quickly after a connection is lost, but allow creation of that connection to take longer than the reconnect time?
If it matters, this is a Ruby client.
The initial backoff is supposed to be 1 second.
You're experiencing a bug where the minimum connection timeout acts as both the timeout and the backoff (so the 1 s initial backoff is ignored). So both your initial problem and the failed workaround are caused by the same bug.
(The bug was noticed a month ago, but an issue wasn't filed due to a mixup with a second bug. Your question here let me notice the missing issue.)
I am working with HTTP connections, using a MultiThreadedHttpConnectionManager and HttpClient.
For my purposes I am closing all idle connections after 1 ms with the following method: closeIdleConnections(1).
I am wondering what is considered an "idle connection" in HTTP. It seems that a connection that is waiting for a response is not an idle connection.
HTTP (1.1) specifies that connections should remain open until explicitly closed, by either party. Beyond that the specification provides only one example for a policy, suggesting using a timeout value beyond which an inactive (idle) connection should be closed. A connection kept open until the next HTTP request reduces latency and TCP connection establishment overhead. However, an idle open TCP connection consumes a socket and buffer space memory.
Excerpt from RFC 7230:
6.5. Failures and Timeouts
Servers will usually have some time-out value beyond which they will no longer maintain an inactive connection. Proxy servers might make this a higher value since it is likely that the client will be making more connections through the same server. The use of persistent connections places no requirements on the length (or existence) of this time-out for either the client or the server.
When a client or server wishes to time-out it SHOULD issue a graceful close on the transport connection. Clients and servers SHOULD both constantly watch for the other side of the transport close, and respond to it as appropriate. If a client or server does not detect the other side's close promptly it could cause unnecessary resource drain on the network.
A client, server, or proxy MAY close the transport connection at any time. For example, a client might have started to send a new request at the same time that the server has decided to close the "idle" connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress.
Studying the source code of the HttpClient MultiThreadedHttpConnectionManager implementation, a pooled connection is simply considered idle when its age in the pool (the time since it was last used and returned to the pool) exceeds the idleTime. The idleTime is passed to the method closeIdleConnections(idleTime) as an argument.
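As a rough illustration, assuming Commons HttpClient 3.x (the URL and the 30-second idle limit below are arbitrary choices for the example):

    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
    import org.apache.commons.httpclient.methods.GetMethod;

    public class IdleConnectionExample {
        public static void main(String[] args) throws Exception {
            MultiThreadedHttpConnectionManager connectionManager =
                    new MultiThreadedHttpConnectionManager();
            HttpClient client = new HttpClient(connectionManager);

            GetMethod get = new GetMethod("http://example.com/"); // placeholder URL
            try {
                client.executeMethod(get);
                get.getResponseBodyAsString();
            } finally {
                // Returns the connection to the pool; its idle clock starts now,
                // not while the request/response exchange was still in flight.
                get.releaseConnection();
            }

            // Close every pooled connection that has been unused for more than 30 s.
            // A connection currently executing a request is not "idle" and stays open.
            connectionManager.closeIdleConnections(30 * 1000L);
        }
    }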
I'm trying to determine if a client has closed a socket connection from netty. Is there a way to do this?
In the usual case, where a client closes the socket via close() and the TCP closing handshake finishes successfully, a channelInactive() (or channelClosed() in Netty 3) event will be triggered.
However, in unusual cases, such as a client machine going offline due to a power outage or an unplugged LAN cable, it can take a long time until you discover that the connection was actually down. To detect this situation, you have to send some message to the client periodically and expect to receive its response within a certain amount of time. It's like a ping - you should define periodic ping and pong messages in your protocol which practically do nothing but check the health of the connection.
Alternatively, you can enable SO_KEEPALIVE, but the keepalive interval of this option is usually OS-dependent and I would not recommend using it.
To help a user implement this sort of behavior relatively easily, Netty provides ReadTimeoutHandler. Configure your pipeline so that ReadTimeoutHandler raises an exception when there's no inbound traffic for a certain amount of time, and close the connection on the exception in your exceptionCaught() handler method. If you are the party who is supposed to send a periodic ping message, use a timer (or IdleStateHandler) to send it.
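As a minimal sketch (assuming Netty 4.x; the 30-second read timeout, the 10-second write-idle ping interval, and the PingMessage type are illustrative assumptions, not part of any real protocol):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.timeout.IdleState;
    import io.netty.handler.timeout.IdleStateEvent;
    import io.netty.handler.timeout.IdleStateHandler;
    import io.netty.handler.timeout.ReadTimeoutException;
    import io.netty.handler.timeout.ReadTimeoutHandler;

    public class KeepAliveInitializer extends ChannelInitializer<SocketChannel> {

        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(
                    // Raises a ReadTimeoutException if nothing is read for 30 seconds.
                    new ReadTimeoutHandler(30),
                    // Fires an IdleStateEvent when nothing has been written for 10 seconds,
                    // which we use as the trigger for sending a ping.
                    new IdleStateHandler(0, 10, 0),
                    new ChannelInboundHandlerAdapter() {
                        @Override
                        public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                            if (evt instanceof IdleStateEvent
                                    && ((IdleStateEvent) evt).state() == IdleState.WRITER_IDLE) {
                                // PingMessage stands in for whatever ping your protocol defines.
                                ctx.writeAndFlush(new PingMessage());
                            } else {
                                super.userEventTriggered(ctx, evt);
                            }
                        }

                        @Override
                        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                            if (cause instanceof ReadTimeoutException) {
                                // No inbound traffic for 30 seconds: assume the peer is gone.
                                ctx.close();
                            } else {
                                ctx.fireExceptionCaught(cause);
                            }
                        }
                    });
        }

        // Placeholder for a protocol-specific ping message (your encoder must know how to write it).
        static final class PingMessage { }
    }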
If you are writing a server, and netty is your client, then your server can detect a disconnect by calling select() or equivalent to detect when the socket is readable and then call recv(). If recv() returns 0 then the socket was closed gracefully by the client. If recv() returns -1 then check errno or equivalent for the actual error (with few exceptions, most errors should be treated as an ungraceful disconnect). The thing about unexpected disconnects is that they can take a long time for the OS to detect, so you would have to either enable TCP keep-alives, or require the client to send data to the server on a regular basis. If nothing is received from the client for a period of time then just assume the client is gone and close your end of the connection. If the client wants to, it can then reconnect.
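For example, here is a minimal sketch of that idea in Java (the port, the 30-second timeout, and the logging are assumptions for illustration): a blocking read() returning -1 is the graceful-close case, while an ungraceful disconnect only surfaces as a read timeout or a keep-alive failure.

    import java.io.InputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class DisconnectAwareServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000)) {        // example port
                try (Socket client = server.accept()) {
                    client.setKeepAlive(true);      // OS-level TCP keep-alive (slow and OS-dependent)
                    client.setSoTimeout(30 * 1000); // give up if nothing arrives for 30 s

                    InputStream in = client.getInputStream();
                    byte[] buffer = new byte[4096];
                    while (true) {
                        try {
                            int n = in.read(buffer);
                            if (n == -1) {
                                // The peer closed the connection gracefully (FIN received).
                                System.out.println("client disconnected");
                                break;
                            }
                            // ... handle n bytes of application data ...
                        } catch (SocketTimeoutException e) {
                            // Nothing received within the timeout: treat the client as gone.
                            System.out.println("client presumed dead, closing");
                            break;
                        }
                    }
                }
            }
        }
    }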
If you read from a connection that has been closed by the peer you will get an end-of-stream indication of some kind, depending on the API. If you write to such a connection you will get an IOException: 'connection reset'. TCP doesn't provide any other way of detecting a closed connection.
TCP keep-alive (a) is off by default and (b) only operates every two hours by default when enabled. This probably isn't what you want. If you use it and you read or write after it has detected that the connection is broken, you will get the reset error above.
It depends on the protocol you use on top of Netty. If you design it to support ping-like messages, you can simply send those messages. Beyond that, Netty is only a pretty thin wrapper around TCP.
Also see this SO post which describes isOpen() and related. This however does not solve the keep-alive problem.
From what I have read, a SignalR client should not miss any messages from the server while it's connected. This does not seem to be the case when using long polling.
I have a straightforward hub based application using SignalR 1.1.2. When using SSE, if the network cable is unplugged and plugged back in again within the timeout period, both the client and server are notified that a reconnect has occurred and, as far as I can tell, no messages are missed. When using long polling, this seems to happen:
When the connection is created ($.connection.hub.start()) the OnConnected method is called in the hub and the client goes into connected state.
If I then unplug the network cable and pop it back in quickly, there is no call to OnDisconnected or OnConnected. No messages are missed. Any messages waiting on the server are subsequently sent to the client. OK so far.
If I unplug the network cable and let the long poll expire, I get a call to OnDisconnected. There is no state change on the client.
If I plug the network cable back in the client starts receiving messages again. There has been no notification on the client that it has been disconnected, but the client has missed some messages. There is no call to OnReconnected or OnConnected on the server.
Is this a bug? The behaviour seems very different between SSE and long polling.
Is there a recommended strategy to ensure that the client does not miss messages in this scenario? I could keep track of connection ids on the server and send periodic pings from the client - if I get a ping after an OnDisconnected I could send a message to tell the client to resync, but this doesn't seem like the right thing to do.
Any suggestions?
WebSockets, Server Sent Events, and Forever Frame all utilize a client-side keep-alive which is used to ensure client connectivity. However, Long Polling does not utilize the client-side keep-alive feature due to technical limitations and has no guarantee of connectivity for events such as pulling the network cable out.
When I say no guarantee, I'm simply stating that the Long Polling transport can no longer be ensured by SignalR, but instead relies on the browser to trigger the correct events on Long Polling's AJAX connection (to which SignalR can then respond).
Keep in mind though, if the client does happen to regain connectivity with the server after pulling out the network cable it will receive any messages that it missed during its down time. So messages are not missed, they're just delayed.
Lastly, in the case that the server does not see the client for an extended period of time, the OnDisconnected event WILL be triggered. For this to happen in a situation such as pulling the network cable out, the server will first time out the current connection's request and then time out the connection itself. This means that you can still rely on the OnDisconnected event; it may just be delayed based on network conditions.
Soooo what you're seeing is 100% by design =)
Hope this helps!
In my mochiweb application, I am using a long-held HTTP request. I wanted to detect when the connection with the user died, and I figured out how to do that by doing:
Socket = Req:get(socket),
inet:setopts(Socket, [{active, once}]),
receive
    {tcp_closed, Socket} ->
        % handle clean up
        ok;
    Data ->
        % do something with Data
        ok
end.
This works when the user closes their tab/browser or refreshes the page. However, when the internet connection dies suddenly (say the wifi signal is lost all of a sudden), or when the browser crashes abnormally, I am not able to detect a TCP close.
Am I missing something, or is there any other way to achieve this?
There is a TCP keepalive protocol and it can be enabled with inet:setopts/2 under the option {keepalive, Boolean}.
I would suggest that you don't use it. The keep-alive timeout and max retries tend to be system-wide, and it is optional after all. Using timeouts at the protocol level is better.
The HTTP protocol has the status code 408 Request Timeout, which you can send to the client if it seems dead.
Check out the after clause in receive blocks, which you can use to time out while waiting for data, or use the timer module, or use erlang:start_timer/3. They all have different performance characteristics and resource costs.
There isn't a keep-alive protocol enabled by default over TCP (though one can be enabled if supported): if a connection fault occurs while no data is being exchanged, it translates to a "silent failure". You would need to account for this type of failure yourself, e.g. by implementing some form of connection probing.
How does this affect HTTP? HTTP is a stateless protocol - this means that every request is independent of every other. The "keep alive" functionality of HTTP doesn't change that, i.e. a "silent failure" can still occur.
Only when data is exchanged can this condition be detected (or when TCP Keep Alive is enabled).
I would suggest sending application-level keep-alive messages over HTTP chunked encoding. Make your client/server smart enough to understand the keep-alive messages: ignore them as long as they arrive on time, and otherwise close and re-establish the connection.
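As a rough sketch of that idea (assuming a raw-socket server; the port, the "ping" payload, and the 5-second interval are all chosen arbitrarily, and request parsing and real application data are omitted), the server streams a chunked response and emits a small keep-alive chunk on a timer:

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class ChunkedKeepAliveServer {

        // HTTP/1.1 chunk framing: <hex size>CRLF<data>CRLF
        private static void writeChunk(OutputStream out, byte[] payload) throws Exception {
            out.write(Integer.toHexString(payload.length).getBytes(StandardCharsets.US_ASCII));
            out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
            out.write(payload);
            out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
            out.flush();
        }

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(8080);   // example port
                 Socket socket = server.accept()) {              // request parsing omitted
                OutputStream out = socket.getOutputStream();
                out.write(("HTTP/1.1 200 OK\r\n"
                        + "Content-Type: text/plain\r\n"
                        + "Transfer-Encoding: chunked\r\n\r\n").getBytes(StandardCharsets.US_ASCII));

                for (int i = 0; i < 5; i++) {
                    // Real application data would be written here as it becomes available;
                    // in between, send a keep-alive chunk the client knows to ignore.
                    writeChunk(out, "ping".getBytes(StandardCharsets.US_ASCII));
                    Thread.sleep(5 * 1000);   // keep-alive interval (assumption)
                }

                // A zero-length chunk terminates the chunked body.
                out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                out.flush();
            }
        }
    }

If the client does not receive either real data or a "ping" chunk within its own deadline, it can assume a silent failure and reconnect.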