SignalR long polling transport - ASP.NET

I'm using SignalR 0.5.3 with hubs, and I'm explicitly setting the transport to long polling like this:
$.connection.hub.start({ transport: 'longPolling' }, function () {
console.log('connected');
});
with configuration like this (in the global.asax.cs Application_Start method):
GlobalHost.DependencyResolver.UseRedis(server, port, password, pubsubDB, "FooBar");
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(2);
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(15);
However, long polling doesn't seem to work in either the development (IIS Express) or production (IIS 7.5) environment. The connection appears to be established properly, but the long poll request always times out (after ~2 minutes) and a reconnect happens afterwards. Logs from IIS are here. Response from the first timed-out request:
{"MessageId":"3636","Messages":[],"Disconnect":false,"TimedOut":true,"TransportData":{"Groups":["NotificationHub.56DDB6692001Ex"],"LongPollDelay":0}}
Timed-out reconnect responses look like this:
{"MessageId":"3641","Messages":[],"Disconnect":false,"TimedOut":true,"TransportData":{"Groups":["NotificationHub.56DDB6692001Ex"],"LongPollDelay":0}}
I would appreciate any help regarding this issue. Thanks.
Edit
If a reconnect means the beginning of a new long poll cycle, why is it initiated after ~2 minutes when the KeepAlive setting in global.asax.cs is set to 15 seconds? The problem with this is that I have a reverse proxy in front of IIS which times out keep-alive requests after 25 seconds, so I get a 504 response when that reverse proxy timeout is reached.

Take a look at this post: How SignalR works internally. The way long polling works is that after a set time the connection either times out or receives a response, and then re-polls (reconnects).

Keep-alive functionality is disabled for long polling; it seems that ConnectionTimeout is used instead.
This setting represents the amount of time to leave a transport connection open and waiting for a response before closing it and opening a new connection. The default value is 110 seconds.
https://learn.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/handling-connection-lifetime-events#timeoutkeepalive
If the request times out and the server is not sending any data even though you expect it to, there may be an issue on the server side that you don't yet see.
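If ConnectionTimeout is indeed what governs the open long-poll request, one option for the reverse proxy problem described in the edit is to drop it below the proxy's 25-second limit. A minimal sketch, assuming your SignalR build exposes ConnectionTimeout on GlobalHost.Configuration the way later releases do (where it defaults to 110 seconds):

using System;
using SignalR; // root namespace in SignalR 0.5.x; later releases use Microsoft.AspNet.SignalR

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // The existing UseRedis / DisconnectTimeout configuration stays as it is; the only
        // change is to close and re-open the idle long-poll request before the reverse
        // proxy's 25-second limit, so the proxy never gets the chance to answer with a 504.
        GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(20);
    }
}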

Related

Can a gRPC client connect timeout be set independent of reconnect backoff settings?

We'd like to configure our gRPC client to reconnect very quickly after a connection is lost. (I believe the default behavior is to attempt to reconnect after 20 seconds, backing off to 120 seconds between attempts.) After a review of available settings, we tried setting grpc.initial_reconnect_backoff_ms and grpc.min_reconnect_backoff_ms to 200. While that results in quick reconnects when a connection is lost, we sometimes see calls (from tests) fail with GRPC::Internal: 13:Completed without a response. Looking at logging from a TCP reverse proxy sitting between the client and server, I see a connection lasting for just over 200 ms, then a second connection lasting longer. So it looks like the reconnect times are effectively serving as timeouts on connection attempts.
Is it possible to configure a gRPC client so that it will begin attempting a reconnect very quickly after a connection is lost, but allow creation of that connection to take longer than the reconnect time?
If it matters, this is a Ruby client.
The initial backoff is supposed to be 1 second.
You're experiencing a bug where the minimum connection timeout acts as both the timeout and the backoff (so the 1-second initial backoff is ignored). So both your initial problem and the failed workaround are caused by the same bug.
(The bug was noticed a month ago, but an issue wasn't filed due to a mixup with a second bug. Your question here let me notice the missing issue.)
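For reference, the knobs discussed above are ordinary channel arguments handed to the gRPC C core, and the Ruby client accepts the same string keys in its channel args. A rough sketch of setting them, shown here with the C-core based C# client and a hypothetical endpoint purely to illustrate the keys:

using Grpc.Core;

class ReconnectTuningExample
{
    static void Main()
    {
        // Reconnect backoff knobs, in milliseconds; the same keys apply to any
        // C-core based client, including the Ruby client's channel args hash.
        var options = new[]
        {
            new ChannelOption("grpc.initial_reconnect_backoff_ms", 200),
            new ChannelOption("grpc.min_reconnect_backoff_ms", 200),
            new ChannelOption("grpc.max_reconnect_backoff_ms", 5000)
        };

        // Hypothetical endpoint; use real credentials for your deployment.
        var channel = new Channel("my-grpc-server:50051", ChannelCredentials.Insecure, options);

        // ... build stubs from the channel and make calls as usual ...

        channel.ShutdownAsync().Wait();
    }
}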

SignalR not recognizing some transport disconnects

I have noticed that SignalR does not recognize disconnection events at the transport layer in some circumstances. If the transport is disconnected gracefully, either by dropping my VPN connection to the server or by issuing an ipconfig /release, the appropriate events are fired. If I am connected to the VPN and I either unplug the network cable or turn my wireless off, SignalR does not recognize we are disconnected until the VPN connection times out. That is mostly acceptable because it does eventually realize it. However, if I am not using a VPN and I simply unplug the network cable or turn off the wireless, SignalR never recognizes that a disconnect has occurred.
This is using SignalR 2.0.1 over WebSockets in a non-CORS environment. If I enable logging in the hub, I see no events logged in the console after waiting for over 5 minutes. One thing that bothers me that I do see in the log during startup is:
SignalR: Now monitoring keep alive with a warning timeout of 1600000 and a connection lost timeout of 2400000.
Is this my problem? I have tried manually setting my GlobalHost.Configuration.DisconnectTimeout to 30 seconds but it does not change any behavior in this scenario, or alter that log statement. What else might I be overlooking here?
Edit: I noticed in Fiddler that my negotiate response has a disconnect timeout of 3600 and a keepalive of 2400, and that TryWebSockets is false. This particular server is Windows Server 2008 R2, which I do not believe supports WebSockets. Does that mean long polling would be used? I don't see any long polling requests going out in Fiddler or the console.
The timeout settings were my problem. I'm not sure why I originally did not see a change in the logging output when I adjusted the setting, but I went back to a sample app and saw things change there and now things are behaving properly.
It should be noted that the default settings for SignalR produce the following statement in the logging output, and that these values are measured in milliseconds.
SignalR: Now monitoring keep alive with a warning timeout of 13333.333333333332 and a connection lost timeout of 20000
This is obvious when you read the information under "Transport Disconnect Scenarios" in the connection lifetime documentation, which says: "The default keepalive timeout warning period is 2/3 of the keepalive timeout. The keepalive timeout is 20 seconds, so the warning occurs at about 13 seconds."
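For anyone landing here, a minimal sketch of where those values get set in SignalR 2.x (an OWIN Startup class is assumed; the numbers below simply reproduce the defaults that yield the 13333/20000 trace line):

using System;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Set DisconnectTimeout first: assigning it resets KeepAlive to one third of the
        // new value, and KeepAlive may not exceed one third of DisconnectTimeout.
        // With these (default) values the client logs a connection-lost timeout of
        // 20000 ms and a warning timeout of roughly 13333 ms (2/3 of connection-lost).
        GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
        GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);

        app.MapSignalR();
    }
}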

Azure Web Role - Long Running Request (Load Balancer Timeout?)

Our front-end MVC3 web application is using AsyncController, because each of our instances is servicing many hundreds of long-running, IO bound processes.
Since Azure will terminate "inactive" HTTP sessions after some predetermined interval (which seems to vary depending on what website you read), how can we keep the connections alive?
Our clients MUST stay connected, and our processes will run from 30 seconds to 5 minutes or more. How can we keep the client connected/alive? I initially thought of having a timeout on the Async method, and just hitting the Response object with a few bytes of output, sort of like chunking the response, and then going back and waiting some more. However, I don't think this will work, since MVC3 is handling the hookup of an IIS thread back to the asynchronous response, which will have already rendered a view at that time.
How can we run a really long process on an AsyncController, but have the client not be disconnected by the Azure Load Balancer? Sending an immediate response to the caller, and asking that caller to poll or check another resource URL is not acceptable.
The Azure load balancer idle timeout is 4 minutes. Can you try configuring TCP keep-alive on the client side with an interval of less than 4 minutes? That should keep the connection alive.
On the other hand, it's pretty expensive to keep a connection open per client for a long time. This will limit the number of clients you can handle per server. Also, I think IIS may still decide to close a connection regardless of keep-alives if it thinks it needs the connection to serve other requests.
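If the client is a .NET caller rather than a browser, one way to experiment with the keep-alive suggestion above (a sketch, with a hypothetical endpoint URL) is ServicePointManager.SetTcpKeepAlive, keeping the probe times well under the 4-minute idle limit:

using System;
using System.Net;

class KeepAliveClient
{
    static void Main()
    {
        // Send a TCP keep-alive probe after 60 seconds of idleness, then every 30 seconds,
        // comfortably under the Azure load balancer's 4-minute idle timeout.
        ServicePointManager.SetTcpKeepAlive(true, 60 * 1000, 30 * 1000);

        using (var client = new WebClient())
        {
            // Hypothetical long-running endpoint; the request itself is unchanged.
            var result = client.DownloadString("https://example.com/long-running-job");
            Console.WriteLine(result.Length);
        }
    }
}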

Keep-alive for long-lived HTTP session (not persistent HTTP)

At work, we have a client-server system where clients submit requests to a web server through HTTP. The server-side processing can sometimes take more than 60 seconds, which is the proxy timeout value set by our company's IT staff and cannot be changed. Is there a way to keep the HTTP connection alive for longer than 60 seconds (preferably for an arbitrarily long period of time), either by heartbeat messages from the server or the client?
I know there are HTTP 1.1 persistent connections, but that is not what I want.
Does HTTP have a keep-alive capability, or would this have to be done at the TCP level through some sort of socket option?
This should get you started.
Assuming you control both sides of the system, you can fake it by sending data back and forth periodically to keep the session from idling out -- most browsers won't terminate a connection as long as data is moving.
As a general suggestion, though, you're much better off re-designing the system so that the client submits a job request and then periodically queries (via Ajax) to see if it's completed. The Ajax queries can delay a while and the server can respond either when it has an affirmative status, or when the timeout period is near to elapsing. If the status-update request times out for some reason (timing errors or whatnot), the client simply re-submits it with no harm done and no visible disruption from the user's perspective.
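A rough server-side sketch of that submit-then-poll shape (ASP.NET MVC, with hypothetical controller, action, and helper names; nothing here is prescribed by the answer above):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

public class JobController : Controller
{
    // Hypothetical in-memory job store; a real system would use something durable.
    private static readonly ConcurrentDictionary<Guid, Task<string>> Jobs =
        new ConcurrentDictionary<Guid, Task<string>>();

    [HttpPost]
    public ActionResult Start()
    {
        var id = Guid.NewGuid();
        Jobs[id] = Task.Run(() => DoLongRunningWork()); // kick the work off in the background
        return Json(new { jobId = id });
    }

    [HttpGet]
    public ActionResult Status(Guid id)
    {
        Task<string> job;
        if (!Jobs.TryGetValue(id, out job))
            return HttpNotFound();

        // The client re-polls this action until done == true.
        return job.IsCompleted
            ? Json(new { done = true, result = job.Result }, JsonRequestBehavior.AllowGet)
            : Json(new { done = false }, JsonRequestBehavior.AllowGet);
    }

    private static string DoLongRunningWork()
    {
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(90)); // stand-in for the real work
        return "finished";
    }
}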
Just have your server send a trickle of no-op data while it's doing the processing - if the result is in HTML, then something like:
<!-- keepalive -->
sent every 10 seconds should do.
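A sketch of what that trickle could look like in an ASP.NET handler (DoLongRunningWork and RenderResult are hypothetical stand-ins; 10 seconds is the interval suggested above):

using System;
using System.Threading.Tasks;
using System.Web;

public class LongJobHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/html";
        context.Response.BufferOutput = false; // push bytes out as soon as we write them

        Task<string> work = Task.Run(() => DoLongRunningWork());

        // Every 10 seconds without a result, emit a harmless HTML comment and flush,
        // so proxies and load balancers see the connection as active.
        while (!work.Wait(TimeSpan.FromSeconds(10)))
        {
            context.Response.Write("<!-- keepalive -->\n");
            context.Response.Flush();
        }

        context.Response.Write(RenderResult(work.Result));
    }

    private static string DoLongRunningWork()
    {
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(45)); // stand-in for real work
        return "done";
    }

    private static string RenderResult(string result)
    {
        return "<html><body>" + HttpUtility.HtmlEncode(result) + "</body></html>";
    }
}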

Long-running HTTP connection never gets a response back

I am making an HTTP request which ends up taking more than 8 minutes. For me, this long-running request works fine; I am able to get a response back to my browser without any issues. (I am located on the same network as the server.)
However, for some users, the browser never returns any response. (Note: when the same HTTP request completes in 1 minute, these users are able to see the response without any problem.)
These users happen to be on another network. And there probably is a firewall or two between their location and the server.
I can see in their Fiddler trace that the request is just sitting there waiting for a response.
Right now I am assuming that a firewall is killing the idle HTTP connection, but I am not sure.
If you have any idea why the response never gets back, or why the connection never breaks, it would be really helpful.
Also: is it possible to fix this issue by writing an applet that somehow manages to keep sending a dummy signal to the server, even after having sent (flushed) the request to the server?
The users might be behind a connection-tracking firewall/NAT gateway. Such gateways tend to drop the TCP connection when nothing has happened for a period of time. In a custom protocol you could send some kind of heartbeat messages to keep the TCP connection alive, but with HTTP you don't have proper control over that connection, nor does HTTP facilitate what's needed to keep a TCP connection alive.
The usual way to handle long-running jobs initiated by an HTTP request is to fire off the job in the background, send a proper response back to the client immediately, and have an applet/Ajax request poll the status of that job, returning the result when it's done.
If you need a quick fix, see if you can control any timeouts on the gateways between the server and the user.
Have you considered that the users might be using a browser which has an HTTP timeout that causes the browser to stop waiting for a response after a certain amount of time?
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
If you are using a Linux machine, you can inspect and tune the kernel's TCP keepalive settings (tcp_keepalive_time and tcp_keepalive_intvl are in seconds; tcp_keepalive_probes is a count):
# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
75
# cat /proc/sys/net/ipv4/tcp_keepalive_probes
9
# echo 1500 > /proc/sys/net/ipv4/tcp_keepalive_time
# echo 500 > /proc/sys/net/ipv4/tcp_keepalive_intvl
# echo 20 > /proc/sys/net/ipv4/tcp_keepalive_probes
