In my application, everything with SignalR worked fine until I configured ARR (Application Request Routing). After that, the ServerSentEvents transport failed to connect and reported a timeout error. I googled it and found a suggested fix: set the "response buffer threshold" to 0 in ARR.
With that change, ServerSentEvents connects, but only after timing out two or three times first; I have tried hard to figure out why it never connects on the very first attempt. My other issue is latency: when the server pushes a single message to the client, it takes about 3 to 5 seconds to arrive. But if I push several messages at the same time from the server, the client receives all of them immediately except the last one, which again takes 3 to 5 seconds. I don't know whether SignalR has some sort of queuing mechanism for ServerSentEvents or something like that.
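For reference, I made the buffering change in the ARR proxy settings; in the IIS Manager UI it is Application Request Routing Cache → Server Proxy Settings → "Response buffer threshold (KB)". The equivalent appcmd command below is only a sketch, and the section/attribute names are my assumption:

```
REM Disable response buffering in ARR so Server-Sent Events are flushed
REM to the client immediately instead of being held back.
REM (Sketch; section and attribute names assumed -- verify against your ARR version.)
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /responseBufferThreshold:"0" /commit:apphost
```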
All of these issues appeared only after configuring ARR.
Any help would be appreciated.
Our application is based on ExtJS 3.4, and we frequently get a "Communication Failure" error in the UI. The application is deployed on several domains, and on some of them the error occurs very frequently.
Without HTTP Keep-Alive we do not get that error.
But with keep-alive timeouts of 1 second and 5 seconds we get it quite frequently.
What we observed in Wireshark was that, because of a high RTT (round-trip time), requests were taking longer than expected.
The packet flow was inconsistent; the scenario was as follows.
With a keep-alive of 5 seconds:
When a request is served successfully, the server returns 200 OK (a success response) with a keep-alive timeout parameter of 5 seconds (the server tells the client that it will wait 5 seconds before closing this connection).
As soon as those 5 seconds elapse, the server sends a FIN packet (the packet that closes the connection, sent from the server to the client, which in our case is the browser).
Here is the catch: because of the high RTT, the ACK (acknowledgement packet) from the client takes a long time to reach the server.
The server has initiated the close, but due to the high RTT the client sends a new HTTP request (e.g. an ExampleABC.do request) before the server receives the ACK of its FIN.
Because of this, the server cannot handle that request, since it has already initiated the connection close.
Setting a 1-second keep-alive reduced the time the server waits before closing the connection: after 1 second the connection is closed and a fresh connection is set up for each new request, which avoids an unwanted request arriving on a connection that is being torn down after 5 seconds.
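To make the sequence concrete, here is the exchange as we read it from the capture (illustrative values; the request name is just an example):

```
HTTP/1.1 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100           <- server: "I will wait 5 s, then close"

[5 s of inactivity elapse]
Server -> Client: FIN                    server starts closing the idle connection
Client -> Server: HTTP GET ExampleABC.do new request, sent before the client's
Client -> Server: ACK (of FIN)           ACK reaches the server (high RTT)
Server: cannot serve the request         surfaces as "Communication Failure" in the UI
```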
Thanks in advance
This is my first post, so please correct me if needed.
Sorry for my bad English :)
Image of the communication failure:
We solved this issue by synchronizing the browser and server timeouts.
The fix was to make sure the TCP keep-alive timeout on the server and the browser's timeout coincide, so that the TCP connection is dropped completely at the same moment on both ends.
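As an illustration, assuming an Apache front end (the server type and the values here are assumptions, not taken from our setup), the server-side keep-alive window is controlled by:

```
# httpd.conf -- make the keep-alive window explicit so it can be
# matched against the browser/client behaviour (values illustrative)
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
```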
Abstract
Hi, I was wondering whether it is possible to lose a message with SignalR. Suppose a client disconnects but reconnects within a short time, for example 3 seconds. Will the client get all of the messages that were sent to it while it was disconnected?
For example, let's consider the LongPolling transport. As far as I'm aware, long polling is a plain HTTP request issued in advance by the client in order to wait for a server event.
As soon as a server event occurs, the data is published on that pending HTTP request, which closes it. After that, the client issues a new HTTP request, and the whole loop repeats.
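The loop described above can be sketched like this (a simplified illustration, not SignalR's actual client; the URL and message handling are placeholders):

```csharp
// Minimal long-polling loop: each request is held open by the server until
// an event is published, then a new request is issued immediately.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LongPollingSketch
{
    static async Task PollLoop(string url)
    {
        using (var client = new HttpClient { Timeout = TimeSpan.FromMinutes(2) })
        {
            while (true)
            {
                // This call blocks until the server publishes an event.
                string message = await client.GetStringAsync(url);
                Console.WriteLine(message);
                // The connection is now closed; loop round and reconnect.
                // Any event raised in this gap is delivered only if the
                // server buffers it for the next poll.
            }
        }
    }
}
```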
The problem
Suppose two events happen on the server, first A then B (nearly instantly). The client gets message A, which closes the HTTP connection. Now, to get message B, the client has to issue a second HTTP request.
Question
Suppose event B happens while the client is disconnected from the server and trying to reconnect.
Will the client get message B automatically, or do I have to invent some mechanism to ensure message delivery?
The question applies not only to long polling but to the general situation of client reconnection.
P.S.
I'm using SignalR Hubs on the server side.
EDIT:
I've found out that the order of messages is not guaranteed, but I was not able to make SignalR lose messages.
The answer to this question lies in the EnqueueOperation method here...
https://github.com/SignalR/SignalR/blob/master/src/Microsoft.AspNet.SignalR.Core/Transports/TransportDisconnectBase.cs
protected virtual internal Task EnqueueOperation(Func<object, Task> writeAsync, object state)
{
    if (!IsAlive)
    {
        return TaskAsyncHelper.Empty;
    }

    // Only enqueue new writes if the connection is alive
    Task writeTask = WriteQueue.Enqueue(writeAsync, state);
    _lastWriteTask = writeTask;

    return writeTask;
}
When the server sends a message to a client it calls this method. In your example above, the server would enqueue two messages to be sent; the client would reconnect after receiving the first, and then the second message would be sent.
However, if the server enqueues and sends the first message and the client then reconnects, there is a small window in which the second message could be enqueued while the connection is not alive. In that case the message is dropped on the server side, and after reconnecting the client would not get the second message.
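If that window matters for your application, one common mitigation (not built into SignalR; the types below are hypothetical, invented for illustration) is to stamp every message with a sequence number, so the client can report the last sequence it saw after a reconnect and the server can replay anything newer:

```csharp
// Sketch: the server keeps a bounded per-connection buffer of recently sent
// messages, each tagged with a sequence number. After a reconnect the client
// reports the last sequence it received and the server replays the rest.
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

public class SequencedSender
{
    private long _nextSeq;
    private readonly ConcurrentQueue<(long Seq, string Payload)> _buffer
        = new ConcurrentQueue<(long, string)>();

    // Tag a payload with the next sequence number and remember it.
    public (long Seq, string Payload) Stamp(string payload)
    {
        var msg = (Interlocked.Increment(ref _nextSeq), payload);
        _buffer.Enqueue(msg);
        // Trim so the buffer only has to cover the reconnect window.
        while (_buffer.Count > 1000 && _buffer.TryDequeue(out _)) { }
        return msg;
    }

    // Called (e.g. from a hub method) after the client reconnects.
    public IEnumerable<(long Seq, string Payload)> Replay(long lastSeenSeq)
        => _buffer.Where(m => m.Seq > lastSeenSeq);
}
```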
Hope this helps
We have a shell script set up on one Unix box (A) that remotely calls a web service deployed on another box (B). On A we just have the scripts, configuration, and the JAR file needed for the classpath.
After the batch job is kicked off, control passes from A to B for the transactions to happen on B. Usually processing finishes on B in under an hour, but in some cases (when we receive larger data for processing) the process runs for more than an hour. In those cases the firewall tears down the connection between the two hosts after one hour of inactivity. Control is therefore never returned from B to A, and we are never notified that the batch job has ended.
To tackle this, our network team has suggested implementing keep-alives at the application level.
My question is: where and how should I implement those? Would that be in the web service code, in parameters passed from the shell script, or somewhere else? I tried to google around but could not find much.
You basically send an application-level message and wait for a response to it. That is, your applications must support sending, receiving, and replying to those heartbeat messages. See the FIX Heartbeat message, for example:
The Heartbeat monitors the status of the communication link and identifies when the last of a string of messages was not received.
When either end of a FIX connection has not sent any data for [HeartBtInt] seconds, it will transmit a Heartbeat message. When either end of the connection has not received any data for (HeartBtInt + "some reasonable transmission time") seconds, it will transmit a Test Request message. If there is still no Heartbeat message received after (HeartBtInt + "some reasonable transmission time") seconds then the connection should be considered lost and corrective action be initiated....
Additionally, the message you send should include a local timestamp, and the reply should echo that same timestamp. This allows you to measure the application-to-application round-trip time.
Also, some NATs close your TCP connection after N minutes of inactivity (e.g. after 30 minutes). Sending heartbeat messages keeps the connection up for as long as required.
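A minimal version of such a heartbeat can be sketched as follows (FIX itself defines the real message formats; here sendAsync and OnPong stand in for whatever transport your application uses):

```csharp
// Heartbeat sketch: send a timestamped ping every interval; if no reply
// has arrived within interval + grace, treat the connection as lost.
using System;
using System.Globalization;
using System.Threading.Tasks;

public class Heartbeat
{
    private readonly TimeSpan _interval = TimeSpan.FromSeconds(30);
    private readonly TimeSpan _grace = TimeSpan.FromSeconds(10);
    private DateTime _lastPongUtc = DateTime.UtcNow;

    public async Task RunAsync(Func<string, Task> sendAsync)
    {
        while (true)
        {
            // Include a local timestamp so the peer can echo it back,
            // letting us measure application-to-application RTT.
            await sendAsync($"PING {DateTime.UtcNow:O}");
            await Task.Delay(_interval);

            if (DateTime.UtcNow - _lastPongUtc > _interval + _grace)
                throw new TimeoutException("Heartbeat lost; initiate corrective action.");
        }
    }

    // Call this whenever a PONG (echoing our timestamp) is received.
    public void OnPong(string echoedTimestamp)
    {
        _lastPongUtc = DateTime.UtcNow;
        var sent = DateTime.Parse(echoedTimestamp, null, DateTimeStyles.RoundtripKind);
        Console.WriteLine($"RTT ~ {(DateTime.UtcNow - sent).TotalMilliseconds} ms");
    }
}
```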
I have noticed that SignalR does not recognize disconnection events at the transport layer in some circumstances. If the transport is disconnected gracefully, either by dropping my VPN connection to the server or by issuing an ipconfig /release, the appropriate events are fired. If I am connected to the VPN and I either unplug the network cable or turn my wireless off, SignalR does not recognize the disconnect until the VPN connection times out. That is mostly acceptable, because it does eventually realize it. However, if I am not using a VPN and I simply unplug the network cable or turn off the wireless, SignalR never recognizes that a disconnect has occurred. This is with SignalR 2.0.1 over WebSockets in a non-CORS environment. If I enable logging in the hub, I see no events logged in the console after waiting for over 5 minutes. One thing that bothers me is a line I do see in the log during start-up:
SignalR: Now monitoring keep alive with a warning timeout of 1600000 and a connection lost timeout of 2400000.
Is this my problem? I have tried manually setting my GlobalHost.Configuration.DisconnectTimeout to 30 seconds but it does not change any behavior in this scenario, or alter that log statement. What else might I be overlooking here?
Edit: I noticed in Fiddler that my negotiate response has a disconnect timeout of 3600 and a keepalive of 2400, and that trywebsockets is false. This particular server runs Windows Server 2008 R2, which I do not believe supports WebSockets. Does that mean long polling is used? I don't see any long-polling requests going out in Fiddler or the console.
The timeout settings were my problem. I'm not sure why I originally did not see a change in the logging output when I adjusted the setting, but I went back to a sample app, saw the values change there, and now everything behaves properly.
It should be noted that the default settings for SignalR produce the following statement in the logging output, and that these values are measured in milliseconds.
SignalR: Now monitoring keep alive with a warning timeout of 13333.333333333332 and a connection lost timeout of 20000
This is obvious when you read the information under Transport Disconnect Scenarios on the following page, which says: "The default keepalive timeout warning period is 2/3 of the keepalive timeout. The keepalive timeout is 20 seconds, so the warning occurs at about 13 seconds."
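The numbers in both log lines match that rule (warning = 2/3 × connection-lost timeout):

```
defaults:     connection lost timeout = 20 000 ms     -> warning at 20 000 x 2/3 ~ 13 333 ms
my original:  connection lost timeout = 2 400 000 ms  -> warning at 2 400 000 x 2/3 = 1 600 000 ms
```

which is why my original log showed a warning timeout of 1600000 and a connection lost timeout of 2400000 (i.e. 40 minutes, far too long to notice a disconnect).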
I wrote a simple app with SignalR and then left the computer.
When I came back a couple of hours later, I noticed this:
It seems that my app was being signaled every 10 seconds.
Why is that? (I didn't write that behavior.) Where can I configure it?
It does this so that load balancers, proxies, and other network devices that love to kill idle connections don't end up killing your connection. A classic example is the Azure load balancer, which kills idle connections after a minute.
The other reason SignalR sends keep-alive pings is so that the client can detect network disconnects. The client expects a keep-alive at a specific interval, and when three of them are missed it drops the connection and tries to reconnect.
GlobalHost.Configuration.KeepAlive has a default value of 10 seconds.
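The interval can be changed (or the pings disabled entirely) in server startup code; a sketch for SignalR 2.x, with illustrative values:

```csharp
// In Application_Start / OWIN Startup, before calling MapSignalR.
// Set DisconnectTimeout first: changing it resets KeepAlive, and
// KeepAlive may be at most one third of DisconnectTimeout (90 / 3 = 30).
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(90);
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(30);

// Or disable keep-alive pings altogether (clients then cannot detect a
// dead connection until the transport itself times out):
// GlobalHost.Configuration.KeepAlive = null;
```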