I've read some articles about HTTP long-polling, but I don't understand these two things:
Why is it recommended to keep the connection open for less than a minute? Would a longer-lived connection cause problems such as timeouts?
Why should I reopen the connection after I've received data from the server?
I have not heard that a long poll should be kept open for less than a minute. However, my first thought is that you might do that to detect whether the connection has been dropped, or to account for mobile devices switching between Wi-Fi and mobile data.
Your second question is much easier to answer. If your application relies on long-polling to receive push notifications from the server, it needs to keep a long-polling connection open at all times. Once data is sent from the server over a long-polled connection, the request is completed and the connection is closed, which means you need to open a new one to receive the next notification.
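To make that concrete, here is a minimal sketch of such a client loop in C#; the URL and timeout values are assumptions for illustration, not part of any particular framework. The point is that each completed response is immediately followed by a fresh request, so the server almost always has a pending connection to push into:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LongPollClient
    {
        static async Task Main()
        {
            // Client timeout sits slightly above the server's assumed hold time.
            using var http = new HttpClient { Timeout = TimeSpan.FromSeconds(70) };

            while (true)
            {
                try
                {
                    // The server holds this request open until it has data,
                    // or until its own timeout fires and it answers empty.
                    string data = await http.GetStringAsync("http://example.com/poll");
                    Console.WriteLine($"received: {data}");
                }
                catch (TaskCanceledException)
                {
                    // Client-side timeout: nothing arrived, just poll again.
                }
                catch (HttpRequestException)
                {
                    // Server unreachable: back off briefly to avoid a hot loop.
                    await Task.Delay(TimeSpan.FromSeconds(5));
                }
                // Loop around and reopen the connection immediately.
            }
        }
    }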
Related
We have Safari mobile clients that are affected by one of their 5 connections being blocked by SignalR. We have used the solution proposed here: https://github.com/SignalR/SignalR/issues/1406#issuecomment-14284093
where we have changed these settings to the following for SignalR 2.x:
    GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromMilliseconds(1000);
    GlobalHost.Configuration.LongPollDelay = TimeSpan.FromMilliseconds(5000);
We are sending notifications from the server to the client with no message queue or acknowledgement framework. We don't need to guarantee message delivery, but we do want a high probability of success. We think this should be possible given our low message rate and a buffer size of 1000. However, we have some questions:
1) Are messages held in a queue while the LongPollDelay occurs? Should they be sent during the next long poll using the settings above? Our tests with a single message being sent during a 2-minute LongPollDelay suggest that they are not retrieved during the 1-second long poll request that follows. Is there any reason for this, e.g. buffer flushing after 1 minute?
2) Does ConnectionTimeout affect all transports?
3) If ConnectionTimeout applies to all transports, is there a way of setting it for Safari mobile users only, i.e. having two connections available and using agent detection to point to a specific one?
4) Is there a way of setting the LongPollDelay so that it, too, applies only to Safari mobile users?
All advice welcome and appreciated, Matt
[FOLLOW-UP QUESTIONS]
Thanks, that helps a lot. We have retried with a 30-second LongPollDelay and it works as expected. I have a couple of follow-up questions that you/someone might care to comment on:
1) During testing we also see the client sending a ping request to the server roughly every 5 minutes. Why is the ping period set to 5 minutes when the disconnect period is so much shorter, and what is the purpose of the client pinging the server if it determines disconnection via an alternative mechanism?
2) Regarding different configurations for different clients: could we not set up another SignalR endpoint and point only Safari mobile to it? Something like the response to this post:
Can I reduce the Circular Buffer to "1"? Is that a good idea?
You are correct that SignalR will queue/buffer messages. Even if there were no LongPollDelay configured, SignalR would still need to do this, because there is always a chance that messages are sent while clients are repolling/reconnecting.
SignalR assumes that the client has disconnected if the client hasn't been connected to the server within the last DisconnectTimeout. Once the DisconnectTimeout triggers, SignalR will call OnDisconnected and clear any message buffers belonging to the supposedly disconnected client so it doesn't leak memory. The DisconnectTimeout defaults to 30 seconds, which is far less than the 2-minute LongPollDelay you configured, so that explains this behavior.
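For reference, here is a minimal SignalR 2.x startup sketch showing how these settings relate; the concrete values are assumptions for illustration, and the key point is that LongPollDelay must stay well below DisconnectTimeout:

    using System;
    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // The server treats a client as gone once it has not been seen
            // for DisconnectTimeout; its buffers are then cleared (default 30 s).
            GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);

            // Keep the delay between long polls comfortably below
            // DisconnectTimeout, or idle long-polling clients will be
            // treated as disconnected and lose their buffered messages.
            GlobalHost.Configuration.LongPollDelay = TimeSpan.FromSeconds(5);

            app.MapSignalR();
        }
    }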
The ConnectionTimeout only affects long polling unless you've disabled keep-alives; if keep-alives are disabled, it applies to all transports.
There is no way to selectively configure the ConnectionTimeout for specific types of clients. But as I stated, it only affects long polling by default.
There is no way to selectively configure the LongPollDelay for specific types of clients.
I have a general question regarding TCP/IP communication.
At the moment I am trying to set up a small communication link between an ATmega and a Raspberry Pi. I will transmit some data, for example every 5 minutes (e.g. 100 bytes), via the TCP/IP protocol.
Does it make sense to keep the connection open or shall I create a new connection for each dataset?
Thanks for your help...
webbolle
I would lean towards keeping the TCP connection open rather than opening a new one every time.
Here are a few reasons. First, by reusing the same connection, you avoid sending the TCP handshake messages (SYN-based) and teardown messages (FIN-based). In your case, if you are going to transmit 100 bytes every 5 minutes, that overhead can exceed the payload itself: each bare segment carries roughly 40 bytes of IP and TCP headers, and setup plus teardown typically takes about seven segments, i.e. around 280 bytes per cycle. Second, if you already have the connection open, you save time, since there is no need to reconnect. Third, TCP may go through slow-start every time you open a connection. That is not a problem with 100 bytes, but if you ever need to send more, every new connection starts its send window at 1 MSS, whereas a reused connection will (probably) continue with its current window.
Also:
An open connection doesn't consume any resources (bandwidth etc.) except for the ports it holds on both devices. Basically, every TCP connection that has been opened and not closed is still open, barring unintended disconnections and the like.
For detecting those, it also doesn't make a difference whether you keep the connection open or reopen it:
if the connection dropped out in the meantime, you'll receive more or less the same error.
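As a sketch of that approach, here is the sender side in C# (the host name, port, and payload are placeholders, and the same principle applies whichever end initiates): keep one TcpClient open, write the periodic payload, and reconnect only when an operation fails:

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Threading;

    class PeriodicSender
    {
        static void Main()
        {
            TcpClient client = null;
            var payload = new byte[100]; // e.g. 100 bytes of sensor data

            while (true)
            {
                try
                {
                    // Reconnect only if we have no usable connection.
                    if (client == null || !client.Connected)
                    {
                        client?.Close();
                        client = new TcpClient("raspberrypi.local", 5000); // placeholder host/port
                    }

                    client.GetStream().Write(payload, 0, payload.Length);
                }
                catch (Exception ex) when (ex is SocketException || ex is IOException)
                {
                    // A dropped connection surfaces here on connect or write;
                    // discard it and retry on the next cycle.
                    client?.Close();
                    client = null;
                }

                Thread.Sleep(TimeSpan.FromMinutes(5));
            }
        }
    }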
From what I have read, a SignalR client should not miss any messages from the server while it is connected. This does not seem to be the case when using long polling.
I have a straightforward hub-based application using SignalR 1.1.2. When using SSE, if the network cable is unplugged and plugged back in again within the timeout period, both the client and server are notified that a reconnect has occurred and, as far as I can tell, no messages are missed. When using long polling, this seems to happen:
When the connection is created ($.connection.hub.start()) the OnConnected method is called in the hub and the client goes into connected state.
If I then unplug the network cable and pop it back in quickly, there is no call to OnDisconnected or OnConnected. No messages are missed. Any messages waiting on the server are subsequently sent to the client. OK so far.
If I unplug the network cable and let the long poll expire, I get a call to OnDisconnected. There is no state change on the client.
If I plug the network cable back in the client starts receiving messages again. There has been no notification on the client that it has been disconnected, but the client has missed some messages. There is no call to OnReconnected or OnConnected on the server.
Is this a bug? The behaviour seems very different between SSE and long polling.
Is there a recommended strategy to ensure that the client does not miss messages in this scenario? I could keep track of connection ids on the server and send periodic pings from the client; if I get a ping after an OnDisconnected, I could send a message telling the client to resync, but this doesn't seem like the right thing to do.
Any suggestions?
WebSockets, Server-Sent Events, and Forever Frame all utilize a client-side keep-alive which is used to ensure client connectivity. However, Long Polling does not utilize the client-side keep-alive feature, due to technical limitations, and has no guarantee of connectivity for events such as pulling the network cable out.
When I say no guarantee, I'm simply stating that the Long Polling transport can no longer be ensured by SignalR; instead it relies on the browser to trigger the correct events on Long Polling's ajax connection, to which SignalR can then respond.
Keep in mind though, if the client does happen to regain connectivity with the server after pulling out the network cable it will receive any messages that it missed during its down time. So messages are not missed, they're just delayed.
Lastly, in the case that the server does not see the client for an extended period of time, the OnDisconnected event WILL be triggered. For this to happen in a situation such as pulling the network cable out, the server will first time out the current connection's request and then time out the connection itself. This means that you can still rely on the OnDisconnected event; it may just be delayed based on network conditions.
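To illustrate the implication for hub code, here is a minimal sketch (written against SignalR 2.x, where OnDisconnected takes a stopCalled flag; the hub name is hypothetical). The takeaway is that cleanup logic must tolerate this callback arriving late under long polling:

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class NotificationsHub : Hub // hypothetical hub name
    {
        public override Task OnDisconnected(bool stopCalled)
        {
            // Under long polling this can fire well after the cable was
            // pulled: the server first times out the pending request,
            // then the connection itself. stopCalled == false indicates
            // a timeout rather than an explicit client stop.
            return base.OnDisconnected(stopCalled);
        }
    }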
Soooo what you're seeing is 100% by design =)
Hope this helps!
When Gmail loses the connection, it displays messages such as:
Not connected. Connecting in 3:36… [Try now]
Would faster reconnect intervals really be that big of a deal?
I am asking because I am developing a Socket.IO based mobile web app, and I want to avoid having a message as on Gmail. Instead I imagine a scheme such as:
reconnect at fast random intervals between one second and a minute, plus
reconnect on certain user interaction, plus
reconnect on change of browser state.
One reason why your application loses connection to the server could be that the server or the connection to the server is overloaded. Spamming it with reconnection attempts could make the situation worse.
In the end, it depends on your usability requirements. When the user spends a long time in an email program, they are usually not interacting with it constantly but reading a single email. Also, a mail client can live with being disconnected for several minutes, because it isn't unusual for emails to be read hours after they were sent. So Gmail can live with longer delays before attempting to reconnect. When you have an application where the user is constantly interacting with the server, you might prefer shorter delays for reconnection attempts.
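One common middle ground is exponential backoff with random jitter: retry quickly at first, then back off towards a cap, with the randomness preventing many clients from reconnecting in lockstep after an outage. A minimal sketch, where the delays and the tryConnect delegate are assumptions rather than anything Socket.IO-specific:

    using System;
    using System.Threading.Tasks;

    static class ReconnectBackoff
    {
        public static async Task RunAsync(Func<Task<bool>> tryConnect)
        {
            var rng = new Random();
            var ceiling = TimeSpan.FromSeconds(1);    // first retry window
            var maxCeiling = TimeSpan.FromMinutes(1); // cap, as in the scheme above

            while (!await tryConnect())
            {
                // Full jitter: sleep a random time up to the current ceiling,
                // then double the ceiling for the next attempt.
                var wait = TimeSpan.FromMilliseconds(rng.NextDouble() * ceiling.TotalMilliseconds);
                await Task.Delay(wait);
                ceiling = TimeSpan.FromTicks(Math.Min(ceiling.Ticks * 2, maxCeiling.Ticks));
            }
        }
    }

Reconnecting immediately on user interaction or on a browser state change, as proposed above, combines well with this, since those events are strong hints that a single fresh attempt is worth making.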
I have a desktop client application that talks to a server application through a REST API using simple HTTP POSTs. I currently have the client polling every X minutes, but I would like the data to be refreshed more frequently. Is it possible to have the server notify the client of new data, or is that outside the scope of what an HTTP server is meant to do? Any thoughts on the best way to approach this would be much appreciated. Thanks!
You may want to check the accepted answer to the following Stack Overflow post, which describes with a very basic example how to implement Long Polling using php on the server-side:
Simple “Long Polling” example code
When using Long Polling, your client application starts a request to the HTTP server with an infinite (or very long) timeout. As soon as new data is available, the server finds an active connection already waiting, so it can push the data immediately. With traditional polling, you would have to wait until the application initiates its next poll, plus the network latency to reach the server, before new data is sent.
Then, when the data is sent, the connection is closed, but your application should open a new one immediately in order to keep a (nearly) constantly open connection to the server. In practice there will be a very small gap with no active connection, but this is negligible in many applications.
If you hold the HTTP connection open on the server side, then you can send data whenever there's an update, followed by flushing the connection to actually transmit it. This may cause issues with the TCP/IP stack if tens of thousands of simultaneous connections are required, though.
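As an illustration of the server side, here is a minimal sketch using .NET's HttpListener; the endpoint, the hold time, and the single shared update signal are assumptions, and a production server would track per-client state and handle requests asynchronously rather than on one thread:

    using System;
    using System.Net;
    using System.Text;
    using System.Threading;

    class LongPollServer
    {
        static readonly AutoResetEvent UpdateReady = new AutoResetEvent(false);
        static volatile string LatestData = "";

        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/poll/"); // hypothetical endpoint
            listener.Start();

            while (true)
            {
                // Block until a client's long poll arrives...
                HttpListenerContext ctx = listener.GetContext();

                // ...then hold the request until new data is published, or
                // give up after ~55 s so intermediaries don't kill the socket.
                UpdateReady.WaitOne(TimeSpan.FromSeconds(55));

                byte[] body = Encoding.UTF8.GetBytes(LatestData);
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close(); // flush and complete; the client re-polls
            }
        }

        // Called by whatever produces updates.
        public static void Publish(string data)
        {
            LatestData = data;
            UpdateReady.Set();
        }
    }

Each held request ties up a thread in this naive form, which is exactly why the approach strains at tens of thousands of connections; asynchronous handling (e.g. GetContextAsync) mitigates that.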