Using a SignalR persistent connection with a JavaScript long polling client, we see inconsistent reconnection behavior in different scenarios. When the client machine's network cable is unplugged, the JS connection does not enter the reconnecting state, and it never (at least not after 5 minutes) reaches the disconnected state. For other scenarios, such as a restart of the IIS web application, a long polling JS connection does enter the reconnecting state and successfully reconnects. I understand that the reasoning behind this is that keep-alive is not supported for the long polling transport.
I can see that a suggestion has been made on github to better support reconnections for the long polling transport (https://github.com/SignalR/SignalR/issues/1781), but it seems that there is no commitment to change it.
First, is there a proper workaround for detecting disconnections on the client in the case of long polling?
Second, does anyone know if there are plans to support reconnection in the case described?
Cheers
We've debated different alternatives to support a keep-alive-like feature for long polling; however, due to how long polling works under the covers, it's not easy to implement without affecting the vast majority of users. As we continue to debate the "correct" solution, I'll leave you with one workaround for detecting network failure in the long polling client (if it's absolutely needed).
Create a server method; let's call it Ping:
public class MyHub : Hub
{
    // Intentionally empty: a successful invocation proves the server is reachable.
    public void Ping()
    {
    }
}
Now on the client create an interval in which you will "ping" the server:
var proxy = $.connection.myHub,
    intervalHandle;
...
$.connection.hub.disconnected(function() {
    clearInterval(intervalHandle);
});
...
$.connection.hub.start().done(function() {
    // Only when long polling
    if ($.connection.hub.transport.name === "longPolling") {
        // Ping every 10s
        intervalHandle = setInterval(function() {
            // Ensure we're connected (don't want to be pinging in any other state).
            if ($.connection.hub.state === $.signalR.connectionState.connected) {
                proxy.server.ping().fail(function() {
                    // Failed to ping the server; we could either try one more time
                    // to ensure we can't reach the server, or fail right here.
                    TryAndRestartConnection(); // Your method
                });
            }
        }, 10000);
    }
});
Hope this helps!
Related
Let's say that I have an ASP.NET application and I hold a connection open for 10 seconds. In that time the client loses network access.
Can I detect that before returning the response?
You can't detect a lost connection "in HTTP", because it is an application-layer protocol and too abstract for that.
But you could detect that your client has closed the connection at the network level. I'm not familiar with ASP.NET, but you could start from here: Instantly detect client disconnection from server socket.
You can check the IsClientConnected property. For example:
void HeavyProcessing()
{
    while (longLoop)
    {
        if (!HttpContext.Current.Response.IsClientConnected) Response.End();
        // Do heavy processing
    }
}
I have this code to test asynchronous programming in SignalR. This code sends the text back to the client after 10 seconds.
public class TestHub : Hub
{
    public async Task BroadcastMessage(string text)
    {
        await DelayResponse(text);
    }

    async Task DelayResponse(string text)
    {
        await Task.Delay(10000);
        Clients.All.displayText(text);
    }
}
This code works fine, but there is an unexpected behavior: when 5 messages are sent in less than 10 seconds, the client can't send more messages until the previous "DelayResponse" calls end. It happens per connection, and if the connection is closed and reopened before the 10 seconds are up, the client can send 5 messages again. I tested it with Chrome, Firefox and IE.
Did I make a mistake, or is it a SignalR limitation?
You are most likely hitting a browser limit. When using the longPolling and serverSentEvents transports, each send is a separate HTTP request. Since you are delaying the response, these requests are long running, and browsers limit how many concurrent connections can be open. Once you reach the limit, a new connection will not be opened until one of the previous ones completes.
More details on concurrent requests limit:
Max parallel http connections in a browser?
That's not really how SignalR is meant to be used, with the client waiting on a "long running" task. For that, SignalR supports a server push mechanism.
So if you have something which needs more time, you can trigger it from the client.
When the calculation is finished, you can send a message from the server to the client.
I have an API which uses Netty to open a client connection to a TCP server. The server may send data to the client at any time. I'm facing the following scenario:
Client connects to server
Sends data to server
Disconnects and the JVM exits (not sure which happens first)
This is what I expect:
Client connects to server
Sends data to server
Client simply keeps the connection open, waiting to receive data or for the user of the client API to send data.
This is an outline of my connection method (obviously there is a much larger API around it):
```
public FIXClient connect(String host, int port) throws Throwable {
    ...
    ChannelPipeline pipe = org.jboss.netty.channel.Channels.pipeline(...);

    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool());

    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    bootstrap.setPipeline(pipe);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));

    // forcing the connect call to block
    // don't want clients to deal with async connect calls
    future.awaitUninterruptibly();

    if (future.isSuccess()) {
        this.channel = future.getChannel();
        //channel.getCloseFuture(); // TODO notifies whenever channel closes
    } else {
        throw future.getCause(); // wrap this in a more specific exception
    }

    return this;
}
```
That has nothing to do with Netty... You need to make sure your "main" method will not exit if you call it from there. Otherwise it's the job of the container.
There are a couple of ways you can do this, but one thing I have observed is that with this code:
ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
... if you make a successful connection, your JVM will not shut down of its own accord for some time until you force it (like a kill) or you call releaseExternalResources() on your channel factory. This is because:
The threads created by Executors.newCachedThreadPool() are non-daemon threads.
At least 1 thread would be created once you submit your connection request.
The cached thread pool threads have a keep alive time of 60 seconds, meaning they don't go away until they've been idle for 60 seconds, so that would be 60 seconds after your connect and send (assuming that they both completed).
So I'm not sure if you're diagnosing the issue correctly. Having said that, I recommend you handle the task this way:
Boot in your main method (in the main thread).
Now launch all your actual useful work in new threads.
Once the useful threads have been launched, in the main thread, call Thread.currentThread().join(). Since main is always non-daemon, you have made sure the JVM will not shut down until you're good and ready.
At some point, unless you want to kill -9 the JVM as a shutdown strategy, you will want a controlled shutdown, so you can add a shutdown hook to shut down Netty and then interrupt the main thread, as sketched below.
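To make that concrete, here is a minimal sketch of the pattern. The bootClient() helper is hypothetical; it stands in for the connect() code shown above and is assumed to return the NioClientSocketChannelFactory it created:
```
import org.jboss.netty.channel.ChannelFactory;

public class ClientMain {

    public static void main(String[] args) {
        // bootClient() is a hypothetical helper: it runs the connect() code
        // shown above and returns the NioClientSocketChannelFactory it used.
        final ChannelFactory factory = bootClient();
        final Thread mainThread = Thread.currentThread();

        // Controlled shutdown: release Netty's resources, then wake up main.
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                factory.releaseExternalResources();
                mainThread.interrupt();
            }
        }));

        try {
            // main is non-daemon, so joining on ourselves keeps the JVM
            // alive until the shutdown hook interrupts us.
            mainThread.join();
        } catch (InterruptedException e) {
            // interrupted by the shutdown hook; fall through and exit
        }
    }

    private static ChannelFactory bootClient() {
        // ... perform the connect shown above and return the factory ...
        throw new UnsupportedOperationException("hypothetical helper");
    }
}
```
When the JVM receives a termination signal, the hook releases Netty's thread pools and interrupts main so the join returns cleanly.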
I hope that's helpful.
We have a Java client server application with a custom protocol using TCP/IP. We have found it necessary to use a heartbeat within the protocol due to dead socket connection issues.
We have had the heartbeat since the beginning going from client to server with the server responding with an acknowledgment.
We have recently had timeout issues with the clients, and after analysing the code I have come up with a couple of questions I am unsure about.
1 - What direction is best for a heartbeat? I think we chose 'client to server' as it takes the load off the server.
I was thinking of changing it to 'server to client'; however, we have control of both the client and server code, so we don't need to worry so much about time-wasting clients.
2 - Is it necessary to acknowledge heartbeats to prove the connection is alive in both directions?
Many thanks
I'm thinking any traffic in either direction should be enough to keep it alive, but it doesn't hurt to respond to a "ping" with a "pong". Traditionally the client sends the heartbeat and the server is responsible for shutting down unresponsive clients, so what you have sounds right.
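For illustration, here is a rough sketch of the client side of such a heartbeat, under made-up assumptions: a line-based protocol where the client sends "PING" every 30 seconds and expects "PONG" back within 10 seconds, and where (for simplicity) the heartbeat is the only traffic read on this thread. The message names, host and port are placeholders, not part of any real protocol:
```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class HeartbeatClient {

    public static void main(String[] args) throws IOException, InterruptedException {
        Socket socket = new Socket("example.com", 9000); // placeholder endpoint
        socket.setSoTimeout(10000); // max time to wait for the PONG
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        while (true) {
            out.println("PING");
            try {
                String reply = in.readLine();
                if (!"PONG".equals(reply)) {
                    break; // unexpected reply or stream closed: treat as dead
                }
            } catch (IOException e) {
                break; // no PONG in time, or socket error: connection is dead
            }
            Thread.sleep(30000); // heartbeat interval
        }

        socket.close();
        // reconnect or raise an alert here
    }
}
```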
Have you tried setting the timeout to zero? Could it be network devices that are interfering with your socket connection timeout?
try {
    ServerSocket server = new ServerSocket(2048);
    server.setSoTimeout(0); // never time out
    try {
        Socket s = server.accept();
        // handle the connection
        // ...
    }
    catch (InterruptedIOException e) {
        System.err.println("Accept timed out");
    }
    finally {
        server.close();
    }
}
catch (IOException e) {
    System.err.println("Unexpected IOException: " + e);
}
I'd like to know whether it's possible to easily detect (on the server side) when Flex clients disconnect from a BlazeDS destination, please. My scenario is simply that I'd like to use this to figure out how long each of my clients is connected for each session. I need to be able to differentiate between clients as well (i.e. not just counting the number of currently connected clients, which I can see in ds-console).
Whilst I could program an "I'm now logging out" process into my clients, I don't know whether this will fire if the client simply navigates away to another web page rather than going through said logout process.
Can anyone suggest whether there's an easy way to do this type of monitoring on the server side, please?
Many thanks,
Alex
Implement your own adapter extending ServiceAdapter.
Then override the function:
@Override
public boolean handlesSubscriptions() {
    return true;
}
So you can handle subscriptions and unsubscriptions.
Then you can manage those in the manage function:
@Override
public Object manage(CommandMessage commandMessage) {
    switch (commandMessage.getOperation()) {
        case CommandMessage.SUBSCRIBE_OPERATION:
            // a client subscribed to the destination
            break;
        case CommandMessage.UNSUBSCRIBE_OPERATION:
            // a client unsubscribed (or its subscription timed out)
            break;
    }
    return null;
}
You can also catch different commands.
Hope this helps.
The only way to do it right is to implement a heartbeat mechanism in one way or another. You can use the keep-alive from HTTP coupled with session expiry as suggested before, but my opinion is to use the messaging mechanism from BlazeDS (send a message every X seconds). You can control the time interval and other aspects (maybe you want to detect if the client has not done anything for several hours and invalidate the session even if the client is still connected).
If you want to be notified instantly (chat application) when a client disconnects, a solution is to have a socket (RTMP) or some emulation (HTTP streaming) which will detect instantly if the client is disconnected; however, this disconnection can be temporary (maybe the network was down for one second and is fine afterwards, and you should also detect that).
I would assume BlazeDS would provide a callback or event for when a client disconnects, but I haven't worked with Blaze so that would just be a guess. First step would be to check the documentation to see if it does though, as that would be your best bet.
What I've done in cases where there isn't a disconnect event (or it's not reliable) is to add a keepalive message. For instance, the client would be configured to send a keepalive message every 30 seconds, and if the server goes more than, say, 2 minutes without seeing a keepalive then it assumes the client has disconnected. This would let you differentiate between different clients, and you can play around with the rate and check times to get something you're happy with.
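As a rough sketch of the server-side bookkeeping for that approach, using plain java.util.concurrent (the clientId keys and the onClientTimedOut() hook are placeholders for however you identify and clean up sessions):
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepaliveTracker {

    private static final long TIMEOUT_MS = 2 * 60 * 1000; // 2 minutes without a keepalive
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<String, Long>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Call this whenever a keepalive (or any message) arrives from a client.
    public void touch(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    // Scan every 30 seconds for clients that have gone quiet.
    public void start() {
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                long now = System.currentTimeMillis();
                for (Map.Entry<String, Long> entry : lastSeen.entrySet()) {
                    if (now - entry.getValue() > TIMEOUT_MS) {
                        lastSeen.remove(entry.getKey());
                        onClientTimedOut(entry.getKey());
                    }
                }
            }
        }, 30, 30, TimeUnit.SECONDS);
    }

    void onClientTimedOut(String clientId) {
        // record the disconnect time / clean up the session here
    }
}
```
Recording a timestamp when each client subscribes and another in onClientTimedOut() gives you the per-client connection duration the question asks about.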