Heartbeat Direction and Acknowledgement

We have a Java client-server application with a custom protocol over TCP/IP. We have found it necessary to use a heartbeat within the protocol due to dead socket connection issues.
From the beginning, the heartbeat has gone from client to server, with the server responding with an acknowledgement.
We have recently had timeout issues with the clients, and after analysing the code I have come up with a couple of questions I am unsure about.
1 - Which direction is best for a heartbeat? I think we chose 'client to server' as it takes load off the server.
I was thinking of changing it to 'server to client'; however, we control both the client and server code, so we don't need to worry so much about time-wasting clients.
2 - Is it necessary to acknowledge heartbeats to prove the connection is alive in both directions?
Many thanks

I'm thinking any traffic in either direction should be enough to keep it alive, but it doesn't hurt to respond to a "ping" with a "pong". Traditionally the client sends the heartbeat and the server is responsible for shutting down unresponsive clients, so what you have sounds right.
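For what it's worth, a minimal sketch of that client-to-server heartbeat with an acknowledgement might look like the following. It assumes a line-based protocol where the heartbeat has the stream to itself; the PING/PONG strings, the 5-second ack timeout, and the 30-second interval are all placeholders, and in a real protocol the "pong" would normally be handled by your regular read loop rather than a blocking readLine here.
```java
import java.io.*;
import java.net.Socket;
import java.util.concurrent.*;

// Sketch only: client sends a heartbeat on a schedule and treats a missing
// acknowledgement (or any IOException) as a dead connection.
class HeartbeatClient {
    private final Socket socket;
    private final BufferedReader in;
    private final PrintWriter out;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    HeartbeatClient(Socket socket) throws IOException {
        this.socket = socket;
        this.in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        this.out = new PrintWriter(socket.getOutputStream(), true);
    }

    void start() {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                socket.setSoTimeout(5000);        // give the server 5s to acknowledge
                out.println("PING");              // heartbeat goes client -> server
                String reply = in.readLine();     // server acknowledges with "PONG"
                if (!"PONG".equals(reply)) {
                    onDeadConnection();
                }
            } catch (IOException e) {
                onDeadConnection();               // read timeout or broken pipe
            }
        }, 0, 30, TimeUnit.SECONDS);
    }

    private void onDeadConnection() {
        try { socket.close(); } catch (IOException ignored) { }
        scheduler.shutdown();                     // stop heartbeats; trigger your reconnect logic here
    }
}
```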
Have you tried setting the timeout to zero? Could it be network devices that are interfering with your socket connection?
try {
    ServerSocket server = new ServerSocket(2048);
    server.setSoTimeout(0); // 0 means accept() never times out
    try {
        Socket s = server.accept();
        // handle the connection
        // ...
    }
    catch (InterruptedIOException e) {
        System.err.println("accept() timed out"); // not reached with a timeout of 0
    }
    finally {
        server.close();
    }
}
catch (IOException e) {
    System.err.println("Unexpected IOException: " + e);
}

Related

Does the HTTP protocol support detecting, from the server side, when the connection is lost?

Let's say that I have an ASP.NET application and I hold a connection for 10 seconds. In that time the client lost network access.
Can I detect that before returning the response?
You can't detect a lost connection "in HTTP", because it is an application-layer protocol and too abstract for that.
But you could detect that your client has closed the connection on a network level. I'm not familiar with ASP.net, but you could start from here: Instantly detect client disconnection from server socket.
You can check the Response.IsClientConnected property. For example:
void HeavyProcessing()
{
    while (longLoop)
    {
        if (!HttpContext.Current.Response.IsClientConnected) Response.End();
        // Do heavy processing
    }
}

SignalR long polling reconnection behavior

Using a SignalR persistent connection with a JS long polling client we see inconsistent reconnection behavior in different scenarios. When the client machine's network cable is unplugged the JS connection does not enter the reconnecting state and it never (at least not after 5 minutes) reaches the disconnected state. For other scenarios such as a restart of the IIS web application a long polling JS connection does enter the reconnecting state and successfully reconnects. I understand that the reasoning behind this is that keep-alive is not supported for the long polling transport.
I can see that a suggestion has been made on github to better support reconnections for the long polling transport (https://github.com/SignalR/SignalR/issues/1781), but it seems that there is no commitment to change it.
First, is there a proper workaround for detecting disconnections on the client in the case of long polling?
Second, does anyone know if there are plans to support reconnection in the case described?
Cheers
We've debated different alternatives to support a keep-alive-like feature for long polling; however, due to how long polling works under the covers it's not easy to implement without affecting the vast majority of users. As we continue to debate the "correct" solution I'll leave you with one workaround for detecting network failure in the long polling client (if it's absolutely needed).
Create a server method; let's call it Ping:
public class MyHub : Hub
{
    public void Ping()
    {
    }
}
Now on the client create an interval in which you will "ping" the server:
var proxy = $.connection.myHub,
    intervalHandle;
...
$.connection.hub.disconnected(function() {
    clearInterval(intervalHandle);
});
...
$.connection.hub.start().done(function() {
    // Only when long polling
    if ($.connection.hub.transport.name === "longPolling") {
        // Ping every 10s
        intervalHandle = setInterval(function() {
            // Ensure we're connected (don't want to be pinging in any other state).
            if ($.connection.hub.state === $.signalR.connectionState.connected) {
                proxy.server.ping().fail(function() {
                    // Failed to ping the server; we could either try one more time
                    // to ensure we can't reach the server, or we could fail right here.
                    TryAndRestartConnection(); // Your method
                });
            }
        }, 10000);
    }
});
Hope this helps!

How can I keep my JVM from exiting while a Netty client connection is open?

I have an API which uses netty to open client connection to a tcp server. The server may send data to the client at any time. I'm facing the following scenario:
Client connects to server
Sends data to server
Disconnects and the JVM exits (not sure which happens first)
This is what I expect:
Client connects to server
Sends data to server
Client simply keeps the connection open, waiting to receive data or for the user of the client API to send data.
This is an outline of my connection method (obviously there is a much larger API around it):
```
public FIXClient connect(String host, int port) throws Throwable {
    ...
    ChannelPipeline pipe = org.jboss.netty.channel.Channels.pipeline(...);

    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool());

    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    bootstrap.setPipeline(pipe);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));

    // forcing the connect call to block
    // don't want clients to deal with async connect calls
    future.awaitUninterruptibly();

    if (future.isSuccess()) {
        this.channel = future.getChannel();
        //channel.getCloseFuture(); // TODO notifies whenever channel closes
    }
    else {
        throw future.getCause(); // wrap this in a more specific exception
    }

    return this;
}
```
That has nothing to do with Netty... You need to make sure your "main" method does not exit if you call it from there. Otherwise it's the job of the container.
There are a couple of ways you can do this, but one thing I have observed is that with this code:
ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
... if you make a successful connection, your JVM will not shut down of its own accord for some time until you force it (like a kill) or you call releaseExternalResources() on your channel factory. This is because:
The threads created by Executors.newCachedThreadPool() are non-daemon threads.
At least 1 thread would be created once you submit your connection request.
The cached thread pool threads have a keep-alive time of 60 seconds, meaning they don't go away until they've been idle for 60 seconds, so that would be 60 seconds after your connect and send (assuming that they both completed).
So I'm not sure if you're diagnosing the issue correctly. Having said that, I recommend you handle the task this way:
Once you boot in your main method (in the main thread)
Now launch all your actual useful work in new threads.
Once the useful threads have been launched, in the main thread, call Thread.currentThread().join(). Since main is always non-daemon, you have made sure the JVM will not shut down until you're good and ready.
At some point, unless you want to kill -9 the JVM as a shutdown strategy, you will want a controlled shutdown, so you can add a shutdown hook to shut down Netty and then interrupt the main thread; see the sketch below.
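A rough sketch of that pattern, reusing the FIXClient type from the question; the no-arg constructor, port number, and shutdown() method are placeholders for whatever closes the channel and calls releaseExternalResources():
```java
public static void main(String[] args) throws Throwable {
    // connect() is the method from the question; FIXClient() and shutdown() are placeholders
    final FIXClient client = new FIXClient().connect("localhost", 9878);
    final Thread mainThread = Thread.currentThread();

    // Controlled shutdown: close Netty cleanly, then unblock main().
    Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
        public void run() {
            client.shutdown();      // e.g. channel.close() + factory.releaseExternalResources()
            mainThread.interrupt(); // wake main() out of join()
        }
    }));

    try {
        // main is non-daemon, so joining on ourselves keeps the JVM alive
        // until the shutdown hook interrupts us.
        mainThread.join();
    } catch (InterruptedException interruptedByShutdownHook) {
        // fall through and let the JVM finish shutting down
    }
}
```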
I hope that's helpful.

Can an HTTP server detect that a client has cancelled their request?

My web app must process and serve a lot of data to display certain pages. Sometimes, the user closes or refreshes a page while the server is still busy processing it. This means the server will continue to process data for several minutes only to send it to a client who is no longer listening.
Is it possible to detect that the connection has been broken, and react to it?
In this particular project, we're using Django and NginX, or Apache. I assumed this is possible because the Django development server appears to react to cancelled requests by printing Broken Pipe exceptions. I'd love to have it raise an exception that my application code could catch. It appears JSP can do this. So can node.js here.
Alternatively, I could register an unload event handler on the page in question, have it do a synchronous XHR requesting that the previous request from this user be cancelled, and do some kind of inter-process communication to make it so. Perhaps if the slower data processing were handed to another process that I could more easily identify and kill, without killing the responding process...
While @Oded is correct that HTTP is stateless between requests, app servers can indeed detect when the underlying TCP/IP connection has broken for the request being processed. Why is this? Because TCP is a stateful protocol for reliable connections.
A common technique for .Net web apps processing a resource intensive request is to check Response.IsClientConnected (docs) before starting the resource intensive work. There is no point in wasting CPU cycles to send an expensive response to a client that isn't there anymore.
private void Page_Load(object sender, EventArgs e)
{
    // Check whether the browser remains
    // connected to the server.
    if (Response.IsClientConnected)
    {
        // If still connected, do work
        DoWork();
    }
    else
    {
        // If the browser is not connected,
        // stop all response processing.
        Response.End();
    }
}
Please reply with your target app server stack so I can provide a more relevant example.
Regarding your 2nd alternative to use XHR to post client page unload events to the server, @Oded's comment about HTTP being stateless between requests is spot on. This is unlikely to work, especially in a farm with multiple servers.
HTTP is stateless, hence the only way to detect a disconnected client is via timeouts.
See the answers to this SO question (Java Servlet: How to detect browser closing?).
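In the Java servlet case that question covers, there is no disconnect callback either; the usual approach is to flush part of the response periodically and treat the resulting IOException as a disconnect (Tomcat, for example, surfaces it as a ClientAbortException). A rough sketch, where Chunk and expensiveChunks() are hypothetical stand-ins for the page's heavy processing:
```java
// Inside a servlet's doGet(): flushing periodically lets the container notice
// a broken connection, which shows up as an IOException on the output stream.
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    ServletOutputStream out = resp.getOutputStream();
    for (Chunk chunk : expensiveChunks()) {   // hypothetical: heavy work done in pieces
        out.write(chunk.toBytes());
        try {
            out.flush();                      // forces a write toward the socket
        } catch (IOException clientGone) {
            // the client closed the connection; stop the expensive processing
            return;
        }
    }
}
```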

Can a BlackBerry HTTP request error out immediately if there's no connection available?

I have an HTTP connection, opened by
HttpConnection c = (HttpConnection)Connector.open(url);
where url is one of:
http://foo.bar;deviceside=false
http://foo.bar;deviceside=false;ConnectionType=mds-public
http://foo.bar;deviceside=true;ConnectionUID=xxxxxxx
http://foo.bar;deviceside=true;interface=wifi
Is there any way to cause the request to error out immediately if the connection cannot be established because the device is not connected to a network? As it is, it takes about a minute to time out in many cases (specifically on the first call that gets information from the network: c.getResponseCode()).
Edit: I mean error out. In one case specifically, Wi-Fi, it will sit around for several minutes before timing out if the Wi-Fi is not on, and I want it to stop right away.
I use the RadioInfo class to check if there is a connection and if the radio is turned on before trying to make a connection. Then you can just display a message to the user or turn the radio on (if it's off) before trying to connect, which makes for a much better user experience.
Try using:
if (RadioInfo.getState() == RadioInfo.STATE_OFF)
OR
if (RadioInfo.getSignalLevel() == RadioInfo.LEVEL_NO_COVERAGE)
To check connection status before connecting.
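Put together, a pre-flight check along these lines can fail fast instead of waiting for the timeout. This is a sketch only; for the ;interface=wifi variant you would check WLANInfo rather than the cellular radio:
```java
import java.io.IOException;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;
import net.rim.device.api.system.RadioInfo;

// Sketch: refuse to open the connection when the radio is off or has no coverage.
// For URLs using ;interface=wifi, check WLANInfo.getWLANState() instead.
private HttpConnection openIfCoverage(String url) throws IOException {
    if (RadioInfo.getState() == RadioInfo.STATE_OFF
            || RadioInfo.getSignalLevel() == RadioInfo.LEVEL_NO_COVERAGE) {
        throw new IOException("Radio off or no coverage - not attempting connection");
    }
    return (HttpConnection) Connector.open(url);
}
```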
I encase my posts in a thread to time out faster. Make sure your "PostThread" catches all exceptions (and saves them).
public byte[] post(String url, byte[] requestString) {
    PostThread thread = new PostThread(url, requestString);
    synchronized (thread) {
        try {
            thread.start();
            thread.wait(TIMEOUT);
        } catch (Throwable e) {
        }
    }
    if (thread.isAlive()) {
        try {
            thread.interrupt();
        } catch (Throwable e) {
        }
        D.error("Timeout");
    }
    if (thread.error != null) D.error(thread.error);
    if (thread.output != null) return thread.output;
    throw D.error("No output");
}
There is also the ConnectionTimeout parameter, which I have not tested, e.g. socket://server:80/mywebservice;ConnectionTimeout=2000
There isn't any way that can be specified programmatically. It can be irritating, but a connection from a mobile device - especially a BlackBerry - generally goes through a few different networks and gateways before reaching the destination server: wireless -> Carrier APN -> Internet -> BES (maybe) -> foo.bar server, so a large timeout is built in to account for potential delays at any of those points.
You can control default device connection timeout from your BES/MDS server (or in the JDE, from the MDS\config\rimpublic.property file) - but that probably won't help you.
It would be better to have a timeout check from a different thread, because this can happen even when the connection is established, say when network latency is very high, and you don't want the user to wait that long.
So in that case, have a separate thread check whether the current time minus the time the connection was initiated exceeds your set timeout, and if it does, close the connection using connection.close()!
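A minimal sketch of that watchdog idea, where url and TIMEOUT (e.g. 10000 ms) are placeholders for your own values:
```java
// Sketch: a watchdog thread bounds how long getResponseCode() can block by
// closing the connection once the deadline passes.
private int getResponseCodeWithDeadline(String url) throws IOException {
    final HttpConnection connection = (HttpConnection) Connector.open(url);
    Thread watchdog = new Thread(new Runnable() {
        public void run() {
            try {
                Thread.sleep(TIMEOUT);            // wait out the deadline
            } catch (InterruptedException finishedInTime) {
                return;                           // request completed first
            }
            try {
                connection.close();               // unblocks the pending getResponseCode()
            } catch (IOException ignored) { }
        }
    });
    watchdog.start();
    try {
        return connection.getResponseCode();      // may block; bounded by the watchdog
    } finally {
        watchdog.interrupt();                     // cancel the watchdog once we have an answer
    }
}
```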