I'm trying to maintain a persistent connection between a client and a remote server using Qt. My server side is fine. I'm writing the client side in Qt, using QNetworkAccessManager to request the server with the GET method (via QNetworkRequest). I am able to send and receive requests.
But after some time (approx. 2 minutes) the client informs the server that the connection has been closed, by automatically posting a request. I think QNetworkAccessManager is setting a timeout for this connection. I want to maintain a persistent connection between the two ends.
Is my approach correct, if not, can someone guide me in correct path?
This question is interesting, so let's do some research. I set up an nginx server with a big keep-alive timeout and wrote the simplest Qt application:
#include <QApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
int main(int argc, char *argv[]) {
    QApplication a(argc, argv);
    QNetworkAccessManager manager;
    QNetworkRequest r(QUrl("http://myserver/"));
    manager.get(r);
    return a.exec();
}
Also I used the following command (in Linux console) to monitor connections and check if the problem reproduces at all:
watch -n 1 netstat -n -A inet
I took a quick look at the Qt sources and found that it uses QTcpSocket and closes it in QHttpNetworkConnectionChannel::close. So I opened the debugger console (Window → Views → Debugger log in Qt Creator) and added a breakpoint while the process was paused:
bp QAbstractSocket::close
Note: this is for cdb (MS debugger), other debuggers require other commands. Another note: I use Qt with debug info, and this approach may not work without it.
After two minutes of waiting I got the backtrace of the close() call!
QAbstractSocket::close qabstractsocket.cpp 2587 0x13fe12600
QHttpNetworkConnectionPrivate::~QHttpNetworkConnectionPrivate qhttpnetworkconnection.cpp 110 0x13fe368c4
QHttpNetworkConnectionPrivate::`scalar deleting destructor' untitled 0x13fe3db27
QScopedPointerDeleter<QObjectData>::cleanup qscopedpointer.h 62 0x140356759
QScopedPointer<QObjectData,QScopedPointerDeleter<QObjectData>>::~QScopedPointer<QObjectData,QScopedPointerDeleter<QObjectData>> qscopedpointer.h 99 0x140355700
QObject::~QObject qobject.cpp 863 0x14034b04f
QHttpNetworkConnection::~QHttpNetworkConnection qhttpnetworkconnection.cpp 1148 0x13fe35fa2
QNetworkAccessCachedHttpConnection::~QNetworkAccessCachedHttpConnection untitled 0x13fe1e644
QNetworkAccessCachedHttpConnection::`scalar deleting destructor' untitled 0x13fe1e6e7
QNetworkAccessCachedHttpConnection::dispose qhttpthreaddelegate.cpp 170 0x13fe1e89e
QNetworkAccessCache::timerEvent qnetworkaccesscache.cpp 233 0x13fd99d07
(next lines are not interesting)
The class responsible for this action is QNetworkAccessCache. It sets up timers and makes sure that its objects are deleted when QNetworkAccessCache::Node::timestamp is in the past. And these objects are HTTP connections, FTP connections, and credentials.
Next, what is timestamp? When the object is released, its timestamp is calculated in the following way:
node->timestamp = QDateTime::currentDateTime().addSecs(ExpiryTime);
And ExpiryTime = 120 is hardcoded.
All the classes involved are private, and I found no way to prevent this from happening. So it's far simpler to send keep-alive requests every minute (at least now you know that 1 minute is safe enough), as the alternative is to rewrite the Qt code and compile a custom version.
I'd say that, by definition, a connection with a 2 minute timeout qualifies as persistent. I mean, if it weren't persistent, you'd have to reconnect on every request. Two minutes is quite generous compared to some other software out there. But it is set to eventually time out after a period of inactivity, and that's a good thing which should not come as a surprise. Some software allows the timeout period to be changed, but from Pavel's investigation it would appear that in the case of Qt the timeout is hardcoded.
Luckily the solution is simple: just rig a timer to send a heartbeat (just a dummy request, not to be confused with "heartbeat network") every minute or so to keep the connection alive. Deactivate the timer before you use your connection, and restart it once you are done with the connection, as in the sketch below.
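A minimal sketch of that timer, assuming Qt 5's connect syntax and reusing the placeholder URL from the experiment above (the HEAD verb is my choice of cheap dummy request):

#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QTimer>
#include <QUrl>

// Heartbeat sketch: fire a cheap request every minute so the cached
// connection never reaches QNetworkAccessCache's hardcoded 120 s expiry.
void startHeartbeat(QNetworkAccessManager *manager)
{
    QTimer *heartbeat = new QTimer(manager);
    QObject::connect(heartbeat, &QTimer::timeout, manager, [manager]() {
        // A HEAD request keeps the connection warm without a response body.
        manager->head(QNetworkRequest(QUrl("http://myserver/")));
    });
    heartbeat->start(60 * 1000); // 1 minute, safely below the 2 minute limit
}

Stopping the timer while a real request is in flight and restarting it afterwards, as described above, avoids pointless heartbeats.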
My understanding of the (JavaScript) hub client is that if a connection is lost, it enters a 'Reconnecting...' phase which attempts to reconnect. If it can't do so, it will enter a 'Disconnected' state which is where it'll stay until asked to start again.
How long is the 'Reconnecting...' phase meant to last before it gives up? I've read 40 seconds before, but my client seems to take much less time - about 10, maybe less. [EDIT: Never mind this part, I had configured a 10 second disconnect timeout on the server as a test... and forgot. I understand this is set by the server during the negotiate. Makes sense!] ... I'd prefer to have the client continually retry until it is told to abort - can this be done, and would it cause issues?
Another question; during the Reconnecting... phase, if I attempt to call a hub method (again, in JS) it never seems to complete. I'm using the returned Deferred to check for 'done' and 'fail' events, but neither seems to get called. Is this by design?
Thanks.
You can definitely have it continually reconnect.
Handle the disconnected event on the client and call connection.start:
$.connection.hub.disconnected(function() {
    setTimeout(function() {
        $.connection.hub.start();
    }, 5000); // Re-start connection after 5 seconds
});
The only issue this would cause is that you could potentially be triggering infinite requests from client machines to a server that isn't there. This becomes even more troublesome when you introduce the mobile market into the situation (it drains the battery like crazy).
When you attempt to call a hub method while reconnecting, SignalR will try to send your command. Since there are two channels, one for receiving data and one for sending (for all transports except WebSockets), in some cases it can still be possible to send requests while you're offline. Therefore SignalR does not know that a request has failed until the browser tells it that it could not successfully make the request.
Hope this helps!
I might have a clue... Touching the Web.config triggers an appPool recycle, meaning that a new worker process will be created for new requests while the existing process continues for a while, until the remaining requests end or the timeout is reached. Requests that do not end within the timeout period are terminated.
The SignalR client reconnects to the new process while the long-running task is still running in the old process, so when the long-running task does
GlobalHost.ConnectionManager.GetHubContext<ForceHub>();
you actually get a reference to the "old" hub while the client is connected to the "new" hub.
That's why the test performed by Wasp worked: he was making a new request to publish on the SignalR hub, and that request was processed in the newly created worker process.
You could try to configure a SignalR backplane (https://www.asp.net/signalr/overview/performance/scaleout-in-signalr); it's really easy to configure one using SQL Server (https://www.asp.net/signalr/overview/performance/scaleout-with-sql-server). The backplane should be capable of connecting the two worker processes, and hopefully you will get the notification on the client.
If this is the problem, notifications generated by new requests will work even without the backplane. Note that the real purpose of the backplane is to scale out SignalR, that is, to connect a farm of web servers.
Also keep in mind that running a long-running task inside IIS is a hard thing to get right, as, among other things, IIS performs regular appPool recycles and has timeout limits on executing requests. I recommend that you read the following post: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
“If you think you can just write a background task yourself, it's likely you'll get it wrong. I'm not impugning your skills, I'm just saying it's subtle. Plus, why should you have to?”
Hope this helps
I am using two machines, A and B, with the same VxWorks image as well as the same hardware; the only difference is the application. Suppose machine A is the server and machine B is the client. While communicating over Ethernet, the client machine is not able to send data: it gets stuck in send() and the task state goes to PEND.
wState = send(vstCCEUSerSocket.wCCEUAcceptFD, (char *)vstCCEUAppTask.rgubyCCEUTxPkt,
              sizeof(vstCCEUAppTask.rgubyCCEUTxPkt), 0);
/* logMsg("\nTrmtd = %d\t", wState); */
if (wState == ERROR)
{
    perror("write");
    close(vstCCEUSerSocket.wCCEUAcceptFD);  /* close the FD on error */
}
From the VxWorks OS Libraries API Reference
On pages 497/498 you can find info about connect(), but there is also a connectWithTimeout().
On pages 1203/1204 you might find some interesting items for TCP sockets, for example the KEEP_ALIVE option.
If you rely on a quick connection time and you want to keep control, you can combine connectWithTimeout() with keep-alive.
It can take another day for me to recall old code to check how I ever solved this in one of my projects.
VxWorks 5.5 Network Programmers Guide - Stream Sockets
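A rough sketch of combining the two (untested; the address setup is elided, the helper name is mine, and the exact signatures should be checked against your VxWorks headers):

#include <vxWorks.h>
#include <sockLib.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Hypothetical helper: connect with a bounded wait, then enable
   SO_KEEPALIVE so a dead peer eventually errors out of send()
   instead of leaving the task pended forever. */
STATUS connectPersistent(int sockFd, struct sockaddr_in *peer)
{
    struct timeval limit;
    int on = 1;

    limit.tv_sec  = 5;   /* 5 second connect timeout */
    limit.tv_usec = 0;

    if (connectWithTimeout(sockFd, (struct sockaddr *)peer,
                           sizeof(*peer), &limit) == ERROR)
        return ERROR;

    /* Ask the stack to probe idle connections. */
    return setsockopt(sockFd, SOL_SOCKET, SO_KEEPALIVE,
                      (char *)&on, sizeof(on));
}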
I have a couple of NSStreams (in & out, TLS) to a server. I can send and receive data via them just fine, but after a while, maybe 5 minutes without any traffic, the connection seems to be closed on its own. However, my delegate does not get called with NSStreamEventEndEncountered; I only get NSStreamEventErrorOccurred AFTER I try to send something.
The connection shouldn't close on its own in the first place because:
- the app is still active
- the device is not locked
- the Wi-Fi network it's using does not disconnect
- the remote server has a long TCP lifetime and the SO_KEEPALIVE flag active; the iPhone side also has SO_KEEPALIVE active on its native socket handles
Still, I'm more concerned about why my delegate doesn't get called than about the connection getting closed.
Any ideas?
Thanks
The client uses SSH to log in and start up a server on a remote machine, then the client creates a TCP connection to the server.
The server needs to exit when the client has exited normally, has crashed, or the network has dropped.
So the question is how to detect whether the client that the server is connected to has crashed.
My first try was using the error() signal, catching QAbstractSocket::NetworkError to determine that the network has dropped. But I don't receive the error() signal at all, even if I pull out the network cable.
My second try was using the SocketState: I think that whenever the SocketState is UnconnectedState, the client may have exited normally and the server should exit too. This works fine for a normal exit, but I don't know how to deal with a crash or a dead network.
Help me, thanks!
I'd recommend using TCP keep-alive. It is not exposed through the public QTcpSocket interface, but you can use setsockopt with QAbstractSocket::socketDescriptor to activate the SO_KEEPALIVE feature.
EDIT: It appears that keep alive was added to QAbstractSocket at some point. So, simply call QAbstractSocket::setSocketOption with QAbstractSocket::KeepAliveOption.
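For example, a minimal sketch (Qt 4.6 or later; the option has to be set after the socket is connected so it reaches a valid descriptor):

#include <QTcpSocket>

void enableKeepAlive(QTcpSocket *socket)
{
    // Apply after connectToHost() has succeeded; the OS-level
    // keep-alive probe interval is still governed by the kernel.
    socket->setSocketOption(QAbstractSocket::KeepAliveOption, 1);
}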
You can find information about adjusting the timeout of keep alive request here: http://www.gnugk.org/keepalive.html
Most of the time, the only way you will know there is a problem with a socket connection is when you try to read or write with it. There are some exceptions: Windows will change the state of sockets if the network cable is unplugged; Linux (in my experience) will not.
The most reliable way to detect connection problems is to have the client regularly send a small message at an agreed upon interval with the server. If the server does not see this message within a reasonable time, it should consider the client dead and drop the connection. This will also give both sides regular opportunities to detect a problem via reads and writes.
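A Qt-flavored sketch of the server side of that scheme (the 15 s interval and the helper name are illustrative, not from the question; assumes Qt 5):

#include <QTcpSocket>
#include <QTimer>

// Watchdog sketch: consider the client dead if nothing arrives
// within twice the agreed heartbeat interval.
void watchClient(QTcpSocket *client)
{
    QTimer *watchdog = new QTimer(client);
    watchdog->setSingleShot(true);
    watchdog->setInterval(2 * 15 * 1000); // heartbeat every 15 s, 30 s grace

    // Any traffic from the client (heartbeats included) resets the timer.
    QObject::connect(client, &QTcpSocket::readyRead,
                     watchdog, [watchdog]() { watchdog->start(); });
    // No traffic in time: treat the client as crashed and drop it.
    QObject::connect(watchdog, &QTimer::timeout,
                     client, [client]() { client->abort(); });
    watchdog->start();
}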
I'm using Websphere Application Server (WAS) 6.1's default messaging provider for JMS. My remote client application creates a connection, then calls setExceptionListener to register the callback.
When I simply stop the messaging engine using the WAS Integrated Solutions Console, my app behaves as expected, i.e., onException is called immediately and my app reacts accordingly. However, when I pull the network cable, the onException callback does not get called for somewhere between 30 and 60 seconds.
The ugly result is that my app just tries to keep sending messages to WAS during this 30 to 60 second time frame and those messages just get lost. I've done several searches trying to find out more about the ExceptionListener (e.g., is there some configuration parameter used to specify a callback timeout), but have not had any success.
Hopefully, this makes sense to someone out there. Any suggestions how I might be able to detect the cable "cut" scenario more quickly? Thanks for your help.
-Kris
You don't happen to have a 30 second TCP timeout defined?
If so, then MQ has handed over its responsibility temporarily to the JVM/OS and is waiting for it to ACK whatever network-related operation it has requested. Perhaps try lowering the TCP timeout value...