How do I check whether a client connection is still alive - networking

I am working on network programming using epoll. I have a connection list and put every client in the list. I can detect a disconnection by reading 0 bytes when a user disconnects normally. However, if a user gets disconnected unexpectedly, there is no way to know about it until I try to send data to that user.
I don't think epoll provides a nice way to handle this, so I think I have to handle it on my own. I would appreciate any examples or references related to this problem.

epoll_wait will report EPOLLHUP or EPOLLERR for the socket if the other side disconnects. EPOLLHUP and EPOLLERR are reported automatically, but you can also register for the newer EPOLLRDHUP event, which explicitly reports that the peer has shut down its end of the connection.
Also, if you call send with the MSG_NOSIGNAL flag, it will fail with errno set to EPIPE on a closed connection instead of raising SIGPIPE.
int resp = send(sock, buf, buflen, MSG_NOSIGNAL);
if (resp == -1 && errno == EPIPE) { /* other side gone away */ }
Much nicer than getting a signal.
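For completeness, a minimal sketch of registering for EPOLLRDHUP and handling hang-ups in the event loop might look like this (the epfd/client_fd names and the connection-list bookkeeping are assumed to come from your own code; error handling is omitted):

#include <sys/epoll.h>
#include <unistd.h>

/* Sketch only: register a client socket so the epoll loop also reports
   peer shutdown (EPOLLRDHUP). */
static void watch_client(int epfd, int client_fd)
{
    struct epoll_event ev = {0};
    ev.events = EPOLLIN | EPOLLRDHUP;  /* EPOLLHUP/EPOLLERR are always reported */
    ev.data.fd = client_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, client_fd, &ev);
}

static void handle_events(int epfd)
{
    struct epoll_event events[64];
    int n = epoll_wait(epfd, events, 64, -1);
    for (int i = 0; i < n; ++i) {
        if (events[i].events & (EPOLLRDHUP | EPOLLHUP | EPOLLERR)) {
            /* Peer closed its end or an error occurred: drop this client. */
            epoll_ctl(epfd, EPOLL_CTL_DEL, events[i].data.fd, NULL);
            close(events[i].data.fd);
            continue;
        }
        /* ... otherwise handle EPOLLIN: read until 0 (EOF) or EAGAIN ... */
    }
}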

How about TCP Keepalives: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html. See "Checking for dead peers". A later section on the same site has example code: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/programming.html.
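The gist of the "Checking for dead peers" section is enabling keepalive per socket. A rough sketch for Linux follows; the timing values are only placeholders, not recommendations:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Sketch: enable TCP keepalive on an already-connected socket fd.
   With these illustrative values a dead peer is detected after roughly
   60 + 5 * 10 seconds, and the socket starts returning errors (or
   EPOLLERR/EPOLLHUP in an epoll loop). */
static void enable_keepalive(int fd)
{
    int on    = 1;
    int idle  = 60;  /* seconds of idleness before the first probe */
    int intvl = 10;  /* seconds between probes */
    int cnt   = 5;   /* unanswered probes before the connection is dropped */
    setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof(on));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
}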

Related

Qt: Detect a QTcpSocket disconnection in a console app when the user closes it

My question title should be enough. I already tried (without success):
Using a GCC destructor attribute on a function, __attribute__((destructor)):
__attribute__((destructor)) void sendToServerAtExit() {
    mySocket->write("$%BYE_CODE%$");
}
The application destructor is called, but the socket is already disconnected and I can't write to the server.
Using the standard C function atexit(), but the TCP connection is already lost so I can't send anything to the server.
atexit(sendToServerAtExit); // is the same function of point 1
The solution I found is to check every second whether all connected sockets are still connected, but I don't want to do something that inefficient; it's only a temporary solution. Also, I want other apps (even web ones) to be able to join the chat room of my console app, and I don't want to have to poll every second.
What should I do?
Handle the signal below (QTcpSocket inherits from QAbstractSocket):
void QAbstractSocket::stateChanged(QAbstractSocket::SocketState socketState)
Inside the connected slot, check whether socketState is QAbstractSocket::ClosingState, which indicates the socket is about to close.
http://doc.qt.io/qt-5/qabstractsocket.html#SocketState-enum
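A minimal sketch of wiring that up (the MyClient class, the slot name, and the m_socket member are only illustrative):

// Somewhere in MyClient's constructor:
connect(m_socket, &QAbstractSocket::stateChanged,
        this, &MyClient::onSocketStateChanged);

// The slot:
void MyClient::onSocketStateChanged(QAbstractSocket::SocketState socketState)
{
    if (socketState == QAbstractSocket::ClosingState) {
        // The socket is about to close; treat the peer as gone.
    }
}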
You can connect a slot to the disconnect signal.
connect(m_socket, &QTcpSocket::disconnected, this, &Class::clientDisconnected);
Check the documentation.
You can also find out which user has disconnected by using a slot like this:
void Class::clientDisconnected()
{
    QTcpSocket* client = qobject_cast<QTcpSocket*>(sender());
    if (client)
    {
        // Do something
        client->deleteLater();
    }
    else
    {
        // Handle error
    }
}
This method is useful if you have a connection pool. You can use it with a single connection as well, but do not forget to set the pointer to nullptr after client->deleteLater().
If I understand your question correctly, you want to send data over TCP to notify the remote computer that you are closing the socket.
Technically this can be done in Qt by listening to the QIODevice::aboutToClose() or QAbstractSocket::stateChanged() signals.
However, if you exit your program gracefully, closing the QTcpSocket sends a FIN packet to the remote computer. This means the program running on the remote computer is notified that the TCP connection has finished. For instance, if the remote program is also using QTcpSocket, the QAbstractSocket::disconnected() signal will be emitted.
The real issues arise when one of the programs does not exit gracefully (crash, hardware issue, cable unplugged, etc.). In this case the TCP FIN packet is never sent, so the remote computer is never notified that the other side of the connection is gone; the TCP connection simply times out after a few minutes.
However, in this case you cannot send your final piece of data to the server either.
In the end, the only solution is to send an "I am here" packet every now and then. Even though you consider it inefficient, it is a widely used technique, and it has the advantage that it works.
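A rough sketch of such a heartbeat on the Qt side (the interval, the payload, and the m_socket member name are arbitrary choices for illustration):

// In the client: send a small "I am here" message periodically.
QTimer *heartbeat = new QTimer(this);
connect(heartbeat, &QTimer::timeout, this, [this]() {
    if (m_socket->state() == QAbstractSocket::ConnectedState)
        m_socket->write("$%PING%$");
});
heartbeat->start(5000);  // every 5 seconds
// On the server, drop any client that has been silent for, say, 15 seconds.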

How to query TCP connection state in go?

On the client side of a TCP connection, I am attempting to reuse established connections as much as possible to avoid the overhead of dialing every time I need one. Fundamentally it's connection pooling, although technically my pool size just happens to be one.
I'm running into a problem in that if a connection sits idle for long enough, the other end disconnects. I've tried using something like the following to keep connections alive:
err = conn.(*net.TCPConn).SetKeepAlive(true)
if err != nil {
fmt.Println(err)
return
}
err = conn.(*net.TCPConn).SetKeepAlivePeriod(30*time.Second)
if err != nil {
fmt.Println(err)
return
}
But this isn't helping. In fact, it's causing my connections to close sooner. I'm pretty sure this is because (on a Mac) the connection's health starts being probed after 30 seconds and is then probed 8 more times at 30-second intervals. The server side must not be supporting keepalive, so after 4 minutes and 30 seconds the client disconnects.
There might be nothing I can do to keep an idle connection alive indefinitely, and that would be absolutely fine if there were some way for me to at least detect that a connection has been closed, so that I can seamlessly replace it with a new one. Alas, even after reading all the docs and scouring the blogosphere for help, I can't find any way at all in Go to query the state of a TCP connection.
There must be a way. Does anyone have any insight into how that can be accomplished? Many thanks in advance to anyone who does!
EDIT:
Ideally, I'd like to learn how to handle this low-level, in pure Go, without using third-party libraries. Of course, if there is some library that does this, I don't mind being pointed in its direction so I can see how they do it.
The socket API doesn't give you access to the state of the connection. You can query the current state in various ways from the kernel (/proc/net/tcp[6] on Linux, for example), but that gives no guarantee that further sends will succeed.
I'm a little confused on one point here. My client is ONLY sending data. Apart from ACKing the packets, the server sends nothing back. Reading doesn't seem an appropriate way to determine connection status, as there's nothing TO read.
The socket API is defined such that you detect a closed connection by a read returning 0 bytes. That's the way it works. In Go, this is translated to a Read returning io.EOF. This is usually the fastest way to detect a broken connection.
So am I supposed to just send and act on whatever errors occur? If so, that's a problem, because I'm observing that I typically do not get any errors at all when attempting to send over a broken pipe, which seems totally wrong.
If you look closely at how TCP works, this is the expected behavior. If the connection is closed on the remote side, then your first send will trigger an RST from the server, fully closing the local connection. You either need to read from the connection to detect the close, or, if you try to send again, you will get an error (assuming you've waited long enough for the packets to make a round trip), like "broken pipe" on Linux.
To clarify... I can dial, unplug an Ethernet cable, and STILL send without error. The messages don't get through, obviously, but I receive no error.
If the connection is actually broken, or the server is totally unresponsive, then you're sending packets off to nowhere. The TCP stack can't tell the difference between packets that are really slow, packet loss, congestion, or a broken connection. The system needs to wait for the retransmission timeout, and retry the packet a number of times before failing. The standard configuration for retries alone can take between 13 and 30 minutes to trigger an error.
What you can do in your code is
Turn on keepalive. This will notify you of a broken connection more quickly, because the idle connection is always being tested.
Read from the socket. Either have a concurrent Read in progress, or check for something to read first with select/poll/epoll (Go usually uses the first)
Set timeouts (deadlines in Go) for everything.
If you're not expecting any data from the connection, checking for a closed connection is very easy in Go; dispatch a goroutine to read from the connection until there's an error.
notify := make(chan error)
go func() {
    buf := make([]byte, 1024)
    for {
        n, err := conn.Read(buf)
        if err != nil {
            notify <- err
            return
        }
        if n > 0 {
            fmt.Printf("unexpected data: %s", buf[:n])
        }
    }
}()
There is no such thing as 'TCP connection state', by design. There is only what happens when you send something. There is no TCP API, at any level down to the silicon, that will tell you the current state of a TCP connection. You have to try to use it.
If you're sending keepalive probes, the server doesn't have any choice but to respond appropriately; it doesn't even know that they are keepalives. They aren't, really: they are just duplicate ACKs. Supporting keepalive only means supporting sending keepalives.

TcpClient/NetworkStream not detecting disconnection

I created a simple persistent socket connection for our game using TcpClient and NetworkStream. There's no problem connecting, sending messages, and disconnecting normally (quitting app/server shuts down/etc).
However, I'm having some problems where, in certain cases, the client isn't detecting a disconnection from the server. The easiest way I have of testing this is to pull out the network cable on the wifi box, or set the phone to airplane mode, but it's happened in the middle of a game on what should otherwise be a stable wifi.
Going through the docs for NetworkStream etc., it says that the only way to detect a disconnection is to try to write to the socket. Fair enough, except that when I try, the write succeeds as if nothing is wrong. I can write multiple messages like this and everything seems fine. It's only when I plug the cable back in that it sees that it's disconnected (all the messages are buffered?).
The TcpClient is set to NoDelay, and there's a Flush() called after every write anyway.
What I've tried:
Writing a message to the NetworkStream - no joy
CanWrite, Connected, etc all return true
TcpClient.Client.Poll( 1000, SelectMode.SelectWrite ); - returns true
TcpClient.Client.Poll( 1000, SelectMode.SelectRead ) && TcpClient.Client.Available == 0 - returns true
TcpClient.Client.Receive(buffer, SocketFlags.Peek) == 0 - when connected, blocks for about 10-20s, then returns true. When no server, blocks forever(?)
NetworkStream.Write() - doesn't throw an error
NetworkStream.BeginWrite() - doesn't throw an error (not even when calling EndWrite())
Setting a WriteTimeout - had no effect
Having a specific time where we haven't received a message from the server (normally there's a keep-alive) - I had this, but removed it, as we were getting a lot of false-positives due to lag etc (some clients would see between 10-20s of lag)
So am I doing something wrong here? Is there any way to get the NetworkStream to throw an error (like it should) when writing to a socket that should be disconnected?
I've no problem with a keep-alive (the default case is the server will notify the client that it hasn't received anything in a while, and the client will send a heartbeat), but at the minute, according to the NetworkStream everything's hunky-dory.
It's for a game, so ideally the detection should be quick enough (as the user can still move through the game until they need to make a server call, some of which will block the UI, so the game seems broken).
I'm using Unity, so it's .Net 2.0
is to pull out the network cable on the wifi box
That's a good test. If you do that the remote party is not notified. How could it possibly find out? It can't.
when I try, the write passes as if nothing is wrong
Writes can be (and are) buffered. They eventually disappear into a black hole... no reply ever comes back. The only way to detect this is a timeout.
So am I doing something wrong here?
You have tried a lot of things, but fundamentally you cannot find out about a disconnect if no reply comes back telling you about it. Use a timeout.

recvfrom does not receive anything from the client, but the client is sending the info

So I wanted to make sure I am not missing something and that I have the concepts clear in my head.
This is the part of the code I have:
UDP_Msg mensajeRecibidoUdp;
struct sockaddr_in c_ain;
socklen_t tam_dir;
sock_udp = socket(AF_INET, SOCK_DGRAM, 0);
while(1)
{
    if(recvfrom(sock_udp, &mensajeRecibidoUdp, sizeof(UDP_Msg), 0,
                (struct sockaddr*) &c_ain, &tam_dir) < 0)
....
The problem is that it waits for a message to arrive. The message is sent, but this code never receives anything; it stays blocked.
It's a simple exercise, and the client is already built. It gets the port from a file, then sends a UDP_Msg, which is a struct with a couple of ints and an array (the client already knows the IP and port).
I thought the way I handle the buffer could be wrong, but every example I've seen uses it like that. I also thought the c_ain variable might need to be initialized, but if I understand it correctly that's not the case. So I don't understand why the process gets blocked, and since so much time passes, an alarm goes off and the process gets killed (because it should have received the data and carried on, but it did not).
I could add lots of other info; I tried to keep it short. I'm fairly sure that call is the one misbehaving, because of how the program acts when I run the client and server together.
Edit: Bind:
bzero((char*)&dir_udp_serv,sizeof(struct sockaddr_in));
dir_udp_serv.sin_family=AF_INET;
dir_udp_serv.sin_addr.s_addr=inet_addr(HOST_SERVIDOR);
dir_udp_serv.sin_port=0;
fprintf(stderr,"SERVIDOR: Asignacion del puerto servidor: ");
if(bind(sock_udp,
(struct sockaddr*)&dir_udp_serv,
sizeof(struct sockaddr_in))<0)
{
fprintf(stderr,"ERROR\n");
close(sock_udp); exit(1);
}
Second edit: I just realized I have two binds in the code. The idea is that I create two connections on the same server, one for UDP and one for TCP, so I did the same steps for both. From the answer I got, I realize that might be wrong. Should it be only one bind per socket, even if I create two sockets, one for UDP and one for TCP?
The above does not seem to be the case.
Also, I don't know what other details I should add, but the server and client are both on the same computer, so their addresses are both 127.0.0.1; that's expected, and I believe it doesn't change anything about why the server does not get the info sent from the client.
Ask yourself: on which port do your clients send their messages? Where is that port in your server code? Nowhere. You forgot to bind your local socket to your local address (and port):
struct sockaddr_in local;
local.sin_family = AF_INET;
local.sin_addr.s_addr = inet_addr("0.0.0.0");
local.sin_port = htons(PORT); // Now the server will listen on PORT.
if(bind(sock_udp, (struct sockaddr*)&local, sizeof(local)) < 0){
    perror("bind");
    exit(1);
}
// Now you may call recvfrom on your socket: your server is truly listening.
Note that you only need to bind once. No need to put this in a loop.
Alright, not sure if this is the place, but the answer to my own question was the expected one: the server was listening on one port and the client was sending to a different one. That was because of the way the assigned port was sent to the client: in my case, without converting it from network byte order (ntohs()), so when the client then applied htons(), the number ended up being a completely different one.
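For anyone hitting the same thing, a sketch of retrieving the kernel-assigned port after bind() and converting it to host byte order before publishing it to the client (it assumes sock_udp was bound with sin_port = 0, as in the question; the function name is made up):

/* Sketch: after bind() with sin_port = 0, ask the kernel which port it
   actually picked and return it in host byte order (ntohs), ready to be
   written to the file the client reads. */
static unsigned short puerto_asignado(int sock_udp)
{
    struct sockaddr_in asignada;
    socklen_t len = sizeof(asignada);
    if (getsockname(sock_udp, (struct sockaddr*)&asignada, &len) < 0)
        return 0;
    return ntohs(asignada.sin_port);  /* the client applies htons() again */
}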

Can a blackberry HTTP request error out immediately if there's no connection available?

I have an HTTP connection, opened by
HttpConnection c = (HttpConnection)Connector.open(url);
where url is one of:
http://foo.bar;deviceside=false
http://foo.bar;deviceside=false;ConnectionType=mds-public
http://foo.bar;deviceside=true;ConnectionUID=xxxxxxx
http://foo.bar;deviceside=true;interface=wifi
Is there any way to cause the request to error out immediately if the connection cannot be established because the device is not connected to a network? As it is, it takes about a minute to time out in many cases (specifically on the first call that gets information from the network: c.getResponseCode()).
Edit: I mean error out. In one case specifically, Wi-Fi, it will sit around for several minutes before timing out if Wi-Fi is not on, and I want it to stop right away.
I use the RadioInfo class to check whether there is a connection and whether the radio is turned on before trying to make a connection. Then you can just display a message to the user, or turn the radio on if it's off, before trying to connect; it makes for a much better user experience.
Try using:
if (RadioInfo.getState() == RadioInfo.STATE_OFF)
OR
if (RadioInfo.getSignalLevel() == RadioInfo.LEVEL_NO_COVERAGE)
To check connection status before connecting.
I wrap my posts in a thread so they time out faster. Make sure your "PostThread" catches all exceptions (and saves them).
public byte[] post(String url, byte[] requestString){
    PostThread thread = new PostThread(url, requestString);
    synchronized(thread){
        try{
            thread.start();
            thread.wait(TIMEOUT);
        }catch(Throwable e){
        }
    }
    if (thread.isAlive()){
        try{
            thread.interrupt();
        }catch(Throwable e){
        }
        D.error("Timeout");
    }
    if (thread.error != null) D.error(thread.error);
    if (thread.output != null) return thread.output;
    throw D.error("No output");
}
There is also the ConnectionTimeout parameter, which I have not tested, e.g. socket://server:80/mywebservice;ConnectionTimeout=2000
Not in any way that can be specified programmatically. It can be irritating, but a connection from a mobile device, especially a BlackBerry, generally goes through a few different networks and gateways before reaching the destination server: wireless->Carrier APN->Internet->BES (maybe)->foo.bar server. A large timeout is therefore built in to account for potential delays at any of those points.
You can control default device connection timeout from your BES/MDS server (or in the JDE, from the MDS\config\rimpublic.property file) - but that probably won't help you.
It would be better to have a timeout check from a different thread, because this can happen even when the connection is established, say when network latency is very high, and you don't want the user to wait that long.
So, in that case, have a different thread check whether the current time minus the time the connection was initiated exceeds your set limit, and if it does, close the connection using connection.close().
