I've been learning about UDP sockets lately by browsing the net, and all the pages explaining them mention that UDP sockets are "connectionless". If I understand correctly, this means that one does not have a "connection" between two sockets, but instead shoots datagram packets at specified endpoints without knowing whether the other end is listening.
Then I went and started reading the boost::asio::ip::udp::socket docs and found that they mention APIs like:
async_connect: Start an asynchronous connect.
async_receive: Start an asynchronous receive on a connected socket.
async_send: Start an asynchronous send on a connected socket.
Now this is a bit confusing for a novice. I can see three possible causes for my confusion (in order of likelihood :) )
I'm missing something
The asio implementation is doing something behind the scenes to virtualize the connection.
The documentation is wrong
There is also a slight glitch in the docs: when you open the page for basic_datagram_socket::async_connect, the example there instantiates TCP sockets (instead of UDP ones).
Would someone please enlighten me?
The Single UNIX Specification has a better explanation of what connect() does for connectionless sockets:
If the initiating socket is not connection-mode, then connect() sets the socket's peer address, but no connection is made. For SOCK_DGRAM sockets, the peer address identifies where all datagrams are sent on subsequent send() calls, and limits the remote sender for subsequent recv() calls.
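To make that concrete, here is a minimal sketch (in plain Java rather than asio, with a made-up address and port) of what "connecting" a datagram socket actually does: no packets are exchanged, the socket just remembers a default peer and filters incoming datagrams.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetSocketAddress;

    public class ConnectedUdpSketch {
        public static void main(String[] args) throws Exception {
            DatagramSocket sock = new DatagramSocket();

            // connect() only records the peer address locally; nothing goes on
            // the wire and nobody needs to be listening on the other side.
            sock.connect(new InetSocketAddress("127.0.0.1", 9000));

            // With a connected socket the destination can be omitted from the packet.
            byte[] ping = "ping".getBytes();
            sock.send(new DatagramPacket(ping, ping.length));

            // receive() now only delivers datagrams from 127.0.0.1:9000;
            // datagrams from any other sender are silently dropped.
            byte[] buf = new byte[1024];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            sock.receive(reply);    // blocks until a matching datagram arrives
            System.out.println("got " + reply.getLength() + " bytes");
        }
    }

boost::asio's connect()/async_connect() on a udp::socket do the same thing at the socket level, which is why the connected-socket send()/async_send() and receive()/async_receive() overloads become usable afterwards.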
Related
We have a simple TCP server behind an AWS Network ELB (similar to an echo server with long-lived connections) written in Netty, and I'm trying to implement a keep-alive mechanism, similar to TCP keep-alive, to keep our idle connections open. Unfortunately we cannot rely on TCP keep-alive since NELBs do not forward keep-alive TCP packets to the other side of the load balancer.
What I'm thinking of doing is watching for idle connections and sending an empty string (an empty byte array) to clients. What I've done so far in the code is:
Add an IdleStateHandler with some timeout values
Register a GprsKeepAliveHandler, a subclass of ChannelDuplexHandler, overriding the userEventTriggered method to send (ctx.writeAndFlush) Unpooled.EMPTY_BUFFER.
This way, I expect to receive an RST packet if the connection is gone; otherwise the connection will become active again.
The problem is that Netty does not do anything with the empty message; it does not send any packets to the client (monitored with Wireshark). If I change the message to Unpooled.wrappedBuffer(new byte[]{0}), I see what I expect to see.
Questions
I couldn't find a better way to achieve my objective (keep connections alive and detect dead connections). If there's a better way please let me know.
What is the proper way to send an empty message in Netty? (I saw this question but it didn't help)
If the issue is because of OS TCP stack behavior, is there a way to solve this problem?
From my perspective you need to send something meaningful for what you're trying to do (e.g. ping/pong, heartbeat behavior). Also see Is it possible to force TCP socket to send 0 bytes in case of packet losses - python.
It seems that Netty does not make any syscall in the case of empty messages (see this).
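A rough sketch of the ping-style keep-alive suggested above (the handler name and byte value are hypothetical, not taken from your code): on writer-idle, send one real byte instead of Unpooled.EMPTY_BUFFER, so an actual segment reaches the wire.

    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelDuplexHandler;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.timeout.IdleState;
    import io.netty.handler.timeout.IdleStateEvent;

    public class HeartbeatHandler extends ChannelDuplexHandler {
        // Must be a byte the peer's protocol can recognize or ignore.
        private static final byte[] PING = { 0x00 };

        @Override
        public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
            if (evt instanceof IdleStateEvent
                    && ((IdleStateEvent) evt).state() == IdleState.WRITER_IDLE) {
                // A one-byte payload produces a real write syscall and a real
                // TCP segment, unlike an empty buffer.
                ctx.writeAndFlush(Unpooled.wrappedBuffer(PING));
            } else {
                super.userEventTriggered(ctx, evt);
            }
        }
    }

    // Pipeline wiring, e.g. inside a ChannelInitializer:
    //   pipeline.addLast(new IdleStateHandler(0, 30, 0));  // writer-idle after 30s
    //   pipeline.addLast(new HeartbeatHandler());

The only difference from the approach in the question is the payload: with one real byte, the peer (and the load balancer in between) actually sees traffic.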
I'm trying to determine if a client has closed a socket connection from netty. Is there a way to do this?
In the usual case, where a client closes the socket via close() and the TCP closing handshake has finished successfully, a channelInactive() (or channelClosed() in Netty 3) event will be triggered.
However, in an unusual case, such as a client machine going offline due to a power outage or an unplugged LAN cable, it can take a long time before you discover that the connection was actually down. To detect this situation, you have to send some message to the client periodically and expect to receive its response within a certain amount of time. It's like a ping: you should define periodic ping and pong messages in your protocol which do practically nothing but check the health of the connection.
Alternatively, you can enable SO_KEEPALIVE, but the keepalive interval of this option is usually OS-dependent and I would not recommend using it.
To help a user implement this sort of behavior relatively easily, Netty provides ReadTimeoutHandler. Configure your pipeline so that ReadTimeoutHandler raises an exception when there's no inbound traffic for a certain amount of time, and close the connection on the exception in your exceptionCaught() handler method. If you are the party who is supposed to send a periodic ping message, use a timer (or IdleStateHandler) to send it.
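A minimal sketch of that arrangement (the timeout value and handler names are arbitrary): ReadTimeoutHandler raises a ReadTimeoutException when nothing has been read for a while, and the connection is closed in exceptionCaught().

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.timeout.ReadTimeoutException;
    import io.netty.handler.timeout.ReadTimeoutHandler;

    public class TimeoutAwareInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(new ReadTimeoutHandler(60));   // seconds without inbound data
            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                @Override
                public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                    if (cause instanceof ReadTimeoutException) {
                        ctx.close();   // assume the peer is gone
                    } else {
                        ctx.fireExceptionCaught(cause);
                    }
                }
            });
            // ... your protocol handlers, plus an IdleStateHandler-driven ping
            // if you are the side that must send the periodic heartbeat.
        }
    }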
If you are writing a server, and netty is your client, then your server can detect a disconnect by calling select() or equivalent to detect when the socket is readable and then call recv(). If recv() returns 0 then the socket was closed gracefully by the client. If recv() returns -1 then check errno or equivalent for the actual error (with few exceptions, most errors should be treated as an ungraceful disconnect). The thing about unexpected disconnects is that they can take a long time for the OS to detect, so you would have to either enable TCP keep-alives, or require the client to send data to the server on a regular basis. If nothing is received from the client for a period of time then just assume the client is gone and close your end of the connection. If the client wants to, it can then reconnect.
If you read from a connection that has been closed by the peer you will get an end-of-stream indication of some kind, depending on the API. If you write to such a connection you will get an IOException: 'connection reset'. TCP doesn't provide any other way of detecting a closed connection.
TCP keep-alive (a) is off by default and (b) only operates every two hours by default when enabled. This probably isn't what you want. If you use it and you read or write after it has detected that the connection is broken, you will get the reset error described above.
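In plain blocking Java, those two cases surface roughly like this (hypothetical helper methods, just for illustration):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    public final class PeerCloseDetection {
        static void readUntilPeerCloses(Socket socket) throws IOException {
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                // process n bytes ...
            }
            // read() == -1: the peer sent a FIN, i.e. an orderly close (end of stream).
        }

        static void writeMayReset(Socket socket, byte[] data) {
            try {
                socket.getOutputStream().write(data);
                socket.getOutputStream().flush();
            } catch (IOException e) {
                // Writing to a connection the peer has reset typically fails with
                // "Connection reset" / "Broken pipe", often only on a later write
                // because of buffering.
            }
        }
    }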
It depends on the protocol that you use on top of Netty. If you design it to support ping-like messages, you can simply send those messages. Besides that, Netty is only a pretty thin wrapper around TCP.
Also see this SO post which describes isOpen() and related. This however does not solve the keep-alive problem.
Consider a TCP connection established between two TCP endpoints, where one of them calls either:
close():
Here, no further read or write is permitted.
shutdown(fd, SHUT_WR):
This converts the full duplex connection to simplex, where the endpoint invoking SHUT_WR can still read.
However, in both the cases a FIN packet is sent on the wire to the peer endpoint. So the question is, how can the TCP endpoint which receives the FIN distinguish whether the other endpoint has used close() or SHUT_WR, since in the latter scenario it should still be able to send data?
Basically, the answer is, it doesn't. Or, rather, the only general way to find out is to try to send some data and see if you get an ACK or an RST in response.
Of course, the application protocol might provide some mechanism for one side of the connection to indicate in advance that it no longer wants to receive any more data. But TCP itself doesn't provide any such mechanism.
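To make the asymmetry concrete, here is a small Java sketch of the half-close side (Socket.shutdownOutput() is Java's equivalent of shutdown(fd, SHUT_WR)). The peer sees exactly the same FIN it would see after close(), and can only learn more by continuing to send.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    public final class HalfCloseDemo {
        static void sendRequestThenDrainReply(Socket socket, byte[] request) throws IOException {
            socket.getOutputStream().write(request);
            socket.shutdownOutput();              // the FIN goes out; further writes fail

            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {    // reading still works until the peer closes
                // consume the peer's remaining data ...
            }
            socket.close();                       // now release the descriptor
        }
    }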
I have a client socket that pushes image data to a server socket after the connection handshake is done, and the server socket processes the data without responding with anything.
It works well for a few minutes, but after some time the server socket stops getting the data, and I couldn't figure out why. Is there some rule in TCP that if the client keeps pushing data, the server must say something, otherwise the conversation will stop?
I wrote this code years ago, and to make it work I made the server return a string "ACK" response. It also works if I change that to any other string.
But now I want to figure out why, so that I can restructure the program.
"One-way" communication with TCP is totally fine unless you need an acknowledgment from the receiver on the sending side. But that's your application-level protocol. At the transport level the packets still flow both ways - TCP keeps sequence numbers in both directions and acknowledges them to the other side. This allows for detecting dropped/duplicate packets and for re-transmission, thus providing reliability of the stream. The window sizes negotiated during connection handshake and updated during the life of the conversation allow TCP to slow down fast sender that would overwhelm a slow receiver.
What you really need to do is to record the TCP connection with a sniffer like tcpdump(1) or wireshark and find out what happens on the wire at the point when "socket stops getting those Data".
There are 3 different accept versions in Winsock. Aside from the basic accept, which is there for standards compliance, there's also AcceptEx, which seems to be the most advanced version (due to its overlapped I/O capabilities), and WSAAccept. The latter supports a condition callback which, as far as I understand, allows the rejection of connection requests before they're accepted (when the SO_CONDITIONAL_ACCEPT option is enabled). Neither of the other versions supports this functionality.
Since I prefer to use AcceptEx with overlapped I/O, I wonder why this functionality is only available in the simpler version.
I don't know enough about the inner workings of TCP to tell whether there's actually any difference between rejecting a connection before it has been accepted and disconnecting a socket right after a connection has been established. And if there is, is there any way to mimic the WSAAccept functionality with AcceptEx?
Can someone shed some light over this issue?
When a connection is established, the remote end sends a packet with the SYN flag set. The server answers with a SYN,ACK packet, and after that the remote end sends an ACK packet, which may already contain data.
There are two ways to prevent a TCP connection from forming. The first is resetting the connection; this is the same as the common "connection refused" message seen when connecting to a port nobody is listening on. In this case, the original SYN packet is answered with an RST packet, which terminates the connection attempt immediately and is stateless. If the SYN is resent, an RST will be generated for every received SYN packet.
The second is closing the connection as soon as it has been formed. At the TCP level there is no way to close the connection in both directions immediately; the only thing you can say is "I am not going to send any more data". What happens is that once the initial SYN, SYN-ACK, ACK exchange has finished, the server sends a FIN packet to the remote end. In most cases, telling the other end with a FIN that "I am not going to send any more data" makes the other end close the connection as well and send its own FIN packet. A connection terminated this way isn't any different from a normal connection where no data happened to be sent. This means that the normal state tracking for TCP connections and the lingering close states will persist, just like for normal connections.
Now, on the C API side this looks a bit different. When you call listen() on a port, the OS starts accepting connections on that port. This means that it starts replying with SYN,ACK packets to incoming connections, regardless of whether the C code has called accept() yet. So, on the TCP side, it makes no difference whether the connection is somehow closed before or after accept. The only additional concern is that a listening socket has a backlog, which limits the number of non-accepted connections that can be waiting before it starts sending RST to the remote end.
However, on Windows, the SO_CONDITIONAL_ACCEPT option allows the application to take control of the backlog queue. This means that the server will not answer anything to a SYN packet until the application does something with the connection. It also means that rejecting connections at this level can actually send RST packets to the network without creating any state.
So, if you cannot get the SO_CONDITIONAL_ACCEPT functionality enabled somehow on the socket you are using AcceptEx on, it will show up differently to the network. However, not many places actually use the immediate RST functionality, so I would think the requirement for that must mean a very specialized system indeed. For most common use cases, accepting a socket and then closing it is the normal way to behave.
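For what it's worth, outside of Winsock that ordinary "accept it, then close it" approach looks like this in plain Java (the rejection rule here is made up purely for illustration):

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class AcceptThenClose {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();   // handshake already completed by the OS
                    InetSocketAddress peer = (InetSocketAddress) client.getRemoteSocketAddress();

                    // Illustrative policy check; a real server would apply its own rules.
                    if (!peer.getAddress().isLoopbackAddress()) {
                        client.close();                // orderly FIN-based close, not an RST
                        continue;
                    }
                    // ... hand the accepted socket to a worker ...
                }
            }
        }
    }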
I can't comment on the Windows side of things but as far as TCP is concerned, rejecting a connection is a bit different than disconnecting from it.
For one, disconnecting from an established connection means more resources have already been "consumed" (e.g. port state maintained in firewalls and endpoints, forwarding capacity used in switches/routers, etc.) in both the network and the hosts. Rejecting a connection is less resource-intensive.