Windows socket close (closesocket()) generates RST.
On Linux, when I call close() on a TCP socket, the connection goes through the FIN/ACK exchange between client and server and the socket is closed.
But with Winsock on Windows, whenever I call closesocket() it always generates an RST segment.
I tried using shutdown(). It generates a FIN, but in the end I still have to call closesocket(), and that sends an RST.
Is there a way to call closesocket() to release the socket's resources without sending an RST?
There are several things that can cause closesocket() to send an RST instead of a FIN:
Calling it when there is unread data pending in the socket receive buffer.
Calling it after you have set the SO_LINGER option to 'on' with a zero timeout.
If there is no more data to send and/or receive, you can use shutdown().
Ex: shutdown(clisocket, SD_BOTH);
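So the usual recipe is: shut down your send side, drain whatever the peer still has in flight, and only then close. A minimal sketch for Winsock, assuming sock is a connected SOCKET and the application has finished writing (graceful_close is a made-up helper name):

#include <winsock2.h>

/* Graceful close: shutdown() emits our FIN, the recv() loop empties the
 * receive buffer (unread data is one cause of the RST), and closesocket()
 * then releases the handle without resetting the connection. */
void graceful_close(SOCKET sock)
{
    char buf[512];
    int n;

    shutdown(sock, SD_SEND);                /* done writing; FIN goes out */
    do {
        n = recv(sock, buf, sizeof buf, 0); /* drain until peer's FIN (0) or error */
    } while (n > 0);
    closesocket(sock);                      /* receive buffer empty: no RST */
}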
I'm trying to determine from Netty whether a client has closed a socket connection. Is there a way to do this?
In the usual case, where a client closes the socket via close() and the TCP closing handshake finishes successfully, a channelInactive() event (or channelClosed() in Netty 3) will be triggered.
However, in an unusual case, such as the client machine going offline due to a power outage or an unplugged LAN cable, it can take a long time before you discover that the connection was actually down. To detect this situation, you have to send some message to the client periodically and expect to receive its response within a certain amount of time. It's like a ping: define periodic ping and pong messages in your protocol that do practically nothing but check the health of the connection.
Alternatively, you can enable SO_KEEPALIVE, but the keepalive interval of this option is usually OS-dependent and I would not recommend using it.
To help a user implement this sort of behavior relatively easily, Netty provides ReadTimeoutHandler. Configure your pipeline so that ReadTimeoutHandler raises an exception when there's no inbound traffic for a certain amount of time, and close the connection on the exception in your exceptionCaught() handler method. If you are the party who is supposed to send a periodic ping message, use a timer (or IdleStateHandler) to send it.
If you are writing a server, and netty is your client, then your server can detect a disconnect by calling select() or equivalent to detect when the socket is readable and then call recv(). If recv() returns 0 then the socket was closed gracefully by the client. If recv() returns -1 then check errno or equivalent for the actual error (with few exceptions, most errors should be treated as an ungraceful disconnect). The thing about unexpected disconnects is that they can take a long time for the OS to detect, so you would have to either enable TCP keep-alives, or require the client to send data to the server on a regular basis. If nothing is received from the client for a period of time then just assume the client is gone and close your end of the connection. If the client wants to, it can then reconnect.
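In plain C, that recv()-based check looks roughly like the sketch below (check_peer is a made-up helper name; it assumes sock is a connected, blocking TCP socket):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Returns -1 if the peer is gone, 0 if nothing happened, else bytes read. */
int check_peer(int sock)
{
    fd_set rfds;
    struct timeval tv = { 5, 0 };       /* wait up to 5 seconds for readability */
    char buf[4096];
    ssize_t n;

    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    if (select(sock + 1, &rfds, NULL, NULL, &tv) <= 0)
        return 0;                       /* timeout or select error: no news yet */

    n = recv(sock, buf, sizeof buf, 0);
    if (n == 0)
        return -1;                      /* 0 bytes: the peer closed gracefully */
    if (n < 0) {
        fprintf(stderr, "recv: %s\n", strerror(errno));
        return -1;                      /* most errors mean an ungraceful disconnect */
    }
    return (int)n;                      /* normal data: process buf[0..n) */
}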
If you read from a connection that has been closed by the peer you will get an end-of-stream indication of some kind, depending on the API. If you write to such a connection you will get an IOException: 'connection reset'. TCP doesn't provide any other way of detecting a closed connection.
TCP keep-alive (a) is off by default and (b) only operates every two hours by default when enabled. This probably isn't what you want. If you use it, and you read or write after it has detected that the connection is broken, you will get the reset error above.
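For completeness, here is a sketch of turning keep-alive on anyway; the per-socket tuning options (TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT) are Linux-specific, and the numbers below are arbitrary examples:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int sock)
{
    int on = 1;
    if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
        return -1;                 /* probes start after the OS default idle time */
#ifdef TCP_KEEPIDLE                /* Linux-only: override the two-hour default */
    int idle = 60, intvl = 10, cnt = 5;
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle);  /* idle secs before first probe */
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl); /* secs between probes */
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt);   /* failed probes before giving up */
#endif
    return 0;
}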
It depends on the protocol that you use on top of Netty. If you design it to support ping-like messages, you can simply send those messages. Beyond that, Netty is only a pretty thin wrapper around TCP.
Also see this SO post, which describes isOpen() and related methods. That, however, does not solve the keep-alive problem.
I'm studying Unix sockets programming. I made a time server that sends raw time data and a client for it that receives that data and converts it to local time.
When I run the server, connect a client to it (which causes both of them to do their job and shut down), and then rerun the server, I get errno = 98 on the bind() call. I have to change the port in the server's source code and recompile to get rid of that error. When I then run the server and connect to it again, it's OK, but after another rerun the situation repeats. I can, however, change back to the previous port at that point. So I'm jumping between ports 1025 and 1026 on every debug run (and those are very frequent, so this is a little annoying).
It works like this: The server opens the listener socket, binds to it, listens to it, accepts a connection into a data socket, writes a time_t to it, closes the data socket and then closes the listener socket. The client opens a socket, connects to a server, reads data and closes the socket.
What's the problem?
Thanks in advance.
Sockets have a lingering time after they close. The port may stay taken for a little while after your application exits, so that any unsent data can still be delivered. If you wait long enough, the port is released and can be taken again by another socket.
For more info on Socket Lingering check out:
http://www.developerweb.net/forum/archive/index.php/t-2982.html
setsockopt and SO_REUSEADDR
errno 98 - Address already in use
Look into SO_REUSEADDR
Beej's guide to network programming
More details on what causes this:
http://www.serverframework.com/asynchronousevents/2011/01/time-wait-and-its-design-implications-for-protocols-and-scalable-servers.html
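A minimal sketch of the SO_REUSEADDR fix suggested above; the option must be set before bind(), and make_listener is a made-up helper name:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int make_listener(unsigned short port)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    struct sockaddr_in addr = {0};

    /* Allow rebinding the port while old connections sit in TIME_WAIT. */
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");   /* without the option, errno 98 (EADDRINUSE) lands here */
        return -1;
    }
    listen(sock, 5);
    return sock;
}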
If a TCP server and client are connected, I'd like to determine when the client is no longer connected. I thought I could simply do this by attempting to send a message to the client, and once send() returns -1, I can tear down the socket. This implementation works fine on Windows, but the minute I try it on Linux with BSD sockets, the call to send() on the server-side app causes my server app to crash if the client is no longer connected. It doesn't even return -1; it just terminates the program.
Please explain why this is happening. Thanks in advance!
This is caused by the SIGPIPE signal. See send(2):
The send() function shall fail if:
[EPIPE]
The socket is shut down for writing, or the socket is connection-mode and is no longer connected. In the latter case, and if the socket is of type SOCK_STREAM or SOCK_SEQPACKET and the MSG_NOSIGNAL flag is not set, the SIGPIPE signal is generated to the calling thread.
You can avoid this by using the MSG_NOSIGNAL flag on the send() call, or by ignoring the SIGPIPE signal with signal(SIGPIPE, SIG_IGN) at the beginning of your program. Then the send() function will return -1 and set errno to EPIPE in this situation.
You need to ignore the SIGPIPE signal. If a write error happens on a socket, your process will get SIGPIPE, and the default behavior of that signal is to kill your process.
When writing networking code on *nix, you usually want:
signal(SIGPIPE, SIG_IGN);
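A sketch putting both options together (assuming sock is a connected TCP socket whose peer has gone away; note MSG_NOSIGNAL exists on Linux and in POSIX.1-2008 but not on every platform, e.g. macOS uses the SO_NOSIGPIPE socket option instead):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Option 1: suppress the signal per call; send() fails with EPIPE instead. */
ssize_t send_nosig(int sock, const void *buf, size_t len)
{
    ssize_t n = send(sock, buf, len, MSG_NOSIGNAL);
    if (n < 0 && errno == EPIPE)
        fprintf(stderr, "peer closed the connection\n");
    return n;
}

int main(void)
{
    /* Option 2: ignore SIGPIPE once, process-wide; every send() on a dead
     * connection then returns -1 with errno == EPIPE instead of killing us. */
    signal(SIGPIPE, SIG_IGN);
    /* ... create sockets and call send() / send_nosig() as usual ... */
    return 0;
}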
When using a TCP socket, what does
shutdown(sock, SHUT_RD);
actually do? Does it just make all recv() calls return an error code? If so, which error code?
Does it cause any packets to be sent on the underlying TCP connection? And what happens to any data the other side sends at this point: is it kept, with the connection's window size shrinking until it reaches 0, or is it just discarded, with the window size unchanged?
Shutting down the read side of a socket will cause any blocked recv (or similar) calls to return 0 (indicating graceful shutdown). I don't know what will happen to data currently traveling up the IP stack. It will most certainly ignore data that is in-flight from the other side. It will not affect writes to that socket at all.
In fact, judicious use of shutdown() is a good way to ensure that you clean up as soon as you're done. An HTTP client that doesn't use keepalive can shut down the write side as soon as it has finished sending the request, and a server that sees Connection: close can likewise shut down the read side as soon as it has finished receiving the request. This makes any further erroneous activity immediately obvious, which is very useful when writing protocol-level code.
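A sketch of that half-close pattern for a one-shot client; sock is assumed connected, req/req_len hold the complete request, and handle() is a hypothetical response processor:

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void handle(const char *buf, size_t len);   /* hypothetical: consume response bytes */

void one_shot(int sock, const char *req, size_t req_len)
{
    char buf[4096];
    ssize_t n;

    send(sock, req, req_len, 0);
    shutdown(sock, SHUT_WR);                /* done sending: our FIN goes out now */
    while ((n = recv(sock, buf, sizeof buf, 0)) > 0)
        handle(buf, (size_t)n);
    close(sock);                            /* n == 0 means the server closed too */
}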
Looking at the Linux source code, shutdown(sock, SHUT_RD) doesn't seem to cause any state changes to the socket. (Obviously, shutdown(sock, SHUT_WR) causes FIN to be set.)
I can't comment on the window size changes (or lack thereof). But you can write a test program to see. Just make your inetd run a chargen service, and connect to it. :-)
shutdown(sock, SHUT_RD) has no counterpart in the TCP protocol, so it is pretty much up to the implementation how to behave when someone writes to a connection whose other side has indicated that it will not read, or when you try to read after declaring that you won't.
At a slightly lower level, it helps to remember that a TCP connection is a pair of flows, on which the peers send data until they declare that they are done (via SHUT_WR, which sends a FIN). These two flows are quite independent.
I tested shutdown(sock, SHUT_RD) on Ubuntu 12.04. I found that if there is no data of any kind (including a FIN) in the TCP buffer when you call shutdown(sock, SHUT_RD), subsequent read calls return 0 (indicating end of stream). But if some data arrived before or after the shutdown call, read proceeds normally, as if shutdown had not been called. It seems that shutdown(sock, SHUT_RD) doesn't cause any TCP state changes on the socket.
It has two effects, one of them platform-dependent.
recv() will return zero, indicating end of stream.
Any further writes to the connection by the peer will either be (a) silently thrown away by the receiver (BSD), (b) be buffered by the receiver and eventually cause send() to block or return -1/EAGAIN/EWOULDBLOCK (Linux), or (c) cause the receiver to send an RST (Windows).
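A tiny sketch of the local effect, assuming sock is a connected, blocking socket with nothing pending to read:

#include <sys/socket.h>
#include <sys/types.h>

void demo_shut_rd(int sock)
{
    char buf[256];
    ssize_t n;

    shutdown(sock, SHUT_RD);
    n = recv(sock, buf, sizeof buf, 0);     /* returns 0: end of stream */
    (void)n;
    send(sock, "still writable\n", 15, 0);  /* the write side is unaffected */
}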
shutdown(sock, SHUT_RD) causes any writer to the socket to receive a SIGPIPE signal.
Any further reads using the read() system call will return -1 and set errno to EINVAL.
Using recv() will return -1 and set errno to indicate the error (probably ENOTCONN or ENOTSOCK).
Situation: The server calls accept(). The client sends a SYN to the server. The server gets the SYN, and then sends a SYN/ACK back to the client. However, the client now hangs up / dies, so it never sends an ACK back to the server.
What happens? Does accept() return as soon as it receives the SYN, or does block until the client's ACK is returned? If it blocks, does it eventually time-out?
The call to accept() blocks until it has a connection. Unless and until the 3-way handshake completes there is no connection, so accept() should not return. For non-blocking sockets it won't block, but neither will it give you info about partially completed handshakes.
If the client never sends an ACK, accept() will either block or, if the socket is marked non-blocking, fail with EAGAIN.
It will eventually time out, because that scenario is in actual fact a DoS (Denial of Service), and the resources held for the half-open connection are returned to the operating system. It might cause the master socket to block, since the client is only connected to the server once accept() returns a valid file descriptor.
If an error occurs during the connection from the client, errno will be set, and a good idea would be to log or display an error message. However, read the man pages; they are the best source of information in most cases.
If there is a failure, say a timeout because the handshake does not complete, accept() will return -1 and set errno. I believe, after looking at the man page, that it will set errno to ECONNABORTED.
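A sketch of an accept loop that survives aborted handshakes instead of dying on the first error (serve is a made-up name; it assumes listener is already bound and listening):

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int listener)
{
    for (;;) {
        int client = accept(listener, NULL, NULL);
        if (client < 0) {
            if (errno == ECONNABORTED || errno == EINTR)
                continue;           /* aborted handshake or signal: just retry */
            perror("accept");       /* anything else: log it and stop */
            break;
        }
        /* ... handle the connection ... */
        close(client);
    }
}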