No FIN flags set in Wireshark trace - tcp

I have a trace from http://example.com. I can see the SYN and ACK and the other TCP goodies, however there's no FIN.
I've tried some other sites as well, still no FIN.
Are all the webservers/browsers misbehaving and not closing as they should, or is this correct behavior and to do with reusing connections (or something else)?

Are you closing the browser after visiting the page? Try that, and see if now you see the FIN when the browser exits.
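If you'd rather not rely on browser behaviour, a tiny client that connects and then immediately calls close() will show you the FIN in the same capture. A minimal sketch (example.com on port 80 is just a placeholder target):

    /* Sketch: connect and close; the close() is what makes the kernel send the FIN.
       Watch for it in Wireshark with the display filter tcp.flags.fin == 1. */
    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        connect(fd, res->ai_addr, res->ai_addrlen);
        close(fd);                      /* the FIN goes out here */
        freeaddrinfo(res);
        return 0;
    }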

Related

How to limit HTTPS to one TCP connection?

I'm using uIP along with mbed TLS to run a simple web server on a microcontroller, and host an HTTPS page.
The problem is: my chip only has enough RAM to handle one TLS connection at a time, but Firefox (and Chrome) tries to open multiple connections at once to load the images on the page. If I tell uIP to abort or close additional connections, Firefox assumes an error and gives up loading the rest of the page.
I can tell uIP to limit the total connections to 1, and in that case it just drops new SYN packets if there is already a connection. This actually works, as Firefox will wait and try again until the page is fully loaded. I can't use this as a solution, however, since I do need to allow more than 1 TCP connection in total in order to handle other types of connections (I can serve a regular HTTP web page at the same time, for example). If I could tell uIP to limit connections on a specific port to 1 at a time, that might solve the problem, but I don't think uIP has that capability. I also don't see a way to force uIP to drop certain packets.
I've looked all over the web, but I can't find any information on running a web server using just one TCP connection at a time.
Does anyone have any ideas?
Thanks!
Marlon
Just ignore the SSL connection until you are ready to process it. Browsers should tolerate this.
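As a rough illustration of that idea, here is a sketch using plain BSD sockets rather than uIP's event-driven API (so it is not a drop-in for uIP, and the tls_session_active flag is a hypothetical placeholder): simply don't accept the next HTTPS connection until the current TLS session has finished. The pending connection waits in the listen backlog with its handshake completed, so the browser keeps waiting instead of treating it as an error.

    /* Sketch only: defer accept() on the HTTPS port while a TLS session is active.
       BSD sockets are used for illustration; uIP's API differs.
       Assumes the listening socket is non-blocking and polled in a loop. */
    #include <stdbool.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static bool tls_session_active = false;   /* hypothetical flag: set while mbed TLS owns the RAM */

    void poll_https(int https_listen_fd) {
        if (tls_session_active)
            return;                    /* leave any new connection queued in the backlog */

        int client = accept(https_listen_fd, NULL, NULL);
        if (client < 0)
            return;

        tls_session_active = true;
        /* ... hand 'client' to mbed TLS, serve the request ... */
        close(client);
        tls_session_active = false;
    }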

What does an HTTP server send to client when the socket is to be closed?

As per title. I've been googling extensively and found nothing. I've tried the RFC definition of HTTP/1.1, however it is quite a cumbersome document. Does the server send anything? (Now is a good time to mention that I want to use persistent connections.)
With HTTP/1.1 persistent connections, the connection may be kept open after a request, typically for a few seconds. Once that time has passed, the TCP connection is simply closed. No messages or anything are sent; none are needed.
You can easily see this yourself if you fire up a packet sniffer, like Wireshark.
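If you'd rather see it from the application side than from the capture, the sketch below (example.com is a placeholder) shows the same thing: after a keep-alive request, recv() eventually returns 0 when the server's idle timer fires and it simply closes the connection; no HTTP-level goodbye precedes it.

    /* Sketch: send a keep-alive request, then keep reading. When the server's idle
       timeout fires it just closes the connection and recv() returns 0. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;

        const char *req = "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n";
        send(fd, req, strlen(req), 0);

        char buf[4096];
        ssize_t n;
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
            ;                           /* response bytes, then silence during the idle period */
        printf("recv returned %zd: the server closed the connection without any message\n", n);
        close(fd);
        return 0;
    }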

How does a TCP connection terminate if one of the machines dies?

If a TCP connection is established between two hosts (A & B), and let's say host A has sent 5 octets to host B, and then host B crashes (for some unknown reason).
Host A will wait for acknowledgements, but on not getting them, will resend the octets and also reduce the sender window size.
This will repeat a couple of times until the window size shrinks to zero because of packet loss. My question is, what will happen next?
In this case, TCP eventually times out waiting for the ACKs and returns an error to the application. The application has to read/recv from the TCP socket to learn about that error; a subsequent write/send call will fail as well. Up until the point that TCP determines that the connection is gone, write/send calls will not fail; they'll succeed as seen from the application, or block if the socket buffer is full.
In the case where your host B vanishes after it has sent its ACKs, host A will not learn about that until it sends something to B, which will eventually also time out, or result in an ICMP error. (Typically the first write/send call will not fail, as TCP will not fail the connection immediately, and keep in mind that write/send calls do not wait for ACKs before they complete.)
Note also that retransmission does not reduce the window size.
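A small sketch of what that looks like from the application's side, assuming the peer has silently died (the address and port below are placeholders): the early send() calls succeed because the data only has to reach the local socket buffer, and the failure surfaces on a later call once TCP's retransmissions have timed out.

    /* Sketch: writes keep "succeeding" into the socket buffer after the peer dies;
       the error only surfaces once TCP gives up retransmitting (often ETIMEDOUT). */
    #include <stdio.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5000);                       /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);  /* placeholder host B */
        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) return 1;

        /* imagine host B crashes at this point */
        for (;;) {
            if (send(fd, "data", 4, 0) < 0) {              /* fails only after TCP times out */
                perror("send");                            /* e.g. ETIMEDOUT or ECONNRESET */
                break;
            }
            sleep(1);
        }
        close(fd);
        return 0;
    }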
Please follow this link
Now, a very simple answer to your question, in my view, is: the connection will time out and will be closed. Another possibility is that some ICMP error might be generated due to the unresponsive machine.
Also, if the crashed machine comes online again, then the procedure described in the link I just pasted above will be observed.
Depends on the OS implementation. In short, it will wait for ACKs and resend packets until it times out; then your connection will be torn down. To see exactly what happens in Linux, look here; other OSes follow a similar algorithm.
In your case, a FIN will be generated (by the surviving node) and the connection will eventually migrate to the CLOSED state. If you keep grepping netstat output for the destination IP address, you will watch the migration from the ESTABLISHED state to TIME_WAIT and then finally disappear.
In your case, this will happen because TCP keeps a timer to get the ACK for the packet it has sent. This timer is not very long, so detection will happen pretty quickly.
However, if machine B dies after A gets the ACK and after that A doesn't send anything, then the above timer can't detect that event; another timer (called the idle timeout) will detect the condition instead, and the connection will close then. This timeout period is high by default. But normally this is not the case: machine A will try to send something in between and will detect the error condition in the send path.
In short, TCP is smart enough to close the connection by itself (and let the application know about it) except for one case (the idle timeout, which by default is very high).
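For that one remaining case (an idle connection and a dead peer), TCP keepalive is the usual remedy. A sketch of turning it on and tightening the timers; note that the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT option names are Linux-specific, so other platforms may differ:

    /* Sketch: with keepalive enabled, an idle connection to a dead peer is detected
       after roughly idle + intvl * cnt seconds instead of the very long defaults. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int enable_keepalive(int fd) {
        int on = 1, idle = 30, intvl = 5, cnt = 3;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));   /* seconds idle before probing */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));  /* seconds between probes */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));    /* failed probes before giving up */
        return 0;
    }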
In normal cases, each side terminates its end of the connection by sending a special segment with the FIN (finish) bit set.
The device receiving this FIN responds with an acknowledgement of the FIN to indicate that it has been received.
The connection as a whole is not considered terminated until both devices have completed the shutdown procedure by sending a FIN and receiving an acknowledgement.

Why will a TCP Server send a FIN immediately after accepting a connection?

From the Ethereal packet capture, I see the following behaviour, which appears quite strange to me:
Client --> Server [SYN]
Server --> Client [SYN, ACK]
Client --> Server [ACK]
Server --> Client [FIN, ACK]
Client --> Server [ACK]
Client --> Server [TCP Segment of a reassembled PDU] (I don't know what this means)
Server --> Client [RST]
Any ideas as to why this could be happening?
Also, the Server Port is 6000. Could that cause any problem?
My other doubts:
Why is there a FIN, ACK? Shouldn't it be only FIN? What is the meaning of the ACK in that message?
Shouldn't there be a FIN from Client also?
EDIT:
After some more analysis, I found that if the number of file descriptors has exceeded the limit, then a FIN is sent by the server. But in this case it doesn't appear that the file descriptors have exceeded the limit. In what other scenarios can this happen?
Upon deeper analysis, the following was found to be the reason for the problem:
When a client attempts a TCP connect, the connection will complete even if the server is not currently calling accept. This happens because the server has called the 'listen' function, and the kernel will keep completing connections until the backlog limit is reached.
But if the application process has exceeded its limit of open file descriptors, then when the server calls accept, it finds that there is no file descriptor available to allocate for the socket, so the accept call fails and the TCP connection is terminated, sending a FIN to the other side.
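A quick way to convince yourself of the first half of this (the handshake completing with no accept() at all) is a listener that never accepts, sketched below; a client connect() to it still succeeds because the kernel finishes the three-way handshake and queues the connection, up to the backlog limit.

    /* Sketch: listen() without accept(). Clients can still complete the TCP handshake;
       the connections just sit in the backlog (here, up to 5 of them). */
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(6000);             /* same port as in the capture */
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, 5);                           /* handshakes are completed by the kernel from here on */
        pause();                                 /* never call accept(); connect() still succeeds */
        return 0;
    }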
I just thought of posting this finding here. I am still leaving the accepted answer as Habbie's.
Thanks to all those who answered this question.
FIN usually means the other side called shutdown(..) on the socket.
I'm guessing the connection is being accepted by inetd or a similar daemon, which then attempts to fork and exec another program to handle the connection, and that either the fork is failing (due to resource exhaustion) or the exec is failing (due to nonexistent file, permissions error, etc.).
I think the FIN was sent by calling close() instead of shutdown().
The connection is in the backlog queue; after accept(), the server decides to terminate it for whatever reason (e.g. a TCP wrapper ACL, or running out of file descriptors). In this case, a close() decreases the file descriptor's reference count from 1 to 0, so the socket behind this connection is fully destroyed. Afterwards, when the client sends data to what is, from the server's point of view, a non-existent socket, the server has to respond with an RST.
If it was a shutdown(), the server can still receive data sent by the client and has to wait for a FIN from the client to close the connection gracefully. No RST is sent back.
p.s.
close() vs shutdown()
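To make the distinction concrete, here is a rough sketch of the two behaviours described above (illustrative only; why the real server bails out is a separate question):

    /* Sketch: close() destroys the socket, so later client data hits nothing and draws
       an RST; shutdown(SHUT_WR) only sends the FIN, the socket still exists, client
       data can still be read, and no RST goes out. */
    #include <sys/socket.h>
    #include <unistd.h>

    void reject_with_close(int listen_fd) {
        int c = accept(listen_fd, NULL, NULL);
        if (c >= 0)
            close(c);                 /* FIN now; RST later if the client keeps sending */
    }

    void reject_with_shutdown(int listen_fd) {
        int c = accept(listen_fd, NULL, NULL);
        if (c < 0)
            return;
        shutdown(c, SHUT_WR);         /* FIN now; we can still read the client's data */
        char buf[512];
        while (recv(c, buf, sizeof(buf), 0) > 0)
            ;                         /* drain until the client's own FIN arrives */
        close(c);                     /* graceful: both directions are finished */
    }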
Could be TCP wrappers. If the server process was built with libwrap support, it will accept the connection, check /etc/hosts.allow and /etc/hosts.deny, and then immediately close the connection if denied by policy.
It's easy to see if the server is using libwrap:
> ldd /usr/sbin/sshd | grep libwrap
libwrap.so.0 => /lib64/libwrap.so.0 (0x00007f1562d44000)
Seems like the server calls shutdown very shortly after accepting the connection.

Rejecting a TCP connection before it's accepted?

There are 3 different accept versions in Winsock. Aside from the basic accept, which is there for standards compliance, there's also AcceptEx, which seems the most advanced version (due to its overlapped I/O capabilities), and WSAAccept. The latter supports a condition callback, which, as far as I understand, allows the rejection of connection requests before they're accepted (when the SO_CONDITIONAL_ACCEPT option is enabled). None of the other versions support this functionality.
Since I prefer to use AcceptEx with overlapped io, I wonder how come this functionality is only available in the simpler version?
I don't know enough about the inner workings of TCP to tell whether there's actually any difference between rejecting a connection before it has been accepted and disconnecting a socket right after the connection has been established. And if there is, is there any way to mimic the WSAAccept functionality with AcceptEx?
Can someone shed some light over this issue?
When a connection is established, the remote end sends a packet with the SYN flag set. The server answers with a SYN,ACK packet, and after that the remote end sends an ACK packet, which may already contain data.
There are two ways to prevent a TCP connection from forming. The first is resetting the connection - this is the same as the common "connection refused" message seen when connecting to a port nobody is listening on. In this case, the original SYN packet is answered with an RST packet, which terminates the connection immediately and is stateless. If the SYN is resent, an RST will be generated for every received SYN packet.
The second is closing the connection as soon as it has been formed. At the TCP level there is no way to close the connection in both directions immediately - the only thing you can say is "I am not going to send any more data". So, once the initial SYN, SYN,ACK, ACK exchange has finished, the server sends a FIN packet to the remote end. In most cases, telling the other end with a FIN that "I am not going to send any more data" makes the other end close the connection as well and send its own FIN packet. A connection terminated this way isn't any different from a normal connection where no data happened to be sent. This means that the normal state tracking for TCP connections and the lingering close states will persist, just like for normal connections.
Now, on the C API side, this looks a bit different. When calling listen() on a port, the OS starts accepting connections on that port. This means that it starts replying with SYN,ACK packets, regardless of whether the C code has called accept() yet. So, on the TCP side, it makes no difference whether the connection is somehow closed before or after accept. The only additional concern is that a listening socket has a backlog, which is the number of non-accepted connections it can have waiting before it starts sending RST to the remote end.
However, on Windows, the SO_CONDITIONAL_ACCEPT option allows the application to take control of the backlog queue. This means that the server will not answer a SYN packet at all until the application does something with the connection, and rejecting a connection at this level can actually send RST packets to the network without creating any state.
So, if you cannot somehow get the SO_CONDITIONAL_ACCEPT functionality enabled on the socket you are using AcceptEx on, the two approaches will look different on the network. However, not many places actually use the immediate RST functionality, so I would think a requirement for it implies a very specialized system indeed. For most common use cases, accepting the socket and then closing it is the normal way to behave.
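For completeness, a rough sketch of what the conditional-accept path looks like in Winsock (untested here, and the reject-everything policy is just a placeholder): SO_CONDITIONAL_ACCEPT is set before listen(), and the condition callback returns CF_REJECT to have the stack answer that SYN with an RST, or CF_ACCEPT to let the connection proceed.

    /* Sketch: conditional accept with WSAAccept. With SO_CONDITIONAL_ACCEPT enabled,
       returning CF_REJECT from the callback makes the stack send an RST for that SYN. */
    #include <winsock2.h>

    static int CALLBACK cond(LPWSABUF callerId, LPWSABUF callerData, LPQOS sqos, LPQOS gqos,
                             LPWSABUF calleeId, LPWSABUF calleeData, GROUP *g, DWORD_PTR ctx)
    {
        /* Placeholder policy: reject everything; a real server would inspect callerId. */
        return CF_REJECT;             /* or CF_ACCEPT / CF_DEFER */
    }

    void serve(SOCKET listenSock)
    {
        BOOL on = TRUE;               /* set before listen() */
        setsockopt(listenSock, SOL_SOCKET, SO_CONDITIONAL_ACCEPT, (char *)&on, sizeof(on));
        listen(listenSock, SOMAXCONN);

        SOCKET client = WSAAccept(listenSock, NULL, NULL, cond, 0);
        if (client != INVALID_SOCKET)
            closesocket(client);
    }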
I can't comment on the Windows side of things but as far as TCP is concerned, rejecting a connection is a bit different than disconnecting from it.
For one, disconnecting from a connection means that more resources were already "consumed" (e.g. port state maintained in firewalls and endpoints, forwarding capacity used in switches/routers, etc.) in both the network and the hosts. Rejecting a connection is less resource-intensive.
