I am writing a small stateful firewall application as a school project.
I am trying to figure out why my webserver (172.16.0.11) does TCP retransmissions of its FIN+ACK packet.
The first retransmission might be because of the retransmission timeout (the firewall application was quite busy at this time).
I am more interested in the retransmissions at the end.
From my point of view, the sequence 1635678 was acknowledged by the client (172.16.0.10) in its FIN+ACK (No. 298).
Why are there retransmissions of the FIN+ACK, even though the sequence was already acknowledged?
Both (client and server) are Debian 8.1 systems, using Iceweasel as browser and Apache as webserver.
Capture at server:
Capture at client:
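As an aside, duplicate FIN+ACKs can also be pulled out of such a capture programmatically. A minimal sketch, assuming scapy is installed and the capture is saved as server_capture.pcap (both hypothetical here):

    from collections import Counter
    from scapy.all import rdpcap, IP, TCP  # assumes scapy is installed

    packets = rdpcap("server_capture.pcap")  # hypothetical capture file

    # Count how often each (source, sequence number) pair shows up on FIN+ACK
    # segments; anything seen more than once was retransmitted.
    fin_acks = Counter()
    for pkt in packets:
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            flags = int(pkt[TCP].flags)
            if flags & 0x01 and flags & 0x10:  # FIN and ACK bits both set
                fin_acks[(pkt[IP].src, pkt[TCP].seq)] += 1

    for (src, seq), count in fin_acks.items():
        if count > 1:
            print(f"{src} retransmitted FIN+ACK with seq {seq} {count - 1} time(s)")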
I think this relates only to the TCP layer, but I describe my setup in the following paragraph:
On Google Compute Engine I set up an HTTP and WebSocket server (Python, geventwebsocket + gevent.WSGIServer). At home I have my device (an ESP8266) that connects to it using WebSockets.
I use WebSockets because I need bidirectional communication (a couple of messages a day; it goes like this: a message from the server, then a response from the client). The connection itself is initiated by the client, as it's behind a NAT.
The problem is that a couple of seconds after the last packet exchange, messages from the server no longer arrive at the client. However, the client can still send packets to the server even minutes later (and possibly much longer). Interestingly, at that point the presumably retransmitted packets from the server finally arrive.
I verified with Wireshark that the packets are indeed sent from the server (and retransmitted if not ACKed), and I log every network communication on the client, so the problem probably isn't the application software. I get no exceptions in the applications. The connections are open.
I tested how long after connection initiation / the last delivered packet the server can still send packets; generally it's between 6 and 20 seconds, varying between tests. In the test, the server sends out packets with a fixed delay between them.
In a test (a couple of packets) with a single fixed delay, usually either all packets arrive or none (that is, if one doesn't arrive, the next won't either).
I suspect this might be because of the NAT. But then the only solution I see would be to periodically (every 6 seconds or less) send out keep-alive packets (Pings and Pongs in WebSocket, or TCP keepalive) from the client. That doesn't seem elegant, as there should be only a few data messages a day.
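For reference, this is roughly what the TCP keepalive option looks like at the socket level. A minimal Python sketch for illustration only (the real client is an ESP8266, and the server address is a placeholder), using the Linux-specific keepalive options:

    import socket

    # Illustrative only: a plain TCP client that asks the kernel to send
    # keepalive probes, so the NAT mapping is refreshed without any
    # application-level traffic.
    sock = socket.create_connection(("203.0.113.10", 8000))  # placeholder server address

    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The following three options are Linux-specific:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 5)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 5)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)    # failed probes before the peer is declared dead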
A similar thing happens when ssh'ing from my desktop to the server: after a couple of seconds of inactivity on both my side and the server side, the server stops sending anything (tested e.g. with watch -n20 date). Sometimes it just freezes and doesn't update until I press a key, i.e. send a packet from the client. But in the ssh case the update is not instant; it takes a couple of seconds after the keypress to see new output. Edit: of course that must be due to the retransmission timer algorithm.
So I studied the purpose of TCP keep-alive packets etc., and the thing is that routers and NATs forget their connections or mappings after some time, or keep only the newest ones. (So I guess in the client->server direction the mappings simply get recreated, since the destination IP is public and is the actual server. In the opposite direction that's not possible, so it doesn't work.)
But I didn't think it could be as bad as 6 seconds. The WebSockets almost reduce to polling (although with a possibly smaller lag).
It seems that the router's NAT mechanism may be causing the problem. Maybe you can use a little tool like NAT-PMP or UPnP to open a port and create a mapping to your local client. This will last long enough for you to do bidirectional communication.
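If you go the UPnP route, here is a minimal sketch of what the port-mapping request could look like, assuming the miniupnpc Python bindings are available (the port numbers and description are made up):

    import miniupnpc  # assumes the miniupnpc Python bindings are installed

    upnp = miniupnpc.UPnP()
    upnp.discoverdelay = 200  # ms to wait for discovery replies
    upnp.discover()           # find UPnP devices on the LAN
    upnp.selectigd()          # pick the Internet Gateway Device

    # Map external TCP port 8765 on the router to port 8765 on this machine,
    # so the server can reach the client directly instead of relying on the
    # NAT remembering the outbound connection.
    upnp.addportmapping(8765, 'TCP', upnp.lanaddr, 8765, 'websocket client', '')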
As you can see above, the TCP connection is released very slowly.
I'm wondering how this happened and whether it affects my program (HTTP layer)?
These are persistent connections as defined by HTTP/1.1. When the client makes several requests to the server, they can share one underlying TCP connection.
In your case a request was performed and the system waits for a while expecting another request. After 30 seconds of inactivity it considers the connection idle and closes it (sends TCP FIN).
About the impact on the system: some resources are consumed for TCP connection handling. This may be an issue on huge servers handling millions of requests, but I don't think that is your case.
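To see the persistent-connection behaviour from the client side, here is a small sketch with the Python requests library (the URL is a placeholder):

    import requests

    # A Session keeps the HTTP/1.1 connection alive, so both requests below
    # normally travel over the same TCP connection; the server will close it
    # (send FIN) after its idle timeout if no further request arrives.
    with requests.Session() as session:
        first = session.get("http://example.com/")       # opens the TCP connection
        second = session.get("http://example.com/page")  # reuses the same connection
        print(first.status_code, second.status_code)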
I have two Tomcats with identical Java apps ('clients') that communicate with a single MQ server.
On one client, the communication is fast, but on the other it takes a lot of time (> 13 sec).
On the infrastructure side, the only difference is that the 'slow client' is on a different domain than the server, while the 'fast client' is on the same domain; as far as I know, that shouldn't cause any trouble.
I've sniffed the communication both ways with Wireshark, and I've found one difference: the MQ server sends an additional "ACK" to the slow client, gets nothing back, waits for 13 seconds and only then sends the "INITIAL DATA" packet that actually starts the work, like this:
Client -> Server: IBM WebSphere SYN
Server -> Client: IBM WebSphere SYN-ACK
Client -> Server: IBM WebSphere ACK
Client -> Server: INITIAL DATA
Server -> Client: IBM WebSphere ACK   // only on slow client!!!
// Waits for 13 seconds - only on slow client.
Server -> Client: INITIAL DATA        // from here, it flows on both clients
Notes:
1) The communication to the server is not over SSL and uses IP addresses rather than DNS names, so name-resolution issues should not occur.
2) I've published a similar question before - this is not the same case! Last time it was a NetBIOS over TCP issue.
The way we overcame the issue is by using SimpleConnectionManager and a long timeout.
It's a workaround, but it works.
I saw a large number of failed connections between two hosts on my intranet (call them client and server).
Using netstat on both machines, I see corresponding port numbers where the server end is in SYN_RECV state and the client is in SYN_SENT.
My interpretation is that the server has responded to the client’s SYN with a SYN,ACK but this packet has been lost. The handshake is disrupted, the socket connection is in an incomplete state, and I see the client time out after 20-45 seconds.
My question is, does TCP offer a way for the server to re-transmit the SYN,ACK after some interval? Is this a good or bad idea?
More system details if relevant: both ends RHEL5, ssh succeeds, ping loses 100%, traceroute succeeds. Client is built on OpenOrb (Java), server is Mico (C++).
SYN and FIN flags are considered part of the sequence space and are transmitted reliably (so, the answer to your immediate question is "yes, it does, by default").
However, I think you really want to dig a bit deeper, because:
If you have a large number of failed connections between hosts on your intranet, this points to a problem in the network - normally you should have few, if any, connections stuck in these states. Retransmissions mean your connection will hiccup for 2, 4, 8, ... seconds (though not necessarily - it depends on the TCP stack; nonetheless, nothing pretty for the users).
I would advise running tcpdump or Wireshark on both hosts to trace where the packet loss happens - and then fix it.
On older hardware, a frequent reason is a duplex mismatch on some pair of devices in the path (incorrectly autodetected, or incorrectly hardcoded). Other reasons may be a problem with a driver, or a bad cable (not bad enough to cause a complete outage, but bad enough to cause periodic blackouts).
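On Linux, the number of SYN-ACK (and SYN) retransmission attempts is exposed through sysctl. A small sketch that just reads the current values (paths assume the usual /proc layout):

    # tcp_synack_retries governs how often the server re-sends SYN,ACK while
    # sitting in SYN_RECV; tcp_syn_retries how often the client re-sends its SYN.
    for name in ("tcp_synack_retries", "tcp_syn_retries"):
        with open(f"/proc/sys/net/ipv4/{name}") as f:
            print(name, "=", f.read().strip())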
I always thought that if you didn't implement a heartbeat, there was no way to know if one side of a TCP connection died unexpectedly. If the process was just killed on one side and didn't exit gracefully, there was no way for the socket to send FIN or let the other side know that it was closed.
(See some of the comments here for example http://www.perlmonks.org/?node_id=566568 )
But there is a stock order server that I connect to that has a new "cancel all orders on disconnect" feature that cancels live orders if the client disconnects. It works even when I kill the process on my end, and there is definitely no heartbeat from my app to it.
So how is it able to detect when I've killed the process? My app is running on Windows Server 2003 and the order server is on Suse Linux Enterprise Server 10. Does Windows detect that the process associated with the socket is no longer alive and send the FIN?
When a process exits - for whatever reason - the OS will close the TCP connections it had open.
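A quick way to convince yourself, as a local demo (all names and the port are made up): the reading side sees recv() return an empty byte string as soon as the other process is killed, because the OS sends the FIN on its behalf.

    import socket
    import subprocess
    import sys
    import time

    if len(sys.argv) > 1 and sys.argv[1] == "child":
        # Child process: connect and then just sit there, never closing the socket.
        conn = socket.create_connection(("127.0.0.1", 9000))
        time.sleep(3600)
        sys.exit()

    # Parent process: accept the child's connection, kill the child, and observe
    # that recv() returns b"" - the OS closed the connection (sent FIN) for the
    # dead process, with no application-level cleanup involved.
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 9000))
    server.listen(1)

    child = subprocess.Popen([sys.executable, __file__, "child"])
    conn, _ = server.accept()

    child.kill()            # simulate the process dying abruptly
    print(conn.recv(4096))  # prints b'' once the kernel's FIN arrives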
There are numerous other ways a TCP connection can go dead undetected:
someone yanks out a network cable in between.
the computer at the other end gets nuked.
a NAT gateway in between silently drops the connection.
the OS at the other end crashes hard.
the FIN packet gets lost.
By enabling TCP keepalive, you'll detect it eventually - although it may take a couple of hours.
It could be using a TCP Keep Alive to check for dead peers:
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
As far as I know, the OS detects process termination and closes all the file descriptors/sockets/handles the process was using. So there is no difference between "killing" the application and "gracefully terminating" it. Of course, the kernel itself must be running (= PC turned on, wire connected...). But it's the OS's job to send the FIN and so on.
Also, if a host becomes unreachable (turned off, disconnected...), an intermediate gateway (or the client itself) may detect the event (e.g. loss of carrier, DHCP lease not renewed...) and reply to packets sent to the dead host with an ICMP error (host/network unreachable). This causes the peer's TCP connection to die, but it happens only if the client has some packet to send to the host.