TCP: How to simulate a FIN from both server and client?

I have to test my application in two scenarios:
1. The connection initiator receives a FIN from the server.
2. The connection initiator sends a FIN to the server.
I have a www server as the server and a Windows VM as the client. I did a telnet on port 80 to simulate scenario 1, but I need to simulate scenario 2. Please help me with this.
Thanks in advance

Just close the sockets, or if you must, shut down the socket for output at both ends. That sends a FIN.
But I don't know why you're testing such low-level stuff. You should be testing what happens when you or the peer closes the socket, or shuts it down for output: not the operation of the TCP stack itself.
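For what it's worth, here is a minimal sketch of scenario 2 in Python; the server address is a placeholder you would replace with your own www server's IP:

    import socket

    SERVER_HOST = "192.0.2.10"   # placeholder (TEST-NET); substitute your www server's IP

    s = socket.create_connection((SERVER_HOST, 80))
    s.shutdown(socket.SHUT_WR)   # half-close for output: this sends the FIN
    print(s.recv(4096))          # the connection is still readable after our FIN
    s.close()                    # a plain s.close() alone would also have sent the FIN

For scenario 1, the mirror image applies: the server-side code calls close() or shutdown() first while the client keeps reading.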

Related

What happens to the connections that have been established when a service stops listening on that TCP port?

Can you describe the main flow of TCP connection states?
In fact I'm more concerned about whether connections that have already been established can be closed after the client receives a proper reply from the server. That's part of a graceful shutdown, I think.
The accepted connections are totally independent of the listening socket, so the server can stop listening and the accepted sockets can still be used as if nothing had happened. This means that each accepted socket has its own TCP connection state (diagram).
Often, though, servers stop listening when they are shut down, and they close all sockets at that time.
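A small sketch of that independence, assuming a hypothetical local port 5060: the server closes its listening socket immediately after accept(), and the accepted connection keeps working.

    import socket, threading

    srv = socket.create_server(("127.0.0.1", 5060))  # hypothetical port

    def serve():
        conn, _ = srv.accept()
        srv.close()                        # stop listening; new connects now fail
        with conn:
            conn.sendall(conn.recv(1024))  # the accepted socket still echoes fine

    threading.Thread(target=serve, daemon=True).start()

    with socket.create_connection(("127.0.0.1", 5060)) as client:
        client.sendall(b"hello")
        print(client.recv(1024))           # b'hello' despite the closed listener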

A TCP connection is established between client and server, but no data has been transmitted yet, and then the client goes down because of a network issue. Can the server know that?

The client and server went through the 3-way handshake to create a TCP connection, but they haven't had any data communication, and then the client disconnects because of a network issue. Will the server know about that situation?
How can the server be made aware of it?
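One conventional answer is TCP keepalive. A hedged sketch of enabling it on an accepted server-side socket, assuming Linux socket option names (TCP_KEEPIDLE and friends are Linux-specific): the stack then probes an idle peer, and a vanished client eventually surfaces as an error on a later operation.

    import socket

    def enable_keepalive(conn: socket.socket) -> None:
        # Probe idle peers; a dead client shows up as an error later on.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle secs before first probe
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # secs between probes
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up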

TCP retransmission after a Reset (RST) flag

I have around 20 clients communicating with a central server on the same LAN. The clients can make transactions with the server simultaneously. The server forwards each transaction to an external appliance on the network. Sometimes it works; sometimes my application shows a "time out" message on a client's screen (seemingly at random).
I mirrored all the traffic and found TCP retransmissions after TCP Reset packets for the first TCP sequence. I immediately thought of packet loss, but all my cables/NICs are fine, and I do not see DUP ACKs in the capture.
It seems that RST packets can have different meanings.
What causes those TCP Resets?
Where should I focus my investigation: the network or the application design?
I would appreciate any help. Thanks in advance.
Judging by the capture, I assume your central server is 137.56.64.31. What's happening is that the clients are initiating a connection to the server with a SYN packet and the server responds with a RST. This is typical when the server has no application listening on that particular port, e.g. the webserver application isn't running and a client tries to connect to port 80.
The clients are all connecting to different ports on the server, which is unusual for a central server, but not unheard of. The destination ports the clients are connecting to on the server are: 11007, 11012, 11014, 11108, and 11115. Is that normal for the application? If not, the clients should be connecting to whatever port the application server is listening on.
The reason for the retransmits is that instead of giving up on the connection upon receiving a RST from the server, the client tries to initiate the connection again, so Wireshark flags it as a retransmission.
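You can reproduce that RST-on-SYN behaviour easily. A small sketch, assuming port 9 on localhost has no listener (pick any local port you know is closed):

    import socket

    try:
        # A SYN to a closed port is normally answered with a RST,
        # which Python surfaces as ConnectionRefusedError.
        socket.create_connection(("127.0.0.1", 9), timeout=2)
    except ConnectionRefusedError:
        print("SYN answered with RST: nothing is listening on that port")
    except socket.timeout:
        print("No answer at all: a firewall may be silently dropping the SYN")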

CLOSE_WAIT state in server

On one of our servers, many connections are in CLOSE_WAIT. I understand that it means the other side of the connection has closed, and it is now up to the server to send the FIN, change the state to LAST_ACK, and close the connection.
My questions are:
What if the client sends a RST when the server is in the CLOSE_WAIT state?
After the client has sent the FIN, if the server still wants to send more data, what would the state of the server be in this case?
What if the client sends a RST when the server is in the CLOSE_WAIT state?
The server would still have the socket open, so the state won't change. CLOSE_WAIT means that the local TCP is waiting for the local application to close the socket.
After the client has sent the FIN, if the server still wants to send more data, what would the state of the server be in this case?
The FIN means the client has stopped sending. It doesn't imply that the client can't receive. If the server tries to send, either:
It will succeed, which means the client only did a shutdown for output (the half-close case, as the sketch below demonstrates), or
It will provoke a RST from the client, which means the client closed the socket. The RST probably won't happen on the first send but on a subsequent one, due to TCP buffering.
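A minimal half-close sketch, assuming a hypothetical local port 5070: the client's shutdown for output sends the FIN, yet the server can still send to it afterwards.

    import socket, threading

    srv = socket.create_server(("127.0.0.1", 5070))  # hypothetical port

    def serve():
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1024):       # read until the client's FIN (recv returns b"")
                pass
            conn.sendall(b"still here")  # sending after the FIN succeeds: it was only a half-close

    threading.Thread(target=serve, daemon=True).start()

    with socket.create_connection(("127.0.0.1", 5070)) as client:
        client.sendall(b"request")
        client.shutdown(socket.SHUT_WR)  # sends the FIN: "I have stopped sending"
        print(client.recv(1024))         # b'still here': receiving still works
    srv.close()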

Is keep-alive useful with load balancers and firewalls?

I have a client and a server component. The server may be installed behind a firewall or a load balancer. Many sites/forums suggest using the TCP keep-alive feature to avoid connection termination due to inactivity.
The question is whether the keep-alive messages from the client will actually reach the server.
I tried to simulate the deployment using the tcptrace utility and found that the keep-alive messages did not reach the server, yet the client was still getting ACKs for them.
I am not sure whether all LBs/FWs work in the same manner.
Is keep-alive a good option for avoiding connection termination due to inactivity on a socket when firewalls and load balancers are involved?
The answer is, of course: "it depends".
Many firewalls and load balancers maintain separate frontend and backend TCP connections, e.g.:
client <-- TCP --> firewall/balancer <-- TCP --> server
For situations like this, using TCP keepalive will not work as you'd expect. Why not? TCP keepalive works for that TCP session only, and the keepalive probe packets are more like "administrative overhead" packets than data-bearing packets. This means that (a) using TCP keepalive on the client end only keeps the TCP connection to the firewall/balancer alive, and (b) the firewall/balancer does not "forward" those keepalive probe packets across to the backend connection.
So is using TCP keepalive useful? Yes. There are other types of proxies which work at lower layers in the OSI stack, and which do forward those packets; using TCP keepalive is good for keeping your idle connection alive through those types of network intermediaries.
If your client/server application uses a long-lived, possibly idle TCP connection through firewalls/balancers, the best way to ensure that that connection is not torn down (sometimes politely, e.g. with a RST packet sent by the firewall/balancer, sometimes silently) is to use a "ping" or "heartbeat" message at the application layer. (Think of this as an "application keepalive".) This is just some kind of message that is sent e.g. from the client to the server. A simple and effective technique is to have the client periodically send some bytes to the server, which the server echoes back to the client. The client knows which bytes it sent, and when it receives those same bytes back from the server, it knows that everything in the network path is still working as expected.
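As an illustration, here is a hedged sketch of such an application-level heartbeat, assuming a hypothetical local port 5080 and a one-second demo interval (in production you'd pick something below the middlebox idle timeout, e.g. 30 seconds):

    import socket, threading, time

    srv = socket.create_server(("127.0.0.1", 5080))  # hypothetical port

    def echo_server():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            while data:                  # echo everything back until the peer closes
                conn.sendall(data)
                data = conn.recv(1024)

    threading.Thread(target=echo_server, daemon=True).start()

    with socket.create_connection(("127.0.0.1", 5080)) as client:
        for _ in range(3):               # three heartbeats for the demo; loop forever in practice
            client.sendall(b"ping")
            if client.recv(1024) != b"ping":   # echoed bytes prove the whole path still works
                raise ConnectionError("heartbeat failed")
            time.sleep(1)
    srv.close()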
Hope this helps!
