WebSocket - Close Frame - TCP

The WebSocket RFC states the following:
If an endpoint receives a Close frame and did not previously send a Close frame, the endpoint MUST send a Close frame in response.
Later it also mentions:
The server MUST close the underlying TCP connection immediately;
It makes sense when the server initiates the close handshake, since on receipt of the Close frame response from the client, the server can do a TCP close.
What should happen when the close handshake starts from the client? Should the client send some type of message to instruct the server endpoint to do a TCP close, or should the server do a TCP close on receipt of a Close frame?

Should the client send some type of message to instruct the server endpoint to do a TCP close?
(a) That's exactly what the CLOSE FRAME message is for, and (b) no, the server must close the TCP connection immediately, as per the text you quoted.
or should the server do a TCP close on receipt of a Close frame?
That's exactly what it says. After sending a CLOSE FRAME to the client, of course.

As you noted, the RFC says:
If an endpoint receives a Close frame and did not previously send a Close frame, the endpoint MUST send a Close frame in response.
This applies to both the server and the client. So, when the Close originates from the Client, the following would happen:
Client sends Close frame
Server receives Close frame and echoes it back to the client, since it did not send one itself
Server closes its TCP connection (it must do this immediately after sending the Close frame)
Client closes its TCP connection. It can either do this straight after receiving the echoed Close from the server, or it can wait until the server has closed it at its end (it would know about this due to the connection state changing)
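As a rough illustration of that sequence on the server side, here is a minimal Python sketch (raw sockets, no real WebSocket library; frame parsing and masking are omitted): on receiving a Close frame (opcode 0x8) it echoes one back if it has not already sent one, then drops the TCP connection immediately.

```python
import socket
import struct

def build_close_frame(status_code=1000, reason=b""):
    # FIN=1, opcode=0x8 (Close); server-to-client frames are sent unmasked
    payload = struct.pack("!H", status_code) + reason
    assert len(payload) < 126           # keep the sketch to the short length form
    return bytes([0x88, len(payload)]) + payload

def on_close_frame(conn: socket.socket, close_already_sent: bool):
    # "If an endpoint receives a Close frame and did not previously send a
    # Close frame, the endpoint MUST send a Close frame in response."
    if not close_already_sent:
        conn.sendall(build_close_frame())
    # "The server MUST close the underlying TCP connection immediately"
    conn.shutdown(socket.SHUT_RDWR)
    conn.close()
```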

Related

How is a TCP "Connection" maintained, and how does HTTP Keep-Alive affect it?

I'm an application developer looking to learn more about the transport layer underneath the requests I've been making all these years. I've also been learning more about the backend and am building my own live data service with WebSockets, which has me curious about how data actually moves around.
As such I've learned about TCP, and I understand how it works, but there's still one term that confuses me: a "TCP Connection". I have seen it everywhere, and there was actually a thread opened with the exact same question... but as the OP said in the comments, nobody actually answered it:
TCP vs UDP - What is a TCP connection?
"when we say that there is a connection established between two hosts,
what does that mean? If I could get a magic microscope and inspect the
server or the client, and - a-ha! - find the connection, what would I
be looking at? Some variable allocated by the OS code? Some entry in
some kind of table? How and when does that gets there, and how and
when it is removed from there"
I've been reading to try to figure this out on my own.
Here is a nice resource that details the HTTP flow and also mentions the "TCP Connection":
https://blog.catchpoint.com/2010/09/17/anatomyhttp/
Here is another thread about HTTP Keep-alive, same "TCP Connection":
HTTP Keep Alive and TCP keep alive
My understanding:
When a client wants data from the server, the SYN/ACK handshake happens, this "connection" is established, and both parties agree on the starting sequence number, maximum packet size, etc.
As long as this "connection" is still open, the client can request/receive data without doing another handshake. TCP keep-alive sends a heartbeat to keep this "connection" open.
1) Somehow an HTTP header "Keep-alive" also keeps this TCP "connection" open, even though HTTP headers are part of the packet payload, and it doesn't seem to make sense that the TCP layer would parse the HTTP headers?
To me it seems like a "connection" between two machines in the literal sense can never be closed, because a client is always free to hit a server with packets (like the first SYN packet, for example)
2) Is a TCP "connection" just the client and server saving the sequence number from the other's IP address? Maybe it's just a flag that's saying "hey, this client is cool, accept messages from them without a handshake"? So would closing a connection just be wiping that data out from memory?
... both parties agree on the starting sequence number
No, they don't "agree" on a number. Each direction has its own sequence numbering. The client sends in its SYN the initial sequence number (ISN) for the data from client to server, and the server sends in its SYN the ISN for the data from server to client.
Somehow an HTTP header "Keep-alive" also keeps this TCP "connection" open ...
Not really. With HTTP keep-alive the client just asks the server nicely not to close the connection after the HTTP response has been sent, so that another HTTP request can be sent over the same TCP connection. The server might decide to follow the client's wish or not.
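To see what that reuse looks like in practice, here is a small sketch using Python's standard http.client (example.com is just a placeholder host): two HTTP/1.1 requests go over a single TCP connection, which stays open because HTTP/1.1 connections are persistent unless one side sends Connection: close.

```python
import http.client

# Two requests over one TCP connection (HTTP/1.1 is persistent by default);
# "example.com" is only a placeholder host for the sketch.
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/")
resp1 = conn.getresponse()
resp1.read()          # must drain the body before reusing the connection
conn.request("GET", "/")
resp2 = conn.getresponse()
resp2.read()
conn.close()          # explicit close (FIN) from our side
```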
To me it seems like a "connection" between two machines in the literal sense can never be closed,
Each side can send a packet with a FIN flag to signal that it will no longer send any data. If both sides have sent a FIN, the connection is considered closed, since no one will send anything and thus nothing can be received. If one side decides that it does not want to receive any more data, it can send a packet with a RST flag.
Is a TCP "connection" just the client and server saving the sequence number from the other's IP address?
Kind of. Each side saves the current state of the connection, i.e. the IPs and ports involved, the currently expected sequence number for receiving, the current sequence number for sending, outstanding bytes which were not yet ACKed ... If no such state is there (for example because one side crashed), then there is no connection.
... maybe it's just a flag that's saying "hey this client is cool, accept messages from them without a handshake"
If a packet got received which fits an existing state then it is considered part of the connection, i.e. it will be processed and the state will be updated.
So would closing a connection just be wiping that data out from memory?
Closing is telling the other side that no more data will be sent (using FIN), and once both sides have done it, both can basically remove the state, and then there is no connection anymore.
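To make "the state" a bit more concrete, here is an illustrative Python sketch of the kind of per-connection record (a Transmission Control Block) a TCP implementation keeps; the field names loosely follow RFC 793 terminology, but the structure itself is purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TCB:  # illustrative Transmission Control Block, not a real kernel struct
    local_addr: tuple[str, int]          # (IP, port) on our side
    remote_addr: tuple[str, int]         # (IP, port) on the peer's side
    state: str = "ESTABLISHED"           # LISTEN, SYN_SENT, ..., FIN_WAIT_2, ...
    snd_nxt: int = 0                     # next sequence number we will send
    snd_una: int = 0                     # oldest sent byte not yet ACKed
    rcv_nxt: int = 0                     # next sequence number we expect to receive
    snd_wnd: int = 65535                 # how much the peer will accept right now
    retransmit_queue: list = field(default_factory=list)  # unACKed segments

# "Closing" ultimately means both sides agree no more data will flow,
# after which this record can simply be discarded: no state, no connection.
```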

Should gRPC server-side half-closing implicitly terminate the client?

In the http2-spec, in the scenario where the server has half-closed the stream (the server sent http2.END_STREAM), the client is still allowed to send data (since the stream is only half-closed).
Consider the following gRPC scenario:
Client opens bidi-stream to server and starts sending data
Server closes the response stream and sends the status trailers (translates to sending http2.END_STREAM)
Client continues to send data
Are the semantics well-defined in gRPC?
Possible ways:
Follow the http2-spec: The client is allowed to continue to send data that is processed by the server.
Not follow the http2-spec: The client-connection is implicitly terminated if the server closes the stream.
NOTE: I just tested, and it looks like gRPC for Java follows the "not follow the http2-spec" variant, i.e. if the server closes the downward stream, the upward stream is also closed.
In the semantics of gRPC, when the server sends a status, it means that the entire call is complete. The official servers implementations send RST_STREAM in addition to END_STREAM to shut down the stream in both directions at the HTTP/2 protocol level, in accordance with this part of Section 8.1 of the HTTP/2 RFC:
A server can send a complete response prior to the client sending an entire request if the response does not depend on any portion of the request that has not been sent and received. When this is true, a server MAY request that the client abort transmission of a request without error by sending a RST_STREAM with an error code of NO_ERROR after sending a complete response (i.e., a frame with the END_STREAM flag).
When servers do not do this, the gRPC protocol does not prohibit the client from sending more data after receiving a status, but the server will not process it, so there is no reason to do so. Because of this, when the official gRPC clients receive a status, they consider the call to be complete and stop sending data.
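For illustration, this is roughly what that looks like from the client side of a bidi stream in Python. EchoStub, Chat, and Msg are hypothetical generated names (a real project would import them from its generated _pb2 modules); the point is only that iterating the call ends once the server's status arrives, after which the official clients stop pulling from the request iterator.

```python
import itertools
import grpc
# Hypothetical generated code for a bidi-streaming RPC `Chat`:
# from echo_pb2 import Msg
# from echo_pb2_grpc import EchoStub

def requests():
    # An endless producer; gRPC stops consuming it once the call completes,
    # i.e. once the server has sent its status (END_STREAM, often + RST_STREAM).
    for i in itertools.count():
        yield Msg(text=f"ping {i}")

channel = grpc.insecure_channel("localhost:50051")
stub = EchoStub(channel)
call = stub.Chat(requests())
for reply in call:          # ends when the server sends trailers with a status
    print(reply.text)
print(call.code())          # the grpc.StatusCode the server returned
```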

How HTTP client detect web server crash

From HTTP:The definitive guide :
But without Content-Length, clients cannot distinguish between successful connection close at the end of a message and connection close due to a server crash in the middle of a message.
Let's assume that for this purpose a "server crash" means a crash of the server's HW or OS without closing the TCP connection, or possibly the link being broken.
If the web server crashes without closing TCP connection, how does the client detect that the connection "has been closed"?
From what I know, if a FIN segment is not sent, the client will keep waiting for data unless there is a timer or it tries to send some data (the failure of which reveals the TCP connection shutdown).
How is this done in HTTP?
If the web server crashes without closing TCP connection, how does the client detect that the connection "has been closed"?
Since the closing will be done by the kernel, that would mean that the whole system crashed or that the connection broke somewhere else (router crashed, power blackout on the server side, or similar).
You can only detect this if you sent data to the server and don't get any useful response back.
From what I know, if a FIN segment is not sent, the client will keep waiting for data unless there is a timer or it tries to send some data (the failure of which reveals the TCP connection shutdown).
How is this done in HTTP?
HTTP uses TCP as the underlying protocol, so if TCP detects a connection close, HTTP will too. Additionally, HTTP can in most cases detect whether the response is complete, by using information from the Content-Length header or the equivalent information with chunked transfer encoding. In the few cases where the end of the response is only indicated by a connection close, HTTP can only rely on TCP to detect problems. So much for the theory; in practice most browsers simply ignore an incomplete response and show as much as they got.
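As a rough client-side sketch of that Content-Length check (plain sockets, placeholder host, no chunked encoding or redirects handled): a body shorter than the declared Content-Length, or a read that times out without the peer closing, is treated as a lost server.

```python
import socket

def fetch(host, path="/", timeout=10.0):
    # Send one HTTP/1.1 request and read until close or timeout.
    with socket.create_connection((host, 80), timeout=timeout) as s:
        s.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
        data = b""
        try:
            while chunk := s.recv(4096):
                data += chunk
        except socket.timeout:
            pass  # peer stopped sending without closing: looks like a dead server
    head, _, body = data.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            expected = int(line.split(b":", 1)[1])
            if len(body) < expected:
                raise IOError("incomplete response (connection lost mid-message?)")
    return body
```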

TCP: data on a half-closed connection

I'm implementing a TCP stack, and have encountered an issue with half-closed connections.
My implementation acts as the server side. A client establishes a connection, then sends some data, and then sends a "FIN" message. The server then acknowledges the data from the client, sends some data of its own, and only then closes its half of the connection (sends "FIN").
The problem is that the client does not acknowledge the data sent by the server on the half-closed connection, nor its final "FIN" message. According to netstat, the client is in state FIN_WAIT2. In an identical scenario in which the server does not send any data, things go smoothly.
The client in question is netcat, so I assume the problem is on my end :)
A screen shot is available here.
The actual PCAP file is available here.
My question is, generally, should I expect ACKs for data sent on a half-closed connection; and, particularly, what am I doing wrong in the example above?
Any help would be much appreciated!
Maybe the server should send ACK=2561 instead of 2562 in FIN/ACK?
FIN-WAIT-2 means it saw the ACK, so the sequence number must be correct, but it also means it didn't see the FIN in the same segment. If the FIN counts as 1 byte maybe the LEN should be 1?
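For comparison, this is roughly what a well-behaved client such as netcat does on its side of a half-close (the address below is a placeholder): after shutdown(SHUT_WR) it has sent its FIN but keeps reading, and its TCP stack continues to ACK whatever the server sends on the half-closed connection.

```python
import socket

# Minimal client-side sketch of a half-close: send data, shut down the
# write side (sends FIN), then keep reading server data until the server
# closes its side as well.
with socket.create_connection(("127.0.0.1", 9000)) as s:   # placeholder address
    s.sendall(b"hello from client\n")
    s.shutdown(socket.SHUT_WR)          # our FIN goes out; read side stays open
    while chunk := s.recv(4096):        # server data on the half-closed connection
        print(chunk)
    # recv() returning b"" means the server has now sent its own FIN
```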

Client socket sends but silent server socket doesn't receive

I have a client socket that pushes image data to a server socket after the connection handshake is done, and the server socket processes the data without responding with anything.
It works well for a few minutes, but after some time the server socket stops getting the data, and I couldn't figure out why. Is there any such thing in TCP where, if the client keeps pushing data, the server must say something, otherwise the conversation will stop?
I wrote this code years ago, and to make it work I made the server return a string "ACK" response. However, if I change that to any other string, it will still work.
But now I want to figure out the why, so that I can reconstruct the program.
"One-way" communication with TCP is totally fine unless you need an acknowledgment from the receiver on the sending side. But that's your application-level protocol. At the transport level the packets still flow both ways - TCP keeps sequence numbers in both directions and acknowledges them to the other side. This allows for detecting dropped/duplicate packets and for re-transmission, thus providing reliability of the stream. The window sizes negotiated during connection handshake and updated during the life of the conversation allow TCP to slow down fast sender that would overwhelm a slow receiver.
What you really need to do is to record the TCP connection with a sniffer like tcpdump(1) or wireshark and find out what happens on the wire at the point when "socket stops getting those Data".
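As a rough sketch of how such a stall can show up on the sending side (address and payload size are placeholders): once the receiver stops draining data, the window fills, blocking sends stop completing, and a send timeout turns the silent hang into an observable error.

```python
import socket

# If the receiver stops reading, its TCP window (and our send buffer) fill up
# and sendall() eventually blocks; a socket timeout makes that stall visible
# instead of hanging forever.
s = socket.create_connection(("127.0.0.1", 9000))   # placeholder address
s.settimeout(5.0)                       # applies to blocking send/recv calls
frame = b"\x00" * 64 * 1024             # stand-in for one image payload
try:
    while True:
        s.sendall(frame)
except socket.timeout:
    print("send stalled: receiver is not draining data (window full?)")
finally:
    s.close()
```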
