I have an application on a microcontroller with a small TCP/IP stack. The application waits for a connection on a specific TCP port, and as soon as the TCP connection is established, the microcontroller needs to send 8 KB of data. Because the TX buffer of the TCP socket is only 1 KB, I have to send the data in 8 segments. The TX buffer size can't be changed!
But now I have the problem that after every segment there is a delay of 200 ms. I know this is caused by the delayed ACK (200 ms on Windows). In my use case this means the whole transfer includes 1400 ms of delay, which is just wasted time.
Is there any possibility that I can force the PC to acknowledge the data instantly (maybe a bit in the TCP header)? I can't change anything on the PC side.
Or should I instead send two 512-byte segments instead of one 1 KB segment? Would that work around the issue? I have read that the PC will acknowledge the data as soon as a second segment arrives.
What is the right way to solve such a use case?
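If the microcontroller's stack gives you control over when segments go out, the two-segment idea can be tried directly. A minimal sketch, assuming an lwIP-style raw TCP API (the function name is illustrative; whether the peer ACKs immediately still depends on its delayed-ACK implementation). Nagle must be disabled, otherwise the stack holds the second small segment back until the first one is ACKed:

    /* Sketch: send one 1 KB chunk as two back-to-back segments so the
     * receiver's "ACK at least every second segment" behavior fires
     * immediately instead of waiting out the 200 ms delayed-ACK timer.
     * Assumes lwIP's raw API (tcp_write/tcp_output); adapt the calls
     * to whatever stack the microcontroller actually runs. */
    #include "lwip/tcp.h"

    static err_t send_chunk_as_two_segments(struct tcp_pcb *pcb,
                                            const u8_t *buf, u16_t len)
    {
        u16_t half = len / 2;
        err_t err;

        tcp_nagle_disable(pcb);

        /* First half: queue it and push it onto the wire immediately. */
        err = tcp_write(pcb, buf, half, 0);
        if (err != ERR_OK)
            return err;
        tcp_output(pcb);

        /* Second half as a separate segment: its arrival gives the
         * receiver a second segment to acknowledge right away. */
        err = tcp_write(pcb, buf + half, len - half, 0);
        if (err != ERR_OK)
            return err;
        return tcp_output(pcb);
    }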
Say we have sender A sending a message to receiver B using TCP, and say the message is split into three packets of 500 bytes, 500 bytes, and 50 bytes, sent in that order. How does A indicate to B that the 50-byte packet is the last part of the message?
I understand that an ACK from B to A, sent for every other packet B receives, indicates via its acknowledgment number how much data B has received so far. I have read that FIN is used to terminate the connection between the sender and receiver. However, I can't find a description of how the last packet, of a message split into several packets, is indicated.
I'm thinking the packets have to be reassembled, in order, before the message is passed to the receiving application. I think that as one of TCP's actions is to split the message into packets, there must be some way for the sender to flag that the last packet of a message has been sent.
I think that as one of TCP's actions is to split the message into packets
No, TCP takes a stream of data and segments it into PDUs called segments. It is IP that uses the TCP segments as the payload of IP packets, which are in turn the payload of data-link protocol frames, e.g. Ethernet frames.
However, I can't find a description of how the last packet, of a message split into several packets, is indicated.
Something like that is up to a higher protocol, e.g. HTTP. I think you are looking at TCP the wrong way. A TCP connection is like a bidirectional pipe; whatever you put in one end comes out the other end. TCP has no idea of the data structure, it just sends whatever it gets from the application or application-layer protocol. When an application or application-layer protocol is through using the connection, it tells TCP to tear it down.
The receiving TCP simply receives data and reorders it, asking for lost or missing segments. It passes properly ordered data up to the application or application-layer protocol, having no idea of the data structure because it is just a data stream to TCP.
Also, remember that both ends of a TCP connection are peers that can send and receive, and either end can send a segment with FIN that tells the other end that it is done sending, but the end sending the FIN is obligated to continue to receive until the other end also sends a FIN to say it is done sending. Either side could also kill the connection with a RST segment.
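In the BSD sockets API, that half-close maps directly to shutdown() with SHUT_WR; a small sketch of the pattern:

    /* Sketch: send our FIN with shutdown(SHUT_WR), then keep receiving
     * until the peer sends its own FIN (recv() returns 0) -- the
     * "done sending but still obligated to receive" behavior above. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <stdio.h>

    static void half_close_and_drain(int fd)
    {
        char buf[4096];
        ssize_t n;

        shutdown(fd, SHUT_WR);            /* our FIN: "I'm done sending" */

        /* The connection is still open for reading; drain until the
         * peer finishes too. recv() returning 0 means its FIN arrived. */
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
    }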
there must be some way for the sender to flag that the last packet of a message has been sent.
Probably, but that is not the job of TCP; that is up to the application or application-layer protocol. When the application layer is done, it tells TCP to close, and that starts the FIN process. TCP has no idea what the last part of a message is because it knows nothing about the data. It keeps the pipe open until it is told to close it.
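To illustrate what such an application-layer convention can look like, here is a minimal length-prefix framing sketch over BSD sockets (the function names are illustrative, not from any library): the sender writes a 4-byte length before each message, so the receiver knows exactly where each message ends without any help from TCP.

    /* Sketch: mark message boundaries at the application layer with a
     * 4-byte big-endian length prefix. TCP just carries the bytes. */
    #include <stdint.h>
    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>   /* htonl */
    #include <errno.h>

    /* Send all bytes, retrying on short writes (send() may take less). */
    static int send_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = send(fd, p, len, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Frame one message: length header first, then the payload. The
     * receiver reads 4 bytes, then exactly that many payload bytes. */
    static int send_message(int fd, const void *msg, uint32_t len)
    {
        uint32_t hdr = htonl(len);
        if (send_all(fd, &hdr, sizeof hdr) < 0)
            return -1;
        return send_all(fd, msg, len);
    }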
I am using raw sockets to communicate with a TCP server. For the purposes of my project, I need to emulate a TCP timeout.
Whenever a timeout occurs, the server retransmits the first lost packet. On receiving the ACK for this packet, the server retransmits the second packet and also sends a previously unsent packet (due to the F-RTO algorithm). In order to stop F-RTO, I need to send a duplicate ACK for the latter packet.
Let's say the congestion window is 20 at the time of the timeout. The server will send packet 1 and I will ACK packet 1. The server will then send packet 2 and packet 21. I will ACK packet 2 and send a duplicate ACK for packet 21 to stop F-RTO. The problem I am having is that although the client sends 2 ACKs, for some unknown reason the server only gets one ACK. As a result, it gets stuck in F-RTO.
Wireshark shows the client sending multiple duplicate ACKs, but on the server side I can only see a single ACK. Since the second ACK is a duplicate of the first, their fields and checksums are the same. Can someone please help me out?
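For reference, a sketch of how the replay might look on the client side; build_ack() is a hypothetical helper standing in for whatever code already builds the IP/TCP headers and checksum in this project:

    /* Sketch: replay the duplicate ACK as two separate raw datagrams.
     * build_ack() is hypothetical, not a real API. */
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define ACK_LEN 40  /* 20-byte IP + 20-byte TCP header, no payload */

    /* Hypothetical: fills pkt with a valid IP+TCP ACK for the peer. */
    extern int build_ack(unsigned char pkt[ACK_LEN],
                         const struct sockaddr_in *peer,
                         uint32_t seq, uint32_t ack);

    static int send_dup_ack(int raw_fd, const struct sockaddr_in *peer,
                            uint32_t seq, uint32_t ack)
    {
        unsigned char pkt[ACK_LEN];
        if (build_ack(pkt, peer, seq, ack) < 0)
            return -1;

        /* Two distinct sendto() calls with identical contents: the
         * second is the duplicate ACK meant to cancel F-RTO. If only
         * one shows up at the server, something between the hosts is
         * dropping or coalescing it; capturing on both ends shows
         * where the second ACK vanishes. */
        for (int i = 0; i < 2; i++) {
            if (sendto(raw_fd, pkt, sizeof pkt, 0,
                       (const struct sockaddr *)peer, sizeof *peer) < 0)
                return -1;
        }
        return 0;
    }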
The TCP RFC says that the receiver should send an ACK for every 2 full-sized segments it receives (assuming they are in order) and should not excessively delay an ACK.
Considering the window size is 8 segments and the sender sends 8 full segments, does this mean the receiver sends 4 ACKs even though it has received 8 segments?
Can it not acknowledge all 8 segments with one ACK?
I'll just copy-paste the important part of the RFC right here:
4.2.3.2 When to Send an ACK Segment
A host that is receiving a stream of TCP data segments can
increase efficiency in both the Internet and the hosts by
sending fewer than one ACK (acknowledgment) segment per data
segment received; this is known as a "delayed ACK" [TCP:5].
A TCP SHOULD implement a delayed ACK, but an ACK should not
be excessively delayed; in particular, the delay MUST be
less than 0.5 seconds, and in a stream of full-sized
segments there SHOULD be an ACK for at least every second
segment.
The full RFC can be found here: RFC 1122
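To make the arithmetic concrete, here is a small sketch (the MSS value is an assumption) printing the cumulative ACK generated for every second of eight in-order, full-sized segments. Four ACKs go out, and because ACKs are cumulative, the fourth one acknowledges all eight segments, so nothing is left unacknowledged:

    /* Sketch: cumulative ACKs under the "ACK at least every second
     * full-sized segment" rule, for 8 in-order segments. The MSS is
     * assumed; ACK numbers are relative to the initial sequence number. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned mss = 1460;   /* assumed maximum segment size */
        unsigned next_ack = 1;       /* relative ACK number (ISN + 1) */

        for (unsigned seg = 1; seg <= 8; seg++) {
            next_ack += mss;
            if (seg % 2 == 0)        /* ACK fires on every second segment */
                printf("after segment %u -> ACK %u\n", seg, next_ack);
        }
        /* Four ACKs in total; each is cumulative, so the last one
         * (ACK 11681) covers all eight segments. */
        return 0;
    }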
I am looking at an application that transmits precisely 4380 bytes over HTTP, including headers, before the connection stalls. 4380 happens to be exactly three times the maximum segment size of a 1500-byte Ethernet frame (1500 bytes minus 40 bytes of TCP/IP headers, times three).
The connection appears to remain open; it's just that no data is being received.
Why would it stall after precisely three Ethernet frames? Where should I start looking for a reason?