TCP/IP ACK sender: Transport layer or the app? - tcp

A newbie question: who exactly sends the ACK, the transport layer or the app? I have a COM-server with particle counters that sends data to my app. Sometimes data gets lost. When I check the capture in Wireshark I see that the packets were sent from the COM-server but no ACK came back from the receiver. I think the ACK is missing because my program has a bug and can't process the data properly. My colleague says that the interface (socket) simply gets no data and so can't return an ACK. Who is right?

TCP is a transport layer protocol. The ACK is part of TCP. Thus the ACK is part of the transport layer and is sent there.
Note that there are apps which include the transport layer themselves (i.e. user-space TCP implementations), in which case the ACK is sent by the app, but still in the transport layer rather than the application layer. In most cases, though, TCP is implemented in the kernel and is thus outside the app. See the OSI or TCP/IP model for more information about these layers.
My colleague says that the interface (socket) simply gets no data and so can't return an ACK. Who is right?
Assuming that you are not using a user-space TCP implementation: the OS kernel will ACK the data as soon as the data is put into the socket buffer of your application. It will not ACK the packet if it failed to put it into the socket buffer, i.e. if the socket buffer is full because your application failed to read the data. In this case it will also reduce the advertised window so that the peer will not send any more data.
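A minimal sketch of what that means for the receiving application (assuming a plain blocking BSD socket on a kernel TCP stack; the names here are illustrative only): as long as the application keeps calling recv(), the kernel can keep moving data into the socket buffer and ACKing it; if this loop stalls, the buffer fills and the advertised window shrinks toward zero.

```c
/* Sketch: keep draining the socket buffer so the kernel can keep ACKing.
 * Assumes "sock" is a connected TCP socket; names are illustrative only. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

void drain(int sock)
{
    char buf[4096];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof buf, 0);
        if (n > 0) {
            /* process(buf, n);  -- hypothetical handler. If this is slow or
             * never runs, the socket buffer fills up and the peer sees a
             * shrinking window, not missing ACKs for data that the kernel
             * already delivered to the buffer. */
            continue;
        }
        if (n == 0)         /* orderly close by the peer */
            break;
        perror("recv");     /* n < 0: error, check errno */
        break;
    }
}
```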

Related

Can TCP transmit multiple application layer messages concurrently within the same TCP connection?

I’m reading an article about how TCP breaks an application message into smaller parts (TCP segments) and uses sequence and acknowledgement numbers to coordinate. In the article there is an example of transmitting one application layer message within one TCP connection. So I have a question: can TCP transmit multiple application layer messages concurrently within the same TCP connection?
In other words:
Creating a TCP connection between client and server.
Client breaks message1 from the application layer into TCP segments and starts sending them.
Client breaks message2 from the application layer into TCP segments and starts sending them.
Client breaks message3 from the application layer into TCP segments and starts sending them.
Steps 2, 3 and 4 run in parallel within the same TCP connection.
Server receives segments from message1, message2 and message3 and sends responses in parallel.
Is this possible somehow in TCP? Or can we transmit messages only sequentially within the same TCP connection? I’m interested in the TCP protocol itself; it doesn’t matter what tricks are used in the application layer, like multiplexing.
I’m reading an article about how TCP breaks an application message into smaller parts
TCP does not break application messages into smaller parts, because there is no concept of an "application message" at the TCP level. From the perspective of TCP there is only a byte stream with no inherent semantics. Arbitrary splitting, and also merging, of the byte stream can be done for optimal transport. All that matters is that the bytes are delivered reliably, in order and without duplicates to the peer.
Any application which relies on TCP somehow maintaining message boundaries is broken - and StackOverflow is full of such examples and the problems caused by this.
Can TCP transmit multiple application layer messages concurrently within the same TCP connection?
It is up to the application protocol, not TCP, to implement such behavior. For example, HTTP/1 only has a concept of one message after the other, with clear protocol-defined boundaries where messages start and end in the byte stream. In HTTP/2, messages can overlap, because HTTP/2 implements a kind of multiplexing inside the application protocol. But again, all of this is defined and implemented at the application layer and not at the transport layer (TCP).
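As an illustration only (this is an invented frame format, not part of TCP and not the HTTP/2 wire format): an application can interleave several logical messages over one connection by defining its own frame header, e.g. a stream id plus a length, and writing frames back to back into the byte stream.

```c
/* Sketch of application-level framing over a TCP byte stream.
 * Frame = 4-byte stream id + 4-byte payload length + payload.
 * The format is invented here for illustration, not a standard protocol. */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

static int send_all(int sock, const void *p, size_t len)
{
    const char *buf = p;
    while (len > 0) {
        ssize_t n = send(sock, buf, len, 0);
        if (n < 0)
            return -1;
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}

int send_frame(int sock, uint32_t stream_id, const void *payload, uint32_t len)
{
    uint32_t hdr[2] = { htonl(stream_id), htonl(len) };
    if (send_all(sock, hdr, sizeof hdr) < 0)
        return -1;
    return send_all(sock, payload, len);
}

/* Frames belonging to message1, message2 and message3 can now be interleaved
 * on the same connection; the receiver uses stream_id to reassemble each
 * message. TCP itself only sees one byte stream. */
```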

TCP Server sends [ACK] followed by [PSH,ACK]

I am working on a high-performance TCP server, and I see the server not processing fast enough on and off when I pump high traffic at it using a TCP client. Upon closer inspection, I see spikes in "delta time" on the TCP server, and I see the server sending an ACK and then, 0.8 seconds later, a PSH,ACK for the same seqno. I see this pattern multiple times in the pcap. Can experts comment on why the server sends an ACK followed by a PSH,ACK with a delay in between?
[Screenshot: TCP server pcap]
To simplify what ACK and PSH mean:
ACK will always be present; it simply tells the client the last byte the server has received.
PSH tells the client/server to push the bytes up to the application layer (the bytes form a full message).
The usual scenario you are used to is more or less the following:
The OS has a buffer where it stores received data from the client.
As soon as a packet is received, it is added to the buffer.
The application calls the socket receive method and takes the data out of the buffer
The application writes back data into the socket (response)
The OS sends a packet with the PSH,ACK flags
Now imagine these scenarios:
Step 4 does not happen (the application does not write back any data, or takes too long to write it):
=> the OS acknowledges the reception with just an ACK (a packet with no data in it); if the application later decides to send something, it will be sent with PSH,ACK (see the sketch below).
The message/data sent by the server is too big to fit in one packet:
the first packets will not have the PSH flag and will only have the ACK flag;
the last packet will have the PSH,ACK flags, to signal the end of the message.
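A contrived sketch of the first scenario (illustrative only, assuming a blocking socket and a kernel TCP stack): a server that reads the request, stalls, and only then writes its reply. In a capture you would typically see a bare ACK for the request almost immediately (sent by the kernel) and the PSH,ACK only when the delayed write finally happens.

```c
/* Sketch: slow server that tends to produce "ACK now, PSH,ACK later" on the wire.
 * Assumes "client" is an accepted, connected TCP socket. */
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

void handle(int client)
{
    char req[1024];
    ssize_t n = recv(client, req, sizeof req, 0);  /* the kernel typically ACKs this data on its own */
    if (n <= 0)
        return;

    sleep(1);                                      /* simulate slow processing (the ~0.8 s gap) */

    const char reply[] = "OK\n";
    send(client, reply, sizeof reply - 1, 0);      /* this segment carries the data and PSH,ACK */
}
```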

In decapsulation process in TCP/IP, are IP packets merged back before passing to Application Layer?

As is known, during decapsulation a message passes up through the Data Link, Network, Transport and Application layers. The question is: when decapsulating HTTP request/response data, does the application layer get the message as a whole, or does the transport layer pass data up as it receives it?
There is no such thing as a message in TCP. It is a byte-stream protocol. You get bytes.
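A small sketch of what this means in practice for the receiving application (names are illustrative): recv() hands you whatever bytes are currently available, with no relation to how the sender wrote or how IP packets arrived, so code that needs a fixed amount has to loop.

```c
/* Sketch: read exactly "len" bytes from a TCP socket, however the byte
 * stream happens to be chopped into segments and recv() calls. */
#include <sys/types.h>
#include <sys/socket.h>

int recv_exact(int sock, void *p, size_t len)
{
    char *buf = p;
    while (len > 0) {
        ssize_t n = recv(sock, buf, len, 0);
        if (n <= 0)             /* 0 = peer closed, <0 = error */
            return -1;
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}
```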

When does a Java socket send an ack?

My question is: when does a socket on the receiver side send an ACK? At the time the application reads the socket data, or when the underlying layers get the data and put it in the buffer?
I ask because I want the applications on both sides to know whether the other side took the packet or not.
It's up to the operating system's TCP stack when this happens. Since TCP provides a stream to the application, there's no guaranteed 1:1 correlation between the application doing reads/writes and the packets sent on the wire and the TCP ACKs.
If you need to be assured the other side has received/processed your data, you need to build that into your application protocol - e.g. send a reply stating the data was received.
TCP ACKs are meant to acknowledge TCP packets at the transport layer, not the application layer. Only your application can explicitly signal that it has also processed the data from the buffers.
TCP/IP (and therefore Java sockets) will guarantee that you either successfully send the data OR eventually get an error (an exception in the case of Java).
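A rough sketch of the application-level acknowledgement suggested above (in C for consistency with the other examples; the single 'A' ack byte is an invented convention, not part of TCP): the sender only considers the data "taken" once the peer application has explicitly replied.

```c
/* Sketch: application-level ack. The sender sends a payload and then blocks
 * until the receiving *application* replies with a single 'A' byte.
 * Simplified: a real sender would loop on partial sends. */
#include <sys/types.h>
#include <sys/socket.h>

int send_with_app_ack(int sock, const void *payload, size_t len)
{
    if (send(sock, payload, len, 0) != (ssize_t)len)
        return -1;                   /* could not even hand all the data to the kernel */

    char ack;
    if (recv(sock, &ack, 1, 0) != 1 || ack != 'A')
        return -1;                   /* peer application never confirmed processing */

    return 0;                        /* peer app has read and acknowledged the data */
}
```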

How does TCP/IP report errors?

How does TCP/IP report errors when packet delivery fails permanently? All Socket.write() APIs I've seen simply pass bytes to the underlying TCP/IP output buffer and transfer the data asynchronously. How then is TCP/IP supposed to notify the developer if packet delivery fails permanently (i.e. the destination host is no longer reachable)?
Any protocol that requires the sender to wait for confirmation from the remote end will get an error message. But what happens for protocols where a sender doesn't have to read any bytes from the destination? Does TCP/IP just fail silently? Perhaps Socket.close() will return an error? Does the TCP/IP specification say anything about this?
TCP/IP is a reliable byte stream protocol. All your bytes will get to the receiver or you'll get an error indication.
The error indication will come in the form of a closed socket. Regardless of the communication pattern (who does the sending), if the bytes can't be delivered, the socket will close.
So the question is, how do you see the socket close? If you're never reading, you'd eventually get an error trying to write to the closed socket (with ECONNRESET errno, I think).
If you have a need to sleep or wait for input on another file handle, you might want to do your waiting in a select() call where you include the socket in the list of sources you're waiting on (even if you never expect to receive anything). If the select() indicates that the socket is ready for a read call, you may get a -1 return (with ECONNRESET, I think). An EOF would indicate an orderly close (the other side did a shutdown() or close()).
How to distinguish this error close from a clean close (other program exiting, for example)? The errno values may be enough to distinguish error from orderly close.
If you want an unambiguous indication of a problem, you'll probably need to build some sort of application level protocol above the socket layer. For example, a short "ack" message sent by the receiver back to the sender. Then the violation of that higher level application protocol (sender didn't see an ack) would be a confirmation that it was an error close vs a clean close.
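A sketch of the select()-based waiting described above (error handling trimmed; the exact errno seen on a reset connection can vary): the socket is included in the read set even if no data is ever expected, so a reset or orderly close shows up as a readable socket whose read returns -1 or 0.

```c
/* Sketch: use select() to notice a connection that dies while we are idle. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/types.h>
#include <sys/socket.h>

void wait_and_check(int sock)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);

    if (select(sock + 1, &rfds, NULL, NULL, NULL) > 0 && FD_ISSET(sock, &rfds)) {
        char buf[1];
        ssize_t n = recv(sock, buf, sizeof buf, MSG_PEEK);
        if (n == 0)
            printf("orderly close (EOF)\n");               /* peer did shutdown()/close() */
        else if (n < 0)
            printf("error close: %s\n", strerror(errno));  /* e.g. ECONNRESET */
    }
}
```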
The sockets API has no way of informing the writer exactly how many bytes have been received as acknowledged by the peer. There are no guarantees made by the presence of a successful shutdown or close either.
The TCP/IP specification says nothing about the application interface (which is nearly always the sockets API).
SCTP is an alternative to TCP which attempts to address these shortcomings, among others.
In C, when you write to a socket with send(), you get back the number of bytes that were sent (or -1 on error). If this does not match the number of bytes you meant to send, you have a problem. But also, when you write to a failed socket, you get SIGPIPE. Before you start socket handling, you need to have a signal handler (or SIG_IGN) in place so that SIGPIPE alerts you rather than killing your process.
If you are reading from a socket, you really should wrap it with an alarm so you can time out, like "alarm(timeout_val); recv(); alarm(0)". Check the return code of recv(): if it's 0, the connection has been closed; a negative return indicates a read failure and you need to check errno.
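A brief sketch of the send-side handling described in the two answers above, assuming a plain blocking socket: ignoring SIGPIPE means a write to a dead connection comes back as -1/EPIPE from send() instead of terminating the process, and every return value is checked.

```c
/* Sketch: check every send() result instead of relying on silence.
 * signal(SIGPIPE, SIG_IGN) would normally be done once at program start. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

int careful_send(int sock, const void *buf, size_t len)
{
    signal(SIGPIPE, SIG_IGN);       /* turn SIGPIPE into an EPIPE return from send() */

    ssize_t n = send(sock, buf, len, 0);
    if (n < 0) {
        perror("send");             /* e.g. EPIPE, ECONNRESET */
        return -1;
    }
    if ((size_t)n != len) {
        fprintf(stderr, "short send: %zd of %zu bytes\n", n, len);
        return -1;                  /* caller should retry the remainder or treat as error */
    }
    return 0;
}
```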
TCP is built upon the IP protocol, the centerpiece of the Internet, which provides much of the interoperability that drives routing - the process that determines how to get packets from their source to their destination. The IP protocol specifies that error messages should be sent back to the sender via the Internet Control Message Protocol (ICMP) when a packet fails to reach its destination. Some of the reasons include the Time To Live (TTL) field being decremented to zero, often meaning that the packet got stuck in a routing loop, or the packet being dropped due to switch contention causing buffer overruns. As others have said, it is the responsibility of the socket API being used to relay these IP-layer errors up to the application interacting with the network at the TCP layer.
TCP/IP packets are either raw, UDP, or TCP. TCP requires each byte to be ACKed, and it will retransmit bytes that are not ACKed in time. Raw and UDP are connectionless (aka best effort), so any lost packets (barring some ICMP cases, but many of those get filtered for security) are silently dropped. Upper layer protocols can add reliability, as is done with some raw OSPF packets.
