Is TCP URG (urgent data) acknowledged?

In a TCP segment with the URG flag set, there may be normal data as well. How does the receiving host handle the urgent data? How does it acknowledge the urgent data if it is not part of the data stream? Does it acknowledge the rest of the segment?
I understand that the mechanism is rarely used, but if both hosts implement the same RFC for the URG flag, how do they handle out-of-band data?
If the urgent data is an abort message, the receiver will drop all other data, but the sender will still want an acknowledgement that the message was received.

A bit of background:
The TCP urgent mechanism permits a point in the data stream to be designated as the end of urgent information. Thus we have the Urgent Pointer, which contains a positive offset from the sequence number in the TCP segment. This field is significant only when the URG control bit is set.
Discrepancies about the Urgent Pointer:
RFC 793 (1981, page 17):
The urgent pointer points to the sequence number of the octet
following the urgent data.
RFC 1011 (1987, page 8):
Page 17 is wrong. The urgent pointer points to the last octet of
urgent data (not to the first octet of non-urgent data).
The same thing in RFC 1122 (1989, page 84):
..the urgent pointer points to the sequence number of the LAST octet
(not LAST+1) in a sequence of urgent data.
The more recent RFC 6093 (2011, pages 6-7) clarifies:
Considering that as long as both the TCP sender and the TCP receiver
implement the same semantics for the Urgent Pointer there is no
functional difference in having the Urgent Pointer point to "the
sequence number of the octet following the urgent data" vs. "the last
octet of urgent data", and that all known implementations interpret
the semantics of the Urgent Pointer as pointing to "the sequence
number of the octet following the urgent data".
Thus, the update to RFC 793, RFC 1011, and RFC 1122 is:
the urgent pointer points to the sequence number of the octet
following the urgent data.
This matches virtually all existing TCP implementations.
Note: Linux provides the net.ipv4.tcp_stdurg sysctl to override the default behaviour, but this sysctl only affects the processing of incoming segments. The Urgent Pointer in outgoing segments will still be set as specified in RFC 793.
About the data handling
You can receive urgent data in two ways (keep in mind that the TCP concept of "urgent data" is mapped to the socket API as "out-of-band data"):
using recv with the MSG_OOB flag set. Normally you should first take ownership of the socket with something like fcntl(sock, F_SETOWN, getpid()); and install a signal handler for SIGURG. You will then be notified with the SIGURG signal, and the data will be read separately from the normal data stream.
using recv without the MSG_OOB flag set. Beforehand, you should set the SO_OOBINLINE socket option like this:
int so_oobinline = 1; /* true */
setsockopt(sock, SOL_SOCKET, SO_OOBINLINE, &so_oobinline, sizeof so_oobinline);
The data remains "in-line", and you can locate the urgent mark with the help of ioctl:
int flag; /* True when at mark */
ioctl(sock, SIOCATMARK, &flag);
In any case, it is recommended that new applications not use the urgent-data mechanism at all and, if they do, that they receive the data in-line, as mentioned above.
From RFC 1122:
The TCP urgent mechanism is NOT a mechanism for sending "out-of-band"
data: the so-called "urgent data" should be delivered "in-line" to the
TCP user.
Also from RFC 793:
TCP does not attempt to define what the user specifically does upon
being notified of pending urgent data
So you can handle it however you want; it is an application-level issue.
Accordingly, the answer to your question about acknowledgements when all other data is dropped is: you can implement that in your application.
As for the TCP ACK itself, I found nothing special about it in the case of urgent data.
About the length of "Urgent Data"
Almost all implementations can in fact provide only one byte of "out-of-band" data.
RFC 6093 says:
If successive indications of "urgent data" are received before the
application reads the pending "out-of-band" byte, that pending byte
will be discarded (i.e., overwritten by the new byte of "urgent
data").
So, in practice, TCP urgent mode and its Urgent Pointer cannot mark the boundaries of the urgent data.
Rumor has it that there are some implementations that queue each of the received urgent bytes. Some of them have been known to fail to enforce any limit on the amount of "urgent data" they queue, making them vulnerable to trivial resource-exhaustion attacks.
P.S. All of the above probably covers a little more than was asked, but only to make things clear for people unfamiliar with the issue.
Some more useful links:
TCP Urgent Pointer, buffer management, and the "Send" call
Difference between push and urgent flags in TCP
Understanding the urgent pointer

Related

Can urgent pointer in TCP increase communication speed?

TCP has a field called Urgent Pointer.
From RFC 793, about urgent pointer:
This field communicates the current value of the urgent pointer as a positive offset from the sequence number in this segment. The urgent pointer points to the sequence number of the octet following the urgent data. This field is only be interpreted in segments with the URG control bit set.
Let's say I want to upload a file to a remote server that is handling multiple requests from multiple clients.
Can setting this flag improve the total performance of the transmission: speed, goodput, time?
If the above example is not suitable, in what scenario can urgent pointer improve performance?
The urgent pointer is just a marker that this packet contains information which should be processed urgently by the end application. It does not cause any faster delivery in the network, so it does not improve network performance. The only performance it improves is how fast the application might react to user activity, in that it can process OOB data (like a Control-S, i.e. stop terminal output) before processing all the preceding data (like the terminal output itself).

TCP Urgent Pointer, buffer management, and the "Send" call

My question relates to TCP Segment-creation on the sending device.
My understanding of TCP is that it will buffer "non-urgent" bytes of traffic until either some kind of internal timeout is reached...or the MSS is reached. Then the segment is finished and transmitted onto the wire.
My question: If TCP has been buffering "normal/non-urgent" bytes, and then receives a string of "urgent" bytes from the upper-layer process will it:
Terminate buffering of "non-urgent" bytes, send the non-urgent segment, and start creation of a new TCP segment, beginning with the "urgent" bytes...or...
Continue building the currently-buffered, partial-segment, placing the urgent bytes somewhere in the middle of the segment after the normal bytes.
RFC 1122 (section 4.2.2.4) indicates that the Urgent Pointer points to the LAST BYTE of urgent data in a segment (inferring that non-urgent data could follow the urgent data within the same segment). It does not clarify if a segment must BEGIN with urgent data...or, if the urgent data might be "in the middle".
This question concerns a possible TCP segment with the "urgent" bit set but NOT the "push" bit. My understanding of RFC 793 is that they are mutually exclusive of each other (although typically set together).
Thanks!
My understanding of TCP is that it will buffer "non-urgent" bytes of traffic until either some kind of internal timeout is reached
If the Nagle algorithm is enabled, which it is by default. Otherwise it will just send the data immediately, subject to windowing etc.
...or the MSS is reached. Then the segment is finished and transmitted onto the wire.
Not really. It will transmit as and when it can, subject only to the Nagle algorithm, windowing, congestion control, etc.
My question: If TCP has been buffering "normal/non-urgent" bytes, and then receives a string of "urgent" bytes from the upper-layer process will it:
Terminate buffering of "non-urgent" bytes, send the non-urgent segment, and start creation of a new TCP segment, beginning with the "urgent" bytes...or...
Continue building the currently-buffered, partial-segment, placing the urgent bytes somewhere in the middle of the segment after the normal bytes.
Neither. See above. From the point of view of actually sending, there is nothing 'urgent' about urgent data.
RFC 1122 (section 4.2.2.4) indicates that the Urgent Pointer points to the LAST BYTE of urgent data in a segment (inferring that non-urgent data could follow the urgent data within the same segment).
Correct, except that you mean 'implying'.
It does not clarify if a segment must BEGIN with urgent data...or, if the urgent data might be "in the middle".
It doesn't require that the segment must begin with urgent data, so it needn't.
This question concerns a possible TCP segment with the "urgent" bit set but NOT the "push" bit.
Why? The PUSH bit is basically meaningless to modern TCP implementations, and ignored, and there is no way you can detect segment boundaries, so why do you care?
My understanding of RFC 793 is that they are mutually exclusive of each other (although typically set together).
Why? Please explain.

TCP -- acknowledgement

The 32-bit acknowledgement field, say x, in the TCP header tells the other host: "I received all the bytes up to and including x-1, and now expect the bytes from x on." In this case, the receiver may have received some further bytes, say x+100 through x+180, but it has not yet received the x-th byte.
Is there a case where, although the receiver has not received bytes x through x+99 but has received bytes x+100 through x+180, it acknowledges receipt up through x+180?
One resource I read indicates that bytes received after a gap are acknowledged. However, every other source says that an acknowledgement of x means all bytes up to x-1 have been received.
Are there any exceptional cases? I'm looking to verify this.
TIA.
This can be achieved with the TCP option called SACK (Selective Acknowledgement).
Here, the client can say through a duplicate ACK that it has received data in order only up to a particular segment (say, segment 2), and append a SACK option for the range of contiguous segments received out of order (say, segments 4 to 5). This in turn enables the server to retransmit only the segment that was not received by the client (segment 3).
Provided below is an extract from RFC 2018, TCP Selective Acknowledgment Options:
The SACK option is to be sent by a data receiver to inform the data
sender of non-contiguous blocks of data that have been received and
queued. The data receiver awaits the receipt of data (perhaps by
means of retransmissions) to fill the gaps in sequence space between
received blocks. When missing segments are received, the data
receiver acknowledges the data normally by advancing the left window
edge in the Acknowledgement Number Field of the TCP header. The SACK
option does not change the meaning of the Acknowledgement Number
field.
From the TCP RFC at https://www.rfc-editor.org/rfc/rfc793.txt:
3.3. Sequence Numbers
A fundamental notion in the design is that every octet of data sent
over a TCP connection has a sequence number. Since every octet is
sequenced, each of them can be acknowledged. The acknowledgment
mechanism employed is cumulative so that an acknowledgment of sequence
number X indicates that all octets up to but not including X have been
received.
That seems pretty clear to me: the cumulative acknowledgement stops at the first missing data.

Bits in XMAS Scan

I've seen conflicting data about exactly which flags are set in an xmas packet. nmap and other packet tools use the PSH, URG, and FIN ("PUF") flags. However, I have also seen documentation stating that all flags are set, and that the PUF flags are used by certain implementations but that, by definition, an xmas packet has all flags set.
Even http://en.wikipedia.org/wiki/Christmas_tree_packet is a bit confusing in that it alludes to all flags being set but then goes on to talk about what happens when the SYN flag is omitted, which would not be all flags:
"Some stateless firewalls only check
against security policy those packets
which have the SYN flag set (that is,
packets that initiate connection
according to the standards). Since
Christmas tree scan packets do not
have the SYN flag turned on, they can
pass through these simple systems and
reach the target host."
I know the distinction is a bit meaningless because either way you're essentially sending junk combinations of bits that wouldn't normally be used in a TCP/IP stream. However, I'd like to know whether an xmas packet has all bits set or just the PUF bits (or either, etc.).
'When I use a word,' Humpty Dumpty
said, in rather a scornful tone, 'it
means just what I choose it to mean —
neither more nor less.'
Such is the case with "xmas packets". There is no authoritative definition - it means whatever the person using the term chooses it to mean.

Why does a SYN or FIN bit in a TCP segment consume a byte in the sequence number space?

I am trying to understand the rationale behind such a design. I skimmed through a few RFCs but did not find anything obvious.
It's not particularly subtle - it's so that the SYN and FIN bits themselves can be acknowledged (and therefore re-sent if they're lost).
For example, if the connection is closed without sending any more data, then if the FIN did not consume a sequence number the closing end couldn't tell the difference between an ACK for the FIN, and an ACK for the data that was sent prior to the FIN.
SYNs and FINs require acknowledgement, thus they increment the stream's sequence number by one when used.
