I asked this previous question here:
TCP two sides trying to connect simultaneously
I tried the method given in the solution. While sending data with netcat and sniffing packets with Ethereal, I observed that when I sent a "hello" string from one side to the other, it was sent in a segment with the PUSH flag set.
Who decides to set the PUSH flag?
What are the rules for setting the PUSH or URGENT flag in a TCP segment?
Is it possible to do it using the sockets API?
The PUSH flag is used by the TCP stack to basically say "I'm done for now". It just indicates that the segment completes the data the source had to transmit at that point.
There is no way to indicate or control it from userland (your sockets API); setting it is left entirely to the TCP stack/implementation.
(Source: Richard Stevens, TCP/IP Illustrated, Vol. 1, Chapter 20)
You can set the URGENT flag using send() with the MSG_OOB flag set. As this implies, the data sent will be out-of-band, meaning it will not be received by the other side simply calling read(). The receiving side will get some indication of OOB data, such as the signal SIGURG, or the socket will become exceptional and can be detected as such using select().
Most of the time you don't want to set the URGENT flag.
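A minimal sketch of that API, assuming sock is an already-connected TCP socket descriptor (the function names here are just for illustration):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Sender side: transmit one byte of out-of-band data.
     * The TCP stack marks the carrying segment with the URG flag. */
    void send_oob(int sock)
    {
        char oob = '!';
        if (send(sock, &oob, 1, MSG_OOB) < 0)
            perror("send MSG_OOB");
    }

    /* Receiver side: a plain read() skips the OOB byte, so fetch it
     * explicitly (typically after SIGURG, or after select() reports
     * an exceptional condition on the socket). */
    void recv_oob(int sock)
    {
        char c;
        if (recv(sock, &c, 1, MSG_OOB) == 1)
            printf("got OOB byte: %c\n", c);
    }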
As far as I understand from Is it possible to handle TCP flags with TCP socket? and what I've read so far, a server application does not handle and cannot access TCP flags at all.
And from what I've read in the RFCs, the PSH flag tells the receiving host's kernel to forward the data from the receive buffer to the application.
I've found this interesting read https://flylib.com/books/en/3.223.1.209/1/ and it mentions that "Today, however, most APIs don't provide a way for the application to tell its TCP to set the PUSH flag. Indeed, many implementors feel the need for the PUSH flag is outdated, and a good TCP implementation can determine when to set the flag by itself."
"Most Berkeley-derived implementations automatically set the PUSH flag if the data in the segment being sent empties the send buffer. This means we normally see the PUSH flag set for each application write, because data is usually sent when it's written."
If my understanding is correct and the TCP stack decides by itself, based on various conditions, when to set the PSH flag, then what can I do if the TCP stack doesn't set the PSH flag when it should?
I have a server application written in Java and clients written in C; there are 1000 clients, each on a separate host, and they all connect to the server. A keep-alive mechanism has the server send each client, every 60 seconds, a request for some info. The response is always smaller than the MTU (1500 bytes), so every response frame should have the PSH flag set.
At some point a client sent 50 replies to a single request, all of them with the PSH flag not set. The buffer probably filled up before the client had even sent the same reply for the 3rd or 4th time, and the receiving application threw an exception because it received more data from the host's receive buffer than it was expecting.
My question is: what can I do in such a situation, if I cannot communicate with the TCP stack at all?
P.S. I know the client should not send more than one reply, but in normal operation all the replies have the PSH flag set, and in this particular situation they didn't, which is not the application's fault.
Consider a TCP connection established between two TCP endpoints, where one of them calls either:
close():
Here, no further read or write is permitted.
shutdown(fd, SHUT_WR):
This converts the full duplex connection to simplex, where the endpoint invoking SHUT_WR can still read.
However, in both cases a FIN packet is sent on the wire to the peer endpoint. So the question is: how can the TCP endpoint that receives the FIN distinguish whether the other endpoint used close() or shutdown(fd, SHUT_WR), since in the latter scenario it should still be able to send data?
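For concreteness, the shutdown(fd, SHUT_WR) case looks roughly like this from the side doing the half-close (a sketch, assuming fd is a connected TCP socket):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Half-close: stop sending, but keep reading until the peer
     * closes its own side of the connection. */
    void half_close_and_drain(int fd)
    {
        shutdown(fd, SHUT_WR);      /* a FIN goes out to the peer */

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* the peer may keep sending data for as long as it likes */
        }
        close(fd);                  /* n == 0: the peer sent its FIN too */
    }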
Basically, the answer is, it doesn't. Or, rather, the only general way to find out is to try to send some data and see if you get an ACK or an RST in response.
Of course, the application protocol might provide some mechanism for one side of the connection to indicate in advance that it no longer wants to receive any more data. But TCP itself doesn't provide any such mechanism.
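A sketch of that probe, assuming fd is the socket in question and that the application protocol tolerates the extra byte (MSG_NOSIGNAL suppresses SIGPIPE on Linux; other platforms may need a SIGPIPE handler or SO_NOSIGPIPE instead):

    #include <errno.h>
    #include <sys/socket.h>

    /* Probe whether the peer's side of the connection is really gone.
     * Note that the first send() after the peer's close() may still
     * succeed; it is the RST the peer answers with that makes a
     * subsequent send() fail with EPIPE. */
    int peer_fully_closed(int fd)
    {
        const char probe = '\n';   /* must be harmless to the protocol */
        if (send(fd, &probe, 1, MSG_NOSIGNAL) < 0 && errno == EPIPE)
            return 1;              /* connection was fully closed */
        return 0;                  /* peer ACKed, or no verdict yet */
    }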
In most descriptions of the TCP PUSH function, it is mentioned that the PUSH feature not only requires the sender to send the data immediately (without waiting for its buffer to fill), but also requires that the data be pushed to the receiving application on the receiver side, without being buffered.
What I don't understand is why TCP would buffer data on the receiving side at all. After all, TCP segments travel in IP datagrams, which are processed in their entirety (i.e., the IP layer delivers only an entire segment to the TCP layer, after doing any necessary reassembly of the fragments of the IP datagram that carried that segment). Why, then, would the receiving TCP layer wait to deliver this data to its application? One case could be that the application is not reading the data at that point in time. But if that is the case, then forcibly pushing the data to the application is not possible anyway. Thus, my question is: why does the PUSH feature need to dictate anything about receiver-side behavior? Given that an application is reading data at the time a segment arrives, that segment should be delivered to the application straight away anyway.
Can anyone please help resolve my doubt?
TCP must buffer received data because it doesn't know when the application is actually going to read it, and because it has already told the sender how much it is willing to receive (the available "window"). All this data gets stored in the "receive window" until such time as it gets read out by the application.
Once the application reads the data, TCP drops it from the receive window and increases the window size it reports back to the sender with the next ACK. If this window did not exist, the sender would have to hold off sending until the receiver told it to go ahead, which it could not do until the application issued a read. That would add a full round-trip delay's worth of latency to every read call, if not more.
Most modern implementations also use this buffer to hold out-of-order segments that have been received, so that the sender can retransmit only the lost ones rather than everything after them as well.
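You can observe (and request changes to) this buffer through the sockets API; a small sketch, assuming sock is a TCP socket (note that some kernels, e.g. Linux, double or clamp the requested value):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Inspect and resize the kernel receive buffer, i.e. the memory
     * backing the window that TCP advertises to the sender. */
    void tune_rcvbuf(int sock)
    {
        int size = 0;
        socklen_t len = sizeof(size);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len);
        printf("receive buffer is %d bytes\n", size);

        int wanted = 256 * 1024;   /* ask for 256 KiB */
        setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));
    }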
The PSH bit is generally not acted upon. Yes, implementations send it, but it typically doesn't change the behavior of the receiving end.
Note that, although the other comments are correct (the PSH bit doesn't impact application behaviour much at all in most implementations), it's still used by TCP to determine ACK behaviour. Specifically, when the PSH bit is set, the receiving TCP will ACK immediately instead of using delayed ACKs. Minor detail ;)
I'm trying to make a simple client/server application in Erlang.
My server initializes a socket with gen_tcp:listen(Port, [list, {active, false}, {keepalive, true}, {nodelay, true}]) and the clients connect with gen_tcp:connect(Server, Port, [list, {active, true}, {keepalive, true}, {nodelay, true}]).
Messages received from the server are tested by guards such as {tcp, _, [115, 58 | Data]}.
The problem is that packets sometimes get concatenated when sent or received, which causes unexpected behavior because the guards treat the next packet as part of the variable.
Is there a way to make sure every packet is sent as a single message to the receiving process?
Plain TCP is a streaming protocol with no concept of packet boundaries (like Alnitak said).
Usually, you send messages either over UDP (which has a limited per-packet size, and packets can be received out of order) or over TCP using a framed protocol.
Framed means you prefix each message with a size header (usually 4 bytes) that indicates how long the message is.
In Erlang, you can add {packet,4} to your socket options to get framed packet behavior on top of TCP.
Assuming both sides (client and server) use {packet,4}, you will only get whole messages.
Note: you won't see the size header; Erlang removes it from the message you see, so your example match at the top should still work just fine.
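If one of the peers isn't Erlang (say, a client written in C), it just has to speak the same framing: a 4-byte big-endian length followed by the message body. A sketch of the sending side, assuming sock is a connected socket (short writes are ignored for brevity; the function name is illustrative):

    #include <arpa/inet.h>   /* htonl */
    #include <stdint.h>
    #include <sys/socket.h>

    /* Send one {packet,4}-style frame: 4-byte big-endian length
     * header, then the message body itself. */
    int send_frame(int sock, const char *body, uint32_t len)
    {
        uint32_t hdr = htonl(len);
        if (send(sock, &hdr, 4, 0) != 4)
            return -1;
        return send(sock, body, len, 0) == (ssize_t)len ? 0 : -1;
    }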
You're probably seeing the effects of Nagle's algorithm, which is designed to increase throughput by coalescing small packets into a single larger packet.
You need the Erlang equivalent of enabling the TCP_NODELAY socket option on the sending socket.
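In C, for comparison, that option is set like this (a sketch):

    #include <netinet/in.h>   /* IPPROTO_TCP */
    #include <netinet/tcp.h>  /* TCP_NODELAY */
    #include <sys/socket.h>

    /* Disable Nagle's algorithm so small writes are sent
     * immediately instead of being coalesced. */
    void disable_nagle(int sock)
    {
        int one = 1;
        setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }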
EDIT: ah, I see you already set that. Hmm. TCP doesn't actually expose packet boundaries to the application layer - by definition it's a stream protocol.
If packet boundaries are important you should consider using UDP instead, or make sure that each packet you send is delimited in some manner. For example, in the TCP version of DNS each message is prefixed by a 2 byte length header, which tells the other end how much data to expect in the next chunk.
You need to implement a delimiter for your packets.
One solution is to use a special character such as ; or something similar.
The other solution is to send the size of the packet first.
PacketSizeInBytes:Body
Then read the indicated number of bytes from the stream. When you reach the end, you've got your whole packet.
Nobody has mentioned that TCP may also split your message into multiple pieces (i.e., one packet can arrive as two messages).
So the second solution is the best of all, though a little harder. The first one is still good, but it limits your ability to send packets containing the special character, even though it is the easiest to implement. Of course, there are workarounds for all of this. I hope it helps.
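A receive-side sketch of that second solution in C, assuming the 4-byte big-endian length header described above and a blocking socket fd:

    #include <arpa/inet.h>   /* ntohl */
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Read exactly n bytes, looping over short reads.
     * Returns 0 on success, -1 on EOF or error. */
    static int read_exact(int fd, void *buf, size_t n)
    {
        char *p = buf;
        while (n > 0) {
            ssize_t r = read(fd, p, n);
            if (r <= 0)
                return -1;
            p += r;
            n -= (size_t)r;
        }
        return 0;
    }

    /* Read one length-prefixed message; the caller frees the result. */
    char *read_frame(int fd, uint32_t *len_out)
    {
        uint32_t hdr;
        if (read_exact(fd, &hdr, 4) < 0)
            return NULL;
        uint32_t len = ntohl(hdr);
        char *body = malloc(len ? len : 1);
        if (body == NULL || read_exact(fd, body, len) < 0) {
            free(body);
            return NULL;
        }
        *len_out = len;
        return body;
    }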
When transferring data over TCP, and given all the incoming and outgoing packets, how does one know that the packet received is the last of the data?
The data is split into smaller TCP packets. I'm transferring over the HTTP protocol.
When the FIN flag is set by one end of the connection, it indicates that that end will not be sending any more data.
If the connection is not being closed after the last of the data, then there must be an application-layer method of determining it. For HTTP, the rules are reasonably complicated.
You might use the PSH flag of the TCP protocol. It should be set to 1 in the last packet.
To confirm this, just start tracing, make an HTTP GET, and filter the session. You will find that the last packet of each response to your HTTP GET is marked with this flag.
I'm assuming you're using some sort of socket library. You can tell a TCP connection is finished because a read() on the socket will return 0. This will happen when the other side closes the connection (which it never has to do).
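A sketch of that check in C: keep reading until read() returns 0, which corresponds to the peer's FIN arriving:

    #include <unistd.h>

    /* Consume the stream until the peer closes its sending side.
     * read() returning 0 is the end-of-stream marker (the FIN). */
    void read_until_closed(int fd)
    {
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* process n bytes of application data here */
        }
        /* n == 0: orderly close by the peer; n < 0: error (see errno) */
    }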