TCP send queue depth

How do I discover how many bytes have been sent to a TCP socket but have not yet been put on the wire?
Looking at the diagram here:
I would like to know the total of Categories 2, 3, and 4, or the total of 3 and 4. This is in C(++) on both Windows and Linux. Ideally there is an ioctl that I could use, but there doesn't seem to be one.

Under Linux, see the man page for tcp(7).
It appears that you can get the number of unsent bytes in the send queue with the SIOCOUTQ ioctl (SIOCINQ is the receive-side counterpart for unread bytes).
Other stats might be available from the members of the struct tcp_info returned by the TCP_INFO getsockopt() call.
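For example, a minimal Linux-only sketch of both approaches (SIOCOUTQ comes from <linux/sockios.h>; the exact meaning of the returned count is spelled out in tcp(7)):

#include <sys/ioctl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <linux/sockios.h>   // SIOCOUTQ
#include <cstdio>

// Bytes handed to the kernel on this socket that are still in the send queue
// (see tcp(7) for the precise definition). Returns -1 on error. Linux only.
int send_queue_bytes(int sock)
{
    int pending = 0;
    if (ioctl(sock, SIOCOUTQ, &pending) < 0)
        return -1;
    return pending;
}

// Prints a couple of the counters exposed through the TCP_INFO socket option.
void dump_tcp_info(int sock)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);
    if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
        std::printf("rtt: %u us, unacked segments: %u\n",
                    info.tcpi_rtt, info.tcpi_unacked);
}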

Some Unix flavors may have an API for this, but there is no way to do it that is portable across different variants.

If you want to determine whether you can add more data or not: don't worry, send() will block until the data is in the queue. If you don't want it to block, you can pass a flag to send(2):
send(socket, buf, buflen, MSG_DONTWAIT);
But this only works on Linux.
You can also set the socket to non-blocking:
fcntl(socket, F_SETFL, fcntl(socket, F_GETFL, 0) | O_NONBLOCK);
This way write() will return an error (EAGAIN) if the data cannot be written to the stream.
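A hedged sketch of that non-blocking path, assuming sock is an already-connected stream socket; make_nonblocking and try_send are just illustrative helper names:

#include <cerrno>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/socket.h>

// Switch the socket to non-blocking mode, preserving any flags already set.
bool make_nonblocking(int sock)
{
    int flags = fcntl(sock, F_GETFL, 0);
    return flags >= 0 && fcntl(sock, F_SETFL, flags | O_NONBLOCK) == 0;
}

// Try to enqueue buf; returns the number of bytes accepted by the kernel,
// 0 if the send buffer is currently full, or -1 on a real error.
ssize_t try_send(int sock, const void* buf, size_t len)
{
    ssize_t n = send(sock, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;   // kernel send buffer full, retry later (e.g. after poll/select)
    return n;
}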

Related

I don't understand what exactly the function bytesToWrite() does (Qt)

I searched for bytesToWrite in the docs, and this is what I found: "For buffered devices, this function returns the number of bytes waiting to be written. For devices with no buffer, this function returns 0."
First, what does "buffered devices" mean? And can anyone please explain what exactly this function does and where or how I can use it?
Many IO devices are buffered, which means that data isn't sent straight away; instead, it is accumulated and sent in bulk once there is a sufficient amount.
This is done essentially for performance: sending data normally has some fixed overhead (at the very least the syscall overhead), which is amortized well when sending data in bulk but would have to be paid on every write if no buffering were used.
(Notice that here we are only talking about QIODevice buffers; normally there are also all kinds of kernel-mode buffers and buffers internal to the hardware devices themselves.)
bytesToWrite tells you how much stuff is in the QIODevice write buffer, i.e. how many bytes you wrote that are waiting to be actually written (as in, given to the OS to write).
I never actually had to use that member, but I suppose it could be useful, e.g. in a producer-consumer scenario (if the write buffer drops below some threshold, it is time to produce the next chunk of data to send), to handle buffering manually in some places, or just for debugging/logging purposes.
It's actually very useful when you're using an asynchronous API.
You can, for example, use it inside a bytesWritten() slot to tell whether the buffer is empty and the data has been fully written out or not.
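As an illustration, a hedged Qt sketch of that kind of producer-consumer use; QTcpSocket, bytesWritten() and bytesToWrite() are real Qt API, while produceNextChunk() and the 64 KB low-water mark are invented for the example:

#include <QByteArray>
#include <QObject>
#include <QTcpSocket>

QByteArray produceNextChunk();   // hypothetical: supplies the next chunk of application data

// Keeps topping up the socket's internal write buffer whenever it drains
// below a low-water mark, instead of queueing everything at once.
void streamData(QTcpSocket* socket)
{
    const qint64 lowWater = 64 * 1024;

    QObject::connect(socket, &QTcpSocket::bytesWritten,
                     [socket, lowWater](qint64 /*bytes*/) {
        // bytesToWrite() reports what we have written to the QIODevice
        // buffer but has not yet been handed to the OS.
        while (socket->bytesToWrite() < lowWater) {
            QByteArray chunk = produceNextChunk();
            if (chunk.isEmpty())
                return;           // nothing left to send for now
            socket->write(chunk);
        }
    });
}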

How to handle passing different types of serialized messages on a network

I'm currently sitting with the problem of passing messages that may contain different data over a network. I have created a prototype of my game, and now I'm busy implementing the networking for it.
I want to send different types of messages, as I think it would be silly to constantly send all the information every network tick; I would rather send different messages that contain different data. What would be the best way to distinguish which kind of message has been received on the receiving side?
Currently I have a system where I prepend a string that identifies a certain type of message. The message is then passed through my own message-parser class, which determines the type and deserializes it to the correct type.
What I would like to know is whether there is a better way of doing this. It seems like a fairly common problem, so there must be a more trivial solution, unless I'm already doing it the trivial way.
Thanks!
I have read your question again carefully, and now I do not understand what your problem is. You say: "Currently I have a system where I prepend a string that identifies a certain type of message. The message is then passed through my own message-parser class, which determines the type and deserializes it to the correct type."
That looks OK; you may reduce the size of your messages with my answer below the horizontal line, but the principle stays the same.
This is the right way for asynchronous communication. If your communication is synchronous, however, you know that when you send message A you will receive answer B, so you do not have to prepend a string that distinguishes the message; you just have to take care not to send another message before having received the answer to the previous one.
So if you know how the answer is formatted, you do not need any identification bytes: for example, you know that the first four bytes are an integer, then comes a float in eight bytes, and so on.
Use boost::serialization: typically you serialize your structures, even ones containing pointers, into a plain byte buffer, send that buffer over the network, and the other side deserializes it.
This example shows how Boost.Serialization can be used with asio to encode and decode structures for transmission over a socket.
Even though it uses boost::asio, you could easily extract just the serialization part.
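For illustration only, a hedged sketch of that kind of compact framing (not the Boost example itself): a one-byte type tag and a four-byte big-endian length in front of whatever serialized payload you produce. The MsgType values and the frame() helper are invented for the example:

#include <arpa/inet.h>   // htonl
#include <cstdint>
#include <string>
#include <vector>

// Illustrative message types; a real game would define its own set.
enum class MsgType : uint8_t { PlayerMove = 1, Chat = 2, StateSync = 3 };

// Frame layout: [1 byte type][4 bytes big-endian payload length][payload]
std::vector<uint8_t> frame(MsgType type, const std::string& payload)
{
    std::vector<uint8_t> out;
    out.reserve(5 + payload.size());
    out.push_back(static_cast<uint8_t>(type));
    uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
    const uint8_t* p = reinterpret_cast<const uint8_t*>(&len);
    out.insert(out.end(), p, p + sizeof(len));            // length prefix
    out.insert(out.end(), payload.begin(), payload.end()); // serialized body
    return out;
}

The receiver reads the five-byte header first, then reads exactly that many payload bytes and dispatches on the type tag to the right deserializer.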

Handling messages over TCP

I'm trying to send and receive messages over TCP, with the size of each message prepended before the message itself.
Say the first three bytes are the length and the message follows.
As a small example:
005Hello003Hey002Hi
I'll be using this method for large messages, but the buffer size will be a constant integer, say 200 bytes. So there is a chance that a complete message is not received in one go, e.g. instead of 005Hello I get 005He, or even that a complete length is not received, e.g. I only get 2 bytes of the length.
So, to get over this problem, I'll need to wait for the next chunk and append it to the incomplete message, etc.
My question is: am I the only one having these difficulties with appending chunks to each other, assembling lengths and so on to make messages complete, or is this really how individual messages usually need to be handled on TCP? Or is there a better way?
What you're seeing is 100% normal TCP behavior. It is completely expected that you'll loop receiving bytes until you get a "message" (whatever that means in your context). It's part of the work of going from a low-level TCP byte stream to a higher-level concept like "message".
And "usr" is right above. There are higher level abstractions that you may have available. If they're appropriate, use them to avoid reinventing the wheel.
So there is a chance that a complete message is not received in one go, e.g. instead of 005Hello I get 005He, or even that a complete length is not received, e.g. I only get 2 bytes of the length.
Yes. TCP gives you at least one byte per read, that's all.
Or is this really how individual messages usually need to be handled on TCP? Or is there a better way?
Try using higher-level primitives. For example, BinaryReader allows you to read exactly N bytes (it will loop internally). StreamReader lets you forget this peculiarity of TCP as well.
Even better is using higher-level abstractions such as HTTP (the request/response pattern is very common), protobuf as a serialization format, or web services, which automate pretty much all transport-layer concerns.
Don't do TCP if you can avoid it.
So, to get over this problem, I'll need to wait for the next chunk and append it to the incomplete message, etc.
Yep, this is how things are done in socket-level code. For each socket you would like to allocate a buffer at least the same size as the kernel socket receive buffer, so that you can read the entire kernel buffer in one read/recv/recvmsg call. Reading from a socket in a loop may starve other sockets in your application (this is why epoll was changed to be level-triggered by default: the original edge-triggered behaviour forced application writers to read in a loop).
The first incomplete message is always kept at the beginning of the buffer; reading from the socket continues at the next free byte in the buffer, so new data is automatically appended to the incomplete message.
Once reading is done, a higher-level callback is normally called with pointers to all the data read into the buffer. That callback should consume all complete messages in the buffer and return how many bytes it has consumed (possibly 0 if there is only an incomplete message). The buffer-management code should then memmove the remaining unconsumed bytes (if any) to the beginning of the buffer. Alternatively, a ring buffer can be used to avoid moving those unconsumed bytes, but in that case the higher-level code must be able to cope with ring-buffer iterators, which it may not be ready to do. Hence keeping the buffer linear may be the most convenient option.
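A hedged C++ sketch of that buffer handling, using the three-digit ASCII length prefix from the question; the buffer size, the function names and the single static buffer (rather than one per socket) are simplifications for the example:

#include <cstring>
#include <string>
#include <sys/types.h>
#include <sys/socket.h>

// Walks the buffer and handles every complete "NNNpayload" message in it,
// returning the number of bytes consumed (0 if only a partial message is there).
size_t consumeMessages(const char* data, size_t len)
{
    size_t pos = 0;
    while (len - pos >= 3) {
        size_t msgLen = (data[pos] - '0') * 100 + (data[pos + 1] - '0') * 10 + (data[pos + 2] - '0');
        if (len - pos < 3 + msgLen)
            break;                                    // message not complete yet
        std::string msg(data + pos + 3, msgLen);      // hand this off to the application
        pos += 3 + msgLen;
    }
    return pos;
}

// One read step: append new bytes after the partial message, consume what is
// complete, and shift the leftover partial message back to the front.
bool readAndDispatch(int sock)
{
    static char buf[64 * 1024];   // in real code: one buffer per socket
    static size_t used = 0;       // bytes currently held (a partial message)

    ssize_t n = recv(sock, buf + used, sizeof(buf) - used, 0);
    if (n <= 0)
        return false;             // error or the peer closed the connection
    used += static_cast<size_t>(n);

    size_t consumed = consumeMessages(buf, used);
    std::memmove(buf, buf + consumed, used - consumed);
    used -= consumed;
    return true;
}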

LWIP: How exactly does the TCP_INTERVAL relate to the reception of ACK Messages?

I am trying to implement a data transfer from an embedded board to a PC. For this, I need to use low latency communication and I am bound to use Ethernet with TCP/IP.
Furthermore, I'm using the lwip stack.
First of all, I disabled the Nagle algorithm, because I have to send small packets of data (10 KB) and I want them to be sent as soon as possible, without waiting for intermediate ACKs.
The Wireshark log shows me that this is working quite well (all of the data is sent to the PC in about 1 msec).
After that, the PC takes about 200 msec to send the last ACK (because the last segment is not of maximum size).
The problem now is that on the embedded processor it takes a very long time until lwIP tells my application that all of the data has been ACKed.
When I decrease the TCP_INTERVAL (to, say, 5), it speeds up greatly.
I am wondering why lwIP behaves like this. I would have thought that the periodic TCP tasks (which are called according to the TCP_INTERVAL) have nothing to do with the handling of received frames (which is really another call in the main loop).
I hope I have stated my problem understandably; if not, I would appreciate feedback so I can improve my question!
Thanks!
EDIT:
After more debugging, I found out that the process of sending data results in the following function calls:
My main loop calls tcp_write(...).
tcp_tmr() is then called multiple times (through the LwIP_Periodic_Handle() function). This happens seven times; during the eighth call,
tcp_output() is invoked, and all segments that were added during the last tcp_write() call are sent by calling tcp_output_segment().
So now it is clear that if I reduce the TCP_INTERVAL, the data of course gets sent sooner, because tcp_tmr() is called more often.
But my question still stands: is this the normal behaviour? It seems a bit odd that lwIP waits such a long time before actually sending the data.
Since you're doing this: "My main loop calls tcp_write(...)",
call tcp_output() immediately after tcp_write(),
or else call tcp_write() from inside the tcp_recv callback.
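A minimal lwIP sketch of that suggestion, assuming pcb is an already-connected tcp_pcb and ignoring the send-buffer bookkeeping a real application needs:

#include "lwip/tcp.h"

/* Queue `len` bytes and push them out immediately instead of waiting for
   the next tcp_tmr() tick driven by TCP_INTERVAL. */
err_t send_now(struct tcp_pcb *pcb, const void *data, u16_t len)
{
    err_t err = tcp_write(pcb, data, len, TCP_WRITE_FLAG_COPY);
    if (err != ERR_OK)
        return err;          /* e.g. ERR_MEM: send buffer full, retry later */
    return tcp_output(pcb);  /* flush the queued segments onto the wire now */
}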

How do you read without specifying the length of a byte slice beforehand, with the net.TCPConn in golang?

I was trying to read some messages from a TCP connection with a Redis client (a terminal just running redis-cli). However, the Read method in the net package requires me to pass a slice in as an argument. Whenever I give it a slice with no length, the connection crashes and the Go program halts. I am not sure beforehand what length my byte messages are going to be, so unless I specify a slice that is ridiculously large, the connection will always close, which seems wasteful. I was wondering: is it possible to keep the connection open without knowing the length of the message beforehand? I would love a solution to my specific problem, but I feel that this question is more general: why do I need to know the length beforehand? Can't the library just give me a slice of the correct size?
Or what other solutions do people suggest?
Not knowing the message size is precisely the reason you must specify the Read size (this goes for any networking library, not just Go's). TCP is a stream protocol: as far as the TCP protocol is concerned, the message continues until the connection is closed.
If you know you're going to read until EOF, use ioutil.ReadAll.
Calling Read isn't guaranteed to get you everything you're expecting. It may return less, it may return more, depending on how much data you've received. Libraries that do IO typically read and write through a "buffer"; you would have your "read buffer", a pre-allocated slice of bytes (up to 32k is common), and you re-use that slice each time you want to read from the network. This is why IO functions return the number of bytes, so you know how much of the buffer was filled by the last operation. If the buffer was filled, or you're still expecting more data, you just call Read again.
A bit late, but...
One of the questions was how to determine the message size. The answer given by JimB was that TCP is a streaming protocol, so there is no real end.
I believe this answer is incorrect. TCP divides a byte stream up into sequential packets. Each packet has an IP header and a TCP header (see the Wikipedia articles on the IP and TCP headers). The IP header of each packet contains a field for the length of that packet; you would have to do some math to subtract out the header lengths to arrive at the actual data length.
In addition, the maximum segment size can be specified in the TCP header (the MSS option).
Thus you can provide a buffer of sufficient length for your read operation. However, you have to read the packet header information first. You probably should not accept a TCP connection if the maximum message size is larger than you are willing to accept.
Normally the sender would terminate the connection with a FIN packet, not an EOF character.
EOF in the read operation will most likely indicate that a packet was not fully transmitted within the allotted time.
