Flow control implementation - how - TCP

I'm sending 1 KB blocks of data using TCP/IP (FreeRTOS + lwIP). From the documentation I understood that the TCP/IP protocol has flow control built into the stack itself, but that this flow control depends on the network buffers. I'm not sure how this should be handled in my scenario, described below.
Receive data in 1 KB blocks over TCP/IP from Wi-Fi (the data rate will be around 20 Mb/s)
The received Wi-Fi data is put into a queue of 10 blocks, each block 1 KB in size (10 KB total)
Each block is taken from the queue and sent to another interface at a lower rate of 1 Mb/s
So in this scenario, do I have to implement flow control manually between the Wi-Fi data and the queue? How can I achieve this?

No, you do not have to implement flow control yourself; the TCP algorithm takes care of it internally.
Basically what happens is that when a TCP segment is received from your sender, lwIP will send back an ACK that includes the available space remaining in its buffers (the window size). Since the data is arriving faster than you can process it, the stack will eventually send back an ACK with a window size of zero. This tells the sender's stack to back off and try again later, which it will do automatically. When you get around to extracting more data from the network buffers, the stack re-ACKs the last segment it received, only this time it opens up the window to say that it can receive more data.
What you want to avoid is something called silly window syndrome, because it can have a drastic effect on your network utilisation and performance. Try to read data off the network in big chunks if you can, and avoid tight loops that fill a buffer one byte at a time.
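To make this concrete, here is a minimal sketch of the receive task under the assumptions above: lwIP's socket API plus a FreeRTOS queue of 10 x 1 KB blocks. The names (vWifiRxTask, xBlockQueue, wifi_sock) are made up for illustration; the point is that blocking on the full queue is what makes the TCP window shrink and the remote sender slow down.

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"
    #include "lwip/sockets.h"

    #define BLOCK_SIZE   1024
    #define QUEUE_DEPTH  10

    /* Created at start-up, e.g.:
     *   xBlockQueue = xQueueCreate(QUEUE_DEPTH, BLOCK_SIZE); */
    static QueueHandle_t xBlockQueue;

    static void vWifiRxTask(void *pvParameters)
    {
        int wifi_sock = (int)(intptr_t)pvParameters;   /* connected TCP socket */
        uint8_t block[BLOCK_SIZE];                     /* lives on the task stack;
                                                          size the stack accordingly */
        for (;;) {
            /* Fill one 1 KB block; lwip_recv() may return fewer bytes, so loop. */
            int filled = 0;
            while (filled < BLOCK_SIZE) {
                int n = lwip_recv(wifi_sock, block + filled, BLOCK_SIZE - filled, 0);
                if (n <= 0) {
                    vTaskDelete(NULL);                 /* peer closed or error */
                }
                filled += n;
            }

            /* If all 10 slots are taken, this call blocks. While we are blocked
             * we stop calling lwip_recv(), lwIP's receive buffer fills up, the
             * advertised TCP window drops to zero and the sender backs off -
             * the flow control described above, with no extra code. */
            xQueueSend(xBlockQueue, block, portMAX_DELAY);
        }
    }

The consumer task on the slow interface just does xQueueReceive(xBlockQueue, block, portMAX_DELAY) at its own 1 Mb/s pace; the back-pressure propagates from the queue, to lwIP, to the remote sender.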

Related

QTcpSocket High frequency writes slows transfer rate

I'm working on a streaming app. I get a frame, encode it to H.264 and then send it via TCP to the client; the client receives the encoded frame, decodes it and displays it. What I found is that when calling the write method several times at small intervals, the transfer rate is considerably affected.
The interval is between 12 ms and 17 ms, which is the time it takes to grab a frame and encode it. On the client I am measuring the time between reading one frame and the next. Using the 12/17 ms interval, the time it takes a frame to arrive at the client is ~400 ms. However, if I add a sleep between the writes, stretching the interval from 12/17 ms to 150 ms, the time on the client drops to ~150 ms.
So I tried sending one frame, and once the client received it, it sent an acknowledgment and only then did the server grab the next frame. Using this method the latency was the same, ~150 ms.
I'm splitting the data into chunks of a specified size (512 bytes at the moment); the client receives the chunks and reassembles them, and using SHA-256 I confirmed that the data arrives intact. The frame size varies (VBR) from 1200 bytes to 65 KB. So my conclusion is that if you stress the socket with a lot of writes, the transfer rate gets affected. Is this right, or might I be doing something wrong?
And as an aside, 150 ms is about 6 fps. How do VNC and other streaming apps do it? Do they buffer some frames and then play them, so there is some latency but the "experience" is of a higher frame rate?
Thanks
The TCP/IP protocol stack is free to optimize its behavior to trade off latency vs. bandwidth. You have not demonstrated that you lack bandwidth, only that you get notified of the arriving data less often than you wish - and there is no technical justification for that wish. There's nothing in the protocol itself that guarantees that the data will arrive at the receiving end in chunks of any particular size.
Thus, suppose that you send each frame in a single write on the QIODevice. The receiving end can receive this data in one or more chunks, and it may receive this data after an arbitrary delay. And that's precisely what you're seeing.
The type of data you're sending has no bearing on performance: you should be able to create a 10-line test case for the sender, and a similarly sized test case for the receiver, and confirm that you do indeed receive all the data you sent, only it comes in larger chunks than you expected. And that's fine and normal. To maintain streaming, you should pre-buffer a "sufficient" amount of data at the receiving end.
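As a concrete sketch of that last point, written against plain BSD sockets rather than QTcpSocket (the 4-byte length prefix, the read_frame() helper and the PREBUFFER_FRAMES constant are assumptions, not anything from the question): the receiver reassembles frames from whatever chunk sizes TCP happens to deliver, and only reports playback as ready once a handful of frames have arrived.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>

    #define PREBUFFER_FRAMES 5     /* frames to collect before starting playback */

    /* Read exactly len bytes, however TCP chose to segment them on the wire. */
    static int read_exact(int fd, void *buf, size_t len)
    {
        uint8_t *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0) return -1;          /* peer closed or error */
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Returns a malloc'ed frame and its size (4-byte big-endian length prefix). */
    static uint8_t *read_frame(int fd, uint32_t *out_size)
    {
        uint32_t size_be;
        if (read_exact(fd, &size_be, sizeof size_be) != 0) return NULL;
        *out_size = ntohl(size_be);

        uint8_t *frame = malloc(*out_size);
        if (!frame || read_exact(fd, frame, *out_size) != 0) {
            free(frame);
            return NULL;
        }
        return frame;
    }

    static void handle_frame(uint8_t *frame, uint32_t size)
    {
        /* Placeholder for "decode and display": just report the size here. */
        printf("received complete frame: %u bytes\n", size);
        free(frame);
    }

    void receive_loop(int fd)
    {
        uint32_t buffered = 0;
        for (;;) {
            uint32_t size;
            uint8_t *frame = read_frame(fd, &size);
            if (!frame)
                break;                      /* connection closed or error */

            if (++buffered == PREBUFFER_FRAMES)
                printf("pre-buffer full, playback can start\n");

            handle_frame(frame, size);
        }
    }

With QTcpSocket the same idea applies: accumulate whatever readyRead()/readAll() delivers into a buffer, and only peel off complete, length-prefixed frames from it.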

Idea to handle lost packets (theory)

Context
We have an unsteady transmission channel. Some packets may be lost.
Sending a single network packet in any direction (from A to B or from B to A) takes 3 seconds.
We allow a signal delay of 5 seconds, no more. So we have a 5-second buffer. We can use those 5 seconds however we want.
Currently we use only 80% of the transmission channel, so we have 1/4 more room to utilize.
The quality of the video cannot be worsened.
We need to use UDP.
Problem
We need to make the quality better. How do we handle lost packets? We need to use UDP and handle those errors ourselves. How do we do it, then? How do we make sure that fewer packets are lost than currently (we can't guarantee 100%, so we just want it to be better), without retransmitting them? We can do anything; this is theory.
There are different ways to handle these things. It depends on what application you are using. Are you doing real-time video streaming? Do you have stringent requirements?
As you said you have a buffer, you can actually maintain a buffer for the packets and then send an acknowledgement for the lost packets (if you feel you can wait).
As this is a video application, send acknowledgements only for the key frames. Make sure that you have a key (I) frame and then do interpolation at the receive side.
Look into something called forward error correction: fountain codes, Luby (LT) codes. Here, you encode packets 1 and 2 and produce packet 3. If packet 1 is lost, use packet 3 and packet 2 to get packet 1 back at the receive side, as in the sketch below. Basically you send redundant packets. It's a little harsh on the network, but you get most of the data.
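Here is a toy illustration of that XOR-parity idea in C. The fixed packet length and sample payloads are made up; real FEC schemes (Reed-Solomon, LT/Raptor codes) generalise this to many packets, but the single-loss case is just an XOR.

    #include <stdint.h>
    #include <stdio.h>

    #define PKT_LEN 8   /* all packets padded to the same length */

    static void xor_into(uint8_t *dst, const uint8_t *a, const uint8_t *b)
    {
        for (size_t i = 0; i < PKT_LEN; i++)
            dst[i] = a[i] ^ b[i];
    }

    int main(void)
    {
        uint8_t pkt1[PKT_LEN] = "video-1";
        uint8_t pkt2[PKT_LEN] = "video-2";
        uint8_t parity[PKT_LEN];
        uint8_t recovered[PKT_LEN];

        /* Sender: transmit pkt1, pkt2 and the redundant parity packet. */
        xor_into(parity, pkt1, pkt2);

        /* Receiver: pkt1 was lost, but pkt2 and parity arrived. */
        xor_into(recovered, parity, pkt2);   /* (pkt1 ^ pkt2) ^ pkt2 == pkt1 */

        printf("recovered: %s\n", recovered);
        return 0;
    }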

Dealing with network packet loss in realtime games - TCP and UDP

Reading lots on this for my first network game, I understand the core difference of guaranteed delivery versus time-to-deliver for TCP vs. UDP. I've also read diametrically opposed views on whether realtime games should use UDP or TCP! ;)
What no-one has covered well is how to handle the issue of a dropped packet.
TCP: I read an article about an FPS that recommended only using TCP. How would an authoritative server taking client input over TCP handle a packet drop and the sudden epic spike in lag? Does the game just stop for a moment and then pick up where it left off? Is TCP packet loss so rare that it's not really that much of an issue, and does an FPS over TCP actually work well?
UDP: Another article suggested only ever using UDP. Clearly one-shot UDP events like "grenade thrown" aren't reliable enough, as they won't arrive some of the time. Do you have to implement a message-received/resend protocol manually? Or some other solution?
My game is a tick-based authoritative server with 1/10th second updates from the server to clients and local simulation to keep things seeming more responsive, although the question is applicable to a lot more applications.
I did a real-time TV editing system. All real-time communication was via UDP, while non-real-time traffic used TCP, as it is simpler. With the UDP we would send a state packet every frame, e.g. "start video in 100 frames", then 99, 98, ... 3, 2, 1, 0, -1, -2, -3. So even if no message got through until -3, the receiver would start on the 4th frame (just skipping the first 3), hoping that no one would notice, and knowing that this was better than lagging from here on in. We even started the countdown about ¼ second early (as no one will notice); this way hardly any frames were dropped.
So in summary, we sent the same status packet every frame. It contained all real-time data about past, current, and future events.
The trick is keeping this data set small. So instead of sending a "play button pressed" event (there is an unbounded number of these), we send the video-id, frame-number, start-mask and stop-mask (the start/stop masks are frame numbers; if start-mask is positive and stop-mask is negative, then show the video, at frame frame-number).
Now we need to be able to start a video during another, or shortly after it stops. So we consider how many videos can be playing at the same time; we need a slot for each, but can we reuse a slot immediately? If we have pressed stop (so we do not know the stop-mask until then) and then reuse the slot, will the video stop? Well, there will no longer be a slot for this video, so we should stop it. So yes, we can reuse the slot immediately, as long as we use unique IDs.
Other tips: do not send "+1" events; instead send the current total. If two players have to update the same total, then each should keep their own total; sum all the totals at the point of use, but never edit someone else's total. A rough sketch of such a state packet follows.
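For illustration only, here is a guess at what such a per-frame state packet might look like in C. The field names, slot count and player count are invented; the real system surely differed.

    #include <stdint.h>

    #define MAX_SLOTS   4   /* how many videos can overlap at once */
    #define MAX_PLAYERS 8

    struct video_slot {
        uint32_t video_id;      /* unique id, so a slot can be reused immediately */
        int32_t  frame_number;  /* current frame of this video */
        int32_t  start_mask;    /* show the video while start_mask is positive ... */
        int32_t  stop_mask;     /* ... and stop_mask is negative */
    };

    struct state_packet {
        int32_t  countdown;                 /* e.g. 100 .. 1, 0, -1, -2, -3: start at 0 */
        struct video_slot slots[MAX_SLOTS];
        uint32_t totals[MAX_PLAYERS];       /* each sender owns one total;
                                               receivers sum them at the point of use */
    };

    /* Sent over UDP once per frame, e.g.:
     *   struct state_packet pkt = current_state();
     *   sendto(udp_sock, &pkt, sizeof pkt, 0,
     *          (struct sockaddr *)&peer, sizeof peer);
     */

Because the whole (small) state is resent every frame, a lost packet is simply superseded by the next one; nothing ever has to be retransmitted.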

Congestion Control Algorithm at Receiver

Assume we are talking about a situation where many senders send packets to one receiver.
Often the senders are the ones that control congestion, by using a sliding window that limits the sending rate.
We have:
snd_cwnd = min(cwnd,rwnd)
Using explicit or implicit feedback from the network (routers, switches), the sender adjusts cwnd to control its sending rate.
Normally, rwnd is big enough that the sender only cares about cwnd. But if we take rwnd into account, using it to limit snd_cwnd, congestion control could be made more efficient.
rwnd is the number of packets (or bytes) that the receiver is able to receive. What I'm concerned about is the capability of the senders.
Questions:
1. How does the receiver know how many flows are sending packets to it?
2. Is there any way for the receiver to know the sender's snd_cwnd?
This is all very confused.
The number of flows into a receiver isn't relevant to the rwnd of any specific flow. The rwnd is simply the amount of space left in the receive buffer for that flow.
The receiver has no need to know the sender's cwnd. That's the sender's problem.
Your statement that 'normally rwnd is always big enough that sender only cares about cwnd' is simply untrue. The receive window changes with every receive; it is re-advertised with every ACK; and it frequently drops to zero.
Your following statement 'if we consider rwnd, using it to limit cwnd ...' is simply a description of what already happens, as per 'snd_cwnd = min(cwnd, rwnd)'.
Or else it may constitute a completely unexplained proposal to needlessly modify TCP's flow control, which has been working for 25 years, and which didn't work for several years before that: I remember several ARPANET freezes in the mid-1980s.
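For reference, this is roughly how the sender side combines the two windows, per the snd_cwnd = min(cwnd, rwnd) line in the question (a sketch of the standard behaviour, with invented variable names). The receiver's only job is to advertise rwnd, its free buffer space, in every ACK; it never needs the sender's cwnd.

    #include <stdint.h>

    static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

    /* How many more bytes the sender may put on the wire right now. */
    uint32_t sendable_bytes(uint32_t cwnd,            /* congestion window        */
                            uint32_t rwnd,            /* advertised by the peer   */
                            uint32_t bytes_in_flight) /* sent but not yet ACKed   */
    {
        uint32_t window = min_u32(cwnd, rwnd);        /* snd_cwnd */
        return window > bytes_in_flight ? window - bytes_in_flight : 0;
    }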

Benefit of small TCP receive window?

I am trying to learn how TCP Flow Control works when I came across the concept of receive window.
My question is, why is the TCP receive window scale-able? Are there any advantages from implementing a small receive window size?
Because as I understand it, the larger the receive window size, the higher the throughput, while the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is no longer full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to get the maximum transfer rate?
My question is, why is the TCP receive window scale-able?
There are two questions there. Window scaling is the ability to multiply the advertised window by a power of 2 so that you can have window sizes greater than 64k. However, the rest of your question indicates that you are really asking why it is resizable, to which the answer is 'so the application can choose its own receive window size'.
Are there any advantages from implementing a small receive window size?
Not really.
Because as I understand it, the larger the receive window size, the higher the throughput.
Correct, up to the bandwidth-delay product. Beyond that, increasing it has no effect.
While the smaller the receive window, the lower the throughput, since TCP will always wait until the allocated buffer is not full before sending more data. So doesn't it make sense to have the receive window at the maximum at all times to have maximum transfer rate?
Yes, up to the bandwidth-delay product (see above).
A small receive window ensures that when a packet loss is detected (which happens frequently on a high-collision network),
No it doesn't. Simulations show that if packet loss gets above a few %, TCP becomes unusable.
the sender will not need to resend a lot of packets.
It doesn't happen like that. There aren't any advantages to small window sizes except lower memory occupancy.
After much reading around, I think I might just have found an answer.
Throughput is not just a function of receive window. Both small and large receive windows have their own benefits and harms.
A small receive window ensures that when a packet loss is detected (which happens frequently on a high-collision network), the sender will not need to resend a lot of packets.
A large receive window ensures that the sender will not be idle most of the time as it waits for the receiver to acknowledge that a packet has been received.
The receive window needs to be adjustable to get the optimal throughput for any given network.
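As a rough sketch of the "up to the bandwidth-delay product" point above: the useful window is about bandwidth x RTT, and SO_RCVBUF is the usual knob that caps what the receiver can advertise. The 100 Mbit/s and 50 ms figures below are made-up example numbers, and the kernel may round or clamp the requested buffer size.

    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        double bandwidth_bps = 100e6;   /* 100 Mbit/s path        */
        double rtt_s         = 0.050;   /* 50 ms round-trip time  */

        /* Bandwidth-delay product: bytes "in flight" needed to keep the pipe full. */
        int bdp_bytes = (int)(bandwidth_bps * rtt_s / 8.0);   /* = 625000 bytes */
        printf("BDP = %d bytes\n", bdp_bytes);

        int s = socket(AF_INET, SOCK_STREAM, 0);

        /* Ask for a receive buffer of about one BDP; a window larger than this
         * cannot raise throughput on this path, it only costs memory. */
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bdp_bytes, sizeof bdp_bytes);
        return 0;
    }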

Resources