QTcpSocket: high-frequency writes slow the transfer rate

I'm working on a streaming app: I grab a frame, encode it to H.264, and send it via TCP to the client; the client receives the encoded frame, decodes it, and displays it. What I found is that calling the write method several times at small intervals considerably affects the transfer rate.
The interval is between 12 ms and 17 ms, which is the time it takes to grab and encode a frame. On the client I am measuring the time between reading one frame and the next. With the 12-17 ms interval, a frame takes ~400 ms to arrive at the client. However, if I add a sleep between the writes, stretching the interval from 12-17 ms to 150 ms, the time on the client drops to ~150 ms.
So I tried sending one frame at a time: once the client receives it, it sends an acknowledgment, and only then does the server grab the next frame. With this method the latency was the same, ~150 ms.
I'm splitting the data into chunks of a specified size (512 bytes at the moment); the client receives the chunks and reassembles them. Using SHA-256 I confirmed that the data arrives intact. The frame size varies (VBR) from 1200 bytes to 65 KB. So my conclusion is that if you stress the socket with a lot of writes, the transfer rate suffers. Is this right, or am I doing something wrong?
And as an aside, 150 ms is about 6 fps. How do VNC and other streaming apps do it? Do they buffer some frames and then play them back, so there is latency but the "experience" is of a higher frame rate?
Thanks

The TCP/IP protocol stack is free to optimize its behavior to trade off latency vs. bandwidth. You have not demonstrated that you lack bandwidth, only that you get notified of the arriving data less often than you wish - and there is no technical justification for that wish. Nothing in the protocol itself guarantees that the data will arrive at the receiving end in chunks of any particular size.
Thus, suppose that you send each frame in a single write on the QIODevice. The receiving end can receive this data in one or more chunks, and it may receive this data after an arbitrary delay. And that's precisely what you're seeing.
The type of data you're sending has no bearing on performance: you should be able to create a 10-line test case for the sender, and a similarly sized test case for the receiver, and confirm that you indeed receive all the data you sent - only it arrives in larger chunks than you incorrectly expect. That's fine and it's normal. To maintain streaming, you should pre-buffer a "sufficient" amount of data at the receiving end.
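As an illustration, here is a minimal sketch of such a sender/receiver pair (Qt 5.7+), assuming a simple 4-byte length prefix per frame; the buffer argument and the decodeAndDisplay() hook are illustrative assumptions, not your code or any fixed API. Disabling Nagle's algorithm (Qt's LowDelayOption) is also worth an experiment when many small writes meet a latency requirement:

    #include <QTcpSocket>
    #include <QByteArray>
    #include <QDataStream>
    #include <QtEndian>

    // Sender: one write per encoded frame, preceded by a 4-byte big-endian length.
    // (Optionally, once after connecting, disable Nagle's algorithm:
    //  socket->setSocketOption(QAbstractSocket::LowDelayOption, 1);)
    void sendFrame(QTcpSocket *socket, const QByteArray &encodedFrame)
    {
        QByteArray packet;
        QDataStream out(&packet, QIODevice::WriteOnly);
        out << quint32(encodedFrame.size());   // length prefix; QDataStream defaults to big-endian
        packet.append(encodedFrame);
        socket->write(packet);                 // no manual 512-byte chunking
    }

    // Receiver: append whatever chunk TCP happens to deliver, then cut out
    // complete frames; a partial frame simply waits for the next readyRead().
    void onReadyRead(QTcpSocket *socket, QByteArray &buffer)
    {
        buffer.append(socket->readAll());
        while (buffer.size() >= 4) {
            const quint32 frameSize = qFromBigEndian<quint32>(buffer.constData());
            if (buffer.size() < int(4 + frameSize))
                return;                        // frame not complete yet
            const QByteArray frame = buffer.mid(4, int(frameSize));
            buffer.remove(0, int(4 + frameSize));
            // decodeAndDisplay(frame);        // hypothetical decoder hook
        }
    }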

Related

sending two packages simultaneously through a bandwidth link between two network devices?

If I have two network devices A and B connected by a 1000 Mbps link, and I would like to send two packages simultaneously, each 500 Mb in size, from device A to device B, how does it work in real life? Option (A): the link transmits only one package at a time until it reaches its destination, then sends the next package; for example, if I sent the two packages at 10:00 pm, the first package would take 500/1000 = 0.5 seconds of transmission delay to reach device B, and the second would arrive 0.5 seconds after that. Option (B): the two packages are sent at the same time and both reach their destination (device B) simultaneously, as the bandwidth can carry the two packages: 500 + 500 = 1000. If the second option is the correct answer, then if I want to send three packages, each 500 Mb in size, does that mean the third package will be lost due to insufficient bandwidth? Please help.
I am using a simulator, and in that simulator only one package is transmitted at a time until it reaches its destination, and then the second package is sent. Is that how sending packages works in real life?
Why would you want to send two packages simultaneously? That's not a rhetorical question. It could make sense to send audio and video simultaneously, so the sound track matches up with the events on screen.
From a programming perspective, you hand off your data to the OS. This function call might not return immediately if the amount of data is large and the OS does not have enough RAM available to buffer it.
Note: you seem to mix up size and bandwidth when you talk about 500 Mb + 500 Mb = 1000 Mbps. The units make it clear that it does not add up like that. Sending a 500 Mb package over a 1000 Mbps link indeed takes half a second (500 ms); sending 3 such packages takes 1500 ms. There is no magic at the 1000-millisecond barrier that would cause the first two packages to be sent but the third package to be lost. In fact, it's quite possible to download a 700 MB file (~1 CD, roughly 5800 Mbit) over a 10 Mbit line - it just takes 580+ seconds.
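To make the arithmetic concrete, a tiny sketch (values taken straight from the paragraph above):

    #include <cstdio>

    int main()
    {
        const double linkMbps  = 1000.0;   // 1000 Mbps link
        const double packageMb =  500.0;   // one package = 500 Mbit
        for (int n = 1; n <= 3; ++n)       // 1, 2 and 3 packages
            std::printf("%d package(s): %.1f s\n", n, n * packageMb / linkMbps);
        // Prints 0.5 s, 1.0 s, 1.5 s - later packages just queue up, nothing is lost.
    }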
Real-world networking is a little more complicated. Firstly, the data you send is not sent as one big block, but instead split up into segments, packets, frames, and bits by the different networking layers. If you want to know more, read up on the OSI model.
If the data is sent over a normal network cable (like CAT6), the Ethernet protocol is used, which, depending on the version, uses different line encodings. Although no longer used, Manchester code is probably the easiest one for getting a rough understanding of what these encodings do: only one bit can be transmitted per time slot.
If you are using an optical carrier, it is possible to transmit multiple signals at the same time (compare multiplexing). Since this requires much more complex hardware, it is not used between two (normal) computers, but between providers and cities.
In your specific case, the data sent by an application is processed first by the operating system and then by the network card, until it is split up into Ethernet frames of at most 1518 bytes (compare MTU), which are then sent over the network, encoded by the method the transmission technology dictates. On host B the same process is reversed. The different parts of your two data packets can be sent one after another, alternating, or in some other order, which is determined by the different layers and their exact configurations.
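As a rough illustration of that splitting (assuming 1500 bytes of payload per frame, per the MTU mentioned above, and ignoring headers and retransmissions):

    #include <cstdio>

    int main()
    {
        const long long payloadBytes = 500LL * 1000 * 1000 / 8; // one 500 Mbit package
        const long long perFrame     = 1500;                    // payload bytes per frame (MTU)
        const long long frames       = (payloadBytes + perFrame - 1) / perFrame;
        std::printf("%lld frames\n", frames);                   // ~41,667 frames per package
    }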

Trading off between User Bandwidth and Download Interval

I am designing a non-commercial open-source client app which needs to download exactly 100 KB of data from a server at a regular interval and show an alert in the client app based on data changes. Now I need to trade off between the user's bandwidth and the download interval.
Analysis:
If I set the interval = 1 hour, then within 1 month the app will download 30*24*100 KB = 72 MB.
If I set the interval = 30 mins, then within 1 month the app will download 30*48*100 KB = 144 MB.
And so on.
Now, I am considering only the file size, whereas in practice some portion of the bandwidth will be used for control flow apart from the data flow. For downloading a file of exactly 100 KB from the server, how much control-flow overhead should I consider in my analysis for TCP communication? Is there any guideline/reference or research on that topic?
Assume 10 KB is used for control flow; then the total monthly usage will include 14.4 MB of extra data (at the 30-minute interval) which needs to be identified in my analysis.
Note: (1) I am limited to analysing only the client-app part. (2) No changes on the server side can be made at the moment (i.e. pull-based to push-based, a partial-data-change API, etc. cannot be applied). (3) I am limited to downloading the file using TCP. (4) Although that much granularity is not often considered in practice, let's assume that in my case the analysis needs to be granular enough that I need to know the data vs. control bandwidth ratio.
If you are asking only for the TCP/IP part, the payload/PDU ratio is 1460/1500 for IPv4 and 1440/1500 for IPv6, assuming an MTU of 1500 bytes (sources: this already mentioned discussion, this other discussion, this other article).
I also found this really nice page that allows you to see all the header sizes for an arbitrary protocol stack and this academic paper.
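A quick back-of-the-envelope check of what the 1460/1500 IPv4 ratio means for the 100 KB download (headers only; the handshake, ACKs and retransmissions discussed below come on top):

    #include <cstdio>

    int main()
    {
        const double payload  = 100.0 * 1024;                  // 100 KB of payload
        const int    segments = int((payload + 1459) / 1460);  // IPv4: 1460 B payload/segment
        const double onWire   = payload + segments * 40.0;     // 40 B TCP/IPv4 headers each
        std::printf("segments: %d, on-wire: %.0f B, overhead: %.2f%%\n",
                    segments, onWire, 100.0 * (onWire - payload) / payload);
        // Prints: segments: 71, on-wire: 105240 B, overhead: 2.77%
    }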
However, besides the protocol headers, there are more effects that reduce the bandwidth:
- TCP will send additional messages, e.g. for performing a handshake when establishing the connection,
- retransmission of data may occur,
- actual frame sizes are negotiated on the lower communication layers, so TCP segments might be smaller than assumed.
In summary, this is not easy to answer precisely, because there are influences in the transmission process that are beyond your control.
Have you considered measuring the actual amount of data needed for transmitting one (or more) 100 KB chunk(s) of payload, rather than performing a theoretical analysis?

Muxing non-synchronised streams to Haali

I have 2 input streams of data that are being passed to a Haali Muxer (mp4 format).
Currently I stream these to Haali directly in a DirectShow graph without a clock. I wondered if I should be trying to write these to the muxer synchronised, or whether it happily accepts an audio stream that stops before the video stream stops. (I have issues with the output file not playing audio after seeking, and I'm not sure why this occurs.)
I can't find much in the way of documentation for muxing with the Haali muxer, does anyone know the best place to look for info on this filter?
To have the streams multiplexed into a single MP4 file you need a single instance of the multiplexer (Haali, GDCL, a commercial one, a wrapper over the mp4v2 library, over a Media Foundation sink, etc.) with two (or more) input pins on it, connected to the respective sources, which in turn are going to be written as tracks.
The filter graph clock does not matter. The clock is for presentation, and file writers accept incoming data and write it as soon as possible anyway. It is more accurate to remove the clock, as you seem to be doing already, but having the standard clock is not going to make a difference.
Data is synchronized using the time stamps on individual media samples, the parts of the media streams. The multiplexer builds internal queues for every stream and then consumes data from the streams to build a single file, in such a way that the original stream data is interleaved. If one stream supplies too much data - that is, if its data is available too early while another stream supplies data slowly - the multiplexer blocks further data reception on this particular stream by not returning from the respective processing call (IMemInputPin::Receive), expecting that during this wait the slow stream provides additional input. Ultimately, what the multiplexer looks at when matching data from different streams is the data time stamps.
To obtain synchronized data in the resulting MP4 file you thus need to make sure the payload data is properly time-stamped. The multiplexer will take care of the rest.
This also means that the time stamps should be monotonically increasing within a stream, and that key frames/splice points are indicated accordingly. Otherwise some multiplexers might fail immediately, while others would produce an output file that might have playback issues (esp. with seeking).
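For illustration, a minimal sketch of stamping video samples before they reach the muxer's input pin, assuming a constant 30 fps (IMediaSample::SetTime and IMediaSample::SetSyncPoint are the standard DirectShow calls for this; the function itself is hypothetical):

    #include <dshow.h>

    // Stamp one video sample before delivering it to the muxer's input pin.
    HRESULT stampSample(IMediaSample *sample, LONGLONG frameIndex, BOOL isKeyFrame)
    {
        // REFERENCE_TIME is in 100 ns units; compute from the frame index so the
        // stamps are monotonically increasing and drift-free at 30 fps.
        REFERENCE_TIME start = frameIndex * 10000000LL / 30;
        REFERENCE_TIME stop  = (frameIndex + 1) * 10000000LL / 30;
        HRESULT hr = sample->SetTime(&start, &stop);
        if (SUCCEEDED(hr))
            hr = sample->SetSyncPoint(isKeyFrame); // mark key frames / splice points
        return hr;
    }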

flow control implementation - how

I'm sending 1 KB of data using TCP/IP (FreeRTOS + LwIP). From the documentation I understood that the TCP/IP protocol has flow control inside the stack itself, but that this flow control depends on the network buffers. I'm not sure how this is handled in my scenario, described below:
1. Data of 1 KB size is received over TCP/IP from Wi-Fi (at a data rate of about 20 Mb/s).
2. The received Wi-Fi data is put into a queue of 10 KB size (10 blocks, each block having a size of 1 KB).
3. From the queue, each block is taken and sent to another interface at a lower rate (1 Mb/s).
So in this scenario, do I have to implement flow control manually between the Wi-Fi data and the queue? How can I achieve this?
No, you do not have to implement flow control yourself; the TCP algorithm takes care of it internally.
Basically what happens is that when a TCP segment is received from your sender LwIP will send back an ACK that includes the available space remaining in its buffers (the window size). Since the data is arriving faster than you can process it the stack will eventually send back an ACK with a window size of zero. This tells the sender's stack to back off and try again later, which it will do automatically. When you get around to extracting more data from the network buffers the stack should re-ACK the last segment it received, only this time it opens up the window to say that it can receive more data.
What you want to avoid is something called silly window syndrome, because it can have a drastic effect on your network utilisation and performance. Try to read data off the network in big chunks if you can, and avoid tight loops that fill a buffer one byte at a time.
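For example, a sketch of such a receive loop (assuming lwIP is built with its BSD-style sockets API; enqueueBlock() is a hypothetical hand-off to the slower interface):

    #include "lwip/sockets.h"

    // Drain the socket one whole 1 KB block at a time instead of byte-by-byte.
    void drainSocket(int sock)
    {
        char block[1024];
        for (;;) {
            int n = lwip_recv(sock, block, sizeof block, 0);
            if (n <= 0)
                break;                   // connection closed or error
            // enqueueBlock(block, n);   // hypothetical hand-off to the 1 Mb/s side
            // If the 10-block queue is full, simply don't call lwip_recv() again
            // until a block frees up - the TCP window closes and the sender waits.
        }
    }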

How does a device receiving data tell when data transmission stops?

I'm trying to understand asynchronous serial data transmission. I know that the transmitting device sends a start bit (e.g. 1) to the receiver to indicate that transmission has begun; then a stop bit (e.g. 0) afterwards to indicate that the transmission has ended.
What I don't understand: how does the receiving device know which bit is the stop bit? The stop bit is surely no different from the other bits of data. The only way I can think of is if the transmitting device stops sending bits for a significant gap, the receiving device would know that no more bits are forthcoming, and the last bit must have been a stop bit. But if that is the case, then why would a stop bit be required at all, rather than the receiving device simply waiting for a bit, and considering the transmission to be ended when the transmitting device doesn't send any more bits?
That becomes a question of protocol. Start and stop bits only have meaning if the communicating devices agree on that meaning (e.g. a frame consists of a start bit, 8 data bits, and a stop bit). Similarly, how to denote when a particular communication is complete needs to be agreed between the participants (e.g. define one or more frames that denote message termination). So for a particular communication, either a full frame is received and the listener keeps listening; or a partial frame is received with no subsequent data transmission, and the connection can be considered faulted after some duration; or a full frame is received and that frame denotes the end of the exchange.
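For completeness, a sketch of standard 8N1 framing (in real UARTs the line idles high and the start bit is a low level, so the polarity is the opposite of the question's example, but the principle is the same):

    #include <array>
    #include <cstdint>

    // Produce the 10 line levels for one byte in 8N1 framing.
    std::array<int, 10> frameByte(std::uint8_t byte)
    {
        std::array<int, 10> bits{};
        bits[0] = 0;                           // start bit: line pulled low
        for (int i = 0; i < 8; ++i)
            bits[1 + i] = (byte >> i) & 1;     // 8 data bits, LSB first
        bits[9] = 1;                           // stop bit: line back to idle (high)
        return bits;
    }

The receiver resynchronises on the falling edge of every start bit and samples the following bits at the agreed baud rate; the stop bit guarantees the line returns high, so the next start bit's falling edge can be detected.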
