Why did the MPEG-2 encoder bitrate in Mbit/s decrease so fast? - multimedia

I'm trying to understand how MPEG-2 works so I can finish a project that is based on it. First of all, I want to know why the total bitrate dropped so quickly between 1995 and 2005: it went from 6 Mbit/s down to 2 Mbit/s. How was this possible?

Related

How to calculate bandwidth, bitrate and buffer size of switches

Let's say I have 2 switches and 2 devices connected to each switch.
Each device sends data to the other devices in a cyclic manner; for example, device 1 sends data every 100 ms and device 2 every 200 ms.
I want to calculate the required bandwidth for each device and switch if the data size sent is approximately 2000 bytes.
In my simulation I have set the bandwidth to 10 Mbps, but after a certain period, say one minute of simulation, the switch buffers start filling up and messages are getting dropped.
My conclusion is that bandwidth is the problem, because messages are not sent or accepted at the required bitrates.
So I want to calculate the bandwidth of each device and switch.
2000 bytes every 100 ms is 2000 * 8 / .1 = 160 kbit/s. If you've got four such sources, you're using roughly 6.5% of a 10 Mbit/s link.
Even if each device unicasts that amount to each of the three others, that total bandwidth is only tripled. Only half of those unicasts (~1 Mbit/s) cross the switch interconnect which is your bottleneck. Also, modern Ethernet is full duplex, so a 10 Mbit/s interface can transmit 10 Mbit/s and receive 10 Mbit/s at the same time.
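For reference, a minimal C++ sketch of that arithmetic; the payload size and message period are taken from the question, everything else is just illustration:

    #include <iostream>

    int main() {
        const double payload_bytes   = 2000.0;  // bytes per message (from the question)
        const double period_seconds  = 0.1;     // one message every 100 ms
        const double link_bits_per_s = 10e6;    // 10 Mbit/s link

        // Per-device rate: payload per period, converted to bits per second.
        const double device_bps = payload_bytes * 8.0 / period_seconds;  // 160 kbit/s

        // Four such sources sharing one 10 Mbit/s link.
        const double total_bps   = 4.0 * device_bps;                     // 640 kbit/s
        const double utilization = total_bps / link_bits_per_s * 100.0;  // ~6.4 %

        std::cout << "per device:   " << device_bps / 1e3 << " kbit/s\n"
                  << "four devices: " << total_bps  / 1e3 << " kbit/s ("
                  << utilization << "% of a 10 Mbit/s link)\n";
    }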
Of course, a better approach would be to use multicast. That way, each data chunk is only propagated through the network in a single instance.
If your network goes down after a few seconds, then the parameters above or the diagram aren't correct, or the simulation is flawed.

Sending two packages simultaneously through a bandwidth link between two network devices?

If I have two network devices, A and B, connected by a 1000 Mbps link, and I want to send two packages simultaneously, each 500 Mb in size, from device A to device B, how does it work in real life? Option (A): the link only transmits one package at a time until it reaches its destination, then sends the next one. For example, if I send both packages at 10:00:00 pm, the first takes 500/1000 = 0.5 s (transmission delay) and reaches device B at 10:00:00.5 pm, then the next package arrives at 10:00:01 pm. Option (B): the two packages are sent at the same time and both reach their destination (device B) at 10:00:00.5 pm, because the bandwidth can accommodate the two packages: 500 + 500 = 1000 Mbps. If option (B) is the correct answer, then if I want to send three packages, each 500 Mb in size, does that mean the third package will be lost due to insufficient bandwidth? Please help.
I am using a simulator, and in that simulator only one package is transmitted at a time until it reaches its destination, and then the second package is sent. Is that how sending packages works in real life?
Why would you want to send two packages simultaneously? That's not a rhetorical question. It could make sense to send audio and video simultaneously, so the sound track matches up with the events on screen.
From a programming perspective, you hand off your data to the OS. This function call might not return immediately if the amount of data is large and the OS does not have enough RAM available to buffer it.
Note: you seem to mix up size and bandwidth when you talk about 500 Mb + 500 Mb = 1000 Mbps. The units make it clear that it does not add up like that. Sending a 500 Mb package over a 1000 Mbps link indeed takes half a second (500 ms), and sending 3 such packages takes 1500 ms. There's no magic at the 1000 millisecond barrier that would cause the first two packages to be sent but the third package to be lost. In fact, it's quite possible to download a 700 MB file (~1 CD, roughly 5800 Mbit) over a 10 Mbit line. That just takes 580+ seconds.
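A quick C++ sketch of the same arithmetic, with sizes in megabits and the link rate in Mbit/s:

    #include <iostream>

    // Transmission delay for a payload of `megabits` over a link of `link_mbps`.
    double transfer_seconds(double megabits, double link_mbps) {
        return megabits / link_mbps;
    }

    int main() {
        std::cout << transfer_seconds(500.0, 1000.0)     << " s\n";  // one 500 Mb package: 0.5 s
        std::cout << transfer_seconds(3 * 500.0, 1000.0) << " s\n";  // three packages back to back: 1.5 s
        std::cout << transfer_seconds(5800.0, 10.0)      << " s\n";  // ~700 MB CD image over 10 Mbit/s: 580 s
    }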
Real-world networking is a little more complicated. Firstly, the data you send is not just sent as one big block, but is instead split up into segments, packets, frames and bits by the different networking layers. If you want to know more, read up on the OSI model.
If the data is sent over a normal networking cable (like CAT6), the Ethernet protocol is used, which, depending on the version, uses different line encodings. Although it is not used anymore, Manchester code is probably the easiest way to get a rough understanding of what those encodings do: only one bit can be transmitted per time slot.
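As a rough illustration of the "one bit per time slot" point, here is a tiny sketch of Manchester encoding using the IEEE 802.3 convention (a 0 is a high-to-low transition at mid-bit, a 1 is low-to-high); the function name is just for illustration:

    #include <iostream>
    #include <vector>

    // Manchester encoding, IEEE 802.3 convention: each data bit becomes two
    // half-bit line levels, so every time slot carries exactly one bit.
    std::vector<int> manchester_encode(const std::vector<int>& bits) {
        std::vector<int> levels;
        for (int b : bits) {
            if (b) { levels.push_back(0); levels.push_back(1); }  // 1: low-to-high transition
            else   { levels.push_back(1); levels.push_back(0); }  // 0: high-to-low transition
        }
        return levels;
    }

    int main() {
        for (int level : manchester_encode({1, 0, 1, 1, 0}))
            std::cout << level;   // prints 0110010110
        std::cout << "\n";
    }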
If you are using an optical carrier, it is possible to transmit multiple signals at the same time (see multiplexing). Since this requires much more complex hardware, it is not used between two (normal) computers, but between providers and cities.
In your specific case, the data sent by some application is processed first by the operating system and then by the network card, until it is split up into Ethernet frames of at most 1518 bytes (compare MTU), which are then sent over the network, encoded with whatever method the transmission technology uses. On host B the same process is reversed. The different parts of your two data packets can be sent one after the other, alternating, or in some other order, which is determined by the different layers and their exact configuration.
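As a rough illustration of that splitting, assuming the usual 1500 bytes of payload per Ethernet frame and ignoring higher-layer headers:

    #include <cmath>
    #include <iostream>

    int main() {
        const double payload_bytes   = 500e6 / 8.0;  // one 500 Mb package from the question
        const double bytes_per_frame = 1500.0;       // usable payload of a standard Ethernet frame

        std::cout << "One 500 Mb package becomes about "
                  << std::ceil(payload_bytes / bytes_per_frame)
                  << " Ethernet frames\n";           // ~41667 frames
    }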

How to synchronize QAudioOutput on multiple devices?

I have an audio playing app that runs on several distributed devices, each with their own clock. I am using QAudioOutput to play the same audio on each device, and UDP broadcast from a master device to synchronize the other devices, so far so good.
However, I am having a hard time getting an accurate picture of "what is playing now" from QAudioOutput. I am using the QAudioOutput bufferSize() and bytesFree() to estimate what audio frame is currently being fed to the sound system, but the bytesFree() value progresses in a "chunky" fashion, so that (bufferSize() - bytesFree()) / bytesPerFrame doesn't give the number of frames remaining in the buffer, but some smaller number that bounces around relative to it.
The result I am getting now is that when my "drift indicator" updates, it will run around 0 for several seconds, then get indications in the -15 to -35 ms range every few seconds for maybe 20 seconds, then a correcting jump of about +120 ms. Although I could try to analyze this long-term pattern and tease out the true drift rate (maybe a couple of milliseconds per minute), I'd much rather work with more direct information if it's available.
Is there any way to read the true number of frames remaining in the QAudioOutput buffer while it is playing a stream?
I realize I could minimize my problems by radically reducing the buffer size and feeding QAudioOutput with a high priority process, but I'd rather have a solution that uses longer buffers and isn't so fussy about what it runs on - target platforms vary from Windows 10 machines to Raspberry Pi Zero Ws, possibly to Android phones.
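For reference, a minimal sketch of the buffer-based estimate described above, written against the Qt 5 QAudioOutput API; the function names and the framesWritten counter are illustrative, not part of Qt, and processedUSecs() is only mentioned as an alternative reference point, not a guaranteed fix for the chunkiness:

    #include <QAudioFormat>
    #include <QAudioOutput>

    // Rough estimate of the frame currently being handed to the sound system:
    // frames written to QAudioOutput so far, minus whatever is still waiting in
    // its buffer. framesWritten has to be counted by the code feeding the QIODevice.
    qint64 estimatePlayingFrame(const QAudioOutput& out, qint64 framesWritten)
    {
        const int bytesPerFrame    = out.format().bytesPerFrame();
        const qint64 bufferedBytes = out.bufferSize() - out.bytesFree();
        return framesWritten - bufferedBytes / bytesPerFrame;
    }

    // QAudioOutput::processedUSecs() is another reference point: it reports how
    // much audio has been processed since start(), which can be compared against
    // the master clock instead of relying on bytesFree() alone.
    qint64 estimatePlayedUSecs(const QAudioOutput& out)
    {
        return out.processedUSecs();
    }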

How to best push a lot of WiFi data to LED Christmas lights?

I'm tinkering with hardware as follows: an ESP32 chip that can control a 5 m run of 144-per-meter LEDs (720 total). Each has WiFi, and I have a web server up and running on a bunch of them, with their clocks synchronized to within a few microseconds of a local NTP server.
Let's say I have 10 of them and want to treat them like a big long Christmas light display. I'd want to push data to each of them representing their portion (720 pixels) of the total display (7200 pixels).
The simplest way is to HTTP POST a JSON-encoded version of the data, but that feels very wrong in terms of overhead. I'd guess a binary UDP blob is likely more appropriate.
What do you think is the best way to send the data to each little wifi webserver?
The amount of data might be something like:
720 pixels x 3 bytes per pixel x 30 frames per second ≈ 65 KB/s per controller
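For scale, here is a sketch of what the binary UDP blob idea above might look like from the sending side (C++ with POSIX sockets; the port 4210 and the controller address are placeholders, and error handling is omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <array>
    #include <cstdint>

    int main() {
        std::array<std::uint8_t, 720 * 3> frame{};   // this controller's slice of the display

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port   = htons(4210);                        // placeholder port
        inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr);   // placeholder controller address

        // 2160 bytes per frame x 30 fps is roughly 65 KB/s per controller.
        sendto(sock, frame.data(), frame.size(), 0,
               reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
        close(sock);
    }

Note that a 2160-byte datagram exceeds the usual 1500-byte MTU, so it would be fragmented at the IP layer; splitting each controller's frame into two smaller datagrams avoids that.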
Not sure why someone would downvote this, but a suitably useful answer would be:
WebSockets

Is an actual baudrate of 115,200 or higher possible?

While running some tests with an FT232R USB-to-RS232 chip, which should be able to manage speeds up to 3 Mbaud, I have the problem that my actual speed is only around 38 kbaud, or 3.8 KB/s.
I've searched the web, but I could not find any comparable data to prove or disprove this limitation.
While I am looking further into this, I would like to know if someone here has comparable data.
I tested with my own code and with this tool here:
http://www.aggsoft.com/com-port-stress-test.htm
Settings were 115,200 baud, 8N1, and 64-byte data packets.
I would have expected results like these (with 8N1 framing, each byte takes 10 bits on the wire, so bytes/s = baud/10):
At 115,200 baud -> effectively 11,520 byte/s or 11.52 KB/s
At 921,600 baud -> 92.16 KB/s
I need to confirm a minimum speed of 11.2 KB/s, and ideally speeds around 15-60 KB/s.
Based on the datasheet this should be no problem; based on reality, I am stuck at 3.8 KB/s, for now at least.
Oh my, I found quite a good hint: my transfer rate is highly dependent on the size of the packets. While using 64-byte packets I end up with 3.8 KB/s, and using 180-byte packets it averages around 11.26 KB/s. The main light went on when I checked the speed for 1-byte packets: around 64 byte/s!
Adding some math to it: 11.52 KB/s divided by 180 bytes per packet is 64 packets per second, which matches the 64 byte/s I get with 1-byte packets. So basically the speed scales with the packet size. Is this right? And why is that?
The results that you observe are due to the way serial over USB works. The FT232R is a full-speed (USB 1.1) device, and USB does transfers in packets, not as a continuous stream the way a plain serial line does.
So your device gets a time-sliced window, and it is up to the driver to utilize this window effectively. When you set the packet size to 1, you can only transmit one byte per USB transfer; to transmit the next byte, you have to wait for your turn again.
Usually a USB device has a buffer on the device end where it can store data between transfers and thus keep the output rate constant. You are under-running this buffer when you set the packet size too low. USB full-speed traffic is scheduled in fixed time slices shared between all devices on the bus, so there is only a limited number of transfer opportunities per second; with your setup this works out to a few dozen small transfers per second, which matches the ~64 packets per second you measured.
When you make a "send" call, all of your data goes out in one transfer, to keep interactive applications responsive. It is best to use the maximum transfer size to achieve the best performance on USB devices. This is not always possible with an interactive application, but it usually is with a bulk data transfer application.
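To make that scaling explicit, here is a tiny sketch that takes the roughly 64 transfers per second implied by the measurements in the question as a given and computes the resulting throughput for each packet size:

    #include <iostream>

    int main() {
        const double transfers_per_second = 64.0;   // implied by the measurements in the question

        for (double packet_bytes : {1.0, 64.0, 180.0}) {
            const double bytes_per_second = transfers_per_second * packet_bytes;
            std::cout << packet_bytes << "-byte packets -> "
                      << bytes_per_second / 1000.0 << " KB/s\n";
        }
        // 1 -> 0.064 KB/s, 64 -> 4.096 KB/s (~3.8 measured), 180 -> 11.52 KB/s
    }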
