How to calculate bandwidth, bitrate and buffer size of switches - networking

Let's say I have 2 switches, with 2 devices connected to each switch. Each device sends data to the other devices in a cyclic manner; for example, Device 1 sends data every 100 ms and Device 2 every 200 ms. I want to calculate the required bandwidth for each device and switch if the data size sent is approximately 2000 bytes.
In my simulation I have set the bandwidth to 10 Mbps, but after a certain period, say 1 minute of simulation, the switch buffer starts filling up and messages get dropped. My conclusion is that bandwidth is the problem, because messages are not sent or accepted at the required bit rates. So I want to calculate the bandwidth of each device and switch.

2000 bytes every 100 ms is 2000 * 8 / 0.1 = 160 kbit/s. If you've got four such sources, you're using roughly 6.5% of a 10 Mbit/s link.
Even if each device unicasts that amount to each of the three others, the total bandwidth is only tripled. Only half of those unicasts (~1 Mbit/s) cross the switch interconnect, which is your bottleneck. Also, modern Ethernet is full duplex, so a 10 Mbit/s interface can transmit 10 Mbit/s and receive 10 Mbit/s at the same time.
Of course, a better approach would be to use multicast. That way, each data chunk is only propagated through the network in a single instance.
If your network goes down after a few seconds, then the parameters above or the diagram aren't correct, or the simulation is flawed.
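As a sanity check on the arithmetic above, here is a minimal Python sketch; the device count, payload size, send period and link speed come from the question, and the rough halving across the interconnect is the assumption made in this answer:

```python
# Rough link-utilization estimate for the scenario above (assumptions:
# 4 devices, 2000-byte payload, 100 ms send period, 10 Mbit/s links).
PAYLOAD_BYTES = 2000
SEND_PERIOD_S = 0.1          # one message every 100 ms
DEVICES = 4
LINK_BPS = 10_000_000        # 10 Mbit/s

per_device_bps = PAYLOAD_BYTES * 8 / SEND_PERIOD_S     # 160 kbit/s
total_bps = per_device_bps * DEVICES                   # 640 kbit/s
unicast_total_bps = total_bps * 3                      # each device unicasts to 3 peers
interconnect_bps = unicast_total_bps / 2               # ~half crosses the inter-switch link

print(f"per device:        {per_device_bps / 1e3:.0f} kbit/s")
print(f"all devices:       {total_bps / 1e6:.2f} Mbit/s ({total_bps / LINK_BPS:.1%} of a 10 Mbit/s link)")
print(f"inter-switch link: {interconnect_bps / 1e6:.2f} Mbit/s ({interconnect_bps / LINK_BPS:.1%})")
```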

Related

sending two packages simultaneously through a bandwidth link between two network devices?

If I have two network devices A and B connected by a 1000 Mbps link, and I would like to send two packages of 500 Mb each from device A to device B simultaneously, how does it work in real life? Option (A): the link transmits only one package at a time until it reaches its destination and then sends the next one; for example, if I send the two packages at 10:00 pm, the first takes 500/1000 = 0.5 second (transmission delay) to reach device B, and the next arrives 0.5 second later. Option (B): the two packages are sent at the same time and both reach their destination (device B) after 0.5 second, as the bandwidth can stand the two packages: 500 + 500 = 1000 Mbps. If the second option is correct, then if I want to send three packages of 500 Mb each, does that mean the third package will be lost due to insufficient bandwidth? Please help.
I am using a simulator, and in that simulator only one package is transmitted at a time until it reaches its destination; only then is the second package sent. Is that how sending packages works in real life?
Why would you want to send two packages simultaneously? That's not a rhetorical question. It could make sense to send audio and video simultaneously, so the sound track matches up with the events on screen.
From a programming perspective, you hand off your data to the OS. This function call might not return immediately if the amount of data is large and the OS does not have enough RAM available to buffer it.
Note: you seem to mix up size and bandwidth when you talk about 500 Mb + 500 Mb = 1000 Mbps. The units make it clear that it does not add up like that. Sending a 500 Mb package over a 1000 Mbps link does indeed take half a second (500 ms); sending 3 such packages takes 1500 ms. There is no magic at the 1000 millisecond barrier that would cause the first two packages to be sent but the third package to be lost. In fact, it's quite possible to download a 700 MB file (~1 CD, roughly 5800 Mbit) over a 10 Mbit line; that just takes 580+ seconds.
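As a quick check on the numbers in this answer, here is a minimal Python sketch of the size/rate arithmetic; the 500 Mb packages and the 1000 Mbps link come from the question, the 700 MB / 10 Mbit example from the paragraph above:

```python
# Serialization (transmission) delay: time = size / rate.
# Packages simply queue behind each other on the link; nothing is "lost"
# just because their combined size exceeds the line rate for one second.

def transfer_time_s(size_bits: float, rate_bps: float) -> float:
    return size_bits / rate_bps

LINK_BPS = 1_000_000_000        # 1000 Mbps
PACKAGE_BITS = 500_000_000      # 500 Mb

for n in (1, 2, 3):
    total = n * transfer_time_s(PACKAGE_BITS, LINK_BPS)
    print(f"{n} package(s) of 500 Mb over 1000 Mbps: {total:.1f} s")

# 700 MB (~5800 Mbit) over a 10 Mbit/s line:
print(f"700 MB over 10 Mbit/s: {transfer_time_s(5.8e9, 10e6):.0f} s")
```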
Real-world networking is a little more complicated. First, the data you send is not sent as one big block; it is split up into segments, packets, frames and bits by the different networking layers. If you want to know more, read up on the OSI model.
If the data is sent over a normal network cable (like CAT6), the Ethernet protocol is used, which, depending on the version, uses different encoding schemes. Although no longer used, Manchester code is probably the easiest one for getting a rough understanding of what these do: only one bit can be transferred per time slot.
If you are using an optical carrier, it is possible to transmit multiple signals at the same time (compare multiplexing). Since this requires much more complex hardware, it is not used between two (normal) computers, but between providers and cities.
In your specific case, the data sent by an application is processed first by the operating system and then by the network card until it is split up into Ethernet frames of up to 1518 bytes (compare MTU), which are then sent over the network, encoded by the method determined by the transmission technology. On host B the same process is reversed. The different parts of your two data packets can be sent one after the other, alternating, or in some other order, determined by the different layers and their exact configuration.
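To make the splitting concrete, here is a hedged Python sketch. The 1460-byte payload per frame assumes a standard 1500-byte MTU minus IPv4 and TCP headers without options, and the strict alternation shown is only one possible schedule, as noted above:

```python
# Rough sketch: how many standard Ethernet frames one 500 Mb package becomes,
# assuming a 1500-byte MTU and 20-byte IPv4 + 20-byte TCP headers per frame.
import math

MTU = 1500
PAYLOAD_PER_FRAME = MTU - 20 - 20        # 1460 bytes of application data per frame

package_bytes = 500_000_000 // 8         # 500 Mb = 62.5 MB
frames = math.ceil(package_bytes / PAYLOAD_PER_FRAME)
print(f"one 500 Mb package -> {frames} frames")

# One possible interleaving: frames of packages A and B alternate on the wire.
wire_order = ["A" if i % 2 == 0 else "B" for i in range(6)]
print("first frames on the wire:", wire_order)   # ['A', 'B', 'A', 'B', 'A', 'B']
```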

Using 3 different communication protocols in the same MCU

For a project I need to communicate over a CAN bus network, an Ethernet network, and RS-232. I want to use a single MCU that will act as the main unit of the CAN bus star topology and the Ethernet star topology, and that MCU will also forward the RS-232 data it receives to another device. I want to use high-speed CAN, which can run at up to 1 Mbit/s. However, RS-232 is at most 20 kbaud. I wonder if it is doable with one MCU to handle 3 different communication interfaces (CAN bus, Ethernet and RS-232). I am afraid of getting overrun with data at some point. I can buffer data short term if data comes in bursts that can be averaged out. For continuous data where I'll never be able to keep up, I'll need to discard messages, perhaps in a managed way. But I do not want to discard any data. So my question is: would using 1 MCU for this case work? And are there any software tricks that would help me with this case (like giving the CAN bus a higher priority, etc.)?
Yes, this can be done with a single MCU. Even a simple MCU should easily be able to handle data rates of 1 Mbps. Most likely you want to use DMA-enabled transfers so the CPU core only needs to act when the transmission of a chunk of data has completed.
The problem of being overrun by data due to the mismatch in data rate is a separate topic:
If the mismatch persists, no system can handle it, no matter how capable.
If the mismatch is temporary, it's just a function of the available buffer size.
So if the worst case you want to handle is 10 s of incoming data at 1 Mbps (with an outgoing rate of 20 kbps), then you will need 10 s x (1 Mbps - 20 kbps) = 9.8 Mbit = 1.225 MByte of buffer memory.
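The buffer-sizing rule above is easy to parameterize. A minimal Python sketch, assuming the same 10 s worst-case burst at 1 Mbps in and 20 kbps out:

```python
# Required buffer = burst_duration * (input_rate - output_rate).
# This only works when the rate mismatch is temporary; a permanent
# mismatch cannot be buffered away.
def buffer_bits(burst_s: float, in_bps: float, out_bps: float) -> float:
    if in_bps <= out_bps:
        return 0.0                      # output keeps up; no backlog accumulates
    return burst_s * (in_bps - out_bps)

bits = buffer_bits(burst_s=10, in_bps=1_000_000, out_bps=20_000)
print(f"{bits / 1e6:.1f} Mbit = {bits / 8 / 1e6:.3f} MByte")   # 9.8 Mbit = 1.225 MByte
```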

Not sampling at 5 ms when I increase NREADINGS to 2 in the Oscilloscope application (TinyOS)

In the TinyOS Oscilloscope application on micaz motes, when I set the sampling rate to 5 ms with NREADINGS = 1, I notice the blinking of the green LED going really fast. But when I set NREADINGS = 2 and sampling rate to 5 ms, I notice the blinking becomes slower which means I am sending fewer packets than in the previous case. Is there any way I can get the blinking to be faster, that is, can I do something to increase the number of packets I send at NREADINGS = 2 and sampling rate equal to 5 ms?
The sampling rate determines how often the Oscilloscope application samples a sensor. NREADINGS determines how many samples the application collects before it sends them in a radio packet. The LED blinks each time the application sends a packet. Therefore, if you increase NREADINGS from 1 to 2, it will blink approximately every 10 ms instead of every 5 ms (once per two samples).
If you want to send packets at the same frequency after increasing NREADINGS, you have to decrease the sampling interval at the same time. Note, however, that sampling a sensor as well as sending a packet takes some time, so there are constraints on how fast the application can work.
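The relationship between sampling interval, NREADINGS and packet rate fits in a few lines. A small sketch of the arithmetic; the 5 ms interval and NREADINGS values come from the question, the 2.5 ms figure is purely illustrative:

```python
# Packet period = sampling interval * NREADINGS.
# To keep the packet rate constant while doubling NREADINGS,
# halve the sampling interval (hardware limits permitting).
def packet_period_ms(sampling_interval_ms: float, nreadings: int) -> float:
    return sampling_interval_ms * nreadings

print(packet_period_ms(5, 1))    # 5 ms   -> fast blinking
print(packet_period_ms(5, 2))    # 10 ms  -> slower blinking
print(packet_period_ms(2.5, 2))  # 5 ms again, if the sensor can sample that fast
```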

Is an actual baudrate of 115,200 or higher possible?

While running some tests with an FT232R USB-to-RS-232 chip, which should be able to handle speeds of up to 3 Mbaud, I have the problem that my actual speed is only around 38 kbaud, or 3.8 KB/s.
I've searched the web, but I could not find any comparable data to prove or disprove this limitation.
While I am looking further into this, I would like to know if someone here has comparable data.
I tested with my own code and with this tool here:
http://www.aggsoft.com/com-port-stress-test.htm
Settings were 115,200 baud, 8N1, and 64-byte data packets.
I would have expected results like these:
At 115,200 baud -> effectively 11,520 byte/s, or 11.52 KB/s
At 921,600 baud -> 92.16 KB/s
I need to confirm a minimum speed of 11.2 KB/s, ideally speeds around 15-60 KB/s.
Based on the datasheet this should be no problem; based on reality, I am stuck at 3.8 KB/s, for now at least.
Oh my, I found quite a good hint: my transfer rate is highly dependent on the size of the packets. While using 64-byte packets I end up with 3.8 KB/s; using 180-byte packets it averages around 11.26 KB/s; and the main light went on when I checked the speed for 1-byte packets: around 64 byte/s!
Adding some math to it: 11.52 KB/s divided by 180 equals 64 byte/s. So basically the speed scales with the packet size. Is this right? And why is that?
The results that you observe are because of the way serial over USB works. This is a USB 1.1 chip. USB does transfers using packets, not a continuous stream like, for example, a plain serial line.
So your device gets a time-sliced window, and it is up to the driver to utilize this window effectively. When you set the packet size to 1, you can only transmit one byte per USB packet. To transmit the next byte you have to wait for your turn again.
Usually a USB device has a buffer on the device end where it can buffer data between transfers and thus keep the output rate constant. You are under-flowing this buffer when you set the packet size too low. The time slice on USB 1.1 is 10 ms, which only gives you 100 transfers per second, shared between all of the devices.
When you make a "send" call, all of your data goes out in one transfer, to keep interactive applications working right. It is best to use the maximum transfer size to achieve the best performance on USB devices. This is not always possible for an interactive application, but it usually is for a data-transfer application.
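The "speed scales with packet size" behavior follows from having a roughly fixed number of USB transfers per second. Here is a hedged Python model; the 100 transfers per second comes from this answer's 10 ms time-slice figure, so treat the absolute numbers as illustrative (the measurements in the question imply a somewhat lower effective transfer rate):

```python
# Effective throughput when each write goes out as one USB transfer and
# only a fixed number of transfers happen per second.
def effective_bps(payload_bytes: int, transfers_per_s: float, line_rate_bytes_s: float) -> float:
    # Throughput is capped either by the transfer rate or by the line rate.
    return min(payload_bytes * transfers_per_s, line_rate_bytes_s)

LINE_RATE = 115_200 / 10     # 8N1: 10 bit times per byte -> 11,520 byte/s
TRANSFERS_PER_S = 100        # assumption: one transfer per 10 ms window

for size in (1, 64, 180, 512):
    rate = effective_bps(size, TRANSFERS_PER_S, LINE_RATE)
    print(f"{size:4d}-byte packets: ~{rate:,.0f} byte/s")
```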

Calculating network throughput

Suppose I have a 4 Mbit network and I want to calculate the data throughput, that is, the maximum transfer rate minus the overhead from the Ethernet/IP/TCP headers.
Reading on the web I found out that the MSS (maximum segment size) of a TCP segment is 576 - 20 - 20, the last two being the TCP and IP header overhead, resulting in 93% of data, meaning I will only be using 93% of my 4 Mbit link to transfer data. Now, where is the link-layer overhead? Shouldn't it be added as well? If I'm not wrong, an Ethernet header is around 46 bytes, so the final sum would be 576 - 20 - 20 - 46 = 490, resulting in 85% data throughput. Am I doing something wrong?
Just work bottom-up. A regular Ethernet frame (no jumbo frames, no VLAN tagging) occupies 1538 bytes on the wire for a 1500-byte payload: 14 bytes of header, a 4-byte FCS, an 8-byte preamble/SFD and a 12-byte inter-frame gap. An IPv4 header without options is 20 bytes, and a TCP header without options is also 20 bytes. So you end up with 1460 bytes of possible payload per 1538-byte link-layer frame. Your efficiency is therefore 1460/1538 ≈ 94.9%, giving a maximum throughput of about 3.80 Mbps on a 4 Mbit link.
Notice, however, that actual throughput will usually be lower. This is the theoretical maximum for a continuous stream on a full-duplex link, after the TCP session is established and when you are the only user of that link. Also note that as soon as you send at a slightly higher rate for some time, the link gets congested, you will see drops, and your actual TCP throughput may drop significantly because of slow start.
If the link is wireless (802.11) the calculation becomes a lot more complex because of the RTS/CTS mechanism; roughly divide by 2 for a single active user, and that is without accounting for loss, which is unrealistic.
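The frame-overhead arithmetic generalizes easily. A minimal Python sketch under the same assumptions (1500-byte MTU, no TCP/IP options, untagged Ethernet, counting preamble and inter-frame gap):

```python
# TCP goodput over Ethernet, counting all on-the-wire overhead per frame.
MTU = 1500
IP_HEADER = 20
TCP_HEADER = 20
ETH_HEADER = 14        # dst MAC + src MAC + EtherType
ETH_FCS = 4
PREAMBLE_SFD = 8
INTERFRAME_GAP = 12

payload = MTU - IP_HEADER - TCP_HEADER                                    # 1460 bytes of TCP payload
wire_bytes = MTU + ETH_HEADER + ETH_FCS + PREAMBLE_SFD + INTERFRAME_GAP   # 1538 bytes on the wire

efficiency = payload / wire_bytes
link_mbps = 4.0
print(f"efficiency: {efficiency:.1%}")                                    # ~94.9%
print(f"max TCP goodput on a {link_mbps} Mbit link: {efficiency * link_mbps:.2f} Mbps")
```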
In general, the protocol can impact network throughput much more than the packet overhead alone. You mention that you want to measure throughput on an Ethernet/IP/TCP network, but the packet overhead of those protocols is NOT the only thing to consider. TCP is a connection-oriented protocol and uses ACKs to signal whether a packet has been received. user1777914 missed the mark about ACKs but was on to something: they do not take up any more SPACE, but they can DELAY the transmission of packets. As latency increases, the overall network throughput can decrease, depending on how often the application or host OS expects a response.
W. Richard Stevens has written an AMAZING book on TCP/IP that explains theoretical TCP performance, what impacts it and how it is calculated.
There is also the Nagle algorithm, which trades latency for fewer small packets; disabling it can slow down throughput.
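To see how latency alone can cap TCP throughput, here is a hedged sketch of the classic window/RTT ceiling that Stevens describes; the 64 KB window is the maximum without TCP window scaling, and the RTT values are purely illustrative:

```python
# Without window scaling, a TCP sender can have at most one receive window
# of data in flight per round trip, so throughput <= window_size / RTT.
def tcp_ceiling_mbps(window_bytes: int, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s / 1e6

WINDOW = 64 * 1024   # 64 KB, the classic un-scaled maximum window
for rtt_ms in (1, 10, 50, 200):
    print(f"RTT {rtt_ms:3d} ms -> at most {tcp_ceiling_mbps(WINDOW, rtt_ms / 1000):7.2f} Mbps")
```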
