How to programmatically simulate delayed ACK? - tcp

I need to simulate my wireless device sending the ACK packet after X seconds, where X can range from 3 to 6 seconds. Is there a way to do it? I can't think of a way to do it in Java.
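The OS kernel normally generates TCP ACK segments itself, so a plain Java socket can't hold back the real ACK. One way to approximate the behaviour is to acknowledge at the application layer and delay that reply with a ScheduledExecutorService. A minimal sketch, where the class name, port and message format are made up for illustration:

```java
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: an application-level "ACK" message, delayed by a
// random 3-6 seconds before it is written back to the sender.
public class DelayedAckServer {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();
                byte[] buf = new byte[1024];
                client.getInputStream().read(buf);   // wait for a data "packet"
                long delayMs = ThreadLocalRandom.current().nextLong(3000, 6001);
                scheduler.schedule(() -> {
                    try {
                        OutputStream out = client.getOutputStream();
                        out.write("ACK\n".getBytes(StandardCharsets.UTF_8));
                        out.flush();
                        client.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }, delayMs, TimeUnit.MILLISECONDS);
            }
        }
    }
}
```

If you need to delay the actual TCP-level ACK segments, that has to be done outside the JVM, e.g. with a network emulator in front of the device.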

Related

How to calculate bandwidth, bitrate and buffer size of switches

Let's say I have 2 switches and 2 devices connected to each switch.
Each device sends data to the other devices in a cyclic manner; for example, Device 1 sends data every 100 ms and Device 2 every 200 ms.
I want to calculate the required bandwidth for each device and switch if the data size sent is approximately 2000 bytes.
In my simulation I have set the bandwidth to 10 Mbps, but after a certain period, let's say after 1 minute of simulation,
the switch buffer starts filling up and messages are getting dropped.
My conclusion is that bandwidth is the problem, because messages are not sent or accepted at the required bitrates.
So I want to calculate the bandwidth of each device and switch.
2000 bytes every 100 ms is 2000 * 8 / .1 = 160 kbit/s. If you've got four such sources, you're using roughly 6.5% of a 10 Mbit/s link.
Even if each device unicasts that amount to each of the three others, the total bandwidth is only tripled. Only half of those unicasts (~1 Mbit/s) cross the switch interconnect, which is your bottleneck. Also, modern Ethernet is full duplex, so a 10 Mbit/s interface can transmit 10 Mbit/s and receive 10 Mbit/s at the same time.
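To make the arithmetic concrete, here is a small sketch that reproduces the numbers above (payload only, ignoring framing overhead); the class name and constants are just for illustration:

```java
// Worked version of the arithmetic above: 2000 bytes every 100 ms per device,
// four devices, 10 Mbit/s links. Same assumptions as in the answer.
public class BandwidthEstimate {
    public static void main(String[] args) {
        double payloadBits = 2000 * 8;          // 2000 bytes per message
        double periodSeconds = 0.100;           // one message every 100 ms
        double perDeviceBps = payloadBits / periodSeconds;   // 160 kbit/s

        int devices = 4;
        double totalBps = perDeviceBps * devices;            // 640 kbit/s
        double linkBps = 10_000_000;                         // 10 Mbit/s

        System.out.printf("Per device: %.0f kbit/s%n", perDeviceBps / 1000);
        System.out.printf("All devices: %.0f kbit/s (%.1f%% of a 10 Mbit/s link)%n",
                totalBps / 1000, 100 * totalBps / linkBps);
    }
}
```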
Of course, a better approach would be to use multicast. That way, each data chunk is only propagated through the network in a single instance.
If your network goes down after a few seconds, then the parameters above or the diagram aren't correct, or the simulation is flawed.

A theoretical question about the TCP sliding window: how is a mismatched window size handled?

I know this probably doesn't happen in real life at all, but I am curious. While studying for an exam and working through some sliding window problems, I wrote one down incorrectly and came up with a theoretical question: what happens if the RWS (receiving window size) and SWS (sending window size) are somehow mismatched? As an example, let's say I'm sending 10 packets of information with a timeout of 1 RTT, but the SWS is 4 while the RWS is 3. For this purpose, let's say that the RX side is not able to send an ACK for packet 1 before packet 4 arrives, so the window is not "slid" in time.
My idea was that the extra 4th packet is just discarded and never ACKed, so it would time out on the TX end, which would resend packet 4 after 1 RTT, and this process would continue for 5, 6, 7, 8, with 8 being the one discarded and going through the same process. Would this be a correct assumption? Or is there something to this that I'm maybe not understanding?
Thanks in advance.
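For what it's worth, here is a purely illustrative simulation of that scenario (SWS = 4, RWS = 3, the receiver does not slide its window within a burst, unACKed packets are retransmitted after one RTT). It is a sketch of the assumption in the question, not of any particular TCP implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative simulation of the mismatch described in the question.
public class WindowMismatchSim {
    static final int TOTAL = 10, SWS = 4, RWS = 3;

    public static void main(String[] args) {
        int sendBase = 1;   // oldest unACKed packet
        int recvBase = 1;   // next packet the receiver expects, in order
        int rtt = 0;

        while (sendBase <= TOTAL) {
            rtt++;
            // Sender transmits everything its own window currently allows.
            List<Integer> burst = new ArrayList<>();
            for (int p = sendBase; p < sendBase + SWS && p <= TOTAL; p++) {
                burst.add(p);
            }
            System.out.println("RTT " + rtt + ": sender transmits " + burst);

            // Receiver accepts only in-order packets inside its (smaller) window,
            // which is not slid until the burst is over.
            int windowStart = recvBase;
            for (int p : burst) {
                if (p == recvBase && p < windowStart + RWS) {
                    recvBase = p + 1;
                } else {
                    System.out.println("  packet " + p + " discarded (outside RWS)");
                }
            }
            System.out.println("  receiver sends cumulative ACK up to " + (recvBase - 1));

            // The ACK slides the sender's window; the discarded packet times out
            // and is retransmitted at the start of the next RTT.
            sendBase = recvBase;
        }
    }
}
```

Under these assumptions, every fourth packet of a burst is dropped and resent in the next round, much as the question suggests.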

Using 3 different communication protocols in the same MCU

For a project I need to communicate over a CAN bus network, an Ethernet network, and RS-232. I want to use a single MCU that will act as the main unit of the CAN bus star topology and the Ethernet star topology, and that MCU will also be forwarding the RS-232 data it receives to another device. I want to use high-speed CAN, which can run at up to 1 Mbit/s. However, RS-232 maxes out at around 20 kbaud. I wonder if it is doable with 1 MCU to handle 3 different communications (CAN bus, Ethernet and RS-232). I am afraid of getting overrun with data at some point. I can buffer data short term if data comes in bursts that can be averaged out. For continuous data where I'll never be able to keep up, I'll need to discard messages, perhaps in a managed way. But I do not want to discard any data. So my question is: would using 1 MCU for this case work? And are there any software tricks that would help me with this case (like giving the CAN bus a higher priority, etc.)?
Yes, this can be done with a single MCU. Even a simple MCU should easily be able to handle data rates of 1 Mbps. Most likely you want to use DMA-enabled transfers, so the CPU core only needs to act when the transmission of a chunk of data has completed.
The problem of being overrun by data due to the mismatch in data rate is a separate topic:
If the mismatch persists, no system can handle it, no matter how capable.
If the mismatch is temporary, it's just a function of the available buffer size.
So if the worst case you want to handle is 10 s of incoming data at 1 Mbps (with an outgoing rate of 20 kbps), then you will need 10 s x (1 Mbps - 20 kbps) = 9.8 Mbit = 1.225 MByte of buffer memory.
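The same calculation as a tiny snippet, using the answer's numbers (a 10 s burst arriving at 1 Mbps while draining at 20 kbps); the class name is illustrative:

```java
// Buffer calculation from the answer above: backlog = burst time x (in rate - out rate).
public class BufferSize {
    public static void main(String[] args) {
        double inBps = 1_000_000;     // CAN side, 1 Mbit/s
        double outBps = 20_000;       // RS-232 side, 20 kbit/s
        double burstSeconds = 10;

        double backlogBits = burstSeconds * (inBps - outBps);  // 9.8 Mbit
        double backlogBytes = backlogBits / 8;                  // 1.225 MByte

        System.out.printf("Required buffer: %.1f Mbit = %.3f MByte%n",
                backlogBits / 1e6, backlogBytes / 1e6);
    }
}
```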

TCP and receiving order

I'm preparing for an exam and I stumbled upon a kind of interesting question. Basically, what happens when, for whatever reason, the receiver receives the packets in reverse order?
For example, I have a window of size 2 and I send packets 2 and 3, but 3 arrives before 2.
I'm not quite sure about the answer so I'll ask here.
Does the receiver send a duplicate ACK upon arrival of 3 and then ACK 2 upon arrival of 2?
And then, what happens to the transmission window?
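Here is a small sketch of the cumulative-ACK behaviour the question is asking about, assuming segment 1 was already delivered and segment 3 arrives before segment 2. It models only the ACK numbers, not a real TCP stack, and all names are illustrative:

```java
import java.util.List;
import java.util.TreeSet;

// Illustrative model of a cumulative-ACK receiver for the scenario in the
// question: segment 3 arrives before segment 2 (segment 1 already delivered).
public class OutOfOrderReceiver {
    private int nextExpected;                                  // lowest in-order segment still missing
    private final TreeSet<Integer> buffered = new TreeSet<>(); // out-of-order segments held back

    OutOfOrderReceiver(int nextExpected) {
        this.nextExpected = nextExpected;
    }

    // Returns the cumulative ACK sent in response to this arrival
    // ("ACK n" means everything below n has been received).
    int receive(int seq) {
        if (seq == nextExpected) {
            nextExpected++;
            // Deliver anything buffered that has now become contiguous.
            while (buffered.remove(nextExpected)) {
                nextExpected++;
            }
        } else if (seq > nextExpected) {
            buffered.add(seq);  // hold the gap; the ACK repeats the old value (a duplicate ACK)
        }
        return nextExpected;
    }

    public static void main(String[] args) {
        OutOfOrderReceiver rx = new OutOfOrderReceiver(2);  // segment 1 already ACKed
        for (int seq : List.of(3, 2)) {
            System.out.println("segment " + seq + " arrives -> ACK " + rx.receive(seq));
        }
        // Output: "segment 3 arrives -> ACK 2" (a duplicate ACK),
        // then "segment 2 arrives -> ACK 4" (cumulative, covering both segments).
    }
}
```

In this model the sender's window slides forward only when the cumulative ACK arrives, so the out-of-order arrival by itself does not move the window.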

Network: sending a video (in theory)

Context
We have an unsteady transmission channel. Some packets may be lost.
Sending a single network packet in any direction (from A to B or from B to A) takes 3 seconds.
We allow a signal delay of 5 seconds, no more. So we have a 5-second buffer. We can use those 5 seconds however we want.
Currently we use only 80% of the transmission channel, so we have 1/4 more room to utilize.
The quality of the video cannot be worsened.
Problem
We need to make the quality better. How to handle lost packets?
Solution proposition
One thing is certain: we cannot use TCP in this case, because when TCP detects a problem, it requests retransmission of the lost data. That would mean that a packet would arrive after 9 seconds, which is more than the limit.
Therefore we need to use UDP and handle those errors ourselves. How do we do that? How do we make sure that fewer packets are lost than currently, without retransmitting them?
It's a complicated solution, but by far the best option is to add forward error correction (FEC). This is how images are transferred from space probes, where the latency is measured in minutes or hours. It's also used by cell phones, where delayed packets are bad for two-way communication.
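To give a flavour of the FEC idea, here is a minimal sketch using a single XOR parity packet per group of data packets, which lets the receiver rebuild one lost packet in the group without any retransmission. Real systems use stronger codes (Reed-Solomon, fountain codes); the names here are purely illustrative:

```java
// Minimal illustration of FEC: one XOR parity packet per group of equal-length
// data packets allows recovery of any single lost packet in that group.
public class XorParityFec {
    // Parity packet = byte-wise XOR of all data packets in the group.
    static byte[] parity(byte[][] packets) {
        byte[] p = new byte[packets[0].length];
        for (byte[] packet : packets) {
            for (int i = 0; i < p.length; i++) {
                p[i] ^= packet[i];
            }
        }
        return p;
    }

    // Rebuild the one missing packet by XOR-ing the parity with the survivors.
    static byte[] recover(byte[][] group, int lostIndex, byte[] parity) {
        byte[] rebuilt = parity.clone();
        for (int k = 0; k < group.length; k++) {
            if (k == lostIndex) continue;
            for (int i = 0; i < rebuilt.length; i++) {
                rebuilt[i] ^= group[k][i];
            }
        }
        return rebuilt;
    }

    public static void main(String[] args) {
        byte[][] group = { "frameA".getBytes(), "frameB".getBytes(), "frameC".getBytes() };
        byte[] parity = parity(group);

        // Pretend packet 1 ("frameB") was lost in transit.
        byte[] rebuilt = recover(group, 1, parity);
        System.out.println("Recovered: " + new String(rebuilt));   // prints "frameB"
    }
}
```

The cost is the extra parity traffic (which the unused ~20% of the channel could carry) and a small added delay of one group, which has to stay within the 5-second budget.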
A not-as-good but easier-to-implement option is to use UDT. This is a UDP-based library with TCP-like retransmission, but it allows you much more control over the protocol.
