I have two apps communicating over UDP on the same host, and I would like to send packets with varying delays (jitter) but no out-of-order packets. I have this rule on the loopback interface:
sudo tc qdisc add dev lo root handle 1: netem delay 10ms 100ms
This seems to create the jitter successfully; however, there are out-of-order packets. Basically, I would like to receive the packets on the receiver side in the order they are sent from the sender, just with varying delay, i.e. with jitter.
I tried some basic reorder commands. When I use reorder 100%, it does the reordering, but there is no jitter in that case. If I use the reorder command with anything less than 100%, then there are out-of-order packets.
It says here that if I execute the following command, the packets will stay in order:
sudo tc qdisc add dev lo parent 1:1 pfifo limit 1000
But I still get out-of-order packets. Any help is much appreciated.
(§1) According to the official documentation's delay section, this command
# tc qdisc change dev eth0 root netem delay 100ms 10ms
... causes the added delay to be 100ms ± 10ms
In your command, the second time argument (the jitter) is greater than the first (the base delay).
(§2) Additionally, under the packet re-ordering section, this command
# tc qdisc change dev eth0 root netem delay 100ms 75ms
... will cause some reordering. If the first packet gets a random delay of 100ms (100ms base - 0ms jitter) and a second packet sent 1ms later gets a delay of 50ms (100ms base - 50ms jitter), the second packet will be sent first.
Educated guess (didn't test):
Switch the position of your last two arguments from
sudo tc qdisc add dev lo root handle 1: netem delay 10ms 100ms
to
sudo tc qdisc add dev lo root handle 1: netem delay 100ms 10ms
Although, according to (§2), it is still possible for your packets to get reordered if you send them back-to-back less than 20ms apart: the 1st packet gets a 100+10=110ms delay, a 2nd packet sent 1ms later gets a 100-10=90ms delay, and the 2nd packet arrives before the 1st one.
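If it helps, here is a minimal sketch for applying the corrected rule and eyeballing the result with ping (the ~200ms figure is an assumption based on both the request and the reply to 127.0.0.1 passing through the lo qdisc):
# replace any existing root qdisc on lo with the corrected rule (base delay first, then jitter)
sudo tc qdisc replace dev lo root handle 1: netem delay 100ms 10ms
# RTTs should hover around 200ms, varying by up to roughly +/-20ms
ping -c 10 127.0.0.1
# remove the rule when done
sudo tc qdisc del dev lo root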
I am sending a 30 Mbyte file from one machine to another (both Raspberry Pi 4) using scp. I tried to add noise to the network using netem on the client:
sudo tc qdisc change dev wlan0 root netem delay 50ms 30ms distribution normal
and it takes 07:17 minutes (a rate of 70.6 KB/s).
However, when I increase the mean of the added normally distributed delay using
sudo tc qdisc change dev wlan0 root netem delay 500ms 30ms distribution normal
it only takes 00:33 minutes (a rate of 920.1 KB/s). Logically it should take at least as long as with the former distribution; however, it is much faster. I repeated it multiple times and got the same result.
I was wondering if I am using the tc in a correct way.
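For what it's worth, one sanity check (assuming wlan0 really is the interface scp uses) is to dump the qdisc with its statistics before each run and confirm the delay shown matches the rule you intended to apply:
# show the qdisc currently attached to wlan0, with packet and drop counters
tc -s qdisc show dev wlan0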
In STP we have the Max Age timer. It tells a bridge how long to wait for a superior BPDU (on root or blocking ports) before it assumes the root bridge or the link is dead. Given that timer, why must every bridge also know the Hello timer? How do they benefit from it?
According to 802.1D, Hello Time is the interval between periodic transmissions of Configuration Messages by Designated Ports. Using this timer, a bridge makes sure that at least one BPDU is transmitted by a Designated Port in each Hello Time period.
Check the standard.
I'm trying to use tc to add latency to responses from a webserver in order to simulate a WAN.
I found a few related posts and tried out the command:
tc qdisc add dev eth0 root netem delay 100ms
I am using a 10G NIC to make a high volume of requests equalling about 3Gbps. After using tc to add latency, I see a massive drop-off in throughput, and the latency of responses climbs to about 3 seconds.
Am I missing something in the above command? Is it limiting the rate/throughput in addition to adding latency?
N.B. tc qdisc returns the following:
qdisc netem 8005: dev eth0 root refcnt 72 limit 1000 delay 100.0ms 10.0ms rate 10000Mbit
Firstly, I think tc cannot process packets at such high data rates. I experienced a drop in throughput as well when I was playing with it a few years ago, on both 10GbE and 40GbE.
Unfortunately, I have no access to such hardware now.
I would suggest you check the buffer sizes, since you are emulating a delay of 100ms. Packets are getting dropped somewhere, and that is hurting your throughput. The increased latency can be because a packet only makes it to the destination after being dropped many times (small buffer size) or after being queued for a very long time (very large buffer size).
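As a rough sketch of that buffer math (3Gbps comes from the question, 1500 bytes is an assumed MTU): 100ms of delay at 3Gbps keeps about 0.1 s x 3 Gbit/s = 300 Mbit, roughly 37.5 MB or about 25,000 full-size packets in flight, while the qdisc output above shows netem's default limit of 1000 packets, so everything beyond that is dropped. Raising the netem limit might look like this:
# enlarge the netem queue so it can hold well over 100ms of traffic at ~3Gbps
# (30000 packets is an estimate for 1500-byte frames; adjust for your MTU and rate)
tc qdisc change dev eth0 root netem delay 100ms limit 30000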
Hope I've come to the right place to ask this; if not, I guess my question becomes where I can find people who know the answer, as a week on Google hasn't helped!
I have NetEm set up and finally working, but what I want to do is test using an IPv4 filter, i.e. I want latency added to one IP without adding it to others, to test the effect of a range of different latencies all connected to one server.
I'm running NetEm on Ubuntu; any advice pointing me in the right direction would help!
Thanks,
Dave
Please use the set of commands below to set up netem to do what you want for a particular IP address.
tc qdisc del dev eth0 root
(assuming eth0 is the interface)
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
(100mbit rate of tokens)
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 flowid 1:1 match ip dst 192.168.2.219
(assuming you want to throttle bw for this dst ip address)
tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 25ms
(assuming you want a 25ms delay)
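If it's useful, a quick way to verify the setup took effect (192.168.2.219 is just the example address from above; substitute your own host) is to inspect what was installed and compare pings to a filtered and an unfiltered address:
# inspect the installed qdiscs, classes and filters, with their statistics
tc -s qdisc show dev eth0
tc -s class show dev eth0
tc filter show dev eth0
# only traffic to the filtered address should show the extra ~25ms
ping -c 5 192.168.2.219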
Refer to my other answer for a better explanation,
and to this excellent thesis for a better understanding.
I open a TCP socket and connect it to another socket somewhere else on the network. I can then successfully send and receive data. I have a timer that sends something to the socket every second.
I then rudely interrupt the connection by forcibly breaking it (pulling out the Ethernet cable in this case). My socket still reports that it is successfully writing data out every second. This continues for approximately 1 hour and 30 minutes, at which point a write error is finally reported.
What determines this timeout, after which a socket finally accepts that the other end has disappeared? Is it the OS (Ubuntu 11.04), the TCP/IP specification, or a socket configuration option?
Pulling the network cable will not break a TCP connection(1) though it will disrupt communications. You can plug the cable back in and once IP connectivity is established, all back-data will move. This is what makes TCP reliable, even on cellular networks.
When TCP sends data, it expects an ACK in reply. If none comes within some amount of time, it re-transmits the data and waits again. The time it waits between transmissions generally increases exponentially.
After some number of retransmissions or some amount of total time with no ACK, TCP will consider the connection "broken". How many times or how long depends on your OS and its configuration but it typically times-out on the order of many minutes.
From Linux's tcp.7 man page:
tcp_retries2 (integer; default: 15; since Linux 2.2)
The maximum number of times a TCP packet is retransmitted in
established state before giving up. The default value is 15, which
corresponds to a duration of approximately between 13 to 30 minutes,
depending on the retransmission timeout. The RFC 1122 specified
minimum limit of 100 seconds is typically deemed too short.
This is likely the value you'll want to adjust to change how long it takes to detect if your connection has vanished.
(1) There are exceptions to this. The operating system, upon noticing a cable being removed, could notify upper layers that all connections should be considered "broken".
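As a hedged sketch of inspecting and lowering tcp_retries2 on Linux (the value 8 below is only an example, not a recommendation; the wall-clock time it maps to depends on the retransmission timeout):
# show the current retransmission limit for established connections
sysctl net.ipv4.tcp_retries2
# lower it so a vanished peer is detected sooner
sudo sysctl -w net.ipv4.tcp_retries2=8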
If you want quick socket-error propagation to your application code, you may want to try this socket option:
TCP_USER_TIMEOUT (since Linux 2.6.37)
This option takes an unsigned int as an argument. When the
value is greater than 0, it specifies the maximum amount of
time in milliseconds that transmitted data may remain
unacknowledged before TCP will forcibly close the
corresponding connection and return ETIMEDOUT to the
application. If the option value is specified as 0, TCP will
use the system default.
See the full description in the Linux tcp(7) man page. This option is more flexible than editing tcp_retries2 (you can set it on the fly, right after socket creation), and it applies exactly to the situation where your client's socket isn't aware of the server socket's state and may end up in a so-called half-closed state.
Two excellent answers are here and here.
TCP user timeout may work for your case: The TCP user timeout controls how long transmitted data may remain unacknowledged before a connection is forcefully closed.
There are 3 OS-dependent TCP keepalive parameters.
On Linux the defaults are:
tcp_keepalive_time default 7200 seconds
tcp_keepalive_probes default 9
tcp_keepalive_intvl default 75 sec
Total timeout time is tcp_keepalive_time + (tcp_keepalive_probes * tcp_keepalive_intvl), with these defaults 7200 + (9 * 75) = 7875 secs
To set these parameters on Linux:
sysctl -w net.ipv4.tcp_keepalive_time=1800 net.ipv4.tcp_keepalive_probes=3 net.ipv4.tcp_keepalive_intvl=20
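To read back the values currently in effect (and note that these timers only apply to sockets that have enabled the SO_KEEPALIVE option):
# show the current keepalive settings
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_probes net.ipv4.tcp_keepalive_intvl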