I am sending a 30 MB file from one machine to another (both Raspberry Pi 4) using scp. I tried to add delay noise to the network using netem on the client:
sudo tc qdisc change dev wlan0 root netem delay 50ms 30ms distribution normal
and the transfer takes 07:17 (a rate of 70.6 KB/s).
However, when I increase the mean of the normally distributed delay using
sudo tc qdisc change dev wlan0 root netem delay 500ms 30ms distribution normal
it only takes 00:33 (a rate of 920.1 KB/s). Logically it should take at least as long as with the smaller delay, yet it is much faster. I repeated the test multiple times and got the same result.
I was wondering whether I am using tc correctly.
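One quick sanity check worth running (a check, not a diagnosis): confirm which netem parameters are actually active, since tc qdisc change only modifies an existing qdisc and errors out if none was added first.
tc qdisc show dev wlan0
(shows the currently installed qdisc and its parameters)
sudo tc qdisc replace dev wlan0 root netem delay 500ms 30ms distribution normal
(replace adds the qdisc if it is missing and changes it otherwise)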
In STP we have the Max Age timer. It tells a bridge how long to wait for a superior BPDU (on root or blocking ports) before it assumes the root bridge or the link is dead. Given that timer, why must every bridge also know the Hello timer? How do bridges benefit from it?
According to 802.1D, Hello Time is the interval between periodic transmissions of Configuration Messages by designated ports. The timer ensures that at least one BPDU is transmitted by a designated port in each Hello Time period.
Check the standard.
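For a concrete look at the two timers side by side, here is a Linux-bridge illustration (br0 is an assumed bridge name; values are in centiseconds). It also shows the 802.1D relationship between them: Max Age must be at least 2 × (Hello Time + 1s), so a dead root is only declared after several missed hellos.
ip link set dev br0 type bridge hello_time 200 max_age 2000
(a 2s hello and a 20s max age, satisfying max age ≥ 2 × (hello + 1s))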
I have a Solarflare NIC with paired RX and TX queues (8 sets, on an 8-core machine with real cores, no hyperthreading, running Ubuntu), and each set shares an IRQ number. I have used smp_affinity to set which IRQs are processed by which core. Does this ensure that the transmit (TX) interrupts are also handled by the same core? How does this interact with XPS?
For instance, let's say the IRQ number is 115, set to core 2 via smp_affinity. Say the NIC chooses tx-2 for outgoing TCP packets, and tx-2 also happens to use IRQ 115. If I have an XPS setting saying tx-2 should be used by CPU 4, which one takes precedence: XPS or smp_affinity?
Also, is there a way to see or set which TX queue is used for a particular app or TCP connection? I have an app that receives UDP data, processes it, and sends TCP packets, in a very latency-sensitive environment. I want to handle the TX interrupts for the outgoing traffic on the same CPU (or one on the same NUMA node) as the app creating the traffic, but I have no idea how to find which TX queue the app's traffic uses. While the receive side has indirection tables to set up rules, I do not know whether there is a way to influence TX-queue selection and thereby pin it to a set of dedicated CPUs.
You can give the application a preferred CPU by setting its CPU affinity (taskset) or NUMA node affinity, and you can also set the IRQ affinities (via /proc/irq/270/smp_affinity, or by using the old Intel script floating around, set_irq_affinity.sh, which is on GitHub). This won't completely guarantee which IRQ/CPU is used, but it gives you a good head start. If that is not enough, to improve latency you might want to enable packet steering on the receive queues so packets reach the correct CPU sooner (/sys/class/net/<iface>/queues/rx-#/rps_cpus and tx-#/xps_cpus). There is also the irqbalance program and more... it is a broad subject and I am just learning much of it myself.
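As a concrete sketch of the pieces above (eth0, IRQ 115, CPU 2, and PID 12345 are placeholders, not values from your setup; the values written to the files are CPU bitmasks):
taskset -cp 2 12345
(pin the app to CPU 2)
echo 4 > /proc/irq/115/smp_affinity
(route IRQ 115 to CPU 2; 0x4 is the bit for CPU 2)
echo 4 > /sys/class/net/eth0/queues/tx-2/xps_cpus
(XPS: packets generated on CPU 2 prefer queue tx-2)
echo 4 > /sys/class/net/eth0/queues/rx-2/rps_cpus
(RPS: steer packets arriving on rx-2 to CPU 2)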
I'm trying to use tc to add latency to responses from a webserver in order to simulate a WAN.
I found a few related posts and tried out the command:
tc qdisc add dev eth0 root netem delay 100ms
I am using a 10G NIC to make a high volume of requests totalling about 3 Gbps. After using tc to add latency, I see a massive drop-off in throughput, and the latency of responses rises to about 3 seconds.
Am I missing something in the above command? Is it limiting the rate/throughput in addition to adding latency?
N.B. tc qdisc returns the following:
qdisc netem 8005: dev eth0 root refcnt 72 limit 1000 delay 100.0ms 10.0ms rate 10000Mbit
Firstly, I think tc cannot process packets at such high data rates. I experienced a drop in throughput as well when I was playing with it a few years ago, on both 10GbE and 40GbE.
Unfortunately, I have no access to such hardware now.
I would suggest you check the buffer sizes, since you are emulating a delay of 100ms: packets are getting dropped somewhere, which hurts your throughput. The increased latency can come from a packet reaching the destination only after being retransmitted many times (buffer too small) or from being queued for a very long time (buffer too large).
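One buffer worth checking is netem's own queue: your tc qdisc output above shows limit 1000 (packets). At roughly 3 Gbps with a 100ms delay, about 3e9/8 × 0.1 ≈ 37 MB is in flight, i.e. around 25,000 full-size 1500-byte packets, so a 1000-packet queue would drop most of them. A possible starting point (the value is illustrative, not tuned):
tc qdisc change dev eth0 root netem delay 100ms limit 100000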
I hope I've come to the right place to ask this; if not, I guess my question becomes where I can find people who know the answer, as a week on Google hasn't helped!
I have NetEm set up and finally working, but what I want to do is test using an IPv4 filter, i.e. I want latency added to one IP without adding it to others, to test the effect of a range of different latencies all connected to one server.
I'm running NetEm on Ubuntu; any advice pointing me in the right direction would help!
Thanks,
Dave
Use the set of commands below to set up netem for a particular IP address.
tc qdisc del dev eth0 root
(assuming eth0 is the interface)
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
(100mbit rate of tokens)
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 flowid 1:1 match ip dst 192.168.2.219
(assuming you want to throttle bw for this dst ip address)
tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 25ms
(assuming you want a 25ms delay)
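To verify that the hierarchy is in place and the filter is actually matching, you can check the statistics (again assuming eth0):
tc -s qdisc show dev eth0
tc filter show dev eth0
tc -s class show dev eth0
(the class and netem counters should increase for traffic to 192.168.2.219)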
Refer to my other answer for a better explanation
And this excellent thesis for a better understanding
I have two apps communicating over UDP on the same host, and I would like to send packets with varying delays (jitter) but no out-of-order packets. I have this rule on the loopback interface:
sudo tc qdisc add dev lo root handle 1: netem delay 10ms 100ms
This seems to create the jitter successfully; however, there are out-of-order packets. Basically, I would like the receiver to get the packets in the order they were sent, just with varying delay, i.e. with jitter.
I tried some basic reorder commands. When I use reorder 100%, it does the reordering but there is no jitter in that case. If I use reorder with anything less than 100%, there are out-of-order packets.
It says here that if I execute the following command, the packets will stay in order:
sudo tc qdisc add dev lo parent 1:1 pfifo limit 1000
But I still get out-of-order packets. Any help is much appreciated.
(§1) According to the official documentation (delay section), this command
# tc qdisc change dev eth0 root netem delay 100ms 10ms
... causes the added delay to be 100ms ± 10ms.
In your command, the second time argument (the jitter) is greater than the first (the base delay).
(§2) Additionally, under the packet re-ordering section, this command
# tc qdisc change dev eth0 root netem delay 100ms 75ms
... will cause some reordering. If the first packet gets a random delay of 100ms (100ms base - 0ms jitter) and the second packet, sent 1ms later, gets a delay of 50ms (100ms base - 50ms jitter), the second packet will be sent first.
Educated guess (untested):
Switch the position of your last two arguments from
sudo tc qdisc add dev lo root handle 1: netem delay 10ms 100ms
to
sudo tc qdisc add dev lo root handle 1: netem delay 100ms 10ms
Although, per (§2), your packets can still be reordered if you send them back-to-back within 20ms: if the 1st packet gets a 100+10 = 110ms delay and the 2nd packet, sent 1ms later, gets a 100-10 = 90ms delay, the 2nd packet arrives before the 1st.
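A combined sketch (untested, merging the swap above with the pfifo child the question already uses, which replaces netem's internal tfifo with a strict FIFO so packets dequeue in arrival order):
sudo tc qdisc add dev lo root handle 1: netem delay 100ms 10ms
sudo tc qdisc add dev lo parent 1:1 pfifo limit 1000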