Throughput decreases above MTU 5000 - TCP

I am trying to test throughput between two computers directly connected via 1 GbE, testing with iperf. I get around 980 Mbps when the MTU is between 5000 and 5050, but throughput drops drastically to around 680 Mbps for anything above MTU=5050. I have tried varying window sizes, with the same result.
Increasing the MTU should decrease the overhead and therefore increase the throughput, or at least not reduce it.
I can't figure out this strange behavior. By the way, I am testing TCP throughput.
Any help is appreciated, and thanks! This is my first ever post (question) on any forum :) usually I find the answers...
Additional info!
Two CentOS systems
One system is a Xen 4.2 host (but that shouldn't be the problem)
Checked with varying buffer sizes in /proc/sys/net/ipv4
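For context on how small the expected gains are, here is a rough back-of-the-envelope sketch (my own, assuming plain 20-byte IPv4 and 20-byte TCP headers and ignoring Ethernet framing and TCP options) of the payload fraction at various MTUs - the jump from 1500 to 5000 matters far more than anything above 5000:

```python
# Rough TCP goodput fraction for a given MTU, assuming
# 20-byte IPv4 + 20-byte TCP headers and no TCP options.
IP_HDR = 20
TCP_HDR = 20

def goodput_fraction(mtu: int) -> float:
    """Payload bytes per frame divided by total IP bytes per frame."""
    payload = mtu - IP_HDR - TCP_HDR
    return payload / mtu

for mtu in (1500, 5000, 9000):
    print(f"MTU {mtu}: {goodput_fraction(mtu):.1%} payload")
```

Going from MTU 5000 to 9000 only recovers a fraction of a percent of header overhead, so a 300 Mbps drop clearly isn't explained by framing overhead - something else (buffering, paging, driver behavior) is going on.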

Just a thought... With MTUs at that size you start heading towards the memory page limit, though admittedly at 5000-5050 you should already have exceeded it for a 4K page size (the default for Xen 3.0). Still, I wonder whether your memory is getting fragmented. Try increasing your memory page size so that everything you want to fill into a frame is sure to fit in one page of memory, and see if that helps.
... Actually, the answer to your question might be here: http://comments.gmane.org/gmane.network.iperf.user/485

Related

Would a device physically closer to me transfer a file quicker (P2P)?

Hello all, I am new to networking and a question arose in my head: would a device that is physically closer to another device transfer a file quicker than a device across the globe if a P2P connection were used?
Thanks!
No, not generally.
The maximum throughput between any two nodes is limited by the slowest interconnect in their path. When acknowledgments are used (e.g. with TCP), throughput is also limited by congestion, by the send/acknowledgment window size and the round-trip time (RTT) - you cannot transfer more than one full window per RTT period - and by packet loss.
Basically, distance itself doesn't matter. However, a long-distance path likely traverses a large number of interconnects, increasing the chance of a weak link, congestion, or packet loss. Also, the RTT inevitably increases, requiring a larger send window (the TCP window scale option).
Whether the links are copper, fiber, or wireless doesn't matter - though wireless does carry some risk of additional packet loss. P2P versus classic client-server doesn't matter either.
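The window/RTT limit above can be made concrete with a quick calculation (a sketch of my own; the function name and the example figures are illustrative):

```python
# Window-limited TCP throughput: you can send at most one full
# window of unacknowledged data per round-trip time.
def max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    return window_bytes * 8 / rtt_seconds

# Classic 64 KB window (no window scaling) over a 100 ms
# intercontinental RTT - roughly 5.2 Mbit/s, no matter how
# fast the links in between are:
print(max_throughput_bps(65535, 0.100))
```

This is exactly why long-distance transfers need the TCP window scale option: without it, a high RTT caps throughput far below the link speed.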

Does in/out bandwidth share the same limit of network card?

This may be a very rookie question. Say I have a network card with a bandwidth limit of 100 MB/s: is it possible for in and out bandwidth to reach that limit at the same time? Or does this inequality hold at any point in time: in bandwidth + out bandwidth <= 100 MB/s?
First, your network card is probably 100 Mb/sec, not 100 MB/sec. Ethernet is by far the most common wired network type, and it commonly comes in 10, 100, and 1000 megabits per second. A 100 megaBIT/sec Ethernet interface is roughly capable of 12.5 megaBYTES per second.
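The bit/byte conversion trips up a lot of people, so here is the arithmetic spelled out (a trivial sketch; the function name is mine):

```python
# Convert an advertised link rate in megabits/s to megabytes/s.
def mbit_to_mbyte(mbit_per_s: float) -> float:
    return mbit_per_s / 8  # 8 bits per byte

print(mbit_to_mbyte(100))   # 100 Mb/s  -> 12.5 MB/s
print(mbit_to_mbyte(1000))  # 1000 Mb/s -> 125.0 MB/s
```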
If you're plugged into an ethernet switch, you're most likely going to be connecting in Full Duplex mode. This allows both ends to speak to each other simultaneously without affecting the performance of each other.
You'll never quite reach the full advertised speed, though: a Gigabit network interface (1000 Mb/sec) will usually manage transfers in the high 900s (Mb/sec) in each direction without problems. A few sources of overhead prevent you from reaching the full speed, and many lower-end network cards or computers struggle to reach it at all.
If you're plugged into an Ethernet hub, only one end can talk at a time. There, in + out can't exceed the link speed, and it is typically far lower because of collisions. It's really unlikely you'll find a hub anymore unless you're really trying to; switches are pretty much the only thing you can buy now outside of exotic applications.
TL;DR: You're almost always using full duplex mode, which allows up to (but usually less than) the advertised link speed in both directions simultaneously.

SNMP network bandwidth logger/monitor

I have a switch that works with the SNMP protocol. I want to get/log or monitor the bandwidth data for the switch and its connected devices/ports. The amount of incoming and outgoing data should be calculated periodically and written to a simple log file.
As another option, a simple program for monitoring the network bandwidth, total data traffic, etc. of an SNMP network might be useful for me. But it has to be compact and lightweight; many programs are not freeware and are very large. Is there a solution for this? Thanks..
Interfaces monitored through SNMP report their data usage in the ifInOctets and ifOutOctets counters. The numbers they report can't be used directly; you need to sample them every X minutes or seconds, where X gets smaller the faster the interface. You simply subtract the previous reading from the current one to get how much traffic went by during those X minutes. Watch out for wrapping as the counter reaches the 32-bit integer limit (it certainly won't send negative traffic ;-). The value of X is greatly affected by how long it takes to wrap a 32-bit counter at the interface's maximum speed.
If you have a high-speed switch, you should ideally use the ifHCInOctets and ifHCOutOctets counters if your switch supports them. These are 64-bit numbers and won't wrap frequently, so X can be much, much larger. But not all devices support them.
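The subtract-and-handle-wrap step looks like this (a minimal sketch of my own; the function name is illustrative, and it only copes with a single wrap between samples - another reason to sample often or use the 64-bit counters):

```python
# Compute a rate from two successive SNMP octet-counter samples,
# handling a single 32-bit counter wrap between them.
COUNTER32_MOD = 2**32

def octets_per_second(prev: int, curr: int, interval_s: float,
                      modulus: int = COUNTER32_MOD) -> float:
    # Modular subtraction gives the right delta even if the
    # counter wrapped once since the previous sample.
    delta = (curr - prev) % modulus
    return delta / interval_s

# Normal case: 600 octets in 60 s -> 10 octets/s
print(octets_per_second(100, 700, 60.0))

# Wrapped case: counter rolled over between samples
print(octets_per_second(4_294_967_000, 1_000, 60.0))
```

If the interface is fast enough to wrap more than once within your sampling interval, no amount of modular arithmetic can recover the true delta - you must shorten the interval or switch to ifHCInOctets/ifHCOutOctets.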

Does a large TCP window size cause issues on high error rate networks?

When sending TCP packets over a high latency network, one can set the TCP window size on some operating systems to allow the network utilization to be higher.
Will this cause issues on networks that also have high error rates?
When an error is found during transmission, does the whole window need to be retransmitted? If your window is large enough, is it true that a network with a high error rate might not make progress due to high probability of an error in each chunk of window size?
This answer is pretty anecdotal as I don't have access to the code or data anymore. Just an old guy's memories of pain.
Beware of cascading effects if you do this.
In the mid '90s I worked on software that ran over satellite links that were also error prone.
Certain events raised our error rate to 30% or more. With big windows, we sometimes couldn't get one packet transmitted before the errors started hammering us. This was before there was true window scaling.
Take a look at RFC 1323 and judge your window sizes based on your bandwidth, your latency and the algorithms therein.
It's also likely you will find this blog post useful.
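The "can't make progress" intuition in the question can be quantified with a quick sketch (my own, assuming independent per-segment loss, which is a simplification - real loss is often bursty):

```python
# Probability that an entire window's worth of segments arrives
# intact, assuming independent per-segment loss.
def window_success_prob(window_bytes: int, mss: int, seg_loss: float) -> float:
    segments = -(-window_bytes // mss)  # ceiling division
    return (1 - seg_loss) ** segments

# 30% segment loss (as on the satellite link above) with a
# 64 KB window of 1460-byte segments: the chance of an
# error-free window is vanishingly small.
print(window_success_prob(65536, 1460, 0.30))
```

Note that modern TCP does not retransmit the whole window on a single loss (selective acknowledgment, RFC 2018, lets it resend only the missing segments), but at error rates this high the effective throughput still collapses.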

Compensating for jitter

I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say if this is an application you are developing yourself or one which you are simply using - you will obviously have more control over the former so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 ms (milliseconds) before users perceive an issue.
Jitter is also important - in simple terms, it is the variance in end-to-end packet delay. For example, the delay between packet 1 and packet 2 may be 20 ms, while the delay between packet 2 and packet 3 may be 30 ms. Having a jitter buffer of 40 ms means your application waits up to 40 ms between packets, and so would not 'lose' any of these packets.
Any packet not received within the jitter buffer window is usually discarded, and hence there is a relationship between jitter and the effective packet loss for your connection. Packet loss also typically affects users' perception of VoIP quality - different codecs have different tolerances - a common target is to keep it below 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers are either static or dynamic (adaptive) - in either case, the bigger they get, the more delay they introduce into the call, and you get back to the delay issue above. A typical jitter buffer might be between 20 and 50 ms, either set statically or adapting automatically to network conditions.
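If you are developing the application yourself, RTP's standard smoothed jitter estimator (RFC 3550, section 6.4.1) is a good starting point for sizing an adaptive buffer. A minimal sketch (the function name and sample numbers are mine):

```python
# RFC 3550-style smoothed interarrival jitter estimator.
# 'transit' is each packet's (arrival time - RTP timestamp).
def update_jitter(jitter: float, transit_prev: float, transit_curr: float) -> float:
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16  # 1/16 gain, per RFC 3550

# Transit times in ms for five successive packets:
transits = [50, 70, 55, 90, 60]
jitter = 0.0
for prev, curr in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, curr)
print(round(jitter, 2))
```

An adaptive jitter buffer can then be sized as a small multiple of this smoothed estimate, growing when jitter rises and shrinking again to keep the end-to-end delay under the ~200 ms budget mentioned above.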
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common online connection speed tests, as many include a specific VoIP test that will give you an idea of whether your local connection is good enough for VoIP (although bear in mind that these tests only reflect conditions at the exact moment you run them).