I am wondering why iperf shows much better performance over TCP than over UDP. This question is very similar to this one.
UDP should be much faster than TCP because there are no acknowledgements or congestion control. I am looking for an explanation.
UDP (807 MBits/sec)
$ iperf -u -c 127.0.0.1 -b10G
------------------------------------------------------------
Client connecting to 127.0.0.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 52064 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 962 MBytes 807 Mbits/sec
[ 3] Sent 686377 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 960 MBytes 805 Mbits/sec 0.004 ms 1662/686376 (0.24%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
TCP (26.7 Gbits/sec)
$ iperf -c 127.0.0.1
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 60712 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 31.1 GBytes 26.7 Gbits/sec
The default length of UDP datagrams is 1470 bytes. You probably need to increase the length with the -l parameter. For 26 Gbit/s I'd try something like 50000 for your -l parameter and adjust up or down from there.
You also probably need to add a space in your '-b10G' so that iperf knows 10G is the value to use for the -b parameter. Also, I believe the capital G means gigaBYTES. Your maximum achievable bandwidth with a TCP test is 26 gigaBITS, which isn't anywhere close to 10 GBytes. I would make your -b parameter value 26g, with a lowercase g.
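To see why the datagram size matters so much, here is a back-of-envelope sketch (plain arithmetic, not a measurement) of how many send() calls per second iperf would need at each -l setting to reach the 26 Gbit/s figure from the TCP run above:

```python
# Back-of-envelope only: how many datagrams per second iperf must push
# to hit a target bandwidth, for different -l payload sizes. Each
# datagram is a separate send() system call, so achievable UDP
# throughput is bounded by the per-call rate one client thread manages.

def datagrams_per_sec(target_bits_per_sec, payload_bytes):
    return target_bits_per_sec / (payload_bytes * 8)

# At 26 Gbit/s, the TCP result above:
for size in (1470, 50000):
    print(f"-l {size}: {datagrams_per_sec(26e9, size):,.0f} datagrams/sec")
# -l 1470:  ~2.2 million sends/sec (unrealistic for one thread)
# -l 50000: ~65,000 sends/sec (plausible)
```

With 1470-byte datagrams the client is syscall-bound long before it is bandwidth-bound, which matches the 807 Mbit/s result above.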
I suspect you're using the old iperf version 2.0.5, which has known performance problems with UDP. I'd suggest upgrading to version 2.0.10.
iperf -v will give the version
Note 1: The primary issue in 2.0.5 associated with this problem is mutex contention between the client thread and the reporter thread. The shared memory between these two threads was increased to address the issue.
Note 3: There are other performance related enhancements in 2.0.10.
Bob
UDP should be much faster than TCP because there are no acknowledgements or congestion control.
That mostly depends on what you are trying to do. If you need to transfer files between two endpoints over the Internet, then unless you implement a reliable transmission mechanism on top of UDP at the application level, you will want to use TCP.
In my opinion, it does not make much sense to do a pure UDP bandwidth test with iPerf, as it essentially just results in iPerf trying to put packets on the wire as fast as possible. I would suggest using it to generate UDP flows at a constant data rate instead, in order to roughly measure what would happen to rate-sensitive UDP traffic, such as VoIP, in your network.
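A constant-rate flow like that is easy to size up front. The numbers below assume a G.711 voice stream (64 kbit/s of audio, 20 ms packetization) purely as an illustration:

```python
# Sizing a constant-rate UDP flow, using a G.711 voice stream purely as
# an illustration: 64 kbit/s of audio sent as one packet every 20 ms.
# These are the kinds of flows "iperf -u -b <rate>" emulates well.

codec_bits_per_sec = 64_000   # G.711 payload rate
packet_interval_s = 0.020     # 20 ms packetization

packets_per_sec = 1 / packet_interval_s                       # 50 pps
payload_bytes = codec_bits_per_sec * packet_interval_s / 8    # 160 bytes

# On-wire rate including RTP (12) + UDP (8) + IP (20) header bytes:
wire_bits_per_sec = packets_per_sec * (payload_bytes + 40) * 8

print(packets_per_sec, payload_bytes, wire_bits_per_sec)  # 50.0 160.0 80000.0
```

So a single VoIP-like flow needs roughly 80 kbit/s on the wire; an iperf UDP test at that rate tells you something meaningful about loss and jitter for such traffic.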
TCP is helped by various hardware offloads such as TSO and GRO, whereas UDP is not helped by any of those offloads, as they don't apply to UDP datagrams.
[Topology diagram]
This is my experimental setup in Mininet. VM1 and VM2 are separate VirtualBox VM instances running on my computer, connected by a bridged adapter; S1 and S2 are connected with VXLAN forwarding.
I then used D-ITG on H1 and H2 to generate traffic: I send TCP traffic from H1 to H2 and capture it with Wireshark. During a 10-second TCP flow, I run a Python script that changes the tunnel ID of the first rule on S1 from 100 to 200.
If the packet rate and payload size are small enough, the TCP session does not seem to be affected, but when I start sending around 100 packets/sec, each with a 64-byte payload, TCP stops sending after receiving a duplicate ACK. Here are the Wireshark captures:
[Wireshark capture 1]
[Wireshark capture 2]
On the link between H1 and S1 I received ICMP destination unreachable (fragmentation needed).
After the two errors, TCP stopped sending. I understand that the "previous segment not captured" is caused by the fact that when I alter the S1 rule there is some downtime and packets are dropped by the switch. However, I don't understand why TCP does not initiate retransmission.
This does not happen if I reduce the packet rate or the payload, or if I use UDP. Is this an issue with the TCP stack, or maybe with D-ITG? Or maybe it is an issue with the sequence numbers? Is there a range beyond which unacknowledged earlier packets will not be retransmitted?
This problem has been bothering me for a while, so I hope someone here can maybe provide some clarification. Thanks a lot for reading XD.
I suspected it might be a problem with the Mininet NICs, so I tried disabling TCP segmentation offload (TSO), and it worked much better. I suppose the virtual NICs in Mininet inside a VM could not handle the large amount of traffic generated by D-ITG, so TSO may have overloaded the NIC and caused the segmentation errors.
This is just my speculation, but disabling TSO did help in my case. Additional input is welcome!
I have two adjacent computers, both running a recent version of Ubuntu. Both computers have:
Multiple USB 2.0 ports
RJ-45 connection
5400RPM hard drive
Express Card card slot
PCMCIA Type II
I want to transfer as much data as possible in a set period of time.
What is the fastest physical medium to transfer data between the two computers without swapping hard drives?
What is the fastest protocol (not necessarily TCP/IP based) for transferring high-entropy data? If it is TCP/IP, what needs to be tweaked for optimal performance?
First of all, RJ-45 is not a medium but just a connector type, so your Ethernet connection could be anything between 10BASE-T (10 Mbit/s) and 10GBASE-T (10 Gbit/s). With Ethernet, the link speed is negotiated to the highest grade that both peers support.
USB Hi-Speed mode is specified at 480 Mbit/s (60 MByte/s), but the typical maximum is somewhere near 40 MByte/s due to protocol overhead. That speed also applies only to direct USB host-to-device connections; you have two USB hosts, so you need some kind of device in the middle to handle the device side for each, and I expect that will lower the achievable data rate further.
With ethernet you have a simple plug 'n play technology with a well known (socket) API. The transfer speed depends on the link type:
Max. TCP/IP data transfer rates (taken from here):
Fast Ethernet (100Mbit): 11.7 MByte/s
Gigabit Ethernet (1000Mbit): 117.6 MByte/s
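Figures like these can be derived from per-frame overhead. A hedged sketch (assuming a 1500-byte MTU, 20-byte IP and 20-byte TCP headers plus 12 bytes of TCP timestamp options, and 38 bytes of Ethernet framing per frame; published numbers vary slightly depending on which options are counted):

```python
# Where figures like these come from: goodput = link rate x the fraction
# of each on-wire Ethernet frame that is TCP payload. Assumptions: MTU
# 1500, 20-byte IP + 20-byte TCP headers, 12 bytes of TCP timestamp
# options, and per-frame Ethernet overhead of preamble (8) + header and
# FCS (18) + inter-frame gap (12).

def tcp_goodput_mbytes(link_mbit, mtu=1500):
    payload = mtu - 20 - 20 - 12       # TCP payload bytes per frame
    on_wire = mtu + 8 + 18 + 12        # bytes consumed on the wire
    return link_mbit / 8 * payload / on_wire

print(f"{tcp_goodput_mbytes(100):.1f} MByte/s")   # 11.8 (Fast Ethernet)
print(f"{tcp_goodput_mbytes(1000):.1f} MByte/s")  # 117.7 (Gigabit)
```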
The USB 2.0 specification results in a 480 Mbit/s rate, which is 60 MB/s.
Ethernet depends on the network cards (NIC) used and to a lesser degree the wiring used. If both NICs are 1Gbit/s they will both auto-negotiate to 1 Gbit/s translating to 125 MB/s. If one or both NICs only support 100 Mbit/s then they will auto-negotiate to 100 Mbit/s and your speed will be 12.5 MBytes/s.
Wireless is also an option with 802.11n supporting up to 600 Mb/s (75 MB/s) - faster than USB 2.0.
USB 3.0 is the latest USB spec supporting up to 5 Gb/s (625 MB/s).
Of course, actual throughput will differ and depends on many other factors, such as wiring, interference, latency, etc.
TCP vs. UDP depends on the type of connection you need and your application's capacity to deal with dropped packets. TCP has a higher initial cost for setting up the connection, but the transmission is reliable, and for long-running transfers it may turn out to be the fastest. UDP is cheaper for getting started, but you may have dropped packets.
Maximum Transmission Unit (MTU) is a parameter that can have a significant effect on an IP-based network. Picking the right MTU depends on several factors; the Internet has numerous articles on this.
Other tweaks are the basics, like closing known chatty apps, the NetBIOS service if you're on Windows, etc. (lots of hits on Google for speeding up TCP).
I need to retrieve both TCP and UDP ports in the same scan with Nmap in the fastest way possible. I'll try to explain it better. If I use the most common command:
nmap 192.168.1.1
It retrieves ONLY TCP ports and it is really fast.
If I use the following command:
nmap -sU 192.168.1.1
It retrieves ONLY UDP ports and it is quite fast (well not so fast but still).
My question: is there a combination of the two commands? I tried:
nmap -sU -sS 192.168.1.1
nmap -sU -sT 192.168.1.1
But they are TERRIBLY slow.
I am using Nmap 5.51, any suggestion?
As you've seen, UDP scanning is slow: open/filtered ports typically don't respond, so nmap has to time out and then retransmit, while closed ports send back an ICMP port unreachable error, which systems typically rate-limit.
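That rate limiting dominates the scan time. A rough illustration (assuming the common Linux default of about one destination-unreachable reply per second, and a scanner that respects it):

```python
# Why UDP scans crawl: closed ports answer with ICMP port-unreachable,
# and Linux by default rate-limits those replies (roughly one per
# second). A scanner that honors the limit can only confirm closed
# ports at about that rate. Illustrative numbers, not a benchmark.

closed_ports = 1000   # nmap's default top-1000 ports, worst case all closed
icmp_replies_per_sec = 1

scan_seconds = closed_ports / icmp_replies_per_sec
print(f"~{scan_seconds / 60:.1f} minutes")  # ~16.7 minutes
```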
You can add the -T switch to increase the speed of the scan, though this may reduce accuracy and make it easier to detect.
-T<0-5>: Set timing template (higher is faster)
-PN will turn off the ping scan element
You could also scan more hosts in parallel,
or reduce the number of ports you're scanning with the -p switch or --top-ports, which scans the highest-ratio ports found in the nmap-services file.
If you were scanning multiple hosts, you could use --host-timeout to skip slow hosts.
Regarding TCP, -sS should be quicker than -sT.
HTH!
You didn't say how slow your scans get, but I think you would benefit from playing with the --min-parallelism option, which adjusts the minimum number of outstanding probes.
I'm seeing 70% reductions in scan time (compared with bare -sT -sU scans) like this. Note that it is possible to set --min-parallelism too high, such that the host (or network) cannot buffer this many queries simultaneously.
[mpenning@Hotcoffee]$ sudo nmap --min-parallelism 100 -sT -sU localhost
Starting Nmap 5.00 ( http://nmap.org ) at 2012-05-10 01:07 CDT
Interesting ports on localhost (127.0.0.1):
Not shown: 1978 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
49/tcp open tacacs
53/tcp open domain
80/tcp open http
111/tcp open rpcbind
631/tcp open ipp
2003/tcp open finger
2004/tcp open mailbox
3389/tcp open ms-term-serv
5901/tcp open vnc-1
5910/tcp open unknown
6001/tcp open X11:1
7002/tcp open afs3-prserver
53/udp open|filtered domain
69/udp open|filtered tftp
111/udp open|filtered rpcbind
123/udp open|filtered ntp
161/udp open|filtered snmp
631/udp open|filtered ipp
1812/udp open|filtered radius
1813/udp open|filtered radacct
Nmap done: 1 IP address (1 host up) scanned in 1.54 seconds
[mpenning@Hotcoffee]$
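The speedup is roughly what a back-of-envelope model predicts: when most probes end in a timeout, total scan time scales as ports divided by probes in flight. All numbers below are made up for illustration:

```python
# Rough model: when most probed ports never answer, each probe costs one
# timeout, and probes in flight overlap. Total time is approximately
# ports / parallelism * timeout. All numbers here are made up for
# illustration.

def scan_time_s(ports, timeout_s, parallelism):
    return ports / parallelism * timeout_s

base = scan_time_s(2000, 1.0, 10)    # 200.0 s
fast = scan_time_s(2000, 1.0, 100)   # 20.0 s
print(f"speedup: {1 - fast / base:.0%}")  # speedup: 90%
```

Setting parallelism higher stops helping once the host or network can no longer buffer that many outstanding probes, which is the failure mode described above.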
I am writing to ask about iptables performance for TCP and UDP filtering. I was testing it with a large number of iptables rules.
With 10,000 mixed TCP and UDP rules in the FORWARD chain, I get 35.5 Mbit/s TCP throughput and 25.2 Mbit/s UDP throughput.
I am confused why the TCP throughput is higher than the UDP throughput. I thought TCP would be slower because of the ACK packets. I have already tested this with Cisco ACLs, where UDP is faster.
Topology: PC ---- FW ----- PC
Firewall overhead is most significant with respect to packets, not bytes. So if the average UDP packet is smaller than the average TCP packet, the CPU will max out at a lower bits-per-second rate with UDP than with TCP.
Conversely, if the UDP packets are large enough to cause fragmentation and the firewall is configured to reassemble fragments before inspecting them, then the reassembly will cause substantial overhead which will reduce bits-per-second throughput.
There may also be other factors specific to the firewall implementation and configuration, but I believe those two are first-order.
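The packet-size effect is easy to quantify. A hedged sketch, assuming a hypothetical fixed budget of 100,000 rule-evaluated packets per second:

```python
# Per-packet cost dominates: at a fixed packets-per-second budget for
# rule evaluation, bits-per-second throughput scales with packet size.
# The 100,000 pps budget below is a made-up number for illustration.

def throughput_mbit(pps_budget, packet_bytes):
    return pps_budget * packet_bytes * 8 / 1e6

print(throughput_mbit(100_000, 64))    # 51.2 Mbit/s with small UDP packets
print(throughput_mbit(100_000, 1500))  # 1200.0 Mbit/s with full-size segments
```

At the same per-packet cost, full-size TCP segments carry over twenty times the bits of 64-byte UDP datagrams, which is enough to explain the observed gap.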
Does anyone else have benchmarks on how many packets per second this NIC can receive without dropping any UDP traffic? Using 64-byte UDP packets, I'm seeing roughly 100k packets/sec before drops begin.
I've done testing using multiple dnsperf instances as packet generators and dummy echo programs on an HP DL785 with four such NICs, running CentOS 5.2.
The 100 kpps figure you're seeing is about the right order of magnitude; in my experience, beyond that the kernel keeps one core fully occupied just handling the interrupts from the NIC.