Difference between IPoIB and TCP over Infiniband - networking

Can someone explain the concepts of IPoIB and TCP over InfiniBand? I understand the overall concept and the data rates provided by native InfiniBand, but I don't quite understand how TCP and IPoIB fit in. Why do you need them, and what do they do? What is the difference when someone says their network uses IPoIB or TCP with InfiniBand? Which one is better? I am not from a strong networking background, so it would be nice if you could elaborate.
Thank you for your help.

InfiniBand adapters ("HCAs") provide a couple of advanced features that can be used via the native "verbs" programming interface:
Data transfers can be initiated directly from userspace to the hardware, bypassing the kernel and avoiding the overhead of a system call.
The adapter can handle all of the network protocol of breaking a large message (even many megabytes) into packets, generating/handling ACKs, retransmitting lost packets, etc. without using any CPU on either the sender or receiver.
IPoIB (IP-over-InfiniBand) is a protocol that defines how to send IP packets over IB; and for example Linux has an "ib_ipoib" driver that implements this protocol. This driver creates a network interface for each InfiniBand port on the system, which makes an HCA act like an ordinary NIC.
IPoIB does not make full use of the HCA's capabilities; network traffic goes through the normal IP stack, which means a system call is required for every message and the host CPU must handle breaking data up into packets, etc. However, it does mean that applications that use normal IP sockets will work on top of the full speed of the IB link (although the CPU will probably not be able to run the IP stack fast enough to use a 32 Gb/sec QDR IB link).
Since IPoIB provides a normal IP NIC interface, one can run TCP (or UDP) sockets on top of it. TCP throughput well over 10 Gb/sec is possible using recent systems, but this will burn a fair amount of CPU. To your question, there is not really a difference between IPoIB and TCP with InfiniBand -- they both refer to using the standard IP stack on top of IB hardware.
The real difference is between using IPoIB with a normal sockets application versus using native InfiniBand with an application that has been coded directly to the native IB verbs interface. The native application will almost certainly get much higher throughput and lower latency, while spending less CPU on networking.
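To make the distinction concrete, a completely ordinary sockets program runs unchanged over IPoIB, because ib_ipoib exposes the port as a regular network interface. A minimal sketch, assuming the server's IPoIB interface (e.g. ib0) has been assigned the placeholder address 10.10.10.1:

    import socket

    # Nothing here is InfiniBand-specific: the kernel sends this traffic over
    # the IPoIB interface purely because the destination address routes to ib0.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("10.10.10.1", 5000))   # placeholder IPoIB address and port
    sock.sendall(b"hello over IPoIB")
    reply = sock.recv(4096)
    sock.close()

A native verbs application, by contrast, would link against a library such as libibverbs and manage queue pairs and registered memory itself; that is what avoids the per-message system call and the per-packet CPU work described above.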

Related

How to test packet loss?

I'm working on OVS-DPDK and I want to test whether a port has packet loss. For a hardware switch you could use an IXIA or some similar traffic generator to send continuous packets, but this is a virtual switch and I have no IXIA.
So I use ping to test this, but ping's packet rate is too low. Could I use pktgen instead? If I use pktgen, how do I verify whether there is packet loss?
Or is there some other method? Thank you~
You can generate a flood of small 64-byte packets using DPDK applications such as DPDK Pktgen, Cisco TRex, or even the testpmd app included in DPDK. All of those software generators can produce quite a lot of traffic in a virtualized environment as well as on the host.
If all you are interested in is packet loss, you can use any of the options listed above. TRex and Pktgen support RFC 2544 tests as well.
A typical setup would include one VM with a generator and another VM with either a generator or a forwarding DPDK application (like l2fwd or l3fwd).
The packet loss is basically the difference between sent and received packets, so just run the test for a while and then compare the counters, as sketched below.
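As a sketch of the bookkeeping, once you have read the TX counter from the generator and the RX counter from the receiving side:

    def packet_loss_percent(tx_packets: int, rx_packets: int) -> float:
        """Loss in the RFC 2544 sense: (sent - received) / sent, as a percentage."""
        if tx_packets == 0:
            raise ValueError("no packets were sent")
        return 100.0 * (tx_packets - rx_packets) / tx_packets

    # Example with made-up counter readings:
    print(packet_loss_percent(10_000_000, 9_998_731))  # ~0.0127 (% lost)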
Overall, it might be a bit scary at the beginning, but once you understand the basics, it is quite easy to set up and use. And you can always ask a question on Stack Overflow...

Is TCP/IP mandatory for MQTT?

If so, do you know examples of what can go wrong in a non-TCP network?
Learning about MQTT, I came across several mentions of the fact that MQTT relies on the TCP/IP stack. For example, from mqtt.org:
MQTT for Sensor Networks is aimed at embedded devices on non-TCP/IP networks, whereas MQTT itself explicitly expects a TCP/IP stack.
But if you read the reference documents, you won't find anything like that. Moreover, there's a QoS field that can be used for reliable delivery and whose values other than 0 are essentially useless in TCP/IP networks. Right now I see nothing that would prevent me from establishing an MQTT connection using a UNIX pipe, domain socket, or UDP socket rather than a TCP socket.
MQTT just needs a delivery that is ordered and reliable; it doesn't have to be TCP. SCTP works just fine, for example, but UDP doesn't, because there is no way to guarantee that a large PUBLISH packet split across multiple UDP datagrams will arrive complete and in order.
With regard to TCP reliability, what you are saying is true in theory, but in practice, when an application calls write() and receives a successful return, it has no guarantee of when the data actually left the machine for the remote host. All write() (or send()) does is copy the data to the kernel buffers, at which point you have no further control.
The only way to be sure that a message reaches the remote host at the application level is to have the remote host reply.
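This is essentially what MQTT's QoS 1 PUBACK provides. A minimal sketch of the idea at the socket level; the 4-byte ack format here is invented for illustration and is not part of MQTT:

    import socket

    def send_with_app_ack(sock: socket.socket, payload: bytes, timeout: float = 2.0) -> bool:
        """Send payload and wait for the peer's application-level acknowledgement.

        A successful sendall() only means the bytes reached the local kernel
        buffers; only the peer's reply proves the message actually arrived.
        """
        sock.sendall(payload)
        sock.settimeout(timeout)
        try:
            return sock.recv(4) == b"ACK\n"  # hypothetical ack format
        except socket.timeout:
            return False  # caller decides whether to resend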
MQTT-SN (MQTT for Sensor Networks) is the solution for running MQTT where a TCP/IP stack is not available.
MQTT-SN introduces the concept of an MQTT gateway, which bridges the non-TCP/IP side to a regular MQTT broker.
http://emqttd-docs.readthedocs.io/en/latest/mqtt-sn.html

Utility to benchmark udp and tcp performance for large data transfer

I have read different threads about reliable UDP vs TCP for large file transfers. However, before choosing UDP over TCP (and adding a reliability mechanism to UDP), I want to benchmark the performance of UDP and TCP. Is there any utility on Linux or Windows that can give me this performance benchmark?
I found iperf is one such utility. But when I used iperf on two Linux machines to send data using both UDP and TCP, I found that TCP performed better than UDP for 10 MB of data. This was surprising to me, as it is commonly claimed that UDP performs better than TCP.
My questions are:
1) Does UDP always perform better than TCP, or are there specific scenarios where UDP is better?
2) Are there any published benchmarks validating this?
3) Is there a standard utility to measure TCP and UDP performance on a particular network?
Thanks in Advance
UDP is NOT always faster than TCP. There are many TCP performance tunings, including RSS/vRSS. For example, TCP on Linux on Hyper-V can reach 30 Gbps, and on Linux on Azure 20 Gbps+. (Windows VMs are similar, and on other virtualization platforms such as Xen and KVM, TCP does even better.)
There are lots of tools to measure throughput: iperf, NTttcp (Windows), ntttcp-for-Linux, netperf, etc.; a scripted comparison is sketched after the list:
iPerf3: https://github.com/esnet/iperf
Windows NTTTCP: https://gallery.technet.microsoft.com/NTttcp-Version-528-Now-f8b12769
ntttcp-for-Linux: https://github.com/Microsoft/ntttcp-for-linux
Netperf: http://www.netperf.org/netperf/
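If you want to script a TCP-vs-UDP comparison, iperf3's JSON output makes it easy. A rough sketch, assuming an iperf3 server is already running on the target host (started with iperf3 -s); note the JSON summary keys differ between TCP and UDP runs:

    import json
    import subprocess

    HOST = "192.168.1.10"  # placeholder: the machine running `iperf3 -s`

    def iperf3_gbps(udp: bool) -> float:
        cmd = ["iperf3", "-c", HOST, "-J", "-t", "10"]
        if udp:
            cmd += ["-u", "-b", "0"]  # -u selects UDP; -b 0 removes the default rate cap
        out = json.loads(subprocess.run(cmd, capture_output=True, text=True).stdout)
        summary = out["end"]["sum" if udp else "sum_sent"]
        return summary["bits_per_second"] / 1e9

    print("TCP:", iperf3_gbps(udp=False), "Gbit/s")
    print("UDP:", iperf3_gbps(udp=True), "Gbit/s")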
The differences have two sides, conceptual and practical. Much of the documentation regarding performance dates from the '90s, when CPUs were significantly faster than networks and network adapters were quite basic.
Consider that UDP can technically be faster due to lower overhead, but modern hardware is not fast enough to saturate even a 1 GigE channel at the smallest packet size, and TCP is accelerated by virtually any card, from checksumming and segmentation through to full offload.
Use UDP when you need multicast, i.e. distributing to more than a few recipients. Use UDP when TCP windowing and congestion control are poorly suited to the link, such as high-latency, high-bandwidth WAN links: see UDT and WAN accelerators, for example.
Find any performance documentation for 10 GigE NICs. The basic problem is that the hardware is not fast enough to saturate the NIC, so many vendors provide full TCP/IP stack offload. Also consider file servers, such as those from NetApp et al.: where software is used, you may see the MTU tweaked to larger sizes to reduce CPU overhead. This is popular with low-end equipment such as SOHO NAS devices from ReadyNAS, Synology, etc. With high-end equipment, if you offload the entire stack and the hardware is capable, better latency can be achieved with normal Ethernet MTU sizes, and jumbo frames become obsolete.
iperf is pretty much the go-to tool for network testing, but it will not always be the best on Windows platforms. There you should look at Microsoft's own tool, NTttcp:
http://msdn.microsoft.com/en-us/windows/hardware/gg463264.aspx
Note that these tools are more about testing the network than application performance. Microsoft's tool goes to extremes: essentially a large memory-locked buffer is queued up waiting for the NIC to send as fast as possible, with no interactivity. The tool also includes a warm-up session to make sure no mallocs are needed during the test period.

Simulate high speed network connection

I have created a bandwidth meter application to measure total Internet traffic. I need to test the application with relatively high data transfer rates, such as 4 Mbps. I have a slow Internet connection, so I need a simulator to test my application's behavior at high throughput.
As an option, you can run an HTTP server in one virtual machine with a NAT'ed network adapter and test your bandwidth meter against it from the host system or a similar VM.
There are commercial packet generators that do this, and also a few freely available ones like PackETH and Bit-Twist.
There are also other creative solutions. For example, do the packets need to be IP packets for your purpose? If not, you could always get a "dumb" switch or hub (no spanning tree or other loop protection) and plug a crossover cable into it (a straight-through Ethernet cable also works if the switch supports Auto-MDIX). The idea is that with a loop in your network, the hub/switch will flood the network to 100% for you, since it will continually re-forward the same packets.
If you try this, be sure yours is the only computer on the network, since this technique will effectively render it useless. ;-)
You could always send some IP broadcast packets to "seed" the loop; otherwise, the first thing you'd likely see is broadcast ARP packets, which won't help if you're measuring layer-3 traffic only.
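Seeding the loop takes only a single broadcast datagram, since the looped switch will keep re-forwarding it. A minimal sketch (port 9, the discard port, is an arbitrary choice):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    # One broadcast frame is enough: with a loop in the switch, this same
    # frame will be flooded around the segment indefinitely.
    sock.sendto(b"x" * 1400, ("255.255.255.255", 9))
    sock.close()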
Lastly (and especially if this sounds like too much trouble), I recommend you read up on dependency injection and refactor your code so you can test it without the need for a high-speed interface. Of course, you'll still need to test your code in a real high-speed environment, but doing this will give you much more confidence in your code.

When is it appropriate to use UDP instead of TCP? [closed]

TCP guarantees packet delivery and thus can be considered "reliable", whereas UDP guarantees nothing and packets can be lost. What would be the advantage of transmitting data using UDP in an application rather than over a TCP stream? In what kind of situations would UDP be the better choice, and why?
I'm assuming that UDP is faster since it doesn't have the overhead of creating and maintaining a stream, but wouldn't that be irrelevant if some data never reaches its destination?
This is one of my favorite questions. UDP is so misunderstood.
In situations where you really want to get a simple answer from another server quickly, UDP works best. In general, you want the answer to fit in one response packet, and you are prepared to implement your own protocol for reliability or to resend. DNS is the perfect description of this use case. The costs of connection setup are way too high (yet DNS does support a TCP mode as well).
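A minimal sketch of that pattern, with placeholder server address and retry policy: one datagram out, one back, and the client owns the timeout-and-retransmit logic that TCP would otherwise provide.

    import socket

    def udp_request(query: bytes, server=("192.0.2.1", 9999), retries=3, timeout=1.0) -> bytes:
        """One-shot request/reply over UDP with naive retransmission,
        the same shape as a stub resolver's DNS query."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            for _ in range(retries):
                sock.sendto(query, server)
                try:
                    reply, _ = sock.recvfrom(512)
                    return reply
                except socket.timeout:
                    continue  # request or reply was lost: just ask again
            raise TimeoutError("no reply after %d attempts" % retries)
        finally:
            sock.close()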
Another case is when you are delivering data that can be lost because newer data will replace that previous data/state. Weather data, video streams, a stock quotation service (not used for actual trading), and gaming data come to mind.
Another case is when you are managing a tremendous amount of state and you want to avoid using TCP because the OS cannot handle that many sessions. This is a rare case today. In fact, there are now user-land TCP stacks that can be used so that the application writer may have finer grained control over the resources needed for that TCP state. Prior to 2003, UDP was really the only game in town.
One other case is multicast traffic. UDP can be multicast to multiple hosts, whereas TCP cannot do this at all.
If a TCP packet is lost, it will be resent. That is not handy for applications that rely on data being handled in a specific order in real time.
Examples include video streaming and especially VoIP (e.g. Skype). In those instances, however, a dropped packet is not such a big deal: our senses aren't perfect, so we may not even notice. That is why these types of applications use UDP instead of TCP.
The "unreliability" of UDP is a formalism. Transmission isn't absolutely guaranteed. As a practical matter, they almost always get through. They just aren't acknowledged and retried after a timeout.
The overhead in negotiating for a TCP socket and handshaking the TCP packets is huge. Really huge. There is no appreciable UDP overhead.
Most importantly, you can easily supplement UDP with reliable-delivery handshaking that has less overhead than TCP. Read this: http://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
UDP is useful for broadcasting information in a publish-subscribe kind of application. IIRC, TIBCO makes heavy use of UDP for notification of state change.
Any other kind of one-way "significant event" or "logging" activity can be handled nicely with UDP packets. You want to send a notification without the overhead of setting up a connection, and you don't expect any response from the various listeners.
System "heartbeat" or "I'm alive" messages are a good choice, also. Missing one isn't a crisis. Missing half a dozen (in a row) is.
I work on a product that supports both UDP (IP) and TCP/IP communication between client and server. It started out with IPX over 15 years ago with IP support added 13 years ago. We added TCP/IP support 3 or 4 years ago. Wild guess coming up: The UDP to TCP code ratio is probably about 80/20. The product is a database server, so reliability is critical. We have to handle all of the issues imposed by UDP (packet loss, packet doubling, packet order, etc.) already mentioned in other answers. There are rarely any problems, but they do sometimes occur and so must be handled. The benefit to supporting UDP is that we are able to customize it a bit to our own usage and tweak a bit more performance out of it.
Every network is going to be different, but the UDP communication protocol is generally a little bit faster for us. The skeptical reader will rightly question whether we implemented everything correctly. Plus, what can you expect from a guy with a 2 digit rep? Nonetheless, I just now ran a test out of curiosity. The test read 1 million records (select * from sometable). I set the number of records to return with each individual client request to be 1, 10, and then 100 (three test runs with each protocol). The server was only two hops away over a 100Mbit LAN. The numbers seemed to agree with what others have found in the past (UDP is about 5% faster in most situations). The total times in milliseconds were as follows for this particular test:
Records per request    UDP ("IP")      TCP
1                      390,760 ms      416,903 ms
10                      91,707 ms       95,662 ms
100                     29,664 ms       30,968 ms
The total data amount transmitted was about the same for both IP and TCP. We have extra overhead with the UDP communications because we have some of the same stuff that you get for "free" with TCP/IP (checksums, sequence numbers, etc.). For example, Wireshark showed that a request for the next set of records was 80 bytes with UDP and 84 bytes with TCP.
There are already many good answers here, but I would like to add one very important factor as well as a summary. UDP can achieve a much higher throughput with the correct tuning because it does not employ congestion control. Congestion control in TCP is very important: it controls the rate and throughput of the connection in order to minimize network congestion by trying to estimate the connection's current capacity. Even when packets are sent over very reliable links, such as in the core network, routers have limited buffers. These buffers fill up to capacity, packets are then dropped, and TCP notices the drop through the lack of an acknowledgement, throttling the connection to its estimate of the capacity. TCP also employs slow start: the throughput (actually the congestion window) is slowly increased until packets are dropped, then lowered and slowly increased again until packets are dropped, and so on. This causes TCP throughput to fluctuate, which you can see clearly when downloading a large file.
Because UDP does not use congestion control, it can be both faster and lower-latency: it does not try to fill buffers up to the dropping point, so UDP packets spend less time queued and arrive with less delay. And because UDP does not employ congestion control while TCP does, UDP can take capacity away from TCP flows, which yield to it.
UDP is still vulnerable to congestion and packet drops though, so your application has to be prepared to handle these complications somehow, likely using retransmission or error correcting codes.
The result is that UDP can:
Achieve higher throughput than TCP as long as the network drop rate is within limits that the application can handle.
Deliver packets faster than TCP with less delay.
Set up connections faster, as there is no initial handshake.
Transmit multicast packets, whereas TCP has to use multiple connections.
Transmit fixed-size packets, whereas TCP transmits data in segments. If you transfer a UDP packet of 300 bytes, you will receive 300 bytes at the other end. With TCP, you may feed the sending socket 300 bytes, but the receiver may read only 100 bytes and has to figure out somehow that 200 more bytes are on the way. This matters if your application transmits fixed-size messages rather than a stream of bytes (sketched below).
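A sketch of that boundary difference, assuming a hypothetical peer at 192.0.2.1 that echoes whatever it receives: the UDP read yields the whole datagram, while the TCP read may legally return any prefix of the bytes sent so far.

    import socket

    # UDP: message boundaries are preserved.
    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.sendto(b"A" * 300, ("192.0.2.1", 9999))   # placeholder echo peer
    data, _ = u.recvfrom(4096)                  # the whole 300-byte datagram (or nothing)

    # TCP: a byte stream with no message boundaries.
    t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    t.connect(("192.0.2.1", 9999))
    t.sendall(b"A" * 300)
    chunk = t.recv(100)   # may return anywhere from 1 to 100 bytes; the rest
                          # shows up on later recv() calls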
In summary, UDP can be used for every type of application that TCP can, as long as you also implement a proper retransmission mechanism. UDP can be very fast, has less delay, is not affected by congestion on a per-connection basis, transmits fixed-size datagrams, and can be used for multicasting.
UDP is a connectionless protocol and is used in protocols like SNMP and DNS, where data packets arriving out of order are acceptable and immediate transmission of the data packet matters.
It is used in SNMP since network management must often be done when the network is under stress, i.e. when reliable, congestion-controlled data transfer is difficult to achieve.
It is used in DNS since it does not involve connection establishment, thereby avoiding connection establishment delays.
cheers
UDP does have less overhead and is good for things like streaming real-time data such as audio or video, or any case where it is OK if some data is lost.
One of the best answers I know of for this question comes from user zAy0LfpBZLC8mAC on Hacker News. It is so good I'm just going to quote it as-is.
TCP has head-of-queue blocking, as it guarantees complete and in-order delivery, so when a packet gets lost in transit, it has to wait for a retransmit of the missing packet, whereas UDP delivers packets to the application as they arrive, including duplicates and without any guarantee that a packet arrives at all or in which order packets arrive (it really is essentially IP with port numbers and an (optional) payload checksum added). But that is fine for telephony, for example, where it usually simply doesn't matter when a few milliseconds of audio are missing, but delay is very annoying, so you don't bother with retransmits; you just drop any duplicates, sort reordered packets into the right order for a few hundred milliseconds of jitter buffer, and if packets don't show up in time or at all, they are simply skipped, possibly interpolated where supported by the codec.
Also, a major part of TCP is flow control, to make sure you get as much throughput as possible without overloading the network (which is kinda redundant, as an overloaded network will drop your packets, which means you'd have to do retransmits, which hurts throughput). UDP doesn't have any of that, which makes sense for applications like telephony, as telephony with a given codec needs a certain amount of bandwidth; you cannot "slow it down", and additional bandwidth doesn't make the call go faster.
In addition to realtime/low-latency applications, UDP makes sense for really small transactions, such as DNS lookups, simply because it doesn't have the TCP connection establishment and teardown overhead, both in terms of latency and in terms of bandwidth use. If your request is smaller than a typical MTU and the response probably is, too, you can be done in one round trip, with no need to keep any state at the server; and flow control, ordering, and all that probably aren't particularly useful for such uses either.
And then, you can use UDP to build your own TCP replacements, of course, but it's probably not a good idea without some deep understanding of network dynamics; modern TCP algorithms are pretty sophisticated.
Also, I guess it should be mentioned that there is more than UDP and TCP, such as SCTP and DCCP. The only problem currently is that the (IPv4) internet is full of NAT gateways which make it impossible to use protocols other than UDP and TCP in end-user applications.
Video streaming is a perfect example of using UDP.
UDP has lower overhead and, as stated already, is good for streaming things like video and audio, where it is better to just lose a packet than to try to resend and catch up.
There are no absolute guarantees on TCP delivery either: you are simply told if the socket disconnected or, in effect, if the data is not going to arrive. Otherwise, it gets there when it gets there.
A big thing that people forget is that UDP is packet-based while TCP is byte-stream-based. There is no guarantee that the "TCP packet" you sent is the packet that shows up on the other end; it can be dissected into as many packets as the routers and stacks desire. So your software has the additional overhead of parsing bytes back into usable chunks of data, which can take a fair amount of work. UDP packets can arrive out of order, so you have to number your packets or use some other mechanism to re-order them if you care to do so. But if you get a UDP packet, it arrives with all the same bytes in the same order as it left, no changes. So the term "UDP packet" makes sense, but "TCP packet" doesn't necessarily. TCP has its own retry and ordering mechanism that is hidden from your application; you can reinvent that with UDP, tailored to your needs.
UDP is far easier to write code for on both ends, basically because you do not have to make and maintain the point-to-point connections. My question is typically: where are the situations where you would want the TCP overhead? And if you take shortcuts like assuming a received TCP "packet" is the complete packet that was sent, are you better off? (You are likely to throw away two packets if you bother to check the length/content.)
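The "additional overhead of parsing bytes back into usable chunks" typically means explicit framing on top of the TCP stream. A minimal sketch of the common length-prefix approach:

    import socket
    import struct

    def send_msg(sock: socket.socket, payload: bytes) -> None:
        # Prefix each message with its length, since TCP gives us only a byte stream.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def recv_msg(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)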
Network communication for video games is almost always done over UDP.
Speed is of utmost importance and it doesn't really matter if updates are missed since each update contains the complete current state of what the player can see.
The key question was related to "what kind of situations would UDP be the better choice [over tcp]"
There are many great answers above but what is lacking is any formal, objective assessment of the impact of transport uncertainty upon TCP performance.
With the massive growth of mobile applications, and the "occasionally connected" or "occasionally disconnected" paradigms that go with them, there are certainly situations where the overhead of TCP's attempts to maintain a connection when connections are hard to come by leads to a strong case for UDP and its "message oriented" nature.
Now I don't have the math/research/numbers on this, but I have produced apps that worked more reliably using ACK/NAK and message numbering over UDP than could be achieved with TCP when connectivity was generally poor and poor old TCP just spent its time, and my client's money, trying to connect. You see this in regional and rural areas of many western countries...
In some cases, which others have highlighted, guaranteed arrival of packets isn't important, and hence using UDP is fine. There are other cases where UDP is preferable to TCP.
One unique case where you would want to use UDP instead of TCP is where you are tunneling TCP over another protocol (e.g. tunnels, virtual networks, etc.). If you tunnel TCP over TCP, the congestion controls of each will interfere with each other. Hence one generally prefers to tunnel TCP over UDP (or some other stateless protocol). See TechRepublic article: Understanding TCP Over TCP: Effects of TCP Tunneling on End-to-End Throughput and Latency.
UDP can be used when an app cares more about "real-time" data than exact data replication. For example, VoIP can use UDP, and the app will worry about re-ordering packets, but in the end VoIP doesn't need every single packet; more importantly, it needs a continuous flow of many of them. Maybe you hear a "glitch" in the voice quality, but the main purpose is that you get the message, not that it is recreated perfectly on the other side. UDP is also used in situations where the expense of creating a connection and syncing with TCP outweighs the payload. DNS queries are a perfect example: one packet out, one packet back, per query. If using TCP this would be much more intensive. If you don't get the DNS response back, you just retry.
Use UDP when speed is necessary and the accuracy of the packets is not, and TCP when you need accuracy.
UDP is often harder in that you must write your program in such a way that it is not dependent on the accuracy of the packets.
It's not always clear-cut. However, if you need guaranteed delivery of packets with no loss and in the right sequence, then TCP is probably what you want.
On the other hand, UDP is appropriate for transmitting short packets of information where the sequence of the information is less important or where the data can fit into a single packet.
It's also appropriate when you want to broadcast the same information to many users.
Other times, it's appropriate when you are sending sequenced data but can tolerate some of it going missing (e.g. a VoIP application).
Some protocols are more complex because what's needed are some (but not all) of the features of TCP, but more than what UDP provides. That's where the application layer has to implement the additional functionality. In those cases, UDP is also appropriate (e.g. Internet radio, where order is important but not every packet needs to get through).
Examples of where it is/could be used:
1) A time server broadcasting the correct time to a bunch of machines on a LAN.
2) VoIP protocols
3) DNS lookups
4) Requesting LAN services, e.g. "where are you?"
5) Internet radio
6) and many others...
On Unix you can type grep udp /etc/services to get a list of UDP protocols implemented today... there are hundreds.
Look at section 22.4 of Stevens' Unix Network Programming, "When to Use UDP Instead of TCP".
Also, see this other SO answer about the misconception that UDP is always faster than TCP.
What Stevens says can be summed up as follows:
Use UDP for broadcast and multicast, since that is your only option (use multicast for any new apps).
You can use UDP for simple request/reply apps, but you'll need to build in your own acks, timeouts, and retransmissions.
Don't use UDP for bulk data transfer.
We know that UDP is a connectionless protocol, so it is:
suitable for processes that require simple request-response communication
suitable for processes with internal flow and error control
suitable for broadcasting and multicasting
Specific examples:
used in SNMP
used for some routing-update protocols, such as RIP
Compared with TCP, connectionless protocols like UDP offer speed, but not reliability of packet transmission. For example, video games typically don't need a reliable network; speed is what matters most, and using UDP for games has the advantage of reduced network delay.
You want to use UDP over TCP in the cases where losing some of the data along the way will not completely ruin the data being transmitted. A lot of its uses are in real-time applications, such as gaming (i.e., FPS, where you don't always have to know where every player is at any given time, and if you lose a few packets along the way, new data will correctly tell you where the players are anyway), and real-time video streaming (one corrupt frame isn't going to ruin the viewing experience).
We have a web service that has thousands of WinForms clients on as many PCs. The PCs have no connection with the DB backend; all access is via the web service. So we decided to develop a central logging server that listens on a UDP port, and all the clients send an XML error-log packet (using the log4net UDP appender) that gets dumped to a DB table upon receipt. Since we don't really care if a few error logs are missed, and with thousands of clients it is fast, a dedicated logging service avoids loading the main web service.
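The receiving side of such a logging service is a short loop. A sketch of the shape, with a made-up port; the real system handed each packet to a DB writer rather than printing it:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 8514))  # placeholder port for the log appenders

    while True:
        packet, addr = sock.recvfrom(65535)
        # A dropped log record is acceptable here, so there are no acks and
        # the clients never block on the logging path.
        record = packet.decode("utf-8", errors="replace")
        print(addr, record)  # stand-in for the DB insert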
I'm a bit reluctant to suggest UDP when TCP could possibly work. The problem is that if TCP isn't working for some reason, because the connection is too laggy or congested, changing the application to use UDP is unlikely to help. A bad connection is bad for UDP too. TCP already does a very good job of minimizing congestion.
The only case I can think of where UDP is required is for broadcast protocols. In cases where an application involves two known hosts, UDP will likely offer only marginal performance benefits at a substantially increased cost in code complexity.
Only use UDP if you really know what you are doing. UDP is needed in extremely rare cases today, but the number of (even very experienced) experts who would try to stick it everywhere seems out of proportion. Perhaps they enjoy implementing error-handling and connection-maintenance code themselves.
TCP should be expected to be much faster with modern network interface cards due to what's known as checksum offload. Surprisingly, at fast connection speeds (such as 1 Gbps) computing a checksum is a big load on the CPU, so it is offloaded to NIC hardware that recognizes TCP packets, and UDP traffic may not get the same service.
UDP is perfect for VoIP, where packets have to be sent regardless of reliability.
Video chat is an example of UDP in action (you can confirm this with a Wireshark network capture during any video chat).
Also, DNS and SNMP traditionally run over UDP rather than TCP.
UDP has very little overhead, while TCP has considerably more.
