Should I use UDP or TCP in this case? - networking

P2P Network:
Largest message is about 300 KB. Most messages are smaller (5-50 KB). It is perfectly OK if peers do not receive a message, as they will initiate a bootstrap (re-send).
I am leaning towards UDP and, you guessed it, it's blockchain software! However, our current design uses TCP.

The largest size of a UDP datagram is 65,535 bytes (including the 8-byte UDP header and 20-byte IP header), so for your largest messages you would have to implement a form of "chunking" that divides the message into smaller parts (unless you are using IPv6 jumbograms), with an application-generated header containing the ordering of the chunks and possibly the data size. You also have the issue of IP fragmentation once you go over the MTU size (although with a reliability mechanism like the one you mention, this is probably not such an issue).
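For illustration, here is a minimal Python sketch of that kind of application-level chunking; the 8-byte header layout (message id, chunk index, chunk count) and the 1,400-byte chunk size are assumptions for the example, not part of any standard:

```python
import socket
import struct

CHUNK_PAYLOAD = 1400            # stay comfortably under a typical 1500-byte Ethernet MTU
HEADER = struct.Struct("!IHH")  # message id, chunk index, total chunk count (8 bytes)

def send_chunked(sock, addr, message_id, data):
    # Split the message and prefix each UDP datagram with an ordering header.
    chunks = [data[i:i + CHUNK_PAYLOAD] for i in range(0, len(data), CHUNK_PAYLOAD)]
    for index, chunk in enumerate(chunks):
        sock.sendto(HEADER.pack(message_id, index, len(chunks)) + chunk, addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_chunked(sock, ("127.0.0.1", 9000), 1, b"x" * 300_000)  # a ~300 KB message
```

The receiver would buffer chunks per message id and only hand the message up once all of them have arrived, falling back to your bootstrap/re-send path otherwise.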
I guess you have to ask yourself what benefits UDP would give you over your current TCP design. The main reason to use UDP is when you need a lightweight protocol with very small network delay, or you need to be able to broadcast or multicast packets over a LAN. If you don't have these needs and TCP is doing the job, why change?

Related

UDP fragmentation packet order

I send mixtures of large UDP packets back-to-back with small UDP packets. The large packets get fragmented to my MTU.
On RHEL6 (CentOS6), the small UDP packets always arrive at the receivers in the correct order with respect to the final fragment of any previous large packet.
On RHEL7, this is no longer the case. The small packet can get transmitted in between the fragments of the larger packet, causing the receiver to see the small packet BEFORE the reassembled large packet.
As near as I can tell with ethtool, the configuration of the NIC is the same on both machines (It's actually the same machine and I swap hard drives).
So, my question is... What controls this behavior in RHEL7+? It's not udp-fragmentation-offload (That's set the same in both configurations). I'd like to find out how to force the fragments to be transmitted as a complete group, with no interfering packets, in RHEL7+.
Thanks,
XL600

with SIP, when to use TCP not UDP?

I know the differences between UDP and TCP pretty well in general (e.g. http://www.onsip.com/about-voip/sip/udp-versus-tcp-for-voip)
The question is: in what circumstances would using TCP as the transport have advantages specifically for SIP VoIP communications?
A lot of people generally associate UDP with VoIP and leave it at that, but in simple terms there are two parts to VoIP: the connection (signaling) and the voice data transfer.
SIP is a very lightweight protocol; once the connection is established it is effectively left idle until the infrequent event of someone making a phone call. TCP (unlike UDP) will actually reduce traffic to the server by eliminating the need to:
Re-register every few minutes
Refresh/ping the server
You can run SIP over TCP and then use (as is recommended) UDP for RTP.
I should also point out some obvious factors I glossed over, e.g. the number of devices connecting to the server. As that number grows, the equation tilts in UDP's favor. But then you also have to consider SIP user agents expanding to cover multiple codecs, multimedia, video and screen sharing. The INVITE packets can start to grow large and potentially exceed the single-datagram UDP size, tilting the equation back in favor of TCP.
All that being said I hope you have enough information to answer the question you were looking to answer.
Hope this helps.
Credit: The wonderful discussion at onSip: https://www.onsip.com/blog/sip-via-udp-vs-tcp
SIP over TCP has a significant advantage over UDP for mobile devices. The reason is NAT: NAT table entries in a wireless router or a cell provider's router are generally timed out much more quickly for UDP than for TCP. Since keeping the same NAT table entry is necessary to reliably receive calls, SIP must periodically send keep-alives to maintain it. The required keep-alive frequency is much higher for UDP (maybe every 30 seconds) than for TCP (maybe every 15 minutes), resulting in noticeably higher mobile device battery usage. Often when you see someone complaining that their battery takes a major hit when using a VoIP client, it's because the client is using UDP.
So, TCP wins out over UDP hands down for mobile devices.
Note that the above assumes you want to be able to reliably receive calls on your mobile device. If all you want to do is be able to make calls, then it's a different story.
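A rough Python sketch of what that keep-alive cost looks like in practice; the 30-second interval, the CRLF payload and the server address are illustrative assumptions (real clients typically follow the RFC 5626 keep-alive rules):

```python
import socket
import time

def udp_keepalive_loop(server=("sip.example.com", 5060), interval=30):
    # Send a tiny datagram periodically just to keep the NAT table entry alive.
    # Waking the radio every 30 seconds is what drains the battery; over TCP the
    # same refresh might only be needed every 15 minutes or so.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"\r\n\r\n", server)
        time.sleep(interval)
```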
If a message is large (within 200 bytes of the path MTU), RFC 3261 section 18.1.1 mandates use of TCP (to be precise, it mandates use of a "congestion controlled transport protocol, such as TCP"). I've hit that in practice when sending an initial INVITE with lots of headers and a complex Request-URI.
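A small Python sketch of that RFC 3261 section 18.1.1 rule (the rule also covers the case where the path MTU is unknown); the function and parameter names are just for illustration:

```python
def pick_sip_transport(request, path_mtu=None):
    # RFC 3261 18.1.1: a request within 200 bytes of the path MTU (or larger
    # than 1300 bytes when the path MTU is unknown) must be sent over a
    # congestion-controlled transport such as TCP.
    if path_mtu is None:
        return "tcp" if len(request) > 1300 else "udp"
    return "tcp" if len(request) > path_mtu - 200 else "udp"
```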
You cannot reliably assemble a real-time audio stream from a TCP-based protocol. In audio it is far better to lose a packet than to have it retransmitted after a drop. Audio does not work if there is excessive jitter in packet timing. Audio is real-time and requires a protocol like UDP to work correctly. Packet loss does not break audio, it only reduces the quality, whereas TCP's perfect delivery does not help audio at all: there is no quality in receiving 100% of the packets if they do not arrive in real time. In audio, timing (latency, jitter) determines quality more than data integrity does.
SIP works best when signaling and control go over TCP while the voice data goes over UDP.
I have been working with transmission of digital voice over network protocols since I designed one of the first smartphones in 1987 for the newly emerging digital cellular network in Japan. Since 1987, the only aspect of digital voice transmission that has not changed is what I describe here. The real-time nature of audio (voice) transmission and how that impacts system design is still exactly the same as it was in the dinosaur days I come from.
TCP can get through with perfect clarity on a lossy connection where UDP may not be understandable. You get lower latency with UDP, but that doesn't help if you can't understand what is being said.

I'm confused on terminology about wifi

I am trying to simulate a Wi-Fi video transmission, and for that I created a connection using a socket between two devices. However, I then started to doubt whether this is required, or whether I was supposed to create a UDP connection instead.
I think I'm just confused about terminology here. I've Googled and found out that Wi-Fi can carry TCP or UDP, so my question is: would a Wi-Fi transmission over TCP be as reliable for a simulation as one over UDP?
I'd suggest you read Difference between TCP and UDP?.
For streaming such as video transmission you would generally want to use UDP. If a packet cannot reach the receiver in time, it is better discarded than pausing the whole transmission to wait for one tiny missing packet that just contains the other person blinking.
But obviously it's up to you and how you implement your software.
You may need to read up a bit on the TCP/IP protocol stack. TCP and UDP are the two main transport protocols carried over IP. The main difference is that TCP carries extra protocol information (sequencing, acknowledgements, retransmission), whereas UDP datagrams are simpler: just a destination, the data itself, and a checksum.
The upshot is that the sender of a UDP packet has no way of knowing whether the packet was received at the other end. Often this doesn't matter, because it may be handled in other ways by higher layers in the software, or the packet can simply be lost and ignored without any negative consequences. So UDP can use the bandwidth more efficiently in some scenarios, because less protocol information is exchanged and therefore more actual data gets through; TCP is also more complicated because you have to handle the protocol state.
So when you create your system, you have a choice of either TCP or UDP, depending on what you are trying to achieve and how you want to go about it. But both are part of the "TCP/IP" protocol stack and have similarities.
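In code the choice is just the socket type; a minimal Python illustration (the hosts, ports and payloads are placeholders):

```python
import socket

tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: connected byte stream
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: fire-and-forget datagrams

tcp_sock.connect(("example.com", 80))          # TCP needs a connection first
tcp_sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")   # delivery and ordering are guaranteed

udp_sock.sendto(b"frame-data", ("example.com", 9999))  # just send; no delivery guarantee
```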

UDP Packet size and packet losses

I've been writing a program that uses a stop and wait protocol on top of UDP to send packets over LAN and also over WAN. I've recently been testing my program and have noticed that the packet loss rate is higher for larger packets (approaching 64k bytes). Intuitively this makes sense but what are the actual reasons for this?
UDP packets greater than the MTU size of the network that carries them will be automatically split up into multiple packets, and then reassembled by the recipient. If any of those multiple sub-packets gets dropped, then the receiver will drop the rest of them as well.
So for example if you send a 63k UDP packet, and it goes over Ethernet, it will get broken up into 47+ smaller "fragment" packets (because Ethernet's MTU is 1500 bytes, but some of those are used for UDP headers, etc, so the amount of user-data-space available in a UDP packet is smaller than that). The receiver will only "see" that UDP packet if all 47+ of those fragment-packets make it through okay. If just one of those fragment-packets gets dropped, the whole operation fails.
Well, data networks are far from reliable; packets get dropped all the time. Overloaded routers, full buffers and corrupt packets are some of the reasons. Since UDP has no flow control capabilities, it can't slow down if for example the receiving end is overloaded.
As Jeremy explained, the bigger the payload, the more packets it is going to be split into, and therefore a bigger chance of losing some of them.
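A back-of-the-envelope illustration of that effect, assuming a 1500-byte Ethernet MTU (1480 bytes of IP payload per fragment) and an independent 1% loss rate per fragment; both numbers are just for illustration:

```python
MTU_PAYLOAD = 1480        # 1500-byte MTU minus the 20-byte IP header
P_FRAGMENT_LOSS = 0.01    # assumed per-fragment loss rate

for size in (1_000, 8_000, 63_000):
    fragments = -(-(size + 8) // MTU_PAYLOAD)   # +8 for the UDP header, rounded up
    p_lost = 1 - (1 - P_FRAGMENT_LOSS) ** fragments
    print(f"{size:>6}-byte datagram -> {fragments:>2} fragments, "
          f"~{p_lost:.0%} chance the whole datagram is lost")
```

With these numbers, a 1 KB datagram is lost about 1% of the time, while a 63 KB datagram (43 fragments) is lost roughly a third of the time.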
UDP is used in cases where a dropped packet here and there won't affect anything, or where you need the data to get there in time or not at all (VoIP, streaming video, etc.).
It's all about IP fragmentation and reassembly. A packet larger than the MTU will be fragmented and has to be reassembled at the final host; there is also a chance the fragments get fragmented again along the path, which adds further delay. Sometimes, if a network element is configured for layer-4 filtering, it reassembles the fragments (even though it is not the final host), applies its rules, then fragments and forwards them again. That's the reason applications that need performance always try to send data with size <= (MTU - IP header - UDP header), i.e. 1500 - 20 - 8 = 1472 bytes on Ethernet.

Maximum packet size for a TCP connection

What is the maximum packet size for a TCP connection or how can I get the maximum packet size?
The absolute limit on TCP packet size is 64K (65,535 bytes), but in practice this is far larger than the size of any packet you will see, because the lower layers (e.g. Ethernet) have smaller packet sizes.
The MTU (Maximum Transmission Unit) for Ethernet, for instance, is 1500 bytes. Some types of networks (like Token Ring) have larger MTUs, and some types have smaller MTUs, but the values are fixed for each physical technology.
This is an excellent question and I run into this a lot at work, actually. There are a lot of "technically correct" answers such as 65K and 1500. I've done a lot of work writing network interfaces, and using 65K is silly, while 1500 can also get you into big trouble. My work runs on a lot of different hardware/platforms/routers, and to be honest the place I start is 1400 bytes. If you NEED more than 1400 you can start to inch your way up; you can probably go to 1450 and sometimes to around 1480. If you need more than that then of course you need to split into two packets, and there are several obvious ways of doing that.
The problem is that you're talking about creating a data packet and writing it out via TCP, but of course there's header data tacked on and so forth, so you have "baggage" that puts you at 1500 or beyond, and a lot of hardware has lower limits too.
If you "push it" you can get some really weird things going on: truncated data, obviously; dropped data, which I've seen rarely; and corrupted data, also rare but it certainly does happen.
At the application level, the application uses TCP as a stream oriented protocol. TCP in turn has segments and abstracts away the details of working with unreliable IP packets.
TCP deals with segments instead of packets. Each TCP segment has a sequence number which is contained inside a TCP header.
The actual data sent in a TCP segment is variable.
There is a getsockopt value supported on some OSes called TCP_MAXSEG, which retrieves the maximum TCP segment size (MSS). It is not supported on all OSes, though.
I'm not sure exactly what you're trying to do but if you want to reduce the buffer size that's used you could also look into: SO_SNDBUF and SO_RCVBUF.
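A minimal Python sketch of querying these options; TCP_MAXSEG is only meaningful once the connection is up and the option is not available on every platform (the host and port are placeholders):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))

mss = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)   # maximum segment size
sndbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)  # kernel send buffer size
rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)  # kernel receive buffer size
print(f"MSS: {mss}, SO_SNDBUF: {sndbuf}, SO_RCVBUF: {rcvbuf}")
```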
There are no packets in the TCP API.
There are often packets in the underlying protocols, for example when TCP runs over IP, but they are of no interest to the user except for very delicate performance optimizations, which you are probably not concerned with (judging by the question's formulation).
If you are asking what the maximum number of bytes is that you can send() in one API call, then this is implementation- and settings-dependent. You would usually call send() with chunks of up to several kilobytes, and always be ready for the system to refuse to accept the data totally or partially, in which case you have to manage splitting it into smaller chunks yourself to feed your data into the TCP send() API.
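A small sketch of handling that partial-send case in Python; the kernel may accept only part of the buffer, so you keep calling send() with the remainder (this is essentially what socket.sendall() does for you):

```python
def send_all(sock, data):
    view = memoryview(data)
    while view:
        sent = sock.send(view)   # may send fewer bytes than requested
        view = view[sent:]       # retry with the remainder
```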
According to http://en.wikipedia.org/wiki/Maximum_segment_size, the default TCP Maximum Segment Size for IPv4 is 536 octets (8-bit bytes). See RFC 879.
Generally, this will be dependent on the interface the connection is using. You can probably use an ioctl() to get the MTU, and if it is ethernet, you can usually get the maximum packet size by subtracting the size of the hardware header from that, which is 14 for ethernet with no VLAN.
This is only the case if the MTU is at least that large across the network. TCP may use path MTU discovery to reduce your effective MTU.
The question is, why do you care?
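For what it's worth, on Linux the MTU lookup mentioned above can be done with an ioctl(); a rough Python sketch, where 0x8921 is SIOCGIFMTU from <linux/sockios.h> and "eth0" is just an example interface name:

```python
import fcntl
import socket
import struct

def get_mtu(ifname="eth0"):
    SIOCGIFMTU = 0x8921  # from <linux/sockios.h>
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ifreq = struct.pack("64s", ifname.encode())           # buffer holding the interface name
    result = fcntl.ioctl(sock.fileno(), SIOCGIFMTU, ifreq)
    return struct.unpack("16si", result[:20])[1]          # MTU follows the 16-byte name field
```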
If you are on Linux machines, "ifconfig eth0 mtu 9000 up" is the command to set the MTU for an interface. However, I have to say a big MTU has some downsides if the network transmission is not so stable, and it may use more kernel memory.
One solution can be to set the socket option TCP_MAXSEG (http://linux.die.net/man/7/tcp) to a value that is "safe" for the underlying network (e.g. 1400 to be safe on Ethernet) and then use a large buffer in the send system call.
This way there are fewer system calls, which are expensive.
The kernel will split the data to match the MSS.
This way you can avoid truncated data and your application doesn't have to worry about small buffers.
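A minimal Python sketch of that approach on Linux; the 1400-byte MSS and the host/port/payload are illustrative values:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Clamp the MSS before connecting so it is also announced to the peer.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1400)
sock.connect(("example.com", 80))

# One large call; the kernel segments the stream into ~1400-byte pieces.
sock.sendall(b"x" * 1_000_000)
```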
It seems most web sites out on the Internet use 1460 bytes for the MSS (i.e. a 1500-byte MTU minus 40 bytes of TCP/IP headers). Sometimes it's 1452, and if you are on a VPN it drops even further because of the IPsec headers.
The default window size varies quite a bit, up to a maximum of 65,535 bytes. I use http://tcpcheck.com to look at my own source IP values and to check what other Internet vendors are using.
The maximum packet size for TCP is set by the Total Length field in the IPv4 header: 16 bits are allocated to this field, so the maximum size of an IP packet is 65,535 bytes. See the IP protocol details.
