What is ``Differentiated Services: Congestion Experienced``? - IP

I was trying to send packets with Scapy and capture them in Wireshark to see what they look like:
from scapy.all import IP, sr1  # import needed when run as a script rather than in the Scapy shell
sr1(IP(src='192.168.1.35', dst='192.168.1.1', tos=3)/'my packet')
When I sent the packet and captured it in Wireshark, I got the full IP header, but this part caught my attention:
Differentiated Services Field: 0x03 (DSCP: CS0, ECN: CE)
0000 00.. = Differentiated Services Codepoint: Default (0)
.... ..11 = Explicit Congestion Notification: Congestion Experienced (3)
What is "Explicit Congestion Notification: Congestion Experienced"?

Try to read RFC 3168.
TCP/IP is a complex protocol (really a family of protocols). The congestion notification you see is Explicit Congestion Notification (ECN), a congestion-signalling feature supported by routers along the source-destination path (a good thing, an extra capability). The message you report does not necessarily have anything to do with your networking experiment; it can be a side effect of overall traffic on your (internal or external) network, and it usually tells ECN-aware systems to adjust their transmission rate, among other things.
[ECN notifies networks about congestion with the goal of reducing packet loss and delay by making the sending device decrease the transmission rate until the congestion clears, without dropping packets. RFC 3168, The Addition of Explicit Congestion Notification (ECN) to IP, defines ECN].
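Worth adding (my note, not part of the original answer): in this specific capture the value comes straight from the tos=3 argument in the Scapy call, because the lower two bits of the TOS/DS byte are the ECN field and the upper six bits are the DSCP. A minimal Python sketch of that decomposition:
tos = 3
dscp = tos >> 2    # upper 6 bits -> 0, shown by Wireshark as Default (CS0)
ecn = tos & 0b11   # lower 2 bits -> 3, shown as Congestion Experienced (CE)
print(dscp, ecn)   # prints: 0 3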
Reading RFCs is a good habit.
TCP/IP is an endless deep ocean. Getting a good foundation is not easy. But try hard, there is light at the end of the tunnel.

Related

UDP TCP management of server congestion

Do the TCP and UDP protocols have a way to manage saturation?
By saturation I mean network congestion: what happens if the server's buffer is full and the client sends a UDP/TCP datagram to it?
Do these protocols have a way to handle this scenario, or would data be lost?
This is a question about TCP/UDP basics, so this answer is not going to be a complete TCP and UDP guide.
Network congestion at low level protocols
In case of network congestion, the data sender will usually notice it through the failure of the data-sending APIs (e.g. the BSD functions send() and sendto()).
For example, I have personal experience with TCP/IP over GPRS, where network problems caused the sending APIs to fail. In that case it was up to the sender to preserve its data and retransmit it as soon as possible.
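A rough sketch of that "preserve and resend" pattern (the socket, the queue, and the function are my own illustration, not code from the answer):
import socket
from collections import deque

pending = deque()  # data waiting to be (re)sent

def send_with_retry(sock: socket.socket, data: bytes) -> None:
    pending.append(data)
    while pending:
        try:
            sock.sendall(pending[0])
            pending.popleft()          # sent successfully, drop it from the queue
        except OSError:
            break                      # sending API failed: keep the data for a later attempt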
Congestion at receiver's side
That's what the asker actually had in mind.
Let's start with UDP. Really short answer: by its own design, data sent to congested servers will be lost. From the English Wikipedia:
[...]It has no handshaking dialogues, and thus exposes the user's
program to any unreliability of the underlying network; there is no
guarantee of delivery, ordering, or duplicate protection[...]
Finally, TCP. It was designed to provide what is missing in UDP. From the English Wikipedia:
TCP provides reliable, ordered, and error-checked delivery of a stream
of octets (bytes) between applications running on hosts communicating
via an IP network
How are these features achieved? I cannot provide a full TCP tutorial in this answer, but I can list three of TCP's fundamental traits for achieving reliability:
Sequence numbers. Packets are numbered (each packet, and in fact each byte, has a specific sequence number).
Retransmissions. The receiver sends an acknowledgement (ACK) for each packet (in fact, for each byte) it receives. If no ACK arrives within a timeout, the sender concludes the packet was not received and retransmits it (the number of retransmissions allowed varies across implementations and user settings).
Sliding window. To describe it simply: each peer continuously informs the remote peer about its window, the number of bytes it is currently able to receive. As soon as congestion occurs, a peer can shrink its window so that the sender slows down until the congestion ends (a small socket-level illustration follows this list).
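Here is that small illustration (a hedged sketch, not part of the original answer): the window a TCP receiver advertises is backed by the socket's receive buffer, which applications can inspect or resize.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))   # current receive buffer size in bytes
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)  # request roughly 256 KiB of buffer
s.close()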
To answer the OP's question: in case of server congestion on a TCP connection, the protocol provides retransmission and dynamic throughput management, which preserve any sent data for a reasonable amount of time.
I hope this simple description helps. It has probably raised even more questions, in which case I suggest deepening your study at the real sources:
RFC 768 (UDP)
RFC 793 (TCP)

How do transport, network, and data link layer functions achieve reliability?

We know that transport layer protocols like TCP control the flow and take care of reliability with sliding windows, acknowledgements, etc. The data link layer, with its LLC sublayer, also has the same functionality for reliable connections. First question: does this mean that both layers do the same work twice? Or, when we use TCP at the transport layer, is there no need for the LLC reliability functions? How does it work?
Second question: since the IP layer is unreliable when it sends and receives packets, does this mean that routers, which are layer 3 devices with no TCP above them, depend on the LLC sublayer to take care of reliability between two routers?
I don't think you need extra reliability. In terms of corruption, both IP and physical-layer protocols (such as Ethernet) have checksums/CRCs to detect it. In terms of packet loss, TCP is what's called a stream protocol: it transfers a stream of bytes, and the sequence/acknowledgement numbers serve that purpose, helping you keep track of bytes sent and read by the client. If some unexpected "jump" in those numbers happens, an ACK won't be sent by the receiving side and the sender will re-send the lost packets. So I don't think you need additional reliability for packet loss; TCP covers that for all the layers below it (I'm not sure I fully understood your question, though). That's the case when we use TCP.
As for using pure IP and physical-layer protocols, I honestly have no idea how to prevent packet loss, but corruption is prevented, as mentioned earlier, by checksums.
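As a toy illustration of the sequence/acknowledgement bookkeeping mentioned above (the numbers are made up): the ACK number is simply the sequence number of the next byte the receiver expects.
initial_seq = 1000
payload = b"hello world"
expected_ack = initial_seq + len(payload)  # receiver got bytes 1000..1010, so it ACKs 1011
print(expected_ack)                        # prints: 1011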

TCP vs UDP on video stream

I just came home from my exam in network-programming, and one of the question they asked us was "If you are going to stream video, would you use TCP or UDP? Give an explanation for both stored video and live video-streams". To this question they simply expected a short answer of TCP for stored video and UDP for live video, but I thought about this on my way home, and is it necessarily better to use UDP for streaming live video? I mean, if you have the bandwidth for it, and say you are streaming a soccer match, or concert for that matter, do you really need to use UDP?
Let's say that while you are streaming this concert or whatever using TCP you start losing packets (something bad happened in some network between you and the sender), and for a whole minute you don't get any packets. The video stream will pause, and after the minute is gone packets start to get through again (IP found a new route for you). What would then happen is that TCP would retransmit the minute you lost and continue sending you the live stream. Assume the bandwidth is higher than the bit rate of the stream, and the ping is not too high, so in a short amount of time, the one minute you lost will act as a buffer for the stream; that way, if packet loss happens again, you won't notice.
Now, I can think of some applications where this wouldn't be a good idea, like for instance video conferences, where you need to always be at the end of the stream, because delay during a video chat is just horrible, but during a soccer match, or a concert, what does it matter if you are a single minute behind the stream? Plus, you are guaranteed that you get all the data, and it would be better for later viewing when it's coming in without any errors.
So this brings me to my question. Are there any drawbacks that I don't know of about using TCP for live-streaming? Or should it really be, that if you have the bandwidth for it you should go for TCP given that it is "nicer" to the network (flow-control)?
Drawbacks of using TCP for live video:
As you mentioned, TCP buffers the unacknowledged segments for every client. In some cases this is undesirable, such as TCP streaming for very popular live events: your list of simultaneous clients (and buffering requirements) is large in this case. Pre-recorded video-casts typically don't have as much of a problem with this because viewers tend to stagger their replay activity.
TCP's delivery guarantees are a blocking function which isn't helpful in interactive conversations. Assume your network connection drops for 15 seconds. When we miss part of a conversation, we naturally ask the person to repeat (or the other party will proactively repeat if it seems like you missed something). UDP doesn't care if you missed part of a conversation for the last 15 seconds; it keeps working as if nothing happened. On the other hand, the app could be designed for TCP to replay the last 15 seconds (and the person on the other end may not want or know about that). Such a replay by TCP aggravates the problem, and makes it more difficult to stay in sync with other parties in the conversation. Comparing TCP and UDP’s behavior in the face of packet loss, one could say that it’s easier for UDP to stay in sync with the state of an interactive conversation.
IP multicast significantly reduces video bandwidth requirements for large audiences; multicast requires UDP (and is incompatible with TCP). Note - multicast is generally restricted to private networks. Please note that multicast over the internet is not common. I would also point out that operating multicast networks is more complicated than operating typical unicast networks.
FYI, please don't use the word "packages" when describing networks. Networks send "packets".
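To make the multicast point concrete, here is a hedged sketch of a UDP receiver joining a multicast group (the group address and port are placeholders I chose); there is simply no TCP equivalent of this:
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # hypothetical multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Ask the kernel to join the group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)  # blocks until a datagram sent to the group arrives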
but during a soccer-match, or a
concert what does it matter if you are
a single minute behind the stream?
To some soccer fans, quite a bit. It has been remarked that delays of even a few seconds present in digital video streams due to encoding (or whatever) can be very annoying when, during high-profile events such as World Cup matches, you can hear the cheers and groans from the guys next door (who are watching an undelayed analog program) before you get to see the game moves that caused them.
I think that to someone caring a lot about sports (and those are the biggest group of paying customers for digital TV, at least here in Germany), being a minute behind in a live video stream would be completely unacceptable (As in, they'd switch to your competitor where this doesn't happen).
Usually a video stream is somewhat fault tolerant. So if some packets get lost (due to some router along the way being overloaded, for example), it will still be able to display the content, but with reduced quality.
If your live stream were using TCP/IP, it would be forced to wait for those dropped packets before it could continue processing newer data.
That's doubly bad:
old data must be re-transmitted (probably for a frame that was already displayed and is therefore worthless), and
new data can't arrive until after old data was re-transmitted
If your goal is to display as up-to-date information as possible (and for a live-stream you usually want to be up-to-date, even if your frames look a bit worse), then TCP will work against you.
For a recorded stream the situation is slightly different: you'll probably be buffering a lot more (possibly several minutes!) and would rather have data re-transmitted than have artifacts due to lost packets. In this case TCP is a good match (this could still be implemented in UDP, of course, but TCP doesn't have as many drawbacks as in the live stream case).
There are some use cases suitable to UDP transport and others suitable to TCP transport.
The use case also dictates the encoding settings for the video. When broadcasting a soccer match the focus is on quality; for a video conference the focus is on latency.
When using multicast to deliver video to your customers, UDP is used.
Multicast requires capable (and expensive) networking hardware between the broadcasting server and the customer. In practice this means that if your company owns the network infrastructure, you can use UDP and multicast for live video streaming. Even then, quality of service is implemented to mark and prioritize video packets so that no packet loss happens.
Multicast simplifies the broadcasting software because the network hardware handles distributing packets to customers. Customers subscribe to multicast channels and the network reconfigures itself to route packets to new subscribers. By default all channels are available to all customers and can be optimally routed.
This workflow makes authorization difficult, because the network hardware does not differentiate subscribed users from other users. The solution is to encrypt the video content and enable decryption in the player software when the subscription is valid.
A unicast (TCP) workflow allows the server to check the client's credentials and admit only valid subscriptions, or even to allow only a certain number of simultaneous connections.
Multicast is not enabled over the internet.
For delivering video over the internet, TCP must be used. When UDP is used, developers end up re-implementing packet retransmission, e.g. the BitTorrent p2p live protocol.
"If you use TCP, the OS must buffer the unacknowledged segments for every client. This is undesirable, particularly in the case of live events".
This buffer must exist in some form; the same is true for the jitter buffer on the player side. It is called the "socket buffer", and server software can detect when this buffer is full and discard the appropriate video frames for live streams. It is better to use the unicast/TCP method because the server software can implement proper frame-dropping logic, whereas random missing packets in the UDP case just create a bad user experience, like in this video: http://tinypic.com/r/2qn89xz/9
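A hedged sketch of the frame-dropping idea described above (the function and its policy are my illustration, not the answer's code): before writing a live frame, check whether the TCP socket is writable, i.e. its send buffer has room, and skip the frame otherwise so stale data never queues up.
import select
import socket

def send_or_drop(conn: socket.socket, frame: bytes) -> bool:
    _, writable, _ = select.select([], [conn], [], 0)  # poll writability without blocking
    if not writable:
        return False       # send buffer full: drop this frame to keep the stream live
    conn.sendall(frame)    # there is room, so this should complete quickly
    return True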
"IP multicast significantly reduces video bandwidth requirements for large audiences"
This is true for private networks; multicast is not enabled over the internet.
"Note that if TCP loses too many packets, the connection dies; thus, UDP gives you much more control for this application since UDP doesn't care about network transport layer drops."
UDP also doesn't care about dropping entire frames or groups of frames, so it does not give any more control over the user experience.
"Usually a video stream is somewhat fault tolerant"
Encoded video is not fault tolerant. When it is transmitted over an unreliable transport, forward error correction is added to the video container. A good example is the MPEG-TS container used in satellite video broadcasts, which carries several audio, video, EPG, etc. streams. This is necessary because a satellite link is not duplex, meaning the receiver can't request retransmission of lost packets.
When duplex communication is available, it is always better to retransmit data only to the clients experiencing packet loss than to include the overhead of forward error correction in the stream sent to all clients.
In any case, lost packets are unacceptable. Dropped frames are OK in exceptional cases when bandwidth is constrained.
The result of missing packets is artifacts like this one:
Some decoders can break on streams missing packets in critical places.
I recommend looking at the new p2p live protocol, BitTorrent Live.
As for streaming, it's better to use UDP: first because it lowers the load on servers, but mostly because you can send packets with multicast, which is simpler than sending them to each connected client.
It depends. How critical is the content you are streaming? If it is critical, use TCP. This may cause issues with bandwidth, video quality (you might have to use a lower quality to deal with latency), and latency. But if you need the content to be guaranteed to get there, use it.
Otherwise UDP should be fine if the stream is not critical and would be preferred because UDP tends to have less overhead.
One of the biggest problems with delivering live events on the Internet is scale, and TCP doesn't scale well. For example, when you are beaming a live football match (as opposed to on-demand movie playback), the number of people watching can easily be 1000 times greater. In such a scenario, using TCP is a death sentence for the CDNs (content delivery networks).
There are a couple of main reasons why TCP doesn't scale well:
One of the largest trade-offs of TCP is the variability of the throughput achievable between the sender and the receiver. When streaming video over the Internet, the video packets must traverse multiple routers, each connected by links of different speeds. The TCP algorithm starts with a small window, then grows it until packet loss is detected; packet loss is taken as a sign of congestion, and TCP responds by drastically reducing the window size, which immediately reduces the effective throughput. Now imagine a TCP transmission crossing 6-7 router hops to the client (a very normal scenario): if any intermediate router loses a packet, TCP reduces the transmission rate on that connection. In fact the traffic flow follows an hourglass kind of shape, always going up and down, rendering the effective throughput much lower than best-effort UDP.
As you may already know, TCP is an acknowledgement-based protocol. Let's say, for example, that a sender is 50 ms away (i.e. the latency between the two points). This means the time it takes for a packet to reach the receiver and for the receiver to send an acknowledgement back is 100 ms; thus the maximum throughput possible, compared to UDP-based transmission, is already halved.
TCP doesn't support multicast, or the emerging multicast standard AMT, which means CDNs don't have the opportunity to reduce network traffic by replicating packets when many clients are watching the same content. That by itself is a big enough reason for CDNs (like Akamai or Level3) not to go with TCP for live streams.
While reading the TCP/UDP debate I noticed a logical flaw. A TCP packet loss causing a one-minute delay that's converted into a one-minute buffer can't be compared to UDP dropping a full minute under the same loss. A fairer comparison is as follows.
TCP experiences a packet loss. The video stops while TCP resends packets in an attempt to deliver a mathematically perfect stream. The video is delayed for a minute and picks up where it left off once the missing packet reaches its destination. We all wait, but we know we won't miss a single pixel.
UDP experiences a packet loss. For a second during the video stream a corner of the screen gets a little blurry. No one notices and the show goes on without looking for the lost packets.
Anything that streams benefits most from UDP. The packet loss causing a one-minute delay for TCP would not cause a one-minute delay for UDP. Considering that most systems use multiple-resolution streams that go blocky when starved for packets, it makes even more sense to use UDP.
UDP FTW when streaming.
If the bandwidth is far higher than the bitrate, I would recommend TCP for unicast live video streaming.
Case 1: consecutive packets are lost for several seconds. => The live video will stop on the client side whatever the transport is (TCP or UDP). When the network recovers:
- if TCP is used, the client video player can choose to restart the stream at the first lost packet (timeshift) OR to drop all late packets and restart the video stream with no timeshift.
- if UDP is used, there is no choice on the client side; the video restarts live without any timeshift.
=> TCP equal or better.
Case 2: packets are randomly and frequently lost on the network.
- if TCP is used, these packets will be retransmitted immediately, and with a correct jitter buffer there will be no impact on the video stream quality/latency.
- if UDP is used, video quality will be poor.
=> TCP much better
Besides all the other reasons, UDP can use multicast. Supporting 1000s of TCP users all transmitting the same data wastes bandwidth.
However, there is another important reason for using TCP.
TCP can much more easily pass through firewalls and NATs. Depending on your NAT and operator, you may not even be able to receive a UDP stream due to problems with UDP hole punching.
For video streaming, bandwidth is likely the constraint on the system. Using multicast you can greatly reduce the amount of upstream bandwidth used. With UDP you can easily multicast your packets to all connected terminals.
You could also use a reliable multicast protocol such as Pragmatic General Multicast (PGM); I don't know much about it, and I guess it isn't widely used.
All the 'use UDP' answers assume an open network and 'stuff it as much as you can' approach. Good for old-style closed-garden dedicated audio/video networks, which are a vanishing sort.
In the real world, your transmission will go through firewalls (which will drop multicast and sometimes UDP), and the network is shared with other, more important ($$$) apps, so you want abusers to be punished with window scaling.
Here's the thing: it is more a matter of content than of time. The TCP protocol requires that a packet that was not delivered be checked, verified, and redelivered. UDP has no such requirement. So if you send a file which amounts to millions of packets using UDP, like a video, and some of the packets are missing on delivery, they will most likely not be missed.

Difference between TCP and UDP?

What is the difference between TCP and UDP?
I know that TCP is used in the case of non-time-critical applications, and UDP is used for games or applications that require fast transmission of data. I know that TCP is used for HTTP, HTTPS, FTP, SMTP, and Telnet. I know that UDP is used for DNS and DHCP.
But why? What characteristics of TCP and UDP make it useful for their respective use cases?
TCP is a connection-oriented stream over an IP network. It guarantees that all sent packets will reach the destination in the correct order. This implies the use of acknowledgement packets sent back to the sender and automatic retransmission, causing additional delay and a generally less efficient transmission than UDP.
UDP is a connectionless protocol. Communication is datagram-oriented. Integrity is guaranteed only for a single datagram. Datagrams may arrive out of order or not arrive at all. It is more efficient than TCP because it does not use ACKs. It's generally used for real-time communication, where a small packet loss rate is preferable to the overhead of a TCP connection.
In certain situations UDP is used because it allows broadcast packet transmission. This is sometimes fundamental, for instance with the DHCP protocol, because the client machine has not yet received an IP address (obtaining one is the very purpose of the DHCP negotiation) and there is no way to establish a TCP stream without an IP address.
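A hedged sketch of the broadcast capability mentioned above (the payload is a stand-in, not a real DHCP DISCOVER message): a UDP socket can send to the limited-broadcast address before the host has any configured peer.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # allow broadcast on this socket
s.sendto(b"discover", ("255.255.255.255", 67))           # 67 is the DHCP/BOOTP server port
s.close()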
From the Skullbox article:
TCP (Transmission Control Protocol) is the most commonly used protocol on the Internet.
The reason for this is because TCP offers error correction. When the TCP protocol is used there is a "guaranteed delivery." This is due largely in part to a method called "flow control." Flow control determines when data needs to be re-sent, and stops the flow of data until previous packets are successfully transferred. This works because if a packet of data is sent, a collision may occur. When this happens, the client re-requests the packet from the server until the whole packet is complete and is identical to its original.
UDP (User Datagram Protocol) is another commonly used protocol on the Internet. However, UDP is never used to send important data such as webpages, database information, etc; UDP is commonly used for streaming audio and video. Streaming media such as Windows Media audio files (.WMA), Real Player (.RM), and others use UDP because it offers speed! The reason UDP is faster than TCP is because there is no form of flow control or error correction. The data sent over the Internet is affected by collisions, and errors will be present. Remember that UDP is only concerned with speed. This is the main reason why streaming media is not high quality.
1) TCP is connection-oriented and reliable, whereas UDP is connectionless and unreliable.
2) TCP needs more processing at the network interface level, whereas UDP does not.
3) TCP uses a 3-way handshake, congestion control, flow control, and other mechanisms to ensure reliable transmission.
4) UDP is mostly used in cases where the packet delay is more serious than packet loss.
Think of TCP as a dedicated scheduled UPS/FedEx pickup/dropoff of packages between two locations, while UDP is the equivalent of throwing a postcard in a mailbox.
UPS/FedEx will do their damndest to make sure that the package you mail off gets there, and get it there on time. With the post card, you're lucky if it arrives at all, and it may arrive out of order or late (how many times have you gotten a postcard from someone AFTER they've gotten home from the vacation?)
TCP is as close to a guaranteed delivery protocol as you can get, while UDP is just "best effort".
Reasons UDP is used for DNS and DHCP:
DNS - TCP requires more resources from the server (which listens for connections) than it does from the client. In particular, when the TCP connection is closed, the server is required to remember the connection's details (holding them in memory) for two minutes, during a state known as TIME_WAIT. This is a feature which defends against erroneously repeated packets from a preceding connection being interpreted as part of the current connection. Maintaining TIME_WAIT uses up kernel memory on the server. DNS requests are small and arrive frequently from many different clients. This usage pattern exacerbates the load on the server compared with the clients. It was believed that using UDP, which has no connections and no state to maintain on either client or server, would ameliorate this problem.
DHCP - DHCP is an extension of BOOTP. BOOTP is a protocol which client computers use to get configuration information from a server, while the client is booting. In order to locate the server, a broadcast is sent asking for BOOTP (or DHCP) servers. Broadcasts can only be sent via a connectionless protocol, such as UDP. Therefore, BOOTP required at least one UDP packet, for the server-locating broadcast. Furthermore, because BOOTP is running while the client... boots, and this is a time period when the client may not have its entire TCP/IP stack loaded and running, UDP may be the only protocol the client is ready to handle at that time. Finally, some DHCP/BOOTP clients have only UDP on board. For example, some IP thermostats only implement UDP. The reason is that they are built with such tiny processors and little memory that they are unable to perform TCP -- yet they still need to get an IP address when they boot.
As others have mentioned, UDP is also useful for streaming media, especially audio. Conversations sound better under network lag if you simply drop the delayed packets. You can do that with UDP, but with TCP all you get during lag is a pause, followed by audio that will always be delayed by as much as it has already paused. For two-way phone-style conversations, this is unacceptable.
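To ground the DNS case above, here is a hedged Scapy sketch (the resolver address and query name are placeholders, and sending raw packets typically needs root privileges): a whole DNS lookup fits in one UDP datagram each way.
from scapy.all import IP, UDP, DNS, DNSQR, sr1

query = IP(dst="8.8.8.8") / UDP(dport=53) / DNS(rd=1, qd=DNSQR(qname="example.com"))
reply = sr1(query, timeout=2)       # one datagram out, one datagram back
if reply is not None:
    reply[DNS].show()               # dissected answer section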
One of the differences, in short:
UDP: send the message and don't look back to see whether it reached the destination; a connectionless protocol.
TCP: send the message and guarantee it reaches the destination; a connection-oriented protocol.
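A minimal socket-level sketch of that difference (the hosts and ports are only illustrative): TCP must establish a connection before any data moves, while UDP just fires a datagram and doesn't look back.
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))           # the 3-way handshake happens here
tcp.sendall(b"HEAD / HTTP/1.0\r\n\r\n")    # bytes arrive in order, or the call fails
tcp.close()

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("example.com", 9))    # no handshake, no delivery guarantee
udp.close()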
TCP establishes a connection before the actual data transmission takes place; UDP does not. In this way, UDP can provide faster delivery. Applications like DNS and time-server access therefore use UDP.
Unlike UDP, TCP uses congestion control: it responds to the network load and slows down when network congestion is imminent. So applications like multimedia, which prefer constant throughput, might go for UDP.
Besides, UDP is unreliable; it doesn't react to packet loss. So loss-tolerant applications like multimedia transmission prefer UDP. TCP, however, is a reliable protocol, so applications that require reliability, such as web transfer, email, and file download, prefer TCP.
Also, on today's internet UDP is not as welcome as TCP because of middleboxes. Some applications, like Skype, fall back to TCP when UDP appears to be blocked.
I ran into this thread, so let me try to express it this way.
TCP
3-way handshake
Bob: Hey Amy, I'd like to tell you a secret
Amy: OK, go ahead, I'm ready
Bob: OK
Communication
Bob: 'I', this is the first letter
Amy: First letter received, please send me the second letter
Bob: ' ', this is the second letter
Amy: Second letter received, please send me the third letter
Bob: 'L', this is the third letter
After a while
Bob: 'L', this is the third letter
Amy: Third letter received, please send me the fourth letter
Bob: 'O', this is the fourth letter
Amy: ...
......
4-way handshake
Bob: My secret is exposed, now, you know my heart.
Amy: OK. I have nothing to say.
Bob: OK.
UDP
Bob: I LOVE U
Amy received: OVI L E
TCP is more reliable than UDP and even guarantees message order; that extra work is, no doubt, why UDP is the more lightweight and efficient of the two.
The Law of Leaky Abstractions
by Joel Spolsky
http://www.joelonsoftware.com/articles/LeakyAbstractions.html
Short and simple differences between the TCP and UDP protocols:
1) TCP - Transmission Control Protocol; UDP - User Datagram Protocol.
2) TCP is a reliable protocol, whereas UDP is an unreliable protocol.
3) TCP is stream-oriented, whereas UDP is message-oriented.
4) TCP is slower than UDP.
This sentence is a UDP joke, but I'm not sure that you'll get it. The below conversation is a TCP/IP joke:
A: Do you want to hear a TCP/IP joke?
B: Yes, I want to hear a TCP/IP joke.
A: Ok, are you ready to hear a TCP/IP joke?
B: Yes, I'm ready to hear a TCP/IP joke.
A: Well, here is the TCP/IP joke.
A: Did you receive a TCP/IP joke?
B: Yes, I **did** receive a TCP/IP joke.
TCP and UDP are transport-layer protocols (Layer 4 in the OSI, Open Systems Interconnection, model). The main differences, along with pros and cons, are as follows.
TCP
PROS:
Acknowledgment
Guaranteed Delivery
Connection based
Ordered packets
Congestion control
CONS:
Larger packets
More bandwidth
Slower
Stateful
Consumes memory
UDP
PROS:
Packets are smaller
Consume less bandwidth
Faster
Stateless
CONS:
No acknowledgment
No guaranteed delivery
Connectionless
No congestion control
No packet ordering
TLDR;
TCP - stream-oriented, requires a connection, reliable, slow
UDP - message-oriented, connectionless, unreliable, fast
Before we start, remember that the disadvantages of something are a continuation of its advantages. There is only the right tool for a job, no panacea. TCP and UDP have coexisted for decades, and for a reason.
TCP
It was designed to be extremely reliable and it does its job very well. It's so complex because it accomplishes a hard task: providing a reliable transport over the unreliable IP protocol.
Since all TCP's complex logic is encapsulated into the network stack, you are free from doing lots of laborious, error-prone low-level stuff in the application layer.
When you send data over TCP, you write a stream of bytes to the socket at the sender side, where it gets broken into packets, passed down the stack, and sent over the wire. On the receiver side the packets get reassembled into a continuous stream of bytes.
Maintaining this nice abstraction has a cost in terms of complexity and performance. If the first packet of the byte stream is lost, the receiver will delay processing of subsequent packets even if they have already arrived (the so-called "head-of-line blocking").
In addition, in order to be reliable, TCP implements this:
TCP requires an established connection, set up by a three-packet exchange (the "infamous" 3-way handshake) that costs at least one extra round trip before data can flow
TCP has a feature called "slow start", where it gradually ramps up the transmission rate after establishing a connection to allow the receiver to keep up with the data rate
Every sent packet has to be acknowledged, or else the sender will stop sending more data
And on and on and on...
All this is exacerbated in slow, unreliable wireless networks, because TCP was designed for wired networks where delays are predictable and packet loss is not so common. In addition, as many people have already mentioned, for some things TCP just doesn't work at all (DHCP). However, where relevant, TCP still does its work exceptionally well.
Using a mail analogy, a TCP session is like telling a story to your secretary, who breaks it into letters and sends them over a crappy mail service to a publisher. On the other side another secretary reassembles the letters into a single piece of text. Some letters get lost, some get corrupted, so a very complex procedure is required for reliable delivery, and your 10-page story can take a long time to reach your publisher.
UDP
UDP, on the other hand, is message-oriented, so the sender writes a message (packet) to the socket and it gets transmitted to the receiver as-is, without any splitting or reassembling in the transport layer.
Compared to TCP, its specification is very straightforward. Essentially, all it does for you is add a checksum to the packet so the receiver can detect corruption. Everything else must be implemented by you, the software developer. Now read the voluminous TCP spec and imagine re-implementing even a small subset of it.
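For a sense of how little UDP adds, here is a sketch of the RFC 1071 one's-complement checksum that the UDP header carries (my own implementation for illustration, not code from the answer):
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add the next big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back into the low 16 bits
    return ~total & 0xFFFF                        # one's complement of the folded sum

print(hex(internet_checksum(b"example payload")))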
Some people went this way and got very decent results, to the point that HTTP/3 uses QUIC, a protocol based on UDP. However, this is more of an exception. Common applications of UDP are audio/video streaming and conferencing applications like Skype, Zoom or Google Hangouts, where losing packets matters less than the delay TCP would introduce.
Simple Explanation by Analogy
TCP is like this.
Imagine you have a pen-pal on Mars (we communicated with written letters back in the good ol' days before the internet).
You need to send your pen pal the seven habits of highly effective people. So you decide to send it in seven separate letters:
Letter 1 - Be proactive
Letter 2 - Begin with the end in mind...
etc.
Letter 7 - Sharpen the Saw
Requirements:
You want to make sure that your pen pal receives all your letters, in order, and that they arrive perfectly. If your pen pal receives letter 7 before letter 1, that's no good. If your pen pal receives all letters except letter 3, that also is no good.
Here's how we ensure that our requirements are met:
Confirmation Letter: So your pen pal sends a confirmation letter to say "I have received letter 1". That way you know that your pen pal has received it. If a letter does not arrive, or arrives out of order, then you have to stop, and go back and re-send that letter, and all subsequent letters.
Flow Control: Around the time of Xmas you know that your pen pal will be receiving a lot of mail, so you slow down because you don't want to overwhelm your pen pal. (Your pen pal sends you constant updates about the number of unread messages in their mailbox; if your pen pal says the inbox is about to explode because it is so full, you slow down sending your letters, because your pen pal won't be able to read them.)
Perfect arrival: sometimes, while your letter is in the mail, it can get torn, or a snail can eat half of it. How do you know that your whole letter has arrived in perfect condition? Well, your pen pal gives you a mechanism by which you can check whether they've got the full letter and that it is exactly the letter you sent (e.g. via a word count); a basic analogy.

TCP compatibility: Why is TCP not compatible with packet broadcast and multicasting actions?

From http://en.wikipedia.org/wiki/User_Datagram_Protocol:
"Unlike TCP, UDP is compatible with packet broadcast (sending to all on local network) and multicasting (send to all subscribers)."
'Compatible' is a very poor choice of words here. 'Supports' is what is really being described. TCP is a point to point protocol, by design. Period. TCP multicast is a contradiction in terms.
EDIT: I updated the Wikipedia page to reflect this comment.
EDIT 2: Incredibly enough, somebody has removed all mention of multicast from the Wikipedia UDP page since this question was posted. I fixed it. Again.
TCP establishes a connection between the sender and the receiver. The sender sends a packet, then waits for acknowledgement from the receiver before sending another.[1] If a packet goes too long without being acknowledged, it resends the packet until it does receive an acknowledgement (that's how it gets its reliability).
In the case of multicast and broadcast, the sender doesn't even know how many receivers there might be, not to mention who they are. That makes it pretty much impossible for it to wait for confirmation and re-send packets if somebody doesn't acknowledge a packet correctly.
[1] Technically, there's a "window" that allows it to send, say, five packets before it receives acknowledgement, but you get the idea -- it still needs to know who's receiving, and get acknowledgement of packets that it sent, and re-send packets if they're not acknowledged.
TCP incorporates both flow control and reliability based on acknowledgment from the recipient of the data. A broadcast or multicast transmitter has no idea which or how many other nodes are listening; even if it did, via some sort of multipoint synchronization algorithm similar to TCP's point-to-point synchronization, flow control would be an issue, because under the worst conditions a single receiver would limit the speed of the entire flow.
The short answer is because broadcast TCP is complicated.
The long answer is that the important parts of the TCP protocol, namely reliability and congestion control, are easily subject to abuse and don't scale well when ported to broadcast semantics; also, multicast is simply an optional component of the IPv4 standard and is either not implemented or disabled on most core routers.
Many papers have been published investigating new protocols to improve scalability; IPv6 promotes multicast to a core protocol requirement and, alongside source-specific multicast, significantly improves core routing support and security, though abuse remains a significant open area.
Abuse covers many aspects of the protocol, from man-in-the-middle attacks, to overloading network infrastructure, causing network storms of traffic upstream to the source.
On a Windows machine today you can use the PGM protocol with stream support, which operates pretty much like broadcast TCP. It is used by Microsoft's messaging system, MSMQ.
http://msdn.microsoft.com/en-us/library/ms740125(v=vs.85).aspx
