I am an embedded software developer who does not have any experience with TCP/IP on connected devices. I am not a software protocol expert either, so I am a bit confused about the TCP/IP protocol stack and the responsibilities of its various layers, the PHY in particular.
First of all, I have experience with protocols such as UART, SPI, CAN and USB. As you know, the PHY layer directly affects the software protocol you build on top of it. For example, if you use USB and build a software protocol on it, you usually do not have to deal with details like detecting corrupted frames in your software protocol, because the PHY layer guarantees this. CAN also has controller facilities such as CRC and bit stuffing, so it is really reliable. But the situation is not the same for simple peripherals like UART/USART. Say you are using a Bluetooth module to upgrade your firmware: you need to be aware of almost everything that can go wrong while communicating, such as delays, corrupted frames, payload validation, etc.
Briefly, I am trying to understand the exact role of the network interfaces included in MCUs that are wired directly to RJ45 PHYs/sockets. In other words, imagine I wrote a server application on my PC, and I configured and ran an application on my development board (which has an RJ45 socket) as a client, and they established a connection over TCP. What will the situation be on the client side when I send 32 bytes of data to the socket from the server side? What will I see at the lowest level of the MCU, i.e. in an RxCompleteInterrupt()? Are the data I sent, and the other stuff appended around them in the TCP packet, guaranteed to be delivered correctly by the Ethernet controller in the MCU and the Ethernet controller of my PC? Or am I (or the stack I use) responsible for checking everything necessary to validate whether the frame is valid or not?
I tried to be as clear as possible. If you have experience, please write clear comments. I am not a TCP/IP expert and may have used some wrong terminology, so please focus on the main concept of the question.
Thanks folks.
If you don't have any prior experience with the TCP/IP protocol suite, I would strongly suggest you have a look at this IBM Redbook, more specifically at chapters 2, 3 and 4.
This being said:
What will the situation be on the client side when I send 32 bytes of data to the socket from the server side? What will I see at the lowest level of the MCU, i.e. in an RxCompleteInterrupt()?
You should have received an Ethernet frame in your buffer. This Ethernet frame should contain an IP packet. This IP packet should contain a TCP packet, whose payload should consist of your 32 bytes of data. But there will be several exchanges between the client and the server before your data is received, because TCP is a connection-oriented protocol, i.e. several Ethernet frames will be sent/received.
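To give a rough idea of the layering, the sketch below shows what such a frame looks like in the RX buffer, assuming plain IPv4 and no IP/TCP options (real stacks define their own header structures, and the TCP header very often carries options):

    #include <stdint.h>

    /* Illustrative layout only; the packed attribute shown is GCC syntax. */
    struct eth_hdr {            /* 14 bytes */
        uint8_t  dst_mac[6];
        uint8_t  src_mac[6];
        uint16_t ethertype;     /* 0x0800 = IPv4 */
    } __attribute__((packed));

    struct ipv4_hdr {           /* 20 bytes without options */
        uint8_t  ver_ihl;
        uint8_t  tos;
        uint16_t total_length;
        uint16_t id;
        uint16_t flags_frag;
        uint8_t  ttl;
        uint8_t  protocol;      /* 6 = TCP */
        uint16_t checksum;
        uint32_t src_addr;
        uint32_t dst_addr;
    } __attribute__((packed));

    struct tcp_hdr {            /* 20 bytes without options */
        uint16_t src_port;
        uint16_t dst_port;
        uint32_t seq;
        uint32_t ack;
        uint8_t  data_offset;
        uint8_t  flags;
        uint16_t window;
        uint16_t checksum;
        uint16_t urgent;
    } __attribute__((packed));

    /* 14 + 20 + 20 + 32 = 86 bytes in the RX buffer for the frame that
     * carries your payload; the 4-byte FCS at the end is normally checked
     * and stripped by the MAC before the RX interrupt fires. */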
Are the data I sent, and the other stuff appended around them in the TCP packet, guaranteed to be delivered correctly by the Ethernet controller in the MCU and the Ethernet controller of my PC? Or am I (or the stack I use) responsible for checking everything necessary to validate whether the frame is valid or not?
The TCP payload will ultimately be delivered, but there is no guarantee that individual Ethernet frames and IP packets will be delivered, nor that they will arrive in the right order. This is precisely the job of TCP, as a connection-oriented protocol: doing whatever is needed so that the data you send as a TCP payload is ultimately delivered, complete and in order. Your MCU hardware (the MAC) should be the one responsible for validating the Ethernet frames (checking the frame's CRC/FCS), but the TCP/IP stack running on the MCU is responsible for validating the IP and TCP packets and for the proper delivery of the data being sent/received over TCP.
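To make "validating the IP and TCP packets" concrete: the MAC checks the Ethernet CRC (FCS) in hardware, while the IP header checksum and the TCP checksum are the 16-bit one's-complement sum described in RFC 1071, which the software stack verifies (unless the MAC offers checksum offload). A minimal sketch of that sum:

    #include <stdint.h>
    #include <stddef.h>

    /* One's-complement sum used by the IP header checksum and (over a
     * pseudo-header plus the segment) by the TCP checksum, per RFC 1071.
     * Sketch only; a real stack ships its own version, and many MACs can
     * offload this calculation. */
    static uint16_t inet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {
            sum += (uint32_t)p[0] << 8 | p[1];
            p += 2;
            len -= 2;
        }
        if (len)                       /* odd trailing byte */
            sum += (uint32_t)p[0] << 8;

        while (sum >> 16)              /* fold the carries back in */
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;
    }
    /* A received IP header is valid if inet_checksum(header, ihl * 4) == 0. */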
You can experiment with TCP on a Linux PC using netcat, and capture the exchange using Wireshark or tcpdump.
Create a 'response' file containing 32 bytes (31 characters plus the newline appended by echo):
echo 0123456789ABCDEFGHIJKLMNOPQRSTU > response.txt
Start Wireshark, capture on the lo (loopback) interface, and use the capture filter tcp port 1234.
Start a TCP server listening on TCP port 1234, which will send the content of response.txt upon receiving a connection from the client:
netcat -l 1234 < response.txt
In another console/shell, connect to the server listening on tcp/1234, and display what was received:
netcat localhost 1234
0123456789ABCDEFGHIJKLMNOPQRSTU
In Wireshark, you should see the capture of the full exchange, and be able to expand all its frames/packets using the IBM Redbook as a reference.
Your 32 bytes of data will be in the payload section of a TCP packet sent by the server.
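If you later want to replace the netcat client with code of your own, a minimal POSIX TCP client for the same experiment could look like the sketch below (same localhost address and port 1234 as above, error handling kept to a bare minimum):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Minimal POSIX equivalent of "netcat localhost 1234": connect, read
     * whatever the server sends (the content of response.txt), print it. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(1234);
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }

        char buf[64];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }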
I have quite a newbie question: assume that I have two devices communicating via Ethernet (TCP/IP) at 100 Mbps. On one side, I will be feeding the device with data to transmit. On the other side, I will be consuming the received data. I can choose an adequate buffer size for both devices.
And now my question is: if the data consumption rate on the second device is slower than the data feeding rate on the first one, what will happen?
I found some material talking about an overrun counter.
Is there anything in the Ethernet communication indicating that a device is momentarily busy and can't receive new packets, so that the transmission can be paused from the receiving device's side?
Can someone provide me with a document or documents that explain this issue in detail? I didn't find any.
Thank you in advance.
The Ethernet protocol runs on the MAC controller chip. The MAC has two separate rings, an RX ring (for ingress packets) and a TX ring (for egress packets), which means it is full-duplex in nature. The RX/TX paths also have on-chip FIFOs, but the rings hold PDUs in host memory buffers. I have covered a little bit of this functionality in a related post.
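Conceptually, an RX ring is just an array of DMA descriptors, each pointing at a host buffer, with an ownership flag handed back and forth between the driver and the hardware. The sketch below is purely illustrative, with invented field names; every MAC controller defines its own layout in its datasheet:

    #include <stdint.h>

    /* Conceptual sketch of an RX descriptor ring; the field names and the
     * ownership convention are made up for illustration. */
    #define RX_RING_SIZE 64
    #define RX_BUF_SIZE  1536

    struct rx_desc {
        uint32_t buf_addr;   /* physical address of the host buffer      */
        uint16_t length;     /* filled in by the MAC on reception        */
        uint16_t status;     /* OWN bit, error flags, end-of-ring marker */
    };

    static struct rx_desc rx_ring[RX_RING_SIZE];
    static uint8_t rx_buffers[RX_RING_SIZE][RX_BUF_SIZE];

    /* The driver hands each descriptor to the hardware; the MAC DMAs an
     * incoming frame into the buffer, clears the OWN bit and raises the RX
     * interrupt. If the CPU cannot return descriptors to the hardware fast
     * enough, the ring fills up and the MAC drops frames and counts an
     * overrun. */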
Now, congestion can happen, but again, RX and TX are two different paths; congestion will be due to the following conditions:
Queuing/de-queuing of RX/TX buffers is not fast enough compared to the line rate. This happens when the CPU is busy and does not honor the interrupts fast enough.
Host memory is slow (e.g. DRAM rather than SRAM), or there is not enough memory (e.g. due to a memory leak).
Intermediate processing of the buffers takes too long.
Now, about the peer device: back-pressure can be handled within a standalone system, and when that happens we usually tail-drop the packets. This is agnostic to the peer device; if the peer device is slow, that is the peer device's problem.
The definition of overrun is: the number of times the receiver hardware was unable to move received data into a hardware buffer because the input rate exceeded the receiver's ability to handle the data.
I recommend picking up any MAC controller's datasheet (e.g. an Intel Ethernet controller) and you will get all your questions covered, or have a look at the device driver for any MAC controller.
TCP/IP is an upper-layer stack that sits inside the kernel (it can live in user space as well), whereas the Ethernet ('ARPA' encapsulation) protocol is handled inside the MAC controller hardware. If you understand this, you will understand the difference between routers and switches (where there is no TCP/IP stack).
I am developing a logic core to transfer data between an FPGA and a PC over Ethernet, using a LAN8710 PHY on my FPGA board.
I have managed to transfer some UDP data packets from the FPGA to the PC. It's a simple core that complies with the PHY's transfer requirements: it builds the UDP packet and transfers it to the PC.
To check the reception on the PC, I am using Wireshark, and as said above, I receive the packets correctly. I have also checked the reception with a simple UDP receiver I wrote myself.
But I've noticed that I only receive these packets when Wireshark is running on the PC. I mean, if Wireshark is on, my application receives the packets too, and the received-packets counter in the following picture increases. (This picture is not mine, just one from the internet.)
http://i.stack.imgur.com/wsChT.gif
If I close Wireshark, the PC stops receiving packets and the counter of received packets stops. My application stops receiving too.
Although I am a novice on networking topics, I suspect that this issue is on the PC side. It seems like Wireshark is "opening/closing" the Ethernet communication channel, or something like that. Does anyone know about this issue?
To build a functional core to transfer data between a PC and the FPGA, I've developed a core to send and receive UDP packets. The next step will be an ARP implementation (to let the PC identify my FPGA board, as I understand it). What protocols are necessary to perform full-duplex data transfer between these two devices?
Thank you very much in advance,
migue.
Check whether you get the appropriate receive interrupt at the Ethernet driver level on the PC side for a single packet transmitted by the FPGA. If you do not get the receive interrupt, check the transmit side (FPGA) for the appropriate transmit interrupts for the packet being transmitted. This should help you corner the issue.
As far as I know, Wireshark is just a packet analyzer/sniffer. However, if Wireshark is suspected, one option could be to try an alternative packet sniffer to rule out any such scenario.
A handy tool for spotting network problems and looking at network statistics is netstat. netstat -sp udp lists the statistics for UDP only. There are many other parameters that can be used with netstat for diagnosis.
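Another way to take Wireshark out of the equation is a bare-bones UDP receiver bound to the destination port your FPGA core uses (port 5000 below is only a placeholder). If datagrams show up here with Wireshark closed, the problem is elsewhere:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Bare-bones UDP receiver: bind to a port and print the size and source
     * of every datagram. Port 5000 is a placeholder; use whatever destination
     * port the FPGA core writes into its UDP header. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        for (;;) {
            char buf[2048];
            struct sockaddr_in src;
            socklen_t slen = sizeof(src);
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&src, &slen);
            if (n < 0) { perror("recvfrom"); break; }
            printf("%zd bytes from %s:%u\n", n,
                   inet_ntoa(src.sin_addr), ntohs(src.sin_port));
        }
        close(fd);
        return 0;
    }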
After many months I solved it; I'm posting this to help anyone stuck at the same point.
I finally figured out that Wireshark uses a capture library (libpcap/WinPcap) to access the computer's link layer. This library lets Wireshark sniff all incoming and outgoing packets on a specified network device, and to do so it opens the device, by default in promiscuous mode, so the NIC accepts frames it would otherwise ignore (for example, frames whose destination MAC address does not match the NIC's own address). That's why my program only worked while Wireshark was open.
Regards.
What is the difference between TCP and UDP?
I know that TCP is used for non-time-critical applications, and UDP is used for games or applications that require fast transmission of data. I know that TCP is used for HTTP, HTTPS, FTP, SMTP, and Telnet. I know that UDP is used for DNS and DHCP.
But why? What characteristics of TCP and UDP make it useful for their respective use cases?
TCP is a connection-oriented stream over an IP network. It guarantees that all sent data will reach the destination in the correct order. This implies the use of acknowledgement packets sent back to the sender, and automatic retransmission, causing additional delays and a generally less efficient transmission than UDP.
UDP is a connectionless protocol. Communication is datagram-oriented. Integrity is guaranteed only for a single datagram. Datagrams can arrive out of order or not arrive at all. It is more efficient than TCP because it does not use ACKs. It is generally used for real-time communication, where a small packet loss rate is preferable to the overhead of a TCP connection.
In certain situations UDP is used because it allows broadcast packet transmission. This is sometimes fundamental in cases like the DHCP protocol, because the client machine has not yet received an IP address (obtaining one is the very purpose of the DHCP negotiation) and there is no way to establish a TCP stream without an IP address.
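As a small illustration of the broadcast point: with UDP, enabling SO_BROADCAST on the socket is all it takes to send to the broadcast address, which TCP simply cannot do. The address and port below are arbitrary examples, not the real DHCP ports or message format:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send one datagram to the local broadcast address. Port 9999 and the
     * message are examples only; DHCP itself uses ports 67/68 and a more
     * elaborate packet format. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int yes = 1;
        setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(9999);
        inet_pton(AF_INET, "255.255.255.255", &dst.sin_addr);

        const char msg[] = "is anybody out there?";
        if (sendto(fd, msg, sizeof(msg), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }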
From the Skullbox article:
TCP (Transmission Control Protocol) is the most commonly used protocol on the Internet.
The reason for this is because TCP offers error correction. When the TCP protocol is used there is a "guaranteed delivery." This is due largely in part to a method called "flow control." Flow control determines when data needs to be re-sent, and stops the flow of data until previous packets are successfully transferred. This works because if a packet of data is sent, a collision may occur. When this happens, the client re-requests the packet from the server until the whole packet is complete and is identical to its original.
UDP (User Datagram Protocol) is another commonly used protocol on the Internet. However, UDP is never used to send important data such as webpages, database information, etc; UDP is commonly used for streaming audio and video. Streaming media such as Windows Media audio files (.WMA), Real Player (.RM), and others use UDP because it offers speed! The reason UDP is faster than TCP is because there is no form of flow control or error correction. The data sent over the Internet is affected by collisions, and errors will be present. Remember that UDP is only concerned with speed. This is the main reason why streaming media is not high quality.
1) TCP is connection-oriented and reliable, whereas UDP is connectionless and unreliable.
2) TCP needs more processing at the network interface level, whereas UDP does not.
3) TCP uses a 3-way handshake, congestion control, flow control and other mechanisms to ensure reliable transmission.
4) UDP is mostly used in cases where packet delay is a more serious concern than packet loss.
Think of TCP as a dedicated scheduled UPS/FedEx pickup/dropoff of packages between two locations, while UDP is the equivalent of throwing a postcard in a mailbox.
UPS/FedEx will do their damndest to make sure that the package you mail off gets there, and to get it there on time. With the postcard, you're lucky if it arrives at all, and it may arrive out of order or late (how many times have you gotten a postcard from someone AFTER they've gotten home from their vacation?).
TCP is as close to a guaranteed delivery protocol as you can get, while UDP is just "best effort".
Reasons UDP is used for DNS and DHCP:
DNS - TCP requires more resources from the server (which listens for connections) than it does from the client. In particular, when a TCP connection is closed, the server is required to remember the connection's details (holding them in memory) for a couple of minutes, in a state known as TIME_WAIT. This is a feature which defends against erroneously repeated packets from a preceding connection being interpreted as part of a current connection. Maintaining this state uses up kernel memory on the server. DNS requests are small and arrive frequently from many different clients. This usage pattern exacerbates the load on the server compared with the clients. It was believed that using UDP, which has no connections and no state to maintain on either client or server, would ameliorate this problem.
DHCP - DHCP is an extension of BOOTP. BOOTP is a protocol which client computers use to get configuration information from a server while the client is booting. In order to locate the server, a broadcast is sent asking for BOOTP (or DHCP) servers. Broadcasts can only be sent via a connectionless protocol, such as UDP. Therefore, BOOTP required at least one UDP packet, for the server-locating broadcast. Furthermore, because BOOTP is running while the client... boots, and this is a time period when the client may not have its entire TCP/IP stack loaded and running, UDP may be the only protocol the client is ready to handle at that time. Finally, some DHCP/BOOTP clients have only UDP on board. For example, some IP thermostats only implement UDP. The reason is that they are built with such tiny processors and so little memory that they are unable to perform TCP, yet they still need to get an IP address when they boot.
As others have mentioned, UDP is also useful for streaming media, especially audio. Conversations sound better under network lag if you simply drop the delayed packets. You can do that with UDP, but with TCP all you get during lag is a pause, followed by audio that will always be delayed by as much as it has already paused. For two-way phone-style conversations, this is unacceptable.
One of the differences, in short:
UDP: send the message and don't look back to see whether it reached the destination; a connectionless protocol.
TCP: send the message and guarantee that it reaches the destination; a connection-oriented protocol.
TCP establishes a connection before the actual data transmission takes place; UDP does not. In this way, UDP can provide faster delivery. Applications like DNS and time-server access therefore use UDP.
Unlike UDP, TCP uses congestion control: it responds to the network load and slows down when congestion is imminent. So applications like multimedia, which prefer constant throughput, might go for UDP.
Besides, UDP is unreliable; it doesn't react to packet loss. So loss-tolerant applications like multimedia transmission prefer UDP. However, TCP is a reliable protocol, so applications that require reliability, such as web transfers, email and file downloads, prefer TCP.
Also, on today's internet UDP is not as welcome as TCP because of middleboxes. Some applications like Skype fall back to TCP when the UDP connection is assumed to be blocked.
I ran into this thread, so let me try to express it this way.
TCP
3-way handshake
Bob: Hey Amy, I'd like to tell you a secret
Amy: OK, go ahead, I'm ready
Bob: OK
Communication
Bob: 'I', this is the first letter
Amy: First letter received, please send me the second letter
Bob: ' ', this is the second letter
Amy: Second letter received, please send me the third letter
Bob: 'L', this is the third letter
After a while (no acknowledgement from Amy arrives, so Bob retransmits)
Bob: 'L', this is the third letter
Amy: Third letter received, please send me the fourth letter
Bob: 'O', this is the fourth letter
Amy: ...
......
4-way handshake
Bob: My secret is exposed, now, you know my heart.
Amy: OK. I have nothing to say.
Bob: OK.
UDP
Bob: I LOVE U
Amy received: OVI L E
TCP is more reliable than UDP and even guarantees message order; that is no doubt why UDP is more lightweight and efficient.
The Law of Leaky Abstractions
by Joel Spolsky
http://www.joelonsoftware.com/articles/LeakyAbstractions.html
Short and simple differences between the TCP and UDP protocols:
1) TCP - Transmission Control Protocol; UDP - User Datagram Protocol.
2) TCP is a reliable protocol, whereas UDP is an unreliable protocol.
3) TCP is stream-oriented, whereas UDP is a message-oriented protocol.
4) TCP is slower than UDP.
This sentence is a UDP joke, but I'm not sure that you'll get it. The below conversation is a TCP/IP joke:
A: Do you want to hear a TCP/IP joke?
B: Yes, I want to hear a TCP/IP joke.
A: Ok, are you ready to hear a TCP/IP joke?
B: Yes, I'm ready to hear a TCP/IP joke.
A: Well, here is the TCP/IP joke.
A: Did you receive a TCP/IP joke?
B: Yes, I **did** receive a TCP/IP joke.
TCP and UDP are transport-layer protocols, Layer 4 in the OSI (Open Systems Interconnection) model. The main differences, along with their pros and cons, are as follows.
TCP
PROS:
Acknowledgment
Guaranteed Delivery
Connection based
Ordered packets
Congestion control
CONS:
Larger packets
Uses more bandwidth
Slower
Stateful
Uses more memory
UDP
PROS:
Packets are smaller
Uses less bandwidth
Faster
Stateless
CONS:
No acknowledgment
No guaranteed delivery
Connectionless
No congestion control
No packet ordering
TLDR;
TCP - stream-oriented, requires a connection, reliable, slow
UDP - message-oriented, connectionless, unreliable, fast
Before we start, remember that all the disadvantages of something are a continuation of its advantages. There is only the right tool for a job, no panacea. TCP and UDP have coexisted for decades, and for a reason.
TCP
It was designed to be extremely reliable and it does its job very well. It's so complex because it accomplishes a hard task: providing a reliable transport over the unreliable IP protocol.
Since all TCP's complex logic is encapsulated into the network stack, you are free from doing lots of laborious, error-prone low-level stuff in the application layer.
When you send data over TCP, you write a stream of bytes to the socket on the sender side, where it gets broken into packets, passed down the stack and sent over the wire. On the receiver side, the packets get reassembled into a continuous stream of bytes.
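One practical consequence of this stream abstraction is that TCP does not preserve your message boundaries: a single recv() may return fewer bytes than you asked for, so receivers typically loop. A small sketch for a blocking POSIX socket (recv_exact is just an illustrative helper name):

    #include <stddef.h>
    #include <sys/socket.h>

    /* TCP preserves bytes and their order, not message boundaries, so a
     * receiver loops until it has gathered the number of bytes it expects.
     * Returns 0 on success, -1 on error or if the peer closed early. */
    static int recv_exact(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0)        /* error or orderly shutdown */
                return -1;
            p   += n;
            len -= (size_t)n;
        }
        return 0;
    }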
Maintaining this nice abstraction has a cost in terms of complexity and performance. If the first packet of the byte stream is lost, the receiver will delay processing of subsequent packets even if they have already arrived (the so-called "head-of-line blocking").
In addition, in order to be reliable, TCP implements the following:
TCP requires an established connection, which takes a 3-message exchange (the "infamous" 3-way handshake: SYN, SYN-ACK, ACK) before any data can flow
TCP has a feature called "slow start", whereby it gradually ramps up the transmission rate after establishing a connection, to find out how fast the network and the receiver can keep up
Every sent packet has to be acknowledged or else a sender will stop sending more data
And on and on and on...
All this is exacerbated on slow, unreliable wireless networks, because TCP was designed for wired networks where delays are predictable and packet loss is not so common. In addition, as many people have already mentioned, for some things TCP just doesn't work at all (DHCP). However, where relevant, TCP still does its work exceptionally well.
Using a mail analogy a TCP session is similar to telling a story to your secretary who breaks it into mails and sends over a crappy mail service to a publisher. On the other side another secretary assembles mails into a single piece of text. Some mails get lost, some get corrupted, so a very complex procedure is required for reliable delivery and your 10-page story can take a long time to reach your publisher.
UDP
UDP, on the other hand, is message-oriented, so a sender writes a message (a datagram) to the socket and it gets transmitted to the receiver as-is, without any splitting/reassembling in the transport layer.
Compared to TCP, its specification is very straightforward. Essentially, all it does for you is add a checksum to the datagram so a receiver can detect corruption. Everything else must be implemented by you, the software developer. Now read the voluminous TCP spec and try to imagine re-implementing even a small subset of it.
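To get a feel for what "everything else" means, even the very first step of building reliability yourself is to prepend your own header to every datagram and keep your own state. The structure below is purely illustrative (the field names are invented); timers, retransmission, reordering buffers, flow control and congestion control would still all be on you:

    #include <stdint.h>

    /* First baby step of "reliability by hand" over UDP: a home-grown
     * header prepended to every datagram. Everything that interprets these
     * fields (acks, retransmission, reordering) is yours to write. */
    struct my_dgram_hdr {
        uint32_t seq;        /* detect loss and reordering        */
        uint32_t ack;        /* tell the peer what you have seen  */
        uint16_t payload_len;
        uint16_t flags;
    };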
Some people have gone this way and achieved very decent results, to the point that HTTP/3 uses QUIC, a protocol built on top of UDP. However, this is more of an exception. Common applications of UDP are audio/video streaming and conferencing applications like Skype, Zoom or Google Hangouts, where losing packets matters less than the delay TCP would introduce.
Simple Explanation by Analogy
TCP is like this.
Imagine you have a pen-pal on Mars (we communicated with written letters back in the good ol' days before the internet).
You need to send your pen pal The Seven Habits of Highly Effective People. So you decide to send it in seven separate letters:
Letter 1 - Be proactive
Letter 2 - Begin with the end in mind...
etc.
Letter 7 - Sharpen the Saw
Requirements:
You want to make sure that your pen pal receives all your letters, in order, and that they arrive intact. If your pen pal receives letter 7 before letter 1, that's no good. If your pen pal receives all letters except letter 3, that's also no good.
Here's how we ensure that our requirements are met:
Confirmation Letter: So your pen pal sends a confirmation letter to say "I have received letter 1". That way you know that your pen pal has received it. If a letter does not arrive, or arrives out of order, then you have to stop, and go back and re-send that letter, and all subsequent letters.
Flow Control: Around the time of Xmas you know that your pen pal will be receiving a lot of mail, so you slow down because you don't want to overwhelm your pen pal. (Your pen pal sends you constant updates about the number of unread messages in their mailbox; if your pen pal says the inbox is about to explode because it is so full, then you slow down sending your letters, because your pen pal won't be able to read them.)
Perfect arrival: Sometimes, while your letter is in the mail, it can get torn, or a snail can eat half of it. How do you know that the whole letter has arrived in perfect condition? Your pen pal gives you a mechanism by which you can check whether they've got the full letter and that it was exactly the letter you sent (e.g. via a word count), a basic analogy for a checksum.