Is there a "header label frame" in the structure of IEEE 802.15.4/ZigBee packets?
I understand the basic packet structure from this documentation.
So is there such a "header label frame" or "frame label header" in IEEE 802.15.4/ZigBee?
No. ZigBee is designed for short-range communication. It is not based upon IP (Internet Protocol), which was designed for global communication.
Related: my question is the inverse of this one asked on Stack Overflow:
InverseQuestion
...but I don't think I get an unequivocal answer to what I'm seeking from that either. My question was posed on a mid-term I just took and was worth a lot of points. I argued that it was not a legitimate question, because the UDP header DOES have a length field (covering the header plus data), as shown in the screenshot I'm embedding. I could list dozens of references that have a similar diagram and explanation. The instructor simply marked it wrong with no explanation, and we have been going back and forth since then; I can't get an answer as to why every UDP header diagram on the internet shows a length field if there is no length. If it's true that there is no header length, can someone help me understand why, in plain English? Am I misinterpreting all these similar diagrams? Thanks.
UDP Diagram
https://www.computernetworkingnotes.com/ccna-study-guide/segmentation-explained-with-tcp-and-udp-header.html
https://www.lifewire.com/tcp-headers-and-udp-headers-explained-817970
"Why does the TCP header have a header length field while the UDP header does not?" might be a valid question.
The UDP header contains the header + data length.
The TCP header contains the header length, measured in 32-bit words.
The IP header contains the total length of the IP packet.
Important:
The UDP header is a fixed 8 bytes, so there is no point in spending header space on a constant.
The TCP header can vary in size with options, so its length has to be carried explicitly.
If you're looking for the reason why UDP's length field includes the data while TCP's doesn't, you can check the drafts of each RFC specification. Nevertheless, there might not be any recorded reason; don't forget these protocols were defined decades ago.
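To make those length semantics concrete, here is a minimal C sketch (an illustration under the assumptions above, not code from any specification) that extracts the sizes from raw header bytes:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* ntohs */

/* UDP: the length field (bytes 4-5 of the header) counts header + data,
   and the header is a fixed 8 octets, so the payload size follows. */
uint16_t udp_payload_len(const uint8_t *udp)
{
    uint16_t raw;
    memcpy(&raw, udp + 4, sizeof raw);
    return ntohs(raw) - 8;  /* 8 is the minimum legal value of the field */
}

/* TCP: the Data Offset field (high nibble of byte 12) counts 32-bit
   words, so the header size in bytes is that value times 4. */
uint8_t tcp_header_len(const uint8_t *tcp)
{
    return (tcp[12] >> 4) * 4;  /* 20..60 bytes, depending on options */
}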
In a TCP segment with the URG flag set there might be normal data as well. How does the receiving host handle the urgent data? How does it acknowledge the urgent data if it is not part of the data stream? Does it acknowledge the rest of it?
I understand that it is not usually used, but if both hosts support the same RFC about the URG flag, how do they handle out-of-band data?
If the urgent data is an abort message, the receiver will drop all other data, but the sender will still want an acknowledgment that the message was received.
A bit of background:
The TCP urgent mechanism permits a point in the data stream to be designated as the end of urgent information. Thus we have the Urgent Pointer, which contains a positive offset from the sequence number in this TCP segment. This field is significant only when the URG control bit is set.
Discrepancies about the Urgent Pointer:
RFC 793 (1981, page 17):
The urgent pointer points to the sequence number of the octet
following the urgent data.
RFC 1011 (1987, page 8):
Page 17 is wrong. The urgent pointer points to the last octet of
urgent data (not to the first octet of non-urgent data).
The same thing in RFC 1122 (1989, page 84):
..the urgent pointer points to the sequence number of the LAST octet
(not LAST+1) in a sequence of urgent data.
The clarifying RFC 6093 (2011, pages 6-7) says:
Considering that as long as both the TCP sender and the TCP receiver
implement the same semantics for the Urgent Pointer there is no
functional difference in having the Urgent Pointer point to "the
sequence number of the octet following the urgent data" vs. "the last
octet of urgent data", and that all known implementations interpret
the semantics of the Urgent Pointer as pointing to "the sequence
number of the octet following the urgent data".
Thus, the update to RFC 793, RFC 1011, and RFC 1122 is:
the urgent pointer points to the sequence number of the octet
following the urgent data.
This matches virtually all existing TCP implementations.
Note: Linux provides the net.ipv4.tcp_stdurg sysctl to override the default behaviour but this sysctl only affects the processing of incoming segments. The Urgent Pointer in outgoing segments will still be set as specified in RFC 793.
About the data handling
You can receive urgent data in two ways (keep in mind that the TCP concept of "urgent data" is mapped to the socket API as "out-of-band data"):
using recv with the MSG_OOB flag set.
(Normally you should first establish ownership of the socket with something like fcntl(sock, F_SETOWN, getpid()); and establish a signal handler for SIGURG.) You will then be notified with the SIGURG signal, and the data will be read separately from the normal data stream; see the sketch after this list.
using recv without the MSG_OOB flag set. Beforehand, you should set the SO_OOBINLINE socket option like this:
int so_oobinline = 1; /* true */
setsockopt(sock, SOL_SOCKET, SO_OOBINLINE, &so_oobinline, sizeof so_oobinline);
The data remain "in-line", and you can determine whether you have reached the urgent mark with the help of ioctl:
int flag; /* True when at mark */
ioctl(sock, SIOCATMARK, &flag);
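For completeness, here is a minimal sketch of the first approach (MSG_OOB plus SIGURG). It assumes sock is an already-connected TCP socket and elides the normal receive loop:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>

static volatile sig_atomic_t got_urgent = 0;

/* Delivered when urgent data arrives on a socket this process owns. */
static void on_sigurg(int signo)
{
    (void)signo;
    got_urgent = 1;
}

void read_oob(int sock)
{
    char oob;

    signal(SIGURG, on_sigurg);
    fcntl(sock, F_SETOWN, getpid());  /* route SIGURG to this process */

    /* ... normal receive loop elided ... */
    if (got_urgent) {
        got_urgent = 0;
        /* MSG_OOB pulls the urgent byte outside the normal stream. */
        if (recv(sock, &oob, 1, MSG_OOB) == 1)
            printf("urgent byte: 0x%02x\n", (unsigned char)oob);
    }
}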
Besides, it is recommended that new applications not use the urgent-data mechanism at all and, if they do use it, that they receive the data in-line, as mentioned above.
From RFC 1122:
The TCP urgent mechanism is NOT a mechanism for sending "out-of-band"
data: the so-called "urgent data" should be delivered "in-line" to the
TCP user.
Also from RFC 793:
TCP does not attempt to define what the user specifically does upon
being notified of pending urgent data
So you can handle it as you want; it is an application-level issue.
Accordingly, the answer to your question about acknowledgements when all other data was dropped is "You can implement it in your application".
As for the TCP ACK, I found nothing special about it in the case of urgent data.
About the length of "Urgent Data"
In practice, almost all implementations can provide only one byte of "out-of-band" data.
RFC 6093 says:
If successive indications of "urgent data" are received before the
application reads the pending "out-of-band" byte, that pending byte
will be discarded (i.e., overwritten by the new byte of "urgent
data").
So TCP urgent mode and its Urgent Pointer cannot mark the boundaries of the urgent data in practice.
Rumor has it that there are some implementations that queue each of the received urgent bytes. Some of them have been known to fail to enforce any limits on the amount of "urgent data" that they queue; thus, they become vulnerable to trivial resource-exhaustion attacks.
P. S. All of the above probably covers a little more than was asked, but that's only to make it clear for people unfamiliar with this issue.
Some more useful links:
TCP Urgent Pointer, buffer management, and the "Send" call
Difference between push and urgent flags in TCP
Understanding the urgent pointer
Two words are commonly used in the networking world: packets and frames.
Can anyone please explain the difference between these two words in detail?
It might sound silly, but is my understanding below correct?
A packet is the PDU (Protocol Data Unit) at layer 3 (the network layer; an IP packet) of the OSI networking model.
A frame is the PDU of layer 2 (data link) of the OSI model.
Packets and frames are the names given to protocol data units (PDUs) at different network layers:
Segments/Datagrams are units of data in the Transport Layer.
In the case of the internet, the term Segment typically refers to TCP, while Datagram typically refers to UDP. However, Datagram can also be used in a more general sense and refer to other layers (link):
Datagram
A self-contained, independent entity of data carrying sufficient information to be routed from the source to the destination computer without reliance on earlier exchanges between this source and destination computer and the transporting network.
Packets are units of data in the Network Layer (IP in case of the Internet)
Frames are units of data in the Link Layer (e.g. Wi-Fi, Bluetooth, Ethernet, etc.).
A packet is a general term for a formatted unit of data carried by a network. It is not necessarily connected to a specific OSI model layer.
For example, in the Ethernet protocol on the physical layer (layer 1), the unit of data is called an "Ethernet packet", which has an Ethernet frame (layer 2) as its payload. But the unit of data of the Network layer (layer 3) is also called a "packet".
A frame is also a unit of data transmission. In computer networking the term is only used in the context of the Data link layer (layer 2).
Another semantic difference between packet and frame is that a frame envelops your payload with a header and a trailer, just like a painting in a frame, while a packet usually only has a header.
But in the end they mean roughly the same thing and the distinction is used to avoid confusion and repetition when talking about the different layers.
Actually, there are five words commonly used when we talk about the layers of reference models (or protocol stacks): data, segment, packet, frame and bit. The term PDU (Protocol Data Unit) is a generic term used to refer to these units at the different layers of the OSI model, so PDU gives an abstract idea of the data units. The PDU has a different meaning at different layers, but we can still use it as a common term.
Coming to your question, we can call all of them by the general term PDU, but if you want to name them specifically at a given layer:
Data: PDU of the Application, Presentation and Session Layers
Segment: PDU of the Transport Layer
Packet: PDU of the Network Layer
Frame: PDU of the Data Link Layer
Bit: PDU of the Physical Layer
Here is a diagram, since a picture is worth a thousand words: [diagram: the PDU names at each OSI layer, as listed above]
Consider TCP over ATM. ATM uses fixed-size 53-byte cells carrying 48 bytes of payload each, but clearly TCP packets can be bigger than that. A frame (or cell) is the chunk of data sent as a unit over the data link (Ethernet, ATM). A packet is the chunk of data sent as a unit by the layer above it (IP). If the data link is made specifically with IP in mind, as Ethernet and Wi-Fi are, these can be the same size, and packets will correspond to frames.
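As a rough illustration of that relationship (send_cell is a hypothetical stand-in, not a real ATM API, and a real adaptation layer would also add reassembly information), a packet larger than one cell payload is simply carried as a series of fixed-size cells:

#include <stddef.h>
#include <string.h>

#define CELL_PAYLOAD 48  /* ATM cell payload size, in bytes */

/* Hypothetical link-layer transmit for one cell. */
void send_cell(const unsigned char cell[CELL_PAYLOAD]);

/* Split one network-layer packet across as many cells as needed. */
void send_packet_as_cells(const unsigned char *packet, size_t len)
{
    unsigned char cell[CELL_PAYLOAD];

    for (size_t off = 0; off < len; off += CELL_PAYLOAD) {
        size_t chunk = len - off < CELL_PAYLOAD ? len - off : CELL_PAYLOAD;
        memset(cell, 0, sizeof cell);   /* zero-pad the final cell */
        memcpy(cell, packet + off, chunk);
        send_cell(cell);
    }
}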
Packet
A packet is the unit of data that is routed between an origin and a destination on the Internet or any other packet-switched network. When any file (e-mail message, HTML file, Graphics Interchange Format file, Uniform Resource Locator request, and so forth) is sent from one place to another on the Internet, the Transmission Control Protocol (TCP) layer of TCP/IP divides the file into "chunks" of an efficient size for routing. Each of these packets is separately numbered and includes the Internet address of the destination. The individual packets for a given file may travel different routes through the Internet. When they have all arrived, they are reassembled into the original file (by the TCP layer at the receiving end).
Frame
1) In telecommunications, a frame is data that is transmitted between network points as a unit, complete with addressing and necessary protocol control information. A frame is usually transmitted serially, bit by bit, and contains a header field and a trailer field that "frame" the data. (Some control frames contain no data.)
2) In time-division multiplexing (TDM), a frame is a complete cycle of events within the time division period.
3) In film and video recording and playback, a frame is a single image in a sequence of images that are recorded and played back.
4) In computer video display technology, a frame is the image that is sent to the display image rendering devices. It is continuously updated or refreshed from a frame buffer, a highly accessible part of video RAM.
5) In artificial intelligence (AI) applications, a frame is a set of data with information about a particular object, process, or image. An example is the iris-print visual recognition system used to identify users of certain bank automated teller machines. This system compares the frame of data for a potential user with the frames in its database of authorized users.
Why does UDP have the field "UDP Length" twice in its packet? Isn't it redundant? If it is required for some kind of error checking, please provide an example.
Your observation is correct. The length field is redundant, because both the IP header and the UDP header have a length field. My only guess about the reason for this redundancy is that it happened because UDP was designed at a time when it was not yet clear what the IP protocol suite would look like.
All legitimate UDP packets should have a length field matching exactly what can be derived from the length field in the IP header. If you don't do that, you can't know for sure what the receiver is going to do with the packet.
UDP packets with inconsistent length fields are seen in the wild on the Internet. I guess they are probing for buffer overflows, which might happen if one length field is used to allocate memory and the other length field is used when copying data to the allocated buffer.
In the newer UDP Lite protocol, the length field has been repurposed. The length field in the UDP Lite header does not indicate how much data there is in the packet, but rather how much of it has been covered by the checksum. The length of the data in a UDP Lite packet is always computed from the length field in the IP header. This is the only difference between the UDP and UDP Lite header formats.
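A minimal sketch of that consistency check, under simplified header layouts assumed here for illustration (real code would use struct iphdr and struct udphdr from the system headers):

#include <stdint.h>
#include <arpa/inet.h>  /* ntohs */

struct ip_hdr {
    uint8_t  ver_ihl;    /* version (high nibble) + header length in 32-bit words */
    uint8_t  tos;
    uint16_t total_len;  /* IP header + payload, in octets */
    /* ... remaining IP header fields omitted for brevity ... */
};

struct udp_hdr {
    uint16_t src_port;
    uint16_t dst_port;
    uint16_t length;     /* UDP header (8 octets) + data */
    uint16_t checksum;
};

/* Returns 1 if the UDP length field agrees with what the IP header implies. */
int udp_length_consistent(const struct ip_hdr *ip, const struct udp_hdr *udp)
{
    uint16_t ip_hdr_len = (ip->ver_ihl & 0x0f) * 4;
    uint16_t from_ip    = ntohs(ip->total_len) - ip_hdr_len;
    uint16_t from_udp   = ntohs(udp->length);

    return from_udp >= 8 && from_ip == from_udp;
}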
From RFC 768:
Length is the length in octets of this user datagram including
this header and the data. (This means the minimum value of the
length is eight.)
The pseudo header conceptually prefixed to the UDP header contains
the source address, the destination address, the protocol, and
the UDP length. This information gives protection against misrouted
datagrams. This checksum procedure is the same as is used in TCP.
0 7 8 15 16 23 24 31
+--------+--------+--------+--------+
| source address |
+--------+--------+--------+--------+
| destination address |
+--------+--------+--------+--------+
| zero |protocol| UDP length |
+--------+--------+--------+--------+
The REAL answer is that this is a "pseudo header": it is used for calculating the checksum, but it is not actually sent. At least, that is what I conclude from What is the Significance of Pseudo Header used in UDP/TCP.
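To see how the pseudo header enters the calculation, here is a sketch of the standard Internet one's-complement checksum (in the style of RFC 1071); the caller is assumed to lay out the pseudo header, the UDP header, and the data contiguously in buf:

#include <stddef.h>
#include <stdint.h>

uint16_t inet_checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;

    for (size_t i = 0; i + 1 < len; i += 2)
        sum += ((uint32_t)buf[i] << 8) | buf[i + 1];
    if (len & 1)
        sum += (uint32_t)buf[len - 1] << 8;  /* pad the odd final byte */

    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);  /* fold carries back in */

    return (uint16_t)~sum;  /* one's complement of the one's-complement sum */
}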
I'm reading up on network technology, but there's something that's got me scratching my head. I've read that a popular line encoding for sending data across Ethernet, namely Gigabit Ethernet, is 8B/10B.
I've read how the data is packaged up in "frames", which in turn package up "packets" of the data the application needs. Here's where it gets fuzzy: when I write a page of HTML, I set the encoding to Unicode, and I understand that that page is packaged in the packet (formatted using the HTTP protocol, etc.).
If the HTML is in Unicode, but the Ethernet encoding is 8B/10B, how do the two encodings coexist? Is the message part of the packet in Unicode while the rest of the frame is 8B/10B?
Thanks for any help!
They really don't have much to do with each other. Ethernet is a "lower-level" protocol than the HTTP over which your HTML is sent.
The HTML itself is simply data, and Unicode is a way of encoding characters as bits/bytes.
In contrast, Ethernet is a communications protocol for transferring bits/bytes/packets on a link between devices.
See here: http://en.wikipedia.org/wiki/OSI_model
Ethernet in the OSI 7-layer model is basically layer 2, the data link layer. HTTP and your HTML character encoding are in the "data" layers above layer 4 (which is basically TCP). The abstraction at each layer means that each layer only has to worry about its own job. Layers 4 and below are responsible for getting your data from point A to point B; Ethernet is part of that problem. The layers above are for figuring out what to do with that data, and your Unicode encoding is a "what to do with that data" question. 8B/10B sits below even the frame: it is how the physical layer puts bits on the wire, and the receiver decodes it before anything above ever sees the bytes, so your Unicode payload passes through unchanged.
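A conceptual sketch of that layering, with purely hypothetical helper names (wrap and the bracketed "headers" are placeholders, not real protocol code): the UTF-8 bytes are just payload at every step, and 8B/10B would be applied, and removed, on the wire below all of this:

#include <stdio.h>
#include <string.h>

/* Each layer prepends its own header to the payload it is given. */
static size_t wrap(char *out, const char *hdr, const char *payload, size_t len)
{
    size_t hlen = strlen(hdr);
    memcpy(out, hdr, hlen);
    memcpy(out + hlen, payload, len);
    return hlen + len;
}

int main(void)
{
    const char *html = "<p>\xE2\x9C\x93</p>";  /* UTF-8 payload (a check mark) */
    char seg[256], pkt[256], frm[256];
    size_t n;

    n = wrap(seg, "[TCP]", html, strlen(html));  /* segment: layer 4 */
    n = wrap(pkt, "[IP]",  seg,  n);             /* packet:  layer 3 */
    n = wrap(frm, "[ETH]", pkt,  n);             /* frame:   layer 2 */

    fwrite(frm, 1, n, stdout);  /* the UTF-8 bytes ride along untouched */
    putchar('\n');
    return 0;
}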