Zigbee AF frames vs ZCL commands

I'm looking through the Zigbee spec and the ZCL spec, and the two seem incompatible in that I cannot figure out how AF frames and ZCL commands intersect. Is the ZCL an alternative to using 'standard' AF frames, are they contained within AF frames, or is there some aspect of Zigbee that I am fundamentally missing?

I assume that by AF frame you mean APS frame. The ZigBee spec defines the lower-level layers (the APS and NWK layers) as well as the ZDO protocol, which is carried within APS packets. Like ZDO, ZCL frames are carried as the payload of APS frames.
I drew the image below to illustrate the layers below the ZCL - ZDO would sit at the same layer as the ZCL, although its packet structure is a little different. Of course, below all of this sits the 802.15.4 frame.
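To make the nesting concrete, here is a minimal sketch in Python. The field layouts are deliberately simplified (manufacturer-specific ZCL fields and several APS header variants are omitted, and the frame-control values are illustrative); see the ZigBee and ZCL specifications for the authoritative formats. The point is simply that the ZCL command is plain bytes sitting in the APS payload:

```python
import struct

def build_zcl_frame(seq, command_id, payload=b""):
    """Simplified ZCL frame: frame control, sequence number, command ID, payload.
    (Manufacturer-specific fields omitted for brevity.)"""
    frame_control = 0x01  # cluster-specific command (illustrative value)
    return struct.pack("<BBB", frame_control, seq, command_id) + payload

def build_aps_frame(dst_ep, cluster_id, profile_id, src_ep, counter, payload):
    """Simplified APS data frame: header fields followed by the payload.
    The payload here is the entire ZCL frame."""
    frame_control = 0x00  # unicast data frame (illustrative value)
    header = struct.pack("<BBHHBB", frame_control, dst_ep,
                         cluster_id, profile_id, src_ep, counter)
    return header + payload

# A ZCL "On" command (0x01) for the On/Off cluster (0x0006), carried in APS:
zcl = build_zcl_frame(seq=0x42, command_id=0x01)
aps = build_aps_frame(dst_ep=1, cluster_id=0x0006, profile_id=0x0104,
                      src_ep=1, counter=7, payload=zcl)
assert aps.endswith(zcl)  # the ZCL frame is simply the tail of the APS frame
```

The same pattern repeats downward: the APS frame becomes the payload of an NWK frame, which in turn becomes the payload of an 802.15.4 frame.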


Is there any data packet format in standard Bluetooth characteristics?

Is there any data packet format for standard Bluetooth characteristics? I used blood pressure meters from Omron and A&D Medical; the two expose the same standard characteristics, but the data positions are not equal, and I could not find good documentation on the internet, even from bluetooth.org. So is there any particular data-position format for the standard BLE characteristics?
In the XML document for blood pressure, I found that it states whether data is available, or what kind of data is available, but not the positions.
Take a look at the specifications provided by the Bluetooth SIG. The document for the Blood Pressure Service (BLS v1.1) clearly describes the service and its characteristics.
Some of the characteristics are optional, or depend on another characteristic being present; you can find this information at the top of chapter 3.
Some characteristics also have optional fields, but this, too, is well documented.
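Those optional fields are exactly why the byte positions differ between devices. The Blood Pressure Measurement characteristic begins with a flags byte, and the bits of that byte tell you which optional fields follow, and therefore where the remaining data sits. Here is a sketch of that logic (flag bits and field sizes as I read them from the BLS spec; the mandatory systolic/diastolic/MAP values are 2-byte IEEE-11073 SFLOATs, whose decoding is omitted here):

```python
def field_offsets(flags):
    """Compute byte offsets of the fields in a Blood Pressure Measurement
    value from its flags byte. Sketch only, not production code."""
    offsets = {"unit": "kPa" if flags & 0x01 else "mmHg"}
    pos = 1                                 # the flags byte itself
    offsets["systolic"] = pos;       pos += 2   # SFLOAT
    offsets["diastolic"] = pos;      pos += 2   # SFLOAT
    offsets["mean_arterial"] = pos;  pos += 2   # SFLOAT
    if flags & 0x02:                        # Time Stamp present
        offsets["timestamp"] = pos;  pos += 7
    if flags & 0x04:                        # Pulse Rate present
        offsets["pulse_rate"] = pos; pos += 2
    if flags & 0x08:                        # User ID present
        offsets["user_id"] = pos;    pos += 1
    if flags & 0x10:                        # Measurement Status present
        offsets["status"] = pos;     pos += 2
    return offsets, pos                     # pos = expected total length

# Two devices that both send pulse rate, but only one sends a timestamp,
# put the pulse rate at different offsets (7 vs 14):
print(field_offsets(0x04))
print(field_offsets(0x06))
```

So two conformant meters can legitimately place the same field at different positions; a parser must branch on the flags byte rather than hard-code offsets.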

How does a computer know what data to reassemble?

When computer X sends data through a network to computer Y, the data goes down through the OSI layers. That part I understand. But once the data is put on the medium as electric signals, how does computer Y know what to reassemble, given that the headers and trailers of the data model generated by the OSI stack no longer exist once the data is on the electric medium at layer 1?
The physical layer is just 1's and 0's, as you say - the trick is that there is a pattern that tells the receiver where a packet starts. This is usually referred to as 'framing'.
Once the receiver knows that, it simply reads in as many bits as it needs for the layer 2 header, and it then has that, and so on.
The headers are clear in a typical OSI or networking diagrams, e.g. (https://www.ciscopress.com/articles/article.asp?p=2738463):
So the way the first two layers work on the receiver is:
layer 1 just recognises whether the signal is a one or a zero and creates the stream of ones and zeros.
layer 2 reads this stream, and when it recognises the start pattern it knows that the following bits are the header and so on, and hence it can identify the frames.
You can see examples of start and stop patterns online e.g. (http://sinauonline.50webs.com/Cisco/Cisco%20Exploration%20Sem1Chap7.html):
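The idea can be shown in a few lines. This sketch scans a raw bit stream for an HDLC-style 8-bit start flag and treats whatever lies between two flags as a candidate frame (real links also use bit-stuffing so the flag pattern cannot occur inside the payload, which is omitted here):

```python
START_FLAG = "01111110"  # HDLC-style frame delimiter

def find_frames(bitstream):
    """Scan a bit string for start-flag patterns; everything between two
    flags is handed up to layer 2 as a candidate frame."""
    positions = []
    i = bitstream.find(START_FLAG)
    while i != -1:
        positions.append(i)
        i = bitstream.find(START_FLAG, i + len(START_FLAG))
    # a frame is whatever sits between consecutive flags
    return [bitstream[a + len(START_FLAG):b]
            for a, b in zip(positions, positions[1:])]

# Line noise, then a flagged frame, then more noise:
stream = "10100" + "01111110" + "1100101010101010" + "01111110" + "001"
print(find_frames(stream))  # ['1100101010101010']
```

This is exactly the receiver's layer 1/2 handoff in miniature: layer 1 produces the undifferentiated bit stream, and the framing pattern is what lets layer 2 impose structure on it.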

Instead of Carrier Aggregation, why don't carriers use the new frequency bandwidth as separate channel to connect users directly?

Carrier aggregation combines the existing spectrum, say if the carrier had previously 20MHz in the area, with the newly acquired spectrum of 20MHz, to give a wider pipe or bandwidth for data flow between the mobile device & the base station tower.
My question is, why don't they just operate the new bandwidth as a separate pipe? So that there would be two pipes of 20MHz each, instead of one aggregated pipe of 40MHz?
Benefits:
Carriers won't have to deal with the complexity of Carrier Aggregation technology, as the two bands are totally separate (2300MHz & 1800MHz). End users can be divided over the two frequencies. Theoretically this should halve the load on each channel, doubling the speed for connected users.
Many existing 4G devices use a single antenna for 4G operation. LTE-A needs MIMO support on both the mobile and the tower: essentially two antennas on each side to operate on two different frequencies, which puts extra strain on the mobile device. Existing hardware cannot benefit from LTE-A, so speeds will remain the same after the upgrade. In fact, they may decrease slightly after LTE-A is deployed, since newer LTE-A devices will share the load across both frequencies while existing LTE users can use only one.
For those new to this, this simple image explains how Carrier Aggregation works: https://www.techtalkthai.com/wp-content/uploads/2014/12/qualcomm_carrier_aggregation.jpg
1) Assuming that the operator already has 2 bands, it is really not complex to enable and configure carrier aggregation. It is likely that they already have the ability as part of the latest LTE software upgrades and it is just a matter of configuring it and possibly paying for a license to use it.
The scenario you describe of using two separate pipes instead of a single CA pipe is not feasible (or may not be possible?). When a device establishes a connection in an LTE network, a default bearer is configured which would not be able to simultaneously use two radio connections without CA or other similar features. Multiple bearers can certainly be established simultaneously, however they serve different purposes (e.g. voice vs data). That said, really CA is using two different pipes, but they act as a single (logical) bearer. Another advantage of CA is that the control plane signaling takes place on only one of the component carriers and therefore the other component carriers can be fully dedicated to user plane traffic.
2) I'll clear a few things up:
MIMO has nothing to do with Carrier Aggregation.
Most 4G devices today transmit on a single antenna and receive on two. (They most likely have at least 2 TX and 2 RX antennas physically, and many have 4 of each, although 4x4 MIMO has not yet been deployed by most operators.)
Existing devices are already taking advantage of LTE-A features and some operators are currently rolling out 3-carrier CA, 4x4 MIMO as well as 256QAM.
Here is a recent news article which discusses LTE-A features which have already been implemented: https://newsroom.t-mobile.com/news-and-blogs/lte-advanced.htm
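Rough numbers make the trade-off clearer. Assuming, purely for illustration, that one 20 MHz LTE carrier peaks at about 150 Mbps: with many active users, two separate carriers and one aggregated 40 MHz pipe give the same per-user share, but only CA lets a single busy user burst to the combined peak.

```python
PEAK_PER_CARRIER_MBPS = 150  # illustrative peak for one 20 MHz LTE carrier

def peak_per_user(n_users, carriers, aggregated):
    """Best-case rate one user can see. With separate carriers each user is
    parked on one carrier; with CA a lone user can use all carriers at once."""
    if aggregated:
        return carriers * PEAK_PER_CARRIER_MBPS / max(n_users, 1)
    # separate carriers: users split evenly, each limited to one carrier
    users_per_carrier = max(-(-n_users // carriers), 1)  # ceiling division
    return PEAK_PER_CARRIER_MBPS / users_per_carrier

# Busy cell: both schemes give the same per-user share.
print(peak_per_user(10, 2, aggregated=True))   # 30.0
print(peak_per_user(10, 2, aggregated=False))  # 30.0
# Idle cell with one user: only CA reaches the combined peak.
print(peak_per_user(1, 2, aggregated=True))    # 300.0
print(peak_per_user(1, 2, aggregated=False))   # 150.0
```

This toy model ignores scheduling, interference, and control-plane overhead, but it captures why operators bother with CA even though the total spectrum is unchanged.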

Why does 802.1Q not encapsulate the original frame?

I am studying VLANs. After hours of searching, I know that 802.1Q doesn't encapsulate the original frame; instead, it adds a 32-bit field between the source MAC address and the "EtherType" field of the original frame. But I can't figure out why. Can somebody explain why 802.1Q doesn't encapsulate the original frame? Thanks a lot.
The predecessor to 802.1q was Cisco's ISL. ISL fully encapsulated the frame, which meant that any device receiving an ISL frame had to understand the ISL tag, or else the whole frame was considered malformed.
In 802.1q, the first 12 bytes of the frame, whether it is tagged or not, are always the same.
To illustrate exactly what the tag modifies, here is the Packet Capture of a frame without the tag, then the same frame with the tag:
The bracketed portion in orange is all from the original frame. The bracketed portion in green is what the 802.1q tag adds to the frame.
Notice that in both cases, the first 12 bytes are the Destination MAC address and the Source MAC address.
Moreover, in both cases, the next 2 bytes of the frame are an "EtherType" field, which indicates the next protocol encapsulated in the frame.
This means that whether a transit device understands 802.1q tags or not, the processing for that frame does not change. Which means 802.1q tags will still "work" through a device that...
is older, and doesn't support or understand 802.1q tags
is not configured to read/look for a particular tag
is built to only inspect the first 12 bytes of any frame so it can make a line-speed decision on how to forward the packet, which is the strategy in Cut-Through switching.
Overall, it allows the implementation and standardization of VLANs and VLAN Tagging without having to patch every device ever created that does Layer 2 processing to teach them how to interpret a "fully encapsulated VLAN tagging strategy" (like ISL). Instead, the devices that need to support VLANs can be patched to understand 802.1q, and all the other devices in transit can simply continue to operate without any fuss.
Granted, these days it is pretty rare to come across a host or switch that doesn't understand VLANs, but consider it from the perspective of when the concepts of VLANs and tagging were first invented.
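The mechanics described above are simple enough to sketch in a few lines. The 802.1Q tag is 4 bytes - the TPID 0x8100 followed by the TCI (3-bit priority, 1-bit DEI, 12-bit VLAN ID) - inserted after the first 12 bytes of the frame (this sketch ignores FCS recomputation, which real hardware performs):

```python
import struct

def add_dot1q_tag(frame, vlan_id, pcp=0):
    """Insert an 802.1Q tag after the destination and source MAC addresses.
    Tag = TPID 0x8100 + TCI (3-bit PCP, 1-bit DEI, 12-bit VLAN ID)."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

# Untagged frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), payload.
untagged = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = add_dot1q_tag(untagged, vlan_id=100)

assert tagged[:12] == untagged[:12]   # first 12 bytes unchanged
assert tagged[12:14] == b"\x81\x00"   # TPID marks the tag for aware devices
assert tagged[16:] == untagged[12:]   # rest of the original frame intact
```

Note that to a device which only looks at the first 12 bytes, the tagged and untagged frames are indistinguishable, which is exactly the backward-compatibility argument made above.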

Difference between PACKETS and FRAMES

Two words are commonly used in the networking world: packets and frames.
Can anyone please explain the difference between these two words in detail?
It might sound silly, but is the difference as below?
A packet is the PDU(Protocol Data Unit) at layer 3 (network layer - ip packet) of the networking OSI model.
A frame is the PDU of layer 2 (data link) of the OSI model.
Packets and Frames are the names given to Protocol data units (PDUs) at different network layers
Segments/Datagrams are units of data in the Transport Layer.
In the case of the internet, the term Segment typically refers to TCP, while Datagram typically refers to UDP. However Datagram can also be used in a more general sense and refer to other layers (link):
Datagram
A self-contained, independent entity of data carrying sufficient information to be routed from the source to the destination computer without reliance on earlier exchanges between this source and destination computer and the transporting network.
Packets are units of data in the Network Layer (IP in case of the Internet)
Frames are units of data in the Link Layer (e.g. WiFi, Bluetooth, Ethernet, etc.).
A packet is a general term for a formatted unit of data carried by a network. It is not necessarily connected to a specific OSI model layer.
For example, in the Ethernet protocol on the physical layer (layer 1), the unit of data is called an "Ethernet packet", which has an Ethernet frame (layer 2) as its payload. But the unit of data of the Network layer (layer 3) is also called a "packet".
A frame is also a unit of data transmission. In computer networking the term is only used in the context of the Data link layer (layer 2).
Another semantic difference between packet and frame is that a frame envelops your payload with a header and a trailer, just like a painting in a frame, while a packet usually only has a header.
But in the end they mean roughly the same thing and the distinction is used to avoid confusion and repetition when talking about the different layers.
Actually, five words are commonly used when we talk about the layers of reference models (or protocol stacks): data, segment, packet, frame and bit. The term PDU (Protocol Data Unit) is a generic term for the data unit at any layer of the OSI model; it refers to something different at each layer, but we can still use it as a common term.
When we come to your question, we can call all of them by using the general term PDU, but if you want to call them specifically at a given layer:
Data: PDU of Application, Presentation and Session Layers
Segment: PDU of Transport Layer
Packet: PDU of network Layer
Frame: PDU of data-link Layer
Bit: PDU of physical Layer
Here is a diagram, since a picture is worth a thousand words:
Consider TCP over ATM. ATM uses 53-byte cells carrying 48 bytes of payload, but clearly TCP packets can be bigger than that. A frame is the chunk of data sent as a unit over the data link (Ethernet, ATM). A packet is the chunk of data sent as a unit by the layer above it (IP). If the data link is built with IP in mind, as Ethernet and WiFi are, these will be the same size and packets will correspond to frames.
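The size mismatch in the ATM example can be shown directly: an IP packet larger than one cell payload gets segmented into as many cells as needed. (A sketch only: real ATM uses an adaptation layer such as AAL5, which adds padding and a trailer, omitted here.)

```python
CELL_PAYLOAD = 48  # bytes of payload per ATM cell (the cell itself is 53 bytes)

def segment(packet):
    """Split one network-layer packet into link-layer payload chunks.
    Real ATM (AAL5) adds padding and a trailer; this sketch just chunks."""
    return [packet[i:i + CELL_PAYLOAD]
            for i in range(0, len(packet), CELL_PAYLOAD)]

ip_packet = bytes(1500)      # a typical full-size IP packet
cells = segment(ip_packet)
print(len(cells))            # 32 cells: 31 full plus 1 partial
print(len(cells[-1]))        # 1500 - 31*48 = 12 bytes in the last cell
```

One packet, thirty-two link-layer units: the two terms clearly name different things on such a link, even though on Ethernet they usually line up one-to-one.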
Packet
A packet is the unit of data that is routed between an origin and a destination on the Internet or any other packet-switched network. When any file (e-mail message, HTML file, Graphics Interchange Format file, Uniform Resource Locator request, and so forth) is sent from one place to another on the Internet, the Transmission Control Protocol (TCP) layer of TCP/IP divides the file into "chunks" of an efficient size for routing. Each of these packets is separately numbered and includes the Internet address of the destination. The individual packets for a given file may travel different routes through the Internet. When they have all arrived, they are reassembled into the original file (by the TCP layer at the receiving end).
Frame
1) In telecommunications, a frame is data that is transmitted between network points as a unit complete with addressing and necessary protocol control information. A frame is usually transmitted serial bit by bit and contains a header field and a trailer field that "frame" the data. (Some control frames contain no data.)
2) In time-division multiplexing (TDM), a frame is a complete cycle of events within the time division period.
3) In film and video recording and playback, a frame is a single image in a sequence of images that are recorded and played back.
4) In computer video display technology, a frame is the image that is sent to the display image rendering devices. It is continuously updated or refreshed from a frame buffer, a highly accessible part of video RAM.
5) In artificial intelligence (AI) applications, a frame is a set of data with information about a particular object, process, or image. An example is the iris-print visual recognition system used to identify users of certain bank automated teller machines. This system compares the frame of data for a potential user with the frames in its database of authorized users.
