On bluetooth.org I saw that one BLE characteristic can have multiple fields. I have searched for a while but did not find an answer about the byte order.
For example this characteristic:
https://www.bluetooth.com/specifications/gatt/viewer?attributeXmlFile=org.bluetooth.characteristic.gatt.service_changed.xml
It has two fields. Is "Start of Affected Attribute Handle Range" the higher 16 bits or the lower?
Regards
Maz
GATT fields are always (or at least should always be) little-endian. This is discussed in the Bluetooth Core Spec.
From v4.2 of the spec, Vol 3, Part G (which covers GATT), page 523:
2.4 Profile Fundamentals
...
• Multi-octet fields within the GATT Profile shall be sent least significant octet first (little endian).
Be very careful reading this spec because there are pieces that are in network order (big-endian), but GATT attributes are always supposed to be in little-endian.
(The only reason I say "should always be" is that the one rule of Bluetooth devices is that you will always find some device in the field that breaks the rules... but the spec is clear.)
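To make that concrete for the Service Changed characteristic you linked, here is a minimal Python sketch; the 4-byte value is made up for illustration, and the first field sent is the start handle:

import struct

# Hypothetical 4-byte characteristic value: start handle, then end handle,
# each sent least significant octet first (little-endian).
value = bytes([0x01, 0x00, 0xFF, 0xFF])  # start = 0x0001, end = 0xFFFF

# '<' selects little-endian, 'H' is an unsigned 16-bit integer.
start_handle, end_handle = struct.unpack("<HH", value)
print(f"affected range: 0x{start_handle:04X} .. 0x{end_handle:04X}")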
I am working on making a custom controller for an aquarium light. I was able to figure out how to adjust the light's internal clock, I was able to capture some of the communication, and I found this timecode: 545f0d31574d52565951607631, which translated from hex to ASCII becomes T_ 1WMRVYQ`v1. I know for sure it's the timecode, because it works as expected.
Does anyone know what it is? Is it BLE-specific? Does anyone know how to alter it?
I'm pretty sure the first 4 numbers are not part of the code, but an indicator for the device.
Edit:
It is BLE. I should have been clearer. It does most of the transmission on UUID 1000, with the characteristic UUID being 1001. The device doesn't have a built-in clock that I can see. It turns on and off at the times I specify in the developer's app. After a power failure, it "resets" to midnight. I know that value is the timecode, because when I input it using GATT tools, I can see the light reacts accordingly. I added a photo of it updating.
You hint that this is a Bluetooth Low Energy (BLE) device.
If it is BLE, then the UUID of the characteristic might be in the 16-bit UUID Numbers document. If it is a custom characteristic, then it will not be. Official characteristics use the base UUID 0000xxxx-0000-1000-8000-00805F9B34FB, and only the four missing hex digits (the 16-bit value) are documented.
The specification for how time can be shared over BLE is documented in the GATT Specification Supplement if it is a Bluetooth SIG adopted characteristic.
It might be helpful if you update the question with what this value sets the light's internal clock to.
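For reference, a small sketch of how a 16-bit UUID maps onto that base UUID, using the 1000/1001 values from the question (whether those are SIG-adopted or vendor-specific has to be checked against the 16-bit UUID Numbers document):

BASE_UUID = "0000{:04X}-0000-1000-8000-00805F9B34FB"

for short_uuid in (0x1000, 0x1001):
    # Only the four 'xxxx' hex digits vary for official 16-bit UUIDs.
    print(BASE_UUID.format(short_uuid))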
I'm developing a web front end for a GNU Radio application developed by a colleague.
I have a TCP client connecting to the output of two TCP Sink blocks, and the data encoding is not as I expect it to be.
One TCP Sink is sending complex data and the other is sending float data.
I'm decoding the data at the client by reading each 4-byte chunk as a float32 value. The server and the client are both little-endian systems, but I also tried byte swapping (with the GNU Radio Endian Swap block and also manually at the client), and the data is still not right. It's actually much worse that way, which suggests there is no byte-order mismatch.
When I execute the flow graph in GNU Radio Companion with appropriate GUI elements, the plots look correct. The data values are, as expected, between 0 and 10.
However the values decoded at the client are generally around 0.00xxxxx, and the plot looks like noise rather than showing a simple tone as is seen in GNU Radio. If I manually scale the data by multiplying by 1000 it still looks like noise.
I'll describe the pre-D path in GNU Radio since it's shorter, but I see the same problem on the post-D path, where a WBFM Receive and a Rational Resampler are added, followed by a Throttle block and then a TCP Sink block sending float data.
File Source (Output Type: complex, vector length: 1) =>
Throttle (vector length: 1) =>
Low Pass Filter (FIR Type: Complex->Complex (Decimating)) =>
Throttle (vector length: 1) =>
TCP Sink (input type: complex, vector length: 1).
This seems to be the correct way to specify the stream parameters (and indeed Companion shows errors if I make changes which mismatch the stream items), but I can find no way to decode the data correctly on the other end of the stream.
"the historic RFC 1700 (also known as Internet standard STD 2) has defined the network order for protocols in the Internet protocol suite to be big-endian , hence the use of the term 'network byte order' for big-endian byte order."
See https://en.wikipedia.org/wiki/Endianness
Having mentioned that the network order for protocols is big-endian, this actually says nothing about the byte order of the network payload itself.
Also note: Sun Microsystems made computers with big-endian native byte order (upon which much Internet protocol development was done).
I am surprised the previous answer has gone this long without a lesson on network byte order versus native byte order.
GNU Radio appears to assume native byte order for data from a UDP Source block.
Examining the data-type color codes in Help->Types of GNU Radio Companion, the orange-colored 'float' connections are float32.
To verify a computer's native byte order, in Python, do:
from sys import byteorder
byteorder
The result will be 'little' or 'big'.
It might be that no matter what type of floats you are sending, the bytes go onto the network in little-endian order. I had a similar problem with a UDP connection, and I solved it by parsing the floats as little-endian on the client side.
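A minimal sketch of that kind of client-side parsing in Python, assuming the stream is interleaved float32 I/Q in little-endian order (the host and port are placeholders for the TCP Sink address):

import socket
import struct

HOST, PORT = "127.0.0.1", 52001  # placeholders

sock = socket.create_connection((HOST, PORT))
buf = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    buf += chunk
    n = len(buf) // 8                  # one complex sample = 2 x 4-byte floats
    if n:
        samples, buf = buf[:n * 8], buf[n * 8:]
        for i, q in struct.iter_unpack("<ff", samples):
            print(i, q)                # '<f' = little-endian float32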
I am using XBee Digimesh Modules in API-Mode to send data between different industrial machines allowing them to share data, information and commands.
The API-Mode offers some basic commands, mainly to perform addressing and talk with the XBee Module itself in order to do configuration, etc.
Sending user data is done via a corresponding XBee API command which allows sending user-defined data with a maximum payload of 72 bytes.
Since I want to expand this communication to allow the integration of more machines, I am thinking about how to implement a basic communication system that's tailored to the very small payload of just 72 bytes.
Coming from the web, I normally would use some sort of JSON here but that would fill up the payload very quickly.
Also, it's not possible to send a frame with lots of information, since this also fills up the payload very quickly.
So I came up with a different way of communicating. Instead of transmitting frames packed with information, what about sending some sort of messages like this:
Machine-A Broadcasts: Who's there?
Machine-B Answers: It's me I am a xxx-Machine
Machine-C Answers: It's me I am a xxx-Machine
Machine-A now evaluates the replies and decides to work with Machine-B (because Machine-C does not match A's interface):
Machine-A to B: Hello B, Give me some Value, please!
Machine-B to A: There you go: 2.349590
This can be extended to other short messages. After each message the sender holds the type of message in a state, and the reply is evaluated in relation to that state/context.
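A rough sketch of that idea; the message strings and the state handling here are made up for illustration, and only the 72-byte limit comes from the XBee API frame described above:

MAX_PAYLOAD = 72  # user-data limit of the XBee API transmit frame

def make_msg(text):
    data = text.encode("ascii")
    assert len(data) <= MAX_PAYLOAD, "message does not fit into one frame"
    return data

# Machine A broadcasts a discovery request and remembers what it asked for.
state = "AWAITING_IDENTITY"
broadcast = make_msg("WHO?")

# Machine B would answer something like this:
reply = make_msg("ME:TYPE=xxx-Machine")

# A interprets the reply in the context of its stored state.
if state == "AWAITING_IDENTITY" and reply.startswith(b"ME:TYPE="):
    print("found machine of type:", reply[len(b"ME:TYPE="):].decode("ascii"))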
What I was trying to avoid was defining a bit-based protocol (like MIDI) which defines all events as bit-based flags. Since we do not know what type of hardware will be added in the future, I want a communication protocol that's very flexible and does not need a coordinator or message broker, etc.
But since this is the first time I am thinking about communication protocols, I am curious whether there are existing frameworks that can handle complex communication over such a small payload.
You might want to read through the ZigBee Cluster Library specification with a focus on the general commands. It describes a system of attribute discovery and retrieval. Each attribute has a 16-bit ID and a datatype (integers of various sizes, enumerated types, bitmaps) that determines its size.
It's a protocol designed for the small payloads of an 802.15.4 network, and you could potentially base your protocol on a subset of it. Other ZigBee specifications are simply a list of defined attributes (and commands) for a given 16-bit cluster ID.
Your master device can go through a discovery process to get a list of attribute IDs, and then send a request to get values for multiple IDs in one shot. The response will be packed tightly: a 16-bit ID, an 8-bit attribute type and then variable-length data. Even if your master device doesn't know what the ID corresponds to, it can pass the data along to other systems (like a web server) that do know.
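As a rough illustration of how compact that packing gets, here is a sketch of the 16-bit ID / 8-bit type / data layout described above (not the exact ZCL wire format; the IDs, type codes and values are made up):

import struct

# Hypothetical attribute records: (attribute ID, type code, struct format, value)
ATTRS = [
    (0x0000, 0x29, "<h", 2349),      # e.g. a signed 16-bit reading
    (0x0001, 0x39, "<f", 2.349590),  # e.g. the float value B sent to A above
]

payload = b""
for attr_id, type_code, fmt, value in ATTRS:
    # 16-bit ID + 8-bit type, followed by the variable-length data
    payload += struct.pack("<HB", attr_id, type_code) + struct.pack(fmt, value)

print(len(payload), "bytes:", payload.hex())  # comfortably under 72 bytes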
Can anyone tell me what major and minor (contained within the advertisement packet of BLE signals) are used for? I've heard that they're used for differentiating signals with the same UUID, but that raises questions like "why use two" and "is that just how certain receivers use it". It would be useful to have a decent explanation of them.
As per @Larme's comment, I presume you are asking about iBeacon advertisements - these are a special use of BLE. Bluetooth Low Energy service advertisements have a different format and don't include the major/minor.
The iBeacon specification doesn't say how to use major and minor - this is defined by the people that implement solutions using iBeacon. Two numbers just gives more flexibility.
A lot of effort went into making BLE use very little power. Accordingly the iBeacon advertisement has to be quite small in order to minimise the transmission time. I guess the designers decided two 16 bit numbers was a reasonable compromise between power consumption and a useable amount of information.
A typical retail use case could use the major to indicate a store (New York, Chicago, London etc.) and the minor to indicate the department (shoes, menswear etc.). The app that detects a beacon can then pass this information to a server, which can send back relevant information - the user's location on a map, specials for that department, etc. This was discussed in the guide that @Larme linked to.
A solution that presented information on museum exhibits might just use the major number to determine which exhibit the person was near and ignore the minor number. The minor number would still be in the advertisement, of course, the app just wouldn't use it for anything.
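If it helps to see where the two numbers sit, here is a rough sketch of pulling major and minor out of the manufacturer-specific data of an iBeacon advertisement; the 25-byte blob is fabricated, and note that major and minor are conventionally big-endian here, unlike GATT characteristic values:

import struct

# Fabricated manufacturer-specific data: Apple company ID (0x004C), the iBeacon
# type/length bytes 0x02 0x15, a 16-byte proximity UUID, major, minor and the
# measured TX power at 1 m.
mfg_data = bytes.fromhex(
    "4c000215"
    "00112233445566778899aabbccddeeff"  # proximity UUID (made up)
    "0001"                              # major = 1
    "000a"                              # minor = 10
    "c5"                                # TX power (signed byte)
)

uuid = mfg_data[4:20].hex()
major, minor = struct.unpack(">HH", mfg_data[20:24])  # big-endian 16-bit values
(tx_power,) = struct.unpack("b", mfg_data[24:25])
print(uuid, major, minor, tx_power)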
What is the relation between a BTS and a cell? I think one BTS can cover a few cells, and also some cells could be covered by more than one BTS, couldn't they?
Is the identification of the concrete BTS part of the information that the mobile receives from the GSM network, or does the mobile phone only know the cell ID?
Is the identification of the BSC part of the information that the mobile receives from the GSM network?
Ad 1: Typically one BTS can handle several cells. Common patterns are one BTS covering a circular area with one omnidirectional antenna, or a three-sector BTS which covers three cells with sector-radiating antennas. One cell can only be handled by one BTS at a time. Two or more BTSes are not possible, since the radio transmissions would interfere with each other. Note that this is completely different in WCDMA/UMTS since there is no concept of cells.
Ad 2: Since one cell is covered by exactly one BTS, the cell ID uniquely identifies the concrete BTS.
Ad 3: Since the BTS does not contain any control logic, the mobile communicates directly with the BSC, e.g. about radio resources.
Edit after comment:
1/ The BTS is "dumb", to put it simply. It only does what the BSC instructs it to do. For example, the BSC tells the BTS as well as the mobile which frequencies to use for the radio communication. A BTS does not route traffic, as it is hooked to exactly one BSC. It does not even route traffic to one of several mobiles attached to the BTS, as this is done by the BSC. Think of the BTS as a Um-to-Abis physical-layer and protocol transcoder.
2/ Actually my earlier statement that UMTS has no cell concept is not exactly true, it's just different.
GSM is FTDMA (frequency and time division multiple access). The radio channel is shared by using different frequencies (per cell) and timeslots (per mobile). Since radio frequency is used to distinguish participants, great care must be taken that no two GSM participants use the same frequency at the same time at the same location. The solution to this is cells, where geographic areas have different frequencies assigned. Network planning must ensure that no two neighbouring cells use the same frequencies, as this may lead to interference since you cannot control exactly the size of a cell (e.g. due to absorption and reflection). In GSM, a BTS has a fixed number of radio transmission channels; the number depends on the BTS hardware configuration. If all channels are in use, the cell is full, independent of the location of a mobile in the cell.
UMTS is CDMA (code division multiple access). The radio channel is shared by encoding the payload in a way that allows it to be decoded later, even if several senders use the same frequency range. That requires coding schemes which are collision-free (all codes are sufficiently different from each other) and a great deal of signal processing. As an analogy: at a party you can understand someone across the room, even if ten people are talking. The more senders communicate within the cell, the smaller the cell gets, in order to allow the BTS/Node-B to distinguish between senders. Therefore, in UMTS a cell's size is not geographically fixed. The cell "breathes" depending on its load.
OK, this thread is quite old, but it needs some further clarification for future readers.
When talking about the GSM physical network architecture, the term BTS (Base Transceiver Station) refers to the physical site itself - the 'small house with the tower' (although modern small BTSs are just boxes hung on walls or placed on rooftops).
Each such physical site can host one omni-directional cell, or several sector cells.
In GSM logical network architecture, there is some confusion.
The terms 'Cell' and 'Base Station' actually refer to the same physical entity (a set of transceiver units, each used to receive and transmit one of the paired UL/DL carrier frequencies allocated in the BA frequency set). Let's call this entity 'physical cell' just for clarification.
The term Base Station is used for radio resource management. A BSIC (BS Id Code, or BTS Id Code) is allocated for the 'physical cell' and is used in the radio-related conversations between the MS (Mobile Station) and the BSS (BTS and BSC), e.g. for measurement reports.
The BSIC is composed of 'local' parameters - Network Color Code (NCC) and BS Color Code (BCC), and is therefore unknown outside the network.
This is where the term Cell comes in:
The term Cell is used for Mobility Management. A Cell Identity (CI) is defined as a refinement of the Routing Area - one RA will include several cells in it.
The Global Cell Identifier (GCI) is composed of network, RA and CI, and is used for handovers inside and outside the network.
It is up to the BSC to convert the BSIC to the Cell Identity (the BSC may convert the BSIC directly to the GCI, or the BSC converts it to the CI and the MSC then converts that to the GCI).
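A small sketch of how those identifiers compose; the 3-bit widths of NCC and BCC are standard, while the MCC/MNC/area/CI values below are made up, and the area part is written as a LAC where the text above speaks of the Routing Area:

# BSIC: 3-bit Network Colour Code followed by the 3-bit BS Colour Code.
def bsic(ncc, bcc):
    return (ncc & 0x7) << 3 | (bcc & 0x7)

# Global Cell Identifier: network part (MCC + MNC) + area code + Cell Identity.
def gci(mcc, mnc, lac, ci):
    return f"{mcc:03d}-{mnc:02d}-{lac:04X}-{ci:04X}"

print(bin(bsic(ncc=2, bcc=5)))                     # local, radio-level identity
print(gci(mcc=262, mnc=1, lac=0x1234, ci=0x5678))  # globally unique cell identity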
Hope that helps a bit.
BTS means different things in different places!
When the terms MS, BTS and BSC appear together, BTS means the node between your phone and the MSC.
Sometimes we call a site (a small house and a tower) a BTS.
In Nokia GSM equipment, a cell is called a segment. Every cell has at least one BTS, and different BTSes have different functions, e.g. BTS1 provides voice service and BTS2 provides EDGE service.
The phone uses the BCCH (frequency)/NCC/BCC to identify different cells. It decodes the information from the BCCH to get the CI, LAC, etc.