Are BLE devices required to respond to a SCAN_REQ requests? - bluetooth-lowenergy

I have a BLE device that doesn't respond to SCAN_REQ and am working it out with the vendor independently per https://github.com/espressif/esp-idf/issues/10660.
When I use the Nordic nRF Connect iPhone app as a client I can see that device in the scan list and can connect to it. However, when I use a different client, a Python Windows one, that client doesn't show the device in its scan list and doesn't connect to it if I specify the exact address.
My question is, are BLE 4 devices required to respond to SCAN_REQ requests to be discoverable and connectable, or is it just an optional response to provide additional advertisement data?
EDIT: I believe that Emil's answer below (thanks) refers to this quote:

Yes, it's required to reply with a scan response. That is defined in Bluetooth Core v5.3, Vol 6 Part B (Link Layer), section 4.4.2.3, using the word "shall".
There is one exception though. There is a Filter Accept List in the controller which can contain addresses of centrals allowed to scan and/or connect. There are four combinations the host can set (advertising filter policy) that control if this list shall be used for filtering incoming SCAN_REQ and CONNECT_IND packets, respectively. If you don't use this filtering mechanism, then the device must send a scan response to every scan request.

There are two possible approaches to scanning—Passive Scanning or Active Scanning.
Passive Scanning is when Scanners receive advertising packets and process the contents.
In the case of Active Scanning, however, a device may decide it wants to know more about an advertising device and respond to the initial advertising packet by sending a Scan Request GAP protocol data unit (PDU). This basically means ‘Tell me more.’ The device receiving the Scan Request can send back a Scan Response PDU with more information, once again in the form of a collection of AD types.
The above has been extracted from: https://www.bluetooth.com/blog/advertising-works-part-1/ [the emphasis mine].

Related

What does auto-connection using white listing mean in BLE? Is it the same as directed advertising?

I have been experimenting with the BlueZ 5.50 Bluetooth stack. Here I have some confusion about the procedure "auto-connection using whitelist".
Suppose:
Device A - Advertiser
Device B - Scanner
Add the advertiser's (Device A) Bluetooth address to the white list in the scanner (Device B)
Device A will advertise with "Connectable Un-directed" adv type & default adv params
Device B will start scanning with the "Accept only PDUs from devices in white list" configuration
If B scans A's address, will B then explicitly send a connection request to A (without sending a Connection Create command)?
What is the basic difference between a paired device & a white listed device?
The white list can be used both when just scanning as well as when connecting.
Note that the packet exchange during advertising is this, when the central device is just scanning:
Advertiser sends ADV_IND.
Scanner sends SCAN_REQ.
Advertiser sends SCAN_RSP.
When the central device has a pending initiation (i.e. connection attempt) to a peripheral, the packet exchange is this:
Advertiser sends ADV_IND.
Initiator sends CONNECT_IND.
The connection is now established.
Note the timing between two packets in the flow above is 150 microseconds (T_IFS), which is quite quick. If the advertiser does not detect SCAN_REQ, it does not send SCAN_RSP. If it also does not detect CONNECT_IND, it does not enter the connection state, but continues to advertise. There is a need to have a white list because the host would not be quick enough to decide if the packet should be dropped or not. Therefore the white list is implemented directly in the Bluetooth controller hardware.
A central device using the white list, will simply drop any ADV_IND having an address that cannot be found in the white list. Therefore no SCAN_REQ or CONNECT_IND is sent in this case.
"Auto-connect" therefore refers to when the initiator is constantly looking for ADV_IND packets where the sender's address is in the white list. If one is found, a CONNECT_IND is sent and the connection gets established, resulting in an "LE Connection Complete" event. Using the white list is the only way to have two or more pending connections, since otherwise you need to specify exactly one target address when initiating a connection. (Although a workaround sometimes used when the white list cannot be used, for example if it is too small to contain all desired addresses, is to let the central first perform a scan, then initiate a connection to the target address, with some short timeout. This introduces latency as at least two ADV_INDs must be sent.)
A paired/bonded device is a completely different thing. It means that both devices have stored, in their databases, information about the remote device, such as encryption keys, client characteristic configuration descriptor state and GATT DB cache. Bonded devices are usually listed in a user interface as well.
When the white list is used when establishing connections, you can have addresses in this list of non-bonded devices. You can also have bonded devices which you do not currently want to connect to, which are then not included in the white list.
What I've written above is general BLE without any specific Bluetooth stack in mind. BlueZ might have certain conditions/flows when the white list is used.
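An added example, not part of either of Emil's answers above, modelling the two filters they describe: the advertiser's advertising filter policy (the four HCI Advertising_Filter_Policy values are real; the rest is a constructed simulation), and the scanner/initiator dropping ADV_IND from addresses outside its white list. Class names, addresses and the `advertising_event` helper are assumptions for illustration.

```python
from dataclasses import dataclass, field

# HCI Advertising_Filter_Policy values (LE Set Advertising Parameters)
SCAN_ANY_CONN_ANY = 0x00    # answer SCAN_REQ and CONNECT_IND from anyone
SCAN_LIST_CONN_ANY = 0x01   # SCAN_REQ only from the Filter Accept List
SCAN_ANY_CONN_LIST = 0x02   # CONNECT_IND only from the Filter Accept List
SCAN_LIST_CONN_LIST = 0x03  # both only from the Filter Accept List


@dataclass
class Advertiser:
    address: str
    filter_policy: int = SCAN_ANY_CONN_ANY
    accept_list: set = field(default_factory=set)
    state: str = "advertising"

    def _accepted(self, pdu: str, peer: str) -> bool:
        scan_filtered = self.filter_policy in (SCAN_LIST_CONN_ANY, SCAN_LIST_CONN_LIST)
        conn_filtered = self.filter_policy in (SCAN_ANY_CONN_LIST, SCAN_LIST_CONN_LIST)
        filtered = scan_filtered if pdu == "SCAN_REQ" else conn_filtered
        return (not filtered) or peer in self.accept_list

    def receive(self, pdu: str, peer: str):
        """PDU the advertiser answers with inside T_IFS (None = ignored)."""
        if not self._accepted(pdu, peer):
            return None                 # filtered in the controller, no reply
        if pdu == "SCAN_REQ":
            return "SCAN_RSP"           # mandatory ("shall") once accepted
        if pdu == "CONNECT_IND":
            self.state = "connection"   # leaves the advertising state
        return None


@dataclass
class Central:
    address: str
    use_accept_list: bool = False   # HCI Scanning_Filter_Policy 0x01 / initiator list
    accept_list: set = field(default_factory=set)
    initiating: bool = False        # pending connection ("auto-connect")

    def on_adv_ind(self, adv_addr: str):
        """PDU the scanner/initiator answers an ADV_IND with (None = dropped)."""
        if self.use_accept_list and adv_addr not in self.accept_list:
            return None                 # dropped by the controller: no SCAN_REQ/CONNECT_IND
        return "CONNECT_IND" if self.initiating else "SCAN_REQ"


def advertising_event(adv: Advertiser, central: Central) -> list:
    """One ADV_IND on one channel and the exchange it triggers, as a PDU list."""
    trace = ["ADV_IND"]
    reply = central.on_adv_ind(adv.address)
    if reply:
        trace.append(reply)
        answer = adv.receive(reply, central.address)
        if answer:
            trace.append(answer)
    return trace
```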

How to use BLE GATT to show dynamic sets of data

My goal is this: I have a bunch of sensors out in a field connected in a sort of P2P network. On one side of the field I have a device that provides a BLE server to bridge data between a controller (phone or laptop) and all the devices out in the field.
One of the requirements is a sort of network visualization and management service. The gotcha with this is that there are a variable number of devices out in the field.
I have a plan to have the bridge device send a broadcast out to the network to get all the devices connected. My only problem is that I'm relatively new to BLE and GATT in general and I'm not certain what the standard is for showing a list of data with a dynamic length.
Is there such a standard? Do any of you have any tips to help me wrap my head around how to organize this into a GATT?
Thanks for your help
To the best of my knowledge, BLE and GATT don't have any best practice or pattern that would fit your requirement. So you have to roll your own.
An option would be to implement a request–response protocol: The controller sends a request to the BLE server (e.g. requesting the data for sensor 17) and the server responds with the data.
In GATT terms, the server provides a service with two characteristics:
The request characteristic (writable)
The response characteristic (readable with notifications)
For communication with the server, the controller connects to the server and activates notifications for the response characteristic. Then it writes the request to the request characteristic and waits for an update on the response characteristic.
As BLE has a low bandwidth, you should use a compact, binary protocol (and not JSON or XML).
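An added example, not part of the answer above, of one such compact binary request/response protocol for the two-characteristic design. The opcodes, field layout, the 2-byte chunk header and the default ATT_MTU of 23 (20 bytes of value per notification) are assumptions; the point is the length-prefixed list of variable size and its reassembly across several notifications.

```python
import struct

ATT_MTU = 23                 # default MTU; a notification carries MTU - 3 bytes
CHUNK = ATT_MTU - 3 - 2      # minus the 2-byte chunk header (seq, total)
OP_LIST_NODES, OP_READ_SENSOR = 0x01, 0x02


def encode_request(opcode: int, sensor_id: int = 0) -> bytes:
    """Controller -> request characteristic: opcode (u8), sensor id (u16 LE)."""
    return struct.pack("<BH", opcode, sensor_id)


def encode_response(opcode: int, sensor_id: int, samples: list) -> bytes:
    """Server payload: opcode, sensor id, count (u8), then count x u16 LE."""
    if len(samples) > 255:
        raise ValueError("count field is one byte")
    return struct.pack("<BHB", opcode, sensor_id, len(samples)) + struct.pack(
        "<%dH" % len(samples), *samples)


def chunk_response(payload: bytes) -> list:
    """Split into notification values: [seq, total] + up to CHUNK bytes each."""
    pieces = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)] or [b""]
    return [bytes([seq, len(pieces)]) + p for seq, p in enumerate(pieces)]


def reassemble(notifications: list) -> bytes:
    """Controller side: tolerates out-of-order delivery, rejects gaps."""
    total = notifications[0][1]
    parts = {n[0]: n[2:] for n in notifications}
    if sorted(parts) != list(range(total)):
        raise ValueError("missing chunk(s): have %s of %d" % (sorted(parts), total))
    return b"".join(parts[i] for i in range(total))


def decode_response(payload: bytes):
    """Return (opcode, sensor_id, samples); checks the declared count."""
    opcode, sensor_id, count = struct.unpack_from("<BHB", payload)
    body = payload[4:]
    if len(body) != 2 * count:
        raise ValueError("declared %d samples, got %d bytes" % (count, len(body)))
    return opcode, sensor_id, list(struct.unpack("<%dH" % count, body))
```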

BLE indications

As I understand it, BLE indications are a reliable communication method. How do you know if your indication was not communicated? I am writing code for the peripheral/server and currently, when I send a notification, I get a manual response from the central. I read that if I use indications, the acknowledges take place in the L2CAP layer automatically and communication is therefore faster, but how does my embedded controller know the Bluetooth module was not successful at getting the packet across the link? We are using the Microchip RN4030 Bluetooth module.
Let's make things clear.
The BLE stack looks roughly like the following. The stack has these layers in this order:
Link Layer
HCI (if controller and host are separated)
L2CAP
ATT
GATT
Application
The Link Layer is a reliable protocol in the sense that all packets are protected by a CRC and every packet is acknowledged by the receiving device. If a packet is not acknowledged, it is resent until an acknowledge is received. There can also only be one outstanding packet, which means no reordering of packets is possible. Acknowledges are normally not presented to upper layers.
The HCI layer is the communication protocol between the controller and the host.
The L2CAP layer does almost nothing if you use the standard MTU size of 23. It has a length header and a type code ("channel") which indicates what type of packet is being sent (usually ATT).
On the ATT level, there are two types of packets that are sent from the server that are not sent as a response to a client request. Those are notifications and indications. Sending one notification or indication has the same "performance" since the type is just a tag of a packet that's sent over the lower layers. The differences are listed below:
Indications:
When an indication packet is sent to the client, the client must acknowledge the packet by sending a confirmation packet when it has received the indication packet. Even if the indication packet is invalid, a confirmation shall be sent back.
Upper layers are not involved in sending back the confirmation.
The server may not send a new indication until it has received a confirmation from the previous one.
The only time you won't receive a confirmation after an indication is if the link is dropped (you should then get a disconnected event), or there is some bug in some of the BLE stacks.
After the app has sent an indication, most BLE stacks confirm to the app that a confirmation has been received from the client, i.e. that the indication operation has completed.
Notifications:
No ATT layer acknowledges are sent.
"Commands and notifications that are received but cannot be processed, due to buffer overflows or other reasons, shall be discarded. Therefore, those PDUs must be considered to be unreliable." However I have never noticed an implementation actually following this rule, i.e. all notifications are delivered to the application. Or I've never hit the max buffer size.
The GATT layer is mostly a definition of how the attribute protocol should be used and defines a DB structure of characteristics.
Implications
In my opinion, there are several flaws or issues with the BLE standard. There are two things that might make indications useless and unreliable in practice:
There is no way to send back an error response instead of a confirmation.
The fact that it is the ATT layer that sends back the confirmation directly when it has received the indication, and not the app when it has successfully handled the indication.
This means that if, for example, some bug or other issue causes the BLE stack to be unable to deliver the indication to the app, or your app crashes, or your app finds your value to be invalid, there is no way your peripheral can be aware of that. Since it got the confirmation, it thinks everything is fine.
I can't understand why they defined indications this way. Since the app doesn't send the confirmation but a lower layer does, there seems to be no point at all in having an ATT layer acknowledge instead of just using the Link Layer acknowledge. Both are just acknowledges that the packet has been received halfway of its destination.
If we draw a parallel to an HTTP POST and the internet, we could consider the Link Layer acknowledge as when the network card of the destination receives the request and the ATT confirmation as a confirmation that the TCP stack received the packet. You have no way of knowing that your web server software actually did receive your request and processed it with success.
The fact that notifications are allowed to be dropped by the receiver is also bad. Normally notifications are used if the peripheral wants to stream a lot of data to the central, and then you don't want dropped packets. They should have designed the flow control so that the Link Layer stops acknowledging incoming packets until the app is ready to process the next notification. This is even already implemented at the LL + HCI + Host layers.
Windows
One interesting thing about at least the Windows BLE stack is, if it receives indications faster than the app processes them it starts to drop the indications as well, even though only notifications should be allowed to be dropped due to "buffer overflows or other reasons". It buffers at most 512 indications in my tests.
That said
Just use notifications and, if you want some kind of confirmation, let the client send a write packet when it has received your data and succeeded in processing it.
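An added example, not part of the answer above, modelling the ATT behaviour it describes: the client's stack confirms an indication before the application ever sees the value, the server may not send a second indication until the confirmation arrives, and the suggested alternative of a notification answered by an application-level write. The three ATT opcodes and the Write Command opcode are the real values from the Core Specification (Vol 3, Part F); the classes, the `ACK_HANDLE` characteristic and the simulated client are assumptions.

```python
ATT_HANDLE_VALUE_NTF = 0x1B   # notification: no ATT-level acknowledge
ATT_HANDLE_VALUE_IND = 0x1D   # indication: client must answer with CFM
ATT_HANDLE_VALUE_CFM = 0x1E   # confirmation, sent by the client's ATT layer
ATT_WRITE_CMD = 0x52          # used here for the app-level "I processed it" ack
ACK_HANDLE = 0x0020           # assumed handle of a writable "ack" characteristic


def att_pdu(opcode: int, handle: int = 0, value: bytes = b"") -> bytes:
    """ATT PDU: opcode (u8) [+ handle (u16 LE) + value]; CFM is opcode only."""
    if opcode == ATT_HANDLE_VALUE_CFM:
        return bytes([opcode])
    return bytes([opcode, handle & 0xFF, handle >> 8]) + value


class AttServer:
    def __init__(self):
        self.indication_pending = False
        self.acked_by_app = []          # values the peer's *application* confirmed

    def notify(self, handle: int, value: bytes) -> bytes:
        return att_pdu(ATT_HANDLE_VALUE_NTF, handle, value)   # fire and forget

    def indicate(self, handle: int, value: bytes) -> bytes:
        # Spec rule: no new indication until the previous one is confirmed.
        if self.indication_pending:
            raise RuntimeError("previous indication not yet confirmed")
        self.indication_pending = True
        return att_pdu(ATT_HANDLE_VALUE_IND, handle, value)

    def receive(self, data: bytes) -> None:
        if data[0] == ATT_HANDLE_VALUE_CFM:
            self.indication_pending = False   # says nothing about the peer's app
        elif data[0] == ATT_WRITE_CMD:
            self.acked_by_app.append(data[3:])


class AttClient:
    """Models the client stack: CFM is sent before the app ever sees the value."""
    def __init__(self, app_accepts):
        self.app_accepts = app_accepts  # callable: does the app handle this value?
        self.delivered = []

    def receive(self, data: bytes) -> list:
        op, handle, value = data[0], data[1] | (data[2] << 8), data[3:]
        replies = []
        if op == ATT_HANDLE_VALUE_IND:
            replies.append(att_pdu(ATT_HANDLE_VALUE_CFM))  # even if value is invalid
        if self.app_accepts(value):
            self.delivered.append(value)
            if op == ATT_HANDLE_VALUE_NTF:   # the recommended pattern: app-level ack
                replies.append(att_pdu(ATT_WRITE_CMD, ACK_HANDLE, value))
        return replies
```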

Bluetooth 4.0 scan response

What exactly is a BLE scan response packet?
Since there is almost nothing to be found online, we would like to know this.
Does a scan response packet, respond on a device scan or is it like the advertisement packet sent every x seconds?
A BLE scan response is the packet that is sent by the advertising device (peripheral) upon the reception of scanning requests (i.e. yes, it is a response to a device scan). The scan response usually has more data than the advertising packets. In other words, central devices send scan requests to the advertising device in order to get additional user data through the scan response. Please also note that scan responses are considered to have fixed 'static' data relative to the more dynamic advertising data.
Advertising packets and scan response share the same format, and are transmitted over the same three physical channels (they are both sent as advertising events), but are otherwise two different things.
For more information, I recommend reading about scan response packets in the SIG's core specification found here.
I hope this helps
An important addition to yousif saeed's answer:
According to the Bluetooth 4.x specification, peripheral devices accepting Scan Requests:
Must advertise this by using a specific Advertising Type value in the protocol header.
Must use advertising intervals of 100 ms or more, so that the central/peripheral devices can exchange the Scan Request/Response packets in the time between two consecutive advertising packets (advertising interval).
Keep in mind, also, that depending on your particular hardware platform and Bluetooth Low Energy software stack:
You may find that a peripheral device accepting Scan Requests is non-connectable, that is, it may be limited to behaving as a pure beacon (connection-less).
I was just looking for this information and it is difficult to find good technical resources beyond the basic description.
There is a great few pages on one of the manufacturer's sites that goes into the details of how their hardware interacts with these communications.
The scan response packet consists of:
Device name,
Transmission power,
Beacon ID,
Firmware version,
Battery level
https://support.kontakt.io/hc/en-gb/articles/201492492-iBeacon-advertising-packet-structure
https://support.kontakt.io/hc/en-gb/articles/201493072-Beacon-services
I am not promoting Kontakt.io, but they did a pretty good job of providing this answer in good detail.
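An added example, not part of either answer above, that parses an advertising channel PDU the way the two answers describe it: the PDU type in the header decides whether the device answers a SCAN_REQ at all (only ADV_IND and ADV_SCAN_IND are scannable), and the SCAN_RSP payload is a run of AD structures such as the local name and Tx power level. The PDU type and AD type values are the real ones from the Core Specification; the sample SCAN_RSP bytes are constructed, and the address and name in it are made up.

```python
PDU_TYPES = {0: "ADV_IND", 1: "ADV_DIRECT_IND", 2: "ADV_NONCONN_IND",
             3: "SCAN_REQ", 4: "SCAN_RSP", 5: "CONNECT_IND", 6: "ADV_SCAN_IND"}
SCANNABLE = {"ADV_IND", "ADV_SCAN_IND"}        # only these answer a SCAN_REQ
CONNECTABLE = {"ADV_IND", "ADV_DIRECT_IND"}
AD_TYPES = {0x01: "Flags", 0x08: "Shortened Local Name", 0x09: "Complete Local Name",
            0x0A: "Tx Power Level", 0x16: "Service Data", 0xFF: "Manufacturer Specific"}


def parse_ad_structures(data: bytes) -> list:
    """AdvData / ScanRspData: a run of (length, AD type, payload) structures."""
    out, i = [], 0
    while i < len(data):
        length = data[i]
        if length == 0:                     # zero length = padding, stop here
            break
        if i + 1 + length > len(data):
            raise ValueError("AD structure runs past the end of the data")
        ad_type, payload = data[i + 1], data[i + 2:i + 1 + length]
        out.append((AD_TYPES.get(ad_type, "0x%02X" % ad_type), bytes(payload)))
        i += 1 + length
    return out


def fmt_addr(raw: bytes) -> str:
    return ":".join("%02X" % b for b in reversed(raw))   # LSB is sent first on air


def parse_adv_pdu(pdu: bytes) -> dict:
    """Advertising channel PDU: 2-byte header, then 6..37 bytes of payload."""
    if len(pdu) < 8:
        raise ValueError("shorter than header + one address")
    hdr, length = pdu[0], pdu[1] & 0x3F  # 6-bit length in 4.x, 8-bit in 5.x, max 37 anyway
    if (hdr & 0x0F) not in PDU_TYPES:
        raise ValueError("unknown/RFU PDU type %d" % (hdr & 0x0F))
    kind = PDU_TYPES[hdr & 0x0F]
    payload = pdu[2:2 + length]
    if len(payload) != length:
        raise ValueError("header says %d payload bytes, got %d" % (length, len(payload)))
    first, second = payload[:6], payload[6:12]
    info = {"type": kind, "tx_random": bool(hdr & 0x40), "rx_random": bool(hdr & 0x80),
            "scannable": kind in SCANNABLE, "connectable": kind in CONNECTABLE}
    if kind in ("SCAN_REQ", "CONNECT_IND"):        # ScanA/InitA first, then AdvA
        info["peer_addr"], info["adv_addr"] = fmt_addr(first), fmt_addr(second)
    elif kind == "ADV_DIRECT_IND":                  # AdvA, then TargetA
        info["adv_addr"], info["peer_addr"] = fmt_addr(first), fmt_addr(second)
    else:                                            # AdvA, then AD structures
        info["adv_addr"] = fmt_addr(first)
        info["ad"] = parse_ad_structures(payload[6:])
    return info


# Constructed sample: a SCAN_RSP from random address C0:11:22:33:44:55 carrying
# a Complete Local Name and a Tx Power Level of -12 dBm (0xF4 as signed int8).
SAMPLE_SCAN_RSP = (bytes([0x44, 0x14, 0x55, 0x44, 0x33, 0x22, 0x11, 0xC0, 0x0A, 0x09])
                   + b"Sensor-17" + bytes([0x02, 0x0A, 0xF4]))
```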
Yes, it does depend on the device scan.
I recently had this experience.
I was working with a Nordic device and started sending advertising packets which included scan rsp data. But either I was getting no scan rsp packet or hardly any packets. The issue was I was not scanning from my other Nordic device. Once I started scanning from another device, scan rsp packets started coming quickly.

SMS encryption over GSM

I have read this somewhere:
Most mobile operators encrypt all mobile communication data, including SMS messages. In GSM, messages are encrypted using A5/1, but even when encrypted, the data held by SMS is readable for the operator. Mobile phone operators have the ability to filter and modify short messages during delivery. Also, it is possible that the operator might not filter messages on purpose but might use equipment that cannot handle encrypted messages.
I want to know: is it true?
Can someone explain how this filtering is done? And is there any solution to avoid such loss of messages on the network?
A5/1 is used on the radio link between the mobile and the base station controller (BSC, the network entity that manages the radio resources). The radio link transports a couple of higher level protocols, among them MAP, which is used to transport SMS.
The BSC relays SMS over MAP into the core network. Neither the protocol stack between the BSC and the core network nor the communication inside the core network is encrypted. This was deemed unnecessary at the time GSM was designed; the links were supposed to be the mobile operator's very own property and territory and were therefore assumed to be secure.
The core network typically delivers SMS to an SMSC (short message service center), which is responsible for routing messages to recipients.
A network operator can read SMS in clear text in various places, e.g.
With a protocol analyzer, tapping links between network nodes
On the SMSC, in message queues (databases...) or even log files
On an MSC when tracing MAP messages
Message filtering and modification may happen on the SMSC, depending on the network operator needs.