BLE communication after pairing - bluetooth-lowenergy

I'm new to BLE.
I understand that pairing is achieved after the advertising/scanning phase, and that the link then operates based on the connection interval.
My question is about what happens after a connection has been established: is there any periodic message exchange to keep the connection alive, or is the only data exchanged based on Characteristics' reads/writes or notifications/indications?
Thanks,
Andrea

The connection parameters for a BLE connection are a set of parameters that determine when and how the Central and the Peripheral in a link transmit data. It is always the Central that actually sets the connection parameters used, but the Peripheral can send a so-called Connection Parameter Update Request, which the Central can then accept or reject. As long as the connection is up, the two devices exchange (possibly empty) link-layer packets at every connection event, and that periodic exchange is what keeps the link alive.
Connection supervision timeout: this timeout determines how long after the last successful data exchange a link is considered lost. A Central will not start trying to reconnect before the timeout has passed, so if you have a device that goes in and out of range often, and you need to notice when that happens, it might make sense to use a short timeout.
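As a rough illustration on the central side, here is a minimal Kotlin sketch for an Android central, assuming a connected BluetoothGatt instance obtained earlier from connectGatt(). Android does not expose the raw parameter values to the app; it only lets you request a preference, and the peripheral-side Connection Parameter Update Request depends on the peripheral's stack and is not shown.

```kotlin
import android.bluetooth.BluetoothGatt

// Sketch only: ask the Android stack to renegotiate the connection parameters.
// The final values are still decided between the two stacks, as described above.
fun preferLowLatency(gatt: BluetoothGatt): Boolean {
    // CONNECTION_PRIORITY_HIGH requests a short connection interval
    // (lower latency, higher power); CONNECTION_PRIORITY_LOW_POWER does the opposite.
    return gatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH)
}
```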
For more detail, read this thread.

Related

Can a BLE Service enable two-way writing between Peripheral and Server?

I want two BLE devices to connect to each other, and exchange data bi-directionally.
These devices are exactly equivalent - this is basically a serial cable between two peers.
There is not one 'store of data', or hierarchy between the two.
Both ends basically transmit data to each other (there is no polling for 'read' data), and no acknowledgement is required.
BLE documentation refers to the Characteristic Properties as:
A Client can send data to the server with 'write'
A Server uses a 'notify' characteristic (which the Client subscribes to)
So basically I have just one question:
Can I just have one Characteristic with two properties,
'Notify' (writing to the Client) and
'Write' (writing to the Server),
that the Peripheral advertises, and will this enable two-way write behavior?
First a few things to make everything clear, in case somebody wonders:
BLE has the concept of Central and Peripheral. Peripherals advertise their presence. Centrals scan and discover peripherals which they then connect to. Once the connection is created, these link layer roles do not really matter in terms of GATT and L2CAP Connection oriented channels, which are both used to transfer data.
Every device (including centrals) supporting connections over BLE must have a GATT server. While it is far more common that the peripheral only uses the GATT server role, it can also act as a GATT client, and centrals can likewise act as GATT servers. Furthermore, both GATT roles can be supported simultaneously. Both sides could, for example, expose an identical GATT server characteristic with the write property; I'm not sure if this is a common approach, though.
Now, back to your question. You can definitely have one characteristic with both the "write" and "notify" properties. When you write, I would suggest using "Write Without Response", since that gives better throughput than "Write With Response": it does not require an acknowledgement between every write. Over the link layer, all kinds of packets are still sent "reliably", meaning there will be no packet drops unless the whole connection drops.
I would, however, suggest that you have one characteristic per direction. The reason is that some Bluetooth stacks cannot send and receive data on one single characteristic in a thread-safe way, due to how their APIs are designed. In particular, Android's API has one method to set the data of a characteristic and another method to send the characteristic's current data. If a notification arrives in between these calls (when the Bluetooth stack internally assigns the new data to the characteristic), the write operation will send the data that was just notified instead of the intended data.
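To make the "one characteristic per direction" suggestion concrete, here is a hedged Kotlin sketch for an Android central. The service and characteristic UUIDs are made up for the example, and the pre-API-33 setValue()/writeCharacteristic() calls are used for brevity.

```kotlin
import android.bluetooth.BluetoothGatt
import android.bluetooth.BluetoothGattCharacteristic
import java.util.UUID

// Hypothetical UUIDs, one characteristic per direction as suggested above.
val RX_CHAR_UUID: UUID = UUID.fromString("0000aaa1-0000-1000-8000-00805f9b34fb") // central -> peripheral, Write Without Response
val TX_CHAR_UUID: UUID = UUID.fromString("0000aaa2-0000-1000-8000-00805f9b34fb") // peripheral -> central, Notify

fun sendToPeripheral(gatt: BluetoothGatt, serviceUuid: UUID, data: ByteArray) {
    val rx = gatt.getService(serviceUuid)?.getCharacteristic(RX_CHAR_UUID) ?: return
    // Write Without Response: no ATT acknowledgement, but still reliable at the link layer.
    rx.writeType = BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE
    rx.setValue(data)            // deprecated since API 33, shown for brevity
    gatt.writeCharacteristic(rx)
}
```

Notifications from the peripheral would arrive on TX_CHAR_UUID via onCharacteristicChanged(), so the two directions never touch the same characteristic object.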
You could also check out L2CAP Connection oriented channels (introduced in Bluetooth v4.2) which is a good way to transfer raw data byte packets, when the GATT structure is not appropriate.
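For completeness, a hedged sketch of what an L2CAP connection-oriented channel looks like on Android (API 29+). The PSM exchange and threading are simplified; in a real design the peripheral would typically publish its PSM in a GATT characteristic.

```kotlin
import android.bluetooth.BluetoothAdapter
import android.bluetooth.BluetoothDevice
import android.bluetooth.BluetoothSocket

// Server side (e.g. the peripheral): listen on a dynamically assigned PSM.
fun openL2capServer(adapter: BluetoothAdapter): Int {
    val serverSocket = adapter.listenUsingInsecureL2capChannel()
    val psm = serverSocket.psm                 // publish this value to the other side somehow
    Thread {
        val socket = serverSocket.accept()     // blocks until the other device connects
        socket.outputStream.write(byteArrayOf(0x01, 0x02, 0x03))
    }.start()
    return psm
}

// Client side (e.g. the central): connect to the PSM learned from the other device.
fun openL2capClient(device: BluetoothDevice, psm: Int): BluetoothSocket {
    val socket = device.createInsecureL2capChannel(psm)
    socket.connect()                           // blocking call; gives a raw byte stream
    return socket
}
```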

BLE Missing Packets (Protocol / Spec question)

I have been learning the nuts and bolts of BLE lately, because I intend to do some development work using a BLE stack. I have learned a lot from the online documentation and the spec, but there is one aspect that I cannot seem to find.
BLE uses frequency hopping for communication. Once two devices are connected (one master and one slave), it looks like all communication is then initiated via the master and the slave responds to each packet. My question involves loss of packets in the air. There are two major cases I am concerned with:
Master sends a packet that is received by the slave and the slave sends a packet back to the master. The master doesn't receive the packet or if it does, it is corrupt.
Master sends a packet that is not received by the slave.
Case 1 to me is a "don't care" (I think). Basically the master doesn't get a reply, but at the very least the slave got the packet and can "sync" to it. The master does whatever it needs to and tries transmitting the packet again at the next connection event.
Case 2 is the harder case. The slave doesn't receive the packet and therefore cannot "sync" its communication to the current frequency channel.
How exactly do devices synchronize the channel hopping sequence with each other when packets are lost in the air (specifically case 2)? Yes, there is a channel map, so the slave technically knows what frequency to jump to for the next connection event. However, the only way I can see all of this working is via a "self-timed" mechanism based on the connection parameters. Is this good enough? I mean, given clock drift, there will be slight differences in the amount of time the master and slave are transmitting and receiving on the same channel... and eventually they will be off by one channel, two channels, etc. Is this not really an issue, because for that to happen 'a lot' of time needs to pass, given the 500 ppm clock spec? I understand there is a supervision timer that declares the connection dead after no valid data is transferred for some time. However, I still wonder about the "hopping drift", which brings me to the next point.
How much "self-timing" is employed / mandated by the protocol? Do slave devices use a valid start of packet from the master every connection interval to re-synchronize the channel hopping? For example: if (connection interval + some window) elapses, hop to the next channel, OR if a packet is received, re-sync / restart the timeout timer. This would be a hop timer separate from the supervision timer.
I can't really find this information in the Core 5.2 spec. It's pretty dense at over 3000 pages... If somebody could point me to the relevant sections in the spec or somewhere else, or even answer the questions, that would be great.
The slave knows the channel map. If one packet is not received from the master, it will listen again one connection interval later on the next channel. If that is also not received, it adds one more connection interval and moves on to the next channel, and so on.
The slave also stores a timestamp (or event counter) for the last packet detected from the master, regardless of whether its CRC was correct. This is called the anchor point. This is not the same time point as the one used for the supervision timeout.
The amount of time between the anchor point and the next expected packet is multiplied by the combined master and slave clock accuracy (for example 500 ppm), and 16 microseconds are added, to get the window widening. The slave then listens for this amount of time before and after the expected packet arrival time.
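As a small worked example of that window widening (the numbers below are illustrative, not taken from the answer above):

```kotlin
// Window widening as described above: (time since anchor) x (combined clock
// accuracy in ppm), plus a fixed 16 µs. The slave opens its receive window
// this long before and after the expected packet arrival time.
fun windowWideningUs(timeSinceAnchorUs: Long, masterPpm: Long, slavePpm: Long): Long =
    timeSinceAnchorUs * (masterPpm + slavePpm) / 1_000_000 + 16

fun main() {
    // Example: 50 ms connection interval, one missed event, 250 ppm per side.
    // 100_000 µs * 500 ppm = 50 µs of drift, plus 16 µs = 66 µs.
    println(windowWideningUs(timeSinceAnchorUs = 100_000, masterPpm = 250, slavePpm = 250))
}
```

In this example the uncertainty after a missed event is only tens of microseconds, which is why the "self-timed" hopping the question worries about works in practice, as long as the supervision timeout has not expired.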

What could cause the slave not to send LL_START_ENC_RSP during the BLE encryption procedure?

During the encryption procedure of a BLE connection, the master and the slave perform a 3-way handshake to validate the encryption. I am facing a case where the slave does not send the last part of this handshake, i.e. the LL_START_ENC_RSP message, shown in red in the schematic of this handshake (see the note about Image 1 below):
Is there a specified reason for this to happen? By specified I mean a reason that is not implementation-specific.
The BLE Core Spec 4.2 says:
When the Link Layer of the slave receives an LL_START_ENC_RSP PDU it
shall transmit an LL_START_ENC_RSP PDU. This packet shall be sent
encrypted.
But this does not specify any condition for the slave not to send this packet.
Would it be possible at this point that the slave thinks it has the Long Term Key associated with the current master (because if that weren't the case, the slave wouldn't have started the 3-way handshake, right?), but its LTK is incorrect and the decryption fails? If that happened, wouldn't there be a disconnection message instead of nothing?
As I am pretty new to BLE, I have no idea how to analyze or interpret this issue, so any help would be greatly appreciated. The presence and absence of the messages have been observed with the help of a BLE sniffer.
Note: Image 1 is a reproduction of Figure 7-26 from the book Bluetooth Low Energy: The Developer's Handbook by Robin Heydon.
According to the spec, the third message shall always be sent. If it is not sent, there is a bug in the implementation.
If the slave has the wrong LTK, the slave will not accept the LL_START_ENC_RSP from the master. If this happens, the link will be dropped after the supervision timeout since the slave will never acknowledge that packet.
Note that in order for a sniffer to successfully decrypt a packet, it needs to know the LTK. Nordic's sniffer program will catch the LTK if the sniffer runs during the pairing process.

BLE indications

As I understand it, BLE indications are a reliable communication method. How do you know if your indication was not communicated? I am writing code for the peripheral/server, and currently when I send a notification, I get a manual response from the central. I read that if I use indications, the acknowledgements take place in the L2CAP layer automatically and communication is therefore faster, but how does my embedded controller know that the Bluetooth module was not successful at getting the packet across the link? We are using the Microchip RN4030 Bluetooth module.
Let's make things clear.
The BLE stack looks roughly like the following. The stack has these layers in this order:
Link Layer
HCI (if controller and host are separated)
L2CAP
ATT
GATT
Application
The Link Layer is a reliable protocol in the sense that all packets are protected by a CRC and every packet is acknowledged by the receiving device. If a packet is not acknowledged, it is resent until an acknowledgement is received. There can also only be one outstanding packet at a time, which means no reordering of packets is possible. Acknowledgements are normally not presented to the upper layers.
The HCI layer is the communication protocol between the controller and the host.
The L2CAP layer does almost nothing if you use the standard MTU size of 23. It has a length header and a type code ("channel") which indicates what type of packet is being sent (usually ATT).
On the ATT level, there are two types of packets that are sent from the server that are not sent as a response to a client request. Those are notifications and indications. Sending one notification or indication has the same "performance" since the type is just a tag of a packet that's sent over the lower layers. The differences are listed below:
Indications:
When an indication packet is sent to the client, the client must acknowledge the packet by sending a confirmation packet when it has received the indication packet. Even if the indication packet is invalid, a confirmation shall be sent back.
Upper layers are not involved in sending back the confirmation.
The server may not send a new indication until it has received a confirmation from the previous one.
The only time you won't receive a confirmation after an indication is if the link is dropped (you should then get a disconnected event), or there is some bug in some of the BLE stacks.
After the app has sent an indication, most BLE stacks report back to the app that a confirmation has been received from the client, i.e. that the indication operation has completed.
Notifications:
No ATT layer acknowledges are sent.
"Commands and notifications that are received but cannot be processed, due to buffer overflows or other reasons, shall be discarded. Therefore, those PDUs must be considered to be unreliable." However I have never noticed an implementation actually following this rule, i.e. all notifications are delivered to the application. Or I've never hit the max buffer size.
The GATT layer is mostly a definition of how the attribute protocol should be used and defines a DB structure of characteristics.
Implications
In my opinion, there are several flaws or issues with the BLE standard. There are two things that might make indications useless and unreliable in practice:
There is no way to send back an error response instead of a confirmation.
The fact that it is the ATT layer that sends back the confirmation directly when it has received the indication, and not the app when it has successfully handled the indication.
This means that if, for example, some bug or other issue prevents the BLE stack from delivering the indication to the app, or your app crashes, or your app finds your value to be invalid, there is no way for your peripheral to be aware of that. Since it got the confirmation, it thinks everything is fine.
I can't understand why they defined indications this way. Since the app doesn't send the confirmation, but a lower layer does, there seems to be no point at all in having an ATT layer acknowledgement instead of just using the Link Layer acknowledgement. Both are just acknowledgements that the packet has made it part of the way to its destination.
If we draw a parallel to an HTTP POST over the internet, we could consider the Link Layer acknowledgement to be the destination's network card receiving the request, and the ATT confirmation to be the TCP stack confirming it received the packet. You still have no way of knowing that your web server software actually received your request and processed it successfully.
The fact that notifications are allowed to be dropped by the receiver is also bad. Normally notifications are used when the peripheral wants to stream a lot of data to the central, and then you don't want dropped packets. They should have designed the flow control so that the Link Layer stops acknowledging incoming packets until the app is ready to process the next notification. This is even already implemented at the LL + HCI + Host layers.
Windows
One interesting thing about at least the Windows BLE stack is that if it receives indications faster than the app processes them, it starts to drop indications as well, even though only notifications should be allowed to be dropped due to "buffer overflows or other reasons". It buffers at most 512 indications in my tests.
That said
Just use notifications, and if you want some kind of confirmation, let the client send a write packet when it has received your data and succeeded in processing it.
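A hedged sketch of that application-level acknowledgement on an Android central; the ack characteristic UUID is invented for the example and the pre-API-33 callback signature is used for brevity.

```kotlin
import android.bluetooth.BluetoothGatt
import android.bluetooth.BluetoothGattCallback
import android.bluetooth.BluetoothGattCharacteristic
import java.util.UUID

// Hypothetical characteristic on the peripheral that the central writes to once
// it has received a notified value and successfully processed it.
val ACK_CHAR_UUID: UUID = UUID.fromString("0000aaa3-0000-1000-8000-00805f9b34fb")

class AckingCallback(private val serviceUuid: UUID) : BluetoothGattCallback() {
    override fun onCharacteristicChanged(
        gatt: BluetoothGatt,
        characteristic: BluetoothGattCharacteristic
    ) {
        val value = characteristic.value ?: return
        if (!process(value)) return   // only acknowledge data the app actually accepted
        val ack = gatt.getService(serviceUuid)?.getCharacteristic(ACK_CHAR_UUID) ?: return
        ack.setValue(byteArrayOf(0x01))
        gatt.writeCharacteristic(ack) // the peripheral treats this write as "data handled"
    }

    // Placeholder for whatever validation/handling the application needs.
    private fun process(value: ByteArray): Boolean = value.isNotEmpty()
}
```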

GPS Tracker and GPRS connection over a changing GSM (BTS) environment

I have a question about the best way to implement the communication between GPS tracker software and a server. The connection is established over GPRS, but I have some questions about it.
A GPS tracker tends to switch between network BTSes during vehicle movement. How is GPRS designed to handle this?
Does the GPRS session have to be re-established during a BTS switch?
If not, what is better: creating one long-running TCP/IP connection to the server (IP:PORT) and sending data over this connection all the time (one GPRS session), or creating a TCP/IP connection each time the tracker has something to send and then closing it (all on one GPRS session)? Does switching between BTSes destroy my GPRS session and the connections I created during that session?
It would be great if somebody could give me some info about this topic and about the best possible design, taking into account the behaviour of changing BTSes, network operators and countries (with roaming turned on). Thanks.
By CONNECTION to the server I mean a connection that is established during one GPRS session. During one GPRS session you can create many connections, so my question is about connections over one GPRS session and, if the GPRS session has to be recreated in some scenario, about connections over many GPRS sessions (which will be more expensive).
Switching between BTSes does not destroy the connection (well, I don't know too much about this, except that I've worked with it and I'm sure that the connection is at least sometimes preserved).
My preferred solution would be as follows:
Create a connection if there is something to send, but allow it to be idle for some minutes before closing.
Provide an application-level keepalive protocol to detect hanging connections.
Disconnecting while you've got nothing to send can have interesting implications if (1) it involves shutting down the GPRS session and (2) the provider charges some minimum fee per GPRS session, so you could pay for 40 kB on an open + send 200 bytes + close sequence. The approach above should be a good compromise.
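A minimal Kotlin sketch of that pattern (host, port and all timing constants are placeholders, and real tracker firmware would use its own TCP stack, so this only illustrates the logic):

```kotlin
import java.net.InetSocketAddress
import java.net.Socket

// "Connect on demand, stay idle for a while, app-level keepalive" as suggested above.
class TrackerLink(private val host: String, private val port: Int) {
    private var socket: Socket? = null
    private var lastActivityMs = System.currentTimeMillis()

    private fun ensureConnected(): Socket {
        val current = socket
        if (current != null && current.isConnected && !current.isClosed) return current
        val fresh = Socket()
        fresh.connect(InetSocketAddress(host, port), 15_000)  // connect timeout in ms
        socket = fresh
        return fresh
    }

    fun send(payload: ByteArray) {
        val s = ensureConnected()
        s.getOutputStream().apply { write(payload); flush() }
        lastActivityMs = System.currentTimeMillis()
    }

    // Call this periodically from a timer.
    fun onTimerTick(now: Long = System.currentTimeMillis()) {
        if (socket == null) return
        val idle = now - lastActivityMs
        when {
            idle > 10 * 60_000 -> { socket?.close(); socket = null }  // drop the link after ~10 min idle
            idle > 2 * 60_000  -> send(byteArrayOf(0x00))             // tiny app-level keepalive
        }
    }
}
```

The keepalive payload here is a single placeholder byte; whatever the real protocol uses, the point is that a missing reply to it is what tells you the connection is hanging.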
