BLE indications - push-notification

As I understand it, BLE indications are a reliable communication method. How do you know if your indication was not communicated? I am writing code for the peripheral/server side, and currently, when I send a notification, I get a manual response from the central. I read that if I use indications, the acknowledgement takes place in the L2CAP layer automatically and communication is therefore faster, but how does my embedded controller know if the Bluetooth module was not successful at getting the packet across the link? We are using the Microchip RN4030 Bluetooth module.

Let's make things clear.
The BLE stack roughly consists of the following layers, in this order:
Link Layer
HCI (if controller and host are separated)
L2CAP
ATT
GATT
Application
The Link Layer is a reliable protocol in the sense that all packets are protected by a CRC and every packet is acknowledged by the receiving device. If a packet is not acknowledged, it is resent until an acknowledgement is received. There can also only be one outstanding packet, which means no reordering of packets is possible. Acknowledgements are normally not presented to the upper layers.
The HCI layer is the communication protocol between the controller and the host.
The L2CAP layer does almost nothing if you use the standard MTU size of 23. It has a length header and a type code ("channel") which indicates what type of packet is being sent (usually ATT).
On the ATT level, there are two types of packets that are sent from the server without being a response to a client request: notifications and indications. Sending one notification or one indication has the same "performance", since the type is just a tag on a packet sent over the lower layers (see the byte-level sketch after the two lists below). The differences are listed below:
Indications:
When an indication packet is sent to the client, the client must acknowledge the packet by sending a confirmation packet when it has received the indication packet. Even if the indication packet is invalid, a confirmation shall be sent back.
Upper layers are not involved in sending back the confirmation.
The server may not send a new indication until it has received a confirmation from the previous one.
The only time you won't receive a confirmation after an indication is if the link is dropped (you should then get a disconnected event), or if there is a bug in one of the BLE stacks.
After the app has sent an indication, most BLE stacks inform the app that a confirmation has been received from the client, i.e. that the indication operation has completed.
Notifications:
No ATT-layer acknowledgements are sent.
"Commands and notifications that are received but cannot be processed, due to buffer overflows or other reasons, shall be discarded. Therefore, those PDUs must be considered to be unreliable." However I have never noticed an implementation actually following this rule, i.e. all notifications are delivered to the application. Or I've never hit the max buffer size.
The GATT layer is mostly a definition of how the attribute protocol should be used and defines a DB structure of characteristics.
Implications
In my opinion, there are several flaws or issues in the BLE standard. Two things in particular can make indications useless and unreliable in practice:
There is no way to send back an error response instead of a confirmation.
The fact that it is the ATT layer that sends back the confirmation directly when it has received the indication, and not the app when it has successfully handled the indication.
This means that if, for example, some bug or other issue prevents the BLE stack from delivering the indication to the app, or your app crashes, or your app finds the value to be invalid, there is no way for your peripheral to become aware of that. Since it got the confirmation, it thinks everything is fine.
I can't understand why they defined indications this way. Since the app doesn't send the confirmation but a lower layer does, there seems to be no point at all in having an ATT-layer acknowledgement instead of just using the Link Layer acknowledgement. Both are just acknowledgements that the packet has made it halfway to its destination.
If we draw a parallel to an HTTP POST over the internet, we could consider the Link Layer acknowledgement as the network card of the destination receiving the request, and the ATT confirmation as a confirmation that the TCP stack received the packet. You have no way of knowing that your web server software actually received your request and processed it successfully.
The fact that notifications are allowed to be dropped by the receiver is also bad. Normally notifications are used when the peripheral wants to stream a lot of data to the central, and then you don't want dropped packets. They should have designed the flow control so that the Link Layer stops acknowledging incoming packets until the app is ready to process the next notification. This is even already implemented at the LL + HCI + Host layers.
Windows
One interesting thing about at least the Windows BLE stack is that if it receives indications faster than the app processes them, it starts to drop indications as well, even though only notifications should be allowed to be dropped due to "buffer overflows or other reasons". In my tests it buffers at most 512 indications.
That said
Just use notifications, and if you want some kind of confirmation, let the client send a write packet when it has received your data and successfully processed it.
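As an illustration of that pattern seen from the central side, here is a rough sketch using the Python bleak library (not specific to the RN4030; the address and the data/ack characteristic UUIDs are hypothetical placeholders): the client subscribes to notifications and, only after successfully processing a value, writes an application-level acknowledgement back to a separate characteristic.

```python
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                        # hypothetical peripheral address
DATA_CHAR_UUID = "0000beef-0000-1000-8000-00805f9b34fb"     # hypothetical data characteristic
ACK_CHAR_UUID  = "0000feed-0000-1000-8000-00805f9b34fb"     # hypothetical ack characteristic

async def main():
    inbox: asyncio.Queue = asyncio.Queue()

    def on_notify(_sender, data: bytearray):
        # Called by bleak when a notification arrives; hand it to the consumer loop.
        inbox.put_nowait(bytes(data))

    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(DATA_CHAR_UUID, on_notify)
        for _ in range(10):                                  # handle ten notifications, then stop
            data = await inbox.get()
            print("received", data.hex())
            # Application-level acknowledgement: write something the peripheral
            # understands (here, echoing the first byte as a sequence number).
            await client.write_gatt_char(ACK_CHAR_UUID, data[:1], response=True)
        await client.stop_notify(DATA_CHAR_UUID)

asyncio.run(main())
```

The key design point is that the acknowledgement is only written after the application has actually handled the value, which is exactly what the ATT confirmation cannot tell you.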

Related

Are BLE devices required to respond to SCAN_REQ requests?

I have a BLE device that doesn't respond to SCAN_REQ and am working it out with the vendor independently per https://github.com/espressif/esp-idf/issues/10660.
When I use the Nordic nRF Connect iPhone app as a client, I can see the device in the scan list and can connect to it. However, when I use a different client, a Python one on Windows, that client doesn't show the device in its scan list and doesn't connect to it even if I specify the exact address.
My question is: are BLE 4 devices required to respond to SCAN_REQ requests in order to be discoverable and connectable, or is the response just optional, to provide additional advertisement data?
EDIT: I believe that Emil's answer below (thanks) refers to the quote from the Bluetooth blog further down.
Yes, it's required to reply with a scan response. That is defined in Bluetooth Core v5.3, Vol 6 Part B (Link Layer), section 4.4.2.3, using the word "shall".
There is one exception though. There is a Filter Accept List in the controller which can contain addresses of centrals allowed to scan and/or connect. There are four combinations the host can set (the advertising filter policy) that control whether this list shall be used for filtering incoming SCAN_REQ and CONNECT_IND packets, respectively. If you don't use this filtering mechanism, then the device must send a scan response to every scan request.
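For reference, those four combinations correspond to the Advertising_Filter_Policy parameter of the HCI LE Set Advertising Parameters command. Here is a minimal Python sketch of how a host could encode that command; the opcode, parameter layout, and policy values are from the Core specification, while the interval and channel-map choices are arbitrary example values, and actually sending the packet (which needs raw HCI access) is not shown.

```python
import struct

# Advertising_Filter_Policy values (HCI LE Set Advertising Parameters):
FILTER_POLICY = {
    "scan_any_connect_any": 0x00,   # no filtering
    "scan_wl_connect_any":  0x01,   # SCAN_REQ only from the Filter Accept List
    "scan_any_connect_wl":  0x02,   # CONNECT_IND only from the Filter Accept List
    "scan_wl_connect_wl":   0x03,   # both filtered by the Filter Accept List
}

def le_set_adv_params(filter_policy: int) -> bytes:
    """Encode the HCI LE Set Advertising Parameters command packet (opcode 0x2006)."""
    params = struct.pack(
        "<HHBBB6sBB",
        0x00A0, 0x00A0,      # adv interval min/max: 0xA0 * 0.625 ms = 100 ms
        0x00,                # ADV_IND (connectable undirected)
        0x00,                # own address type: public
        0x00, bytes(6),      # peer address type/address (unused for ADV_IND)
        0x07,                # advertising channels 37, 38, 39
        filter_policy,
    )
    # HCI command packet (UART transport): indicator 0x01, opcode, parameter length
    return struct.pack("<BHB", 0x01, 0x2006, len(params)) + params

print(le_set_adv_params(FILTER_POLICY["scan_wl_connect_wl"]).hex())
```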
There are two possible approaches to scanning—Passive Scanning or Active Scanning.
Passive Scanning is when Scanners receive advertising packets and process the contents.
In the case of Active Scanning, however, a device may decide it wants to know more about an advertising device and respond to the initial advertising packet by sending a Scan Request GAP protocol data unit (PDU). This basically means ‘Tell me more.’ The device receiving the Scan Request can send back a Scan Response PDU with more information, once again in the form of a collection of AD types.
The above has been extracted from: https://www.bluetooth.com/blog/advertising-works-part-1/ [the emphasis mine].

What does auto-connection using white listing mean in BLE? Is it the same as directed advertising?

I have been experimenting with the BlueZ 5.50 Bluetooth stack. I have some confusion about the auto-connection procedure using the white list.
Suppose,
Device A - Advertiser
Device B - Scanner
Add the advertiser's (Device A) Bluetooth address to the white list in the scanner (Device B).
Device A advertises with the "Connectable Undirected" advertising type and default advertising parameters.
Device B starts scanning with the "Accept only PDUs from devices in the white list" configuration.
If B scans A's address, will B then send a connection request to A on its own (without the host sending a Create Connection command)?
What is the basic difference between a paired device and a white-listed device?
The white list can be used both when just scanning as well as when connecting.
Note that the packet exchange during advertising is this, when the central device is just scanning:
Advertiser sends ADV_IND.
Scanner sends SCAN_REQ.
Advertiser sends SCAN_RSP.
When the central device has a pending initiation (i.e. connection attempt) to a peripheral, the packet exchange is this:
Advertiser sends ADV_IND.
Initiator sends CONNECT_IND.
The connection is now established.
Note that the timing between two packets in the flows above is 150 microseconds (T_IFS), which is quite quick. If the advertiser does not detect a SCAN_REQ, it does not send a SCAN_RSP. If it also does not detect a CONNECT_IND, it does not enter the connection state, but continues to advertise. A white list is needed because the host would not be quick enough to decide whether the packet should be dropped or not. Therefore the white list is implemented directly in the Bluetooth controller hardware.
A central device using the white list will simply drop any ADV_IND having an address that cannot be found in the white list. Therefore no SCAN_REQ or CONNECT_IND is sent in this case.
"Auto-connect" therefore refers to when the initiator is constantly looking for ADV_IND packets where the sender's address is in the white list. If one is found, a CONNECT_IND is sent and the connection gets established, resulting in an "LE Connection Complete" event. Using the white list is the only way to have two or more pending connections, since otherwise you need to specify exactly one target address when initiating a connection. (A workaround sometimes used when the white list cannot be used, for example if it is too small to contain all desired addresses, is to let the central first perform a scan and then initiate a connection to the target address with some short timeout. This introduces latency, as at least two ADV_INDs must be sent.)
A paired/bonded device is a completely different thing. It means that both devices have stored information about the remote device in their databases, such as encryption keys, the client characteristic configuration descriptor state, and the GATT DB cache. Bonded devices are usually listed in a user interface as well.
When the white list is used when establishing connections, you can have addresses in this list of non-bonded devices. You can also have bonded devices which you do not currently want to connect to, which are then not included in the white list.
What I've written above is general BLE without any specific Bluetooth stack in mind. BlueZ might have certain conditions/flows when the white list is used.

Why do XMPP messages sometimes get lost on mobile devices

This question asks what to do about losing XMPP messages on mobile devices when they don't have a stable connection, but I don't really see why the messages get lost in the first place.
I remember having read that the stream between the server and the client stays open when the connection is suddenly lost and will only be destroyed once the connection times out. This means that the server sends arriving messages over the stream, even though the disconnected client can't receive those messages anymore.
I was happy with that explanation for some time, but then started wondering why core XMPP would be lacking such an important feature. Eventually I noticed that ensuring correct transmission in the XMPP protocol would seem redundant, as the underlying TCP should already ensure the proper transmission of the message, but given the various problems that arise from lost messages, it seems that this isn't true.
Why isn't TCP enough to ensure that the message is either correctly transmitted or fails properly (with proper error handling), so the server knows the message has to be sent again later?
The application hands the data that needs to be sent across to its TCP layer. TCP segments the data as needed and sends the segments out on the established connection. The application passes the burden of ensuring the data reaches the other end over to TCP. (This does not mean the application should not have retransmissions of its own: an application-level protocol can define re-sending of messages if the right response didn't come.)
TCP has a retransmission mechanism. Every packet sent to the peer needs to be acknowledged. Until the acknowledgements come in, TCP keeps the packets in its send queue; once the acknowledgement for a sent packet is received, it is removed.
If there is packet loss, the acknowledgements don't arrive and TCP retransmits. Eventually it gives up and notifies the application, which then needs to take action. Packet loss can happen beyond TCP's control, so TCP provides a best-effort reliable service.
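A minimal sketch of what such an application-level retransmission scheme can look like on top of a plain TCP socket in Python (the 4-byte sequence-number framing and the echoed ack are made-up conventions for illustration):

```python
import socket

def send_reliably(sock: socket.socket, seq: int, payload: bytes,
                  ack_timeout: float = 2.0, max_retries: int = 5) -> bool:
    """Send one framed message and wait for an application-level ack.

    TCP only guarantees delivery to the peer's kernel buffers while the
    connection lives; this loop is what tells *the application* that the
    peer actually processed the message."""
    frame = seq.to_bytes(4, "big") + payload
    sock.settimeout(ack_timeout)
    for _ in range(max_retries):
        sock.sendall(frame)
        try:
            ack = sock.recv(4)
        except socket.timeout:
            continue                      # no ack in time: resend
        if ack == b"":
            raise ConnectionError("peer closed the connection")
        if int.from_bytes(ack, "big") == seq:
            return True                   # peer confirmed it processed this message
    return False                          # give up; caller must queue the message for later
```

This is essentially what XMPP stream management adds on top of TCP: the sender keeps the message until the application-level acknowledgement arrives, not merely until the TCP send buffer is drained.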

Access COMPORT 1 through three different applications

I have an SMS application which receives messages through a GSM modem and replies back through the same modem. The modem is on COM1.
Now I need two more applications which can send messages through the same modem. I tried making a web service that accesses COM1 to send data, but when I try to connect through the web service, it throws an error saying 'COM1 is already occupied, access denied.'
Can anybody help me connect through the modem in the above scenario?
Khushi
You have to make sure only 1 connection is made.
The easiest (and most low-tech, but probably most flexible) approach is to have a script that regularly checks a directory for files and sends the messages in each file to the modem. The web service then just writes a file for every SMS it receives. (This can be trivially extended to accept emails, web requests, etc.)
A bit more sophisticated is to start a thread that does the communication and to push the messages onto a FIFO-like data structure provided by your favourite programming platform. A BlockingQueue would be perfect. The thread reads the messages from the queue and sends them to the GSM modem.
If you want confirmation that the SMS was sent (which in my experience does not mean much, and certainly not that the recipient actually received it), you'll need to find a way to return feedback to the caller. This can be as simple as setting a boolean flag in the message, sending another message, or performing a callback. But I would not bother: I have had situations where 30% of messages disappeared even when we had confirmation from the message center.
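As a rough sketch of that pattern in Python (pyserial plus the standard library queue; the port name, baud rate, and plain-text write are assumptions, and a real worker would have to drive the modem's AT command sequence): all producers in the process that owns the modem call enqueue_sms(), and only the single worker thread ever opens COM1. Other processes, such as the web service, would hand their messages to this process, for example via the file-drop approach above or a local socket.

```python
import queue
import threading
import serial   # pyserial

outbox: "queue.Queue[str]" = queue.Queue()

def modem_worker() -> None:
    # The one and only place in the whole system that opens COM1.
    with serial.Serial("COM1", 9600, timeout=5) as modem:
        while True:
            message = outbox.get()        # blocks until some producer queues an SMS
            # Simplified: a real implementation would run the modem's AT command
            # dialogue here instead of a single write.
            modem.write((message + "\r\n").encode("ascii"))
            outbox.task_done()

def enqueue_sms(message: str) -> None:
    """Called by the SMS application, the web service handler, or any other producer."""
    outbox.put(message)

threading.Thread(target=modem_worker, daemon=True).start()
enqueue_sms("example message text")       # producers push; the worker owns the port
```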

How does TCP/IP report errors?

How does TCP/IP report errors when packet delivery fails permanently? All Socket.write() APIs I've seen simply pass bytes to the underlying TCP/IP output buffer and transfer the data asynchronously. How then is TCP/IP supposed to notify the developer if packet delivery fails permanently (i.e. the destination host is no longer reachable)?
Any protocol that requires the sender to wait for confirmation from the remote end will get an error message. But what happens for protocols where a sender doesn't have to read any bytes from the destination? Does TCP/IP just fail silently? Perhaps Socket.close() will return an error? Does the TCP/IP specification say anything about this?
TCP/IP is a reliable byte stream protocol. All your bytes will get to the receiver or you'll get an error indication.
The error indication will come in the form of a closed socket. Regardless of what the communication pattern (who does the sending), if the bytes can't be delivered, the socket will close.
So the question is, how do you see the socket close? If you're never reading, you'd eventually get an error trying to write to the closed socket (with ECONNRESET errno, I think).
If you have a need to sleep or wait for input on another file handle, you might want to do your waiting in a select() call where you include the socket in the list of sources you're waiting on (even if you never expect to receive anything). If select() indicates that the socket is ready for a read, the read may return -1 (with ECONNRESET, I think). An EOF would indicate an orderly close (the other side did a shutdown() or close()).
How to distinguish this error close from a clean close (other program exiting, for example)? The errno values may be enough to distinguish error from orderly close.
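The same idea as a minimal Python sketch: include the socket in select() even if you never expect data, then treat an empty read as an orderly close and a reset exception as an error close.

```python
import select
import socket

def check_peer(sock: socket.socket, other_fds) -> str:
    """Wait on several file descriptors and report what happened to the TCP peer."""
    readable, _, _ = select.select([sock, *other_fds], [], [], 5.0)
    if sock not in readable:
        return "no news from peer"                  # timeout or activity on another fd
    try:
        data = sock.recv(4096)
    except ConnectionResetError:                    # ECONNRESET: error close
        return "connection reset by peer"
    if data == b"":
        return "peer closed the connection cleanly"  # EOF: orderly shutdown/close
    return f"unexpected data: {data!r}"
```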
If you want an unambiguous indication of a problem, you'll probably need to build some sort of application level protocol above the socket layer. For example, a short "ack" message sent by the receiver back to the sender. Then the violation of that higher level application protocol (sender didn't see an ack) would be a confirmation that it was an error close vs a clean close.
The sockets API has no way of informing the writer exactly how many bytes have been received as acknowledged by the peer. There are no guarantees made by the presence of a successful shutdown or close either.
The TCP/IP specification says nothing about the application interface (which is nearly always the sockets API).
SCTP is an alternative to TCP which attempts to address these shortcomings, among others.
In C, send() returns the number of bytes that were actually sent; if this does not match the number of bytes you meant to send, then you have a problem. Also, when you write to a failed socket, your process receives SIGPIPE. Before you start socket handling, you need to have a signal handler in place that will alert you when you get SIGPIPE (or ignore the signal so it does not terminate your process).
If you are reading from a socket, you really should wrap the call with an alarm so you can time out, like "alarm(timeout_val); recv(); alarm(0);". Check the return code of recv(): 0 indicates that the connection has been closed, and a negative return value indicates a read failure, in which case you need to check errno.
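A Python parallel of the same pattern uses a socket timeout instead of alarm(); a minimal sketch:

```python
import socket

def recv_with_timeout(sock: socket.socket, timeout: float = 10.0) -> bytes:
    """Read once without blocking forever; mirrors the alarm(); recv(); alarm(0) idiom."""
    sock.settimeout(timeout)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        raise TimeoutError("no data within the timeout") from None
    except OSError as exc:                 # read failure: inspect errno as in the C case
        raise ConnectionError(f"read failed, errno {exc.errno}") from exc
    if data == b"":                        # recv() returning 0 in C terms
        raise ConnectionError("connection closed by peer")
    return data
```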
TCP is built upon the IP protocol, which is the centerpiece of the Internet, providing much of the interoperability that drives routing, which is what determines how to get packets from their source to their destination. The IP protocol specifies that error messages should be sent back to the sender via the Internet Control Message Protocol (ICMP) when a packet fails to reach its destination. Some of the reasons include the Time To Live (TTL) field being decremented to zero, often meaning that the packet got stuck in a routing loop, or the packet being dropped because switch contention caused buffer overruns. As others have said, it is the responsibility of the socket API being used to relay these IP-layer errors up to the application interacting with the network at the TCP layer.
TCP/IP packets are either raw, UDP, or TCP. TCP requires each byte to be acknowledged, and it will retransmit bytes that are not acknowledged in time. Raw and UDP are connectionless (a.k.a. best effort), so any lost packets (barring some ICMP cases, but many of those get filtered for security) are silently dropped. Upper-layer protocols can add reliability, as is done with some raw OSPF packets.
