In what cases does an access point disconnect a station without sending a disassociation or deauthentication frame to the station?

I guess it would be pretty strange... the AP is meant to send a disassociation frame to any station it disconnects. The only possibility that springs to mind is failing to find "clear air" while sending the disassociation.
That is: the AP decides to disassociate a station. The MAC layer receives the request to disassociate it and queues the disassociation frame. Every time the AP attempts to send the frame, something else is transmitting and the AP does not pass the collision-avoidance check. Eventually the request in the MAC layer times out and is dropped.
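A rough, hypothetical simulation of that failure mode in Python; the retry limit and busy-medium probability are illustrative assumptions, not values from the 802.11 spec:

```python
import random

RETRY_LIMIT = 7        # in the spirit of dot11ShortRetryLimit
P_MEDIUM_BUSY = 0.95   # assume a heavily congested channel

def try_send_disassoc() -> bool:
    """True if the frame got on the air within the retry limit."""
    for _ in range(RETRY_LIMIT):
        if random.random() > P_MEDIUM_BUSY:  # clear-channel check passed
            return True
    return False  # MAC gives up: station dropped with no disassoc sent

losses = sum(not try_send_disassoc() for _ in range(10_000))
print(f"disassociation frame never sent in {losses / 100:.1f}% of attempts")
```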

Related

BLE Missing Packets (Protocol / Spec question)

I have been learning the nuts and bolts of BLE lately, because I intend to do some development work using a BLE stack. I have learned a lot from the online documentation and the spec, but there is one aspect that I cannot seem to find.
BLE uses frequency hopping for communication. Once two devices are connected (one master and one slave), it looks like all communication is then initiated by the master, and the slave responds to each packet. My question involves loss of packets in the air. There are two major cases I am concerned with:
Master sends a packet that is received by the slave and the slave sends a packet back to the master. The master doesn't receive the packet or if it does, it is corrupt.
Master sends a packet that is not received by the slave.
Case 1 to me is a "don't care" (I think). Basically the master doesn't get a reply, but at the very least the slave got the packet and can "sync" to it. The master does whatever and tries transmitting the packet again at the next connection event.
Case 2 is the harder case. The slave doesn't receive the packet and therefore cannot "sync" its communication to the current frequency channel.
How exactly do devices synchronize the channel hopping sequence with each other when packets are lost in the air (specifically case 2)? Yes, there is a channel map, so the slave technically knows what frequency to jump to for the next connection event. However, the only way I can see all of this happening is via a "self-timed" mechanism based on the connection parameters. Is this good enough? I mean, given the clock drift, there will be slight differences in the amount of time the master and slave are transmitting and receiving on the same channel... and eventually they will be off by 1 channel, then 2 channels, etc. Is this not really an issue, because for that to happen a lot of time needs to pass given the 500 ppm clock spec? I understand there is a supervision timer that would declare the connection dead after no valid data is transferred for some time. However, I still wonder about the "hopping drift", which brings me to the next point.
How much "self timing" is employed / mandated within the protocol? Do slave devices use a valid start of packet from the master every connection interval to re synchronize the channel hopping? For example if the (connection interval + some window) elapses, hop to the next channel, OR if packet received re sync / restart timeout timer. This would be a hop timer separate from the supervisor timer.
I can't really find this information within the Core 5.2 spec. It's pretty dense at over 3,000 pages... If somebody could point me to the relevant sections in the spec or somewhere else, or even answer the questions, that would be great.
The slave knows the channel map. If one packet is not received from the master, it will listen again after one connection interval on the next channel. If that one is also not received, it adds one more connection interval and moves to the next channel again.
The slave also stores a timestamp (or event counter) for when the last packet from the master was detected, regardless of whether its CRC was correct. This is called the anchor point. It is not the same time point used for the supervision timeout.
The amount of time between the anchor point and the next expected packet is multiplied by the combined master + slave clock accuracy (for example 500 ppm), plus 16 microseconds, to get a receive window. The slave listens for this amount of time before and after the expected packet arrival time.
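A small worked example of that window widening in Python (the 500 ppm figure and the 50 ms connection interval are assumptions for illustration):

```python
# Receive-window widening as described above: the longer it has been since
# the last anchor point, the wider the slave opens its receive window.

PPM = 500       # assumed combined master + slave sleep clock accuracy
EXTRA_US = 16   # fixed extra allowance, in microseconds

def rx_window_half_width_us(time_since_anchor_ms: float) -> float:
    """Microseconds to listen before AND after the expected arrival time."""
    drift_us = time_since_anchor_ms * 1000 * PPM / 1_000_000
    return drift_us + EXTRA_US

# Two missed 50 ms connection intervals since the last anchor point:
print(rx_window_half_width_us(100))  # -> 66.0 us on each side
```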

BLE communication after pairing

I'm new to BLE.
I understood that pairing is achieved using an advertising/scanning scheme, based on the connection interval.
My question is about what happens after a connection has been established: is there any periodic message exchange to keep the connection alive? Or is the only exchanged data based on characteristic reads/writes or notifications/indications?
Thanks,
Andrea
The connection parameters for a BLE connection are a set of parameters that determine when and how the Central and a Peripheral in a link transmit data. It is always the Central that actually sets the connection parameters used, but the Peripheral can send a so-called Connection Parameter Update Request, which the Central can then accept or reject.
Connection supervision timeout: this determines how long after the last data exchange a link is considered lost. A Central will not start trying to reconnect before the timeout has passed, so if you have a device which goes in and out of range often, and you need to notice when that happens, it might make sense to have a short timeout.
For more, read this thread.
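A minimal sketch of the parameters involved and the consistency rule the spec imposes between them (the field names are mine and the values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ConnParams:
    interval_ms: float             # time between connection events
    slave_latency: int             # events the Peripheral may skip
    supervision_timeout_ms: float  # link declared lost after this silence

    def valid(self) -> bool:
        # Core spec rule: timeout must exceed (1 + latency) * interval * 2
        return self.supervision_timeout_ms > (
            (1 + self.slave_latency) * self.interval_ms * 2
        )

print(ConnParams(50, 4, 600).valid())  # True:  600 > (1 + 4) * 50 * 2 = 500
print(ConnParams(50, 4, 400).valid())  # False: link could die between events
```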

Does a BLE device read advertising packets when not scanning? (autoconnect)

I read in some places that advertising packets are sent to everyone within range. However, does the other device have to be scanning to receive them, or will it receive them anyway?
The problem:
Let's say I'm establishing a piconet between 5 or 6 BLE devices. At some point I have some connections between the slaves and one master. Then, if one of the devices gets removed/shut off for a few days, I would like it to reconnect to the network as soon as it is turned on.
I read about the autoconnect feature, but it seems that when you set it to true, the device starts a background scan which is actually slower (in frequency) than a manual scan. This makes me conclude that for autoConnect to work, the device being turned on again needs to advertise again, right? And if autoconnect really runs a slow background scan, it seems you can never receive the adv packets instantly unless you're actively scanning. Does that make sense?
If so, is there any way around it? I mean, to detect the device that is coming back into range instantly?
Nothing is "Instant". You are talking about radio protocols with delays, timeouts, retransmits, jamming, etc. There are always delays. The important thing is what you consider acceptable for your application.
A radio transceiver is either receiving, sleeping or transmitting, on one given channel at a time. Transmitting and receiving implies power consumption.
When a Central is idle (not handling any connection at all), all it has to do is scan. It can do that full time (even if the spec says it should be duty cycled). You can expect to actually receive an advertising packet from a peer Peripheral the first time it is transmitted.
When a Central is maintaining connections to multiple Peripherals, its transceiver time is shared among all the connections it has to maintain. Background scanning is considered low priority and gets some of the remaining transceiver time. So an advertising Peripheral may send its ADV packet while the Central is not listening.
Here comes statistical magic:
The spec says the interval between two advertising events must be augmented with a (pseudo-)random delay. This ensures the Central (scanner) and Peripheral (advertiser) will manage to see each other at some point in time. Without this random delay, their timing allocations could become harmonic but out of phase, and they might never see each other.
Depending on the parameters used on the Central and Peripheral (advInterval, advDelay, scanWindow, scanInterval) and the radio link quality, you can compute the probability of being able to reach a node after a given time. This is left as an exercise to the reader... :)
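Or estimated with a quick Monte Carlo sketch like the one below. All parameter values are assumptions, and it ignores packet airtime and the three advertising channels:

```python
import random

ADV_INTERVAL_MS = 100    # advInterval
ADV_DELAY_MAX_MS = 10    # pseudo-random advDelay added to each event
SCAN_INTERVAL_MS = 1000  # scanInterval
SCAN_WINDOW_MS = 100     # scanWindow (a 10% scanning duty cycle)

def discovered_within(deadline_ms: float) -> bool:
    t = random.uniform(0, ADV_INTERVAL_MS)         # unknown phase offset
    while t < deadline_ms:
        if t % SCAN_INTERVAL_MS < SCAN_WINDOW_MS:  # scanner was listening
            return True
        t += ADV_INTERVAL_MS + random.uniform(0, ADV_DELAY_MAX_MS)
    return False

trials = 10_000
hits = sum(discovered_within(300) for _ in range(trials))
print(f"discovered within 300 ms in {100 * hits / trials:.0f}% of trials")
```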
In the end, the question you should ask yourself looks like: "is it acceptable that my Peripheral reconnects to my Central within 300 ms in 95% of cases?"

BLE indications

As I understand it, BLE indications are a reliable communication method. How do you know if your indication was not communicated? I am writing code for the peripheral/server, and currently, when I send a notification, I get a manual response from the central. I read that if I use indications, the acknowledgements take place in the L2CAP layer automatically and communication is therefore faster, but how does my embedded controller know the Bluetooth module was not successful at getting the packet across the link? We are using the Microchip RN4030 Bluetooth module.
Let's make things clear.
The BLE stack looks roughly like the following. The stack has these layers in this order:
Link Layer
HCI (if controller and host are separated)
L2CAP
ATT
GATT
Application
The Link Layer is a reliable protocol in the sense that all packets are protected by a CRC and every packet is acknowledged by the receiving device. If a packet is not acknowledged, it is resent until an acknowledgement is received. There can also only be one outstanding packet, which means no reordering of packets is possible. Acknowledgements are normally not presented to upper layers.
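As a rough sketch, this acknowledgement scheme is a 1-bit stop-and-wait protocol: each data PDU carries an SN (sequence number) bit and an NESN (next expected sequence number) bit. The code below is a simplification of the rules in the Core spec, Vol 6 Part B:

```python
class LinkLayerPeer:
    """Simplified SN/NESN bookkeeping for one side of a BLE connection."""

    def __init__(self) -> None:
        self.sn = 0       # sequence bit of our next new packet
        self.next_sn = 0  # sequence bit we expect from the peer

    def on_receive(self, peer_sn: int, peer_nesn: int) -> None:
        if peer_nesn != self.sn:
            self.sn ^= 1  # peer expects new data: our last packet was acked
        # else: retransmit our last packet, it was not acknowledged
        if peer_sn == self.next_sn:
            self.next_sn ^= 1  # new data from peer: accept and ack it
        # else: duplicate retransmission, discard payload but still ack
```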
The HCI layer is the communication protocol between the controller and the host.
The L2CAP layer does almost nothing if you use the standard MTU size of 23. It has a length header and a type code ("channel") which indicates what type of packet is being sent (usually ATT).
On the ATT level, there are two types of packets that are sent from the server without being a response to a client request: notifications and indications. Sending one notification or one indication has the same "performance", since the type is just a tag on a packet sent over the lower layers. The differences are listed below:
Indications:
When an indication packet is sent to the client, the client must acknowledge the packet by sending a confirmation packet when it has received the indication packet. Even if the indication packet is invalid, a confirmation shall be sent back.
Upper layers are not involved in sending back the confirmation.
The server may not send a new indication until it has received a confirmation from the previous one.
The only time you won't receive a confirmation after an indication is if the link is dropped (you should then get a disconnected event), or there is some bug in some of the BLE stacks.
After the app has sent an indication, most BLE stacks signal to the app that a confirmation has been received from the client, i.e. that the indication operation has completed.
Notifications:
No ATT layer acknowledges are sent.
"Commands and notifications that are received but cannot be processed, due to buffer overflows or other reasons, shall be discarded. Therefore, those PDUs must be considered to be unreliable." However I have never noticed an implementation actually following this rule, i.e. all notifications are delivered to the application. Or I've never hit the max buffer size.
The GATT layer is mostly a definition of how the attribute protocol should be used and defines a DB structure of characteristics.
Implications
In my opinion, there are several flaws or issues with the BLE standard. There are two things that might make indications useless and unreliable in practice:
There is no way to send back an error response instead of a confirmation.
It is the ATT layer that sends back the confirmation directly when it has received the indication, not the app when it has successfully handled the indication.
This means that if, for example, some bug or other issue prevents the BLE stack from delivering the indication to the app, or your app crashed, or your app found your value to be invalid, there is no way your peripheral can be aware of that. Since it got the confirmation, it thinks everything is fine.
I can't understand why they defined indications this way. Since the app doesn't send the confirmation but a lower layer does, there seems to be no point at all in having an ATT-layer acknowledgement instead of just using the Link Layer acknowledgement. Both are just acknowledgements that the packet has made it halfway to its destination.
If we draw a parallel to an HTTP POST and the internet, we could consider the Link Layer acknowledgement as the network card of the destination receiving the request, and the ATT confirmation as a confirmation that the TCP stack received the packet. You have no way of knowing that your web server software actually received your request and processed it successfully.
The fact that notifications are allowed to be dropped by the receiver is also bad. Normally notifications are used when the peripheral wants to stream a lot of data to the central, and then you don't want dropped packets. They should have designed the flow control so that the Link Layer stops acknowledging incoming packets until the app is ready to process the next notification. This is even already implemented at the LL + HCI + Host layers.
Windows
One interesting thing about at least the Windows BLE stack is that if it receives indications faster than the app processes them, it starts to drop the indications as well, even though only notifications should be allowed to be dropped due to "buffer overflows or other reasons". It buffers at most 512 indications in my tests.
That said
Just use notifications, and if you want some kind of confirmation, let the client send a write packet when it has received your data and succeeded in processing it, as sketched below.
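A minimal sketch of that pattern, reusing the hypothetical helpers from the earlier sketch (the handle values and the dedicated acknowledgement characteristic are illustrative assumptions):

```python
ATT_WRITE_REQUEST = 0x12  # real ATT opcode for a Write Request
DATA_HANDLE = 0x0010      # characteristic the peripheral notifies on

def send_with_app_level_ack(value: bytes) -> None:
    notify(DATA_HANDLE, value)       # notify() from the sketch above
    # Block until the client *application* (not a lower layer) has processed
    # the data and written an acknowledgement back to a dedicated handle.
    wait_for_pdu(ATT_WRITE_REQUEST)  # wait_for_pdu() from the sketch above
```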

Do WiFi devices transmit packets when they are just turned on?

I read a lot about WiFi sensors being used to track smartphones in retail environments. The location triangulation is done on the basis that a smartphone has its WiFi turned ON, be it in a connected or unconnected state.
Case 1 : WiFi turned ON but unconnected
Why should a smartphone which has its WiFi turned ON need to transmit any packets, unless the user scans for nearby WiFi networks?
Case 2 : WiFi turned ON and connected
Why should a smartphone transmit any packets, unless the user is browsing the net?
In both of the above cases, there is a high chance that most of the time the WiFi device does not send any packets, which means none of the WiFi sensors detect it. If that is true, then the whole idea behind WiFi-sensor-based triangulation in retail goes for a toss; clearly, with so many companies working on this, I must be wrong. Please answer with more than a yes or no, as to which packets are generally sent in both of the above scenarios.
If WiFi is turned on, the device will periodically search for new networks. This happens even if you are already connected to one, as it allows the device to connect to a 'better' network, if available.
Scanning/network discovery can be done in two ways. The first is passive, where a device listens for surrounding access points' (AP) beacon frames. These are basically advertisements for their networks. The second method is called active, and it is the most likely explanation of how the technology you mentioned works. Active scanning is when the device sends out a probe frame asking for available APs, generally ones that you have associated with previously, e.g. your home network. These probes can be listened to by nearby 802.11 (WiFi) devices, thereby tracking you.
Active and passive scanning
802.11 frames
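As an illustration of how easy those probe requests are to observe, here is a small sketch using the scapy library on an interface in monitor mode (the interface name is an assumption, and putting the card into monitor mode is platform-specific and not shown):

```python
from scapy.all import sniff, Dot11ProbeReq

def on_packet(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        # addr2 is the transmitter address: the device actively scanning
        ssid = pkt.info.decode(errors="replace") or "<broadcast>"
        print(f"probe from {pkt.addr2} for SSID {ssid!r}")

sniff(iface="wlan0mon", prn=on_packet, store=False)
```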
As mentioned in AndrewLeeming's answer, one of the causes of data transmission is scanning.
It's not strictly necessary, but it will normally be performed to find a network to connect to (or a better network, if already connected). Active scanning can be turned off for power-saving reasons. Passive scanning doesn't involve transmissions, so it's irrelevant to this question.
However, the most important reason for WiFi devices to transmit packets while connected is to let the AP know that the client is still there; otherwise the AP will drop the link after a certain period of inactivity. Additionally, clients might be in power save mode and instruct the AP not to transmit data to them. From time to time, such a client will poll the AP to see if there are any packets pending for it. A sketch of both behaviours follows.
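A hypothetical sketch of that client-side behaviour; send_null_frame and send_ps_poll are stand-ins for driver primitives, and the intervals are made-up illustrations:

```python
import time

KEEPALIVE_S = 30  # stay under the AP's inactivity timeout
PS_POLL_S = 1     # how often a dozing client checks for buffered frames

def station_loop(send_null_frame, send_ps_poll) -> None:
    last_keepalive = 0.0
    while True:
        now = time.monotonic()
        if now - last_keepalive >= KEEPALIVE_S:
            send_null_frame()  # tells the AP "I'm still associated"
            last_keepalive = now
        send_ps_poll()         # fetch frames the AP buffered while we slept
        time.sleep(PS_POLL_S)
```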
