I'm trying to sample TI CC2650STK sensor data at ~120 Hz with my Raspberry Pi 3B, and when comparing the signal trace to a wired MPU6050 I see a ~50 ms phase shift (delay) in the resulting signal (in the image below, orange is the data received over BLE and blue is the data received over I2C from the other sensor, an MPU6050):
The firmware on the sensor side doesn't seem to have any big buffers:
(50 ms / ~8 ms per sample ≈ 6 samples, and each sample is 18 bytes long, so a buffer of about 6 × 18 = 108 bytes would be required, I guess.)
On the RPi side I use BlueZ with the bluepy library, and again I see no buffers that could cause such a delay. For test purposes the sensor is lying right next to my Pi, so surely the over-the-air transmission cannot be taking 40-50 ms? Moreover, timing my code that handles the incoming notifications shows that the whole handling (my high-level code + the bluepy library + the BlueZ stack) takes less than 1-2 ms.
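For reference, the handling path looks roughly like this (a minimal bluepy sketch rather than my exact code; the MAC address and characteristic handle are placeholders):

```python
import time
from bluepy import btle  # bluepy's BLE interface

class TimedDelegate(btle.DefaultDelegate):
    """Measures how long each notification callback takes."""
    def handleNotification(self, cHandle, data):
        t0 = time.monotonic()
        # ... unpack the 18-byte sensor sample here ...
        dt_ms = (time.monotonic() - t0) * 1000.0
        print("handle=%d len=%d handled in %.2f ms" % (cHandle, len(data), dt_ms))

# Placeholder address; the CC2650STK's actual MAC goes here.
p = btle.Peripheral("AA:BB:CC:DD:EE:FF")
p.setDelegate(TimedDelegate())

# Placeholder CCCD handle: writing 0x0001 enables notifications.
p.writeCharacteristic(0x003A, b"\x01\x00", withResponse=True)

while True:
    p.waitForNotifications(1.0)  # blocks until a notification or a 1 s timeout
```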
Is it normal to see such a huge propagation delay, or would you say I'm missing something in my code?
Looks legit to me.
BLE is time-slotted. A peripheral cannot transmit whenever it wants; it has to wait for the next connection event to send its payload. If the next connection event comes right after the sensor data update, the message is sent with little latency. If the sensor data is generated right after a connection event, the peripheral's stack has to wait a full connection interval for the next connection event.
The connection interval is an amount of time, a multiple of 1.25 ms between 7.5 ms and 4 s, set by the Master of the connection (your Pi's HCI) upon connection. It can be updated by the Master arbitrarily. The Slave can kindly ask the Master to modify the parameters, but the Master can do whatever it wants (most Master implementations try to respect the Slave's constraints, though).
If you measure an average delay of 50 ms, you are probably using a connection interval of 100 ms (probably a little less because of constant delays in the chain): a sample generated at a random point within the interval waits half an interval on average.
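A quick back-of-the-envelope check of that estimate (plain Python; it assumes sensor samples become ready at uniformly random times within the interval):

```python
# Assuming sensor samples are generated uniformly at random within the
# connection interval, the queuing delay before the next connection event
# ranges from ~0 up to one full interval.
conn_interval_ms = 100.0           # hypothetical value negotiated by the Pi

min_delay = 0.0                    # sample ready just before a connection event
max_delay = conn_interval_ms       # sample ready just after a connection event
mean_delay = conn_interval_ms / 2  # average over uniform arrivals

print(mean_delay)  # 50.0 ms -- matches the observed average offset
```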
BlueZ provides the hcitool lecup command-line tool, which can change the connection parameters of an existing connection.
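For example, something along these lines should tighten the interval (the connection handle is hypothetical; --min/--max are in units of 1.25 ms and --timeout in units of 10 ms, and the exact option syntax can vary between BlueZ versions, so check hcitool lecup --help):

```python
import subprocess

# Hypothetical connection handle; get the real one from "hcitool con".
# --min/--max are in 1.25 ms units (6 -> 7.5 ms), --timeout in 10 ms units.
subprocess.run([
    "hcitool", "lecup",
    "--handle", "64",
    "--min", "6",        # request a 7.5 ms connection interval
    "--max", "12",       # ... up to 15 ms
    "--latency", "0",    # no slave latency
    "--timeout", "500",  # 5 s supervision timeout
], check=True)
```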
Related
I'm writing an application to read blocks of data from an external device over an RS485 half-duplex link running at 921600 baud. The external device sends a block of data and then the MATLAB app needs to send a short acknowledge response before the device will send the next block. I find that MATLAB takes a varying time, often in excess of 35 ms, after reading in the data block before it transmits the acknowledgement back. Is there some way to speed up this apparent serial-port transmit delay, as it is slowing the interaction down a lot? I am using MATLAB version R2018b. Thanks.
I have been learning the nuts and bolts of BLE lately, because I intend to do some development work using a BLE stack. I have learned a lot from the online documentation and the spec, but there is one aspect that I cannot seem to find.
BLE uses frequency hopping for communication. Once two devices are connected (one master and one slave), it looks like all communication is then initiated via the master and the slave responds to each packet. My question involves loss of packets in the air. There are two major cases I am concerned with:
Case 1: The master sends a packet that is received by the slave, and the slave sends a packet back to the master. The master doesn't receive that packet, or receives it corrupted.
Case 2: The master sends a packet that is not received by the slave.
Case 1, to me, is a "don't care" (I think). Basically the master doesn't get a reply, but at the very least the slave got the packet and can "sync" to it. The master does whatever it needs to and retries the packet at the next connection event.
Case 2 is the harder case. The slave doesn't receive the packet and therefore cannot "sync" its communication to the current frequency channel.
How exactly do devices stay synchronized in the channel-hopping sequence when packets are lost in the air (specifically case 2)? Yes, there is a channel map, so the slave technically knows which frequency to jump to for the next connection event. However, the only way I can see this working is via a "self-timed" mechanism based on the connection parameters. Is this good enough? I mean, given the clock drift, there will be slight differences in how long the master and slave spend transmitting and receiving on the same channel, and eventually they will be off by one channel, then two channels, and so on. Is this not really an issue because, given the 500 ppm clock spec, a lot of time has to pass before that happens? I understand there is a supervision timer that declares the connection dead after no valid data is transferred for some time. However, I still wonder about this "hopping drift", which brings me to the next point.
How much "self timing" is employed / mandated within the protocol? Do slave devices use a valid start of packet from the master every connection interval to re synchronize the channel hopping? For example if the (connection interval + some window) elapses, hop to the next channel, OR if packet received re sync / restart timeout timer. This would be a hop timer separate from the supervisor timer.
I can't really find this information in the Core 5.2 spec. It's pretty dense at over 3000 pages... If somebody could point me to the relevant sections in the spec or somewhere else, or even answer the questions, that would be great.
The slave knows the channel map. If one packet is not received from the master, it will listen again after one connection interval on the next channel. If that one is also not received, it adds one more connection interval and moves to the next channel.
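That "next channel" is fully deterministic on both sides, so a missed packet does not desynchronize the hop sequence. Roughly, Channel Selection Algorithm #1 (the pre-5.0 algorithm) works like this sketch (hop increment and channel map are fixed at connection setup):

```python
def next_data_channel(last_unmapped_channel, hop_increment, used_channels):
    """Rough sketch of BLE Channel Selection Algorithm #1.

    used_channels: sorted list of data channels (0-36) marked as used
    in the channel map shared by master and slave.
    """
    unmapped = (last_unmapped_channel + hop_increment) % 37
    if unmapped in used_channels:
        channel = unmapped
    else:
        # Remap onto the set of used channels.
        channel = used_channels[unmapped % len(used_channels)]
    return unmapped, channel

# Example: hop increment 7, channels 9-12 excluded from the map.
used = [c for c in range(37) if c not in (9, 10, 11, 12)]
unmapped, ch = 0, 0
for event in range(5):
    unmapped, ch = next_data_channel(unmapped, 7, used)
    print("event", event, "-> channel", ch)
```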
The slave also stores a timestamp (or event counter) for when the last packet from the master was detected, regardless of whether its CRC was correct. This is called the anchor point. It is not the same time point used for the supervision timeout.
The amount of time between the anchor point and the next expected packet is multiplied by the combined master + slave clock accuracy (for example 500 ppm), plus 16 microseconds, to get the receive window widening. The slave starts listening that amount of time before the expected packet arrival time and keeps listening for the same amount after it.
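As a rough worked example of that window widening (assuming the 500 ppm combined accuracy mentioned above and a hypothetical 100 ms connection interval):

```python
# Window widening grows with the time elapsed since the last anchor point.
combined_ppm = 500          # master + slave sleep-clock accuracy
interval_s = 0.100          # hypothetical 100 ms connection interval
missed_events = 3           # packets lost in a row since the last anchor

time_since_anchor_s = (missed_events + 1) * interval_s
widening_s = combined_ppm * 1e-6 * time_since_anchor_s + 16e-6

# The slave opens its receive window this much before and after the
# expected arrival time of the master's packet.
print("window widening: %.1f us" % (widening_s * 1e6))   # ~216 us here
```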
I read in some places that advertising packets are sent to everyone within range. However, does the other device have to be scanning to receive them, or will it receive them anyway?
The problem:
Let's say I'm establishing a piconet between 5 or 6 BLE devices. At some point I have some connections between the slaves and one master. If one of the devices then gets removed/shut off for a few days, I would like it to reconnect to the network as soon as it is turned on.
I read about the autoconnect feature, but it seems that when you set it to true, the device starts a background scan which is actually slower (in frequency) than a manual scan. This makes me conclude that for autoConnect to work, the device that is being turned on again needs to advertise again, right? And if autoconnect really runs a slow scan in the background, then it seems to me you can never receive the advertising packets instantly unless you are actively scanning somehow. Does that make sense?
If so, is there any way around it? I mean, a way to detect the device coming back into range instantly?
Nothing is "Instant". You are talking about radio protocols with delays, timeouts, retransmits, jamming, etc. There are always delays. The important thing is what you consider acceptable for your application.
A radio transceiver is either receiving, sleeping or transmitting, on one given channel at a time. Transmitting and receiving implies power consumption.
When a Central is idle (not handling any connection at all), all it has to do is scan. It can do that full time (even if the spec says scanning should be duty-cycled). You can then expect to receive an advertising packet from the peer Peripheral the first time it is transmitted.
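If your Central happens to be a Linux box, that full-time scan can be sketched with bluepy roughly like this (the target address is a placeholder; other stacks have equivalent APIs):

```python
from bluepy.btle import Scanner, DefaultDelegate

TARGET = "aa:bb:cc:dd:ee:ff"   # placeholder MAC of the returning peripheral

class ReconnectDelegate(DefaultDelegate):
    def handleDiscovery(self, dev, isNewDev, isNewData):
        if dev.addr == TARGET:
            print("peer is back, RSSI %d dBm -> connect now" % dev.rssi)

scanner = Scanner().withDelegate(ReconnectDelegate())
scanner.start()
try:
    while True:
        scanner.process(1.0)   # keep the radio listening for advertisements
finally:
    scanner.stop()
```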
When a Central is maintaining connections to multiple peripherals, its transceiver time is shared among all the connections it has to maintain. Background scanning is considered low priority and only gets some of the remaining transceiver time, so an advertising Peripheral may send its ADV packet while the Central is not listening.
Here comes statistical magic:
The spec says the interval between two advertising events must be augmented with a (pseudo-)random delay (0-10 ms). This ensures the Central (scanner) and the Peripheral (advertiser) will manage to see each other at some point in time. Without this random delay, their timing allocations could become harmonic but out of phase, and they might never see each other.
Depending on the parameters used on Central and Peripheral (advInterval, advDelay, scanWindow, scanInterval) and radio link quality, you can compute the probability to be able to reach a node after a given time. This is left as an exercise to the reader... :)
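If you would rather not do the exercise analytically, a crude Monte Carlo sketch gives the idea (heavily simplified: a single advertising channel, zero-length packets, and a perfect radio link are assumed):

```python
import random

def discovery_time(adv_interval, scan_window, scan_interval, max_time=10.0):
    """Seconds until one advertising event falls inside a scan window.

    Simplified model: one advertising channel, zero-length packets,
    no packet loss; advDelay is the 0-10 ms random delay from the spec.
    """
    scan_phase = random.uniform(0.0, scan_interval)   # scanner's arbitrary phase
    t = random.uniform(0.0, adv_interval)             # first advertising event
    while t < max_time:
        if (t + scan_phase) % scan_interval < scan_window:
            return t
        t += adv_interval + random.uniform(0.0, 0.010)
    return max_time

trials = sorted(discovery_time(adv_interval=1.0, scan_window=0.030,
                               scan_interval=0.300) for _ in range(10000))
print("median %.2f s, 95th percentile %.2f s"
      % (trials[len(trials) // 2], trials[int(len(trials) * 0.95)]))
```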
In the end, the question you should ask yourself looks like: "is it acceptable for my Peripheral to be reconnected to my Central within 300 ms in 95% of cases?"
I am using an Arduino Pro Mini 328P (3.3 V, 8 MHz) with an XBee Series 1. I have set the CPU frequency to 1 MHz and the baud rate to 9600, and I have also set the baud rate to 9600 on the XBee. I have verified that at this baud rate the XBee sends data properly in a normal scenario.
Now what I have done in my project:
I register my XBee with the gateway and then it goes to sleep (I use pin hibernate mode). It is later woken up by a digital pin of the Pro Mini; I then wait 19 ms, after which the XBee tries to send its data. After sending the data it goes back to sleep.
The problem is that it behaves randomly when sending data to the gateway (which has the same XBee Series 1). Sometimes it sends the data perfectly; sometimes sending fails. I have also set RR so the XBee retries up to 6 times if the first transmission fails.
I have no idea how to solve this problem because of the randomness in sending the data.
I have placed the two XBees close together (I have two nodes with the same hardware and the same code). There is an interval of around 4 minutes between them (the time difference between the two RTCs on the different nodes). So when one XBee sends its data perfectly, about 4 minutes later the other one fails to send its data. What can I conclude from this?
As a side note, the XBee tries to send data every hour. To time that hour I use an RTC, which seems to work fine (I am sure of this because I have taken logs, and the RTC never fails to generate its interrupt).
So I am wondering what the possible reason could be, and how I can fix this problem (ideally without restarting anything; if that is possible, nothing would be better).
And restarting my controller is not an option.
How to debug this?
A few things. If possible, increase your baud rate so you spend less time sending data to/from the XBee. If you have a limited power budget, faster baud rates save time and energy. I don't know how the UARTs work on the Arduino, so I can't say whether 115,200 bps is possible with a 1 MHz CPU clock.
Second, make sure you wait for the XBee to assert CTS back to the Arduino after you wake it up. Never send to the XBee unless it's "clear to send".
Third, if you use API mode, you can watch for a "Transmit Status" frame from the local XBee back to the Arduino, which lets you know when the module has successfully sent the frame and it is safe to put it back to sleep.
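For illustration only, here is roughly what checking that Transmit Status frame amounts to, sketched in Python rather than Arduino code (the 0x89 frame type and status codes are the ones I'd expect from Series 1 / 802.15.4 firmware, so verify them against your firmware's API documentation):

```python
# Sketch of parsing an XBee Series 1 (802.15.4) API "TX Status" frame.
# Frame layout: 0x7E, length MSB, length LSB, frame data..., checksum.
TX_STATUS = 0x89
STATUS_TEXT = {0: "success", 1: "no ACK", 2: "CCA failure", 3: "purged"}

def parse_tx_status(frame: bytes):
    if frame[0] != 0x7E:
        raise ValueError("missing start delimiter")
    length = (frame[1] << 8) | frame[2]
    data = frame[3:3 + length]
    checksum = frame[3 + length]
    if (sum(data) + checksum) & 0xFF != 0xFF:
        raise ValueError("bad checksum")
    if data[0] != TX_STATUS:
        return None                      # some other frame type
    frame_id, status = data[1], data[2]
    return frame_id, STATUS_TEXT.get(status, "unknown (%d)" % status)

# Example frame: TX Status, frame ID 1, status 0 (success).
print(parse_tx_status(bytes([0x7E, 0x00, 0x03, 0x89, 0x01, 0x00, 0x75])))
```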
I have an Arduino Uno connected to a PC via USB and I am communicating via serial to a temperature sensor from PHP.
At present, the temperature sensor records a value and sends it straight down the serial connection to the PC. However, this may not be read for a long period of time. Therefore, I think this method may be inefficient.
I was thinking I could have the Arduino listen for a serial message from the PC requesting the temperature before actually checking it and sending the reading back to the PC over serial, thereby becoming more efficient since it is not checking the temperature every 0.1 seconds.
My Questions are as follows:
Is this actually worth doing from a code-efficiency point of view?
Is there a better way to improve this than my suggested method?
Would these changes improve battery performance (e.g. if I were using a different communication method instead of wired serial and therefore needed to run on batteries)?
A1: Since you already have the routines to measure the temperature and then send it to the PC there should not be much coding left to do to wait for a trigger from the PC before performing the routine.
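As a sketch of that trigger pattern on the PC side (shown with Python and pyserial purely for illustration, since the same idea applies from PHP; the port name and command byte are placeholders):

```python
import serial  # pyserial

# Placeholder port name and baud rate; match your Arduino's settings.
with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as port:
    port.write(b"T\n")            # ask the Arduino for one temperature reading
    reply = port.readline()       # Arduino measures only when asked, then replies
    print("temperature:", reply.decode().strip())
```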
A2: There always is a 'better' way :)
A3: If your µC does not have many other tasks to perform that keep it busy you can definitely save a lot of juice by putting the µC to sleep between those short periods of activity - which you should do anyway when running off batteries.