Best approach for transferring large data chunks over BLE

I'm new to BLE and hope you will be able to point me towards the right implementation approach.
I'm working on an application in which the peripheral (battery-operated) device continuously aggregates sensor readings.
On the mobile application side there will be a "sync" button; upon button press, I would like to transfer all the sensor readings accumulated in the peripheral to the mobile application.
The maximum duration between syncs can be several days, so the accumulated data can reach a size of 20 KB.
Now, I'm wondering what the best approach would be for performing the data transfer from the peripheral to the central application.
I thought about creating an array of characteristics where each characteristic contains a fixed number of samples (e.g. representing 1 hour of readings).
Then, upon sync, I will:
Read the characteristic count (how many 1-hour cells).
Then read the characteristics (1-hour cells) one by one.
However, I have no idea whether this is a valid approach.
I'm not sure if this is the most "power efficient" way to do it.
I'm also not sure whether a characteristic read is the way to go, or whether I should use indications instead.
Any help here will be highly appreciated :)
Thanks in advance, Moti.

I would simply use notifications.
Use one characteristic which you write something to in order to trigger the transfer start.
Then have another characteristic over which you simply stream the data, sending 20 bytes at a time. Most SDKs for BLE systems-on-a-chip have some way to control the flow of data so you don't send too fast, normally by triggering a callback when the stack is ready to take the next notification.
In order to know the size of the data being sent, you can for example let the first notification contain the size and the rest of them the data.
This is the most time- and power-efficient way, since many notifications can be sent per connection interval, whereas reads normally require two round trips each. Don't use indications, since they also require basically two round trips per indication; they're quite useless anyway.
You could also increase the speed by some percentage by exchanging a larger MTU (which lowers the L2CAP/ATT header overhead).
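For a concrete feel for the central side, here is a minimal sketch using the Python bleak library, assuming the scheme above: write to a control characteristic to trigger the transfer, then collect the payload from notifications, with the first notification carrying the total size. The UUIDs, the trigger value, and the uint32 length framing are assumptions for illustration, not anything defined by a particular peripheral.

```python
# Hypothetical central-side sync using bleak; UUIDs are placeholders.
import asyncio
import struct
from bleak import BleakClient

CONTROL_UUID = "0000aaaa-0000-1000-8000-00805f9b34fb"  # assumed trigger characteristic
DATA_UUID    = "0000bbbb-0000-1000-8000-00805f9b34fb"  # assumed notify characteristic

async def sync(address: str) -> bytes:
    buffer = bytearray()
    total = None
    done = asyncio.Event()

    def on_notify(_, payload: bytearray):
        nonlocal total
        if total is None:
            # First notification carries the total payload size (assumed uint32, little endian).
            total = struct.unpack("<I", payload[:4])[0]
        else:
            buffer.extend(payload)
        if total is not None and len(buffer) >= total:
            done.set()

    async with BleakClient(address) as client:
        await client.start_notify(DATA_UUID, on_notify)
        # Writing to the control characteristic tells the peripheral to start streaming.
        await client.write_gatt_char(CONTROL_UUID, b"\x01", response=True)
        await done.wait()
        await client.stop_notify(DATA_UUID)
    return bytes(buffer)
```

A caller would then run something like `asyncio.run(sync("AA:BB:CC:DD:EE:FF"))` and parse the returned bytes into sensor samples.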

Related

Can't get temperature reading out of BLE beacon .. at my wits' end now. This needs a super-hero I guess

I have a task where I need to read 2 parameters from a BLE Beacon. The documentation was seriously lacking and after a fair amount of effort, I managed to get some basic information about reading the data from the BLE Beacon.
The parameters to read are:
1) Battery voltage of the sensor
2) Temperature (the beacon has a built-in temperature sensor)
I think I have tried almost every popular Python BLE library out there, but I just can't seem to get the temperature reading out of the beacon. "I think" I am able to read the voltage. The reason I say "I think" is that the value seems to match what was provided in the minimal documentation, and when I put the beacon into the charger I can see the value go up - an indication that it is the voltage reading. I could not read the temperature (the values at the UUIDs mentioned in the document don't seem to change), even though I have tried enabling the sensor in every possible way and method described - by writing 01:00 etc. I spent a fair amount of time trying to reverse engineer the thing. I ran a packet sniffer and managed to capture the data that was being transferred between the beacon and the mobile app (they have a mobile app). But then again I am not able to figure out how the temperature readings are being communicated between the beacon and the app. Let me break the whole thing into smaller blocks.
Hardware: a BLE beacon from which voltage and temperature can be read. The temperature sensor is built into the beacon. The beacon itself is from Texas Instruments, but the temperature/voltage sensing part is done by a third party. They provided us with some minimal information, and it was difficult to make sense of some of the sentences as they have trouble communicating in English.
The sequence to get the data goes like this (a rough pygatt sketch follows the list):
Scan for beacons
When the beacon is found then connect to it
Enable notification
Set notification interval
Get the voltage and temperature reading.
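For illustration, here is a rough sketch of steps 2-5 using pygatt (the gatttool wrapper mentioned below). Scanning is omitted, and the MAC address, UUIDs, and write values are placeholders rather than the beacon's real parameters.

```python
# Hypothetical pygatt session; replace the address, UUIDs, and values with
# the ones from the vendor documentation.
import pygatt

NOTIFY_ENABLE_UUID   = "0000fff3-0000-1000-8000-00805f9b34fb"  # assumed, "D" below
NOTIFY_INTERVAL_UUID = "0000fff4-0000-1000-8000-00805f9b34fb"  # assumed, "E" below
DATA_UUID            = "0000fff1-0000-1000-8000-00805f9b34fb"  # assumed notification source

def on_data(handle, value):
    print("notification from handle 0x%04x: %s" % (handle, value.hex()))

adapter = pygatt.GATTToolBackend()
try:
    adapter.start()
    device = adapter.connect("AA:BB:CC:DD:EE:FF")                  # 2. connect to the beacon
    device.char_write(NOTIFY_ENABLE_UUID, bytearray([0x01]))       # 3. enable notifications
    device.char_write(NOTIFY_INTERVAL_UUID, bytearray([0x05]))     # 4. set the interval (units per vendor doc)
    device.subscribe(DATA_UUID, callback=on_data)                  # 5. receive the readings
    input("listening for notifications, press Enter to stop\n")
finally:
    adapter.stop()
```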
I have been able to do the first four real fast, and "half" of no. 5, i.e. getting the voltage part. When I say real fast, I mean I got that stuff with nearly no documentation available at the time.
As per the info that I have, the data resides in these characteristics/UUIDs. Also please note that the UUIDs are not standard 128-bit, and this caused me issues when using certain libraries. But after some tries I got to read/write to them using handles etc. The handles and other stuff I printed are ones that I read using PYGATT (a Python wrapper for gatttool).
The UUIDs are marked as 1st, 2nd, 3rd and 4th parameters, and the documentation has the following to say about them:
- A: 1 byte (2nd Param)
- B: Maj + Min values, 4 bytes (4th Param)
- C: 4 bytes (3rd Param)
- D: Enable/disable notification ( I have been able to turn this on )
- E: Set notification interval ( I have been able to set this and can notice the change in notification interval )
This is minimal so as to not have a large file. All it does is this - the mobile app connects to the beacon, then the notifications start and the temperature readings are retrieved by the mobile app. Like I mentioned, I don't seem to have a problem reading the voltage; it's only the temperature that I am getting stuck at. I have been at it for a week now. I think I have tried nearly everything I could think of. I even enumerated all the writable characteristics and tried writing numbers like 1 (enables the sensor?). I could have offered a bounty for this straight away if it were possible. I rarely get stuck for so long with a problem. This is driving me a little crazy. I am getting close to my wits' end - I guess it's time for a super hero - anyone out there? :) I can provide every bit of information needed if someone could indicate what is wrong. I even wrote a Cordova app ... and tried a bunch of stuff from my Android phone. I can connect, write to characteristics, read stuff etc., but the temperature reading, nah!!! It just won't budge. All I get is the same set of values (I used JSON.stringify to display A, B and C). I can bother about the byte order later. I guess that is a smaller problem.
The communication between the beacon and the third-party mobile app is fine; it is able to read the temperature info just fine.
I have been looking at the Wireshark data and I am fairly sure that the temperature data is being communicated at this stage. But when I decode the "value", it looks like it's the voltage. It mentions L2CAP, but I am not sure how that is being used here to send the temperature readings (if it is used for that in the first place).
Update: Wrote to every writable characteristic. Wrote values like 1, 0100, 2, 7 to every writable characteristic. At the same time I was reading every readable characteristic (in a loop) and doing a comparison (just true/false) with the previous set of values. This seemed like a quick and easy way to know if something changed. I didn't want to take chances with converting the hex to a float; I can figure out the byte order later.
From the sniffed data (wireshark) I can only see 3 writes happening on the beacon.
I am not fully sure, even after a long discussion, but it seems that the four bytes of the notification are used for the voltage as well as the temperature, since the temperature can most probably be derived from the voltage.
From the values it seems that those four bytes represent the voltage in float (if you ignore the absurd factor of 10^-38 that comes in because only 4 bytes instead of 8 bytes are used).
Since typically the temperature T is derived from a resistance measurement, where the resistance R is proportional to the voltage U (if the current is constant), you can in principle calculate the temperature T from the voltage U.
The problem is that T(R) is relatively linear, but not perfectly (in contrast to U(R) which is assumed to be U=RI). So you may need to plot the values for T(U) to find out the curve that they are using.
To add to the confusion, I got the best results when only using the first five bits of the third byte and the eight bits of the fourth byte. I am not aware why this is the case, and it might point to some trouble still.
The best option is to ask for their function T(U) that they are using. If they can and will provide it for you...
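To make the decoding side concrete, here is a small sketch: the little-endian float interpretation and the linear placeholder calibration are assumptions, and only the vendor can supply the real T(U) curve.

```python
# Hypothetical decoding of the 4-byte notification value.
import struct

def decode_voltage(payload: bytes) -> float:
    # Interpret the four bytes as a little-endian single-precision float.
    return struct.unpack("<f", payload[:4])[0]

def voltage_to_temperature(u: float, a: float = 25.0, b: float = 10.0, u_ref: float = 3.0) -> float:
    # Placeholder linear fit T(U) = a + b * (U - u_ref); the coefficients are
    # made up and would have to be replaced by the vendor's actual curve.
    return a + b * (u - u_ref)
```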

Can you develop an app for the Microsoft band, without a corresponding mobile app always being connected?

I have several Microsoft Bands, to be used as part of a group health initiative. I intend to develop a single app on a tablet which will pull the data from the bands. This will be a manual process; there will not be a constant connection to the tablet, and no connection to Microsoft Health.
Does anyone know if this is possible?
Thanks
Emma
The general answer is no: Historical sensor values are not stored or buffered on the Band itself.
It does, however, depend on which sensors you are interested in. The sensor values are not buffered, so you can only read the current (real-time) value of each sensor.
But sensors such as the pedometer and distance counter increment over time, so these values will make sense even if you only connect once in a while, whereas for, e.g., heart rate and skin temperature you will only get the current (real-time) value.
So it depends on your use case.

Arduino + multiple ultrasonic sensors + interference

I have two buggies moving around a track, both of which use ultrasonic measurement modules to detect obstacles in their paths and are controlled by Arduino microcontrollers. The two ultrasonic sensors operate at the same frequency, and this frequency cannot be changed. The two ultrasonic sensors are interfering with each other. How can I reduce this interference, or prevent it, by adding something to the Arduino code? The hardware cannot be changed. Thanks for your help.
There are in general six ways to reduce interference between two channels (see for example http://en.wikipedia.org/wiki/Multiplexing) - two of which don't apply to sound. That leaves you with four:
space - don't operate in the same space (e.g. cell towers): not applicable for you
frequency - (e.g. channels) you said you can't change that
time - don't operate at the same time
code - send out different amplitude patterns
In a sense, "code" is a bit like "time", but more complicated. With "time", you try to time it so the two transducers don't transmit at the same time. With "code", they send complex pulse sequences and use these to eliminate the interference.
I think your best bet (simple, but effective) is "time". This is going to depend a little bit on the frequency of update that you need, but you could make one buggy the "master", sending a short chirp every 100 ms (say); then have the second buggy wait until it hears the master chirp, and send its own pulse 50 ms later (when it knows the other buggy will be quiet). In this way each will have 10 updates per second, but they will not interfere.
To be more robust, the "slave" buggy could decide (after not hearing a pulse from the "master") to send its own pulse after 100 ms - this way it can operate when there is only one buggy present. And they could in fact each use this algorithm - then there is no "master" and "slave" and they have the same code (usually a good idea). As a final tweak, if you make this "wait for n ms" interval random, you will have implemented a version of "carrier sense multiple access with collision detection" - see http://en.wikipedia.org/wiki/Carrier_sense_multiple_access_with_collision_detection
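As a rough illustration of that listen-then-transmit logic (written in Python for readability; on the Arduino the same structure maps onto millis() timing and the sensor's trigger/echo pins), where hears_chirp() and send_ping() are hypothetical helpers:

```python
# Sketch of the "time" scheme with a randomized fallback; hears_chirp() and
# send_ping() stand in for reading the echo pin and triggering the sensor.
import random
import time

CYCLE = 0.100    # at most 100 ms between this buggy's own pings
OFFSET = 0.050   # fire 50 ms after the other buggy, when it should be quiet

def measurement_loop(hears_chirp, send_ping):
    while True:
        # Listen for the other buggy for up to one cycle, plus random jitter
        # (the CSMA-style tweak mentioned above).
        deadline = time.monotonic() + CYCLE + random.uniform(0.0, 0.010)
        heard = False
        while time.monotonic() < deadline:
            if hears_chirp():
                heard = True
                break
            time.sleep(0.001)
        if heard:
            time.sleep(OFFSET)  # wait for the other buggy's quiet window
        send_ping()             # measure, either interleaved or standalone
```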
Good luck.

How to sync physics in a multiplayer game?

I'm trying to find the best method to do this, considering a turn-by-turn cross-platform game on mobile (3G bandwidth) with projectiles and falling blocks.
I wonder if one device (the current player's turn = server role) can run the physics and send some "keyframe" data (position and orientation of blocks) to the other device, which would just interpolate from the current state to the received keyframes.
With this method I'm quite afraid of the huge amount of data needed to guarantee the same visuals on the other player's device.
Another method would be to send the physics data (force, acceleration, ...) and run the physics on the other device too, but I'm afraid I would never get the same result at all.
My current implementation works like this (a rough sketch of the snapshot exchange follows the notes below):
Server manages physics simulation
On any major collision of any object, the object's absolute position, rotation, AND velocity/acceleration/forces are sent to each client.
Client sets each object to that position, along with its velocity, and applies the necessary forces.
Client calculates the latency and advances the physics system by that lag time to compensate.
This, for me, works quite well. I have the physics system running over dozens of sub-systems (maps).
Some key things about my implementation:
Completely ignore any object that isn't flagged as "necessary". For instance, dirt and dust particles, or grass and water, as they respond to player movement. Basically non-essential stuff.
All of this is sent through UDP by the way. This would be horrendous on TCP.
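As a rough sketch of that snapshot exchange (not the poster's actual code): the packing format, the field layout, and the assumption of roughly synchronized clocks for the latency estimate are all illustrative choices.

```python
# Hypothetical server/client snapshot handling over UDP.
import socket
import struct
import time

# object id, position (x, y, z), rotation (x, y, z), velocity (x, y, z)
SNAPSHOT_FMT = "<Ifffffffff"

def send_snapshot(sock: socket.socket, addr, obj_id: int, pos, rot, vel):
    # Server side: pack the absolute state plus a timestamp and send it.
    payload = struct.pack(SNAPSHOT_FMT, obj_id, *pos, *rot, *vel)
    sock.sendto(struct.pack("<d", time.time()) + payload, addr)

def apply_snapshot(world, packet: bytes, physics_step, dt: float = 1 / 60):
    # Client side: set the authoritative state, then advance the simulation by
    # the estimated latency so the object "catches up" with the server.
    sent_at, = struct.unpack_from("<d", packet, 0)
    obj_id, px, py, pz, rx, ry, rz, vx, vy, vz = struct.unpack_from(SNAPSHOT_FMT, packet, 8)
    obj = world[obj_id]                        # world: dict of id -> game object
    obj.position = (px, py, pz)
    obj.rotation = (rx, ry, rz)
    obj.velocity = (vx, vy, vz)
    latency = max(0.0, time.time() - sent_at)  # assumes roughly synced clocks
    for _ in range(int(latency / dt)):
        physics_step(world, dt)
```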
You will want to send absolute positions and rotations.
You're right that if you send just forces, it won't work. It's possible to make this work, but it's much harder than just sending positions. You need both devices to do their calculations the same way, so before each frame you need to wait for the input from the other device, you need to use the same time step, scripts need to either run in the same order or be commutative, and you can only use CPU instructions guaranteed to give the same result on both machines.
That last one is what makes it particularly problematic, because it means you can't use floating-point numbers (floats/singles, or doubles). You have to use integers, or roll your own number format, so you can't take advantage of many existing tools.
Many games use a client-server model with client-side prediction. If your game is turn based, you might be able to get away with not using client-side prediction. Instead, you could have the client lag behind by some amount of time, so that you can be fairly sure that the server's input will already be there when you go to render. Client-side prediction is only important if the client can make changes that the server cares about (such as moving).
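To illustrate what "roll your own number format" can look like in practice, here is a minimal fixed-point sketch; the Q16.16 layout and helper names are arbitrary choices, not taken from any particular engine.

```python
# Minimal Q16.16 fixed-point arithmetic: everything stays in integers, so two
# machines fed the same inputs produce bit-identical results.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    # Only convert at the edges (loading content); never mid-simulation.
    return int(round(x * ONE))

def fmul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS

def fdiv(a: int, b: int) -> int:
    return (a << FRAC_BITS) // b

# Example: integrate velocity deterministically with a fixed timestep.
dt = fdiv(to_fixed(1.0), to_fixed(60.0))   # 1/60 s in fixed point
velocity = to_fixed(3.5)
position = to_fixed(0.0)
position += fmul(velocity, dt)
```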

Intelligent Voice Recording: Request for Ideas

Say you have a conference room and meetings take place at arbitrary, impromptu times. You would like to keep an audio record of all meetings. In order to make it as easy to use as possible, no action would be required on the part of meeting attendees; they just know that when they have a meeting in a specific room they will have a record of it.
Obviously just recording nonstop would be inefficient as it would be a waste of data storage and a pain to sift through.
I figure there are two basic ways to go about it.
Recording simply starts and stops according to sound level thresholds.
Recording is continuous, but split into X minute blocks. Blocks found to contain no content are discarded.
I like the second way better because I feel there is less risk of losing data because of late starts or triggers failing.
I would like to implement in Python, and on Windows if possible.
Implementation suggestions?
Bonus considerations that probably deserve their own questions:
best audio format and compression for this purpose
any way of determining how many speakers are present, assuming identification is unrealistic
This is one of those projects where the path is going to be defined more by what's on hand for ready reuse.
You'll probably find it easier to record continuously and save the data off in chunks (for example, hour-long pieces).
The format is going to depend on what you have in the way of recording tools and audio processing libraries. You may even find that you use two formats: one, like PCM-encoded WAV, for recording and processing, and compressed MP3 for storage.
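As a sketch of the "record continuously, save in chunks" idea, assuming the pyaudio and wave modules are acceptable; the chunk length, sample rate, and file naming are arbitrary choices.

```python
# Hypothetical continuous recorder that writes hour-long WAV chunks.
import time
import wave
import pyaudio

RATE, CHANNELS, CHUNK_SECONDS = 16000, 1, 3600
FRAMES_PER_BUFFER = 1024

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=CHANNELS, rate=RATE,
                 input=True, frames_per_buffer=FRAMES_PER_BUFFER)
try:
    while True:
        filename = time.strftime("meeting-%Y%m%d-%H%M%S.wav")
        with wave.open(filename, "wb") as wav:
            wav.setnchannels(CHANNELS)
            wav.setsampwidth(pa.get_sample_size(pyaudio.paInt16))
            wav.setframerate(RATE)
            started = time.time()
            while time.time() - started < CHUNK_SECONDS:
                wav.writeframes(stream.read(FRAMES_PER_BUFFER))
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()
```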
Once you have an audio stream, you'll need to access it in PCM form (a list of amplitude values). A simple averaging approach will probably be good enough to detect when there is a conversation (see the sketch after this list). Typical tuning attributes:
* Average energy level to trigger
* Amount of time you need to be at the energy level or below to identify stop and start (I recommend two different values)
* Size of analysis window for averaging
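A minimal sketch of that averaging approach, assuming 16-bit little-endian mono PCM bytes are already in hand; the threshold, window size, and required run length loosely correspond to the tuning attributes above and are purely illustrative.

```python
# Hypothetical windowed-energy check for whether a chunk contains speech.
import math
import struct

FRAME_MS = 30        # size of the analysis window
START_RMS = 500      # average energy level that counts as "someone talking"

def frames(pcm: bytes, rate: int = 16000, width: int = 2):
    step = int(rate * FRAME_MS / 1000) * width
    for i in range(0, len(pcm) - step + 1, step):
        yield pcm[i:i + step]

def rms(frame: bytes) -> float:
    samples = struct.unpack("<%dh" % (len(frame) // 2), frame)
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def contains_speech(pcm: bytes, min_active_frames: int = 10) -> bool:
    # Require a run of consecutive loud windows so short noises don't count.
    active = 0
    for frame in frames(pcm):
        if rms(frame) >= START_RMS:
            active += 1
            if active >= min_active_frames:
                return True
        else:
            active = 0
    return False
```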
As for the number of participants, unless you find a library that does this, I don't see an easy solution. I've used speech recognition engines before and have also done a reasonable amount of audio processing, and I haven't seen any 'easy' ways to do this. If you were to look, seek out universities doing speech analysis research. You may find some prototypes you can modify to give your software some clues.
I think you'll have difficulty doing this entirely in Python. You're talking about doing frequency/amplitude analysis of MP3 files. You would have to open up the file and look for a volume threshold, then cut out the portions that go below that threshold. Figuring out how many speakers are present would require very advanced signal processing.
A cursory Google search turned up nothing for me. You might have better luck looking for an off-the-shelf solution.
As an aside, there may be legal complications to having a recorder running 24/7 without letting people know.
