Kontakt beacon has garbage response time at 6 metres

I'm reading so much propaganda about BLE beacons (Kontakt.io, in my case) being accurate to the centimetre, readable at 70 metres, and so on, but my experience has been nothing like that.
I have 3 beacons. If they're in the next room over (door open, around 6 or 7 metres), my app detects maybe one or two of them after around 20 seconds. Even then I often need to restart the app over and over before anything shows up.
Move them into the same room and they're pretty much fine. Everything is at its default: scanMode is 'LOW_LATENCY', scanPeriod is 'RANGING'. I'm not sure what else I can do.
Do these results sound way off, or are beacons just not that good?

A few tips about Bluetooth beacons in general, not specifically Kontakt beacons:
When you need to restart your app to detect beacons, the problem is clearly something on the phone, not the beacons themselves. That something may be the app, the SDK, the Bluetooth stack on the phone, or the phone's Bluetooth hardware. Try an off-the-shelf detector app like BeaconLocate for iOS or Android, and also test with a different phone.
The range of a beacon depends on its transmitter output power, which is typically specified as the signal level measured at 1 meter. This output power is adjustable on many hardware beacons and is often set below the maximum to save battery on battery-powered models. For best detection results, set the output power to the maximum the configuration allows; a measured power at one meter of at least -59 dBm is a good target (less negative numbers mean more power). Because some phone models have poor sensitivity and measure RSSI inaccurately, you may want to test with several different models. In general, iOS devices are more predictable receivers.
The range of a beacon between rooms varies greatly depending on the materials in the walls, the furnishings, and the local geometry. A beacon with an output power of -59 dBm at one meter can be reliably detected by a phone with a sensitive receiver from 40 meters away, but only under clear line-of-sight conditions (typically outdoors). I have intermittently seen such beacons detected outdoors at over 100 meters; intermittently means that 99% of packets are lost and only a small percentage are successfully received.
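To see why those line-of-sight numbers are plausible, here is a rough free-space calculation (a sketch only: the path-loss exponent of 2 and the -95 dBm receiver floor are typical assumed values, not measurements, and indoor walls make the real falloff far worse):

    #include <cmath>
    #include <cstdio>

    // Log-distance path-loss sketch: rssi(d) ~= rssiAt1m - 10 * n * log10(d).
    double expectedRssi(double rssiAt1m, double meters, double n = 2.0)
    {
        return rssiAt1m - 10.0 * n * std::log10(meters);
    }

    int main()
    {
        const double rssiAt1m = -59.0;      // beacon calibrated to -59 dBm at 1 m
        const double receiverFloor = -95.0; // assumed point where packets stop decoding
        for (double d : {1.0, 10.0, 40.0, 100.0})
            std::printf("%6.1f m -> %6.1f dBm%s\n", d, expectedRssi(rssiAt1m, d),
                        expectedRssi(rssiAt1m, d) < receiverFloor ? "  (below floor)" : "");
        // About -91 dBm at 40 m, still above the floor with clear line of sight,
        // and about -99 dBm at 100 m, which is why 100 m detections are only intermittent.
        return 0;
    }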
Always be skeptical of marketing claims from companies trying to sell you something. The above points should tell you what is achievable from an independent engineering perspective.

Related

Some questions about using Bluetooth Low Energy as an indoor proximity sensor for a building or school campus

My professor recently approved our research paper, which will also be used in our final-year thesis. Basically, our main purpose is to create a system for location tracking and attendance automation for students and staff. We would like to use the power of Bluetooth Low Energy modules for this project.
I have actually done quite a bit of research on this, but I am having trouble finding the right keywords to filter the right answers to my questions. So instead, I'll just put all my questions here.
I have provided an image to illustrate the concept I am talking about.
Basically, the broadcaster/advertisement-mode modules are for students and staff, while the observer-mode modules are installed in every room or space in our building/campus.
Broadcast and Observer mode
I would like to clarify first that the location tracking is only basic: it only detects which room each student or staff member is in.
Here are my questions:
What is the maximum number of advertisement/broadcaster modules that the observer module can detect at the same time?
Our target is about 50 students per room and 300 students in the cafeteria; will the observer module have a large amount of latency when scanning advertisement packets?
Do we have to use a different module for observer mode, or will the same module we use for broadcaster mode be fine?
Since this is supposed to be embedded in school IDs, we would like to use a coin cell battery; how long will it last?
According to my research, BLE range is about 100 meters, but we will be using a coin cell battery; is it really possible to achieve 100 m for broadcasting and observing? If it is, can we perhaps decrease the range in software?
My apologies for asking so many questions; this is actually our first time doing applied hardware work because of the pandemic. Most of our laboratory work has basically been Tinkercad-based, and face-to-face classes are currently allowed only for medical students.
A few answers:
BLE scanners can detect hundreds of distinct broadcasters at the same time. There is no hard limit, but the more broadcasters there are, the longer it will take the scanner to detect each one.
Most BLE modules support both peripheral mode (broadcaster) and central mode (scanner) simultaneously.
With 50 broadcasters in a single room, a scanner will easily detect 90% of packets, so if an advertiser transmits at 1 Hz it will usually be detected within one second, though occasionally 2-3 seconds of packets are missed.
The indoor range is closer to 40 meters with no walls obstructing the signal. Outdoors with clear line of sight the range is higher. Walls often block signals almost entirely, depending on materials.
A CR2032 coin cell can power a BLE broadcast at 1 Hz and max power for about 30 days.
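To get a feel for where a figure like 30 days comes from, here is a back-of-the-envelope estimate (the ~220 mAh capacity and the ~0.3 mA average current at 1 Hz advertising and maximum transmit power are assumed typical values, not numbers from a datasheet):

    #include <cstdio>

    int main()
    {
        // Assumed values for a typical module -- check your own datasheet.
        const double capacity_mAh  = 220.0; // usable CR2032 capacity
        const double avgCurrent_mA = 0.3;   // average draw at 1 Hz advertising, max TX power

        const double hours = capacity_mAh / avgCurrent_mA; // ~733 h
        std::printf("Estimated battery life: %.0f hours (~%.0f days)\n", hours, hours / 24.0);
        // Lowering the transmit power or the advertising rate reduces the average
        // current, which is how the same cell lasts much longer at shorter range.
        return 0;
    }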
Creating an embedded solution is cool and valid, but remember that broadcasters already exist: each and every student carries a smartphone with BLE built in, and your observer can be any BLE-capable device, from a smartphone, through a PC with a BLE dongle, all the way to an Arduino and the like.
Your broadcasters (or BLE peripherals, as they should be called) will need an Android/iOS app, and you will have to deal with running in the background without the operating system stopping your app.
Your observer (or central, in BLE terminology) can be any stationary PC, if one exists in the classroom, which can make development and deployment a lot easier.

UnitDiskRadioMedium no power consumption settings? (omnetpp)

Looking at:
OMNET++: How to obtain wireless signal power?
and
https://github.com/inet-framework/inet/blob/master/examples/wireless/scaling/omnetpp.ini
there seem to be no power-consumption-related settings for packets that are sent by a UnitDiskRadio.
Is there a way of setting packet power consumption in a unit disk radio medium, or, conversely, communication range in ApskScalarRadioMedium?
UnitDiskRadio is a simplified model of a radio for when you are not interested in the details of transmission, propagation, attenuation and so on. You just want a clear-cut transmission distance: above that distance the transmission always fails, below it the transmission always succeeds. This is simple, fast, and suitable if you want to simulate high-level behavior such as the application layer or routing. In that case you really don't care how much your radio draws from a power grid (or battery).
On the other hand, if you are interested in low-level details, the whole radio transmission process should be modeled. In this case you model the transmission power, and there is no clear-cut transmission range: whether a transmission succeeds is a probabilistic outcome depending on power, antenna configuration, encoding, modulation, noise and a lot of other factors, so you cannot set it as a simple "range".
TLDR: No, you cannot set both of them on the same radio.
PS: make sure that you do not mix up the various power parameters. The first question you linked is about getting the power of a received packet (i.e. how strong that signal was when it was received). The second link shows how to configure the transmission power (what goes out on the antenna), and in your question you refer to power consumption, which is a third thing: how much you draw from a battery to make the transmission. They are NOT the same thing.
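To illustrate which knob lives on which model, these are typical INET omnetpp.ini lines (a sketch only: the exact module paths and parameter names depend on your network and INET version, so adapt them to your setup):

    # Unit-disk model: a hard communication range, no power or consumption modelling
    *.radioMedium.typename = "UnitDiskRadioMedium"
    *.host*.wlan[0].radio.typename = "UnitDiskRadio"
    *.host*.wlan[0].radio.transmitter.communicationRange = 100m

    # Alternative: scalar model with a transmission power instead of a fixed range
    *.radioMedium.typename = "ApskScalarRadioMedium"
    *.host*.wlan[0].radio.typename = "ApskScalarRadio"
    *.host*.wlan[0].radio.transmitter.power = 1mW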

How to synchronize QAudioOutput on multiple devices?

I have an audio playing app that runs on several distributed devices, each with their own clock. I am using QAudioOutput to play the same audio on each device, and UDP broadcast from a master device to synchronize the other devices, so far so good.
However, I am having a hard time getting an accurate picture of "what is playing now" from QAudioOutput. I am using the QAudioOutput bufferSize() and bytesFree() to estimate what audio frame is currently being fed to the sound system, but the bytesFree() value progresses in a "chunky" fashion, so that (bufferSize() - bytesFree()) / bytesPerFrame doesn't give the number of frames remaining in the buffer, but some smaller number that bounces around relative to it.
The result I am getting now is that when my "drift indicator" updates, it will run around 0 for several seconds, then get indications in the -15 to -35 ms range every few seconds for maybe 20 seconds, then a correcting jump of about +120ms. Although I could try to analyze this long term pattern and tease out the true drift rate (maybe a couple of milliseconds per minute), I'd much rather work with more direct information if it's available.
Is there any way to read the true number of frames remaining in the QAudioOutput buffer while it is playing a stream?
I realize I could minimize my problems by radically reducing the buffer size and feeding QAudioOutput with a high priority process, but I'd rather have a solution that uses longer buffers and isn't so fussy about what it runs on - target platforms vary from Windows 10 machines to Raspberry Pi Zero Ws, possibly to Android phones.
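For reference, the buffer-based estimate described above looks roughly like this (a minimal sketch: the function and the bytesWrittenSoFar bookkeeping variable are my own, not part of the Qt API, and the bytes-per-frame calculation assumes Qt 5's QAudioFormat):

    #include <QAudioOutput>
    #include <QAudioFormat>

    // bytesWrittenSoFar is caller-side bookkeeping: the total number of bytes pushed
    // into the QIODevice returned by QAudioOutput::start().
    qint64 estimateCurrentFrame(const QAudioOutput *output, qint64 bytesWrittenSoFar)
    {
        const QAudioFormat fmt = output->format();
        const int bytesPerFrame = fmt.channelCount() * fmt.sampleSize() / 8;

        // Bytes handed to QAudioOutput but not yet consumed by the audio backend.
        // bytesFree() only advances when the backend drains a whole period/chunk,
        // which is why this estimate moves in coarse steps rather than smoothly.
        const qint64 bytesQueued = output->bufferSize() - output->bytesFree();
        const qint64 framesQueued = bytesQueued / bytesPerFrame;

        // Frame that is (approximately) being handed to the sound system right now.
        return bytesWrittenSoFar / bytesPerFrame - framesQueued;
    }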

synchronise many microcontrollers

In my project I'll use the Modbus protocol for serial communication. There are more than 320 slaves, separated equally into 2 groups (see image). Every 16 slaves are powered from the same supply and galvanically isolated from the others (the master will be isolated from all the slaves).
My first question is whether there is a problem with this design.
Secondly, I want to synchronise all the slaves via 10 ms period pulses derived from the master microcontroller. How can I achieve robust synchronisation (what type of bus, single-ended or differential signalling, where to isolate)?
Here is an alternative one:
see picture
Many things can go wrong here. For starters, it will take a looooooong time for you to poll each and every one of your slaves (see the rough numbers below). And your isolators will easily introduce delays of more than 2 µs into your sync signal.
Can you briefly tell us what you are trying to do specifically, e.g. synchronized motion control? There are other alternatives used in industrial systems.
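To put a rough number on the polling time (a back-of-the-envelope sketch: the 19200 baud rate, the frame lengths and the slave turnaround time are assumptions, not values from the question):

    #include <cstdio>

    int main()
    {
        // Assumed Modbus RTU parameters -- adjust to the actual setup.
        const double baud        = 19200.0;
        const double bitsPerChar = 11.0;                        // start + 8 data + parity + stop
        const double charTime_ms = 1000.0 * bitsPerChar / baud; // ~0.57 ms

        const double request_ms    = 8.0 * charTime_ms;         // typical 8-byte read request
        const double response_ms   = 7.0 * charTime_ms;         // short response
        const double silent_ms     = 2.0 * 3.5 * charTime_ms;   // 3.5-character gaps
        const double turnaround_ms = 2.0;                       // assumed slave processing time

        const double perSlave_ms = request_ms + response_ms + silent_ms + turnaround_ms;
        std::printf("Per transaction: ~%.1f ms, full poll of 320 slaves: ~%.1f s\n",
                    perSlave_ms, 320.0 * perSlave_ms / 1000.0);
        // Roughly 15 ms per slave and close to 5 s for a full scan -- far longer
        // than the 10 ms sync period mentioned in the question.
        return 0;
    }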
Most synchronized motion control in industrial systems is used to replace mechanical cams and eccentric gears, and is therefore usually called "electronic camming" in this field. Here's a list of techniques I came across in my last job:
A PLC that outputs multiple pulse trains, each commanding an individual servo/stepper motor drive. The PLC has to store all the motion profiles and do all the interpolation, so relatively simple drives can be used. But each actuator needs its own pulse-train lines, and there are just too many actuators in your system.
The motor drive stores the motion profile and does the interpolation, and the motion is advanced/reversed by an external pulse train. This technique is used in Delta Automation ASDA and Schneider Electric Lexium 23 industrial servo drives. The motion profiles are either burnt into the drive's EEPROM beforehand or written in over Modbus. This is very close to what you are trying to do, but the difference is that the external sync pulse train is on a separate wire.
Real-time Ethernet. The target positions are periodically written to each drive at a specific interval, which can be done very rapidly at 100 Mbps. As for the latency of writing to different drives, there is a built-in mechanism that measures the latency to each drive and compensates for it. Cool, eh? The one I have seen, but never really used, is EtherCAT by Beckhoff.
I worked mostly with method 2 in the past, and from that experience your requirements might not need to be so stringent. Here are my recommendations.
It is perfectly fine if your sync signal is delayed a little, provided your mechanism has no risk of collision when the timing is slightly off. Lost pulses, however, cannot be tolerated, as one of the actuators will end up out of phase. Don't scrimp on your sync and communication cable quality: use shielded twisted pair if possible, and connect it properly.
If the communication line is not too long, isolators are not needed; I have worked with lines up to 8 meters without isolators or repeaters. I am more worried about the number of spur (branch) connections on your RS-485 bus. If possible, connect everything to your 2 main buses directly.
If this is a production system, there might be a problem: when the system is running in synchronized motion mode, there is no way to monitor the actuators, because the communication lines are occupied. That would not be acceptable in a real-world application, but if this is just a proof-of-concept design, go for it.

estimating distance to ibeacon AVR

I want to ask about iBeacon advertising, especially Tx Power.
I am using two BLE modules, an HM10 and an HM11. I set up one as an iBeacon (the HM10), and the other one is used to connect and listen to the HM10's broadcasts.
I am using an ATmega32 AVR MCU connected to the HM11, and I use the scanf function to read the broadcast. I want to extract the last byte (Tx Power) and measure the distance in my AVR code.
Could you tell me the algorithm?
The formula Apple uses to calculate a distance estimate to an iBeacon is not published. There are a number of alternative formulas including this one, based on a best fit power curve, that we wrote for the Android Beacon Library.
Further research we have done shows that the formula above basically works, but it has two main imperfections:
It does not work well for weaker beacon transmitters. With weaker broadcasts, the distance is underestimated.
It does not account for varying signal gains in receivers. Different devices have different antennas and radio front ends, so they measure the same signal differently.
There is an ongoing discussion of the best formula here.
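For reference, the best-fit power curve mentioned above has the general form distance ≈ A * (rssi/txPower)^B + C. The sketch below uses the coefficients published with the Android Beacon Library's fit for one particular Android device; treat them as calibration values to re-fit for your own receiver. Here txPower is the signed calibration byte from the iBeacon advertisement (the byte you are extracting) and rssi is the received signal strength your module reports:

    #include <cmath>

    // Curve-fit distance estimate of the form A * (ratio)^B + C, where
    // ratio = rssi / txPower (both are negative dBm values).
    // Coefficients are the Android Beacon Library's published fit for one device;
    // other receivers need their own calibration.
    double estimateDistanceMeters(int txPower, double rssi)
    {
        if (rssi == 0.0)
            return -1.0;                      // no valid reading

        const double ratio = rssi / txPower;
        if (ratio < 1.0)
            return std::pow(ratio, 10.0);     // closer than the 1 m calibration point
        return 0.89976 * std::pow(ratio, 7.7095) + 0.111;
    }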
A bit late, but hopefully useful to others. I have given up on Apple's "Accuracy" number; as @davidyoung points out, different devices will have different signal gains. Now I am not an engineer but more of a math and statistics person, so I have gone down the route of "fingerprinting" an indoor space instead.
Essentially I read the RSSI from all beacons installed in a certain "venue". Some might not be within reach, and in such cases I just assume an RSSI of -95 dBm (which seems to be the floor past which a signal is not read any more). The resulting array has the same beacons in the same positions at all times (even across app launches). I compute a 5-second moving average for each beacon (so I use 5 arrays to do that). The resulting average array is then shifted up by 95 units and normalised so that the sum of all of its values is one.
If you want to tag an indoor "point", you collect many of these normalised average arrays at that specific spot, and from them I construct a database of "spots". To estimate your proximity to any spot in the database, you simply compute the quadratic distance between your current reading and all of the fingerprints in the database.
Which beacons to use? At least class 2 in power. How many? At least a couple per room (put them in two adjacent corners, on the ceiling or high up).
The last step you need to do is match the fingerprints with an x,y coordinate on your map. I never did this step, because I am mainly interested in proximity applications rather than fully fingerprinting an indoor space.
Perhaps the discussion above will serve as guidance on a technique that is used by many indoor-location companies.
Disclosure: I have recently open sourced my code doing the above calculations.
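As a concrete illustration of the fingerprinting math described above, here is a minimal sketch (the type and function names are my own; the -95 dBm floor, the +95 shift, the normalisation and the quadratic distance follow the description above):

    #include <cstddef>
    #include <limits>
    #include <vector>

    // One normalised RSSI vector: index i always refers to the same beacon.
    using Fingerprint = std::vector<double>;

    // Shift averaged RSSI readings (missing beacons recorded as -95 dBm) up by
    // 95 units and normalise so that the values sum to one.
    Fingerprint normalise(const std::vector<double>& avgRssi)
    {
        Fingerprint f(avgRssi.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < avgRssi.size(); ++i) {
            f[i] = avgRssi[i] + 95.0;   // the -95 dBm floor maps to 0
            sum += f[i];
        }
        if (sum > 0.0)
            for (double& v : f) v /= sum;
        return f;
    }

    // Quadratic (squared Euclidean) distance between two fingerprints.
    double quadraticDistance(const Fingerprint& a, const Fingerprint& b)
    {
        double d = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
            d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    // Index of the stored "spot" whose fingerprint is closest to the current reading.
    std::size_t closestSpot(const Fingerprint& current, const std::vector<Fingerprint>& spots)
    {
        std::size_t best = 0;
        double bestDistance = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < spots.size(); ++i) {
            const double d = quadraticDistance(current, spots[i]);
            if (d < bestDistance) { bestDistance = d; best = i; }
        }
        return best;
    }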
