How to customize BlueZ?

I will be asking a rather subjective question, but it is important, as I am trying to recover from a failed attempt to use BlueZ effectively and programmatically.
Basically, I envision an IoT edge device that runs on a miniature computer (e.g. a Raspberry Pi or Intel Compute Stick). The device would run Alpine Linux and interact with the cloud.
Since this is an IoT environment, Bluetooth LE over the ISM band is obviously important, hence the central importance of being able to customize and work with BlueZ.
I am looking to do several things with BlueZ BLE, including but not limited to:
Advertising
Pairing
Characteristics
Broadcast
Secure transport of data, etc.
Since I will need full control over the data, for data processing and for interacting with the cloud (edge AI or data science in the cloud), I am looking at three ways of using BlueZ:
1. Make D-Bus API calls to BlueZ methods (see the sketch after this list).
2. Modify the BlueZ codebase and make install a custom binary (so that callback handlers can be registered and a wealth of other BlueZ methods can be invoked).
3. Invoke BlueZ command-line utilities like hcitool/bluetoothctl from inside a program using system() calls.
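For reference, a single method call over D-Bus is only a few lines with the python-dbus bindings; it is exporting your own objects (advertisements, GATT services) that piles up boilerplate. A minimal sketch, assuming BlueZ 5 and an adapter at /org/bluez/hci0:

# Minimal sketch of option 1: one BlueZ method call over D-Bus.
# Assumes BlueZ >= 5 and that the adapter object is /org/bluez/hci0.
import dbus

bus = dbus.SystemBus()
adapter = bus.get_object('org.bluez', '/org/bluez/hci0')

# Power the adapter on through the standard Properties interface...
props = dbus.Interface(adapter, 'org.freedesktop.DBus.Properties')
props.Set('org.bluez.Adapter1', 'Powered', dbus.Boolean(True))

# ...then start scanning for (BLE) devices.
dbus.Interface(adapter, 'org.bluez.Adapter1').StartDiscovery()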
No. 1 is where I have failed. It takes an exorbitant amount of effort to construct and export D-Bus objects and then invoke BlueZ methods, and there is no guarantee that you will be able to take care of all BLE issues.
No. 2 looks very promising, and I want to fully explore how feasible it is to modify the BlueZ code to my needs.
No. 3 is the least desirable option, but I want to keep it as a fallback nevertheless.
Given my problem statement, what is the most viable strategy forward? I am asking this aloud so that I do not make more missteps and cost myself time and effort.

Your best strategy is to start with the second way (which you already found promising), as this is a viable solution and many developers go this route to create their BlueZ programs. Here is what I would do:
1. Write out all the functionality of the system in some sort of flowchart or state machine. This helps you visualise your whole system and what needs to be done to reach your end goal.
2. Try to perform all of the above functionality manually using bluetoothctl and btmgmt. This includes advertising, pairing, etc. I recommend steering away from legacy commands such as hcitool and hciconfig, as these have been deprecated and have a very different code structure.
3. When you stumble upon something that is not the default in bluetoothctl/btmgmt, or you want to tweak the functionality, update the source to do so.
4. Once you manually get the system to perform the functionality that you need (it doesn't have to be all of it, it can just be a subset of the functions), you can move to automating the whole process. This involves modifying the source for the bluetoothctl/btmgmt commands so that instead of manual intervention, everything is event-driven.
5. As a bonus, if you can create automated tests using Python or some other scripting language, this ensures that your system is robust and that previous functionality doesn't break when you add new features (see the sketch after this list).
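As a rough illustration of the bonus step, a scripted bluetoothctl session can be driven from Python with pexpect; the prompt and output strings matched below are assumptions, so adjust them to what your build actually prints:

# Sketch of an automated bluetoothctl test driven by pexpect.
# Assumes bluetoothctl is on PATH and shows its usual '#' prompt.
import pexpect

def adapter_powers_on():
    ctl = pexpect.spawn('bluetoothctl', encoding='utf-8', timeout=10)
    ctl.expect('#')                      # wait for the interactive prompt
    ctl.sendline('power on')
    result = ctl.expect(['succeeded', 'Failed'])  # index into this list
    ctl.sendline('exit')
    return result == 0

if __name__ == '__main__':
    assert adapter_powers_on(), 'adapter failed to power on'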
By the end of this process, you'll have a much better understanding of the internals of bluetoothctl/btmgmt and the D-Bus APIs, to the point that you may be able to completely detach your code from the original bluetoothctl/btmgmt or create your program from scratch.
You probably already know this, but when modifying the tools, these are the starting points in the source code:
bluetoothctl - client/main.c
btmgmt - tools/btmgmt.c
For more references on using bluetoothctl and btmgmt commands, please see the links below:
BlueZ D-Bus C or C++ Sample
Bluetoothctl set passkey
https://stackoverflow.com/a/51876272/2215147
Bluez Programming
Linux command line howto accept pairing for bluetooth device without pin
https://stackoverflow.com/a/52982329/2215147
Bluetooth Low Energy in C - using Bluez to create a GATT server
I hope this helps.

Related

Light Weight Bluetooth LE library in C

I have been looking around for a simple Bluetooth LE library in C that allows me to scan for BLE devices, connect, and receive periodic notifications for a given service UUID from the BLE device. Something that works directly with Bluetooth sockets and libbluetooth (built from BlueZ), without using D-Bus. Pairing and security functionality are not required.
I came across https://github.com/labapart/gattlib. It appears to be good, but it uses the D-Bus API and depends on libdbus, glib, and so on. Using this library requires an additional 5 MB of libraries, hence I decided to go without D-Bus. We do not have space on our device to support a 5 MB Bluetooth stack on a compressed rootfs image: the total size of our rootfs image is 9 MB, so the Bluetooth stack with D-Bus alone would be more than 50% of it.
There is also https://github.com/edrosten/libblepp, which is in C++ and doesn't use D-Bus. This would require writing a C wrapper for use in C programs, and brings the overhead of C++ constructs such as compiler-generated copy constructors, assignment operators, and so on. There are also issues with cross-compiling.
Target board is Xilinx Zynq running Linux and the build system is buildroot.
Please suggest.
Thanks
Found a solution; it may be of help for someone...
After searching and going through Linux conference and IoT conference videos on YouTube, I figured out that BlueZ has lightweight executables whose code is present in the src/shared folder of BlueZ. For example, btgattclient.c produces a "gatt-client" executable when compiled, which provides the same functionality as "gatttool" and does not depend on bluetoothd or D-Bus. The only dependency it has is glib-2.0.
This is helpful when we need lightweight tools on an OS that has no bluetoothd running or no D-Bus library installed.
Thanks
If you want to use BlueZ for BLE communication, the only supported API is the D-Bus API. Everything else is either discouraged or deprecated.
If you want something more minimal, and/or to not use BlueZ at all, you can use the HCI_CHANNEL_USER feature in Linux to get raw access to the HCI connection in the kernel. With this you can use any Bluetooth host stack software, or write your own minimal one if you only require an extremely small subset.
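To make the HCI_CHANNEL_USER route concrete, here is a rough Python sketch; the constants come from the kernel's Bluetooth headers, and it assumes hci0 exists, is powered down, and is not claimed by bluetoothd (root privileges are required):

# Sketch: grab exclusive, raw access to hci0 via HCI_CHANNEL_USER.
# Python's own bind() only supports the raw channel, so the sockaddr_hci
# struct is packed by hand and bound through libc with ctypes.
import ctypes, socket, struct

HCI_CHANNEL_USER = 1  # from the kernel's bluetooth.h

libc = ctypes.CDLL('libc.so.6', use_errno=True)
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI)

# sockaddr_hci: (family, hci_dev, hci_channel), all unsigned shorts
addr = struct.pack('HHH', socket.AF_BLUETOOTH, 0, HCI_CHANNEL_USER)
if libc.bind(s.fileno(), addr, len(addr)) != 0:
    raise OSError(ctypes.get_errno(), 'bind failed (adapter up, or in use?)')

# Send HCI Reset (packet type 0x01, opcode 0x0C03, no parameters) and
# print whatever HCI event the controller answers with.
s.send(bytes([0x01, 0x03, 0x0C, 0x00]))
print(s.recv(260).hex())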
Questions asking for software library recommendations are not allowed on Stack Overflow, though, because they tend to attract opinion-based answers.

use mavlink without qgroundcontrol

I'm trying to connect my PX4Flow sensor to a Raspberry Pi. It seems that nearly everybody uses QGroundControl to access and control it. But as I'd like to integrate it into some bigger program, I'd like to control it with some simple self-written Python code, if possible.
My aim is to:
access the camera (to measure the speed - later)
get gyroscope values
I don't need the ultra sonic sensor.
I found out that I can use MAVLink for the communication between the PX4Flow sensor and the Raspberry Pi. I cloned the Git repository and followed the steps on https://github.com/mavlink/mavlink up to the generation of the headers (python -m mavgenerate). With that, I can generate a new Python file. I don't know if this is correct, and I don't know what to do with that Python file; no more files (header files) are copied or generated. How do I go on? How do I use the library? How do I even test the connection?
If I understand you correctly, you want to build a module to communicate with the PX4Flow.
I have some experience in building a ground control station with ArduPilot. I think the procedure is roughly the same:
1. Generate the proper MAVLink library, which you have already done using mavgenerate, and read some guides on the MAVLink communication procedure.
2. Read the source code of the PX4Flow communication module, https://github.com/PX4/Flow/blob/master/src/modules/flow/communication.c, which shows what kinds of messages are sent to the client side (e.g. your communication module).
3. Start writing the module code to communicate with the PX4Flow. You may need to start with the HEARTBEAT msg first to establish a connection between your module and the PX4Flow. Note that you can always receive HEARTBEAT messages from the PX4Flow, so you can start by decoding those (see the sketch after these steps).
4. Implement your other functionality.
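As a concrete starting point for step 3, pymavlink's ready-made mavutil helper spares you from driving the generated file directly; the serial port and baud rate below are assumptions for a PX4Flow on USB:

# Sketch: wait for the PX4Flow's HEARTBEAT, then print the optical
# flow it streams. Assumes pymavlink is installed and the sensor
# enumerates as /dev/ttyACM0.
from pymavlink import mavutil

link = mavutil.mavlink_connection('/dev/ttyACM0', baud=115200)
link.wait_heartbeat()        # blocks until the first HEARTBEAT arrives
print('heartbeat from system %d' % link.target_system)

while True:
    msg = link.recv_match(type='OPTICAL_FLOW', blocking=True)
    # flow_comp_m_x/_y are compensated flow in m/s; quality is 0-255
    print(msg.flow_comp_m_x, msg.flow_comp_m_y, msg.quality)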
You can read the source code of QGroundControl during steps 3 and 4. Make sure to find the right module in its repo.
My own communication module is built in JavaScript (https://github.com/kvenux/nodegcs), if it helps.

FreeSWITCH minimal installation and module selection

As someone who is very new to open-source PBX projects like Asterisk and FreeSWITCH, I am grappling with some information overload. I have read the basic FreeSWITCH docs on the wiki, but still have a few questions. Since I am not very familiar with the terminology, I will try to use close approximations.
I am trying to create a small/minimalistic build of FreeSWITCH that needs to run on a rather old laptop (Celeron 1 GHz, 512 MB RAM, 20 GB HDD, already running Debian "Wheezy"), and set it up as a 6-port GSM-SIP/Jabber gateway. So by "small" and "minimalistic" I mean one which doesn't include modules or optional software that is not absolutely necessary (e.g. no need for IVR announcements or Skype integration), to keep the memory footprint smallest and occupy less hard-disk real estate.
The rough idea is to have 6 GSM ports (via the GSMopen module, similar to chan_dongle) towards the public telephony network, about 60 SIP extensions, and support for up to 6 calls involving GSM ports plus about 6 SIP-SIP calls (intra-PBX) on this setup. I have read that the CPU overhead of the GSMopen module is pretty low, so I am guessing this is possible.
Can someone confirm this to be a realistic goal?
What might be the minimum set of modules to select for minimalistic build?
For modules not chosen during the initial build, can those be added later? If so, would it require me to rebuild FreeSWITCH completely, rebuild only the modules, or would everything already be built so that only configuration changes are required to ensure the modules are loaded and configured?
Is there any rough estimate of what might be the maximum call-rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor, and little RAM (as per modern standards), I am guessing that both shall be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
I have read that "hooks" can be created using Lua/Python/Java, etc. However, if someone could share a few examples of what is possible using such hooks, it would make the concept clearer. Can one hope to write an application like "missed call log" or "redirect on no answer" using these hooks?
Can someone confirm this to be a realistic goal?
Yes, this is quite realistic. You need to aim for as little transcoding as possible, because that's where CPU resources are spent. But even with a 1 GHz Celeron, 6 transcoded sessions seem quite realistic. It needs testing, though :)
What might be the minimum set of modules to select for minimalistic build?
Just start with the default list of modules, and add gsmopen (I have no experience with gsm gateways, can't help with that part). The memory footprint is pretty low, and you may need some of those modules later.
For modules not chosen during initial build, can those be added later?
As far as I remember, the wiki describes this process: you edit modules.conf and make the specific module.
Is there any rough estimate of what might be the maximum call-rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor, and little RAM (as per modern standards), I am guessing that both shall be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
It really depends on complexity of your dialplan. Each context consists of a number of conditions, which are doing regexp match on channel variables. So, the more complex your dialplan is, the less CPS you get. But for a 6-channel gateway, I don't see this a problem. GSM network will be much slower than your box :)
I have read that "hooks" can be created using Lua/Python/Java, etc. However, if someone could share a few examples of what is possible using such hooks, it would make the concept clearer. Can one hope to write an application like "missed call log" or "redirect on no answer" using these hooks?
You can control every aspect of FreeSWITCH behavior with such hooks. There are even examples where the complete dialplan is re-implemented by an external program (Kazoo does that).
The simplest mode of operation is when your Lua/JS/Perl/Python script is launched from within the dialplan: it receives a "session" object, and you can do whatever you want with the call: play sounds, bridge, forward, make a new call and bridge the two together, and so on. There is a little practical example in my blog.
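For instance, with mod_python loaded, a dialplan-launched script receives the session like this (the sound file and extension are placeholders, and the script would be invoked from the dialplan with <action application="python" data="myscript"/>):

# Rough sketch of a dialplan hook via mod_python. The entry point
# FreeSWITCH calls is handler(session, args).
def handler(session, args):
    session.answer()
    session.streamFile('/usr/local/freeswitch/sounds/greeting.wav')
    session.execute('bridge', 'user/1001')  # forward to a SIP extension
    session.hangup()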
Then, you can build an external application which connects to the FS socket and monitors the events and performs actions on active calls.
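A skeleton of that external-application mode, using the Python ESL bindings that ship with FreeSWITCH (the host, port, and password below are the stock event-socket defaults):

# Sketch: watch call events over the FreeSWITCH event socket.
import ESL

con = ESL.ESLconnection('127.0.0.1', '8021', 'ClueCon')
if not con.connected():
    raise SystemExit('could not reach the FreeSWITCH event socket')

con.events('plain', 'CHANNEL_ANSWER CHANNEL_HANGUP')
while True:
    e = con.recvEvent()
    if e:
        print(e.getHeader('Event-Name'),
              e.getHeader('Caller-Caller-ID-Number'))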
Also, it can be done in the opposite direction: you run a server, and FS connects to it with its socket library.
Also, you can have an HTTP service which delivers pieces of XML configuration to FreeSWITCH; FreeSWITCH requests those on every call (this would be the most CPU-intensive option). This way, you can feed FS from some internal database and build fault-tolerant systems.
I hope this helps :)
You can also find me on Skype if needed.
FreeSWITCH is not really memory-hungry, and you can simply start with the default set of modules (the best option is to use the prebuilt Debian packages). For example, on my 64-bit machine, the FreeSWITCH process occupies only 35 MB of memory:
freeswitch@vx03:~$ uname -a
Linux vx03 2.6.32-5-xen-amd64 #1 SMP Thu Nov 3 05:42:31 UTC 2011 x86_64 GNU/Linux
freeswitch@vx03:~$ ps -p 11873 v
PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
11873 ? S<l 10:29 0 0 258136 36852 2.3 /opt/freeswitch/bin/freeswitch -nc -rp -nonat -u freeswitch -g freeswitch
I will go through the rest of your questions later today.

Have PLC Controller Listen/Send Custom TCP Packets?

I would like to be able to communicate with PLC controllers, so that I can send and receive custom commands on the PLC.
My idea for doing this was to have a TCP listener on the PLC that could read incoming TCP packets on a specific port and execute routines based on the commands in the packets. It could also send information back via TCP/IP.
This would allow me to write software in multiple languages such as C#, PHP, JavaScript, etc., so that software on any platform such as Windows, iOS, Android, etc. could issue commands to the PLC. This would also mean you would not need the PLC software (which can be costly) to view or control the PLC.
I am not a PLC programmer, so I do not know whether a PLC has the capability of sending and receiving custom TCP packets. I would like to know a) whether it is possible, b) how feasible it would be to do this, and c) what exactly I should research so that I can accomplish this.
Thanks.
It sounds a bit like reinventing the wheel. You want to make something like KepServerEX?
http://www.kepware.com/kepserverex/
There are also two things to consider: one is the ability to interface with the PLC to share data (i.e. for a custom HMI), and the other is to program the PLC. For the latter, you still need the control software from the manufacturer, unless you're willing to reverse-engineer and re-write it from the ground up.
Keep in mind, also, that PLCs don't work the same way that other software does. There are no functions or procedures or classes or objects, or even really any "commands", per se. A PLC is a system which continuously executes a fixed program of mostly raw logic rules and calculations. A typical interface to an HMI involves reading and writing directly to and from logic bits and word data (i.e. hardware memory locations) which represent the current state of the machine. OPC already does this just fine, so I'm not quite sure what you're going for.
If you're looking for a cheap/free alternative to a full commercial package, something here may work for you :
http://www.opcconnect.com/freesrv.php
If I understand correctly, by "Run/Stop" you mean for the PLC to start or stop scanning its code and updating its I/O. If that is the situation, it would be perfectly suitable to add a Scan_If_On bit (which would be written by a TCP command) in parallel with the "Start" bit controlled by the HMI.
This way, there would be two ways of "starting" the process controlled by the PLC: HMI and TCP.
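Many PLCs already expose exactly this kind of bit-level access through standard protocols such as Modbus/TCP, so you may not need a custom listener at all. A sketch of writing such a Scan_If_On coil from Python with pymodbus (3.x import path; the IP address and coil number are made up, and the PLC program must map that coil to the bit):

# Sketch: toggle a "Scan_If_On" coil on a PLC over Modbus/TCP.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient('192.168.0.10')  # hypothetical PLC address
client.connect()
client.write_coil(0, True)                # "start": set coil 0
result = client.read_coils(0, count=1)    # read it back to verify
print('running' if result.bits[0] else 'stopped')
client.close()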

Microcontroller + Verilog/VHDL simulator?

Over the years I've worked on a number of microcontroller-based projects; mostly with Microchip's PICs. I've used various microcontroller simulators, and while they can be very helpful at times, I often find myself frustrated. In real life microcontrollers never exist alone and the firmware's behavior is dependent on the environment. However, none of the sims I've used provide decent support for anything outside the microcontroller.
My first thought was to model the entire board in Verilog. But, I'd rather not create an entire CPU model, and I haven't had much luck finding existing models for the chips I use. Regardless, I really don't need, or want, to simulate the proc at that level of detail, and I'd like to retain the debugging facilities provided by a regular processor sim.
It seems to me that the ideal solution would be a hybrid simulator that interfaces a traditional processor simulator with a Verilog model.
Does such a thing exist?
I've used the Altera Nios II processor embedded on an FPGA. Altera provides a toolchain for simulating the CPU (with its software) together with your custom logic in a simulator. I suppose a similar setup can be achieved by downloading a VHDL/Verilog core for your CPU (did you try OpenCores? They have lots of stuff there).
But keep in mind that it is going to be mind-bogglingly slow, so don't expect to simulate whole complex processes this way. The best you can hope for is simulating fine software-hardware interaction points to debug problems. If you need deeper simulation, consider running it on an FPGA with built-in monitoring code.
For the "simulate the whole board" approach,
The Free Model Foundry has a large number of models, some in VHDL others in Verilog, that are available now.. but you'll need to pay to have new models created. These are very helpful in being sure the board is built correctly.
But I think the more common approach when debugging your PIC is to just build a board, then work on the firmware. In the chip world, (where the firmware is running on a microprocessor in a chip that hasn't gone to fab yet) people often resort to very expensive systems (or renting time on them) that allow compiling part of the design into an emulator while the rest of the design runs in the normal simulator environment. Without the barrier of an expensive mask set for the chip, the cost is just not justifiable for a Circuit board. Although I've heard of some creative applications of Simulink (Mathworks) with FPGA's, but my recollection is that one either ran the system on the computer, or programmed the device and ran the same thing in realtime.
I believe both Cadence (ask about Palladium) and Mentor Graphics have that integrated solution if you have the money to spend on it.
What I have done recently is create an interface between the simulation environment and the host system. Different HDL simulators have different interfaces, and getting the simulator NOT to think in batch mode (the traditional simulation model) but instead to run forever like a real design is half of the problem.
Then, from the host, using C (or whatever), you can create abstractions that may or may not allow you to write your application software for whatever target (depending on what language and compiler capabilities you have). For example, you can make generic poke and peek functions: on the final target they actually poke and peek memory or I/O, but for simulation the abstraction talks to a testbench in the simulation that performs the same memory cycle.
I went one step further and used (Berkeley) sockets between the host and the testbench, so that the simulation can keep running while host applications stop and start. Not unlike a real processor with an OS, where you start applications, run them to completion, and start another. At least that holds for test applications; for delivery you probably only have one app.
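The host side of such a peek/poke layer can be tiny. A hypothetical sketch follows; the one-line "R addr" / "W addr data" text protocol is invented for illustration, and the real one is whatever your testbench implements:

# Hypothetical host-side peek/poke abstraction that talks to an HDL
# testbench over a TCP socket instead of real memory-mapped I/O.
import socket

class SimBus:
    def __init__(self, host='localhost', port=5000):
        self.io = socket.create_connection((host, port)).makefile('rw')

    def poke(self, addr, data):               # bus write
        self.io.write('W %08x %08x\n' % (addr, data))
        self.io.flush()

    def peek(self, addr):                     # bus read
        self.io.write('R %08x\n' % addr)
        self.io.flush()
        return int(self.io.readline(), 16)

# On the real target the same application code would call a peek/poke
# that hits memory or I/O directly instead of the simulator.
bus = SimBus()
bus.poke(0x1000, 0xDEADBEEF)
assert bus.peek(0x1000) == 0xDEADBEEF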
By creating these abstraction layers I can write real applications that will be used on the target when it is built. Along the way you can use a software simulation of the logic initially, then, if you like, build an FPGA with a throwaway abstraction interface, say a UART for example: replace the shim between the application's abstraction layer and the simulator with a UART interface, or whatever. Then, when you marry the processor and logic in the same chip or on the same board, replace the abstraction layer again with direct calls to whatever interfaces they always thought they were talking to. If something breaks and you have retained the abstraction layer, you can take the application back to the simulation model and have access to all of your logic internals.
Specifically, this time around I am using the HDL language Cyclicity CDL, which is on SourceForge; the documentation needs some help, but the examples may get you going, and it produces synthesizable Verilog, so you get an extra win there. I threw out all the scripting batch stuff other than the bare minimum needed to connect and start a C simulation model. So my testbench is in C (well, C++ technically) and the sockets layer was done there. The output can be .vcd files, which gtkwave reads. Basically you can do the bulk of your HDL design using open-source software with no licenses, etc. By adding one or two lines of code to the CDL simulation part I was able to have it run as an infinite loop, which I can say works quite well; there don't appear to be any memory leaks, etc.
Both ModelSim and Cadence have standardized ways of connecting host C programs to the simulation world, and from there you can use IPC to get host applications talking to an abstraction-layer API.
This is probably way overkill for a PIC; I gave up on PICs a while ago for the faster and more C-friendly ARM-based micros anyway. There is/was an open-core PIC that you could simply incorporate into your simulation, even though that is not what you are trying to do here.
Not that I've seen. Your best bet is to properly define the interfaces and behavior between the uC and the FPGA, and then define a series of test waveforms that can be applied using an automated tester. You would have to make the automated tester (or perhaps a logic analyzer may have some such functionality) out of an FPGA or uC (apply a waveform, watch interrupts, breakpoints, etc.). If you really want, OpenCores.org has PIC- and AVR-like 8-bit uC cores defined in VHDL, so you could implement your entire project on the FPGA and then just debug that.
Generally there is no need to model the CPU at the RTL level, since you don't really care what it does bit by bit; you generally care about what it does in terms of register values, memories, and bus accesses.
The simplest is called a Bus Functional Model. This just generates the reads and writes that the CPU does, often based on a text file. These are available for some CPUs and many popular buses (e.g. PCI, PCIe). These simulate super fast.
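A bus functional model really can be that simple. Here is a toy sketch that replays a text file of transactions against some bus-driver object; the file format and the driver's read/write methods are invented for illustration:

# Toy BFM: replay "W <addr> <data>" / "R <addr> <expected>" lines
# (hex fields) from a stimulus file against a bus-driver object.
def run_bfm(stimulus_path, bus):
    with open(stimulus_path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0].startswith('#'):
                continue                      # skip blanks and comments
            op, addr = fields[0], int(fields[1], 16)
            if op == 'W':
                bus.write(addr, int(fields[2], 16))
            elif op == 'R':
                expected = int(fields[2], 16)
                got = bus.read(addr)
                assert got == expected, hex(addr)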
The next step up is a functional cycle-accurate model. Those simulate fast. They are often encrypted.
Last is a full RTL model. Those are usually only available if you are working closely with the CPU vendor, e.g. using their core in your ASIC. Typically these are encrypted too, unless you are a huge company.
Memory models are typically cycle-accurate (e.g. Micron).
My workmates in the hardware department use FPGA simulation software quite often to find timing bugs and track down strange behaviors.
Simulating one or two milliseconds can take several hours, so using the simulator for anything but very small things is not feasible.
You may want to have a look at SystemC though. http://en.wikipedia.org/wiki/SystemC
