How is StartNotify invoked in a Python BLE implementation? - bluetooth-lowenergy

Environment: Ubuntu 20.04, Python
My BLE GATT server implementation is roughly based on the example here: https://github.com/Douglas6/cputemp.
Here is my relevant code:
class RxCharacteristic(Characteristic):
    def __init__(self, service):
        self.notifying = False
        Characteristic.__init__(self, RXCHARID,
                                ['read', 'notify'], service)

    def ReadValue(self, options):
        value = []
        return value

    def StartNotify(self):
        if self.notifying:
            return
        print('Notifying...')
        self.notifying = True
When a BLE characteristic is published with the "notify" attribute, my understanding is that the StartNotify/StopNotify methods are invoked automatically. I guess this happens when the client connects/disconnects.
In my case, when I test connectivity from the "nRF Connect" Android app, I don't see StartNotify getting called.
I am wondering what triggers the StartNotify/StopNotify methods. Why am I not seeing them being called? Regards.

You need to enable notifications in the nRF Connect app; it is the icon with the multiple down arrows in the app that does this.
The Bluetooth daemon will then call the StartNotify D-Bus method, which in the linked example means this and this.
You can monitor the D-Bus calls with:
sudo busctl monitor org.bluez
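For reference, here is a rough sketch of how the notify flow can look on the server side, following the pattern of the linked cputemp example. The helper names used below (add_timeout, PropertiesChanged, GATT_CHRC_IFACE) come from that example's base classes and are assumptions, not part of the question's code:

import dbus

class RxCharacteristic(Characteristic):
    def __init__(self, service):
        self.notifying = False
        Characteristic.__init__(self, RXCHARID,
                                ['read', 'notify'], service)

    def StartNotify(self):
        # Invoked by bluez over D-Bus when a client enables notifications.
        if self.notifying:
            return
        print('Notifying...')
        self.notifying = True
        # Push a value once per second while a client is subscribed.
        self.add_timeout(1000, self.notify_value)

    def StopNotify(self):
        # Invoked by bluez when the client disables notifications or disconnects.
        self.notifying = False

    def notify_value(self):
        if not self.notifying:
            return False  # returning False stops the timeout
        value = [dbus.Byte(0x00)]
        # Emitting PropertiesChanged on the 'Value' property is what actually
        # delivers the notification to the subscribed client.
        self.PropertiesChanged(GATT_CHRC_IFACE, {'Value': value}, [])
        return True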

Related

grpc-swift: How to set timeout for an RPC in Swift?

I am using https://github.com/grpc/grpc-swift for inter-process communication. I have a GRPC server written in Go that listens on a unix domain socket, and a macOS app written in Swift that communicates with it over the socket.
Let's say the Go server process is not running and I make an RPC call from my Swift program. The default timeout before the call will fail is 20 seconds, but I would like to shorten it to 1 second. I am trying to do something like this:
let callOptions = CallOptions(timeLimit: .seconds(1)) // <-- Does not compile
This fails with compile error Type 'TimeLimit' has no member 'seconds'.
What is the correct way to decrease the timeout interval for Swift GRPC calls?
As mentioned in the error, TimeLimit doesn't have a member seconds. The seconds function you are trying to access is on TimeAmount. So if you want to use a deadline, you will need to use:
CallOptions(timeLimit: .deadline(.now() + .seconds(1)))
Here, .now() lives on NIODeadline, and it has a + operator defined for adding a TimeAmount (check here).
And for a timeout:
CallOptions(timeLimit: .timeout(.seconds(1)))
Note that I'm not an expert in Swift, but I checked in TimeLimitTests.swift and that seems to be the idea.

Flask-socketIO + Kafka as a background process

What I want to do
I have an HTTP API service, written in Flask, which is a template used to build instances of different services. As such, this template needs to be generalizable to handle use cases that do and do not include Kafka consumption.
My goal is to have an optional Kafka consumer running in the background of the API template. I want any service that needs it to be able to read data from a Kafka topic asynchronously, while also independently responding to HTTP requests as it usually does. These two processes (Kafka consuming, HTTP request handling) aren't related, except that they'll be happening under the hood of the same service.
What I've written
Here's my setup:
# ./create_app.py
from flask_socketio import SocketIO

socketio = None

def create_app(kafka_consumer_too=False):
    """
    Return a Flask app object, with or without a Kafka-ready SocketIO object as well
    """
    app = Flask('my_service')
    app.register_blueprint(special_http_handling_blueprint)
    if kafka_consumer_too:
        global socketio
        socketio = SocketIO(app=app, message_queue='kafka://localhost:9092', channel='some_topic')
        from .blueprints import kafka_consumption_blueprint
        app.register_blueprint(kafka_consumption_blueprint)
        return app, socketio
    return app
My run.py is:
# ./run.py
from . import create_app

app, socketio = create_app(kafka_consumer_too=True)

if __name__ == "__main__":
    socketio.run(app, debug=True)
And here's the Kafka consumption blueprint I've written, which is where I think it should be handling the stream events:
# ./blueprints/kafka_consumption_blueprint.py
from ..create_app import socketio

kafka_consumption_blueprint = Blueprint('kafka_consumption', __name__)

@socketio.on('message')
def handle_message(message):
    print('received message: ' + message)
What it currently does
With the above, my HTTP requests are being handled fine when I curl localhost:5000. The problem is that, when I write to the some_topic Kafka topic (on port 9092), nothing is showing up. I have a CLI Kafka consumer running in another shell, and I can see that the messages I'm sending on that topic are showing up. So it's the Flask app that's not reacting: no messages are being consumed by handle_message().
What am I missing here? Thanks in advance.
I think you are interpreting the meaning of the message_queue argument incorrectly.
This argument is used when you have multiple server instances. These instances communicate with each other through the configured message queue. This queue is 100% internal; there is nothing that you, as a user of the library, can do with the message queue.
If you want to build some sort of pub/sub mechanism, then you have to implement the listener for that in your application yourself.
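If the goal is simply to react to Kafka messages inside the same process as the Flask app, one option is to run your own consumer loop as a background task next to the Socket.IO server. A minimal sketch, assuming the kafka-python package and a broker on localhost:9092 (the file name, event name, and topic are placeholders):

# ./run_with_consumer.py (hypothetical)
from flask import Flask
from flask_socketio import SocketIO
from kafka import KafkaConsumer

app = Flask('my_service')
socketio = SocketIO(app)

def consume_kafka():
    # Blocking consumer loop, executed in a SocketIO-managed background task.
    consumer = KafkaConsumer('some_topic', bootstrap_servers='localhost:9092')
    for record in consumer:
        message = record.value.decode('utf-8')
        print('received message: ' + message)
        # Relay the Kafka payload to any connected Socket.IO clients.
        socketio.emit('message', message)

if __name__ == '__main__':
    socketio.start_background_task(consume_kafka)
    socketio.run(app, debug=True)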

Is there a way to do gRPC bidirectional streaming application load testing to send 100 req/sec to a server

I was testing a gRPC bidirectional streaming application, which is in the link below, and it is working fine.
https://github.com/melledijkstra/python-grpc-chat
Do we have any tool to trigger bidirectional streaming at, say, 100 requests per second, the way we use wrk/jmeter for REST/HTTP APIs?
I tried exposing the API (run) as REST and triggering 100 req/sec using the wrk tool, but that does not seem to be a proper approach.
@app.route('/', methods=['GET'])
def send_message(self, event):
    """
    This method is called when the user enters something into the textbox
    """
    message = self.entry_message.get()
    if message != '':
        n = chat.Note()
        n.name = self.username
        n.message = message
        print("S[{}] {}".format(n.name, n.message))
        self.conn.SendNote(n)
The complete code is the actual gRPC chat application: https://github.com/melledijkstra/python-grpc-chat
I want to do load testing of a gRPC bidirectional streaming application, sending 100 requests per second to a server. Is there a possible approach to it?
This is to verify that my server can handle enough load with this chat functionality.
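One way to approximate this without an external tool is a small driver script that opens several concurrent gRPC connections and paces the requests itself. Below is a rough sketch in Python; the stub and message names (chat_pb2, chat_pb2_grpc.ChatServerStub, SendNote) are taken from the linked chat example and, along with the target address, are assumptions here:

import threading
import time

import grpc

import chat_pb2
import chat_pb2_grpc

TARGET = 'localhost:11912'      # address of the chat server under test (placeholder)
REQUESTS_PER_SECOND = 100       # total rate across all workers
WORKERS = 10                    # concurrent client connections
DURATION_SECONDS = 30

def worker(worker_id):
    channel = grpc.insecure_channel(TARGET)
    stub = chat_pb2_grpc.ChatServerStub(channel)
    # Each worker contributes an equal share of the total request rate.
    interval = WORKERS / REQUESTS_PER_SECOND
    deadline = time.time() + DURATION_SECONDS
    while time.time() < deadline:
        note = chat_pb2.Note(name='load-{}'.format(worker_id), message='ping')
        stub.SendNote(note)
        time.sleep(interval)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Depending on how the streaming RPC is defined, each worker could also hold the stream open (e.g. iterating over the ChatStream response in another thread) so the test exercises both directions, not just the unary SendNote call.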

QLowEnergyService never transitions to ServiceDiscovered state on custom bluetooth service

I have created a Bluetooth communicator in Qt 5.5.1 following the Qt documentation. I have gotten to the point where I am able to view a list of services offered by a Bluetooth device. The services are generated by:
QLowEnergyService *service = controller->createServiceObject(serviceUuid);
Where controller is a QLowEnergyController and serviceUuid is a QBluetoothUuid. The service is created successfully but since it is a custom service offered by the device I am trying to connect to, the name is unknown. At this point I call:
service->discoverDetails();
which transitions the service to the QLowEnergyService::DiscoveringServices state from the QLowEnergyService::DiscoveryRequired state. Once this happens, the state never changes again and no error is ever thrown. Is there a way to pull the characteristics of an "unknown service"? I have checked the Uuid against what I expected for the service and it is correct. I also have the Uuid of the expected characteristics.
Note: I am using PyQt (the Python bindings for the Qt C++ libraries).
I stumbled upon this issue while trying to connect to a device that offers two services: one is the standard battery service and the other is a private, custom, non-standard service.
I noticed that I was able to discover the battery service successfully, but I was not able to discover the custom service. However, for some reason, when I subscribed to the service error signal, the discovery worked fine, and whenever I commented it out, it did not work.
void QLowEnergyService::error(QLowEnergyService::ServiceError newError)
I know it is odd and I do not have an explanation, but it could be related and I felt it was worth sharing.
Another thing worth trying is deferring the discoverDetails() call with a queued invocation:
QMetaObject::invokeMethod(this, "discoverCharacteristics", Qt::QueuedConnection);

void discoverCharacteristics() {
    service->discoverDetails();
}
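For completeness, since the question mentions PyQt: here is a minimal sketch, assuming PyQt5's QtBluetooth bindings and the controller/serviceUuid objects from the question, of reacting to the state change so the characteristics are read only once discovery has finished:

from PyQt5.QtBluetooth import QLowEnergyService

service = controller.createServiceObject(serviceUuid)

def on_state_changed(new_state):
    if new_state == QLowEnergyService.ServiceDiscovered:
        # Characteristic details are only available once the service
        # reaches the ServiceDiscovered state.
        for characteristic in service.characteristics():
            print(characteristic.uuid().toString())

# Connect before calling discoverDetails() so the transition is not missed.
# (Per the answer above, also connecting the service's error signal made
# discovery complete in that case.)
service.stateChanged.connect(on_state_changed)
service.discoverDetails()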

Listening to notification after creation of instance in Openstack

I am interested in finding out whether there is a way to create a listener within OpenStack which gets notified every time a new instance gets created.
Take a look at the OpenStack workload measuring project, Ceilometer: https://launchpad.net/ceilometer
One way to do this is by using Django signals. You can create a signal and send it after the line of code which creates an instance. The function which expects the notification can be made the receiver which listens to this signal; it will then be called whenever the signal is sent. As an example:
# Declaring a signal
from django.dispatch import Signal

instance_signal = Signal(providing_args=['param1', 'param2'])

# Function that sends the signal
def instance_create():
    # ... code that creates the instance ...
    instance_signal.send(sender='instance_create', param1='I am param 1', param2='I am param 2')

# Defining the function that listens to this signal (the receiver)
def notify_me(**kwargs):
    x, y = kwargs['param1'], kwargs['param2']

# Connect the signal to the receiver (can be written anywhere in the code)
instance_signal.connect(notify_me)
The best part about Django signals is that you can create the signal, the receiver function, and the connection between them anywhere in the whole application. Django signals are very useful for scheduling tasks or, in your case, receiving notifications.
