What is the best way to design a GATT characteristic/service to allow querying an array of similar objects?
For example, let's say I want my device to present the set of all network neighbors (discovered through a mechanism beyond the scope of this question).
With a traditional management interface, I would probably use a get-first/get-next interface to walk the list. Or I'd have one call to get a list of identifiers and then a get call to get the details corresponding to one identifier.
But BLE GATT doesn't seem to have direct support for this.
I could make a characteristic return the entire array of objects, but that has the potential to be very large.
I could also make a characteristic that returns a list of identifiers, but then how would I get the details for one object? Maybe write the identifier to a characteristic, causing the device to send the details via a notification/indication?
I don't think it would be right to define a characteristic that, when read, returns the details of an object previously specified by writing to a different characteristic. I'm sure this could be done, but it doesn't seem to fit within the spirit of how GATT is expected to operate.
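To make the notification-based idea concrete, here is roughly what that could look like from an Android client. This is only a sketch of the idea: the service and characteristic UUIDs are made up, and a real client would wait for each GATT callback before issuing the next operation.

    import java.util.UUID;

    import android.bluetooth.BluetoothGatt;
    import android.bluetooth.BluetoothGattCallback;
    import android.bluetooth.BluetoothGattCharacteristic;
    import android.bluetooth.BluetoothGattDescriptor;

    // Hypothetical layout: a "selector" characteristic the client writes a neighbor id to,
    // and a "details" characteristic the server notifies with that neighbor's data.
    class NeighborListClient extends BluetoothGattCallback {

        // Made-up UUIDs, for illustration only
        static final UUID SERVICE       = UUID.fromString("7b291000-0000-4000-8000-00805f9b34fb");
        static final UUID SELECTOR_CHAR = UUID.fromString("7b291001-0000-4000-8000-00805f9b34fb");
        static final UUID DETAILS_CHAR  = UUID.fromString("7b291002-0000-4000-8000-00805f9b34fb");
        static final UUID CCCD          = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

        // Step 1: subscribe to notifications on the "details" characteristic
        void subscribeToDetails(BluetoothGatt gatt) {
            BluetoothGattCharacteristic details =
                    gatt.getService(SERVICE).getCharacteristic(DETAILS_CHAR);
            gatt.setCharacteristicNotification(details, true);
            BluetoothGattDescriptor cccd = details.getDescriptor(CCCD);
            cccd.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
            gatt.writeDescriptor(cccd);   // wait for onDescriptorWrite() before the next operation
        }

        // Step 2: write the neighbor's identifier; the server answers with a notification
        void requestDetails(BluetoothGatt gatt, byte neighborId) {
            BluetoothGattCharacteristic selector =
                    gatt.getService(SERVICE).getCharacteristic(SELECTOR_CHAR);
            selector.setValue(new byte[] { neighborId });
            gatt.writeCharacteristic(selector);
        }

        // Step 3: the requested details arrive here as a notification
        @Override
        public void onCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic) {
            if (DETAILS_CHAR.equals(characteristic.getUuid())) {
                byte[] neighborDetails = characteristic.getValue();   // application-defined encoding
            }
        }
    }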
Related
We're trying to determine whether to use separate GATT characteristics or combine multiple properties into one custom characteristic.
The benefit of combining is fairly clear: one transaction, many properties.
But even with multiple characteristics (one property each), the transactions seem quick enough.
Is this entirely an arbitrary decision? Or are there best practices?
This largely depends on the system you're trying to implement, but my recommendation is to go for many separate characteristics. The reason is that you will be simplifying the application on both the GATT server side (where all the characteristics are stored) and the GATT client side. If you combine multiple properties into one characteristic, you have to add extra intelligence on the GATT client side to split that data apart again, and if the data size is variable this becomes even more complicated. If in the future you have to update this combined characteristic with new features, the task will probably be more complex for both the client and the server side compared to having many characteristics, where things are more categorised and compartmentalised.
Another thing to consider is testing. When you create your GATT server application, you'll want to test it against one or many different GATT client implementations (e.g. an iOS device, a Linux machine, etc.). That will be a lot easier if the remote device is not receiving a combined characteristic and having to make sense of the data.
Finally, please note that, as you said, a Bluetooth transaction is relatively quick, so you will not see a huge difference between using multiple characteristics and one. The reason is that by default the usable characteristic value length is 20 bytes and the Bluetooth packet payload is 27 bytes (unless you're using a Bluetooth 4.2 feature known as Data Length Extension, which not all phones support). Therefore, even if you use characteristic values longer than 20 bytes, the Bluetooth stack/baseband will divide the value into chunks and send them over the air, so you will not achieve the improved throughput you were aiming for.
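As an aside that goes slightly beyond the point above: if you do end up needing values longer than roughly 20 bytes, the client can try to negotiate a larger ATT MTU where the platform supports it. A rough Android sketch (185 is an arbitrary request; the peripheral may negotiate it down):

    import android.bluetooth.BluetoothGatt;
    import android.bluetooth.BluetoothGattCallback;
    import android.bluetooth.BluetoothProfile;

    // Sketch: ask for a larger ATT MTU once connected so that a longer
    // characteristic value fits into a single ATT packet.
    class MtuNegotiatingCallback extends BluetoothGattCallback {

        @Override
        public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
            if (newState == BluetoothProfile.STATE_CONNECTED) {
                gatt.requestMtu(185);   // result is delivered to onMtuChanged()
            }
        }

        @Override
        public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
            // mtu minus the 3-byte ATT header is usable for the characteristic value
        }
    }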
I hope this helps.
Currently I use the AcknowledgingMessageListener to implement a Kafka consumer with spring-kafka. This implementation lets me listen on a specific topic and process messages with a manual ack.
I now need to build the following capability:
Let us assume that, due to some environmental exception or some bad data arriving via this topic, I need to replay data on the topic from and to a specific offset. This would be a manual trigger (mostly via the execution of a Java class).
It would be ideal if I could retrieve the messages between those offsets and feed them into a replay topic so that a new consumer can process those messages, thus keeping the offsets intact on the original topic.
ConsumerSeekAware interface - if this is the answer, how can I trigger this externally? Via, let's say, a mvn -Dexec? I am not sure if this is even possible.
Also, let's say that I have a crash timestamp; is it possible to introspect the topic to find the offset corresponding to the crash so that I can replay from that offset?
Can I find offsets corresponding to some specific data so that I can replay those specific offsets?
All of these requirements are aimed at building a resilience layer around our Kafka capabilities. I need all of this to be managed by a separate executable class that can be triggered manually with the relevant data (like timestamps). This class should determine the offsets, seek to them, retrieve the corresponding messages and post them to a separate topic. Can someone please point me in the right direction? I'm afraid I'm going around in circles.
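To make this concrete, the rough shape of the class I am imagining is below. It is only a sketch: the bootstrap server, topic names, serializers and the stop condition are placeholders, and it uses the plain Kafka client API rather than Spring.

    import java.time.Duration;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;

    // Manually triggered: replays records whose timestamps fall between two points in time
    // from "original-topic" onto "replay-topic", without touching the original consumer group.
    public class TopicRangeReplayer {

        public static void main(String[] args) {
            long fromTimestamp = Long.parseLong(args[0]);   // e.g. the crash timestamp (epoch millis)
            long toTimestamp   = Long.parseLong(args[1]);

            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", "localhost:9092");   // placeholder
            consumerProps.put("group.id", "manual-replayer");           // separate group: original offsets stay intact
            consumerProps.put("enable.auto.commit", "false");
            consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            Properties producerProps = new Properties();
            producerProps.put("bootstrap.servers", "localhost:9092");   // placeholder
            producerProps.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            producerProps.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps);
                 KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps)) {

                // assign() rather than subscribe(): no group rebalancing, we control the offsets ourselves
                List<TopicPartition> partitions = consumer.partitionsFor("original-topic").stream()
                        .map(pi -> new TopicPartition(pi.topic(), pi.partition()))
                        .collect(Collectors.toList());
                consumer.assign(partitions);

                // Translate the start timestamp into per-partition offsets and seek there
                Map<TopicPartition, Long> query = new HashMap<>();
                partitions.forEach(tp -> query.put(tp, fromTimestamp));
                consumer.offsetsForTimes(query).forEach((tp, offsetAndTimestamp) -> {
                    if (offsetAndTimestamp != null) {               // null: no record at/after that time
                        consumer.seek(tp, offsetAndTimestamp.offset());
                    }
                });

                // Copy records onto the replay topic until we pass the end timestamp.
                // Naive stop condition: an empty poll ends the run; refine as needed.
                while (true) {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(2));
                    if (records.isEmpty()) {
                        break;
                    }
                    for (ConsumerRecord<byte[], byte[]> record : records) {
                        if (record.timestamp() <= toTimestamp) {
                            producer.send(new ProducerRecord<>("replay-topic", record.key(), record.value()));
                        }
                    }
                }
            }
        }
    }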
so that a new consumer can process those messages thus keeping the offsets intact on the original topic.
Just create a new listener container with a different group id (new consumer) and use a ConsumerAwareRebalanceListener (or ConsumerSeekAware) to perform the seeks when the partitions are assigned.
Here is a sample CARL (ConsumerAwareRebalanceListener) that seeks all assigned partitions based on a timestamp.
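A minimal sketch of such a listener, assuming the spring-kafka ConsumerAwareRebalanceListener callbacks and a timestamp supplied by whatever triggers the replay:

    import java.util.Collection;
    import java.util.Map;
    import java.util.stream.Collectors;

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.common.TopicPartition;
    import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;

    public class ReplayFromTimestampListener implements ConsumerAwareRebalanceListener {

        private final long replayFromTimestamp;   // epoch millis, e.g. the crash time

        public ReplayFromTimestampListener(long replayFromTimestamp) {
            this.replayFromTimestamp = replayFromTimestamp;
        }

        @Override
        public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
            // Ask the broker for the earliest offset whose record timestamp is >= replayFromTimestamp
            Map<TopicPartition, Long> request = partitions.stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> replayFromTimestamp));
            consumer.offsetsForTimes(request).forEach((tp, offsetAndTimestamp) -> {
                if (offsetAndTimestamp != null) {        // null: no record at/after that timestamp
                    consumer.seek(tp, offsetAndTimestamp.offset());
                }
            });
        }
    }

The listener would be registered on the new container through its ContainerProperties (setConsumerRebalanceListener) when you build it.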
You will need some mechanism to know when the new consumer should stop consuming (at which time you can stop() the new container). Maybe set max.poll.records=1 on the new consumer so it doesn't prefetch past the failure point.
I am not sure what you mean by #3.
I am using XBee Digimesh Modules in API-Mode to send data between different industrial machines allowing them to share data, information and commands.
The API-Mode offers some basic commands, mainly to perform addressing and talk with the XBee Module itself in order to do configuration, etc.
Sending user data is done via a corresponding XBee API-Command which allows sending user-defined data with a maximum payload of 72 bytes.
Since I want to expand this communication to allow the integration of more machines, I am thinking about how to implement a basic communication system tailored to the very small payload of just 72 bytes.
Coming from the web, I normally would use some sort of JSON here but that would fill up the payload very quickly.
Also it's not possible to send a frame with lots of information since this also fills up the payload very quickly.
So I came up with a different way of communicating. Instead of transmitting frames packed with information, what about sending some sort of Messages like this:
Machine-A Broadcasts: Who's there?
Machine-B Answers: It's me I am a xxx-Machine
Machine-C Answers: It's me I am a xxx-Machine
Machine-A now evaluates the replies and decides to work with Machine-B (because Machine-C does not match A's interface):
Machine-A to B: Hello B, Give me some Value, please!
Machine-B to A: There you go: 2.349590
This can be extended to different short messages. After each message the sender holds the type of message in a state and the reply will be evaluated in relation to the state / context.
What I was trying to avoid was defining a bit-based protocol (like MIDI) which defines all events as bit-based flags. Since we do not know what type of hardware will be added in the future, I want a communication protocol that's very flexible and does not need a coordinator or message broker, etc.
But since this is the first time I am thinking about communication protocols, I am curious whether there are existing frameworks that can handle complex communication with a light payload.
You might want to read through the ZigBee Cluster Library specification with a focus on the general commands. It describes a system of attribute discovery and retrieval. Each attribute has a 16-bit ID and a datatype (integers of various sizes, enumerated types, bitmaps) that determines its size.
It's a protocol designed for the small payloads of an 802.15.4 network, and you could potentially base your protocol on a subset of it. Other ZigBee specifications are simply a list of defined attributes (and commands) for a given 16-bit cluster ID.
Your master device can go through a discovery process to get a list of attribute IDs, and then send a request to get the values for multiple IDs in one shot. The response is tightly packed: a 16-bit ID, an 8-bit attribute type and then variable-length data. Even if your master device doesn't know what an ID corresponds to, it can pass the data along to other systems (like a web server) that do know.
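For illustration, here is a rough Java sketch of walking a buffer packed that way (16-bit ID, 8-bit type, value). The type codes are only an illustrative subset and the record layout is simplified; the real ones are defined in the ZCL specification.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Simplified sketch: walk a payload of [16-bit ID][8-bit type][value] records.
    public final class AttributeReader {

        // Illustrative type codes only; see the ZCL spec for the real list
        static final int TYPE_UINT8  = 0x20;
        static final int TYPE_UINT16 = 0x21;
        static final int TYPE_FLOAT  = 0x39;   // single precision

        public static Map<Integer, Number> parse(byte[] payload) {
            ByteBuffer buf = ByteBuffer.wrap(payload).order(ByteOrder.LITTLE_ENDIAN);
            Map<Integer, Number> attributes = new LinkedHashMap<>();
            while (buf.remaining() >= 3) {
                int id   = buf.getShort() & 0xFFFF;   // 16-bit attribute ID
                int type = buf.get() & 0xFF;          // 8-bit data type
                switch (type) {
                    case TYPE_UINT8:  attributes.put(id, buf.get() & 0xFF);        break;
                    case TYPE_UINT16: attributes.put(id, buf.getShort() & 0xFFFF); break;
                    case TYPE_FLOAT:  attributes.put(id, buf.getFloat());          break;
                    default: return attributes;       // unknown type: its length is unknown, stop
                }
            }
            return attributes;
        }
    }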
I need to synchronize/forward models between different computers. The models can represent tables but also trees (entries with child items). The setup will be:
Server providing a TCP Service (or other communication)
keep a list of models that need to be synchronized (registerSourceModel(), in the manner of proxy models)
provide the model list, with unique IDs, to clients
provide a QDataStream for packet serialization
Client(s) providing a TCP Connection to the server
create NetworkModel instances based on the model list from the server
forward / provide any data() queries to the model
forward / provide any setData() queries to the model
... rest of typical model methods
There are two options of setting up the synchronization:
Have a QAbstractItemModel (maybe a QStandardItemModel) with a complete cache (a duplicate of all data) on the client side, just updating on remote dataChanged()/layoutChanged() signals
Directly forward any requests over the network, resulting in stub (Loading***) data entries until the real data has been fetched (dataChanged() signals) from the network
While option 1) will have quite a memory footprint, since we will have to duplicate every relevant model, it will usually have a very short response time, since synchronization can be done in the background when needed.
Option 2) will have little to (almost) no memory usage, since everything is queried directly. I am still not sure how this will actually look & feel when big models have to be visualized in a view. Think of a company catalogue (or a list of Amazon articles with some details) having to be queried entry by entry (data() works on a single QModelIndex) over the network.
As a result we probably will go with the first option.
The problem I encounter in both options is the synchronization/forwarding of a valid QModelIndex; these are always invalid on the remote computer. I did some research on QSortFilterProxyModel(), as it does something similar, but within the same process space. That model keeps an identifying list of all indices for mapping the "filtered" indices.
Will this require me to keep an identifying list of QPersistentModelIndex objects on the server and a mapped list of IDs to my own QModelIndex instances (with synchronization of these unique IDs between both sides)?
Is there another option to "link" two models, or even just to put the whole model into some container and pipe that into my stream?
I would like to have pre-emption calls in Asterisk. I think there is no Asterisk support for this feature, so I'm trying to implement it following a similar algorithm to the one shown in this thread: Asterisk - Pre-emption calls
So I'm having problems in this step:
check if B is in a call with a lower priority caller (ASTDB or REALTIME or fastagi script).
I know how to check if B is in a call using, for example, DEVICE_STATE(device), but I can't figure out who the other caller is in order to see their priority.
So, how can I know whether a user is in a call and who the other caller in that call is?
Thanks a lot.
You can read variables of any channel using
SHARED(varname[,channel])
-= Info about function 'SHARED' =-
[Synopsis]
Gets or sets the shared variable specified.
[Description]
Implements a shared variable area, in which you may share variables between
channels.
The variables used in this space are separate from the general namespace
of the channel and thus ${SHARED(foo)} and ${foo} represent two completely
different variables, despite sharing the same name.
Finally, realize that there is an inherent race between channels operating
at the same time, fiddling with each others' internal variables, which is
why this special variable namespace exists; it is to remind you that variables
in the SHARED namespace may change at any time, without warning. You should
therefore take special care to ensure that when using the SHARED namespace,
you retrieve the variable and store it in a regular channel variable before
using it in a set of calculations (or you might be surprised by the
result).
Make sure you have set the variables first.
You can store the name of the current speaking channel in variables or in the ASTDB using an in-call macro.
The general complexity of any solution like the one you want is above average; it needs a person with at least 1-2 years of extensive experience with Asterisk.