I have created a simple BLE advertiser using Python's dbus library (BlueZ 5.48).
This sample application adds one service, and this service has two characteristics.
When the advertiser starts, every service and characteristic UUID is assigned a 16-bit ATT handle so that a client can read or write those UUIDs directly using the handles.
The advertiser works perfectly fine, and the client can subscribe to the UUIDs. However, the client expects a fixed ATT handle for both UUIDs, while the advertiser swaps the ATT handles on reconnection.
So is there any way by which I can either
1- keep my ATT handles static, or
2- have the advertiser know which ATT handles were assigned to the UUIDs?
I have spent a good amount of time trying to find this out, but with no success so far.
I am using this code to create the advertiser:
https://github.com/ukBaz/python-bluezero/blob/master/bluezero/peripheral.py
I think that BlueZ tries to do the right thing, judging by this:
https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/settings-storage.txt
The issue is likely that it only does this for paired devices, and that example has security turned off. I can't remember off the top of my head how to turn it on; I'll try to find some time later to take a look.
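One practical workaround on the client side is to stop pinning handles altogether and resolve the characteristic by UUID on every connection. Below is a minimal sketch using the bluepy client library; bluepy, the address and the UUID are assumptions for illustration, not part of the original setup.

from bluepy.btle import Peripheral

DEVICE_ADDR = 'AA:BB:CC:DD:EE:FF'                    # hypothetical peripheral address
CHAR_UUID = '12345678-1234-5678-1234-56789abcdef1'   # hypothetical characteristic UUID

p = Peripheral(DEVICE_ADDR)
try:
    # Discover the characteristic by UUID; its handle may differ per connection.
    ch = p.getCharacteristics(uuid=CHAR_UUID)[0]
    print('value handle for this connection:', ch.getHandle())
    print('current value:', ch.read())
finally:
    p.disconnect()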
My understanding of the CSA Matter (aka Project CHIP) spec (section 11.16) is that nodes on the Matter fabric sync the current time using the Time Synchronization (TimeSync) cluster.
If true, it seems like there should be a way to use Matter to get the current real time from within my app code. The Matter SDK includes various headers and functions for dealing with time. SystemClock().GetClock_RealTime() seems promising, but when I call it I only seem to get the number of microseconds since device boot rather than the real time. ToTimeval() in the same header also looks promising for converting to a timeval, but that declaration is not defined on ESP32, at least in my version.
I'm using the jakubdybczak/esp32-arduino-matter Arduino port of the Espressif Matter SDK.
I also tried accessing the time service on Node 0, but the returned attribute is always null:
// root endpoint (endpoint 0) of the node
endpoint_t *root_endpoint = endpoint::get(node, 0);
esp_matter::cluster_t *time_cluster = cluster::create(root_endpoint, TimeSynchronization::Id, CLUSTER_FLAG_CLIENT);
// in this example I'm looking at TimeZone, but the same thing happens for LocalTime::Id
attribute_t *timezone_attribute = attribute::get(time_cluster, TimeSynchronization::Attributes::TimeZone::Id); // always NULL
esp_matter_attr_val_t timezone_value = esp_matter_invalid(NULL);
attribute::get_val(timezone_attribute, &timezone_value);
What is the proper way to use Matter to get the current real time?
I've encountered this problem a few times and now I wonder what the industry best practice is. The context: we have a data store which aggregates pieces of information taken from multiple microservices, and the data comes to us through messages broadcast by every source whenever there is a change.
The problem is how to guarantee that our data will be eventually consistent and that updates are applied in the order they were meant to be received. For example, let's say we have an entity User:
User {
  display_name: String,
  email: String,
  bio: String
}
We are listening for changes on those users to keep "display_name" updated in our data store. The messages come in a format such as:
{
  event: "UserCreated",
  id: 1000,
  display_name: "MyNewUser"
}
{
  event: "UserChanged",
  id: 1000,
  display_name: "MyNewUser2"
}
There is a scenario where "UserChanged" reaches our listeners before "UserCreated"; our code then won't be able to find the user with id 1000 and both transactions fail. This is where a mechanism to order those two is desired. We have considered:
Timestamps: The problem with timestamps is that although we know the last time we read an update, we don't know how many events happened between the last event seen and the one we are currently processing.
Sequence numbers: This is slightly better, but if a sequence number is lost we won't update our storage unless we relax the rules a little bit; we could say that if a sequence number hasn't been seen after some time, we proceed with the rest of the operations (a minimal sketch of this buffering idea follows below).
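To make the sequence-number option concrete, here is a minimal sketch of a per-entity reorder buffer: an event is applied only when it carries the next expected sequence number for its entity, otherwise it is parked until the gap fills. The "seq" field and the apply callback are assumptions for illustration.

from collections import defaultdict

next_seq = defaultdict(lambda: 1)   # next expected sequence number per entity id
parked = defaultdict(dict)          # entity id -> {seq: event} held out of order

def handle(event, apply):
    entity, seq = event["id"], event["seq"]
    if seq < next_seq[entity]:
        return                       # stale or duplicate event; ignore it
    if seq > next_seq[entity]:
        parked[entity][seq] = event  # arrived early; wait for the gap to fill
        return
    apply(event)
    next_seq[entity] += 1
    # Drain any parked events that are now in order.
    while next_seq[entity] in parked[entity]:
        apply(parked[entity].pop(next_seq[entity]))
        next_seq[entity] += 1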
If anyone knows common design patterns that tackle this sort of issue, that would be great to know; I'm also open to suggestions on data modeling, etc. Bottom line, I'm pretty sure this is a common software problem that has been solved many times before.
Thanks a lot for the help!
My first thought here would be to jump directly to a sequence-number-based approach, but that works when you have one-to-one communication, as in TCP-oriented communication. In your case it is many-to-one, so without coordination between the senders it would be challenging to implement this approach correctly (e.g. two senders could use the same sequence number).
Yes, losing messages would be problematic, but I don't think that's the case with SQS or other cloud-based message queues (of course, it depends on the scale you're working at), because they're known for duplicating data rather than losing it (AFAIK).
One idea I can think of right now is to add a new layer between the senders and the consumer which will orchestrate the events. It could be the consumer itself, but it could also be another service in front of it; let's call it the orchestrator.
The orchestrator is connected to each sender (individually) via two queues:
The first queue is used to get the actual events from the sender
The second queue is used to signal back to the sender an ACK-like event (the message has been received, validated, and successfully passed downstream to the consumer, or consumed directly).
The way it works is the following:
The orchestrator gets an event from sender A.
It tries to execute a validation-like operation specific to the message (e.g. an update on a non-existent user); the operation fails, so it sends a NACK message back to sender A, signaling that its message could not be processed successfully. Sender A will try to resend the message after some time.
In the meantime, it gets the "create user" message from sender B, and that message gets passed downstream to the consumer.
Finally, it will get the message from sender A (after some retries).
This solution ensures message ordering in a pretty basic way, without keeping the events in memory; they stay in the queues instead. It may work, but it depends on a lot of factors, like the number of events, the number of senders, etc.
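A rough sketch of that orchestrator loop, with in-process queues standing in for the per-sender queues and the validation and downstream steps reduced to callbacks (all names here are illustrative assumptions):

import queue

event_q = queue.Queue()   # events arriving from the senders: (sender_id, event)
ack_q = queue.Queue()     # ACK/NACK replies routed back to the senders

def orchestrate(validate, pass_downstream):
    while True:
        sender, event = event_q.get()
        if validate(event):              # e.g. check that the referenced user exists
            pass_downstream(event)
            ack_q.put((sender, {"ack": True, "event": event}))
        else:
            # NACK: the sender is expected to retry the message after some delay.
            ack_q.put((sender, {"ack": False, "event": event}))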
Currently I use the AcknowledgingMessageListener to implement a Kafka consumer with Spring Kafka. This implementation helps me listen on a specific topic and process messages with a manual ack.
I now need to build the following capability:
Let us assume that, because of some environmental exception or some bad data entering via this topic, I need to replay data on a topic from one specific offset to another. This would be a manual trigger (mostly via the execution of a Java class).
It would be ideal if I could retrieve the messages between those offsets and feed them into a replay topic so that a new consumer can process those messages, thus keeping the offsets intact on the original topic.
ConsumerSeekAware interface: if this is the answer, how can I trigger it externally, via, let's say, mvn -Dexec? I am not sure if this is even possible.
Also, let's say I have a crash timestamp; is it possible to introspect the topic to find the offset corresponding to the crash so that I can replay from that offset?
Can I find offsets corresponding to some specific data so that I can replay those specific offsets?
All of these requirements are towards building a resilience layer around our Kafka capabilities. I need all of these to be managed by a separate executable class that can be triggered manually, providing the relevant data (like timestamps, etc.). This class should determine the offsets, seek to them, retrieve the messages corresponding to those offsets, and post them to a separate topic. Can someone please point me in the right direction? I'm afraid I'm going around in circles.
so that a new consumer can process those messages thus keeping the offsets intact on the original topic.
Just create a new listener container with a different group id (new consumer) and use a ConsumerAwareRebalanceListener (or ConsumerSeekAware) to perform the seeks when the partitions are assigned.
Here is a sample CARL that seeks all assigned topics based on a timestamp.
You will need some mechanism to know when the new consumer should stop consuming (at which time you can stop() the new container). Maybe set max.poll.records=1 on the new consumer so it doesn't prefetch past the failure point.
I am not sure what you mean by #3.
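For the timestamp-to-offset part, the consumer API can do the lookup directly via offsetsForTimes. Here is a minimal sketch with the kafka-python client rather than Spring Kafka, purely to illustrate the mechanics (broker address, topic names, timestamp and end offset are assumptions):

from kafka import KafkaConsumer, KafkaProducer, TopicPartition

BROKERS = 'localhost:9092'          # assumed broker address
SOURCE_TOPIC = 'orders'             # assumed source topic
REPLAY_TOPIC = 'orders.replay'      # assumed replay topic
CRASH_TS_MS = 1700000000000         # assumed crash timestamp (epoch millis)
END_OFFSET = 42000                  # assumed offset at which to stop replaying

consumer = KafkaConsumer(bootstrap_servers=BROKERS, enable_auto_commit=False)
producer = KafkaProducer(bootstrap_servers=BROKERS)

tp = TopicPartition(SOURCE_TOPIC, 0)
consumer.assign([tp])

# Translate the crash timestamp into the earliest offset at or after it
# (offsets_for_times returns None for a partition with no later message).
start = consumer.offsets_for_times({tp: CRASH_TS_MS})[tp]
consumer.seek(tp, start.offset)

# Copy the affected range onto the replay topic; original offsets stay untouched.
done = False
while not done:
    for records in consumer.poll(timeout_ms=1000).values():
        for rec in records:
            if rec.offset > END_OFFSET:
                done = True
                break
            producer.send(REPLAY_TOPIC, rec.value)

producer.flush()
consumer.close()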
I have a DICOM C-Store SCP application which receives DICOM images from my other C-Store SCU application. My SCU always sends one (and only one) complete study (all images from the given study) on one association, so the SCP always knows that all images received from the SCU belong to a single study. I know I could also check the StudyIUID, but that is not my point of interest here.
I want to know the total number of images in the study that is being transferred. Using this data, I want to display a status like "Received 3 of 10 images..." on screen. I can count the images received (3 in this case), but how can I know the total number of images in the study being transferred (10 in this case)?
Workaround:
On receiving the first C-STORE request on the SCP, I could read the StudyIUID, establish a new association with the SCU (which would also have to support Q/R SCP capabilities in this case), and get the total count of images in the study using C-FIND.
Limitations:
The SCU must also support Q/R SCP features.
The SCU must include the image count in its C-FIND response.
The SCU must always send all images from only one study per association.
I can easily overcome these limitations if I write the SCU (with Q/R SCP capabilities) myself, but my SCP also receives images from third-party SCUs that may not implement the necessary features.
Is there any DICOM-compliant solution you can suggest?
Is this possible using MPPS? I have not worked with the MPPS part of DICOM yet.
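For reference, the C-FIND half of the workaround above might look roughly like this with pynetdicom, asking the peer for NumberOfStudyRelatedInstances (the toolkit, host, port and AE titles are assumptions, and the peer is free not to return the count):

from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

def count_study_instances(study_iuid, peer_host, peer_port, peer_aet):
    ae = AE(ae_title='MYSCP')
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

    query = Dataset()
    query.QueryRetrieveLevel = 'STUDY'
    query.StudyInstanceUID = study_iuid
    query.NumberOfStudyRelatedInstances = ''   # ask the peer to fill this in

    total = None
    assoc = ae.associate(peer_host, peer_port, ae_title=peer_aet)
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(
                query, StudyRootQueryRetrieveInformationModelFind):
            if status and status.Status in (0xFF00, 0xFF01) and identifier:
                total = identifier.get('NumberOfStudyRelatedInstances')
        assoc.release()
    return total   # None if the peer did not answer or omitted the attribute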
Conclusion:
The accepted answer (kritzel_sw) suggests a very good solution (using MPPS) with only one drawback: MPPS is not a mandatory service for every SCU. MPPS applies only to SCUs that actually acquire the images, i.e. modalities. Not all modalities support MPPS out of the box, either; the feature may need to be unlocked with additional license cost and configuration. Also, there are lots of scenarios where modalities push instances to some intermediate workstation, and the workstation pushes them further to the SCP.
Maybe I need to look into a combination of DICOM and non-DICOM ways out.
Good question, but no simple answer.
Expecting a Storage SCU to support the C-FIND-SCP as well is not going to work well in practice unless you are referring to archive servers / VNAs.
MPPS is not a bad idea. All attributes (Study, Series, SOP Instance UID) you need are mandatory, so it should be valid to rely on them. "Should" because I have seen vendors violating these constraints.
However, how can you be sure that the SCU has received the complete study? Maybe the study consists of CT and MR series, but the SCU sending the images to you only conforms to CT and refuses to receive the MRs.
You might want to consider the Instance Availability Notification service, another service class with which information about "who has got which image" can be made available to other systems. This would actually do exactly what you need, because you would know in advance for each AET ("device") which images are available there. But this service is not widely supported in practice.
And even if you really know which images are available on the system that is sending the study to you, how can you be sure that there is no user sitting in front of it who has just selected a subset of the study for sending?
Sorry that I cannot provide a "real solution", but for the reasons mentioned above I am not aware of any real-world system which supports the functionality (progress bar) you are describing.
I want my MC52i to auto-accept an incoming call. If I use AT commands to answer manually (ATA), it works fine, but I am not able to force the modem to auto-accept an incoming call. On other devices this works with ATS0=1, but not on the MC52i. Could it have something to do with GPRS mode?
What type of call are you trying to accept, voice or CSD? Setting S0 ought to be enough for this. Try enabling AT+CRC=1 and examine the +CRING: <type> unsolicited result code (see 27.007 for details). Does it fail to auto-answer all incoming calls? Try to have it auto-answer both voice calls and data calls. Try calling from PSTN, ISDN and a mobile phone (both the same operator and a different operator; try several different phone models). If it fails to answer in all those cases, then you probably have to write off auto-answer as a possibility. Oh, and by the way, also try several different SIM cards (from more than one operator) in the modem to rule out problems with the operator/subscription.
I have probably given you enough options to tweak that testing every single combination is neither feasible nor useful, but pick some variations of all of them and set up at least 20 different test cases; a small serial-port harness for trying these is sketched below.
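Here is a small pyserial harness for running through those variations; the device path, baud rate and the exact commands to test are assumptions to adapt to your setup:

import serial

PORT = '/dev/ttyUSB0'   # assumed serial device for the MC52i
BAUD = 115200           # assumed baud rate

def send(ser, cmd):
    ser.write((cmd + '\r').encode())
    return ser.read(256).decode(errors='replace')

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    print(send(ser, 'ATS0=1'))    # ask the modem to auto-answer after one ring
    print(send(ser, 'ATS0?'))     # verify the setting actually stuck
    print(send(ser, 'AT+CRC=1'))  # enable extended ring indications (+CRING: <type>)

    # Place test calls and watch the unsolicited result codes scroll by.
    while True:
        line = ser.readline().decode(errors='replace').strip()
        if line:
            print(line)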
Although it is very unlikely, I'll mention the following for completeness and as background on one of the many reasons why testing with several different operators is important:
There could be a problem if the network does not include call type information in the Bearer Capability of the SETUP message, in which case the phone does not know how to answer the call. This is very unlikely today, but several years ago some networks could behave that way. Because of this, phones used to have a "receive next call as" setting to determine how to behave in that situation. I assume all newer phones simply ignore this scenario (it was applicable back in the days when Ericsson made mobile phones under their own brand; at least I remember seeing this configuration option in their single-menu-style phones like the T28. I do not remember whether it survived the conversion to the icon-based menus).