Support for selecting the Kaa application key during execution

I am creating multiple clients that access the same Kaa application. Think of it as a fork of a process. As far as I understand, the Kaa server differentiates between clients based on a hash derived from the Kaa public key. So essentially, to have multiple clients, I need multiple Kaa public/private key pairs.
Now the question: since the key is picked up by default and there is no option to select a key for an application (unless I have missed such an update), how can this be achieved? I could add that support to the Kaa SDK on the client side myself, but I am afraid a future update would invalidate my code, or that maintaining it across Kaa version upgrades would take extra effort.
Touching the Kaa SDK is my last resort, so is there any other way Kaa could differentiate between clients?

To answer the question about the use case raised in another thread (https://groups.google.com/forum/#!topic/kaaproject/qwjVIWBMp8M): when we run a Kaa application, the Kaa public/private key pair is generated in the folder from which the application is run. My use case is this:
I am building a Node-RED interface for a Kaa application. There can be multiple Node-RED nodes, and each will be a separate client to the Kaa server while using the same Kaa client application. To keep these clients separate, I need to create a separate public/private key pair for each. Since we are not executing the Kaa application binary directly here, which is the usual case, how do I make sure a separate Kaa key pair is generated per node? This is not the typical scenario from the Kaa demo apps, but it is an interesting one. If more detail would help, I can share my initial code on GitHub.

Please use the new documentation. You can resolve this problem with the KaaClientProperties class, where you can specify PrivateKeyFileName and PublicKeyFileName. You can also generate your own key pairs for your clients. Give the key files different names, and you can run all your clients in the same folder without conflicts.
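For illustration only: the "one key pair per client" idea can be pre-staged outside the SDK by generating distinctly named RSA key files up front. The sketch below uses Python's cryptography package; the file naming scheme is a placeholder that must match whatever each client passes as PrivateKeyFileName/PublicKeyFileName, and you should confirm against the Kaa documentation which key encoding your SDK version expects.

```python
# Illustrative sketch: pre-generate one RSA-2048 key pair per client into
# distinctly named files. File names are placeholders and must match what
# each client passes as PrivateKeyFileName/PublicKeyFileName; check the Kaa
# docs for the exact key encoding your SDK version expects.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def generate_key_pair(client_id):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    with open(f"key.private.{client_id}", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.DER,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))
    with open(f"key.public.{client_id}", "wb") as f:
        f.write(key.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        ))

# One pair per Node-RED node acting as a Kaa client.
for node in ("node-1", "node-2", "node-3"):
    generate_key_pair(node)
```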

Verifying Confluent Kafka access rights programmatically

I'm trying to figure out if there is any way to programmatically check if provided credentials have read and write access to different Kafka topics using the Confluent Kafka lib for .NET.
What I want to do is basically a smoke test at system startup to verify that the given credentials are correct, e.g. when deploying to various environments with different settings.
Setting up an entire consumer or producer and then actually reading or writing data seems hacky and expensive.
I thought that maybe there could be something in e.g. the AdminClient that allows verifying this, but I don't see anything that hints in that direction.
Yes, there is an AdminClient implementation. Please check out the following AdminClient example for .NET in the GitHub repository:
https://github.com/confluentinc/confluent-kafka-dotnet/blob/master/examples/AdminClient/Program.cs
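A minimal smoke test along those lines, sketched here with the Python confluent-kafka client for brevity (the .NET client exposes an equivalent AdminClient); broker address and credentials are placeholders:

```python
# Minimal startup smoke test with the confluent-kafka AdminClient.
# Broker address and SASL credentials are placeholders.
from confluent_kafka.admin import AdminClient

admin = AdminClient({
    "bootstrap.servers": "broker.example.com:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "svc-user",
    "sasl.password": "secret",
})

# list_topics() forces a metadata round trip, so it fails fast if the
# credentials cannot authenticate against the cluster.
metadata = admin.list_topics(timeout=10)
print("Authenticated; visible topics:", sorted(metadata.topics))
```

One caveat: a successful metadata request proves the credentials can authenticate and describe the cluster, but topic-level read/write authorization is only truly proven by an actual produce/consume or by inspecting the ACLs (which itself requires sufficient rights).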

OpenStack Cluster Event Notification

So far, based on my understanding of the OpenStack Python SDK, I am able to read the hypervisors and server instances; however, I do not see an API for receiving and handling change notifications/events for operations that happen on the cluster, e.g. a new VM is added, an existing VM is deleted, etc.
There is a similar old post (circa 2016), and I am curious whether there have been any changes in notification handling since then:
Notifications to external systems from openstack
I see documentation that talks about emitting notifications over a message bus to indicate different events that occur within the service:
https://docs.openstack.org/ironic/latest/admin/notifications.html
I have the following questions:
Does Openstack Python SDK support notification APIs?
How do I receive/monitor notifications for VM related changes?
How do I receive/monitor notifications for compute/hypervisor related changes?
How do I receive/monitor notifications for Virtual Switch related changes?
I see other posts, such as Notifications in openstack, that recommend the Ceilometer project, which uses a database. Is there a more lightweight solution than a completely separate service like Ceilometer?
Thanks in advance for your help in this regard.
As far as I can see and know, the OpenStack SDK does not provide such a function.
Ceilometer will not help you either. It only collects data by polling and via notifications over RPC, so you would still have to poll the data from Ceilometer yourself. Besides that, Ceilometer alone has the problem that its database only grows and will eventually blow up, which is why you should pair it with Gnocchi if you use Ceilometer.
At the moment I see only three possible solutions for you:
Write your own tool, which runs permanently in the background and collects the data at a regular interval via the OpenStack SDK and REST API requests (a polling sketch follows below).
Write something that does the same as Ceilometer by receiving notifications over oslo.messaging (RPC). See the oslo_messaging_notifications section in the configs: https://docs.openstack.org/ocata/config-reference/compute/config-options.html#id35 (Neutron has such an option as well) and use messagingv2 as the driver, like Ceilometer does; a minimal listener sketch follows this list. Be aware, though, that not every event creates a notification. The Ceilometer meter data gives a good overview of which events create notifications and which can only be collected by polling: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html. The number of notification events is really low, so it may not cover all the events you want.
Use log as the driver in the oslo_messaging_notifications section of the configs to write the notifications to a log file, and write a simple program that reads the log file and processes or forwards its content. The same caveat as in option 2 applies: not every event creates a notification (here, a log entry). There is a further problem: the notifications, and hence the event logs, are created on the compute nodes (as far as I know), so your tool would have to watch all compute nodes.
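A minimal listener for option 2, sketched with oslo.messaging; the transport URL is a placeholder, and the topic may be notifications or versioned_notifications depending on the service and its configuration:

```python
# Minimal oslo.messaging notification listener (option 2). Transport URL
# and topic are placeholders; match them to your
# oslo_messaging_notifications configuration.
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://user:secret@controller:5672/")
targets = [oslo_messaging.Target(topic="versioned_notifications")]

class NotificationEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # e.g. event_type == "instance.create.end" after a VM was created
        print(event_type, publisher_id)

listener = oslo_messaging.get_notification_listener(
    transport, targets, [NotificationEndpoint()], executor="threading")
listener.start()
listener.wait()
```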
Since I do not know how much work it would be to write a tool that collects notifications over RPC, and since I do not know whether all the events you want to watch actually create notifications (based on the overview here: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html), I would prefer option 1.
It is the easiest way: create a tool that issues GET requests against the REST API at a regular interval and forwards the results to the desired destination as your own custom notifications.
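A sketch of that polling approach with openstacksdk, assuming a clouds.yaml entry named mycloud; the interval and the printed "events" are placeholders for whatever forwarding you need:

```python
# Option 1 sketch: poll the compute API and synthesize add/delete events
# by diffing successive snapshots. "mycloud" must match a clouds.yaml
# entry; the 30-second interval is arbitrary.
import time
import openstack

conn = openstack.connect(cloud="mycloud")
known = {}

while True:
    current = {s.id: s.name for s in conn.compute.servers()}
    for sid in current.keys() - known.keys():
        print("VM added:", current[sid])    # forward as a custom event
    for sid in known.keys() - current.keys():
        print("VM deleted:", known[sid])    # forward as a custom event
    known = current
    time.sleep(30)
```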
I got this working by following the references below. I also chatted with the author of this code and video.
https://github.com/gibizer/nova-notification-demo/blob/master/ws_forwarder.py
https://www.youtube.com/watch?v=WFq5JWXa9AM
In addition, I faced another issue: by default, the OpenStack server does not allow connections to the RabbitMQ bus from a remote host because of an iptables rule. You will have to open the RabbitMQ port in the iptables configuration.

Understanding the Pact Broker Workflow

Knowing full well that there are many workflows for the different ways of integrating Pact, I'm trying to visualize what a common workflow looks like. I developed this swimlane diagram for the Pact Broker workflow.
How do we run a Provider verification on an older Provider build?
How does this change with tags?
When does the webhook get created back to the Provider?
What if different Providers have different base urls (i.e. build systems)?
How does a new Provider build alert about the Consumers if the Provider fails?
Am I thinking about this flow correctly?
I've tried to collect my understanding from Webhooks, Using pact where the consumer team is different from the provider team, and Publishing verification results to a Pact Broker. Assuming I am thinking about the problem the right way and have not completely missed some documentation, I'd gladly write up recommended workflow documentation for the community.
Your swimlane diagram is a good picture of the workflow, with the caveat that once everything is set up, it's rare to manually start provider builds from the broker.
The provider doesn't ever notify the consumers about verification failure (or success) in this process. If it did, you could end up with circular builds.
I think about it like this:
The consumer tests create a contract (the Pact file).
This step also verifies that the consumer can work with a provider that fulfils that contract (using the mock provider).
Then the consumer gives this Pact file to the broker (if configured to do so).
Now that there is a new pact, the broker (if configured) can trigger a provider build.
The provider's CI infrastructure builds the provider, and runs the pact verification
The provider's CI infrastructure (if configured) tells the broker about the verification result.
The broker and the provider's build system are the only bits that know about the verification result - it isn't passed back to the consumer at the moment.
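To make step 3 above concrete: publishing a pact is a plain HTTP PUT against the broker. Here is a sketch in Python with requests; the broker URL, participant names, version, and file path are placeholders, and in practice the pact CLI or a build plugin usually does this for you:

```python
# Hypothetical publish step: PUT the pact generated by the consumer tests
# to the broker's pact resource for this consumer/provider/version.
# URL, names, version, and path are placeholders.
import json
import requests

with open("pacts/consumer-provider.json") as f:
    pact = json.load(f)

resp = requests.put(
    "https://broker.example.com/pacts/provider/Provider"
    "/consumer/Consumer/version/1.0.0",
    json=pact,
)
resp.raise_for_status()
print("Pact published:", resp.status_code)
```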
A consumer passing its tests means the consumer can say: "I've written this communication contract and confirmed that I can hold up my side of it." Failure to verify the contract at the provider end doesn't change that statement.
However, if the verification succeeds, you may want to trigger a consumer deployment. As Beth Skurrie (one of the primary contributors to Pact) points out in the comments below:
Communicating the status of the verification back to the consumer is actually a highly important thing, as it tells the consumer whether or not they can be deployed safely. It is the missing part of the pact workflow at the moment, and I'm working away as fast as I can to rectify this.
Currently, since the verification status is information you might like to know about, especially if you're unable to see the provider's CI infrastructure, you might want to check out the Pact build badges, which are a lighter-weight way of querying the broker.

Use Service Bus as a push notification workaround (excluding Notification Hub)

I'm developing an on-site WinRT application and would like to send push notifications when a new update is available (since it's an on-site app, the installation process uses a custom loader rather than the Windows Store).
However, I'm trying to wrap my head around how such a system should ideally function.
I could obviously create a service that returns the latest version number, and have the app periodically poll it for that info. That would be easy to implement, but it seems like a very ugly approach (constantly polling for the latest version doesn't feel elegant).
I have the power of Azure at my disposal. Obviously a Notification Hub would be the preferred way. HOWEVER, I have no intention of getting a Windows Store account purely to develop a "private" on-site application. So using the Notification Hub is a no-go.
I've thought about using topics. This WOULD work, but every client would probably need to be added as a subscriber. While technically possible, the administrative overhead could be a major issue. I also thought about having the WinRT application create the subscriptions dynamically itself. However, the SDK for this (Azure Messaging Managed) is too old and always throws an error when creating subscribers (an invalid date format, which was apparently changed at some point). The only current package from Microsoft seems to target the full .NET Framework, not the limited WinRT counterpart.
I'm looking for a way around this problem: either a newer, proper SDK (one that can handle listing and creating subscriptions as well as receiving topic messages), or perhaps a completely different approach to the problem.
Note - I don't need any code just yet. Pseudocode or simply a description of how the communication would work should suffice.
For a limited-scale deployment, Service Bus topics may well be a feasible choice. Or you may want to take a look at IoT Hub as an alternative push notification channel.
The new Service Bus client SDK for .NET Standard lives in this repo, but we do not build binaries yet: https://github.com/Azure/azure-service-bus-dotnet
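The per-client subscription pattern itself is straightforward; here it is sketched with the current Python azure-servicebus SDK (v7+) for brevity, since the .NET Standard SDK linked above exposes equivalent management and receive operations. Connection string, topic name, and client id are placeholders:

```python
# Per-client subscription pattern: each installation creates its own
# subscription on first run, then blocks waiting for update notices.
# Connection string, topic, and client id are placeholders.
from azure.core.exceptions import ResourceExistsError
from azure.servicebus import ServiceBusClient
from azure.servicebus.management import ServiceBusAdministrationClient

CONN = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."
TOPIC = "app-updates"
CLIENT_ID = "machine-0001"  # unique per installation

mgmt = ServiceBusAdministrationClient.from_connection_string(CONN)
try:
    mgmt.create_subscription(TOPIC, CLIENT_ID)
except ResourceExistsError:
    pass  # this client is already registered

with ServiceBusClient.from_connection_string(CONN) as client:
    with client.get_subscription_receiver(TOPIC, CLIENT_ID) as receiver:
        for msg in receiver:
            print("Update available:", str(msg))
            receiver.complete_message(msg)
```

This moves the administrative overhead to the client side: no one has to register subscribers by hand, at the cost of letting every client hold management rights on the namespace, which may or may not be acceptable for an on-site deployment.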

How can I send data updates to Meteor server from a desktop application?

I'm just getting into using Meteor, and yesterday I managed to get a Leaflet map running with custom tiles. My goal is to get player positional data from a game and send it to a Meteor server to distribute to other players viewing the map in real time.
The data is available to a small desktop application on the player's machine and Meteor can easily handle the distribution part, so all I'm missing is getting the desktop application to talk to the Meteor server. What would be the best way to go about this? Is there a way to get Meteor to listen for incoming data from an external source?
You can communicate directly with a Meteor server using its native Distributed Data Protocol (DDP). You can find the specification document here, and an up-to-date Node driver here. Some searching may turn up implementations in other languages.
Alternatively, you could use server-side routing in Iron Router to allow clients to use HTTP to POST/PUT their positions. The drawback of this solution is that you may need to come up with some way for clients to uniquely identify themselves (e.g. using a unique key) so you don't get bogus data.
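From the desktop side, the HTTP option is just a periodic POST. A sketch in Python; the route path, payload fields, and per-player key are hypothetical and must match whatever server-side route you define in Iron Router:

```python
# Hypothetical desktop-side updater for the HTTP option. Endpoint path,
# payload fields, and API key are placeholders that must match the
# server-side route defined in Iron Router.
import requests

METEOR_URL = "https://game-map.example.com/api/position"  # placeholder route
API_KEY = "per-player-secret"                             # placeholder key

def send_position(player_id, x, y):
    resp = requests.post(
        METEOR_URL,
        json={"playerId": player_id, "x": x, "y": y, "key": API_KEY},
        timeout=5,
    )
    resp.raise_for_status()

send_position("player-42", 1024.5, 768.0)
```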
