Cloud service monitor changes - ASP.NET

I have a cloud service, and in the Azure portal view of the cloud service there is a tab called Monitor where you can add settings (like key/value pairs). I want to track whether some user made changes there, and when, like auditing. Is there a process I should invoke to track these changes, with user name and timestamps?

If I'm not mistaken, what you're looking for is the Operation Logs functionality. It is available in the Azure Portal (https://manage.windowsazure.com). Once you log in to the portal, click on MANAGEMENT SERVICES and then OPERATION LOGS. The operation you would want to track is Change Configuration (or something like that).
If you want to track it programmatically, the Service Management API operation you would want to invoke is List Subscription Operations.
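As an illustration, here is a minimal sketch of calling List Subscription Operations from Python; the subscription ID, time window, certificate path, and API version header value below are placeholders/assumptions:

```python
# Minimal sketch: query the classic Service Management API
# "List Subscription Operations" for a time window. Assumes a
# management certificate (client.pem) is uploaded to the subscription.
import requests

SUBSCRIPTION_ID = "your-subscription-id"
url = f"https://management.core.windows.net/{SUBSCRIPTION_ID}/operations"

resp = requests.get(
    url,
    params={"StartTime": "2014-01-01T00:00:00Z",   # window to audit
            "EndTime": "2014-01-31T00:00:00Z"},
    headers={"x-ms-version": "2013-08-01"},        # required version header
    cert="client.pem",                             # management certificate (PEM)
)
resp.raise_for_status()
# The XML response lists <SubscriptionOperation> elements, each with the
# operation name, status, caller information, and timestamps.
print(resp.text)
```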

I believe the management portal only makes available a finite list of performance counters for monitoring. And it sounds like perhaps you are trying to add a custom event, which you would not be able to add to that list.
So the broader question then turns to how to log in-app events in a way that you can monitor for. To that end, I'd recommend first looking at Azure Diagnostics. This gives you a fairly low-impact way to capture telemetry about your application. You can then scan the resulting logs and act on the events you need to capture.

Related

Forcing an "Offline" Mode in Firestore

We are building an app for our teams out in the field to collect their daily information using Firebase. However, one of our concerns is poor connectivity. We are looking to build an Online/Offline button they can click to essentially work offline for when things slow down. We've built a workflow in which we query all the relevant information from Firestore.
I wanted to know if there was a way to tell Firestore to work directly on the cache only and not try to hit the servers directly. I don't want Firestore attempting to make server calls until they enable online again.
You shouldn't need to do this. If you use realtime listeners, they will already first return the data from the local cache, and only then reach out to the server to check for updates.
If you are performing one-time reads, the SDK will by default try to reach the server first (since it has only one chance to give you a value). If you want it to only check the local cache, you can pass an argument to the get call to do so.
You can also disable the network completely, in which case the client will never call on the network and only serve from the local cache. I recommend reading about that and more in the documentation on using Firestore offline.

OpenStack Cluster Event Notification

So far, based on my understanding of the OpenStack Python SDK, I am able to read the hypervisors and server instances; however, I do not see an API to receive and handle change notifications/events for the operations that happen on the cluster, e.g. a new VM is added, an existing VM is deleted, etc.
There is a similar old post (circa 2016), and I am curious whether there have been any changes in notification handling since:
Notifications to external systems from openstack
I see documentation that talks about emitting notifications over a message bus to indicate different events that occur within the service.
https://docs.openstack.org/ironic/latest/admin/notifications.html
I have the following questions:
Does Openstack Python SDK support notification APIs?
How do I receive/monitor notifications for VM related changes?
How do I receive/monitor notifications for compute/hypervisor related changes?
How do I receive/monitor notifications for Virtual Switch related changes?
I see other posts, such as Notifications in openstack, and they recommend using the Ceilometer project, which uses a database. Is there a more lightweight solution than using a completely different service like Ceilometer?
Thanks in advance for your help in this regard.
As far as I know, the OpenStack SDK doesn't provide such a function.
Ceilometer will also not help you. It only collects data by polling and via notifications over RPC, so you would still have to poll the data from Ceilometer yourself. Besides this, Ceilometer alone has the problem that its data only grows and will blow up your database; that's why you should also use Gnocchi when you use Ceilometer.
At the moment I see only three possible solutions for you:
Write your own tool that runs permanently in the background and collects the data at regular intervals via OpenStack SDK and REST API requests (see the polling sketch after this list).
Write something that does the same as Ceilometer by receiving notifications over oslo.messaging (RPC); see the listener sketch after this list. See the oslo_messaging_notifications section in the configs: https://docs.openstack.org/ocata/config-reference/compute/config-options.html#id35 (neutron also has such an option) and use messagingv2 as the driver, like Ceilometer does. But be aware that not every event creates a notification. The list of Ceilometer meter data should give a good overview of which events create a notification and which can only be collected by polling: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html. The number of notification events is really low, so it's possible that it doesn't provide all the events you want.
Use log as the driver in the oslo_messaging_notifications section of the configs to write the notifications to a log file, and write a simple program to read the log file and process or forward its content. The same problem as in option 2 applies: not every event creates a notification (a log entry, in this case). There is also the problem that the notifications, and thus the event logs, are created on the compute nodes (as far as I know), so your tool would have to watch all compute nodes.
Given that I don't know how much work it would be to write a tool that collects notifications over RPC, and because I don't know whether all the events you want to watch really create a notification (based on the overview here: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html), I would prefer option 1.
It's the easiest way: create a tool that runs GET requests over the REST API at regular intervals and forwards the results to the desired destination as your own custom notifications.
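For option 1, a minimal polling sketch with the OpenStack SDK could look like the following; the cloud name ("mycloud", from clouds.yaml) and the 30-second interval are assumptions:

```python
# Minimal sketch of option 1: poll the compute API at a regular interval
# and diff against the previous snapshot to derive "VM added/deleted" events.
import time

import openstack

conn = openstack.connect(cloud="mycloud")  # named cloud from clouds.yaml
known = {}  # server id -> server name from the previous poll

while True:
    current = {s.id: s.name for s in conn.compute.servers()}
    for sid in current.keys() - known.keys():
        print("VM added:", current[sid])    # forward as your custom notification
    for sid in known.keys() - current.keys():
        print("VM deleted:", known[sid])
    known = current
    time.sleep(30)
```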
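For option 2, here is a minimal oslo.messaging notification listener sketch; the transport URL, the versioned_notifications topic, and the pool name are assumptions, and the services must be configured with messagingv2 as the notification driver:

```python
# Minimal sketch of option 2: receive notifications over oslo.messaging.
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://openstack:secret@controller:5672/")

class Endpoint(object):
    # Called for notifications with priority "info",
    # e.g. event_type "instance.create.end" when a new VM is added.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type, publisher_id)

listener = oslo_messaging.get_notification_listener(
    transport,
    targets=[oslo_messaging.Target(topic="versioned_notifications")],
    endpoints=[Endpoint()],
    executor="threading",
    pool="my-notification-tool",  # own pool so other consumers still get a copy
)
listener.start()
listener.wait()
```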
I followed the references below to get this working, and also chatted with the author of this code and video.
https://github.com/gibizer/nova-notification-demo/blob/master/ws_forwarder.py
https://www.youtube.com/watch?v=WFq5JWXa9AM
In addition, I faced other issues:
By default, the OpenStack server would not allow me to connect to the RabbitMQ bus from a remote host because of an iptables rule. You will have to allow access to the RabbitMQ port (5672 by default) in the iptables rules.

Filter insights by server

I'm using Application Insights to get telemetry data off of my servers, but there is one server linked to my Insights instrumentation key that I don't have access to, and it is giving me all sorts of bogus data. How do I filter out all of the telemetry from that server so that it doesn't appear alongside any of my other telemetry responses?
I know I can go into Analytics and filter servers out there, but I'm talking about the main metrics page: the one that sends out alerts, shows a general overview of the servers, etc.
In short, there's no easy way to do it. You'd have to edit the filters on every part in every place in order to customize them, pin those customized versions to dashboards, and never look at the default experiences again.
How is there a server sending telemetry with your key that you don't have access to? Is it somewhere else where someone made a copy/paste error and somehow used your key?
I believe you can contact support and have them generate a new instrumentation key for that resource. Then you'd update the places you do have under your control to use the new ikey, and the thing not under your control would still be using the old, now invalid key.

What's the recommended way to handle microservice processing bugs and new insights?

Before I get to my question, let me sketch out a sample set of microservices to illustrate my dilemma.
Scenario outline
Suppose I have 4 microservices:
An activation service where features supplied to our customers are (de)activated. A registration service where members can be added and changed. A secured key service that can generate secure keys (in a multi-step process) for members, to be used when communicating about them with the outside world. And a communication service that is used to communicate about our members with external vendors.
The secured key service may, however, only request secured keys if that feature is activated. Additionally, the communication service may only communicate about members that have a secured key AND if the communication feature itself is activated.
Because they are microservices, each of the services has its own datastore and is completely self-sufficient. That is, any data that is required from the other microservices is duplicated locally and kept in sync by means of asynchronous messages from the other microservices.
The dilemma
I'm actually facing two main dilemmas. The first is (pretty obviously) data synchronization. When there are multiple data stores that need to be kept in sync, you have to account for messages getting lost or processed out of order. But there are plenty of out-of-the-box solutions for this, and if all else fails you could even fall back to some kind of ETL process to keep things in sync.
The main issue I'm facing however is the actions that need to be performed. In the above example the secured key service must perform an action when it either
Receives a message from the registration service for a new member when it already knows that the secured keys feature is active in the activation service
Receives a message from the activation service that the secured keys feature is now active when it already knows about members from the registration service
In both cases, a message from the external system must lead both to an update of the local copy of the data and to some logic that needs to be executed.
The question
Now to the actual question :)
What is the recommended way to cope with either bugs or new insights when it comes to handling those messages? Suppose there is a bug in the message handler from the activation service. The handler does update the internal data structure, but it fails to detect that there are already registered members and thus never starts the secure key generation process. Alternatively it could be that there's no bug, but we decide that there is something else we want the handler to do.
The system will have no reason to resubmit or reprocess messages (as the message didn't fail), but there's no real way for us to re-trigger the behavior that's behind the message.
I hope it's clear what I'm asking (and I do apologize if it should be posted on any of the other 170 Stack... sites, I only really know of StackOverflow)
I don't know what the recommended way is, but I know how this is done in DDD, and maybe this can help you, as DDD and microservices are friends.
What you have is a long-running, multi-step process that involves information from multiple microservices. In DDD this can be implemented using a Saga/Process manager. The Saga maintains local state by subscribing to events from both the registration service and the activation service. As the events come in, the Saga checks whether it has all the information it needs to generate secure keys by submitting a CreateSecureKey command. The events may come in any order and can even be duplicated, but this is not a problem, as the Saga can compensate for this.
In case of bugs or new features, you could create special scripts or other processes that search for a particular situation and handle it by submitting specific compensating commands, without reprocessing all the past events.
In the case of new features, you may even have to process old events that are now interesting for your business process. You do this in the same way, by querying the event source for the newly interesting old events and sending them to the newly updated Saga. After that import process, you subscribe the Saga to these newly interesting events, and the Saga continues to function as usual.
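As an illustration only (the event handler and command names below, such as on_member_registered and CreateSecureKey, are assumptions based on the scenario), such a Saga could be sketched like this; real saga state would of course have to be durable rather than in-memory:

```python
# Sketch of a Saga/process manager that correlates events from the
# registration and activation services and triggers key generation once
# both preconditions hold. Duplicated or out-of-order events are absorbed
# by the idempotence guard.
class SecureKeySaga:
    def __init__(self, command_bus):
        self.commands = command_bus
        self.feature_active = False
        self.members = set()          # members known from registration events
        self.keys_requested = set()   # idempotence guard against duplicates

    def on_member_registered(self, event):
        self.members.add(event["member_id"])
        self._dispatch()

    def on_secured_keys_activated(self, event):
        self.feature_active = True
        self._dispatch()

    def _dispatch(self):
        if not self.feature_active:
            return
        for member_id in self.members - self.keys_requested:
            self.keys_requested.add(member_id)
            self.commands.send("CreateSecureKey", member_id=member_id)
```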

Use GAE background thread to trigger SSE to multiple web clients

All,
I have completed the basic GAE "Guestbook" example which uses Google Cloud Endpoints and Google Cloud Messaging. I can successfully add a note to the guestbook and have it appear on all registered devices.
I've also used the super simple Server-Sent Events (SSE) mechanism to have a web page initiate an event source and then update itself as events are received. But separate web pages appear to create their own distinct event sources (even when using the same event-source URI) and thus get their own events at their own times.
The objective here is to create a bit of collaboration such that user actions can come from an Android device or a web page, and the effects of the received action are then pushed to all connected users/devices/web pages.
I have assumed I will need a background module and that both Endpoints and 'normal' web pages / queries would channel the received user action to that background module. I believe I can get that far. Next, I need the background module to trigger a push notification to all interested parties.
I believe I can trigger a Google Cloud Messaging event to registered Android devices from that background module.
But it isn't clear to me how a background module can be the source of an SSE, or how the background module can best communicate with a foreground module that already is the source of an SSE.
I've looked at the Google Queue API, but I have a feeling I'm making something quite easy much more difficult than it needs to be. If you were not going to 'poll' for changes from a web page... and you wanted to receive notifications from an SSE source when changes were made by other users, possibly using Android devices rather than a typical web page, and the deployed application is running on the Google Application Engine, what would you recommend?
Many thanks,
Randy
You are on the right track. I'm not really sure why you are using the background module, but from what I understood you need to:
Your front end module receives an update
You retrieve a list of all devices receiving that update
Use the Queue service to send the update via GCM to every single device
Why use queues? Because front-end instances have a 1 min time limit per request, and you'll need to queue work in order to go beyond that time to serve your (potentially) thousands of users (see the sketch below).
Now, if you already have a backend instance (which does not have the 1 min limit), you could just iterate over the list and send all messages in one request. I believe you have a 24 hr request limit, so you should be OK. But in this scenario you don't need the front-end module; you can just hit this server straight up.
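For the queue step, here is a minimal sketch using the App Engine Task Queue API in Python; the /tasks/push handler URL and the batch size are assumptions, and the handler itself (which performs the actual GCM send) is left to you:

```python
# Sketch: fan out a GCM broadcast from a front-end request by enqueuing
# one task per batch of device tokens, so no single request has to stay
# within the 1 min front-end limit for the whole broadcast.
from google.appengine.api import taskqueue

def broadcast_update(payload, device_tokens, batch_size=500):
    for i in range(0, len(device_tokens), batch_size):
        taskqueue.add(
            url="/tasks/push",  # your task handler that calls GCM
            params={
                "payload": payload,
                "tokens": ",".join(device_tokens[i:i + batch_size]),
            },
        )
```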
