Integrate Akamai with Graphite (CDN)

I have an object delivery service on Akamai, and I would like to fetch the usage data for our endpoints, send it to Graphite, and create some graphs in Grafana.
I would appreciate any suggestions or ideas on how to achieve this.

We have built a Prometheus exporter for Akamai data, the cloudmonitor_exporter. If you couple this with the graphite_exporter, you should be able to get the data into Graphite (albeit a bit roundabout). If you want to go directly to Graphite, you can probably pick up some inspiration from how we built it for Prometheus.
(Note: This requires that you have the "Cloud Monitor" product in your Akamai account)
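If you do go directly to Graphite, the plaintext protocol is simple to speak from any script. Here is a minimal sketch, assuming a Carbon plaintext listener at graphite-host:2003 and an illustrative metric path; fetching the usage data from Akamai is the part you would fill in:

import socket
import time

# Assumed Carbon endpoint; the plaintext listener defaults to port 2003.
CARBON_HOST, CARBON_PORT = "graphite-host", 2003

def send_metric(path, value, timestamp=None):
    # Graphite's plaintext protocol: "<metric.path> <value> <unix-ts>\n"
    timestamp = timestamp or int(time.time())
    line = f"{path} {value} {timestamp}\n"
    with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
        sock.sendall(line.encode())

# Illustrative metric path; map your Akamai usage fields onto paths like this.
send_metric("cdn.akamai.endpoint1.bytes_delivered", 123456)

Once the datapoints arrive in Graphite, pointing a Grafana Graphite data source at them is straightforward.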

Related

OpenStack Cluster Event Notification

So far, based on my understanding of the OpenStack Python SDK, I am able to read hypervisors and server instances; however, I do not see an API for receiving and handling change notifications/events for operations that happen on the cluster, e.g. a new VM is added, an existing VM is deleted, etc.
There is a similar old post (circa 2016), and I am curious whether there have been any changes in notification handling since then:
Notifications to external systems from openstack
I see documentation that talks about emitting notifications over a message bus to indicate the different events that occur within the service:
https://docs.openstack.org/ironic/latest/admin/notifications.html
I have the following questions:
Does the OpenStack Python SDK support notification APIs?
How do I receive/monitor notifications for VM-related changes?
How do I receive/monitor notifications for compute/hypervisor-related changes?
How do I receive/monitor notifications for virtual switch-related changes?
I see other posts, such as Notifications in openstack, that recommend using the Ceilometer project, which uses a database. Is there a more lightweight solution than a completely different service like Ceilometer?
Thanks in advance for your help in this regard.
As far as I can see and know, the OpenStack SDK doesn't provide such a function.
Ceilometer will also not help you. It only collects data by polling and by notifications over RPC, so you would still have to poll the data from Ceilometer yourself. Besides this, Ceilometer alone has the problem that its database only grows and will eventually blow up, which is why you should also use Gnocchi if you use Ceilometer.
At the moment I see only three possible solutions for you:
Write your own tool that runs permanently in the background and collects the data at a regular interval via the OpenStack SDK and REST API requests (a minimal polling sketch follows the recommendation below).
Write something that does the same as Ceilometer by receiving notifications over oslo.messaging (RPC). See the oslo_messaging_notifications section in the configs: https://docs.openstack.org/ocata/config-reference/compute/config-options.html#id35 (Neutron has such an option too) and use messagingv2 as the driver, as Ceilometer does. Be aware, though, that not every event creates a notification. The list of Ceilometer meter data gives a good overview of which events create a notification and which can only be collected by polling: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html. The number of notification events is really low, so it may well not provide all the events you want.
Use log as the driver in the oslo_messaging_notifications section of the configs to write the notifications to a log file, and write a simple program that reads the log file and processes or forwards its content. This has the same problem as number 2: not every event creates a notification (a log entry, in this case). It also has the problem that the notifications, and hence the event logs, are created on the compute nodes (as far as I know), so your tool would have to watch all compute nodes.
Given that I don't know how much work it would be to write a tool that collects notifications over RPC, and that I don't know whether every event you want to watch actually creates a notification (based on the overview here: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html), I would prefer number 1.
It is the easiest way: create a tool that runs GET requests against the REST API at a regular interval and forwards the results to the desired destination as your own custom notifications.
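A minimal sketch of option 1, assuming the openstacksdk package, a clouds.yaml entry named mycloud, and a hypothetical forward() destination; the diffing logic is the part you would adapt:

import time
import openstack  # pip install openstacksdk

def forward(event):
    # Hypothetical destination: replace with your queue, webhook, log, etc.
    print(event)

def poll(interval=30):
    conn = openstack.connect(cloud="mycloud")  # reads clouds.yaml
    known = {}
    while True:
        current = {s.id: s.status for s in conn.compute.servers()}
        for sid, status in current.items():
            if sid not in known:
                forward({"event": "vm.created", "server": sid})
            elif known[sid] != status:
                forward({"event": "vm.status_changed", "server": sid, "status": status})
        for sid in set(known) - set(current):
            forward({"event": "vm.deleted", "server": sid})
        known = current
        time.sleep(interval)

if __name__ == "__main__":
    poll()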
I followed the references below to get this working, and also chatted with the author of this code and video:
https://github.com/gibizer/nova-notification-demo/blob/master/ws_forwarder.py
https://www.youtube.com/watch?v=WFq5JWXa9AM
In addition, I faced another issue:
By default, the OpenStack server would not allow me to connect to the RabbitMQ bus from a remote host because of an iptables rule. You will have to open the RabbitMQ port (5672 by default) in iptables.
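For reference, the heart of a forwarder like ws_forwarder.py is an oslo.messaging notification listener. Here is a minimal sketch, assuming Nova is configured to emit versioned notifications and that the rabbit:// URL and credentials below are replaced with your own:

from oslo_config import cfg
import oslo_messaging  # pip install oslo.messaging

class NotificationEndpoint:
    # oslo.messaging dispatches by priority; info() receives INFO notifications.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Forward the event wherever you need it (websocket, HTTP, queue, ...).
        print(event_type, payload)

def main():
    # Assumed broker URL; use your RabbitMQ host and credentials.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@openstack-host:5672/")
    targets = [oslo_messaging.Target(topic="versioned_notifications")]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [NotificationEndpoint()], executor="threading",
        pool="my-forwarder")  # a pool gives this listener its own queue
    listener.start()
    listener.wait()

if __name__ == "__main__":
    main()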

How to deploy same firebase cloud function under different regions?

Currently we are using the Express framework within Firebase Cloud Functions, and we would like to have the same API distributed across different regions.
This is how it is exported now:
exports.api = functions.region("asia-east2").https.onRequest(app);
As our users are distributed globally, is there any way we can make sure an API request hits the function in the nearest region, for faster access and lower latency?
This is a little more complex than one might think (at least to my humble knowledge). The short answer is that you can't do it by default; this is more the job of an HTTPS load balancer.
Your functions are unique; "the trick" is to route each user to the deployment in the nearest region. Google Cloud DNS is your friend here (though not as cool a friend as AWS Route 53; I left a blog post below to get you started). You can get the nearest nameserver by taking advantage of the anycast network, but that is as far as it goes. I'm not sure what your implementation details are, but these resources helped me get started:
https://cloud.google.com/load-balancing/docs/https/https-load-balancer-example
https://cloud.google.com/dns/docs/overview#performance_and_timing
https://aws.amazon.com/blogs/aws/latency-based-multi-region-routing-now-available-for-aws/
Again, it's not about routing the functions themselves; it's about routing the client's traffic to the function deployed nearest to it. Sadly, in my case I ended up using AWS Route 53's latency/geo routing, so I don't have many more details here.

Audit logging CosmosDB

I want to validate that my ARM template was deployed OK and to get an understanding of the telemetry options.
Under what circumstances do the following get logged to Log Analytics?
DataPlaneRequests
MongoRequests
QueryRuntimeStatistics
Metrics
From what I can tell, after a few arduous days of connecting in different ways:
DataPlaneRequests are logged for:
SQL API calls
Table API calls, even when the account was set up for the SQL API
Graph API calls against an account set up for the Graph API
Table API calls against an account set up for the Table API
MongoRequests are logged for:
Mongo requests, even when the account was set up for the SQL API
However, I haven't been able to see anything for QueryRuntimeStatistics (even when turning on PopulateQueryMetrics), nor have I seen any AzureMetrics appear.
Thanks, Alex, for spending the time to try out the different logging options for Azure Cosmos DB.
There are primarily two types of monitoring paths for Azure Cosmos DB.
Metrics: These are low-latency (<5 min), aggregated metrics exposed on the Azure Monitor API for consumption. These metrics are primarily used to diagnose the app during live-site issues.
Logs: These are raw request logs arriving at 2+ hours of latency, used primarily for audit scenarios, i.e. to understand who accessed the data.
Depending on your need, you can choose either approach.
DataPlaneRequests by default shows all requests across all the APIs, while MongoRequests shows only Mongo-specific calls. Please note that Mongo requests will also be seen in DataPlaneRequests.
Metrics would not be seen in Log Analytics due to a known issue, which our partner team is fixing.
Let me know if you have any further questions here.
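As a quick way to verify which categories are actually arriving in your workspace, you can count records per category. A minimal sketch, assuming the azure-monitor-query and azure-identity packages and your own workspace ID:

from datetime import timedelta
from azure.identity import DefaultAzureCredential  # pip install azure-identity
from azure.monitor.query import LogsQueryClient    # pip install azure-monitor-query

# Placeholder; replace with your Log Analytics workspace ID.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

# Count Cosmos DB diagnostic records per category over the last day.
QUERY = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
| summarize count() by Category
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)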

What is the difference between Kibana and OpenTSDB?

As far as I know, Kibana and OpenTSDB can both be used in conjunction with the Elasticsearch-Logstash-Kibana (ELK) stack.
OpenTSDB offers a built-in, simple user interface for selecting one or more metrics and tags to generate a graph as an image. Alternatively, an HTTP API is available to tie OpenTSDB into external systems such as monitoring frameworks, dashboards, statistics packages, or automation tools. Each Time Series Daemon (TSD) uses the open-source database HBase to store and retrieve time-series data.
Kibana can also be used for plotting metrics from access logs and custom logs, so how does OpenTSDB help in the system?
OpenTSDB is a time-series database. You could use OpenTSDB alongside Elasticsearch for the metrics component: Elasticsearch for text (logs, perhaps) and OpenTSDB for metrics.
If you are using Logstash, there is an output that can export metrics from logs into OpenTSDB.
Kibana is a visualization tool for Elasticsearch queries. It assists you with searching and viewing the data stored in Elasticsearch, and it is only used with Elasticsearch.
If you would like a unified front-end for Elasticsearch and OpenTSDB, you could consider Grafana, which supports both, though with less functionality than Kibana where Elasticsearch is concerned.
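If you do feed metrics into OpenTSDB directly rather than through Logstash, the HTTP API mentioned above accepts JSON datapoints. A minimal sketch, assuming a TSD reachable at tsdb-host:4242 and an illustrative metric name:

import json
import time
import urllib.request

# Assumed TSD endpoint; OpenTSDB's HTTP API listens on port 4242 by default.
TSD_URL = "http://tsdb-host:4242/api/put"

# A single datapoint; the metric name and tags here are made up.
datapoint = {
    "metric": "webapp.requests.count",
    "timestamp": int(time.time()),
    "value": 42,
    "tags": {"host": "web01"},
}

req = urllib.request.Request(
    TSD_URL,
    data=json.dumps(datapoint).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # OpenTSDB replies 204 No Content on success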

What is Kibana used for / where is it useful?

I've read the Kibana website, so I know theoretically what Kibana can be used for, but I'm interested in real-world stories. Is Kibana solely used for log analysis? Can it be used as some kind of BI tool across your actual data set? I'm interested to hear what kinds of applications it is useful in.
Kibana is very useful for visualizing mixed types of data: not just numbers and metrics, but also text and geo data. You can use Kibana to visualize:
real-time data about visitors of your webpage
number of sales per region
locations from sensor data
emails sent, server load, most frequent errors
... and many more; there is a plethora of use cases. You only need to feed your data into Elasticsearch (and find an appropriate visualization).
Kibana is basically an analytics and visualization platform that lets you easily visualize data from Elasticsearch and analyze it to make sense of it. You can think of Kibana as an Elasticsearch dashboard where you can create visualizations such as pie charts, line charts, and many others.
There is a nearly infinite number of use cases for Kibana. For example, you can plot your website's visitors onto a map and show traffic in real time. Kibana is also where you configure change detection and forecasting. You can aggregate website traffic by browser and find out which browsers are important to support, based on your particular audience. Kibana also provides an interface to manage authentication and authorization for Elasticsearch. You can literally think of Kibana as a web interface to the data stored in Elasticsearch.
