how do you create a tree rule given tags and metrics in opentsdb

I can create a tree with this:
http://localhost:4242/api/tree?name=GraphiteTree&method_override=post
I can create rule based on metrics:
http://localhost:4242/api/tree/rule?treeid=1&level=0&order=0&type=METRIC&separator=\.&method_override=post
I really need to combine the tag and metric types to create a rule-based tree in OpenTSDB. My data in OpenTSDB looks like this:
pcpu 1356998400 42.5 host=web1.example.com
used_memory 1356998400 60 host=web1.example.com
pcpu 1356998400 10 vGuest=guest1.example.com
used_memory 1356998400 60 vGuest=guest1.example.com
disk_capacity 1356998400 42.5 disk=nas1.example.com
used_disk 1356998400 42.5 disk=nas1.example.com
There does not seem to be a lot of documentation about this. I am hoping that somebody has done something like this. How would I go about creating a rule for tags and metrics combined, as shown above?
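One way to read the OpenTSDB tree-rule API is that rules at the same level are tried in order until one matches, and a `TAGK`-type rule branches on the value of the tag key named in `field`. Under that assumption, a sketch of the rule set for the data above would be: level 0 branches on the tag (one rule per tag key, since the rows use `host`, `vGuest`, or `disk`), and level 1 branches on the metric. This just builds the request URLs in the same `method_override=post` style as the calls above; whether `field=host` behaves this way should be checked against the OpenTSDB docs for your version:

```python
from urllib.parse import urlencode

BASE = "http://localhost:4242/api/tree/rule"

def rule_url(**params):
    # Build a /api/tree/rule call; method_override=post mimics the URLs above.
    params["method_override"] = "post"
    return BASE + "?" + urlencode(params)

# Level 0: branch on the tag value. Rules with order 0, 1, 2 at the same
# level are tried in sequence, so each row matches whichever tag key it has.
tagk_rule = rule_url(treeid=1, level=0, order=0, type="TAGK", field="host")
vguest_rule = rule_url(treeid=1, level=0, order=1, type="TAGK", field="vGuest")
disk_rule = rule_url(treeid=1, level=0, order=2, type="TAGK", field="disk")

# Level 1: then branch on the metric name, as in the METRIC rule above.
metric_rule = rule_url(treeid=1, level=1, order=0, type="METRIC")

print(tagk_rule)
print(metric_rule)
```

This would give branches like `web1.example.com → pcpu` and `guest1.example.com → used_memory`.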

Can you name a variable without declaring it?

When I'm using functions that output tables, I like to see some variable names. However, I'm a very lazy man who doesn't want to write three lines when one will do. Take a simple function like tail. What I like to do is:
boxes<-1:50
names(boxes)<-paste("Box",boxes)
tail(boxes)
and this will get me the output:
Box 45 Box 46 Box 47 Box 48 Box 49 Box 50
45 46 47 48 49 50
However, that felt like too much work to me. The bulk of my code was spent naming boxes. What I wanted to write was something like
tail((1:50);names(1:50)<-paste("Box",1:50))
Is anything like this possible for when I want to name a variable without declaring it?
We could use setNames without creating an object
setNames(1:50, paste0("Box", 1:50))
Or another option is enframe/deframe
library(tibble)
deframe(enframe(sprintf("Box%d", 1:50))[2:1])
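For comparison outside R, the same inline-naming idea works in any language whose core mapping type keeps names with values. A small Python sketch (the `tail` helper here is made up, mimicking R's `tail`):

```python
# A dict keeps the name attached to each value, so the "naming"
# happens inline instead of in a separate names()<- assignment.
boxes = {f"Box {i}": i for i in range(1, 51)}

def tail(d, n=6):
    # Last n items, names included (dicts preserve insertion order).
    return dict(list(d.items())[-n:])

print(tail(boxes))
```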

Is it possible to aggregate data with varying nesting depth in Grafana?

I have data in Grafana with different nesting depths. It looks like this (the nesting depth differs depending on the message type):
foo.<host>.type.<type-id>
foo.<host>.type.<type-id>.<subtype-id>
foo.<host>.type.<type-id>.<subtype-id>.<more-nesting>
...
The <host> field can be the IP of the server sending the data and <type-id> is the type of message that it handled. There are quite a lot of message types but for the visualization I am only interested in the first level of <type-id> aggregated over all hosts.
For example, if I have this data:
foo.ip1.type.type1 = 3
foo.ip1.type.type2.subtype1 = 5
foo.ip2.type.type1 = 4
foo.ip2.type.type2.subtype1 = 9
foo.ip2.type.type2.subtype2 = 13
I would rather see it like this:
foo.*.type.type1 = 7 (3+4)
foo.*.type.type2 = 27 (5+9+13)
Then it would be easier to produce a graph where you can see which types of messages are most frequent.
I have not found a way to express that in Grafana. The only option that I see is to create a graph by manually creating queries for each message type. If there were only a handful of types that would be OK, but in my example, the number of types is quite high and even worse, they can change over time. When new message types are added, I would like to see them without having to change the graph.
Does Grafana support aggregating the data in such a way? Can it visualize the data aggregated by one node while summing up everything that comes after that node (like the --max-depth option of the Unix du command)?
I am not very experienced with Grafana, but I am starting to believe this functionality is not supported. I am not sure whether Grafana allows preprocessing the data, but if the data could be transformed to
foo.ip1.type.type1 = 3
foo.ip1.type.type2_subtype1 = 5
foo.ip2.type.type1 = 4
foo.ip2.type.type2_subtype1 = 9
foo.ip2.type.type2_subtype2 = 13
it would also be a valid workaround, as the number of subtypes is very low in my data (often there is only one subtype).
I think the groupByNode function might be useful to you. By doing something like:
groupByNode(foo.*.type.*.*,3,"sumSeries")
You'll need to repeat this for each level of nesting. Hope that helps.
More information is available here:
http://graphite.readthedocs.io/en/latest/functions.html#graphite.render.functions.groupByNode
If you want to do it the way you alluded to in your example you could use aliasSub
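To make the groupByNode idea concrete, here is a small sketch of what it computes: every series is keyed by one dot-separated path component (index 3 here, i.e. the `<type-id>` node, counting from 0), and everything sharing that component is aggregated. Because the grouping only looks at index 3, series of different depths fall into the same bucket:

```python
from collections import defaultdict

def group_by_node(series, node, agg=sum):
    # Mimics Graphite's groupByNode: key each series by one dot-separated
    # path component and aggregate everything that shares that component.
    groups = defaultdict(list)
    for path, value in series.items():
        groups[path.split(".")[node]].append(value)
    return {k: agg(v) for k, v in groups.items()}

# The sample data from the question.
series = {
    "foo.ip1.type.type1": 3,
    "foo.ip1.type.type2.subtype1": 5,
    "foo.ip2.type.type1": 4,
    "foo.ip2.type.type2.subtype1": 9,
    "foo.ip2.type.type2.subtype2": 13,
}

print(group_by_node(series, 3))  # {'type1': 7, 'type2': 27}
```

In Graphite itself you still need one target per nesting depth (`foo.*.type.*`, `foo.*.type.*.*`, ...), since a single `*` matches exactly one node, which is why the answer says to repeat the call per level.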

Converting GDELT to Turtle Triples

I want to convert GDELT events to turtle triples.
Is there a standard for making URIs for instances?
Can I just make one up? Something like
http://www.gdeltproject.org/Events/1.0/GDELTEventID_674976286
perhaps?
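As far as I know there is no GDELT-official URI scheme, and minting URIs under a domain you do not control (gdeltproject.org) is generally discouraged in linked-data practice; a namespace you own, with the GDELT GlobalEventID embedded, is the safer convention. A minimal sketch of minting such a URI and emitting one Turtle statement (the class URI `http://example.org/gdelt#Event` is made up purely for illustration):

```python
# Hypothetical minting scheme: a base namespace plus the GlobalEventID.
BASE = "http://www.gdeltproject.org/Events/1.0/"

def event_uri(event_id):
    # Angle brackets make it a full IRI in Turtle syntax.
    return f"<{BASE}GDELTEventID_{event_id}>"

# "a" is Turtle shorthand for rdf:type; the object class is illustrative.
triple = f"{event_uri(674976286)} a <http://example.org/gdelt#Event> ."
print(triple)
```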

ratio of sum of columns in kibana

I have data in following format
Date A B
20150901 23.4 2.4
20150901 245 22
20150901 21 2.4
20150902 243 4.2
20150902 7.5 1.2
20150903 .54 8.4
What I want to do is SUM(colA)/SUM(colB) for each date. I am using Kibana for this but I cannot find a way to do it. All it shows is SUM(colA), and I cannot save that to use for finding the ratio.
Can somebody help me with this?
You must use a scripted field: create a new field, and in that field you will have the sum of A + B for each document. Then, when you discover data or build a graph, select that field wherever you need the sum.
This was a challenge I had too.
Have a look also at this great kibana plugin:
https://github.com/datasweet-fr/kibana-datasweet-formula
The original discussion can also be found here:
https://github.com/elastic/kibana/issues/2646
It supports several functions on aggregated metrics.
It worked for my case of ratios of aggregated sums over time, similar to yours.
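For reference, this is the calculation being asked for, sketched in plain Python over the sample rows: sum A and B per date first, then divide the sums (which is not the same as averaging per-row A/B ratios):

```python
from collections import defaultdict

# The sample table from the question: (Date, A, B).
rows = [
    ("20150901", 23.4, 2.4),
    ("20150901", 245, 22),
    ("20150901", 21, 2.4),
    ("20150902", 243, 4.2),
    ("20150902", 7.5, 1.2),
    ("20150903", 0.54, 8.4),
]

# Accumulate SUM(A) and SUM(B) per date.
sums = defaultdict(lambda: [0.0, 0.0])
for date, a, b in rows:
    sums[date][0] += a
    sums[date][1] += b

# Divide the sums, giving SUM(A)/SUM(B) per date.
ratios = {date: a / b for date, (a, b) in sums.items()}
print(ratios)
```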

SAS Enterprise Guide Count IF

I'm looking to perform the equivalent of a COUNTIF on a data set similar to the one below. I found something similar here, but I'm not sure how to translate it into Enterprise Guide. I would like to create several new columns that count how many date occurrences there are for each primary key by year, so for example:
PrimKey Date
1 5/4/2014
2 3/1/2013
1 10/1/2014
3 9/10/2014
To be this:
PrimKey 2014 2013
1 2 0
2 0 1
3 1 0
I was hoping to use the advanced expression for calculated fields option in query builder, but if there is another better way I am completely open.
Here is what I tried (and failed):
CASE
WHEN Date(t1.DATE) BETWEEN Date(1/1/2014) and Date(12/31/2014)
THEN (COUNT(t1.DATE))
END
But that ended up just counting the total date occurrences without regard to my between statement.
Assuming you're using Query Builder you can use something like the following:
I don't think you need the CASE statement, instead use the YEAR() function to calculate the year and test if it's equal to 2014/2013. The test for equality will return a 1/0 which can be summed to the total per group. Make sure to include PrimKey in your GROUP BY section of query builder.
sum(year(t1.date)=2014) as Y2014,
sum(year(t1.date)=2013) as Y2013,
I don't like this type of solution because it's not dynamic, i.e. if your years change you have to change your code, and there's nothing in the code to return an error if that happens either. A better solution is to do a Summary Task by Year/PrimKey and then use a Transpose Task to get the data in the structure you want it.
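The dynamic Summary/Transpose approach amounts to counting (PrimKey, year) pairs and pivoting, with the year columns discovered from the data rather than hard-coded. A sketch of that logic in Python, using the sample rows:

```python
from collections import Counter

# The sample data: (PrimKey, Date) with dates as M/D/YYYY strings.
rows = [(1, "5/4/2014"), (2, "3/1/2013"), (1, "10/1/2014"), (3, "9/10/2014")]

# Count (PrimKey, year) pairs; the year set is discovered from the data,
# so new years appear automatically instead of requiring a code change.
counts = Counter((key, date.split("/")[-1]) for key, date in rows)
years = sorted({y for _, y in counts}, reverse=True)

# Pivot to one row per PrimKey with a column per observed year.
pivot = {key: {y: counts.get((key, y), 0) for y in years}
         for key in sorted({k for k, _ in counts})}
print(pivot)
```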