I have some user level histogram metrics.
I want to display a singlestat in grafana that shows me the number of series where the count in histogram.bin_5000 > 0.
I can get it to display the number of series with countSeries. But, can't seem to get a filter to remove the series which are below a certain value.
(Screenshot: with countSeries)
(Screenshot: with countSeries and removeBelowValue)
The removeBelow* and removeAbove* functions (including removeBelowValue) do not actually remove series; they just set the matching datapoints to null (None).
There are two solutions:
- use removeEmptySeries, which removes every series that contains only null datapoints. Note that this also removes series that were already all-null before removeBelow*.
- instead of the remove* family, use maximumBelow/maximumAbove or minimumBelow/minimumAbove, which do remove whole series.
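For example, assuming a metric layout like users.*.histogram.bin_5000 (the path here is only an illustration; substitute your own), either of these targets should give the count for the singlestat:

countSeries(removeEmptySeries(removeBelowValue(users.*.histogram.bin_5000, 1)))

or, using the functions that actually drop series:

countSeries(maximumAbove(users.*.histogram.bin_5000, 0))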
I wrote my own Munin plugin and I'm confused about how Munin represents values between 0 and 1.
The second line of values uses a notation with 'n'. What does it mean and how can I avoid it? I just want a plain floating-point value like 0.33!
What am I doing wrong?
Here is the plugin configuration:
graph_title Title
graph_args --base 1000 -l 0
graph_vlabel label
graph_category backup
Every hint is welcome.
UPDATE
Ok, I finally found it: https://serverfault.com/questions/123761/what-does-the-m-unit-in-munin-mean
'm' stands for milli!
I am a bit confused as to why Munin uses it in this context!
Is there any possibility to avoid this?
Here are some configuration items that may be of interest; see the Munin Plugin Reference.
You can change the numeric resolution of the table data with the directive below. It uses C/C++ printf-style formatting, and spacing may be an issue in the end result.
graph_printf %5.2le   # default is %6.2lf; the l (long) modifier appears to be needed
For certain data types you can also alter the display period for the Munin graph. Munin graphs the average of the data between the last two (nominally 5-minute-spaced) measurements, with per-second units on the result by default. This behaviour is based on the field's type attribute, which defaults to GAUGE. If, for example, your data reports accumulated counts, you can use:
type DERIVE    # data displayed is the change since the last reading
type COUNTER   # accumulating counter with reset on rollover
For each of these you can change the vertical axis to use per-minute or per-hour with:
graph_period minute    # if type is DERIVE or COUNTER, display avg/period instead of the default avg/sec
Note that the table shows the derived data for the graph_period.
See RRDCreate (rrdtool create) for implementation particulars; Munin uses RRD underneath.
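As a rough illustration, a plugin config section combining the directives above might look like this (the field name files and the specific values are made up for the example):

graph_title Title
graph_args --base 1000 -l 0
graph_vlabel files per ${graph_period}
graph_category backup
graph_period minute
graph_printf %6.2lf
files.label files processed
files.type DERIVE
files.min 0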
I'm using OpenTSDB. I have ONE time series, with values at 10-minute intervals. I want to specify a start time and an end time, and get back a single number that is the sum of all the values in the specified time range. I tried what I thought would be correct:
...start=<start>&end=<end>&m=sum...
but got back all the individual values rather than their sum.
Add the element downsample="0all-sum"; apparently, the "0all" is interpreted as "the interval containing all timestamps".
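For example, with the HTTP /api/query endpoint the downsampler goes into the metric spec (this assumes a reasonably recent OpenTSDB, 2.3 or later, and the metric name is a placeholder):

http://opentsdb:4242/api/query?start=<start>&end=<end>&m=sum:0all-sum:my.metric.name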
I have read the docs a million times but I can't figure out what it does. What does nPercentile do in Graphite, and how does it differ from percentileOfSeries?
nPercentile: "Returns n-percent of each series in the seriesList."
This converts every series you give it to a single value*: the n-th percentile of that series (but it outputs as many series as it was given as input).
*Note that, as with everything in Graphite, this single value is returned as a series that repeats the same value as many times as needed to fill the requested time range.
percentileOfSeries: "percentileOfSeries returns a single series which is composed of the n-percentile values taken across a wildcard series at each point."
This returns a single series, which, for every timepoint, contains the n-th percentile of the different input series.
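A concrete pair of targets makes the difference visible (the metric path is hypothetical):

nPercentile(servers.*.response_time, 95)
percentileOfSeries(servers.*.response_time, 95)

The first returns one flat series per server, each pinned at that server's own 95th percentile; the second returns a single series holding, at every timestamp, the 95th percentile across all servers.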
I have extensively read and re-read the Troubleshooting R Connections and Tableau and R Integration help documents, but as a new Tableau user they just aren't helping me.
I need to be able to calculate Kaplan-Meier survival probabilities across any dimensions that are dragged onto the sheet. Ideally, I would be able to retrieve this in a tabular format at multiple time points, but for now, I would be happy just to get it at a single time point.
My data in Tableau have columns for [event-boolean] and [time to event]. Let's say I also have columns for Gender and District.
Currently, I have a calculated field [surv] as:
SCRIPT_REAL('
library(survival);
fit <- summary(survfit(Surv(.arg2,.arg1) ~ 1), times=365);
fit$surv'
, min([event-boolean])
, min([time to event])
)
I have messed with Computed Using, Addressing, Partitions, Aggregate Measures, and parameters to the R function, but no combination I have tried has worked.
If [District] is in Columns, do I need to change my SCRIPT_REAL call or do I just need to change some other combination of levers?
I used Andrew's solution to solve this problem. Essentially:
- Turn off Aggregate Measures
- In the Measure Values shelf, select Compute Using > Cell
- In the calculated field, wrap the call as IF FIRST() == 0 THEN SCRIPT_*() END (see the sketch after this list)
- Ctrl+drag the measure to the Filters shelf and use a Special > Non-null filter.
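Putting the question's calculation together with those steps, the calculated field ends up looking roughly like this (field names and the times=365 cutoff are taken from the question; treat this as a sketch rather than a drop-in solution):

IF FIRST() == 0 THEN
SCRIPT_REAL('
library(survival);
fit <- summary(survfit(Surv(.arg2, .arg1) ~ 1), times=365);
fit$surv'
, MIN([event-boolean])
, MIN([time to event])
)
END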
I'm using the incr function from the python statsd client. The key I'm sending for the name is registered in graphite but it shows up as a flat line on the graph. What filters or transforms do I need to apply to get the rate of the increments over time? I've tried an apply function > transform > integral and an apply function > special > aggregate by sum but no success yet.
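For reference, the client side is roughly the following (host, port, and metric name are placeholders, and the stats_counts prefix assumes a default etsy statsd setup):

import statsd

c = statsd.StatsClient('localhost', 8125)
c.incr('your.metrics.path')  # shows up in graphite as stats_counts.your.metrics.path (raw count) and stats.your.metrics.path (rate)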
The function you're looking for is summarize; see it here: http://graphite.readthedocs.org/en/latest/functions.html
To get totals over time, just use the summarize function with alignToFrom=true.
For example, you can use the following target for a 1-day period:
summarize(stats_counts.your.metrics.path,"1d","sum",true)
See graphite summarize datapoints for more details.
The data is there; it just needs hundreds of counts before you can start to see it on the graph. Taking the integral also works and shows the number of cumulative hits over time, though I've had to multiply it by 100 to get approximately the correct value.