I'm trying to divide two series to get their ratio.
For example, I've got sites (a.com, b.com, c.com) matched by * (all sites).
Each of them has stats for the total section count and the number of errors that occurred. I want to show errors/sections as bars, i.e. each site's errors divided by that site's section count. In this example I want to get 3 bars.
So:
A parser.*.sections.total
B parser.*.errors.total
X-Axis Mode: Series
Display > Draw Mode: Bars
When I try to use divideSeries I always get ValueError(divideSeries second argument must reference exactly 1 series).
A new function, divideSeriesLists, was introduced in Graphite 1.0.2 for dividing one series list by another. Both series lists must have the same length.
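With the metric names from the question, a minimal sketch could be the following (this assumes both wildcards expand to the same sites in the same order, since divideSeriesLists pairs the series up positionally):
divideSeriesLists(parser.*.errors.total, parser.*.sections.total)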
You can use mapSeries with divideSeries to do vector matching of series in Graphite (or maybe asPercent, depending on which version of Graphite you are using).
An example query:
aliasByNode(reduceSeries(mapSeries(groupByNodes(parser.*.{sections,errors}.total, 'maxSeries', 1, 2), 0), 'asPercent', 1, 'sections', 'errors'), 0)
I'm not sure which aggregation function you need, so replace maxSeries with the appropriate one.
Check out this blog post about using mapSeries with divideSeries for more explanation.
Here is an example from our system in the Grafana query editor:
I wrote my own Munin plugin and I'm confused about how Munin represents values between 0 and 1.
The second line of values uses a notation with 'n'. What does it mean and how can I avoid it? I just want a plain floating-point value like 0.33!
What am I doing wrong?
Here is the plugin configuration:
graph_title Title
graph_args --base 1000 -l 0
graph_vlabel label
graph_category backup
Every hint is welcome.
UPDATE
Ok, I finally found it: https://serverfault.com/questions/123761/what-does-the-m-unit-in-munin-mean
'm' stands for milli!
I am a bit confused as to why Munin uses it in this context!
Is there any possibility to avoid this?
Here are some configuration items that may be of interest; see the Munin Plugin Reference.
You can change the numeric resolution of the values in the table data with the setting below. It uses C/C++ printf formatting, and spacing may be an issue in the end result.
graph_printf %5.2le # default is %6.2lf; it appears the l (for long) is needed
For certain graph types you can also alter the display period for the Munin graph. Munin graphs the average of the data between the last two (nominally 5-minute-spaced) measurements, with per-second units on the result by default. This is based on the field's type attribute, which defaults to GAUGE. If, for example, your data reports accumulated counts, you can use:
type DERIVE # data displayed is the change since the last reading
type COUNTER # accumulating counter with reset on rollover
For each of these you can change the vertical axis to use per-minute or per-hour with:
graph_period minute # if type is DERIVE or COUNTER, display avg/period instead of the default avg/sec
Note that the table shows the derived data for the graph_period.
See RRDCreate (rrdtool create) for implementation particulars; Munin uses RRD underneath.
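As a rough sketch, the original plugin configuration with these suggestions applied could look like the following (the field name 'myvalue' and the chosen format string are illustrative additions, not from the original post):
graph_title Title
graph_args --base 1000 -l 0
graph_vlabel label
graph_category backup
graph_printf %6.4lf # more digits after the decimal point in the table output
myvalue.label my value
myvalue.type GAUGE # plain readings; use DERIVE or COUNTER only for accumulating counts
myvalue.min 0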
I'm using Graphite and Grafana to graph some metrics. Given the following example, is it possible to output a difference that contains multiple values?
service.cluster1.host1.quota
service.cluster1.host1.usage
service.cluster1.host2.quota
service.cluster1.host2.usage
service.cluster1.host3.quota
service.cluster1.host3.usage
I'm trying to output separate values (based on the last value), i.e. quota - usage, for each host. I can display all the data as two separate series using a wildcard for the 'host#' part of the path, but I'm not certain how to output the difference per host. My goal is then to use limit() to display only the top few. I've been looking at functions like groupByNode() and diffSeries(), but I haven't found a solution. I'm trying to avoid defining a separate series for each host.
I stumbled upon the following solution using reduceSeries() and mapSeries() (given the previous example data):
limit(sortBy(aliasByTags(reduceSeries(mapSeries(service.cluster1.*.*, 2), 'diffSeries', 3, 'quota', 'usage'), 2), 'last', false), 10)
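Reading it from the inside out (using the example metric paths from the question): mapSeries(service.cluster1.*.*, 2) groups the series by node 2, the host, so each host's quota and usage land in the same group; reduceSeries(..., 'diffSeries', 3, 'quota', 'usage') then computes quota - usage within each group, matching on node 3; aliasByTags(..., 2) labels each result by the host; and sortBy(..., 'last', false) with limit(..., 10) sorts by the latest value and keeps the first 10 results.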
We have 4 data series, and once in a while one of the 4 has a null because we missed reading the data point. This makes the graph look like we have awful spikes in loss of incoming volume, which is not true; we were just missing the data point.
I am doing a basic sumSeries(server*.InboundCount) right now, where the * matches servers 1, 2, 3 and 4.
Is there a way for Graphite to NOT sum the points at those locations on the line, and instead have the sum at those points in time also be null, so it connects the line from the point where there is data to the next point where there is data?
NOTE: We also display the graphs server*.InboundCount individually to watch for spikes on individual servers.
Or perhaps there is a function that looks across all the series and, if any value is null at a point in time, returns null at that point for every series (i.e. it takes X series and returns X series), so the sum sees null+null+null+null and hopefully shows null instead of a spike.
thanks,
Dean
This is an old question, but it still deserves an answer as a point of reference. What you're after, I believe, is the function keepLastValue:
Takes one metric or a wildcard seriesList, and optionally a limit to the number of ‘None’ values to skip over. Continues the line with the last received value when gaps (‘None’ values) appear in your data, rather than breaking your line.
This would make your function
sumSeries(keepLastValue(server*.InboundCount))
This will work OK if you have a single null datapoint here and there. If you have multiple consecutive null data points, you can specify how far back to look before a null breaks your data. For example, the following will look back up to 10 values before the sumSeries breaks:
sumSeries(keepLastValue(server*.InboundCount, 10))
I'm sure you've since solved your problems, but I hope this helps someone.
I'm using the incr function from the Python statsd client. The key I'm sending for the name is registered in Graphite, but it shows up as a flat line on the graph. What filters or transforms do I need to apply to get the rate of the increments over time? I've tried Apply function > Transform > Integral and Apply function > Special > Aggregate by sum, but with no success yet.
The function you're looking for is summarize; see it here: http://graphite.readthedocs.org/en/latest/functions.html
To get the totals over time, just use the summarize function with alignToFrom=true.
For example, you can use the following target for a 1-day period:
summarize(stats_counts.your.metrics.path,"1d","sum",true)
See graphite summarize datapoints for more details.
The data is there; it just needs hundreds of counts before you start to be able to see it on the graph. Taking the integral also works and shows the number of cumulative hits over time; I had to multiply it by 100 to get approximately the correct value.
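As a render target, that approach might look something like the following sketch (it reuses the metric path from the earlier answer; the scale factor of 100 is specific to this particular setup, e.g. a statsd sample rate, so adjust or drop it as needed):
scale(integral(stats_counts.your.metrics.path), 100)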
Is there a function in the Graphite URL API which allows us to ignore values which are inside (or outside) a certain range?
I believe you can check out the removeAboveValue and removeBelowValue functions.
For example, to exclude values below 2 and above 10:
http://host/render?target=removeAboveValue(removeBelowValue(a.b.c, 2), 10)
Ignoring values inside a range is a little more difficult, but it can probably be achieved by summing series from which data has previously been filtered out (untested):
http://host/render?target=sum(removeAboveValue(a.b.c, 2), removeBelowValue(a.b.c, 10))