Weighted sum of series in Graphite

I'm working with series of response times from different servers in Graphite, and I have separate series showing the number of requests from each server. Now what I'd like to do is compute a weighted average of these, i.e.
avg = ((weight1 * value1) + (weight2 * value2)) / (weight1 + weight2)
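(With hypothetical numbers: if value1 = 100 ms with weight1 = 30 requests and value2 = 200 ms with weight2 = 10 requests, then avg = (30*100 + 10*200) / (30 + 10) = 125 ms.)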
However, I'm having problems computing the top part of this expression. I've tried inputting:
sumSeries(multiplySeries(series1,weights1),multiplySeries(series2,weights2))
as a target, but Graphite just renders "no data". Each of the multiplySeries calls works on its own.
What could I be doing wrong?

I've had the same issue, and was unable to find a decent solution, so I tried to write one - see this pull request: https://github.com/graphite-project/graphite-web/pull/300
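For reference, a sketch of the full weighted-average target built only from stock functions, where series1/series2 and weights1/weights2 stand in for the metric paths from the question:
divideSeries(sumSeries(multiplySeries(series1,weights1),multiplySeries(series2,weights2)),sumSeries(weights1,weights2))
Newer Graphite versions also provide a built-in weightedAverage(seriesListAvg, seriesListWeight, *nodes) function that covers this case directly.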

Related

Graphite - calculate the difference of two series containing multiple values

I'm using Graphite and Grafana to graph some metrics. Given the following example, is it possible to output a difference that contains multiple values?
service.cluster1.host1.quota
service.cluster1.host1.usage
service.cluster1.host2.quota
service.cluster1.host2.usage
service.cluster1.host3.quota
service.cluster1.host3.usage
I'm trying to output a separate value (quota - usage, based on the last datapoint) for each host. I can display all the data as two separate series using a wildcard for the 'host#' node, but I'm not certain how to output the difference per host. My goal is then to use limit() to only display the top few. I've been looking at functions like groupByNode() and diffSeries() but I haven't found a solution. I'm trying to avoid defining a separate series for each host.
I stumbled upon the following solution using reduceSeries() and mapSeries() (given the example data above):
limit(sortBy(aliasByTags(reduceSeries(mapSeries(service.cluster1.*.*, 2), 'diffSeries', 3, 'quota', 'usage'), 2), 'last', false), 10)
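To unpack that (a sketch based on the example metric names above): mapSeries(service.cluster1.*.*, 2) groups the series by the host node, and reduceSeries then applies diffSeries within each group, matching 'quota' and 'usage' at node 3. Before the aliasing, sorting and limiting, the result is roughly equivalent to:
diffSeries(service.cluster1.host1.quota, service.cluster1.host1.usage)
diffSeries(service.cluster1.host2.quota, service.cluster1.host2.usage)
diffSeries(service.cluster1.host3.quota, service.cluster1.host3.usage)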

Grafana dividing 2 series

I'm trying to divide 2 series to get their ratio.
For example, I have sites (a.com, b.com, c.com) matched as * (all sites).
Each of them has a total sections count and an errors count. I want to show errors/sections as bars, i.e. the errors for each site divided by the sections for that same site. With this example I'd expect 3 bars.
So:
A parser.*.sections.total
B parser.*.errors.total
X-Axis Mode: Series
Display: Draw Mode: Bars
When I try to use divideSeries I always get ValueError(divideSeries second argument must reference exactly 1 series).
A new function, divideSeriesLists, was introduced in Graphite 1.0.2 for dividing one series list by another. Both series lists must contain the same number of series.
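A sketch, assuming Graphite >= 1.0.2 and that both wildcards expand to the same sites in the same order:
divideSeriesLists(parser.*.errors.total, parser.*.sections.total)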
You can use mapSeries with divideSeries to do vector matching of series in Graphite (or maybe asPercent, depending on which version of Graphite you are using).
An example query:
aliasByNode(reduceSeries(mapSeries(groupByNodes(parser.*.{sections,errors}.total, 'maxSeries', 1, 2), 0), 'asPercent', 1, 'sections', 'errors'), 0)
I'm not sure what aggregation function you need, so replace maxSeries with whichever function fits.
Check out this blog post about using mapSeries with divideSeries for more explanation.

How does Graphite handle oversampling

I am trying to understand how Graphite treats oversampling. I read the documentation but could not find the answer.
For example, if I specify in Graphite that the retention policy should be 1 sample per 60 seconds and Graphite receives something like 200 values in those 60 seconds, what exactly will be stored? Will Graphite take an average, or a random point out of those 200?
Short answer: it depends on the configuration; the default is to keep the last one.
Long answer: Graphite can be configured, using regular expressions, with a strategy for aggregating several points into one sample.
These strategies are configured in the storage-aggregation.conf file, using a regexp to select metrics:
[all_min]
pattern = \.min$
aggregationMethod = min
This example config will aggregate matching points using their minimum.
By default, the last point to arrive wins.
This strategy will always be used to aggregate from higher resolutions to lower resolutions.
For example, if storage-schemas.conf contains:
[all]
pattern = .*
retentions = 1s:8d,1h:1y
With this schema, points are stored at one-second resolution for 8 days; if several points arrive within the same second, only the last one is kept.
Points older than 8 days are rolled up to one-hour resolution using the configured aggregation method (sum, in this example).
The aggregation configuration only applies when moving from archive i to archive i+1. For oversampling, it always picks the last sample in the period.
The recommendation is to match your sending interval to the highest-resolution retention in the configuration.
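For example (a sketch, with a hypothetical metric prefix): if the application emits one datapoint every 10 seconds, a matching schema would be
[app]
pattern = ^app\.
retentions = 10s:8d,1h:1y
so that no two points fall into the same slot of the highest-resolution archive.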
See this Graphite issue.

graphite summarize function not working as expected

I am feeding data into a metric; let's say it is "local.junk". What I send is just that metric, a 1 for the value, and the timestamp:
local.junk 1 1394724217
Where the timestamp changes of course. I want to graph the total number of these instances over a period of time so I used
summarize(local.junk, "1min")
Then I went and made some data entries. I expected to see the number of requests received in each minute, but it always just shows the line at 1. If I summarize over a longer period like 5 minutes, it shows me some seemingly random number: I tried 10 requests and I see the graph at around 4 or 5. Am I loading the data wrong, or using the summarize function wrong?
The summarize() function just sums up your data values (by default), so correlate and verify that you are indeed sending the correct values.
Also, to determine whether the function or the data is at fault, you can run it on metricsReceived:
summarize(carbon.agents.ip-10-0-0-1-a.metricsReceived,"1hour")
Which version of Graphite are you running?
You may want to check your carbon-aggregator settings. By default carbon aggregates data every 10 seconds. Without any entry in aggregation-rules.conf, Graphite only saves the last metric value it receives within that 10-second window.
You are seeing the above problem because of that behaviour. You need to add an entry for your metric in aggregation-rules.conf with the sum method, like this:
local.junk (10) = sum local.junk
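For reference, the general aggregation-rules.conf entry format is
output_template (frequency) = method input_pattern
so this rule sums every local.junk value received during each 10-second window into a single datapoint.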

Transform for graphite counter

I'm using the incr function from the Python statsd client. The key I'm sending for the name is registered in Graphite, but it shows up as a flat line on the graph. What filters or transforms do I need to apply to get the rate of the increments over time? I've tried apply function > transform > integral and apply function > special > aggregate by sum, but no success yet.
The function you want is summarize() - see it here: http://graphite.readthedocs.org/en/latest/functions.html
To get totals over time, just use the summarize function with alignToFrom=true.
For example, you can use the following target for a 1-day period:
summarize(stats_counts.your.metrics.path,"1d","sum",true)
See graphite summarize datapoints for more details.
The data is there; it just needs hundreds of counts before you can start to see it on the graph. Taking the integral also works and shows the cumulative number of hits over time, though I've had to multiply it by roughly 100 to get approximately the correct value.
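A sketch of that approach (reusing the stats_counts.your.metrics.path placeholder from above; the scale factor of 100 is an assumption that depends on your statsd flush settings):
scale(integral(stats_counts.your.metrics.path), 100)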
