I wrote my own Munin plugin and I'm confused about how Munin represents values between 0 and 1.
The second line of values uses a notation with 'm'. What does it mean, and how can I avoid it? I just want a plain floating-point value like 0.33!
What am I doing wrong?
Here is the plugin configuration:
graph_title Title
graph_args --base 1000 -l 0
graph_vlabel label
graph_category backup
Every hint is welcome.
UPDATE
OK, I finally found it: https://serverfault.com/questions/123761/what-does-the-m-unit-in-munin-mean
'm' stands for milli!
I'm still a bit confused about why Munin uses it in this context!
Is there any way to avoid this?
Here are some configuration items that may be of interest; see the Munin Plugin Reference.
You can change the numeric resolution of the values shown in the table with the following. It uses C/C++ printf-style formatting, and spacing may be an issue in the end result.
graph_printf %5.2le # default is %6.2lf
# it appears the 'l' (for long) is needed
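Applied to the configuration from the question, a sketch; graph_scale is a further graph attribute from the plugin reference that disables SI-prefix scaling (the 'm' notation) altogether:
graph_title Title
graph_args --base 1000 -l 0
graph_vlabel label
graph_category backup
graph_scale no      # turn off SI prefixes such as 'm'
graph_printf %6.2lf # plain fixed-point values in the legend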
For certain graph types you can also alter the display period for the Munin graph. Munin graphs the average of the data between the last two (nominally 5-minute-spaced) measurements, with per-second units on the result by default. This is based on the field's type attribute, which defaults to GAUGE. If your data is, for example, reporting accumulated counts, you can use:
type DERIVE  # data displayed is the change since the last reading
type COUNTER # accumulating counter with reset on rollover
For each of these you can change the vertical axis to use per-minute or per-hour units with:
graph_period minute # if type is DERIVE or COUNTER, display avg per period instead of the default avg per second
Note that the table shows the derived data for the graph_period.
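Putting these together, a sketch of a plugin config fragment for a field reporting an accumulated event count (the field name events is made up for illustration):
events.label events
events.type DERIVE
events.min 0        # keep DERIVE from going negative when the counter resets
graph_period minute # legend and axis show events per minute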
See RRDCreate (rrdtool create) for implementation particulars; Munin uses RRD underneath.
I'm trying to divide 2 series to get their ratio.
For example, I have sites (a.com, b.com, c.com) selected as * (all sites).
Each of them has stats for the total number of sections and the number of errors that occurred. I want to show errors/sections as bars (sections > errors for each site), one bar per site, so here I want to get 3 bars.
So:
A parser.*.sections.total
B parser.*.errors.total
X-Axis Mode: Series
Display: Draw Mode: Bars
When I try to use divideSeries I always get ValueError(divideSeries second argument must reference exactly 1 series).
A new function, divideSeriesLists, was introduced in Graphite 1.0.2 for dividing one series list by another. Both series lists should be of the same length.
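With it, a minimal sketch for the metrics in the question (assuming both wildcards expand the sites in the same order):
divideSeriesLists(parser.*.errors.total, parser.*.sections.total)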
You can use mapSeries with divideSeries to do vector matching of series in Graphite (or maybe asPercent, depending on which version of Graphite you are using).
An example query:
aliasByNode(reduceSeries(mapSeries(groupByNodes(parser.*.{sections,errors}.total, 'maxSeries', 1, 2), 0), 'asPercent', 1, 'sections', 'errors'), 0)
I'm not sure which aggregation function you are using, so replace maxSeries with the function you need.
Check out this blog post about using mapSeries with divideSeries for more explanation.
Here is an example from our system in the Grafana query editor.
I spent more than two months with RRDtool trying to find out how to store and visualize data on a graph. I'm very close to my goal now, but for some reason I don't understand why some of my data is considered to be NaN.
I count lines in gigabyte-sized log files and feed the result to an RRD database to visualize event occurrences. The step of the database is 60 seconds, and the data is inserted, timestamped to the second, whenever it is available, so there is no guarantee that the next timestamp will be within the heartbeat or within the step. Sometimes there is no data for minutes.
With such big gaps, most of my data is considered to be NaN.
b1_5D.rrd
1420068436:1
1420069461:1
1420073558:1
1420074583:1
1420076632:1
1420077656:1
1420079707:1
1420080732:1
1420082782:1
1420083807:1
1420086881:1
1420087907:1
1420089959:1
1420090983:1
1420094055:1
1420095080:1
1420097132:1
1420098158:1
1420103284:1
1420104308:1
1420107380:1
1420108403:1
1420117622:1
1420118646:1
1420121717:1
1420122743:1
1420124792:1
1420125815:1
1420131960:1
1420134007:1
1420147326:1
1420148352:1
# 60s step; GAUGE DS with a 120s heartbeat; RRAs keep 1-minute averages for
# 1 day, 10-minute for 1 week, 30-minute for 30 days, and 6-hour for ~1 year
rrdtool create b1_4A.rrd --start 1420066799 --step 60 \
    DS:Value:GAUGE:120:0:U \
    RRA:AVERAGE:0.5:1:1440 \
    RRA:AVERAGE:0.5:10:1008 \
    RRA:AVERAGE:0.5:30:1440 \
    RRA:AVERAGE:0.5:360:1460
The above gives me an empty graph for the input above.
If I extend the heartbeat, then it fills the time gaps with the same data. I've tried inserting zero values, but that averages out the counts and brings the results down into the milli range.
Maybe I'm getting something wrong about RRDtool.
It would be great if someone could explain what I'm doing wrong.
Thank you.
It sounds as if your data - which is event-based at irregular timings - is not suitable for an RRD structure. RRD prefers to have its data at constant, regular intervals, and will coerce the incoming data to match its requirements.
Your RRD is defined to have a 60s step, and a 120s heartbeat. This means that it expects one sample every 60s, and no further apart than 120s.
Your DS is a gauge, and so the values you enter (all of them '1' in your example) will be the values stored, after any time normalisation.
If you increase the heartbeat, then a value received within this time will be used to make a linear approximation to fill in all samples since the last one. This is why doing so fills the gaps with the same data.
Since your step is 60s, the smallest sample time width will be 1 minute.
Since you are always storing '1's, your graph will therefore show either '1' (when the sample was received within the heartbeat window) or Unknown (when the heartbeat expired).
In other words, your graph is showing exactly what you gave it. Your data is being coerced into a regular set of numerical values at a 1-minute step, each being 1 or Unknown.
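If restructuring is an option, one sketch (assuming you can aggregate the events into a single update per 60-second step yourself, and with b1_events.rrd as an illustrative filename) is to use an ABSOLUTE data source and push a value every step, explicit zeros included, so the heartbeat never expires:
# ABSOLUTE stores each value divided by the time since the last update,
# i.e. an events-per-second rate, and resets after every update
rrdtool create b1_events.rrd --start 1420066799 --step 60 \
    DS:Value:ABSOLUTE:120:0:U \
    RRA:AVERAGE:0.5:1:1440
rrdtool update b1_events.rrd N:3 # 3 events in the last minute
rrdtool update b1_events.rrd N:0 # no events, but the sample stays known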
Thank you kindly for your time.
I'm merely trying to plot a simple time series data set, but am running into a number of basic issues (one of which I'll ask about here). For example, I have a notepad file that starts with:
"x"
"1",2.731
"2",2.562
"3",2.632
"4",2.495
"5",1.978
...and so on...
So R reads it just fine, e.g. myfile=read.table("F:/Documents/myfile.txt",sep=""). However, the values seem to change under a conversion using R's ts function, i.e.
myfile = ts(myfile,start=1,end=120,frequency=1)
plot(myfile, type="o",pch=22,lty=1,pty=2,xlab="Month",ylab="Values",main="My File")
So when plotted, the first value starts at 20+ for some reason, as opposed to 2+. Furthermore, R assumes that the y-axis goes from 1 to 120 (mirroring the x-axis), which is not the right scale (it should be roughly 0 through 10). In another data set (one using integers), everything was shifted upward by 1. In any event, I believe the issue is probably about how to properly identify the y-axis.
Any ideas on how to tackle this? Thanks!
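For reference, a minimal R sketch. The commas suggest the file is comma-separated rather than whitespace-separated, so sep="" likely makes read.table treat each whole line as a single string column; those strings then plot as factor codes between 1 and 120, which would explain both the 20+ start and the 1-to-120 y-axis. Reading it as CSV avoids that (the column name x comes from the file's header):
myfile <- read.csv("F:/Documents/myfile.txt") # the lone "x" header makes the first column row names
series <- ts(myfile$x, start = 1, frequency = 1)
plot(series, type = "o", pch = 22, lty = 1,
     xlab = "Month", ylab = "Values", main = "My File")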
We have 4 data series, and once in a while one of the 4 has a null because we missed reading the data point. This makes the graph look like we have awful spikes of lost incoming volume, which is not true; we were just missing the data point.
I am doing a basic sumSeries(server*.InboundCount) right now, where the * matches servers 1, 2, 3 and 4.
Is there a way for Graphite to NOT sum those points on the line, and instead make the sum null for those points in time as well, so it connects the line from the point where there is data to the next point where there is data?
NOTE: We also display the server*.InboundCount graphs individually to watch for spikes on individual servers.
Or perhaps there is a function that looks at all the series and, if any of the values is null, returns null for every one of the X series it takes, so the sum becomes null+null+null+null, which hopefully doesn't result in a spike and shows null.
thanks,
Dean
This is an old question, but it still deserves an answer as a point of reference. What you're after, I believe, is the function keepLastValue:
Takes one metric or a wildcard seriesList, and optionally a limit to the number of ‘None’ values to skip over. Continues the line with the last received value when gaps (‘None’ values) appear in your data, rather than breaking your line.
This would make your function
sumSeries(keepLastValue(server*.InboundCount))
This will work OK if you have a single null datapoint here and there. If you have multiple consecutive null datapoints, you can specify how many values to look back over before a null breaks your data. For example, the following will look back up to 10 values before the sumSeries breaks:
sumSeries(keepLastValue(server*.InboundCount, 10))
I'm sure you've since solved your problems, but I hope this helps someone.
I'm trying to graph data using statsd and Graphite. I have a simple counter that I increment by 1, and when I graph the counter's values over the day I see strange values like 0.09 as the peak in my graph (see http://i.stack.imgur.com/o4gmz.png).
This graph should be showing 2 logins, but instead it shows 0.09. If I change the time scale from 1 day to the last 15 minutes, it correctly shows the two logins (see http://i.stack.imgur.com/23vDJ.png).
I've set up my finest retention to be in 10s increments in storage-schemas.conf:
retentions = 10s:7d,1m:21d,24h:5y
I've set up my storage-aggregation.conf file to sum counts:
[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
(And, before you ask: yes, this is a .count.)
If I try my URL with &rawData=true then in either case I see some Nones, some 0.0s, and a pair of 1.0s separated by some 0.0s. I never see these fractional values that somehow show up on the graph. So... Is this a bug? Am I doing something wrong?
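One thing worth checking first: storage-schemas.conf and storage-aggregation.conf only take effect when a whisper file is first created, so it is worth confirming what the existing file actually uses with whisper-info.py (the path below is a guess based on the default Graphite layout):
whisper-info.py /opt/graphite/storage/whisper/path/to/logins/count.wsp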
There's also the consolidateBy function, which tells Graphite what to do when there are not enough pixels to draw every datapoint accurately. By default it uses the 'average' function, hence the strange results at larger time ranges. Here's an excerpt from the documentation:
When a graph is drawn where the width of the graph in pixels is smaller than the number of datapoints to be graphed, Graphite consolidates the values to prevent line overlap. The consolidateBy() function changes the consolidation function from the default of ‘average’ to one of ‘sum’, ‘max’, or ‘min’. This is especially useful in sales graphs, where fractional values make no sense and a ‘sum’ of consolidated values is appropriate.
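Applied to the question's graph, a sketch (stats.logins.count stands in for the actual metric name):
consolidateBy(stats.logins.count, 'sum')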
Another function that could be useful is hitcount. Here's a short excerpt from the documentation on why it's useful:
This function is like summarize(), except that it compensates automatically for different time scales (so that a similar graph results from using either fine-grained or coarse-grained records) and handles rarely-occurring events gracefully.
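For example, again with stats.logins.count as a stand-in, summing hits into one-day buckets:
hitcount(stats.logins.count, '1day')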
I spent some time scratching my head over why I was getting fractions for my counter at time ranges longer than a couple of hours when my aggregation rule was max. It's pretty confusing, especially at the beginning when you play with single counters to see if everything works. Checking rawData is quite a good sanity check for debugging ;)