I have a counter metric that counts the number of errors thrown by my application. I'm trying to use a singlestat panel to show the number of errors thrown in the selected time period, but failing miserably.
I've tried sumSeries(nonNegativeDerivative(myapp.count)) but this didn't work.
Any tips would be greatly appreciated.
Nick
You can try the following:
integral(transformNull(myapp.count, 0))
Since it is a count, the integral will just be the sum of all counts over the selected period. The transformNull substitutes 0 for times where no counts were collected, so missing datapoints don't break the sum.
Then, in the Grafana singlestat panel, select max as the value function, so you get the last (and highest) value of the integral.
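To see why this works, here's a rough Python sketch of the assumed semantics of transformNull and integral; this is a simulation for illustration, not Graphite's actual implementation:

```python
# Toy simulation of Graphite's transformNull + integral semantics,
# assuming a series is a list of values where None marks a missing datapoint.
def transform_null(series, default=0):
    """Replace missing (None) datapoints with a default value."""
    return [default if v is None else v for v in series]

def integral(series):
    """Running sum of the series, as integral() computes it."""
    total, out = 0, []
    for v in series:
        total += v
        out.append(total)
    return out

counts = [2, None, 3, 1, None]           # per-interval error counts
print(integral(transform_null(counts)))  # [2, 2, 5, 6, 6] -- last value is the total
```

The series is non-decreasing, which is why picking max in the singlestat panel yields the total for the period.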
I am trying to create a simple revenue per person calc that works with different filters within the data. I have it working for a single record, however, it breaks and aggregates incorrectly with multiple records.
The formula I have now is simply Sum([Revenue]) / Sum([Attendance]). This works when I only have a single event selected. However, as soon as I select multiple shows, it aggregates and doesn't do the weighted average.
I'm making some assumptions here, but hopefully this will help you out. I've created an .xlsx file with the following data:
Event     Revenue   Attendance
Event 1   63761     6685
Event 2   24065     3613
Event 3   69325     4635
Event 4   41996     5414
Inside Tableau I've created the calculated column for Rev Per Person.
Finally, in the Analysis dropdown I've enabled Show Column Grand Totals. This gives me the following:
Simple Fix
The problem is that all of the column totals are being calculated using the SUM aggregation. This is the desired behavior for Revenue and Attendance, but for Rev Per Person, you want to display the average.
In Analysis / Totals / Total All Using you can configure the default aggregation. We don't want to set all of them here, but it's useful to know. Leave that as it is, and instead click on the Rev Per Person grand total value and change it from 'Automatic' to 'Average'.
Now you'll see a number much closer to the expected.
But it's not exactly what you'd expect. The average of all the Rev Per Person values gives us $9.73, but if you take total Revenue / total Attendance you'd expect a value of $9.79.
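A quick check of the arithmetic on the sample data above confirms both numbers:

```python
# Revenue and attendance for Events 1-4 from the sample data.
revenue    = [63761, 24065, 69325, 41996]
attendance = [6685, 3613, 4635, 5414]

# Average of the per-event ratios (what the simple fix displays):
avg_of_ratios = sum(r / a for r, a in zip(revenue, attendance)) / len(revenue)

# Weighted average: total revenue over total attendance:
weighted = sum(revenue) / sum(attendance)

print(round(avg_of_ratios, 2))  # 9.73
print(round(weighted, 2))       # 9.79
```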
Slightly More Involved Fix
First, undo the simple fix and leave all of the totals at 'Default'. Instead, we'll modify the Rev Per Person calculation.
IF Size() > 1 THEN
    // Grand Total
    SUM([Revenue] / [Attendance])
ELSE
    // Regular View
    SUM([Revenue]) / SUM([Attendance])
END
Size() is used to determine whether the calculation is being done for an individual cell or across the whole partition (the grand total).
More information on Size() and similar functions can be found on Tableau's website here - https://onlinehelp.tableau.com/current/pro/desktop/en-us/functions_functions_tablecalculation.html
Now I see the expected value of $9.79.
I am using a method similar to the Swift example in Get total step count for every date in HealthKit to acquire the number of steps from HealthKit. That works great.
My preference would be to get the number of steps per minute or per hour, though, instead of the per-day totals that code produces. While the sum of hourly steps perfectly matches the daily step count reported by HealthKit, the sum of per-minute steps does not match the hourly or daily sums.
Is there a way to get per-minute step summaries to work? Or is there a logical reason why they are so different?
The only difference between the code above and my code is the following for the per-hour calculations (which work the same):
interval.hour = 1
var anchorComponents = calendar.dateComponents([.hour, .day, .month, .year], from: Date())
and the following for the per-minute calculations (which usually over-count):
interval.minute = 1
var anchorComponents = calendar.dateComponents([.minute, .hour, .day, .month, .year], from: Date())
Clearly I am missing something. Thanks for any insight.
Eric
There are options (.strictStartDate, .strictEndDate) on the query predicate that determine whether the query finds only samples strictly within the interval (strict start and end date), or whether it also includes samples that start or stop outside of the interval.
My suspicion is that your step samples may cross minute boundaries, and depending on how you define the interval, this will make a significant difference.
I work with daily summaries and have defined the interval to be (strict, not-strict), so that a sample spanning two days is only counted in the day it starts in. (Swift 4)
let predicate = HKQuery.predicateForSamples(withStart: start, end: end, options: .strictStartDate)
I note that the final answer in the SO question you linked to defines a (strict, not-strict) interval as well.
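The double-counting effect can be sketched outside HealthKit with a toy Python model; the interval logic here is an assumption for illustration, not Apple's implementation:

```python
# Illustration (not HealthKit API): why a sample that spans an interval
# boundary can be counted in two intervals with non-strict matching.
def overlaps(sample, interval):
    """Non-strict: the sample is included if it overlaps the interval at all."""
    s_start, s_end = sample
    i_start, i_end = interval
    return s_start < i_end and s_end > i_start

def strict_start(sample, interval):
    """(strict, not-strict): the sample counts only in the interval it starts in."""
    s_start, _ = sample
    i_start, i_end = interval
    return i_start <= s_start < i_end

sample = (55, 65)               # a step sample running from second 55 to 65
minutes = [(0, 60), (60, 120)]  # two one-minute intervals

print(sum(overlaps(sample, m) for m in minutes))      # 2 -- double counted
print(sum(strict_start(sample, m) for m in minutes))  # 1 -- counted once
```

The shorter the interval, the more samples cross a boundary, which is consistent with per-minute sums drifting while hourly and daily sums agree.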
We've got a system that writes some count metrics using a Counter from the io.dropwizard.metrics Java library. The system is deployed once a day and our count metrics look like this:
What I need is to sum all those daily counts into a monthly value. Afterwards I'll show it as a single stat.
How can I do that?
Here is the query for the graph:
sumSeries(mysys.{prod,production}.{server1,server2,server3}.important.metric.count)
This will give you what you want for the last 30 days: summarize(nonNegativeDerivative(mysys.{prod,production}.{server1,server2,server3}.important.metric.count), "30d", "sum", false)
Or use consolidateBy(integral(sumSeries(nonNegativeDerivative(mysys.{prod,production}.{server1,server2,server3}.important.metric.count))), 'max') and set "Override relative time" in Grafana (on the "Time range" tab in the graph's settings) to 30 days.
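To see why nonNegativeDerivative matters here, a toy Python model of the assumed semantics (the counter resets to zero at each deploy; this is a simulation, not Graphite's code):

```python
# Toy model of why nonNegativeDerivative is needed before summing:
# the dropwizard Counter resets to 0 at every deploy.
def non_negative_derivative(series):
    """Per-interval increments; counter resets produce None, not negatives."""
    out = []
    for prev, cur in zip(series, series[1:]):
        out.append(cur - prev if cur >= prev else None)
    return out

counter = [0, 5, 9, 0, 3, 7]  # a deploy resets the counter after 9
deltas = non_negative_derivative(counter)
print(deltas)  # [5, 4, None, 3, 4]

# Summing the increments (ignoring the reset) gives the true monthly total:
total = sum(d for d in deltas if d is not None)
print(total)   # 16
```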
We have 4 data series, and once in a while one of the 4 has a null because we missed reading the data point. This makes the graph look like we have awful spikes of lost volume, which isn't true; we were just missing the data point.
I am doing a basic sumSeries(server*.InboundCount) right now for server 1, 2, 3, 4 where the * is.
Is there a way for graphite to NOT sum those points in time and instead return null, so the line connects from the last point with data to the next point with data?
NOTE: We also display the graphs server*.InboundCount individually to watch for spikes on individual servers.
Or perhaps there is a function that looks at all the series and, if any of the values is null, returns null for every series, so the sum sees null+null+null+null, which hopefully doesn't produce a spike and shows null instead.
thanks,
Dean
This is an old question but still deserves an answer as a point of reference. What you're after, I believe, is the function keepLastValue:
Takes one metric or a wildcard seriesList, and optionally a limit to the number of ‘None’ values to skip over. Continues the line with the last received value when gaps (‘None’ values) appear in your data, rather than breaking your line.
This would make your function
sumSeries(keepLastValue(server*.InboundCount))
This will work OK if you have a single null datapoint here and there. If you have multiple consecutive null datapoints, you can specify how far back to look before a null breaks your data. For example, the following will look back up to 10 values before sumSeries breaks:
sumSeries(keepLastValue(server*.InboundCount, 10))
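For reference, here is a rough Python simulation of the assumed keepLastValue semantics, where gaps longer than the limit are left untouched; this is an illustration, not Graphite's actual code:

```python
# Rough simulation of keepLastValue(series, limit): fill gaps of up to
# `limit` consecutive Nones with the last seen value; leave longer gaps alone.
def keep_last_value(series, limit=float("inf")):
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None and i > 0 and out[i - 1] is not None:
            j = i
            while j < len(out) and out[j] is None:  # measure the gap
                j += 1
            gap = j - i
            if gap <= limit:                        # fill only gaps within the limit
                out[i:j] = [out[i - 1]] * gap
            i = j
        else:
            i += 1
    return out

print(keep_last_value([3, None, 4, None, None, 5], limit=1))
# [3, 3, 4, None, None, 5] -- the two-in-a-row gap exceeds the limit of 1
```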
I'm sure you've since solved your problems, but I hope this helps someone.
I am feeding data into a metric, let's say it is "local.junk". What I send is just that metric, a 1 for the value, and the timestamp:
local.junk 1 1394724217
Where the timestamp changes, of course. I want to graph the total number of these instances over a period of time, so I used
summarize(local.junk, "1min")
Then I sent some data entries. I expected to see the number of requests received in each minute, but it always just shows the line at 1. If I summarize over a longer period like 5 minutes, it shows me some random number; I tried 10 requests and I see the graph at 4 or 5. Am I loading the data wrong, or using the summarize function wrong?
The summarize() function just sums up your data values, so correlate and verify that you are indeed sending correct values.
Also, to localize whether the function or the data has issues, you can run it on metricsReceived:
summarize(carbon.agents.ip-10-0-0-1-a.metricsReceived,"1hour")
Which version of Graphite are you running?
You may want to check your carbon aggregator settings. By default, carbon aggregates data every 10 seconds. Without an entry in aggregation-rules.conf, Graphite only saves the last metric it receives within the 10-second window.
You are seeing the above problem because of that behaviour. You need to add an entry for your metric in aggregation-rules.conf with the sum method, like this:
local.junk (10) = sum local.junk
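A toy illustration of the difference (assumed behaviour for illustration, not carbon's actual code):

```python
# Within one 10-second bucket, carbon keeps only the last received value
# unless an aggregation rule says to sum the values instead.
points = [1, 1, 1, 1]  # four "local.junk 1 <ts>" entries in the same 10s window

last_only = points[-1]  # default behaviour: the bucket stores 1
summed = sum(points)    # with the sum rule: the bucket stores 4

print(last_only, summed)  # 1 4
```

This matches the symptom above: no matter how many 1-valued entries arrive in a bucket, the stored value stays 1 until the sum rule is added.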