Getting accurate graphite stats_counts - graphite

We have the etsy/statsd Node application running, flushing stats to carbon/whisper every 10 seconds. If you send 100 increments (counts) in the first 10 seconds, Graphite displays them properly, like:
localhost:3000/render?from=-20min&target=stats_counts.test.count&format=json
[{"target": "stats_counts.test.count", "datapoints": [
[0.0, 1372951380], [0.0, 1372951440], ...
[0.0, 1372952460], [100.0, 1372952520]]}]
However, 10 seconds later this number falls to 0, null, or 33.3. Eventually it settles at a value 1/6th of the initial number of increments, in this case 16.6.
/opt/graphite/conf/storage-schemas.conf is:
[sixty_secs_for_1_days_then_15m_for_a_month]
pattern = .*
retentions = 10s:10m,1m:1d,15m:30d
I would like to get accurate counts. Is Graphite perhaps averaging the data over the 60-second windows rather than summing it? Using the integral function after some time has passed obviously gives:
localhost:3000/render?from=-20min&target=integral(stats_counts.test.count)&format=json
[{"target": "stats_counts.test.count", "datapoints": [
[0.0, 1372951380], [16.6, 1372951440], ...
[16.6, 1372952460], [16.6, 1372952520]]}]

Graphite data storage
Graphite manages the retention of data using a combination of the settings stored in storage-schemas.conf and storage-aggregation.conf. I see that your retention policy (the snippet from your storage-schemas.conf) is telling Graphite to store 1 data point for its highest resolution (e.g. 10s:10m) and that it should manage the aggregation of those data points as the data ages and moves into the older intervals (with the lower resolution defined, e.g. 1m:1d). In your case, the data crosses into the next retention interval at 10 minutes, and after 10 minutes the data will roll up according to the settings in storage-aggregation.conf.
Aggregation / Downsampling
Aggregation/downsampling happens when data ages and falls into a time interval that has a lower-resolution retention specified. In your case, you'll have been storing 1 data point for each 10-second interval, but once that data is over 10 minutes old Graphite will store it as 1 data point per 1-minute interval. This means you must tell Graphite how it should take the 10-second data points (of which you have 6 per minute) and aggregate them into 1 data point for the entire minute. Should it average? Should it sum? Depending on the type of data (e.g. timing, counter) this can make a big difference, as you hinted at in your post.
By default Graphite will average data as it aggregates it into lower-resolution data. Using average to perform the aggregation makes sense for timer (and even gauge) data. That said, you are dealing with counters, so you'll want to sum.
For example, in storage-aggregation.conf:
[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
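One caveat worth adding: storage-aggregation.conf is only consulted when a whisper file is created, so existing .wsp files keep the aggregation method they were created with. A hedged example of changing an existing file in place with the whisper tooling (the path assumes the default /opt/graphite/storage/whisper layout):
whisper-set-aggregation-method.py /opt/graphite/storage/whisper/stats_counts/test/count.wsp sum
Alternatively, delete the .wsp file (losing its history) and let it be recreated with the new settings.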
UI (and raw data) aggregation / downsampling
It is also important to understand how the aggregated/downsampled data is represented when viewing a graph or looking at raw (json) data for different time periods, as the data retention schema thresholds directly impact the graphs. In your case you are querying render?from=-20min which crosses your 10s:10m boundary.
Graphite will display (and perform realtime downsampling of) data according to the lowest-resolution retention that covers the requested time range. Stated another way, if you graph data that spans one or more retention intervals, you will get rollups accordingly. An example will help (assuming retentions = 10s:10m,1m:1d,15m:30d).
Any graph with data no older than the last 10 minutes will display 10-second aggregations. When you cross the 10-minute threshold, you will begin seeing 1 minute's worth of count data rolled up according to the policy set in storage-aggregation.conf.
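For instance, with the retention above, the same metric queried over two different ranges (paths from the question) comes back at different resolutions:
localhost:3000/render?from=-9min&target=stats_counts.test.count&format=json
returns the raw 10-second points, while
localhost:3000/render?from=-20min&target=stats_counts.test.count&format=json
returns 1-minute rollups aggregated according to storage-aggregation.conf.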
Summary / tl;dr
Because you are graphing/querying for 20 minutes worth of data (e.g. render?from=-20min) you are definitely falling into a lower precision storage setting (i.e. 10s:10m,1m:1d,15m:30d) which means that aggregation is occurring according to your aggregation policy. You should confirm that you are using sum for the correct pattern in the storage-aggregation.conf file. Additionally, you can shorten the graph/query time range to less than 10min which would avoid the dynamic rollup.

Related

How to get the accumulated value of an OpenTSDB counter using Bosun?

I have a counter named, for example, "mysvr.method_name1", with 3 tag key/value pairs. It is an OpenTSDB counter type, which in my situation represents the number of queries. How can I get its accumulated value over the past 30 days (in my case, the total number of requests in 30 days)?
I use the q function like below:
q("sum:mysvr.method_name1{tag1=v1}", "1590940800", "1593532800")
but it looks like the series is not monotonically increasing, due to server restarts, missing tag key/values, or other reasons.
So it seems the query below will not meet my requirement:
diff(q("sum:mysvr.method_name1{tag1=v1}", "1590940800", "1593532800"))
How can I fetch the accumulated value of the counter over the given time period?
The only thing I can be sure of is that the expression below gives the mean QPS in my situation:
avg(q("sum:rate{counter}:mysvr.method_name1{tag1=v1}", "1590940800", "1593532800"))
sum(q("sum:rate{counter}:mysvr.method_name1{tag1=v1}", "1590940800", "1593532800"))
works for my situation; the remaining gap is to multiply by the sample duration, which in my case is 30 seconds.
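Putting those pieces together, a sketch of the final Bosun expression (assuming the 30-second sample interval mentioned above; the multiplier should match your actual collection interval):
sum(q("sum:rate{counter}:mysvr.method_name1{tag1=v1}", "1590940800", "1593532800")) * 30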

Monitor and alert prometheus anomaly in number of metrics

We have a number of Prometheus servers, each of which monitors its own region (actually 2 per region). There are also Thanos servers that can query multiple regions, and we use Alertmanager for the alerting.
Lately we had an issue where a few metrics stopped reporting, and we only discovered it when we needed them.
We are trying to find out how to monitor changes in the number of reported metrics in a scalable system that grows and shrinks as required.
I'd be glad for your advice.
You can either count the number of timeseries in the head chunk (last 0-2 hours) or the rate at which you're ingesting samples:
prometheus_tsdb_head_series
or
rate(prometheus_tsdb_head_samples_appended_total[5m])
Then you compare said value with itself a few minutes/hours ago, e.g.
prometheus_tsdb_head_series / prometheus_tsdb_head_series offset 5m
and see whether it fits within an expected range (say 90-110%) and alert otherwise.
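As a sketch, an alert condition built from that ratio might look like the following (the ±10% band and the 5-minute offset are assumptions to tune, not fixed recommendations):
(prometheus_tsdb_head_series / prometheus_tsdb_head_series offset 5m < 0.9)
or
(prometheus_tsdb_head_series / prometheus_tsdb_head_series offset 5m > 1.1)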
Or you can look at the metrics with the highest cardinality only:
topk(100, count({__name__=~".+"}) by (__name__))
Note however that this last expression can be quite costly to compute, so you may want to avoid it. Plus the comparison with 5 minutes ago will not be as straightforward:
label_replace(topk(100, count({__name__=~".+"}) by (__name__)), "metric", "$1", "__name__", "(.*)")
/
label_replace(count({__name__=~".+"} offset 5m) by (__name__), "metric", "$1", "__name__", "(.*)")
You need the label_replace there because the match for the division is done on labels other than __name__. Computing this latest expression takes ~10s on my Prometheus instance with 150k series, so it's anything but fast.
And finally, whichever approach you choose, you're likely to get a lot of false positives (whenever a large job is started or taken down), to the point that it's not going to be all that useful. I would personally not bother trying.

Graphite: sumSeries() does not sum

Since this morning at 6 I've been experiencing strange behavior from Graphite.
We have two machines that collect data about received calls. I plot the chart for each, and I also plot the sum of these two charts.
While the charts for the single machines are fine, the sum is not working anymore.
This is a screenshot from Graphite and also Grafana showing how 4+5=5 (my math teacher is going to die over this).
This wrong sum also happens for other metrics, and I don't get why.
storage-schemas.conf
# Schema definitions for whisper files. Entries are scanned in order,
# and first match wins.
#
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...
[default_1min_for_1day]
pattern = .*
retentions = 60s:1d,1h:7d,1d:1y,7d:5y
storage-aggregation.conf
# Aggregation methods for whisper files. Entries are scanned in order,
# and first match wins.
#
# [name]
# pattern = regex
# xFilesFactor = <float between 0 and 1>
# aggregationMethod = <average|sum|last|max|min>
[time_data]
pattern = ^stats\.timers.*
xFilesFactor = 0.5
aggregationMethod = average
[storage_space]
pattern = \.postgresql\..*
xFilesFactor = 0.1
aggregationMethod = average
[default_1min_for_1day]
pattern = .*
xFilesFactor = 0
aggregationMethod = sum
aggregation-rules.conf (this may be the cause, but it was working before 6 AM; anyway, I don't see the stats_counts.all metric):
stats_counts.all.rest.req (60) = sum stats_counts.srv_*_*.rest.req
stats_counts.all.rest.res (60) = sum stats_counts.srv_*_*.rest.res
It seems that the two series were not aligned by timestamp, so the sum could not add up the points. This is visible in the following chart, where selecting a time highlights points in two different minutes (charts from Grafana).
I don't know why this happened. I restarted some services (these charts come from statsd for Python and Bucky); maybe it was the fault of one of those.
NOTE: this works now; however, I would like to know if someone knows the reason and how I can solve it.
One thing you need to ensure is that the services sending metrics to Graphite do it at the same granularity as your smallest retention period or the period you will be rendering your graphs in. If the data points in the graph will be every 60 seconds, you need to send metrics every 60 seconds from each service. If the graph will be showing a data point for every hour, you can send your metrics every hour. In your case the smallest period is every 60 seconds.
I encountered a similar problem in our system: Graphite was configured with a smallest retention period of 10s:6h, but we had 7 instances of the same service generating lots of metrics, and we configured them to send data every 20 seconds in order to avoid overloading our monitoring. This caused an almost unavoidable misalignment, where the series from the different instances each had a data point every 20 seconds, but some had it at 10, 30, 50 and others at 0, 20, 40. Depending on how many services happened to be aligned, we would get a very jagged graph, looking similar to yours.
What I did to solve this problem, for time periods that were returning data in 10-second increments, was to use the keepLastValue function -> keepLastValue(1). I used 1 as the parameter because I only wanted to fill 1 None value, since I knew our services cause this by sending once every 20 seconds rather than every 10. This way the series generated by different services never had gaps, so sums were closer to the real number and the graphs stopped having the jagged look. I guess this introduced a bit of extra lag in the monitoring, but that is acceptable for our use case.
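As an illustration of combining this with the sum (metric pattern taken from the question above; the limit of 1 matches the single missing point described here), the summed target would look something like:
sumSeries(keepLastValue(stats_counts.srv_*_*.rest.req, 1))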

Say a customer could enter a bank randomly every 2-6 seconds, what would be the statistical percentage of a person entering each second?

I'm writing a bank simulation program and I'm trying to find that percent to know how fast to program a new person coming in based on a timer that executes code every second. Sorry if it sounds kinda confusing, but I appreciate any help!
If you need to generate a new person entity every 2-6 seconds, why not generate a random number between 2 and 6 and set the timer to wait that amount of time? When the timer expires, generate the new customer.
However, if you really want the equivalent probability, you can get it by asking what it represents: the stochastic experiment is "at any given second of the clock, what is the probability of a client entering, such that it results in one client every 2-6 seconds?". Pick a specific incidence: say one client every 2 seconds. If on average you get 1 client every 2 seconds, then clearly the probability of getting a client at any given second is 1/2. If on average you get 1 client every 6 seconds, the probability of getting a client at any given second is 1/6.
The Poisson distribution gives the probability of observing k independent events in a period for which the average number of events is λ:
P(k) = λ^k · e^(-λ) / k!
This covers the case of more than one customer arriving at the same time.
The easiest way to generate Poisson distributed random numbers is to repeatedly draw from the exponential distribution, which yields the waiting time for the next event, until the total time exceeds the period.
int k = 0;          // number of arrivals counted in this period
double t = 0.0;     // accumulated waiting time
while (t < period)
{
    t += -log(1.0 - rnd()) / lambda;   // draw the next exponential inter-arrival time
    if (t < period) ++k;               // the arrival falls inside the period: count it
}
where rnd returns a uniform random number between 0 and (strictly less than) 1, period is the number of seconds and lambda is the average number of arrivals per second (or, as noted in the previous answer, 1 divided by the average number of seconds between arrivals).
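For completeness, here is a self-contained sketch of the same loop in C++ (the rnd implementation and the numbers are assumptions: one customer every 4 seconds on average, i.e. the middle of the 2-6 second range, simulated one 1-second tick at a time):
#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> rnd(0.0, 1.0);  // uniform in [0, 1)

    const double lambda = 1.0 / 4.0;  // assumed: on average 1 arrival every 4 seconds
    const double period = 1.0;        // one 1-second timer tick

    int k = 0;          // arrivals counted in this tick
    double t = 0.0;     // accumulated waiting time
    while (t < period)
    {
        t += -std::log(1.0 - rnd(gen)) / lambda;  // exponential inter-arrival time
        if (t < period) ++k;
    }
    std::printf("%d customer(s) entered during this second\n", k);
    return 0;
}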

Calculation of data delta

I'm writing a server that sends a "coordinates buffer" of game objects to clients every 300ms. But I don't want to send the full data each time. For example, suppose I have an array with elements that change over time:
0 0 100 50 -100 -50 at time t
0 10 100 51 -101 -50 at time t + 300ms
You can see that only the 2nd, 4th, and 5th elements have changed.
What is the right way to send not all the elements, but only the delta? Ideally I'd like a function that returns the complete data the first time and empty data when there are no changes.
Thanks.
Are you looking to optimize for efficiency, or is this a learning exercise? Some thoughts:
Unless there's a lot of data, it's probably easiest, and not terribly inefficient, to send all the data each time.
If you send deltas for all of the data points each time, you won't save much by sending zeroes for unchanged points instead of re-sending the previous values.
If you send data for only those points that change, you'll need to provide an index for each value. For example, if point 3 increases by 5 and point 8 decreases by 2, then you might send 3 5 8 -2. But now, since you're sending two values for each point that changes, you'll only win if fewer than half the points change.
If the values change relatively slowly, as compared to the rate at which you transmit updates, you might increase efficiency by transmitting the delta for each data point, but using only a few bits. For example, with 4 bits you can transmit values from -8 to +7. That would work as long as the deltas are never larger than that, or if it's ok to transmit several deltas before they "catch up" to the actual values.
It may not be worthwhile to have 2 different mechanisms: one to send the initial values, and another to send deltas. If you can tolerate the lag, it may make more sense to assume some constant initial value for every point, and then transmit only deltas.
There are lots of options. If most data isn't changing, just send (index,value) pairs of the changed elements. If most values change but the changes are small, compute deltas and gzip (or run length encode, or lots of other possibilities) the result.
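To make the (index, value) idea from the answers above concrete, here is a minimal sketch in C++ (names and types are illustrative, not from the question) that returns only the changed elements, so the result is empty when nothing changed:
#include <cstdio>
#include <vector>

struct Change { std::size_t index; int value; };

// Return (index, newValue) pairs for every element that differs between snapshots.
std::vector<Change> delta(const std::vector<int>& prev, const std::vector<int>& cur)
{
    std::vector<Change> out;
    for (std::size_t i = 0; i < cur.size(); ++i)
        if (i >= prev.size() || prev[i] != cur[i])
            out.push_back({i, cur[i]});
    return out;
}

int main()
{
    std::vector<int> t0 = {0, 0, 100, 50, -100, -50};   // time t         (from the question)
    std::vector<int> t1 = {0, 10, 100, 51, -101, -50};  // time t + 300ms

    for (const Change& c : delta(t0, t1))
        std::printf("index %zu -> %d\n", c.index, c.value);  // prints indices 1, 3, 4
    return 0;
}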
