I'm trying to compute an average only when a duration is over a certain number of seconds, but I can't get it working. I have something like this:
| summarize Totalcount=count(),Average=avgif(round(duration/1000,2)>10.00)
That's because avgif() expects two arguments: the expression to average and the predicate. If I run what you've posted, I get:
avgif(): function expects 2 argument(s).
See the avgif() documentation for details. The solution could be:
| summarize Totalcount=count(),Average=avgif(round(duration/1000,2), round(duration/1000,2)>10.00)
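To make the semantics concrete, here is a small Python sketch of what the corrected query computes, using hypothetical millisecond durations (the sample values are assumptions, not from the original data):

```python
# avgif(expr, predicate) averages expr only over rows where predicate is true;
# count() counts every row regardless of the predicate.
durations_ms = [2500, 15000, 42000, 8000, 11500]  # hypothetical sample rows

seconds = [round(d / 1000, 2) for d in durations_ms]   # 2.5, 15.0, 42.0, 8.0, 11.5
over_10 = [s for s in seconds if s > 10.00]            # 15.0, 42.0, 11.5

total_count = len(durations_ms)                 # Totalcount = count()
average = sum(over_10) / len(over_10)           # Average = avgif(...)

print(total_count, average)
```

Note that the first argument to avgif() is the value to average and the second is the condition, which is why both appear in the fixed query.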
Related
How do I compute the difference and ratio between two Stackdriver metrics in MQL?
There are two parts to this question, but I would be grateful even if you can help me with just one:
Compute the difference between the two time series
Compute the ratio between the two time series. Bonus, if possible: the case where the denominator is null should be handled gracefully.
I got this far, but it does not yield the expected result (the resulting time series is always zero):
fetch global
| { t_0:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/started')
| align next_older(1m);
t_1:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/completed')
| align next_older(1m)
}
| outer_join 0
| div
Obviously, the code has been anonymized. What I want to accomplish is to track whether there is a difference between processes that have been started vs completed.
EDIT / ADDITIONAL INFO 2021-11-18
I used the projects.timeSeries v3 API for further debugging. Apparently the outer_join operation assumes that the labels of the two time series are the same, which is not the case in my example.
Does anybody know how to drop the labels so I can perform the join and aggregation?
EDIT / ADDITIONAL INFO 2021-11-19
The aggregation now works, as I managed to drop the labels using the map drop [...] operator.
The challenge was indeed the labels, as they are generated by the Spring Boot implementation of Micrometer. Since the label values are distinct between these two metrics, join always produced an empty result, and outer_join returned only the second time series.
As I understand it, you want the ratio between the server request counts when the process has started and when it has completed. You are using the align next_older(1m) aligner, which fetches the most recent value in each period, so the count may well be zero at the moment the process starts. I would recommend changing the aligner to mean or max to get the request count for the started and completed processes in the two time series:
align mean(1m)
Please see the documentation for more ways to write a ratio query: Examples of Ratio Queries.
This is what I have so far. The aggregation now works because the labels are dropped. I will update this example when I know more.
fetch global
| {
t_0:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/started')
| every (1m)
| map drop [resource.project_id, metric.status, metric.uri, metric.exception, metric.method, metric.service, metric.outcome]
; t_1:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/completed')
| every (1m)
| map drop [resource.project_id, metric.status, metric.uri, metric.exception, metric.method, metric.service, metric.outcome]
}
| within d'2021/11/18-00:00:00', d'2021/11/18-15:15:00'
| outer_join 0
| value val(0)-val(1)
After using the map add / map drop operators like in your own answer, make sure to use outer_join 0,0 which will give you a full outer join. Note that the 0,0 argument to outer_join means "substitute zeros if either stream's value is missing".
In your case, since the first stream counts "started" tasks and the second stream counts "completed" tasks, you are likely to find cases when the first metric has more rows than the second one. If you want to do a left join operation, the syntax is outer_join _, 0. The underscore followed by 0 means "don't substitute anything if the first stream's value is missing, but do substitute a zero if the second stream's value is missing."
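The join semantics can be illustrated outside MQL. Here is a rough Python analogy of outer_join 0,0 versus outer_join _,0 over two streams keyed by timestamp (the sample values are hypothetical):

```python
# Rough analogy of MQL join behavior over two time series keyed by timestamp.
started   = {"10:00": 5, "10:01": 3, "10:02": 4}   # hypothetical t_0 values
completed = {"10:00": 5, "10:02": 2}               # hypothetical t_1 values

# outer_join 0,0: full outer join, substituting 0 for any missing value.
full = {ts: (started.get(ts, 0), completed.get(ts, 0))
        for ts in sorted(set(started) | set(completed))}

# outer_join _,0: keep only points present in the first stream, substituting
# 0 when the second stream's value is missing (effectively a left join).
left = {ts: (v, completed.get(ts, 0)) for ts, v in started.items()}

# The "started minus completed" difference from the question:
diff = {ts: a - b for ts, (a, b) in left.items()}
print(diff)
```

In this toy data, "10:01" has a started count but no completed count, so the left join substitutes 0 for the second stream and the difference surfaces the in-flight process.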
How can I use jq to do filtering based on certain conditions?
Using the example from https://stedolan.github.io/jq/tutorial/#result2, say I want to filter the results by
.commit.author.date>=2016, and
.commit.comment_count>=1
All the items not satisfying such criteria will not show up in the end result.
What's the proper jq expression to do that? Thanks.
The response is an array of objects that contain commit information. Since you want to filter that array, you will usually want to map over it and filter with select/1: map(select(...)). You just need to provide the condition to filter on.
map(select(.commit | (.author.date[:4] | tonumber) >= 2016 and .comment_count >= 1))
The date in this particular case is a date string in ISO format. I'm assuming you want the commits from the year 2016 onward, so extract the year part and compare:
(.author.date[:4] | tonumber) >= 2016
Then combine that with a comparison on the comment count.
Note, I projected to the commit first to minimize repetition in the filter. I could have easily left that part out.
map(select((.commit.author.date[:4] | tonumber) >= 2016 and .commit.comment_count >= 1))
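For reference, the same filter can be expressed in Python, using a minimal stand-in for the GitHub API response (the commit objects below are hypothetical, trimmed to just the fields the filter touches):

```python
commits = [  # minimal stand-in for the GitHub commits API response
    {"commit": {"author": {"date": "2016-07-09T12:00:00Z"}, "comment_count": 2}},
    {"commit": {"author": {"date": "2015-01-01T09:30:00Z"}, "comment_count": 5}},
    {"commit": {"author": {"date": "2017-03-02T10:00:00Z"}, "comment_count": 0}},
]

# Mirror of: map(select((.commit.author.date[:4] | tonumber) >= 2016
#                        and .commit.comment_count >= 1))
kept = [c for c in commits
        if int(c["commit"]["author"]["date"][:4]) >= 2016
        and c["commit"]["comment_count"] >= 1]

print(len(kept))  # only the first commit passes both conditions
```

The `[:4]` slice grabs the year prefix of the ISO date string in both languages, which avoids full date parsing for a simple year comparison.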
I'm trying to sum up values based on the 'Description' column of a dataset. So far, I have this
=Sum(Cdbl(IIf(First(Fields!Description.Value, "Items") = "ItemA", Sum(Fields!Price.Value, "Items"), 0)))
But it keeps giving me an error saying that it "contains a First, Last, or Previous aggregate in an outer aggregate. These aggregate functions cannot be specified as nested aggregates" Is there something wrong with my syntax here?
What I need to do is take something like this...
Item | Price
Item A | 400.00
Item B | 300.00
Item A | 200.00
Item A | 100.00
And I need to get the summed Price for 'ItemA' - 700.00 in this case.
All of the answers I've found so far only show for a single dataset OR for use with a tablix. For example, the below code does not work because it does not specify the scope or the dataset to use.
=Sum(Cdbl(IIf(Fields!Description.Value = "ItemA", Sum(Fields!Price.Value), 0)))
I also can't specify a dataset to use, because the control I'm loading into is a textbox, not a tablix.
If anyone else sees this and wants an answer: I ended up returning a count of what I needed from another dataset. The other option I considered would be to create a 1x1 tablix, set its dataset, and then use the second bit of code posted above.
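The underlying aggregation itself is simple. Here is a Python sketch of what the SSRS expression should compute, using the sample rows from the question:

```python
rows = [  # the sample data from the question
    ("Item A", 400.00),
    ("Item B", 300.00),
    ("Item A", 200.00),
    ("Item A", 100.00),
]

# Equivalent of Sum(IIf(Description = "Item A", Price, 0)) over the dataset:
# sum the price only for rows whose description matches.
total = sum(price for item, price in rows if item == "Item A")
print(total)  # 700.0
```

The SSRS difficulty is purely about scope (which dataset the aggregate runs over), not about the arithmetic, which is why binding the expression to a dataset via a 1x1 tablix works.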
I have a table with a column of times such as
| time|
|=====|
| 9:20|
|14:33|
| 7:35|
In my query, I have ORDER BY time, but it sorts the times as strings, so the result is ordered as
|14:33|
| 7:35|
| 9:20|
What do I have to do to my ORDER BY statement to get the result to be sorted as times so it would result in
| 7:35|
| 9:20|
|14:33|
One solution is to pad hours that do not include a leading 0 with one in the query itself and then perform the sort.
SELECT * FROM <table> ORDER BY SUBSTR('0' || time, -5, 5);
Here's the breakdown on what the SUBSTR method is doing.
|| is the string concatenation operator in SQLite, so '0' || '7:35' gives '07:35', and '0' || '14:33' gives '014:33'. Since we're only interested in a string like HH:MM, we only want the last 5 characters of the concatenated string.
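The whole approach can be verified with Python's built-in sqlite3 module (the table and column names below are assumed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (time TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("9:20",), ("14:33",), ("7:35",)])

# Pad a leading '0', then keep only the last 5 characters so every value
# compares as 'HH:MM' ('07:35' < '09:20' < '14:33').
rows = conn.execute(
    "SELECT time FROM t ORDER BY SUBSTR('0' || time, -5, 5)"
).fetchall()
print([r[0] for r in rows])  # ['7:35', '9:20', '14:33']
```

The negative start index in SUBSTR counts from the end of the string, which is what discards the extra '0' on values that were already two-digit hours.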
If you store the original string with a leading 0 for hours and minutes, then you could simply order by the time column and get the desired results.
|14:33|
|07:35|
|09:20|
That will also make it easy to use the time column as an actual time value and do computations on it. For example, to add 20 minutes to all times:
SELECT TIME(time, '+20 minutes') FROM <table>;
The reason for the 0 padding is that SQLite currently only understands 24-hour times in the form 'HH:MM', not 'H:MM'.
Here's a good reference page for SQLite documentation on Date and Time related functions.
The best way is to store the time as seconds: either as a Unix timestamp (recommended) or as the number of seconds since midnight.
In the second case, 7:35 becomes 7*3600 + 35*60 = 27300, and 14:33 becomes 14*3600 + 33*60 = 52380. Store them as integers. Similarly, Unix timestamps store times as the number of seconds since 1970.
You can then sort them as integers, and use utility methods to handle the conversion.
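A minimal sketch of those conversion helpers in Python, assuming times are given as 'H:MM' or 'HH:MM' strings:

```python
def to_seconds(hhmm: str) -> int:
    """Convert 'H:MM' or 'HH:MM' to seconds since midnight."""
    hours, minutes = hhmm.split(":")
    return int(hours) * 3600 + int(minutes) * 60

def to_hhmm(seconds: int) -> str:
    """Convert seconds since midnight back to zero-padded 'HH:MM'."""
    return f"{seconds // 3600:02d}:{seconds % 3600 // 60:02d}"

times = ["9:20", "14:33", "7:35"]
ordered = sorted(times, key=to_seconds)          # integer sort, not string sort
print(ordered)                                   # ['7:35', '9:20', '14:33']
print(to_seconds("7:35"), to_seconds("14:33"))   # 27300 52380
```

Storing the integer and formatting only for display gives correct ordering for free and keeps arithmetic (adding minutes, computing differences) trivial.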
I'm using the incr function from the python statsd client. The key I'm sending for the name is registered in graphite but it shows up as a flat line on the graph. What filters or transforms do I need to apply to get the rate of the increments over time? I've tried an apply function > transform > integral and an apply function > special > aggregate by sum but no success yet.
The function you want is summarize(); see http://graphite.readthedocs.org/en/latest/functions.html
To get totals over time, use summarize with alignToFrom=true.
For example, for a 1-day period:
summarize(stats_counts.your.metrics.path, "1d", "sum", true)
See graphite summarize datapoints for more details.
The data is there; it just needs hundreds of counts before you can see it on the graph. Taking the integral also works and shows the cumulative number of hits over time, though I had to multiply it by 100 to get approximately the correct value.