App Insights - Empty as zeros - azure-application-insights

When I use bin(timestamp, 1m) and generate a timechart from it, bins with no data are simply missing, so the chart draws a straight line across the gap. How can I treat missing values as zeros?

Another option is to use the make-series operator instead of summarize, with default=0.
Make-series documentation
make-series creates a series that can be analyzed using advanced time-series functions. It's a bit clunkier than summarize, but offers other advantages.
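For example, a minimal sketch over the requests table (the 1-day window and 1-minute step are placeholders; adjust them to your data):
requests
| make-series RequestCount = count() default = 0 on timestamp from ago(1d) to now() step 1m
| render timechart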

Your best bet is to use iif() in conjunction with isnull() or isempty(). Mind you, if you're pulling from a custom metric you will need to make sure your value is cast to the same type in both branches.
requests
// customDimensions is a dynamic property bag; toint() returns null when the value is missing or not numeric
| extend mycustomvalue = toint(customDimensions.customDuration)
// both branches of iif() should be the same type, so use an int 0 to match toint() above
| extend mycustomduration = iif(isnull(mycustomvalue), 0, mycustomvalue)

Related

Divide two metrics in Google Cloud Monitoring / MQL

How do I compute the difference and ratio between two Stackdriver metrics in MQL?
There are two parts to this question, but I would be grateful even if you can help me with just one of them:
Compute the difference between the two time series.
Compute the ratio between the two time series. Add-on, if possible: the case where the denominator is null should be handled gracefully.
This is what I have so far, but it does not yield the expected result (the resulting time series is always zero):
fetch global
| { t_0:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/started')
| align next_older(1m);
t_1:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/completed')
| align next_older(1m)
}
| outer_join 0
| div
Obviously, the code has been anonymized. What I want to accomplish is to track whether there is a difference between processes that have been started vs completed.
EDIT / ADDITIONAL INFO 2021-11-18
I used the projects.timeSeries v3 API for further debugging. Apparently the outer_join operation assumes that the labels of the two time series are the same, which is not the case in my example.
Does anybody know how to delete the labels so that I can perform the join and aggregation?
EDIT / ADDITIONAL INFO 2021-11-19
The aggregation now works, as I managed to delete the labels using the map drop [...] maplet.
The challenge was indeed the labels, as they are generated by the Spring Boot implementation of Micrometer. Since the labels are distinct between the two metrics, the join operation always returned an empty result with join, or only the second time series with outer_join.
As far as I can see, you want to know the ratio between the requests count when the process has started and when it has completed, and you are using align next_older(1m), which may fetch the most recent value in the period. So it is possible that, when the process has just started, the count would be zero. I would therefore recommend changing the aligner to mean or max to get the request count for the started and completed processes in the two different time series.
align mean(1m)
Please see the documentation for more ways to do a ratio query: Examples of Ratio Query
This is what I have so far. The aggregation now works, as the labels are deleted. I will update this example when I know more.
fetch global
| {
t_0:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/started')
| every (1m)
| map drop [resource.project_id, metric.status, metric.uri, metric.exception, metric.method, metric.service, metric.outcome]
; t_1:
metric 'custom.googleapis.com/http/server/requests/count'
| filter
(metric.service == 'service-a' && metric.uri =~ '/api/completed')
| every (1m)
| map drop [resource.project_id, metric.status, metric.uri, metric.exception, metric.method, metric.service, metric.outcome]
}
| within d'2021/11/18-00:00:00', d'2021/11/18-15:15:00'
| outer_join 0
| value val(0)-val(1)
After using the map add / map drop operators as in your own answer, make sure to use outer_join 0,0, which will give you a full outer join. Note that the 0,0 argument to outer_join means "substitute zeros if either stream's value is missing".
In your case, since the first stream counts "started" tasks and the second stream counts "completed" tasks, you are likely to find cases when the first metric has more rows than the second one. If you want to do a left join operation, the syntax is outer_join _, 0. The underscore followed by 0 means "don't substitute anything if the first stream's value is missing, but do substitute a zero if the second stream's value is missing."
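For illustration, the full query from the answer above might then read (a sketch only, reusing the same metric names, filters, and dropped labels; val(0) - val(1) gives the difference, while val(0) / val(1), or | div, gives the ratio):
fetch global
| {
    t_0:
      metric 'custom.googleapis.com/http/server/requests/count'
      | filter (metric.service == 'service-a' && metric.uri =~ '/api/started')
      | every (1m)
      | map drop [resource.project_id, metric.status, metric.uri, metric.exception, metric.method, metric.service, metric.outcome]
  ; t_1:
      metric 'custom.googleapis.com/http/server/requests/count'
      | filter (metric.service == 'service-a' && metric.uri =~ '/api/completed')
      | every (1m)
      | map drop [resource.project_id, metric.status, metric.uri, metric.exception, metric.method, metric.service, metric.outcome]
  }
| outer_join 0, 0
| value val(0) - val(1)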

Generate Splunk report with only extracted fields

First and foremost, maybe what I am looking for isn’t possible or I am going down the wrong path. Please suggest.
Consider that I have raw data with n parameters, each separated by '&':
Id=1234&ACC=bc3gds5&X=TESTX&Y=456567&Z=4457656&M=TESTM&N=TESTN&P=5ec3a
Using SPL, I've filtered only a few fields (ACC, X, Y) that I'm interested in. Now I would like to generate the report in a tabular format with only the filtered fields, not the whole raw data.
There may be more than one way to do that, but I like to use rex. The rex command extracts text that matches a regular expression into fields. Once you have the fields, you can use SPL on them to do whatever you need.
index=foo
| rex "ACC=(?<ACC>[^&]+)&X=(?<X>[^&]+)&Y=(?<Y>[^&]+)"
| table ACC X Y

Kusto avgif() greater than

I'm trying to compute an average only where a duration is over a certain number of seconds, but I can't get it working.
I tried something like this:
| summarize Totalcount=count(),Average=avgif(round(duration/1000,2)>10.00)
That's because avgif() expects two arguments. If I run what you've posted, I get:
avgif(): function expects 2 argument(s).
Read the documentation.
The solution could be:
| summarize Totalcount=count(),Average=avgif(round(duration/1000,2), round(duration/1000,2)>10.00)
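A slightly tidier variant (a sketch, assuming the Application Insights requests table with duration in milliseconds) computes the rounded value once with extend:
requests
| extend durationSec = round(duration / 1000, 2)
| summarize Totalcount = count(), Average = avgif(durationSec, durationSec > 10.00)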

jq filtering based on conditions

How can I use jq to do filtering based on certain conditions?
Using the example from https://stedolan.github.io/jq/tutorial/#result2, say I want to filter the results by:
.commit.author.date>=2016, and
.commit.comment_count>=1
All the items not satisfying such criteria will not show up in the end result.
What's the proper jq expression to do that? Thx.
The response is an array of objects that contain commit information. Since you want to filter that array, you will usually want to map/1 over it and filter using select/1: map(select(...)). You just need to provide the condition to filter on.
map(select(.commit | (.author.date[:4] | tonumber) >= 2016 and .comment_count >= 1))
The date in this particular case is a date string in ISO format. I'm assuming you want the commits that were made in 2016 or later, so extract the year part and compare:
(.author.date[:4] | tonumber) >= 2016
Then combine that with comparing the comment count.
Note that I projected to .commit first to minimize repetition in the filter; I could easily have left that part out:
map(select((.commit.author.date[:4] | tonumber) >= 2016 and .commit.comment_count >= 1))
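Putting it together against the GitHub commits endpoint used in the linked tutorial (a sketch of the shell invocation):
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' |
  jq 'map(select((.commit.author.date[:4] | tonumber) >= 2016 and .commit.comment_count >= 1))'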

Grouping, missing data - Cognos Report Studio

In IBM Cognos Report Studio
I have a data structure like so, plain dump of the customer details:
Account|Type|Value
123-123| 19 |2000
123-123| 20 |2000
123-123| 21 |3000
If I remove the Type from my report I get:
Account|Value
123-123|2000
123-123|3000
It seems to have treated the two rows with an amount of '2000' as duplicates and removed one of them from my report.
My assumption was that Cognos would aggregate the data automatically, like so:
Account|Value
123-123|8000
I am lost as to what it is doing. Any pointers? If it is not grouping, I would still expect at least 3 rows:
Account|Value
123-123|2000
123-123|2000
123-123|3000
In any case I would like to end up with 1 line. The behaviour I'm getting is something I can't figure out. Thanks for any help.
Gemmo
The 'Auto-group & Summarize' feature is the default on new queries. This will find all unique combinations of attributes and roll up all measures to these unique combinations.
There are three ways to disable auto-group & summarize behavior:
Explicitly turn it off at the query level
Include a grain-level unique column, e.g. a key, in the query
Not include any measures in the query
My guess is that your problem is #3. The [Value] column in your example has to have its 'Aggregate Function' set to an aggregate function or 'Automatic' for the auto-group behavior to work. It's possible that column's 'Aggregate Function' property is set to 'None'. This is the standard setting for an attribute value and would prevent the roll up from occurring.
