Kusto query calculate 2 metric fields - azure-data-explorer

I'm writing a query in Kusto on Azure to get the memory fragmentation value of Redis. This value is obtained by dividing the RSS memory by the used memory. The problem is that I can't do the calculation using these two different fields, because I need to filter the value of the "Average" field for the "usedmemoryRss" and "usedmemory" metrics, and when I put that filter on the extend line the query returns no value. The code looks like this:
AzureMetrics
| extend m1 = Average | where MetricName == "usedmemoryRss" and
| extend m2 = Average | where MetricName == "usedmemory"
| extend teste = m1 / m2
When I remove the "where" clauses from those lines, it divides the value of each record by itself and returns 1. Is it possible to do this calculation? Thank you in advance for your help.

Thanks for the answer, Justin. You gave me an idea and I solved it this way:
let m1 = AzureMetrics | where MetricName == "usedmemoryRss" | where Average != 0 | project Average;
let m2 = AzureMetrics | where MetricName == "usedmemory" | where Average != 0 | project Average;
print memory_fragmentation=toscalar(m1) / toscalar(m2)

let Average = datatable (MetricName:string, Value:long)
[
    "usedmemoryRss", 10,
    "usedmemory", 5
];
let m1=Average
| where MetricName =="usedmemoryRss" | project Value;
let m2=Average
| where MetricName =="usedmemory" | project Value;
print teste = toscalar(m1) / toscalar(m2)
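If you need the fragmentation value as a trend over time rather than a single scalar, a sketch along these lines should also work (assuming the standard AzureMetrics columns TimeGenerated, MetricName and Average; the 5-minute grain is arbitrary):
AzureMetrics
| where MetricName in ("usedmemoryRss", "usedmemory")
| summarize rss = avgif(Average, MetricName == "usedmemoryRss"),
            used = avgif(Average, MetricName == "usedmemory")
    by bin(TimeGenerated, 5m)
| where used > 0
| extend memory_fragmentation = rss / used
| project TimeGenerated, memory_fragmentation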


Is it possible to iterate over the row values of a column in KQL to feed each value through a function

I am applying the series_decompose_anomalies algorithm to time data coming from multiple meters. Currently, I am using the ADX dashboard feature to feed my meter identifier as a parameter into the algorithm and return my anomalies and scores as a table.
let dt = 3hr;
Table
| where meter_ID == dashboardParameter
| make-series num=avg(value) on timestamp from _startTime to _endTime step dt
| extend (anomalies,score,baseline) = series_decompose_anomalies( num, 3,-1, 'linefit')
| mv-expand timestamp, num, baseline, anomalies, score
| where anomalies ==1
| project dashboardParameter, todatetime(timestamp), toreal(num), toint(anomalies), toreal(score)
I would like to bulk process all my meters in one go and return a table with all anomalies found across them. Is it possible to feed an array as an iterable in KQL or something similar to allow my parameter to change multiple times in a single run?
Simply add by meter_ID to make-series
(and remove | where meter_ID == dashboardParameter)
| make-series num=avg(value) on timestamp from _startTime to _endTime step dt by meter_ID
P.S.
Anomaly can be positive (num > baseline => flag = 1) or negative (num < baseline => flag = -1)
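So if only spikes above the baseline are of interest, the final filter in the demo below can be tightened, for example:
| where flag == 1 // keep only positive anomalies (num above baseline)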
Demo
let _step = 1h;
let _endTime = toscalar(TransformedServerMetrics | summarize max(Timestamp));
let _startTime = _endTime - 12h;
TransformedServerMetrics
| make-series num = avg(Value) on Timestamp from _startTime to _endTime step _step by SQLMetrics
| extend (flag, score, baseline) = series_decompose_anomalies(num , 3,-1, 'linefit')
| mv-expand Timestamp to typeof(datetime), num to typeof(real), flag to typeof(int), score to typeof(real), baseline to typeof(real)
| where flag != 0
|SQLMetrics|num|Timestamp|flag|score|baseline|
|write_bytes|169559910.91717172|2022-06-14T15:00:30.2395884Z|-1|-3.4824039875238131|170205132.25708669|
|cpu_time_ms|17.369556143036036|2022-06-14T17:00:30.2395884Z|1|7.8874529842826|11.04372634506527|
|percent_complete|0.04595588235294118|2022-06-14T22:00:30.2395884Z|1|25.019464868749985|0.004552738927738928|
|blocking_session_id|-5|2022-06-14T22:00:30.2395884Z|-1|-25.019464868749971|-0.49533799533799527|
|pending_disk_io_count|0.0019675925925925924|2022-06-14T23:00:30.2395884Z|1|6.4686836384225685|0.00043773741690408352|

KQL, time difference between separate rows in same table

I have a Sessions table:
Sessions
|Timespan|Name |No|
|12:00:00|Start|1 |
|12:01:00|End |2 |
|12:02:00|Start|3 |
|12:04:00|Start|4 |
|12:04:30|Error|5 |
I need to extract from it the duration of each session using KQL (but if you could suggest how I can do it with some other query language, that would also be very helpful). However, if the next row after a Start is also a Start, it means the session was abandoned and should be ignored.
Expected result:
|Duration|SessionNo|
|00:01:00| 1 |
|00:00:30| 4 |
You can try something like this:
Sessions
| order by No asc
| extend nextName = next(Name), nextTimestamp = next(Timestamp)
| where Name == "Start" and nextName != "Start"
| project Duration = nextTimestamp - Timestamp, No
When using the order by operator you get a serialized row set, on which you can then use operators such as next and prev. Basically you are seeking rows where Name == "Start" and the next row is not another "Start", so this is what I did.
You can find this query running at Kusto Samples open database.
let Sessions = datatable(Timestamp: datetime, Name: string, No: long) [
datetime(12:00:00),"Start",1,
datetime(12:01:00),"End",2,
datetime(12:02:00),"Start",3,
datetime(12:04:00),"Start",4,
datetime(12:04:30),"Error",5
];
Sessions
| order by No asc
| extend Duration = iff(Name != "Start" and prev(Name) == "Start", Timestamp - prev(Timestamp), timespan(null)), SessionNo = prev(No)
| where isnotnull(Duration)
| project Duration, SessionNo

Kusto query to split pie chart in half as per results

I am trying to display a result using a Kusto (KQL) query in a pie chart. The goal is to display the pie chart split half and half in case of any failure, and in a single full color when everything passes.
Basically, the log from a site has rows marked as pass or fail. When all rows are pass, the pie chart should display 100% in one color. When even a single row fails, it should display 50% in one color and 50% in the other. The query below works when 1) all rows are pass (full color), and 2) some rows pass and some fail, or even one fails (half-and-half pie chart), BUT 3) when ALL rows are fails it displays one color and does not split the pie chart in half.
QUERY I USED:
results
| where Name contains "jobqueues"
| where timestamp > ago(1h)
| extend PASS = (ErLvl > 2)
| extend FAIL = (ErLvl < 2)
| project PASS ,FAIL
| extend status = iff(PASS==true,"PASS","FAIL")
| summarize count() by status
| extend display = iff(count_>0,1,0)
| summarize percentile(display, 50) by status
| render piechart
Please suggest what can be done to solve this problem. Thanks in advance.
Let's summarize your question:
There are only two outcomes of your query:
A piechart showing 50% vs 50%
A piechart showing 100%
From your description we learn that when
All rows are PASS we plot piechart 2.
Any row has FAIL we plot piechart 1.
Let's see how we can achieve this after these lines from your code:
| extend status = iff(PASS==true,"PASS","FAIL")
| summarize count() by status
We should have a table looking like so:
|status|count_|
|PASS|x|
|FAIL|y|
Looks like we need to perform some logic on this. You were originally plotting based on the operation result. My idea was to just generate a table of pass = 1 and fail = 1 for the 50% vs 50% case, and another table of pass = 1 and fail = 0 for the 100% case.
So following that logic we need to perform the following mapping, from the summarized count_ values to the count2 values we want to plot:
|FAIL count_|PASS count_|maps to|FAIL count2|PASS count2|
|>0|>0|→|1|1|
|>0|=0|→|1|1|
|=0|>0|→|0|1|
Logical representation (given count_ >= 0):
if the FAIL count_ > 0, then the FAIL count2 = 1, else 0
the PASS count2 is always equal to 1
We only need to apply this to the row where status == "FAIL", but summarize doesn't guarantee a row for a status when there are no observations of it.
Guarantee summarize results:
| extend fail_count = iif(status == "FAIL", count_, 0)
| extend pass_count = iif(status == "PASS", count_, 0)
| project fail_count,pass_count
| summarize sum(fail_count), sum(pass_count)
Apply logic
| extend FAIL = iff(sum_fail_count > 0, 1, 0)
| extend PASS = 1
| project FAIL, PASS
Now our result looks like:
|PASS|FAIL|
|1|1 or 0|
In order to plot this as a pie chart we just need to transpose it, so the columns PASS and FAIL become rows of a "Status" column.
We can use a simple pack and mv-expand for this
//transpose for rendering
| extend tmp = pack("FAIL",FAIL,"PASS",PASS)
| mv-expand kind=array tmp
| project Status=tostring(tmp[0]), Count=toint(tmp[1])
| render piechart
And that's it!~
Final query:
results
| where Name contains "jobqueues"
| where timestamp > ago(1h)
| extend PASS = (ErLvl > 2)
| extend FAIL = (ErLvl < 2)
| project PASS ,FAIL
| extend status = iff(PASS==true,"PASS","FAIL")
| summarize count() by status
//ensure results
| extend fail_count = iif(status == "FAIL", count_, 0)
| extend pass_count = iif(status == "PASS", count_, 0)
| project fail_count,pass_count
| summarize sum(fail_count), sum(pass_count)
//apply logic
| extend FAIL = iff(sum_fail_count > 0, 1, 0)
| extend PASS = 1
| project FAIL, PASS
//transpose for rendering
| extend Temp = pack("FAIL",FAIL,"PASS",PASS)
| mv-expand kind=array Temp
| project Status=tostring(Temp[0]), Count=toint(Temp[1])
| render piechart
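For reference, the same idea can be sketched a bit more compactly with countif, since a summarize without a by clause always returns a row even when the filter matches nothing (same assumptions about the results table, ErLvl and timestamp as in the question; untested):
results
| where Name contains "jobqueues"
| where timestamp > ago(1h)
| summarize fails = countif(ErLvl < 2) // the question's FAIL criterion
| extend FAIL = iff(fails > 0, 1, 0), PASS = 1
| extend tmp = pack("FAIL", FAIL, "PASS", PASS)
| mv-expand kind=array tmp
| project Status = tostring(tmp[0]), Count = toint(tmp[1])
| render piechart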

Alert on error rate exceeding threshold using Azure Insights and/or Analytics

I'm sending customEvents to Azure Application Insights that look like this:
timestamp | name | customDimensions
----------------------------------------------------------------------------
2017-06-22T14:10:07.391Z | StatusChange | {"Status":"3000","Id":"49315"}
2017-06-22T14:10:14.699Z | StatusChange | {"Status":"3000","Id":"49315"}
2017-06-22T14:10:15.716Z | StatusChange | {"Status":"2000","Id":"49315"}
2017-06-22T14:10:21.164Z | StatusChange | {"Status":"1000","Id":"41986"}
2017-06-22T14:10:24.994Z | StatusChange | {"Status":"3000","Id":"41986"}
2017-06-22T14:10:25.604Z | StatusChange | {"Status":"2000","Id":"41986"}
2017-06-22T14:10:29.964Z | StatusChange | {"Status":"3000","Id":"54234"}
2017-06-22T14:10:35.192Z | StatusChange | {"Status":"2000","Id":"54234"}
2017-06-22T14:10:35.809Z | StatusChange | {"Status":"3000","Id":"54234"}
2017-06-22T14:10:39.22Z | StatusChange | {"Status":"1000","Id":"74458"}
Assuming that status 3000 is an error status, I'd like to get an alert when a certain percentage of Ids end up in the error status during the past hour.
As far as I know, Insights cannot do this by default, so I would like to try the approach described here to write an Analytics query that could trigger the alert. This is the best I've been able to come up with:
customEvents
| where timestamp > ago(1h)
| extend isError = iff(toint(customDimensions.Status) == 3000, 1, 0)
| summarize failures = sum(isError), successes = sum(1 - isError) by timestamp bin = 1h
| extend ratio = todouble(failures) / todouble(failures+successes)
| extend failure_Percent = ratio * 100
| project iff(failure_Percent < 50, "PASSED", "FAILED")
However, for my alert to work properly, the query should:
Return "PASSED" even if there are no events within the hour (another alert will take care of the absence of events)
Only take into account the final status of each Id within the hour.
As the request is written, if there are no events, the query returns neither "PASSED" nor "FAILED".
It also takes into account any records with Status == 3000, which means that the example above would return "FAILED" (5 out of 10 records have Status 3000), while in reality only 1 out of 4 Ids ended up in error state.
Can someone help me figure out the correct query?
(And optional secondary questions: Has anyone setup a similar alert using Insights? Is this a correct approach?)
As mentioned, since you're only querying a single hour, you don't need to bin the timestamp, or use it as part of your aggregation at all.
To answer your questions:
The way to overcome no data at all would be to inject a synthetic row into your table which will translate to a success result if no other result is found
If you want your pass/fail criteria to be based on the final status for each ID, then you need to use argmax in your summarize - it will return the status corresponding to maximal timestamp.
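A minimal sketch of that argmax behavior, with made-up values that mirror the sample data above, just to illustrate the output column naming used in the query below:
datatable(Id:string, timestamp:datetime, isError:int) [
    "49315", datetime(2017-06-22T14:10:07Z), 1,
    "49315", datetime(2017-06-22T14:10:15Z), 0
]
| summarize argmax(timestamp, isError) by Id
// one row per Id, with columns max_timestamp and max_timestamp_isError;
// here Id 49315 yields max_timestamp_isError == 0, i.e. its latest status is not an error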
So to wrap it all up:
customEvents
| where timestamp > ago(1h)
| extend isError = iff(toint(customDimensions.Status) == 3000, 1, 0)
| summarize argmax(timestamp, isError) by tostring(customDimensions.Id)
| summarize failures = sum(max_timestamp_isError), successes = sum(1 - max_timestamp_isError)
| extend ratio = todouble(failures) / todouble(failures+successes)
| extend failure_Percent = ratio * 100
| project Result = iff(failure_Percent < 50, "PASSED", "FAILED"), IsSynthetic = 0
| union (datatable(Result:string, IsSynthetic:long) ["PASSED", 1])
| top 1 by IsSynthetic asc
| project Result
Regarding the bonus question - you can setup alerting based on Analytics queries using Flow. See here for a related question/answer
I'm presuming that the query returns no rows if you have no data in the hour, because the timestamp bin = 1h (aka bin(timestamp,1h)) doesn't return any bins?
But if you're only querying the last hour, I don't think you need the bin on timestamp at all?
Without having your data it's hard to repro exactly, but... you could try something like (beware syntax errors):
customEvents
| where timestamp > ago(1h)
| extend isError = iff(toint(customDimensions.Status) == 3000, 1, 0)
| summarize totalCount = count(), failures = countif(isError == 1), successes = countif(isError ==0)
| extend ratio = iff(totalCount == 0, 0, todouble(failures) / todouble(failures+successes))
| extend failure_Percent = ratio * 100
| project iff(failure_Percent < 50, "PASSED", "FAILED")
Hypothetically, getting rid of the hour binning should just give you back a single row here of
totalCount = 0, failures = 0, successes = 0, so the math for failure percent should give you back a 0 failure ratio, which should get you "PASSED".
Without being able to try it, I'm not sure whether that works or still returns no row if there's no data.
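One quick way to sanity-check that behavior without the real telemetry is a minimal sketch against an empty datatable:
datatable(isError:int) []
| summarize totalCount = count(), failures = countif(isError == 1), successes = countif(isError == 0)
// summarize without a "by" clause returns a single row even on empty input,
// so this yields totalCount = 0, failures = 0, successes = 0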
For your second question, you could use something like
let maxTimestamp = toscalar(customEvents
| where timestamp > ago(1h)
| summarize max(timestamp));
customEvents | where timestamp == maxTimestamp ...
// ... more query here
to get just the row(s) that have the timestamp of the last event in the hour?

How to calculate DAU/MAU using Application Insights Analytics?

Assuming I have a definition of a user, I can calculate the sum of all daily users and all monthly users.
customEvents
| where timestamp > ago(30d)
| where <condition>
| summarize by <user>, bin(timestamp, 1d)
| summarize count() by bin(timestamp, 1d)
| summarize DAU=sum(count_)
customEvents
| where timestamp > ago(30d)
| where <condition>
| summarize by <user>
| MAU=30*count
The question is how to calculate DAU/MAU? Some join magic?
Edit:
There is a much easier way to calculate usage metrics now - "evaluate activity_engagement":
union *
| where timestamp > ago(90d)
| evaluate activity_engagement(user_Id, timestamp, 1d, 28d)
| project timestamp, Dau_Mau=activity_ratio*100
| render timechart
-------
The DAU is really straightforward in Analytics - just use a dcount.
The tricky part of course is calculating the 28-day rolling MAU.
I wrote a post a few weeks back detailing exactly how to calculate stickiness in Application Insights Analytics. The trick is that you have to use hll() and hll_merge() to calculate the intermediate dcount results for each day, and then merge them together.
The end result looks like this:
let start=ago(60d);
let period=1d;
let RollingDcount = (rolling:timespan)
{
pageViews
| where timestamp > start
| summarize hll(user_Id) by bin(timestamp, period)
| extend periodKey = range(bin(timestamp, period), timestamp+rolling, period)
| mvexpand periodKey
| summarize rollingUsers = dcount_hll(hll_merge(hll_user_Id)) by todatetime(periodKey)
};
RollingDcount(28d)
| join RollingDcount(0d) on periodKey
| where periodKey < now() and periodKey > start + 28d
| project Stickiness = rollingUsers1 *1.0/rollingUsers, periodKey
| render timechart
Looks like this query does it:
let query = customEvents
| where timestamp > datetime("2017-02-01T00:00:00Z") and timestamp < datetime("2017-03-01T00:00:00Z")
| where <optional condition>;
let DAU = query
| summarize by <user>, bin(timestamp, 1d)
| summarize count() by bin(timestamp, 1d)
| summarize DAU=sum(count_), _id=1;
let MAU = query
| summarize by <user>
| summarize MAU=count(), _id=1;
DAU | join (MAU) on _id
| project ["DAU/MAU"] = todouble(DAU)/30/MAU*100, ["Sum DAU"] = DAU, ["MAU"] = MAU
Any suggestions how to calculate it per last few months?
Zaki, your queries calculate a point-in-time MAU/DAU. If you need a rolling MAU you can use the HLL approach suggested by Asaf, or the following, which is my preferred rolling MAU calculation using make-series and fir(). You can play with it hands-on using this link to the analytics demo portal.
The two approaches require some time to get used to... and from what I have seen both are blazing fast. One advantage to the make-series and fir() approach is that it is 100% accurate while the HLL approach is heuristic and has some level of error. Another bonus is that it is really easy to configure the level of user engagement that would make the user eligible for the count.
let endtime=endofday(datetime(2017-03-01T00:00:00Z));
let window=60d;
let starttime=endtime-window;
let interval=1d;
let user_bins_to_analyze=28;
let moving_sum_filter=toscalar(range x from 1 to user_bins_to_analyze step 1 | extend v=1 | summarize makelist(v));
let min_activity=1;
customEvents
| where timestamp > starttime
| where customDimensions["sourceapp"]=="ai-loganalyticsui-prod"
| where (name == "Checkout")
| where user_AuthenticatedId <> ""
| make-series UserClicks=count() default=0 on timestamp in range(starttime, endtime-1s, interval) by user_AuthenticatedId
// create a new column containing a sliding sum. Passing 'false' as the last parameter to fir() prevents normalization of the calculation by the size of the window.
| extend RollingUserClicks=fir(UserClicks, moving_sum_filter, false)
| project User_AuthenticatedId=user_AuthenticatedId , RollingUserClicksByDay=zip(timestamp, RollingUserClicks)
| mvexpand RollingUserClicksByDay
| extend Timestamp=todatetime(RollingUserClicksByDay[0])
| extend RollingActiveUsersByDay=iff(toint(RollingUserClicksByDay[1]) >= min_activity, 1, 0)
| summarize sum(RollingActiveUsersByDay) by Timestamp
| where Timestamp > starttime + 28d
| render timechart
