I'm trying to use the result of one Kusto query (the min and max of the TimeGenerated field) as input to another, and I get the error below:
Query could not be parsed at 'datetime(start_time)'
let start_time = Heartbeat | summarize min(TimeGenerated);
let end_time = Heartbeat | summarize max(TimeGenerated);
Heartbeat
| where TimeGenerated > datetime(start_time) and TimeGenerated < datetime(end_time)
It works when I set start_time to a constant, or even when I use the ago() function.
You could try this:
// toscalar() turns the single-row result into a scalar value, and
// pack_array() packs both boundaries into one dynamic value, so a
// single let statement carries both the min and the max.
let times = toscalar(
    T1
    | summarize pack_array(min(timestamp), max(timestamp))
);
T2
| where timestamp > todatetime(times[0])
| where timestamp < todatetime(times[1])
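Applied to the Heartbeat table from the question, the same pattern would look like this (a sketch; note that the strict > and < bounds from the original query exclude the earliest and latest rows themselves, so use >= and <= if they should be included):
let times = toscalar(
    Heartbeat
    | summarize pack_array(min(TimeGenerated), max(TimeGenerated))
);
Heartbeat
| where TimeGenerated > todatetime(times[0]) and TimeGenerated < todatetime(times[1])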
I have a where condition that I want to run over a set of tables in my Azure Data Explorer DB. I found the "find in ()" operator in Kusto quite useful, and it works fine when I pass a list of tables as intended.
find withsource=DataType in (AppServiceFileAuditLogs, AzureDiagnostics)
where TimeGenerated > ago(31d)
project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId, DataType
| sort by BillableDataBytes nulls last
However, in my scenario, I would like to decide the list of tables at run time using another query.
Usage
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000 by DataType
| sort by BillableDataGB desc
| project DataType
find withsource=DataType in (<pass resulting table expression from above query here as comma separated list of tables>)
where TimeGenerated > ago(31d)
project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId, DataType
| sort by BillableDataBytes nulls last
I found some examples of passing all the tables in a database or cluster using wildcards, but that does not fit my scenario. Can somebody help me here?
Here is one way to achieve this:
let Tables = toscalar(Usage
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize make_set(DataType));
union withsource=T *
| where T in (Tables)
| count
Note that the toscalar expression is significant: it precalculates the list of tables once, which optimizes the filter on the union expression. Also note make_set, which collects all the DataType values into a single dynamic array that the in operator can consume (toscalar alone would only capture the first row). I also updated your query to avoid unnecessary work.
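Plugged back into the original goal, the same dynamic list can drive the billable-bytes breakdown. find is not an option here because it needs table names at parse time, but union with a filter on the withsource column accepts the runtime list. A sketch, assuming the standard Log Analytics columns _ResourceId, _BilledSize, and _IsBillable exist in the unioned tables:
let Tables = toscalar(Usage
    | where TimeGenerated > ago(32d)
    | where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
    | where IsBillable == true
    | summarize make_set(DataType));
union withsource=DataType *
| where DataType in (Tables)
| where TimeGenerated > ago(31d)
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId, DataType
| sort by BillableDataBytes nulls last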
I have a Kusto query that I am using to query Application Insights. The goal is to get the number of failed requests in 5-minute buckets and divide that by the total number of requests in the same 5-minute bucket. I will eventually build an alert that triggers if this percentage is greater than a certain value. But I can't seem to get the query right.
In the example below, I hardcode a specific timestamp to make sure I get some failures.
Here is the query:
let fn = "APP_NAME";
requests
| where success == "False" and cloud_RoleName == fn
| summarize failed=sum(itemCount) by bin(timestamp, 5m)
| where timestamp == "2021-05-17T20:20:00Z"
| join (
requests
| where cloud_RoleName == fn
| summarize reqs=sum(itemCount) by bin(timestamp, 5m)
| where timestamp == "2021-05-17T20:20:00Z"
) on timestamp
| project timestamp, failed, reqs
| extend p=round(failed/reqs, 2)
It currently returns:
timestamp [UTC]           | p | failed | reqs
5/17/2021, 8:20:00.000 PM | 0 | 1,220  | 6,649
Can anyone give me insight into how to get the decimal value (~0.18) I expect for p?
I had to cast the values to doubles to get it to return something other than 0; sum(itemCount) yields a long, and dividing a long by a long performs integer division.
let fn = "APP_NAME";
requests
| where success == "False" and cloud_RoleName == fn
| summarize failed=sum(itemCount) by bin(timestamp, 5m)
| join (
requests
| where cloud_RoleName == fn
| summarize reqs=sum(itemCount) by bin(timestamp, 5m)
) on timestamp
| project timestamp, failedReqs=failed, totalRequests=reqs, percentage=(todouble(failed) / todouble(reqs) * 100)
Another option that is a bit less verbose is to multiply by 100.0 (which is a double literal):
percentage = failed * 100.0 / reqs
Note that the multiplication has to happen before the division, so that the operands are promoted to double before dividing.
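In the context of the original query, the final extend would become something like this (a sketch):
| project timestamp, failed, reqs
| extend p = round(failed * 100.0 / reqs, 2)  // 100.0 forces floating-point division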
I have this query, and it displays the data on my graph, but I have to pick either data_in_Gbps or data_out_Gbps from the dropdown on the chart. I would like to show both series on one graph instead of having to create two graphs. What do I need to add to show both on the same graph?
Azure
| where ResourceId contains "tedr"
| where MetricName == "BitsInPerSecond"
| where TimeGenerated > (now() - 300h) and TimeGenerated <= now()
| project TimeGenerated, Resource, MaxInBps =Maximum
| join kind=inner
(
Azure
| where MetricName == "BitsOutPerSecond"
| where TimeGenerated > (now() - 300h) and TimeGenerated <= now()
| project TimeGenerated, Resource, MaxOutBps= Maximum
)
on TimeGenerated, Resource
| summarize data_in_Gbps = max(MaxInBps)/100 * 100, data_out_Gbps = max(MaxOutBps)/1000 * 100 by bin(TimeGenerated, 5m), Resource
Thanks!!
Two series are not supported when they also have other dimensions, in your case the split by Resource. Removing the split by Resource (by selecting "Don't split") will allow you to select multiple series.
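Alternatively, both metrics can be computed in a single summarize, so no join and no split are needed. A sketch, assuming Maximum is in bits per second (the Gbps conversion in the original query is unclear, so this divides by 1e9):
Azure
| where ResourceId contains "tedr"
| where MetricName in ("BitsInPerSecond", "BitsOutPerSecond")
| where TimeGenerated > ago(300h)
| summarize
    data_in_Gbps = maxif(Maximum, MetricName == "BitsInPerSecond") / 1000000000.0,
    data_out_Gbps = maxif(Maximum, MetricName == "BitsOutPerSecond") / 1000000000.0
    by bin(TimeGenerated, 5m)
| render timechart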
I am trying to write a query that will get me the average time between two custom events per user session. I have added custom tracking events throughout this application, and I want to query the time it takes a user to get from the 'Setup' event to the 'Process' event.
let allEvents=customEvents
| where timestamp between (datetime(2019-09-25T15:57:18.327Z)..datetime(2019-09-25T16:57:18.327Z))
| extend SourceType = 5;
let allPageViews=pageViews
| take 0;
let all = allEvents
| union allPageViews;
let step1 = materialize(all
| where name == "Setup" and SourceType == 5
| summarize arg_min(timestamp, *) by user_Id
| project user_Id, step1_time = timestamp);
let step2 = materialize(step1
| join
hint.strategy=broadcast (all
| where name == "Process" and SourceType == 5
| project user_Id, step2_time=timestamp
)
on user_Id
| where step1_time < step2_time
| summarize arg_min(step2_time, *) by user_Id
| project user_Id, step1_time,step2_time);
let 1Id=step1_time;
let 2Id=step2_time;
1Id
| union 2Id
| summarize AverageTimeBetween=avg(step2_time - step1_time)
| project AverageTimeBetween
When I run this query it produces this error message:
'' operator: Failed to resolve table or column or scalar expression named 'step1_time'
I am relatively new to writing Application Insights queries and have not found many resources to assist with this problem. Thank you in advance for your help!
I'm not sure what the let 1Id=step1_time lines are intended to do. Those lines are trying to declare a new value, but step1_time isn't in scope there; it was a column in another query. I'm also not sure why you're doing that pageViews | take 0 and unioning it with the events.
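For what it's worth, if the goal of those lets was to pull a single value out of a tabular expression, toscalar() is the mechanism for that. A sketch (firstSetup is a hypothetical name; step1 refers to the let from the query above):
// A let binds a scalar only if its right-hand side is scalar;
// toscalar() turns a tabular result into one.
let firstSetup = toscalar(step1 | summarize min(step1_time));
print firstSetup
That said, none of this is needed here. With the union and the lets removed, the query becomes: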
let allEvents=customEvents
| where timestamp between (datetime(2019-09-25T15:57:18.327Z)..datetime(2019-09-25T16:57:18.327Z))
| extend SourceType = 5;
let step1 = materialize(allEvents
| where name == "Setup" and SourceType == 5
| summarize arg_min(timestamp, *) by user_Id
| project user_Id, step1_time = timestamp);
let step2 = materialize(step1
| join
hint.strategy=broadcast (allEvents
| where name == "Process" and SourceType == 5
| project user_Id, step2_time=timestamp
)
on user_Id
| where step1_time < step2_time
| summarize arg_min(step2_time, *) by user_Id
| project user_Id, step1_time,step2_time);
step2
| summarize AverageTimeBetween=avg(step2_time - step1_time)
| project AverageTimeBetween
If I remove the things I don't understand (like the union with zero pageViews, and the lets), I get a result. But I don't have your data, so I had to use values other than "Setup" and "Process", and I don't know if it is what you expect.
You might want to look at the results of the step2 query without the summarize, just to check that what you're getting matches what you expect.
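For example (a sketch), replacing the final summarize with a projection shows the per-user pairs that feed the average:
step2
| project user_Id, step1_time, step2_time, delta = step2_time - step1_time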
A user wants a count of unique sessions per week in Application Insights. I have the query working, including a pivot, but the Week columns are out of order. I would prefer that they were in order.
pageViews
| where timestamp < now()
| summarize Sessions= dcount(session_Id)
by Week=bin(datepart("weekOfYear", timestamp), 1), user_AuthenticatedId
| order by Week
| evaluate pivot(Week, sum(Sessions))
| join kind=innerunique (pageViews
| summarize MostRecentRequest = max(timestamp) by user_AuthenticatedId)
on $right.user_AuthenticatedId == $left.user_AuthenticatedId
| project-away user_AuthenticatedId1
I've tried ordering by timestamp before the summarize, and ordering by Week after the summarize (still in there), with no luck.
There's currently a "trick" that will work: add serialize right after your order by:
pageViews
| where timestamp < now()
| where isnotempty(user_AuthenticatedId)
| summarize Sessions= dcount(session_Id)
by Week=bin(datepart("weekOfYear", timestamp), 1), user_AuthenticatedId
| order by Week
| serialize // <--------------------------------- RIGHT HERE
| evaluate pivot(Week, sum(Sessions))
| join kind=innerunique (pageViews
| summarize TotalSessions=dcount(session_Id), MostRecentRequest = max(timestamp) by user_AuthenticatedId)
on $right.user_AuthenticatedId == $left.user_AuthenticatedId
| project-away user_AuthenticatedId1
| top 100 by TotalSessions desc
This gets me the weeks in descending order in workbooks (I also added a total session count to sort/top by, with some custom column settings). The custom column settings I have in workbooks: delete all the #'d columns that are there by default, and add one for ^[0-9]+$ set to heatmap.
I refactored the query a bit for my own comprehension, pulling the left and right sides of the join into "views". Thought I'd share.
let users_MostRecent_Session =
pageViews
| summarize
TotalSessions=dcount(session_Id)
, MostRecentRequest = max(timestamp)
by
user_AuthenticatedId
;
//
let users_sessions_ByWeek =
pageViews
| where timestamp < now()
| where isnotempty(user_AuthenticatedId)
| summarize
Sessions= dcount(session_Id)
by
Week=bin(datepart("weekOfYear", timestamp), 1)
, user_AuthenticatedId
| order by Week
| serialize
| evaluate pivot(Week, sum(Sessions))
;
//
//
users_sessions_ByWeek
| join kind=innerunique
users_MostRecent_Session
on user_AuthenticatedId
| project-away user_AuthenticatedId1
| top 100 by TotalSessions desc