I want to select the first occurring customEvent with a given name within each session.
The following query gives the first event per session:
customEvents
| summarize min(timestamp) by session_Id
But it only returns timestamp and session_Id.
How do I get the other properties as well?
I'd suggest using arg_min():
customEvents
| where timestamp > ago(24h)
| summarize arg_min(timestamp, *) by session_Id
Using Kusto (Application Insights, Log Analytics, Azure Monitor), how can I compare data from the current day (e.g. Monday) against data from all previous similar days?
In other words, how do I compare the current Monday against previous Mondays only?
requests
| where timestamp > ago(90d)
| where dayofweek(timestamp) == dayofweek(now())
| summarize count() by bin(todatetime(strcat("20210101 ", format_datetime(timestamp,"HH:mm:ss"))),5m),tostring(week_of_year(timestamp))
| render timechart
The trick is to group the timestamps onto the same date while keeping the time portion, AND to group by week_of_year (which has to be cast to string, otherwise it is interpreted as just another numeric value).
I am trying to find what's causing the higher RU usage on Cosmos DB. I enabled Log Analytics on the Cosmos DB account and ran the Kusto query below to get the RU consumption by collection name.
AzureDiagnostics
| where TimeGenerated >= ago(24h)
| where Category == "DataPlaneRequests"
| summarize ConsumedRUsPer15Minute = sum(todouble(requestCharge_s)) by collectionName_s, _ResourceId, bin(TimeGenerated, 15m)
| project TimeGenerated , ConsumedRUsPer15Minute , collectionName_s, _ResourceId
| render timechart
We have only one collection in the Cosmos DB account (prd-entities), which is represented by the red line in the chart. I am not able to figure out what the blue line represents.
Is there a way to get more details about the RU usage with an empty collection name (i.e., the blue line)?
I'm not sure, but I don't think an empty collection actually consumes RUs.
In my own testing, when I execute your Kusto query I also get the 'empty collection', but when I look at the line details, all of those rows correspond to operations that really exist. My point is that we shouldn't sum by collectionName_s, especially when you only have one collection in total; you may want to try requestResourceId_s instead.
When using requestResourceId_s, there are still some rows with no id, but they cost 0 RUs.
AzureDiagnostics
| where TimeGenerated >= ago(24h)
| where Category == "DataPlaneRequests"
| summarize ConsumedRUsPer15Minute = sum(todouble(requestCharge_s)) by requestResourceId_s, bin(TimeGenerated, 15m)
| project TimeGenerated , ConsumedRUsPer15Minute , requestResourceId_s
| render timechart
Actually, you can check which operations the requestCharge_s values are coming from: look at the details in Results rather than in Chart, order by collectionName_s, and you'll see the requests that produce the 'empty collection' rows; then judge whether those requests belong to your collection.
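As a sketch of that drill-down (assuming the standard AzureDiagnostics columns for DataPlaneRequests, such as OperationName; adjust the names to whatever your workspace actually emits), you could isolate the rows with an empty collection name and group them by operation:

```kusto
AzureDiagnostics
| where TimeGenerated >= ago(24h)
| where Category == "DataPlaneRequests"
| where isempty(collectionName_s)
// see which operations produce the 'empty collection' charges
| summarize ConsumedRUs = sum(todouble(requestCharge_s)), Requests = count() by OperationName
| order by ConsumedRUs desc
```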
I have a lot of events in the traces section of Application Insights. I'm interested in two events, "Beginning" and "End"; they each have the same operation Id, as they're logged in sets.
Sometimes the "End" event won't exist, as there will have been a problem with the application we're monitoring.
We can say, for the sake of argument that we have these fields that we're interested in: timestamp, eventName, operationId
How can I calculate the exact time between the two timestamps for the pair of events, for all unique operation Ids in a timespan?
My initial thought was to get the distinct operationIds from traces where the eventName is "Beginning"... but that's as far as I get, as I'm not really sure how to perform the rest of the operations required (namely the calculation, and checking whether the "End" event even exists).
let operations =
traces
| where customDimensions.eventName == "Beginning"
| distinct operationId
Any help would be greatly appreciated!
EDIT: I'm obviously thinking about this all wrong. What I'm after is non-unique operationIds; this will filter out the missing "End" events.
If I could then merge the resulting rows together based on that id, I would have two timestamps I could operate on.
So, I figured it out after some coffee and time to think.
I ended up with:
// a: count the trace rows per operation
let a =
traces
| summarize count() by operation_Id;
// b: keep only the operations that logged both events
let b =
a
| where count_ == 2
| project operation_Id;
// c: self-join traces on operation_Id to pair up the two timestamps
let c =
traces
| where operation_Id in (b)
| join kind = inner(traces) on operation_Id
| order by timestamp, timestamp1
| project evaluatedTime=(timestamp1 - timestamp), operation_Id, timestamp;
// keep only the positive (Beginning -> End) direction and convert to seconds
c
| where evaluatedTime > timespan(0)
| project seconds=evaluatedTime/time(1s), operation_Id, timestamp
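For what it's worth, the same result can be sketched without the self-join by letting summarize pair the earliest and latest timestamps per operation (this assumes, as above, that a matched operation logs exactly the "Beginning"/"End" pair):

```kusto
traces
| summarize events = count(), startTime = min(timestamp), endTime = max(timestamp) by operation_Id
// operations missing their "End" event only have one row and are filtered out here
| where events == 2
| project seconds = (endTime - startTime) / time(1s), operation_Id, timestamp = startTime
```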
I have an Application Insights query, and in this query I want to join/combine several columns into a single column for display. How can this be accomplished?
I want to combine ip, city, state, and country.
customEvents
| where timestamp >= ago(7d)
| where (itemType == 'customEvent')
| where name == "Signin"
| project timestamp, customDimensions.appusername, client_IP,client_City,client_StateOrProvince, client_CountryOrRegion
| order by timestamp desc
strcat() is your friend, with whatever strings you want as separators (I just use spaces in the example):
| project timestamp, customDimensions.appusername,
strcat(client_IP," ",client_City," ",client_StateOrProvince," ", client_CountryOrRegion)
Also, the | where (itemType == 'customEvent') in your query is unnecessary, as everything in the customEvents table is already a customEvent. You only need a filter like that on itemType if you combine multiple tables somehow (like union requests, customEvents, or a join somewhere in your query that references multiple tables).
Based on the datapoint numbers I'm seeing, a client's website is averaging 28 dependencies per request. That seems very high to me, so I'd like to do some analysis by rolling dependency datapoints up onto the page views and requests for the website. Unfortunately, looking at the fields available via Application Insights, there doesn't seem to be a natural field for joining dependencies to pageviews or requests. Any thoughts as to how I would go about doing so?
You can consider using OperationContext: telemetry items that belong to the same operation share an operation_Id.
This may get you running in the right direction:
requests
| where timestamp > ago(1d)
| project timestamp, operation_Id
| join (dependencies
| where timestamp > ago(1d)
| summarize count(duration) by operation_Id, type
) on operation_Id
This is what I use to look at 22 hours of my data for a particular request talking to SQL Server:
// Requests
requests
| where timestamp >= datetime(2017-08-24T08:59:59.999Z) and timestamp < datetime(2017-08-25T06:30:00.001Z)
| where (itemType == 'request' and ((timestamp >= datetime(2017-08-24T09:00:00.000Z) and timestamp <= datetime(2017-08-25T06:30:00.000Z)) and (client_Type == 'PC' and operation_Name == 'POST /CareDelivery/CareDelivery/ServiceUserDetailsForDeviceUserChunked/00000000-0000-0000-0000-000000000000')))
| join (dependencies
| where timestamp >= datetime(2017-08-24T08:59:59.999Z) and timestamp < datetime(2017-08-25T06:30:00.001Z)
| summarize count(duration) by operation_Id, type
) on operation_Id
| summarize count_dependencies=avg(count_duration) by type, bin(timestamp, 20m)
Paste this into the query editor and the formatting will be fixed up so you can read it (I wish I could format it here).
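To come back to the original question (the average dependency count per request), here is a more generic sketch along the same lines, with no hardcoded dates; adjust the time window as needed. The leftouter join keeps requests that triggered zero dependencies, which matters for an honest average:

```kusto
requests
| where timestamp > ago(1d)
| project operation_Id, name
| join kind = leftouter (
    dependencies
    | where timestamp > ago(1d)
    | summarize dependencyCount = count() by operation_Id
) on operation_Id
// requests with no dependencies get a null count; treat that as zero
| summarize avgDependenciesPerRequest = avg(coalesce(dependencyCount, 0)) by name
| order by avgDependenciesPerRequest desc
```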