Hello, I am new to Kusto queries. Can I get a query for yesterday's date in a Log Analytics workbook?
let PGBARI1_Extraction_st = ADFPipelineRun
| where parse_json(Parameters).Process_Unique_RunID startswith "PGBARI1"
| where End >= Start
| where Start >= startofday(now())
| summarize time1 = min(Start) by TenantId , process_name = 'PGBARI1';
let PGBARI1_Compaction_end = ADFPipelineRun
| where Parameters contains "PGBARI1"
| where _ResourceId endswith "df-pr-dna-eus2"
| where OperationName contains "orchestrator"
| where End >= Start
| where Start >= startofday(now())
| summarize time2 = max(End) by TenantId , process_name = 'PGBARI1';
let PGBARI1 = PGBARI1_Extraction_st | join PGBARI1_Compaction_end on $left.process_name==$right.process_name
| summarize Runtime_in_minutes = sum(toint(datetime_diff('minute', time2, time1))) by process_name;
let PGBARI1_1 = ADFPipelineRun
| where Parameters contains "PGBARI1"
| where End >= Start
| where Start >= startofday(now())
| where Status != 'Succeeded'
| summarize Failed = count();
let PGBARI_Status = PGBARI1_1 | project Process_name = 'PGBARI1', Status = iff(Failed>0,'Failed','Succeeded');
PGBARI1 | union PGBARI_Status
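To directly answer the yesterday's-date part of the question: a minimal sketch of the filter, using KQL's built-in startofday() and ago() (only ADFPipelineRun from the query above is assumed):

```kusto
ADFPipelineRun
| where Start >= startofday(ago(1d))  // midnight at the start of yesterday (UTC)
| where Start <  startofday(now())    // exclude today's runs
```

In a workbook you could also drive the window from a time range parameter, if one is defined, instead of hard-coding it.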
I am applying the series_decompose_anomalies algorithm to time data coming from multiple meters. Currently, I am using the ADX dashboard feature to feed my meter identifier as a parameter into the algorithm and return my anomalies and scores as a table.
let dt = 3h;
Table
| where meter_ID == dashboardParameter
| make-series num=avg(value) on timestamp from _startTime to _endTime step dt
| extend (anomalies,score,baseline) = series_decompose_anomalies( num, 3,-1, 'linefit')
| mv-expand timestamp, num, baseline, anomalies, score
| where anomalies ==1
| project dashboardParameter, todatetime(timestamp), toreal(num), toint(anomalies), toreal(score)
I would like to bulk process all my meters in one go and return a table with all anomalies found across them. Is it possible to feed an array as an iterable in KQL or something similar to allow my parameter to change multiple times in a single run?
Simply add by meter_ID to make-series
(and remove | where meter_ID == dashboardParameter)
| make-series num=avg(value) on timestamp from _startTime to _endTime step dt by meter_ID
P.S.
An anomaly can be positive (num > baseline => flag = 1) or negative (num < baseline => flag = -1).
Demo
let _step = 1h;
let _endTime = toscalar(TransformedServerMetrics | summarize max(Timestamp));
let _startTime = _endTime - 12h;
TransformedServerMetrics
| make-series num = avg(Value) on Timestamp from _startTime to _endTime step _step by SQLMetrics
| extend (flag, score, baseline) = series_decompose_anomalies(num , 3,-1, 'linefit')
| mv-expand Timestamp to typeof(datetime), num to typeof(real), flag to typeof(int), score to typeof(real), baseline to typeof(real)
| where flag != 0
| SQLMetrics            | num                   | Timestamp                    | flag | score               | baseline               |
|:----------------------|----------------------:|:-----------------------------|-----:|--------------------:|-----------------------:|
| write_bytes           | 169559910.91717172    | 2022-06-14T15:00:30.2395884Z |   -1 | -3.4824039875238131 | 170205132.25708669     |
| cpu_time_ms           | 17.369556143036036    | 2022-06-14T17:00:30.2395884Z |    1 | 7.8874529842826     | 11.04372634506527      |
| percent_complete      | 0.04595588235294118   | 2022-06-14T22:00:30.2395884Z |    1 | 25.019464868749985  | 0.004552738927738928   |
| blocking_session_id   | -5                    | 2022-06-14T22:00:30.2395884Z |   -1 | -25.019464868749971 | -0.49533799533799527   |
| pending_disk_io_count | 0.0019675925925925924 | 2022-06-14T23:00:30.2395884Z |    1 | 6.4686836384225685  | 0.00043773741690408352 |
We're running into Kusto has_any limit of 10K.
Sample code
// Query: Get failed operations for migrated apps
let migrationsTimeDiff = 15d;
let operationsDiffTime = 24h + 1m;
let migratedApps = FirstTable
| where TimeStamp >= ago(migrationsTimeDiff)
| where MetricName == "JobSucceeded"
| project
MigrationTime = PreciseTimeStamp,
appName = tostring(parse_json(Annotations).AppName)
| project appName;
SecondTable
| where TimeStamp > ago(operationsDiffTime)
| where Url has_any (migratedApps)
| where Result == "Fail"
Is there a way to restructure the query via joins?
Alternatively, is it possible to loop in batches of 10K?
Thanks for reading!
If Url is an exact match to appName, then you should use:
SecondTable
| where TimeStamp > ago(operationsDiffTime)
| where Url in (migratedApps) // 'in' instead of 'has_any'
| where Result == "Fail"
Otherwise, you'll need to extract the application name from the Url using extend, and then use in like I suggested above, so your query will look like this:
SecondTable
| where TimeStamp > ago(operationsDiffTime)
| extend ExtractedAppNameFromUrl = ...
| where ExtractedAppNameFromUrl in (migratedApps) // 'in' instead of 'has_any'
| where Result == "Fail"
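On the follow-up about restructuring via joins: once the app name has been extracted from the Url, the 10K has_any limit can be sidestepped entirely with an inner join against migratedApps. A sketch; the split()/parse_url() line is an assumption about the Url shape, not something from the original question:

```kusto
SecondTable
| where TimeStamp > ago(operationsDiffTime)
| where Result == "Fail"
// Assumption: the app name is the first label of the URL's host,
// e.g. https://myapp.azurewebsites.net -> "myapp"
| extend appName = tostring(split(parse_url(Url).Host, '.')[0])
| join kind=inner (migratedApps) on appName
```

Unlike has_any, a join has no 10K limit on the right-hand side, so no batching loop is needed.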
I am trying to parse the below data in Kusto. Need help.
[[ObjectCount][LinkCount][DurationInUs]]
[ChangeEnumeration][[88][9][346194]]
[ModifyTargetInLive][[3][6][595903]]
I need a generic implementation without any hardcoding.
Ideally, you'd be able to change the component that produces the source data in that format to use a standard format (e.g. CSV, JSON, etc.) instead.
The following could work, but you should consider it very inefficient:
let T = datatable(s:string)
[
'[[ObjectCount][LinkCount][DurationInUs]]',
'[ChangeEnumeration][[88][9][346194]]',
'[ModifyTargetInLive][[3][6][595903]]',
];
let keys = toscalar(
T
| where s startswith "[["
| take 1
| project extract_all(#'\[([^\[\]]+)\]', s)
);
T
| where s !startswith "[["
| project values = extract_all(#'\[([^\[\]]+)\]', s)
| mv-apply with_itemindex = i keys on (
extend Category = tostring(values[0]), p = pack(tostring(keys[i]), values[i + 1])
| summarize b = make_bag(p) by Category
)
| project-away values
| evaluate bag_unpack(b)
--->
| Category | ObjectCount | LinkCount | DurationInUs |
|--------------------|-------------|-----------|--------------|
| ChangeEnumeration | 88 | 9 | 346194 |
| ModifyTargetInLive | 3 | 6 | 595903 |
I have following query:
traces
| where customDimensions.Category == "Function"
| where isnotempty(customDimensions.prop__recordId) or isnotempty(customDimensions.prop__Entity)
| project operation_Id, Entity = customDimensions.prop__Entity, recordName = customDimensions.prop__recordName, recordId = customDimensions.prop__recordId
I get results like these:
I want to merge rows by operation_id, and get results like these:
Please try the join operator, like below:
traces
| where customDimensions.Category == "Function"
| where isnotempty(customDimensions.prop__recordId)
| project operation_Id, customDimensions.prop__recordId
| join kind = inner(
traces
| where customDimensions.Category == "Function"
| where isnotempty(customDimensions.prop__Entity)
| project operation_Id,customDimensions.prop__Entity,customDimensions.prop__recordName
) on operation_Id
| project-away operation_Id1 //remove the redundant column,note that it's operation_Id1
| project operation_Id, Entity = customDimensions_prop__Entity, recordName = customDimensions_prop__recordName, recordId = customDimensions_prop__recordId
I did not have the same data, but I made some similar data, and it works fine on my side.
Before merge:
After merge (note the use of project-away to remove the redundant column that served as the join key; by default it gets the numeric suffix 1):
The final query is:
traces
| where customDimensions.Category == "Function"
| where isnotempty(customDimensions.prop__recordId)
| project operation_Id, customDimensions.prop__recordId
| join kind = inner(
traces
| where customDimensions.Category == "Function"
| where isnotempty(customDimensions.prop__Entity)
| project operation_Id,customDimensions.prop__Entity
) on operation_Id
| join kind = inner(
traces
| where customDimensions.Category == "Function"
| where isnotempty(customDimensions.prop__recordName)
| project operation_Id,customDimensions.prop__recordName
) on operation_Id
| project operation_Id, Entity = customDimensions_prop__Entity, recordName = customDimensions_prop__recordName, recordId = customDimensions_prop__recordId
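As a side note (not part of the answer above), the same row-merging can usually be done without any joins by aggregating per operation_Id; a sketch, relying on max() over strings to pick the non-empty value in each group:

```kusto
traces
| where customDimensions.Category == "Function"
| extend Entity = tostring(customDimensions.prop__Entity),
         recordName = tostring(customDimensions.prop__recordName),
         recordId = tostring(customDimensions.prop__recordId)
// max() over strings returns the largest value per group, so a non-empty
// string wins over an empty one when each property appears on one row only
| summarize Entity = max(Entity), recordName = max(recordName), recordId = max(recordId) by operation_Id
```

This scans traces once instead of three times and avoids the suffixed duplicate columns entirely.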
I have already lost so much time on this, but I don't get it.
Is it possible to use a string as an argument in a function?
My string is defined as:
mergesetting <- "all = FALSE"
(Sometimes I use "all.y = TRUE" or "all.x = TRUE" instead.)
I tried to pass that string as an argument to the following function:
merged = merge.data.frame(x = DataframeA, y = DataframeB, by = "date_new", mergesetting)
But I get the error message: Error in fix.by(by.x, x)
The function does work if I use the argument directly:
merged = merge.data.frame(x = DataframeA,y = DataframeB,by = "date_new", all = FALSE )
Two other approaches found in "Use character string as function argument" didn't work either:
L <- list(x = DataframeA, y = DataframeB, by = "date_new", mergesetting)
merged <- do.call(merge.data.frame, L)
Any help is much appreciated.
Not sure of the point, but suppose you had a list with your data and arguments.
Say dfA is this data frame
kable(head(dfA))
|dates | datum|
|:----------|-----:|
|2010-05-11 | 1130|
|2010-05-12 | 1558|
|2010-05-13 | 1126|
|2010-05-14 | 131|
|2010-05-15 | 2223|
|2010-05-16 | 4005|
and dfB is this...
kable(head(dfB))
|dates | datum|
|:----------|-----:|
|2010-05-11 | 3256|
|2010-05-12 | 50|
|2010-05-13 | 2280|
|2010-05-14 | 4981|
|2010-05-15 | 2117|
|2010-05-16 | 791|
Your pre set list:
arg.li <- list(dfA = dfA,dfB = dfB,all = T,by = 'dates')
The wrapper for the list function...
f <- function(x) do.call('merge.data.frame', list(x = x$dfA, y = x$dfB, all = x$all, by = x$by))
results in:
kable(summary(f(arg.li)))
| | dates | datum |
|:--|:------------------|:------------|
| |Min. :2010-05-11 |Min. : 24 |
| |1st Qu.:2010-09-03 |1st Qu.:1288 |
| |Median :2010-12-28 |Median :2520 |
| |Mean :2011-01-09 |Mean :2536 |
| |3rd Qu.:2011-04-22 |3rd Qu.:3785 |
| |Max. :2011-12-01 |Max. :5000 |
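A minimal sketch of a more direct fix for the original question: keep the merge setting as a named list rather than a string such as "all = FALSE", then splice it into the call with do.call(). DataframeA and DataframeB are the data frames from the question.

```r
# Store the preset as a named list instead of a character string,
# e.g. list(all = FALSE), list(all.x = TRUE), or list(all.y = TRUE).
mergesetting <- list(all = FALSE)

# Splice the preset into the full argument list and call merge.data.frame.
args <- c(list(x = DataframeA, y = DataframeB, by = "date_new"), mergesetting)
merged <- do.call(merge.data.frame, args)
```

This avoids parsing strings into code: a named list is already in the shape do.call() expects, so swapping presets is just swapping list objects.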