series_fir not generating moving average on Application Insights chart

Given the following Kusto query:
range t from bin(now(), 1h)-23h to bin(now(), 1h) step 1h
| summarize t=make_list(t)
| project id='TS', val=dynamic([0,0,0,0,0,0,0,0,0,10,20,40,100,40,20,10,0,0,0,0,0,0,0,0]), t
| extend 5h_MovingAvg=series_fir(val, dynamic([1,1,1,1,1])),
5h_MovingAvg_centered=series_fir(val, dynamic([1,1,1,1,1]), true, true)
| render timechart
I am unable to get Application Insights to actually draw the moving average lines shown in this document.
I have also tried applying the article to one of our actual applications and have not had any luck there either. There are no errors or anything else that would give a clue as to why the moving averages are not being drawn. I'm assuming there is a setting somewhere that has to be set. Here is my custom query:
let timeGrain=1d;
let ago = ago(7d);
let mAvgParm = repeat(1, 5);
let dataset=requests
// additional filters can be applied here
| where timestamp >= ago and cloud_RoleName == "recalculateordercombination" and resultCode == 500
| where client_Type != "Browser" ;
// calculate failed request count for all requests
dataset
| make-series dailyFailure=sum(itemCount) default=0 on timestamp in range(ago, now(), timeGrain) by resultCode
// render result in a chart
| extend SMA = series_fir(dailyFailure, mAvgParm)
| render timechart
What are these queries missing in order to draw the moving average lines using series_fir?
Reference articles used in my research:
https://marckean.com/2019/03/25/log-analytics-advanced-queries/
https://learn.microsoft.com/en-us/azure/kusto/query/series-firfunction
https://learn.microsoft.com/en-us/azure/kusto/query/make-seriesoperator

The web clients for the two services are different, and so is their rendering logic.
In Azure Data Explorer (Kusto), you can simply use render timechart on time-series data (which is typed as dynamic).
In other cases, such as Application Insights, you may need to first mv-expand the series before rendering it.
Here's an example which matches the first query in your question:
range t from bin(now(), 1h)-23h to bin(now(), 1h) step 1h
| summarize t=make_list(t)
| project id='TS', val=dynamic([0,0,0,0,0,0,0,0,0,10,20,40,100,40,20,10,0,0,0,0,0,0,0,0]), t
| extend 5h_MovingAvg=series_fir(val, dynamic([1,1,1,1,1])),
5h_MovingAvg_centered=series_fir(val, dynamic([1,1,1,1,1]), true, true)
| mv-expand val to typeof(long), t to typeof(datetime), 5h_MovingAvg to typeof(long), 5h_MovingAvg_centered to typeof(long)
| project t, 5h_MovingAvg, 5h_MovingAvg_centered, val
| render timechart
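Applying the same mv-expand fix to your custom query should draw the moving average as well. Here's an untested sketch of that (I've also renamed your ago binding to start, so it doesn't shadow the built-in ago() function):
let timeGrain=1d;
let start = ago(7d);
let mAvgParm = repeat(1, 5);
requests
| where timestamp >= start and cloud_RoleName == "recalculateordercombination" and resultCode == 500
| where client_Type != "Browser"
| make-series dailyFailure=sum(itemCount) default=0 on timestamp in range(start, now(), timeGrain) by resultCode
| extend SMA = series_fir(dailyFailure, mAvgParm)
// expand the arrays back into rows so the Application Insights chart can draw them
| mv-expand timestamp to typeof(datetime), dailyFailure to typeof(long), SMA to typeof(real)
| project timestamp, dailyFailure, SMA, resultCode
| render timechart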

Related

Application Insights - Cross Table Calculation Query

I'm trying to summarize the count of exceptions, the count of requests, and then the ratio of exceptions / requests. I can't figure out how to calculate the two summaries and then project and return the ratio. I'm trying something along the lines of the following, without success:
let exceptionCount = exceptions
| where type == "ShiftDigital.InventoryServices.API.GetVehicleException" and outerMessage !contains("Invalid zip code")
| count as ExceptionCount;
requests
| count as RequestCount
| extend RequestCount / exceptionCount
Can someone please advise on the correct way to structure this query?
You're missing toscalar():
let exceptionCount = toscalar(exceptions
| where type == "ShiftDigital.InventoryServices.API.GetVehicleException" and outerMessage !contains("Invalid zip code")
| count);
let requestsCount = toscalar(requests
| count);
print requestsCount * 1.0 / exceptionCount
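Note that this prints requests per exception; for the ratio of exceptions / requests that you described, simply swap the operands:
print exceptionCount * 1.0 / requestsCount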

Reliable method to check actual size occupied by data in hot cache

I have a table with a hot cache policy of 1 day on it, and assume that the cache utilization of the ADX cluster is less than 80%. Given that, what would be a reliable method to know exactly the amount of cache space (in TB) actually occupied by the table? I came up with the following two methods, but they return significantly different numbers:
.show table <TableName> extents hot | summarize sum(ExtentSize)/pow(1024,4)
.show table <TableName> extents | where MaxCreatedOn >= ago(1d) | summarize extent_size=sum(ExtentSize) | project size_in_TB=((extent_size)/pow(1024,4))
The second command returns a number more than 10 times higher than the first one. How can they be that different?
Both commands you ran should return the same value, assuming:
you ran them at the same time (or quickly one after the other)
the effective caching policy is indeed 1 day (have you verified that's actually the case?)
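If you haven't verified it, note that option 3 below (.show table <TableName> details) includes the effective CachingPolicy in its output; you can also inspect the table-level policy directly (it will be null if the policy is inherited from the database):
.show table <TableName> policy caching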
Regardless - the most efficient way to get that data point is by using the following command:
.show table TABLENAME details
| project HotExtentSizeTb = HotExtentSize/exp2(40), CachingPolicy
Here's an example from a table of mine, which has a caching policy of 4 days (set at table level), and a retention policy with a soft delete period of 3650 days:
// option 1
// --------
.show table yonis_table extents hot
| summarize HotExtentSizeTb = sum(ExtentSize)/exp2(40)
// returns: HotExtentSizeTb: 0.723723856871402 <---
// option 2: least efficient
// -------------------------
.show table yonis_table extents
| where MaxCreatedOn >= ago(4d)
| summarize HotExtentSizeTb = sum(ExtentSize)/exp2(40)
// returns: HotExtentSizeTb: 0.723723856871402 <---
// option 3: most efficient
// ------------------------
.show table yonis_table details
| project HotExtentSizeTb = HotExtentSize/exp2(40), CachingPolicy, RetentionPolicy
// returns:
// HotExtentSizeTb: 0.723723856871402 <---
// CachingPolicy: { "DataHotSpan": "4.00:00:00" }
// RetentionPolicy: { "SoftDeletePeriod": "3650.00:00:00", "Recoverability": "Enabled" }

Aggregate values from customMeasurements column

For my company I need to extract data from Azure Application Insights.
All the relevant data is stored in the customMeasurements column. Currently, the table looks something like this:
name    | itemType    | customMeasurements
--------|-------------|-------------------------------------------
AppName | customEvent | {"Feature1":1, "Feature2":0, "Feature3":0}
AppName | customEvent | {"Feature1":0, "Feature2":1, "Feature3":0}
I'm trying to find a Kusto query which will aggregate all enabled features (those with a value of '1'), but I'm unable to do so.
I tried several things to get this resolved, like the following:
customEvents
| extend test = tostring(customMeasurements["Feature2"])
| summarize count() by test
This actually showed me the number of rows that have Feature2 set to '1', but I want to be able to extract all features that have been enabled without naming them in the query (as they can have custom names).
Could somebody point me in the right direction, please?
Perhaps something like the following could give you a direction:
datatable(name:string, itemType:string, customMeasurements:dynamic)
[
'AppName', 'customEvent', dynamic({"Feature1":1,"Feature2":0,"Feature3":0}),
'AppName', 'customEvent', dynamic({"Feature1":0,"Feature2":1,"Feature3":0}),
]
| mv-apply customMeasurements on
(
    extend feature = tostring(bag_keys(customMeasurements)[0])
    | where customMeasurements[feature] == 1
)
| summarize enabled_features = make_set(feature) by name
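If you also need to know how often each feature was enabled, the same mv-apply pattern can feed a per-feature count. A variant of the query above (same made-up datatable, not tested against real customEvents data):
datatable(name:string, itemType:string, customMeasurements:dynamic)
[
    'AppName', 'customEvent', dynamic({"Feature1":1,"Feature2":0,"Feature3":0}),
    'AppName', 'customEvent', dynamic({"Feature1":0,"Feature2":1,"Feature3":0}),
]
| mv-apply customMeasurements on
(
    // each expanded row is a single-key bag, e.g. {"Feature1":1}
    extend feature = tostring(bag_keys(customMeasurements)[0])
    | where customMeasurements[feature] == 1
)
| summarize enabled_count = count() by feature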

Translation model in App Maker

I would like to make the field descriptions and label texts on my pages multi-lingual. Originally they are in English, and I could let the user translate them through Google Translate. To avoid translation errors, I would like to implement a translation data model that contains:
FieldDisplayName / LabelText
FieldDisplayName_DE
FieldDisplayName_FR
FieldDisplayName_IT
etc.
All the pages contain a page header fragment with a menu button, search box, etc., like in the Starter App template. I am planning on integrating a dropdown widget in the page header that allows choosing between the languages (DE, EN, FR, IT, ...). Is it possible to bind the display names to the user's selection? How would I have to implement that?
The easiest way (to implement, use, and maintain) that also provides the highest possible translation quality is to introduce a Translation data model with the following structure:
+----+--------+------------+------------+----------+-----+
| Id | Locale | FirstName  | LastName   | Age      | ... |
+----+--------+------------+------------+----------+-----+
| 1  | EN     | First name | Last name  | Age      | ... |
+----+--------+------------+------------+----------+-----+
| 2  | RU     | Имя        | Фамилия    | Возраст  | ... |
+----+--------+------------+------------+----------+-----+
| 3  | DE     | Vorname    | Nachname   | Alter    | ... |
+----+--------+------------+------------+----------+-----+
| 4  | ...    | ...        | ...        | ...      | ... |
+----+--------+------------+------------+----------+-----+
In this model, every column represents a unique label within your app, and every row holds that label's translations for one of the supported languages. The model can easily be used in label bindings:
@datasources.UserTranslations.item.FieldNameToTranslate
Maintaining these translations will be easy as well: just drag and drop an editable table onto the UI.
Here is a query script for the UserTranslations datasource:
// Assuming that you already have robust user settings implementation.
var userSettings = getUserSettings_();
var query = app.models.Translation.newQuery();
query.filters.Locale._equals = userSettings.Locale;
return query.run();
A radically different implementation would be:
Introducing a Calculated Model with the same set of fields as in the previous approach
Using the Model Metadata API to extract the display names from the model's fields
Translating the fields using the Translate API
Populating a calculated model record with the translated values
Here is a super-high-level server-side pseudo-script for that flow:
var userLocale = getUserLocaleFromUserSettings();
var fieldsDisplayNames = getFieldsDisplayNames(app.models.Translation);
var translations = translate(fieldsDisplayNames, 'en', userLocale);
var record = app.models.Translation.newRecord();
mapRecordFieldsToTranslations(record, translations);
return [record];
After some trials, a translation model turned out to be too laggy for my needs. Therefore I have hardcoded the binding expression into the labels I want to translate. The binding expression looks a little like this:
(@pages.UserSettings.LanguageDropdown.value == 'EN') ? 'Contact' : 'Kontakt'

Application Insights Extract Nested CustomDimensions

I have some data in Application Insights Analytics that has a dynamic object as a property of custom dimensions. For example:
| timestamp               | name    | customDimensions                                                                       | etc |
|-------------------------|---------|----------------------------------------------------------------------------------------|-----|
| 2017-09-11T19:56:20.000 | Spinner | { MyCustomDimension: "hi", Properties: { context: "ABC", userMessage: "Some other" } }  | ... |
Does that make sense? So a key/value pair inside of customDimensions.
I'm trying to bring up the context property to be a proper column in the results. So expected would be :
| timestamp               | name    | customDimensions                                                                       | context | etc |
|-------------------------|---------|----------------------------------------------------------------------------------------|---------|-----|
| 2017-09-11T19:56:20.000 | Spinner | { MyCustomDimension: "hi", Properties: { context: "ABC", userMessage: "Some other" } }  | ABC     | ... |
I've tried this:
customEvents | where name == "Spinner" | extend Context = customDimensions.Properties["context"]
and this:
customEvents | where name == "Spinner" | extend Context = customDimensions.Properties.context
but neither seems to work. They give me a column at the end named "Context", but the column is empty - no values.
Any ideas?
Edited to the working answer:
customEvents
| where name == "Spinner"
| extend Properties = todynamic(tostring(customDimensions.Properties))
| extend Context = Properties.context
You need an extra tostring and todynamic in here to get what you expect (and what I expected!).
The explanation I was given:
A dynamic field "promises" you the upper/outer level of key/value access (this is how you access customDimensions.Properties).
Accessing the internal structure of that JSON depends on the exact format of the customDimensions.Properties content, which doesn't have to be JSON by itself. Even if it looks like well-structured JSON, it may still be just a string that isn't exactly well-formatted JSON.
So basically, by default it won't attempt to parse strings inside of a dynamic/JSON block, because they don't want to spend a lot of time possibly trying, and failing, to convert nested content to JSON indefinitely.
I still think that extra tostring shouldn't be required in there, since todynamic should already accept both string and dynamic input, so I'm checking to see if the team that owns the query language can make that step better.
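Here's a minimal, self-contained illustration of that explanation, using made-up data; the inner value is a string that merely looks like JSON, just as with nested customDimensions:
print d = dynamic({"Properties": "{\"context\":\"ABC\"}"})
| extend direct = d.Properties.context // empty: Properties holds a string, not an object
| extend parsed = todynamic(tostring(d.Properties)).context // "ABC": the string gets parsed first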
Thanks so much. Just to expand on the answer from John: we needed to graph the duration of endpoints using custom events. This query made it so we could specify the duration as our Y-axis in the chart:
customEvents
| extend Properties = todynamic(tostring(customDimensions.Properties))
| extend duration = todouble(todecimal(Properties.duration))
| project timestamp, name, duration
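To actually draw the chart, you can append a render step; since name is a string column, the chart should split the series per endpoint name (assuming the Application Insights renderer treats string columns the way Kusto does):
| render timechart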
