AppInsights > Logs > Render Bar Chart to start from 0

In my Angular application I am tracking the filters that users apply on one of the pages. What I can later see in Logs is the following (query for the last 24 hours):
What I am interested in is the count of filters grouped by name. So I created the following query:
However, the problem, as you can see, is that my y-axis starts at 1 instead of 0. To users this looks like the last two filters don't have any values, when in reality they both have a count of 1.
I have tried to use ymin=0 together with the render operator, but it did not work (the chart still starts at 1). Then I read that I need to use the make-series() function, so I tried:
customEvents
| where timestamp >= ago(24h)
| where customDimensions.pageName == 'product'
| make-series Count=count(name) default=0 on timestamp from datetime(2019-10-10) to datetime(2019-10-11) step 1d by name
| project name, Count
However, the result is some weird matrix instead of a regular table:
I have just started with Application Insights, so any help on this matter would be much appreciated. Thank you.

In Workbooks in Application Insights you could run almost this query (see below for a simplification), then use the chart settings and set the axis min/max explicitly.
But why are you using make-series and then summarizing down to just one series? make-series produces an array of values per group (one element per time step), which is why the result looked like a matrix instead of a regular table.
In this specific case summarize is simpler:
customEvents
| where timestamp between(datetime(2019-10-10) .. datetime(2019-10-11))
| where customDimensions.pageName == 'product'
| summarize Count=count(name) by name
| render barchart
In the Logs blade (where you are), you could run this query, and I believe you can use
render barchart title="blah" ymin=0
(at some point Workbooks will be able to "see" all the render options like ymin/ymax/xmin/xmax/title/etc, but right now they're all stripped out at the service layer)

A bit late to the party, but the correct syntax to pass in ymin and ymax when using a query is this:
| ...
| render barchart with (ymin=0, ymax=100)
See https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/renderoperator?pivots=azuremonitor
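Putting the two answers together, a complete query for the original question might look like this sketch (the pageName filter and 24-hour window are taken from the question; the title is just illustrative):
customEvents
| where timestamp >= ago(24h)
| where customDimensions.pageName == 'product'
| summarize Count = count() by name // one row per filter name
| render barchart with (ymin=0, title="Filter usage")
With the with (ymin=0) render property, bars with a count of 1 are drawn from the zero baseline, so they no longer look empty.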

Related

How to summarize by Severity Level in Azure Application Insights Logs for each operation name

I have multiple Azure Functions in a single Azure Function App resource, where each function's logs are stored with the function name in the operation_Name column of the Application Insights logs. For all function names, I am logging messages with Warnings (severityLevel=2) and Errors (severityLevel=3).
Expected: I am trying to show all functions' warnings and errors in a single pie chart, and later pin it to a dashboard. The pie chart should give us visibility into how many errors and warnings each function in a single Function App resource has.
Actual: The pie chart combines all severity levels for each function name (operation_Name) in a single Function App resource.
traces
| where severityLevel >1
| where cloud_RoleName == 'dev-test-functionapp' //Azure Function App Resource Name
| where operation_Name in ('Function1Name','Function2Name','Function3Name')
| summarize by operation_Name,severityLevel
| render piechart
If I understand correctly, this could work:
traces
| where severityLevel > 1
| extend severityLevel = case(severityLevel == 2, "Warning", severityLevel == 3, "Error", tostring(severityLevel))
| where cloud_RoleName == 'dev-test-functionapp'
| where operation_Name in ('Function1Name','Function2Name','Function3Name')
| summarize count() by s = strcat(severityLevel, "_", operation_Name)
| render piechart
(Not an answer)
Replying to the OP regarding my comment about the choice of visualization.
A pie chart is an overused visualization.
It is great for storytelling in scenarios where you want to emphasize the dominance of one or two elements, or the lack of it.
It is quite bad for anything else.
It makes it very difficult to observe the details when there are more than just a few elements, and it is also very difficult to see the ratios between those elements.
Here is another option: an unstacked column chart (see the sketch below).
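A minimal sketch of that unstacked-column variant, reusing the names from the query above (kind=unstacked is a documented render property; it places the Warning and Error columns side by side per function instead of slicing a pie):
traces
| where severityLevel > 1
| extend severityLevel = case(severityLevel == 2, "Warning", severityLevel == 3, "Error", tostring(severityLevel))
| where cloud_RoleName == 'dev-test-functionapp'
| where operation_Name in ('Function1Name','Function2Name','Function3Name')
| summarize count() by operation_Name, severityLevel // keep the two dimensions separate
| render columnchart with (kind=unstacked)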

Exclude certain days/times from query results? (Ex. Thursdays midnight-2am EST)

I am newer to KQL and am trying to write a query against configuration changes made to files with a ".config" extension, and I would like to filter out results based on the "TimeGenerated [UTC]" column. The results should exclude Thursdays from midnight to 2am EST. Since TimeGenerated is in UTC, the query should offset it to return EST.
Would someone be able to assist me in writing this? I am not sure how to write it so that it returns results excluding that specific time frame. Below is what I have so far:
ConfigurationChange
| where dayofweek(datetime_add('hour', -5, TimeGenerated)) != 4d and hourofday(datetime_add('hour', -5, TimeGenerated)) !in(0, 1) // <---
| where ConfigChangeType in ("Files")
| where FileSystemPath endswith ".config"
| sort by TimeGenerated
| render table
Replace 4d with 4, because dayofweek() returns the day number (between 0 and 6).
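Note also that joining the two conditions with and drops every Thursday entirely, plus the midnight-2am window on every other day. Assuming the intent is to drop only the Thursday 00:00-02:00 EST window, a sketch like the following keeps everything else (EstTime is just an illustrative name; the fixed -5h offset ignores daylight saving):
ConfigurationChange
| extend EstTime = datetime_add('hour', -5, TimeGenerated) // convert UTC to EST with a fixed offset
| where not(toint(dayofweek(EstTime) / 1d) == 4 and hourofday(EstTime) in (0, 1)) // 4 = Thursday (0 = Sunday)
| where ConfigChangeType in ("Files")
| where FileSystemPath endswith ".config"
| sort by TimeGenerated
| render table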

Application Insights chart is not respecting query order in a shared dashboard

The query is actually pretty simple:
traces
| extend SdIds = customDimensions.SdIds
| where isnull(customDimensions.AmountOfBlobStorageLoadedRows) == false
or isnull(customDimensions.AmountOfRowsAfterTransformation) == false
or isnull(customDimensions.AmountOfRowsIngestedToDW) == false
| summarize
BlobReadSum=sum(toint(customDimensions.AmountOfBlobStorageLoadedRows)),
TransformationSum=sum(toint(customDimensions.AmountOfRowsAfterTransformation)),
SavedToDWSum=sum(toint(customDimensions.AmountOfRowsIngestedToDW))
by tostring(SdIds)
| order by BlobReadSum desc, TransformationSum desc, SavedToDWSum desc
| limit 10
The following picture shows the Application Insights Logs tool. As expected, the biggest values appear first in the chart:
However, the picture below shows the output of the same query, using the same time range, published to a shared dashboard:
What happened to the order?
Is there any setting that may interfere on this?
You could add | sort by tostring(SdIds) at the end of your query, after the | limit (putting it before the limit would change which 10 rows survive):
| order by BlobReadSum desc, TransformationSum desc, SavedToDWSum desc
| limit 10
| sort by tostring(SdIds)
In Azure Log Analytics dashboard parts there's an automatic sort for the x-axis when its type is string.
You might notice that the chart sort in the dashboard is just the opposite. In that case click "Open chart in Analytics" in the top right corner of your part, and flip the asc/desc direction of the | sort by tostring(SdIds) command.
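For reference, a sketch of the full query with the workaround applied: the same query as in the question, with the extra sort appended after the limit so the top-10 selection is unchanged:
traces
| extend SdIds = tostring(customDimensions.SdIds)
| where isnotnull(customDimensions.AmountOfBlobStorageLoadedRows)
    or isnotnull(customDimensions.AmountOfRowsAfterTransformation)
    or isnotnull(customDimensions.AmountOfRowsIngestedToDW)
| summarize
    BlobReadSum=sum(toint(customDimensions.AmountOfBlobStorageLoadedRows)),
    TransformationSum=sum(toint(customDimensions.AmountOfRowsAfterTransformation)),
    SavedToDWSum=sum(toint(customDimensions.AmountOfRowsIngestedToDW))
    by SdIds
| order by BlobReadSum desc, TransformationSum desc, SavedToDWSum desc
| limit 10
| sort by SdIds asc // defeat the dashboard's automatic string sort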

Google Sheets FILTER() and QUERY() not working with SUM()

I'm trying to pull and sum data from one sheet into another. This is GA data being built into a report, so I have sessions split up by landing page and device type, and I would like to group them in different ways.
I usually use FILTER() for this sort of thing, but it keeps returning a 0 sum. Thinking this might be an odd edge case with FILTER(), I switched to using QUERY() instead. That gave me an error, but a Google search doesn't offer much documentation about what the error actually means. Guessing that it could indicate an issue with the data type (i.e. not numeric), I changed the format of the source from "Automatic" to "Number", but to no avail.
Maybe it's a lack of coffee, but I'm at a loss as to why neither function works for a simple lookup and sum by criteria.
FILTER() function
SUM(FILTER(AllData!C:C,AllData!A:A="/chestnut/",AllData!B:B="desktop"))
No error, but returns 0 regardless of filter parameters.
QUERY() function
QUERY(AllData!A:G, "SELECT SUM(C) WHERE A='/chestnut/' AND B='desktop'",1)
Error returned:
Unable to parse query string for Function QUERY parameter 2: AVG_SUM_ONLY_NUMERIC
Sample data:
landingPage | deviceCategory | sessions
------------|----------------|---------
/chestnut/  | desktop        | 4
/chestnut/  | desktop        | 2
/chestnut/  | tablet         | 5
/chestnut/  | tablet         | 1
/maple/     | desktop        | 1
/maple/     | desktop        | 2
/maple/     | mobile         | 3
/maple/     | mobile         | 1
I think the summing doesn't work because your numbers are text-formatted.
See if any of these work (change the ranges to suit):
using FILTER()
=SUM(FILTER(VALUE(AllData!C:C),AllData!A:A="/chestnut/",AllData!B:B="desktop"))
using QUERY()
=ArrayFormula(QUERY({AllData!A:B, VALUE(AllData!C:C)}, "SELECT SUM(Col3) WHERE Col1='/chestnut/' AND Col2='desktop' label SUM(Col3)''",1))
using SUMPRODUCT()
=SUMPRODUCT(VALUE(AllData!C2:C),AllData!A2:A="/chestnut/",AllData!B2:B="desktop")

How to setup futures instruments in FinancialInstrument to lookup data from CSIdata

Background
I am trying to set up my trade analysis environment. I am running some rule-based strategies on futures with different brokers, and I am trying to aggregate trades from the different brokers in one place. I am using the blotter package as my main tool for analysis.
The idea is to use blotter and PerformanceAnalytics to analyze the live performance of the various strategies I am running.
Problem at hand
My source of futures EOD data is CSIData. All the EOD OHLC prices for these futures are stored in CSV format in the following directory structure: for each future there is a separate directory, and each contract of the future has one CSV file with its OHLC price series.
|
+---AD
| AD_201203.TXT
| AD_201206.TXT
| AD_201209.TXT
| AD_201212.TXT
| AD_201303.TXT
| AD_201306.TXT
| AD_201309.TXT
| AD_201312.TXT
| AD_201403.TXT
| AD_201406.TXT
| AD_54.TXT
...
+---BO2
| BO2195012.TXT
| BO2201201.TXT
| BO2201203.TXT
| BO2201205.TXT
| BO2201207.TXT
| BO2201208.TXT
| BO2201209.TXT
| BO2201210.TXT
| BO2201212.TXT
| BO2201301.TXT
...
I have managed to define root contracts for all the futures I will be using (e.g. AD, BO2 etc. in the above case) in FinancialInstrument, with CSIData symbols as primary identifiers.
I am now struggling with how to define all the actual individual future contracts (e.g. AD_201203, AD_201206 etc.) and set up their lookup using setSymbolLookup.FI.
Any pointers on how to do that?
To set up the individual future contracts, I looked into ?future_series and ?build_series_symbols; however, the suffixes they support seem to be only in the future month-code format. So I have a feeling I am left with setting up each individual future contract manually, e.g.:
build_series_symbols(data.frame(primary_id=c('ES','NQ'), month_cycle=c('H,M,U,Z'), yearlist = c(10,11)))
[1] "ESH0" "ESM0" "ESU0" "ESZ0" "NQH0" "NQM0" "NQU0" "NQZ0" "ESH1" "ESM1" "ESU1" "ESZ1" "NQH1" "NQM1" "NQU1" "NQZ1"
I have no clue where to start digging for the second part of my question, i.e. setting up the price lookup for these futures from CSI.
The first part just works.
R> currency("USD")
[1] "USD"
R> future("AD", "USD", 100000)
[1] "AD"
Warning message:
In future("AD", "USD", 1e+05) :
underlying_id should only be NULL for cash-settled futures
R> future_series("AD_201206", expires="2012-06-18")
[1] "AD_201206"
R> getInstrument("AD_201206")
primary_id :"AD_201206"
currency :"USD"
multiplier :1e+05
tick_size : NULL
identifiers: list()
type :"future_series" "future"
root_id :"AD"
suffix_id :"201206"
expires :"2012-06-18"
Regarding the second part, I've never used setSymbolLookup.FI. I'd either use setSymbolLookup directly, or set a src instrument attribute if I were going to go that route.
However, I'd probably make a getSymbols method, maybe getSymbols.mycsv, that knows how to find your data if you give it a dir argument. Then, I'd just setDefaults on your getSymbols method (assuming that's how most of your data are stored).
I save data with saveSymbols.days(), and use getSymbols.FI daily. I think it wouldn't be much effort to tweak getSymbols.FI to read csv files instead of RData files. So, I suggest looking at that code.
Then, you can just
setDefaults("getSymbols", src="mycsv")
setDefaults("getSymbols.mycsv", dir="path/to/dir")
Or, if you prefer
setSymbolLookup(AD_201206=list(src="mycsv", dir="/path/to/dir"))
or (essentially the same thing)
instrument_attr("AD_201206", "src", list(src="mycsv", dir="/path/to/dir"))
