Split dynamic column into multiple rows using mv-expand - azure-application-insights

While using mv-expand on a dynamic-value column, I expect to get a separate row for each value in the column. Ultimately I want to count each distinct value using summarize.
The dynamic column can contain zero or more values, each a number between 1 and 300, for example:
[], [1,3], [1,2,10,30]
I tried following the mv-expand reference, but I couldn't make it work:
Tablename
| mv-expand categories=CustomDimension['category_id']
Instead of giving me a separate row for each value in the column, it creates a new column that holds the same dynamic value as the original column.

You would have to use mvexpand in the query instead of mv-expand. I know this is misleading, but if you go to your query page and type mv-expand it doesn't resolve, while mvexpand works.
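For reference, a minimal sketch of the full query, reusing the Tablename and CustomDimension names from your post:
Tablename
| mvexpand category_id = CustomDimension['category_id']
// each array element is now its own row, so the values can be counted separately
| summarize value_count = count() by tostring(category_id)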
To accomplish your requirement, the answer I provided in this thread should help you. For an even more advanced example, you may refer to this one as well.
Hope this helps!! Cheers!!

Related

Kusto KQL summarize argmax() returns modified column names

I have quite a big table as input where two fields (Id, StartTsUtc) form a unique key. The TimeStamp shows several updates for this unique key.
Wanting to keep only the latest update for each (Id, StartTsUtc), I'm applying the argmax function:
| summarize argmax(TimeStamp, *) by Id, StartTsUtc
The result is correct, but the columns seem to have 'max_TimeStamp_' added to their names.
Why would that happen? And how can it be avoided?
Is there a way to easily remove 'max_TimeStamp_' from each column name? The columns are dynamic, so it's not possible to use project-rename with a fixed list.
You should use "arg_max()" instead of "argmax()". "argmax()" is the old version that had this undesirable behavior.
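A sketch of the corrected query, reusing the column names from the question (the table name is made up):
MyTable
// arg_max() returns the row with the latest TimeStamp per (Id, StartTsUtc)
// and keeps the original column names, with no max_TimeStamp_ prefix
| summarize arg_max(TimeStamp, *) by Id, StartTsUtc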

Why filtering on extent_tags() is slow

Why is the following command slow (5 minutes)?
mytable | where extent_tags() contains "20210613" | count
I know this is not the best way to get a count; I could have used .show table extents and simply calculated sum(RowCount) with the summarize operator. But I am just testing. Ideally ADX should be able to search tags across extents and get counts: it is only a metadata search, and once it finds the correct extent, the row count is already stored as part of the extent metadata anyway, so why should it take 5 minutes? And by the way, the extent(s) I am interested in have the following tags:
drop-by:20210613
ingest-by:20210613
There is a datetime field in the table which I could have used to filter as well, which is what ADX generally recommends. I can guess the reason: the min and max of every datetime field are stored in every extent of the table. But then, similarly, the tags are also stored in every extent. So which method is more efficient: filtering on a datetime field, if available, or on tags?
a. You're correct that using .show table T extents where tags contains 'string' | ... would be much more efficient.
b. As mentioned in the documentation: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/extenttagsfunction
Filtering on the value of extent_tags() performs best when one of the following string operators is used: has, has_cs, !has, !has_cs.
c. Which method is more efficient: filtering on a datetime field, if available, or on tags?
The former, especially when your filter is on a substring rather than on the full content of the tag. Tags are a non-indexed metadata property of shards (extents), not an indexed data column. Also see: https://yonileibowitz.github.io/blog-posts/datetime-columns.html
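For illustration, a sketch of the metadata-only approach from point a, reusing the table name and tag value from the question:
// sums the stored row counts of matching extents without scanning any data
.show table mytable extents where tags contains '20210613'
| summarize TotalRows = sum(RowCount)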

Is there a way to display dynamic columns in Oracle apex

Long story short, I can't use PIVOT for this task due to the long elements that I need to include in the columns. Instead, I tried to create a Classic Report based on a function in Oracle APEX. The query is generated correctly, but it's not working in the Classic Report.
A general hint first: output your variable l_sql to the console using dbms_output.put_line, or use some kind of debugging table you can insert it into. Also be careful about the data type of that variable: if the generated SQL grows, you can reach a point where you need a CLOB variable instead of varchar2.
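For instance, a minimal sketch using the l_sql variable name from above:
-- dump the generated SQL so it can be inspected and tested manually
dbms_output.put_line(l_sql);
return l_sql;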
You would need to supply table structures and test data for your problem to be analyzed completely, so I will start with some general explanations:
Use Generic Column Names is OK if you have a permanent, unchangeable set of columns. But if the order of your columns or even their number can change, then this is a bad idea, as your page will show an error whenever your query returns more columns than Generic Column Count allows.
Option 1: Use column aliases in your query
Enhance your PL/SQL Function Body returning SQL Query in a way that it outputs verbose display names, like this:
return 'select 1 as "Your verbose column name", 2 as "Column #2", 3 as "Column #3" from dual';
It has the disadvantage that the column names also appear this way in the designer, and APEX will only update them when you re-validate the function. You will have a hard time referencing a column with an internal name like Your verbose column name in process code or in a dynamic action.
However, it still works even if you change the column names without telling APEX, for example by externalizing the PL/SQL Function Body into a real function.
Option 2: Use custom column headings
A little bit hidden, but there is also the option of completely custom column headings. It is almost at the end of the attributes page of your report region.
Here you can also supply a function that returns your column names. Be careful: this function is not supposed to return an SQL query that itself returns column names; instead it must return the column names separated by colons.
With this method, it is easier to identify and reference your columns in the designer.
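A sketch of what such a headings function body could look like (the headings are made up):
-- return the display headings, colon-delimited, in column order
return 'Employee #:Employee Name:Hired On';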
Option 3: Both of it
Turn off Generic Column Names, let your query return column names that can be easily identified and referenced, and use the custom column headings function to return verbose names for your users.
My personal opinion
I'm using the third option in a production application where people can change the number and order of columns themselves using shuttle items on the report page. It took some time, but now it works like a charm, like some dynamic PIVOT without PIVOT.

Counting Columns in ColdFusion's QoQ

I have:
<cfspreadsheet action="read" src="#Trim(PathToExcelFile)#" query="Data">
How do I count the total number of columns in my "Data" query using ColdFusion Query of Query? I need to check whether my users have used the correct Excel file format before inserting into my DB.
I'm using Oracle 11g and I can not do:
Select * From Data Where rownum < 2
If I could do that, then I could create an array and count the columns, but running that script results in an error saying there is no column named Rownum. Oracle also does not allow me to use select top 1.
I don't want to loop over 5000+ records just to count the columns of one row. I appreciate any help, thank you.
ColdFusion adds a few additional variables to its query results. One of them is named 'columnList' and contains a comma-separated list of the query columns that were returned; see the documentation on query metadata.
From that you should be able to count the number of columns easily, for example #ListLen(Data.columnList)#.
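A minimal sketch of such a format check (expectedColumnCount is a hypothetical value for your file format):
<cfspreadsheet action="read" src="#Trim(PathToExcelFile)#" query="Data">
<!--- columnList is metadata ColdFusion attaches to every query object --->
<cfset expectedColumnCount = 12>
<cfif ListLen(Data.columnList) NEQ expectedColumnCount>
    <cfthrow message="The uploaded file does not match the expected Excel format.">
</cfif>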

How do I query whether any column in a table contains a certain value without knowing which table I'm querying?

I have a bunch of different tables, each of which has an ID column, and I want to provide a search feature that searches all columns of all tables and returns the ID of any row containing a matching string. Since I want to do this for all columns of all tables, I can't just write WHERE col1 CONTAINS TEXT_STRING OR col2 .... Any ideas?
Well, if you need to do this, you have a problem with the design. But of course there are many times when you have to work with what other people hand you!
I would create a view that unions all the relevant tables; then you can just search the view. But you have to build an index on the searched column in each table, otherwise you will get very bad performance.
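A sketch of that approach (the table and column names are made up):
-- normalize each table into (id, source_table, search_text), then search in one place
CREATE VIEW all_searchable AS
SELECT id, 'customers' AS source_table, name AS search_text FROM customers
UNION ALL
SELECT id, 'orders', notes FROM orders;

-- return the IDs of rows containing the search string
SELECT id, source_table
FROM all_searchable
WHERE search_text LIKE '%' || :search_string || '%';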
