I am working on a scenario where a new table is created after a specific time interval. I have one Power BI report that runs on a Kusto query, and this query should run against the latest table created. Since the table name contains the creation date, I can find the latest table name with the query below. However, how do I capture the output of this query in a single variable, so that my subsequent query can pick up the table name to generate the report?
Query :
.show tables
| where TableName contains "XXX"
| project TableName, year = split(TableName, "_")[3], month = split(TableName, "_")[4], day = split(TableName, "_")[5]
| project TableName, CreatedDate = todatetime(strcat(year, "-", month, "-", day))
| order by CreatedDate desc | project TableName | take 1
Output (single row, single column):
TableName => XXX_12_11_2020
Any help is appreciated. Thanks
Take a look at the table() scope function. You can use it inside a function and refer to that function from Power BI. See how to use admin commands in a query here.
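As a minimal sketch of that pattern (the function name is hypothetical, and the "XXX_" prefix plus the M_d_yyyy date format are assumptions taken from the question's example output):

```kusto
// Hypothetical stored function that scopes queries to "today's" table.
// table() must be able to resolve the name at query-plan time, so
// validate this against your cluster before relying on it.
.create-or-alter function GetLatestXXX() {
    table(strcat("XXX_", format_datetime(now(), "M_d_yyyy")))
}
```

Power BI can then query GetLatestXXX() as if it were a table. If your cluster rejects a computed name inside table(), a wildcard union (union XXX_*) filtered down to the relevant rows is another option.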
Related
Is there a function or command that provides the row creation timestamp through metadata in ADX?
TIA
ingestion_time()
Returns the approximate time at which the current record was ingested.
This function must be used in the context of a table of ingested data for which the IngestionTime policy was enabled when the data was ingested. Otherwise, this function produces null values.
In case anyone wants the commands exactly:
.alter table TableName policy ingestiontime true
Then to view the times:
TableName
| extend IngestionTime = ingestion_time()
| sort by IngestionTime
Replace TableName with your table name, hope this helped!
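To confirm the policy took effect, the matching .show command can be used (same TableName placeholder as above):

```kusto
.show table TableName policy ingestiontime
```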
Edit:
Below are more details about what I am trying to achieve.
I have 1 cluster with two databases. Database1 contains Table1 with the below info
DB2 contains a table with the below info:
UniqueID contains the same data as userID in Table1.
In DB2, I have a function that performs some filtering on UniqueID.
I know that I can join both tables as in the screenshot below, but my production example is way more complex than that and requires me to use the function.
My goal is to run the Table1 query and pass the userID to the function getUserProperties, to get a merged output.
Something like this, which does not work :)
Finally found it :) It was very simple :)
let UID = 7;
database('db1').Table1
| join kind=leftouter database('db2').Table2 on $left.userID==$right.UniqueID
| where userID == toscalar(getUserProperties(UID))
I am trying to insert the current datetime into a table that has a datetime column, using the following query:
.ingest inline into table NoARR_Rollout_Status_Dummie <| #'datetime(2021-06-11)',Sam,Chay,Yes
Table was created using the following query:
.create table NoARR_Rollout_Status_Dummie ( Timestamp:datetime, Datacenter:string, Name:string, SurName:string, IsEmployee:string)
But when I try to see the data in the table, the Timestamp column is not filled. Is there anything I am missing?
The .ingest inline command parses the input (after the <|) as a CSV payload; therefore you cannot include expressions in it (the datetime() call is ingested as a literal string that fails to parse into the datetime column).
An alternative to what you're trying to do would be using the .set-or-append command, e.g.:
.set-or-append NoARR_Rollout_Status_Dummie <|
print Timestamp = datetime(2021-06-11),
Name = 'Sam',
SurName = 'Chay',
IsEmployee = 'Yes'
NOTE, however, that ingesting a single record or a few records in a single command is not recommended for production scenarios, as it creates very small data shards and could negatively impact performance.
For queued ingestion, larger bulks are recommended: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/api/netfx/kusto-ingest-best-practices#optimizing-for-throughput
otherwise, see if your use case meets the recommendations of streaming ingestion: https://learn.microsoft.com/en-us/azure/data-explorer/ingest-data-streaming
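Since the original goal was stamping the current datetime, now() can be used directly in the print statement (column names are taken from the .create table command in the question; the empty Datacenter value is a placeholder assumption):

```kusto
.set-or-append NoARR_Rollout_Status_Dummie <|
print Timestamp = now(),        // current datetime, evaluated at ingestion time
      Datacenter = '',          // placeholder -- value not given in the question
      Name = 'Sam',
      SurName = 'Chay',
      IsEmployee = 'Yes'
```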
I have a Kusto table with id:int and name:string columns, with data in it. I am trying to change the column type for id from int to long. I tried the command below, but it throws the error shown. I also tried .alter instead of .alter-merge, with no luck. What is the procedure for changing a Kusto table column type on an existing table, with data, without disturbing the current data?
.alter-merge table mytable
(Id: long, Name: string)
Error:
'Alter table does not support column data type change for existing columns (Id). Current type=I32, requested type=I64'.
Here's the process you should follow to achieve what you want:
Create a new table named OldTable with the updated schema
Create a function named Table (it must have the exact name of your original table) that returns union (OldTable | project id = tolong(id), name), (Table | project id = tolong(id), name). This way, whenever someone queries Table, they'll invoke the function, which returns the data from both tables in the correct schema.
Swap Table and OldTable
When the data in the OldTable table ages out (at the end of the retention period), it will become empty, and you can then delete first the Table function, then the OldTable table.
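As a rough sketch, the steps above could look like the following commands (names follow the question and answer; since name resolution between a function and a table sharing a name is subtle, verify this end to end on a test database first):

```kusto
// 1. New, empty table with the widened column type:
.create table OldTable (Id: long, Name: string)

// 2. Function with the exact name of the original table; it casts both
//    tables to the new schema, so it is valid before and after the swap:
.create-or-alter function mytable() {
    union (OldTable | project Id = tolong(Id), Name),
          (mytable | project Id = tolong(Id), Name)
}

// 3. Swap the names so new ingestion lands in the long-typed table:
.rename tables mytable = OldTable, OldTable = mytable
```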
I have written the following query, which extracts the names of tables that have used storage in Sentinel. I'm not sure whether Kusto is capable of this, but essentially I want to use the values stored in "Out" as table names, e.g. union (Out) or search in (Out) *.
let MyTables =
Usage
| where Quantity > 0
| summarize by DataType ;
let Out =
MyTables
| summarize make_list(DataType);
No, this is not possible: the tables referenced by a query must be known during query planning. A possible workaround is generating the query text and invoking it using the execute_query plugin.
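To illustrate the workaround, the text of the union query can be generated with Kusto itself and then submitted as a second, client-side query (strcat_array joins the list into the query string):

```kusto
// Build "union Table1, Table2, ..." from the list of table names;
// the resulting string would then be executed as a follow-up query.
let TableList = toscalar(
    Usage
    | where Quantity > 0
    | summarize make_list(DataType));
print QueryText = strcat("union ", strcat_array(TableList, ", "))
```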