I have a Google spreadsheet with data
(I've simplified all the examples to make them easier to read):
+-------+--------+-------+
| Name | Email | other |
+-------+--------+-------+
| Name1 | Email1 | info1 |
| Name2 | Email2 | info2 |
+-------+--------+-------+
I'm using google.visualization.Query to load the data from the spreadsheet into an HTML page.
I create a google.visualization.DataTable from the query response and then a visualization Table from the DataTable.
Now I want to edit the spreadsheet when the user clicks a row. My old code was simple; it just listened for the select event:
var DataTable = response.getDataTable(); // response is the response to the google.visualization.Query
var Table = new google.visualization.Table(...);
google.visualization.events.addListener(Table, 'select', selectedRow);
function selectedRow() {
  alert(Table.getSelection()[0].row); // getSelection() returns objects with a row property
}
The code works when the query includes the whole spreadsheet (select *),
but when you filter some rows, for example (select * where C contains 'John'),
the table in the HTML page obviously no longer has the same row indexes as the spreadsheet, so I can't use the row index from Table.getSelection() directly.
Is there any way to get the "real" row index so I can edit the right row?
I've never queried Sheets as a data source like this, but I noticed two methods on DataTable that might help:
DataTable.getColumnId(columnIndex);
DataTable.getRowId(rowIndex);
From the documentation for getColumnId: "Returns the identifier of a given column specified by the column index in the underlying table. For data tables that are retrieved by queries, the column identifier is set by the data source, and can be used to refer to columns when using the query language." getRowId is documented analogously for rows, so it should let you map a selected row in the filtered table back to the underlying data.
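To make the index mismatch concrete, here is a small conceptual sketch in Python (it is not the Charts API; the data and names are illustrative). A filtered view keeps its own 0-based indexes, so the selection's index is not the source row; carrying the source index through the filter, which is what getRowId exposes, recovers it:

```python
# Source spreadsheet rows (Name, Email, other)
source_rows = [
    ("Name1", "Email1", "info1"),
    ("Name2", "Email2", "John info"),
]

# Filter like `select * where C contains 'John'`, remembering source indexes
filtered = [(i, row) for i, row in enumerate(source_rows) if "John" in row[2]]

selected_view_index = 0   # what getSelection() would report inside the view
source_index = filtered[selected_view_index][0]
print(source_index)       # 1: the row's index in the source, not 0
```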
I'm pretty new to KQL and I'm having a difficult time with it (I don't have a background in stats, and I'm not very good at SQL either). I have telemetry data coming in from Microsoft AppCenter that I want to parse out into some charts. First, though, I'm trying to figure out how to split a concatenated string that is essentially a dictionary where each key has two possible values, true and false. I want to count the occurrences of each, so every key would have two values (true/false), each with a numerical count.
The input string I'm trying to get this data from is of the format Remove Splash/Main Menu Branding=True;Disable Aim Assist=False - unique items are split by ; and each pair is split by =. I am trying to figure out which options my users are using this way. The example string here would be split into:
Remove Splash/Main Menu Branding = True (count 1)
Disable Aim Assist = False (count 1).
If a new item came in that was Remove Splash/Main Menu Branding=True;Disable Aim Assist=True the summarized data would be
Remove Splash/Main Menu Branding = True (count 2)
Disable Aim Assist = False (count 1).
Disable Aim Assist = True (count 1).
So far I've got a query that selects a single item, but I don't know how to count this across multiple rows:
customEvents
| where timestamp > ago(7d)
| where name == "Installed a mod"
| extend Properties = todynamic(tostring(customDimensions.Properties))
| where isnotnull(Properties.["Alternate Options Selected"])
| extend OptionsStr = Properties.["Alternate Options Selected"] //The example string in above
| extend ModName = Properties.["Mod name"]
| where ModName startswith "SP Controller Support" //want to filter only to one mod's options
| extend optionsSplit = split(OptionsStr, ";")
| summarize any(optionsSplit)
I'm not sure how to make counts of it in a dictionary though. If anyone has any suggestions or tips or examples on something like this, I would really appreciate it, thanks.
Here you go:
let MyTable = datatable(Flags:string) [
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=False",
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=True"
];
MyTable
| extend Flags = split(Flags, ";")
| mv-expand Flag = Flags to typeof(string)
| summarize Count = count() by Flag
The output of this is:
| Flag | Count |
|---------------------------------------|-------|
| Remove Splash/Main Menu Branding=True | 2 |
| Disable Aim Assist=False | 1 |
| Disable Aim Assist=True | 1 |
Explanation:
First you split every input string (that contains multiple flags) into substrings, so that each will only have a single flag - you achieve this by using split.
Now your new Flags column has a list of strings (each one containing a single flag), and you want to create a record with every string, so you use the mv-expand operator
Lastly, you want to count how many times every key=value pair appears, and you do it with summarize count() by Flag
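If it helps to see the same pipeline outside of Kusto, here is a rough Python equivalent of split, mv-expand, and summarize count() by (illustrative only; this is not how Kusto executes the query):

```python
from collections import Counter

# The two sample rows from the question
rows = [
    "Remove Splash/Main Menu Branding=True;Disable Aim Assist=False",
    "Remove Splash/Main Menu Branding=True;Disable Aim Assist=True",
]

# split(Flags, ";") followed by mv-expand: one record per single flag
flags = [flag for row in rows for flag in row.split(";")]

# summarize Count = count() by Flag
counts = Counter(flags)
print(counts["Remove Splash/Main Menu Branding=True"])  # 2
print(counts["Disable Aim Assist=False"])                # 1
```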
In case you want to see one record (in the output) per Key, then you can use the following query instead:
let MyTable = datatable(Flags:string) [
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=False",
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=True"
];
MyTable
| extend Flags = split(Flags, ";")
| mv-expand Flag = Flags to typeof(string)
| parse Flag with Key "=" Value
| project Key, Value
| evaluate pivot(Value, count(Value))
Its output is:
| Key | False | True |
|----------------------------------|-------|------|
| Remove Splash/Main Menu Branding | 0 | 2 |
| Disable Aim Assist | 1 | 1 |
You wrote that you're new to KQL, so you might find the following free Pluralsight courses interesting:
How to start with Microsoft Azure Data Explorer
Basic KQL
Azure Data Explorer – Advanced KQL
P.S. In the future, please provide sample input in datatable format (if you're using Kusto Explorer, just select the relevant query results, right-click the selection, and click Copy as datatable() literal), and also the expected output in table format, so that it's easier to understand what you want to achieve.
We receive a multiline json (in format below), and we store them into a Kusto table "OldT" after using multiline json mapping.
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:06.077963Z"}
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:00.893151Z"}
Records in table "OldT":

| sender     | timestamp                   | severity | version | body     | priority | facility | hostname   |
|------------|-----------------------------|----------|---------|----------|----------|----------|------------|
| Test.login | 2020-04-23T07:07:06.077963Z | 0        | 1       | 2a09dfa1 |          | 1        | Test.login |
| Test.login | 2020-04-23T07:07:00.893151Z | 0        | 1       | 2a09dfa1 |          | 1        | Test.login |
Now I need to move the data into another table, say "NewT" with only one column, say "Rawrecord"
Rawrecord:
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:06.077963Z"}
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:00.893151Z"}
How can I move this data to NewT?
You can use the pack_all() function. For example:
OldT | project Rawrecord = pack_all()
To move it to another table you can use the .set-or-append command for example:
.set-or-append NewT <| OldT | project Rawrecord = pack_all()
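Conceptually, pack_all() turns each row's columns into one property bag, roughly like this Python sketch (illustrative only, not Kusto internals):

```python
import json

# One OldT row as column -> value (priority omitted, as in the sample JSON)
row = {
    "severity": "0",
    "hostname": "Test.login",
    "sender": "Test.login",
    "body": "2a09dfa1",
    "facility": "1",
    "version": "1",
    "timestamp": "2020-04-23T07:07:06.077963Z",
}

# pack_all() packs every column of the row into one dynamic property bag;
# projecting only that bag yields the single Rawrecord column of NewT.
rawrecord = json.dumps(row)
print(rawrecord)
```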
I have a database which uses JSON to store values.
CREATE TABLE JSON(name TEXT, value TEXT);
I am trying to convert this into a native format.
CREATE TABLE NATIVE(name TEXT, KEY1, KEY2, KEY3);
The JSON format looks like this:
[
{"key1":value1, "key2":value2, "key3":value3},
{"key1":value4, "key2":value5, "key3":value6},
....
]
For the above example, I am trying to come up with a query using INSERT INTO NATIVE (name, KEY1, KEY2, KEY3) SELECT <something> FROM JSON to produce this table:
+------+--------+--------+--------+
| name | KEY1   | KEY2   | KEY3   |
+------+--------+--------+--------+
| name | VALUE1 | VALUE2 | VALUE3 |
| name | VALUE4 | VALUE5 | VALUE6 |
...
+------+--------+--------+--------+
I have been using JSON1 for other tables which use simple objects. So for instance when I have values which are objects and not arrays of objects I can use json_extract for each field.
For an array I think I am supposed to use json_each but I am having a hard time figuring out how to apply it to this specific problem.
I came up with this solution:
INSERT INTO NATIVE (name, key1, key2, key3)
SELECT name, json_extract(x.value, '$.key1')
, json_extract(x.value, '$.key2')
, json_extract(x.value, '$.key3')
FROM JSON, json_each(JSON.value) AS x;
The trick is that json_each, used in the FROM clause alongside the table containing the JSON, returns one row per array element, each with fields called key and value. You can then call json_extract in the SELECT to pick individual fields out of each element's value, and insert those into the new table.
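Here is a self-contained demo of that query using Python's sqlite3 module (it assumes an SQLite build with the JSON1 functions, which most modern builds include; the table and key names follow the question, and the sample values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE JSON(name TEXT, value TEXT);
    CREATE TABLE NATIVE(name TEXT, KEY1, KEY2, KEY3);
""")
conn.execute(
    "INSERT INTO JSON VALUES (?, ?)",
    ("row", '[{"key1": 1, "key2": 2, "key3": 3},'
            ' {"key1": 4, "key2": 5, "key3": 6}]'),
)

# json_each() emits one row per array element; json_extract() then
# pulls the individual keys out of each element's value.
conn.execute("""
    INSERT INTO NATIVE (name, key1, key2, key3)
    SELECT name, json_extract(x.value, '$.key1')
         , json_extract(x.value, '$.key2')
         , json_extract(x.value, '$.key3')
    FROM JSON, json_each(JSON.value) AS x
""")

print(conn.execute("SELECT * FROM NATIVE").fetchall())
# [('row', 1, 2, 3), ('row', 4, 5, 6)]
```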
We can try using the JSON_EXTRACT function from the json1 extension library, along with an INSERT INTO ... SELECT:
INSERT INTO NATIVE(name, KEY1, KEY2, KEY3)
SELECT
name,
JSON_EXTRACT(value, '$.key1'),
JSON_EXTRACT(value, '$.key2'),
JSON_EXTRACT(value, '$.key3')
FROM JSON;
This assumes that it is the value column in JSON which contains the raw JSON. If not, then replace value in the above query by whatever column contains the JSON content.
I have data in Azure Application Insights saved as custom events. These custom events have data like Name, Email, Title.
There can be multiple rows with the same email.
Now I want the data grouped by email so that I can get Name, Email, Title for each unique email.
I tried something like:
customEvents
| summarize by tostring(customDimensions["email"])
But it returns only the email. How can I get the other columns?
Even
| project customDimensions["email"], customDimensions["name"], customDimensions["title"]
isn't working.
I have three columns in Azure Application Insights. Customdata is a string column with a JSON string of data stored in it.
| ID | TimeStamp  | Customdata                                                   |
|----|------------|--------------------------------------------------------------|
| 1  | 21-12-2018 | {"email":"xyz#xyz.com", "name":"james", "title":"Dev"}       |
| 1  | 21-12-2018 | {"email":"abc#abc.com", "name":"Will", "title":"Tester"}     |
| 1  | 21-12-2018 | {"email":"xyz#xyz.com", "name":"james", "title":"Dev"}       |
| 1  | 21-12-2018 | {"email":"xyz#xyz.com", "name":"Happy", "title":"Developer"} |
| 1  | 21-12-2018 | {"email":"xyz#xyz.com", "name":"JOhn", "title":"Developer"}  |
Now I need a query that can return
Email Name Title CountOfRecords
xyz#xyz.com James Dev 2
abc#abc.com Will Tester 1
help me here to write the query.
Try the query below (please correct me if I've misunderstood you); adjust the where clauses as per your needs:
customEvents
| where timestamp > ago(1d)
| where name == "w1"
| summarize CountOfRecords = count() by Email = tostring(customDimensions["email"]), Name=tostring(customDimensions["name"]),Title=tostring(customDimensions["title"])
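For intuition, that summarize ... by is just a grouped count, roughly like this Python sketch over a few of the question's sample rows (illustrative only):

```python
from collections import Counter

# Sample rows from the question as (email, name, title) tuples
rows = [
    ("xyz#xyz.com", "james", "Dev"),
    ("abc#abc.com", "Will", "Tester"),
    ("xyz#xyz.com", "james", "Dev"),
]

# summarize CountOfRecords = count() by Email, Name, Title
counts = Counter(rows)
for (email, name, title), n in counts.items():
    print(email, name, title, n)
```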
I would like to write a BNF grammar for an XML document, but I'm having some problems. I'm trying to write the rules for two elements, Worksheet and Table. Worksheet has optional Table child elements, a required attribute Name, and an optional attribute Protected. Table, on the other hand, has optional Column and Row child elements, which must appear in that order if both exist, and optional attributes ExpandedColumnCount, ExpandedRowCount, and StyleID.
My rules are these:
Worksheet ::= Name Worksheet
| Name Protected Worksheet
| Worksheet Table Worksheet
| Protected Name Worksheet
|
;
Table ::= ExpandedColumnCount Table
| ExpandedRowCount Table
| StyleId Table
| Column Row Table
| Column Table
| Row Table
|
;
The problem is that the first rule also accepts strings like Table and Table Name, and the second rule accepts Row Column and Column StyleID. Any ideas on how to solve this?
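One way to attack this (a sketch using my own nonterminal names, so adapt them to your grammar) is to split each element's rule into an attribute part followed by a children part. Attributes then can no longer appear after child elements, and Column is forced before Row:

```
Worksheet   ::= WsAttrs WsChildren ;
WsAttrs     ::= Name | Name Protected | Protected Name ;
WsChildren  ::= Table WsChildren | ;

Table       ::= TblAttrs TblChildren ;
TblAttrs    ::= ExpandedColumnCount TblAttrs
              | ExpandedRowCount TblAttrs
              | StyleID TblAttrs
              | ;
TblChildren ::= Column Row | Column | Row | ;
```

Note that, like the original, TblAttrs still lets an attribute repeat; if each attribute may appear at most once, enumerate the allowed permutations the way WsAttrs does.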