Write a BNF grammar for XML

I would like to write a BNF grammar for an XML document, but I have run into some problems. I'm trying to write the rules for two elements, Worksheet and Table. Worksheet has optional Table elements, a required attribute Name, and an optional attribute Protected. Table, on the other hand, has optional Column and Row elements, which must appear in that order if both exist, and optional attributes ExpandedColumnCount, ExpandedRowCount, and StyleID.
My rules are these:
Worksheet ::= Name Worksheet
| Name Protected Worksheet
| Worksheet Table Worksheet
| Protected Name Worksheet
|
;
Table ::= ExpandedColumnCount Table
| ExpandedRowCount Table
| StyleId Table
| Column Row Table
| Column Table
| Row Table
|
;
The problem is that the first rule also accepts Table and Table Name, and the second rule accepts Row Column and Column StyleID. Any idea how to solve this?
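One way to remove the unwanted orderings is to stop putting everything into one recursive rule and instead give each part of the element its own nonterminal, fixing the sequence: attributes first, then child elements, with an empty alternative for each optional part. A sketch (assuming a fixed attribute order is acceptable; the nonterminal names are made up for illustration):

```bnf
Worksheet ::= Name OptProtected TableList ;
OptProtected ::= Protected | ;
TableList ::= Table TableList | ;

Table ::= OptExpandedColumnCount OptExpandedRowCount OptStyleID OptColumn OptRow ;
OptExpandedColumnCount ::= ExpandedColumnCount | ;
OptExpandedRowCount ::= ExpandedRowCount | ;
OptStyleID ::= StyleID | ;
OptColumn ::= Column | ;
OptRow ::= Row | ;
```

Because Name is no longer optional and the child elements can only follow the attributes, Table alone, Table Name, and Row Column are no longer derivable. If the attributes may appear in any order, you would instead enumerate the permitted permutations in a separate Attributes nonterminal.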

Summarizing the number of times options are selected true/false in a concatenated string

I'm pretty new to KQL and I'm having a difficult time with it (I don't have a background in stats, and I'm not very good at SQL either). I have telemetry data coming in from Microsoft AppCenter that I want to parse out into some charts, but first I'm trying to figure out how to split a concatenated string that is essentially a dictionary whose entries have two possible values: true and false. I want to count the number of occurrences of each, so every key would have two values (true/false), each with a numerical count.
The input string I'm trying to get this data from is of the format Remove Splash/Main Menu Branding=True;Disable Aim Assist=False - unique items are split by ; and each pair is split by =. I am trying to figure out which options my users are using this way. The example string here would be split into:
Remove Splash/Main Menu Branding = True (count 1)
Disable Aim Assist = False (count 1).
If a new item came in that was Remove Splash/Main Menu Branding=True;Disable Aim Assist=True the summarized data would be
Remove Splash/Main Menu Branding = True (count 2)
Disable Aim Assist = False (count 1).
Disable Aim Assist = True (count 1).
So far I've got a query that selects a single item, but I don't know how to count this across multiple rows:
customEvents
| where timestamp > ago(7d)
| where name == "Installed a mod"
| extend Properties = todynamic(tostring(customDimensions.Properties))
| where isnotnull(Properties.["Alternate Options Selected"])
| extend OptionsStr = Properties.["Alternate Options Selected"] //The example string in above
| extend ModName = Properties.["Mod name"]
| where ModName startswith "SP Controller Support" //want to filter only to one mod's options
| extend optionsSplit = split(OptionsStr, ";")
| summarize any(optionsSplit)
I'm not sure how to make counts of it in a dictionary though. If anyone has any suggestions or tips or examples on something like this, I would really appreciate it, thanks.
Here you go:
let MyTable = datatable(Flags:string) [
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=False",
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=True"
];
MyTable
| extend Flags = split(Flags, ";")
| mv-expand Flag = Flags to typeof(string)
| summarize Count = count() by Flag
The output of this is:
| Flag | Count |
|---------------------------------------|-------|
| Remove Splash/Main Menu Branding=True | 2 |
| Disable Aim Assist=False | 1 |
| Disable Aim Assist=True | 1 |
Explanation:
First, you split every input string (which contains multiple flags) into substrings, so that each contains only a single flag; you achieve this by using split.
Now your new Flags column holds a list of strings (each containing a single flag), and you want to create a record for every string, so you use the mv-expand operator.
Lastly, you want to count how many times every key=value pair appears, which you do with summarize count() by Flag.
In case you want to see one record (in the output) per Key, then you can use the following query instead:
let MyTable = datatable(Flags:string) [
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=False",
"Remove Splash/Main Menu Branding=True;Disable Aim Assist=True"
];
MyTable
| extend Flags = split(Flags, ";")
| mv-expand Flag = Flags to typeof(string)
| parse Flag with Key "=" Value
| project Key, Value
| evaluate pivot(Value, count(Value))
Its output is:
| Key | False | True |
|----------------------------------|-------|------|
| Remove Splash/Main Menu Branding | 0 | 2 |
| Disable Aim Assist | 1 | 1 |
You wrote that you're new to KQL, so you might find the following free Pluralsight courses interesting:
How to start with Microsoft Azure Data Explorer
Basic KQL
Azure Data Explorer – Advanced KQL
P.S. In the future please provide sample input in datatable format (if you're using Kusto Explorer, just select the relevant query results, right-click on the selection, and click Copy as datatable() literal), and also the expected output in a table format, so that it will be easier to understand what you want to achieve.
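The same split/expand/pivot pattern can be plugged straight into the customEvents query from the question; a sketch using the column names from the question (untested against real AppCenter data):

```kusto
customEvents
| where timestamp > ago(7d)
| where name == "Installed a mod"
| extend Properties = todynamic(tostring(customDimensions.Properties))
| extend OptionsStr = tostring(Properties["Alternate Options Selected"])
| where isnotempty(OptionsStr)
| mv-expand Flag = split(OptionsStr, ";") to typeof(string)
| parse Flag with Key "=" Value
| summarize Count = count() by Key, Value
```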

How to create a new table containing JSON records from a Kusto table

We receive multiline JSON (in the format below) and store it in a Kusto table "OldT" using a multiline JSON mapping.
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:06.077963Z"}
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:00.893151Z"}
Records in table "OldT":
sender timestamp severity version body priority facility hostname
Test.login 2020-04-23T07:07:06.077963 0 2a09dfa1 1 Test.login
Test.login 2020-04-23T07:07:00.893151Z 0 2a09dfa1 1 Test.login
Now I need to move the data into another table, say "NewT" with only one column, say "Rawrecord"
Rawrecord:
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:06.077963Z"}
{"severity":"0","hostname":"Test.login","sender":"Test.login","body":"2a09dfa1","facility":"1","version":"1","timestamp":"2020-04-23T07:07:00.893151Z"}
How can I move this data to NewT?
You can use the pack_all() function. For example:
OldT | project Rawrecord = pack_all()
To move it to another table you can use the .set-or-append command for example:
.set-or-append NewT <| OldT | project Rawrecord = pack_all()
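As a quick illustration of what pack_all() produces (using a hypothetical two-column sample rather than your real schema): every column of a row is folded into a single dynamic property bag, which is exactly the raw-record shape you want:

```kusto
datatable(sender:string, severity:string, body:string)
[
    "Test.login", "0", "2a09dfa1"
]
| project Rawrecord = pack_all()
// Rawrecord: {"sender":"Test.login","severity":"0","body":"2a09dfa1"}
```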

Extracting JSON arrays and inserting them as separate rows in a new database

I have a database which uses JSON to store values.
CREATE TABLE JSON(name TEXT, value TEXT);
I am trying to convert this into a native format.
CREATE TABLE NATIVE(name TEXT, KEY1, KEY2, KEY3);
The JSON format looks like this:
[
{"key1":value1, "key2":value2, "key3":value3},
{"key1":value4, "key2":value5, "key3":value6},
....
]
For the above example, I am trying to come up with a query using INSERT INTO NATIVE (name, KEY1, KEY2, KEY3) SELECT <something> FROM JSON to produce this table:
+------+--------+--------+--------+
| name | KEY1   | KEY2   | KEY3   |
+------+--------+--------+--------+
| TEXT | VALUE1 | VALUE2 | VALUE3 |
| TEXT | VALUE4 | VALUE5 | VALUE6 |
...
+------+--------+--------+--------+
I have been using JSON1 for other tables which use simple objects. So for instance when I have values which are objects and not arrays of objects I can use json_extract for each field.
For an array I think I am supposed to use json_each but I am having a hard time figuring out how to apply it to this specific problem.
I came up with this solution:
INSERT INTO NATIVE (name, key1, key2, key3)
SELECT name, json_extract(x.value, '$.key1')
, json_extract(x.value, '$.key2')
, json_extract(x.value, '$.key3')
FROM JSON, json_each(JSON.value) AS x;
The trick is that json_each, when used in the FROM clause together with the table containing the JSON, returns one row per array element, each with fields called key and value. You can then call json_extract in the SELECT to pick individual fields out of each element's value and insert them into the new table.
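To see the whole json_each pipeline end to end, here is a minimal self-contained demo using Python's sqlite3 module (hypothetical sample values; assumes your SQLite build includes the JSON1 functions, as modern builds do):

```python
import sqlite3

# In-memory database with the two tables from the question and one
# hypothetical JSON row holding an array of two objects.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE JSON(name TEXT, value TEXT);
CREATE TABLE NATIVE(name TEXT, KEY1, KEY2, KEY3);
INSERT INTO JSON VALUES
  ('row1', '[{"key1":1,"key2":2,"key3":3},{"key1":4,"key2":5,"key3":6}]');
""")

# json_each expands the array: one result row per element, with the
# element itself available as x.value for json_extract to pick apart.
conn.execute("""
INSERT INTO NATIVE (name, KEY1, KEY2, KEY3)
SELECT name,
       json_extract(x.value, '$.key1'),
       json_extract(x.value, '$.key2'),
       json_extract(x.value, '$.key3')
FROM JSON, json_each(JSON.value) AS x;
""")

rows = conn.execute("SELECT * FROM NATIVE ORDER BY KEY1").fetchall()
print(rows)  # one NATIVE row per array element
```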
We can try using the JSON_EXTRACT function from the json1 extension library, along with an INSERT INTO ... SELECT:
INSERT INTO NATIVE(name, KEY1, KEY2, KEY3)
SELECT
name,
JSON_EXTRACT(value, '$.key1'),
JSON_EXTRACT(value, '$.key2'),
JSON_EXTRACT(value, '$.key3')
FROM JSON;
This assumes that it is the value column in JSON which contains the raw JSON. If not, then replace value in the above query by whatever column contains the JSON content.

Replacing empty string column with null in Kusto

How do I replace an empty (non-null) column of string datatype with a null value?
So say the following query returns a non-empty recordset:-
mytable | where mycol == ""
Now these are the rows with mycol containing empty strings, which I want to replace with nulls. From what I have read in the Kusto documentation, there are datatype-specific null literals such as int(null), datetime(null), guid(null), etc., but there is no string(null). The closest to a string is guid, but when I use it in the following manner, I get an error:-
mytable | where mycol == "" | extend test = translate(mycol,guid(null))
The error:-
translate(): argument #0 must be string literal
So what is the way out then?
Update:-
datatable(n:int,s:string)
[
10,"hello",
10,"",
11,"world",
11,"",
12,""
]
| summarize myset=make_set(s) by n
If you execute this, you can see that empty strings are being treated as part of the sets. I don't want this; no empty strings should end up in my array. But at the same time I don't want to lose the value of n, which is exactly what happens if I use the isnotempty function. In the following example, you can see that the row where n=12 is not returned, yet there is no need to skip n=12, as one could always get an empty array:-
datatable(n:int,s:string)
[
10,"hello",
10,"",
11,"world",
11,"",
12,""
]
| where isnotempty(s)
| summarize myset=make_set(s) by n
There's currently no support for null values for the string datatype: https://learn.microsoft.com/en-us/azure/kusto/query/scalar-data-types/null-values
I'm pretty certain that in itself, that shouldn't block you from reaching your end goal, but that goal isn't currently clear.
[update based on your update:]
datatable(n:int,s:string)
[
10,"hello",
10,"",
11,"world",
11,"",
12,""
]
| summarize make_set(todynamic(s)) by n
This works because todynamic("") evaluates to a null dynamic value, which make_set() skips, so n=12 still appears in the output with an empty array.

Get real row index with google.visualization.Query and google.visualization.Table

I have a google spreadsheet with data.
(I simplified all the examples to make it easier to read):
+-------+--------+-------+
| Name | Email | other |
+-------+--------+-------+
| Name1 | Email1 | info1 |
| Name2 | Email2 | info2 |
+-------+--------+-------+
I'm using google.visualization.Query to load the data from the spreadsheet into an html webpage.
I create a google.visualization.DataTable from the query and then a visualization Table from the DataTable.
Now I want to edit the spreadsheet when the user clicks a row. My old code was easy; it just fired a new event:
var DataTable = response.getDataTable(); // response is the response after sending the google.visualization.Query
var Table = new google.visualization.Table(...);
google.visualization.events.addListener(Table, 'select', selectedRow);
function selectedRow() {
    alert(Table.getSelection()[0].row);
}
The code works when the query includes the whole spreadsheet (select *),
but when you filter some rows, for example with (select * where C contains 'John'),
the table in the HTML page obviously doesn't have the same row indexes as the spreadsheet, so I can't use Table.getSelection()[0].
Is there any way to get the "real" row index to edit it properly?
I've never queried sheets as a data source like this, but I noticed two methods on DataTable that might help:
DataTable.getColumnId(columnIndex);
DataTable.getRowId(rowIndex)
Returns the identifier of a given column (or row) specified by its index in the underlying table. For data tables that are retrieved by queries, the identifier is set by the data source and can be used to refer to columns (or rows) when using the query language.
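To make the idea concrete, here is a small standalone sketch of the mapping logic. The google.visualization.DataTable is replaced by a stub so the code can run on its own; in the real page you would call response.getDataTable().getRowId(...) instead, and whether getRowId returns the original sheet row depends on what identifiers the data source assigns:

```javascript
// Stub standing in for google.visualization.DataTable: getRowId(viewIndex)
// returns the identifier of that row in the underlying data source.
function makeStubDataTable(rowIds) {
  return { getRowId: (viewIndex) => rowIds[viewIndex] };
}

// selection is the array Table.getSelection() returns: [{row: viewIndex}, ...]
// Map the first selected view row back to its source-row identifier.
function selectedSourceRow(dataTable, selection) {
  if (selection.length === 0) return null;
  return dataTable.getRowId(selection[0].row);
}

// Suppose the filtered query kept only source rows 3, 7 and 9:
const dt = makeStubDataTable([3, 7, 9]);
console.log(selectedSourceRow(dt, [{ row: 1 }])); // 7
```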
