After parsing the JSON data in a column of my Kusto cluster using parse_json, I'm noticing there is still JSON-formatted data nested within the resulting projected value. I need to access that information and make every piece of the JSON data its own column.
I've attempted to follow the answer from this SO post (Parsing json in kusto query) but haven't been successful in getting the syntax correct.
myTable
| project
    Time,
    myColumnParsedJSON = parse_json(column)
| project myColumnParsedNestedJSON = parse_json(myColumnParsedJSON.nestedJSONDataKey)
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
Please see the note at the bottom of this doc:
It is somewhat common to have a JSON string describing a property bag in which one of the "slots" is another JSON string. In such cases, it is not only necessary to invoke parse_json twice, but also to make sure that in the second call, tostring will be used. Otherwise, the second call to parse_json will simply pass on the input to the output as-is, because its declared type is dynamic.
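Applied to the query in the question, that would look something like this:

myTable
| project
    Time,
    myColumnParsedJSON = parse_json(column)
| project myColumnParsedNestedJSON = parse_json(tostring(myColumnParsedJSON.nestedJSONDataKey))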
Once you're able to get parse_json to properly parse your payload, you could use the bag_unpack plugin (doc) in order to achieve the requirement you mentioned:
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
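For example (a sketch reusing the column names from the question; bag_unpack expands each top-level key of the dynamic value into its own column):

myTable
| project
    Time,
    myColumnParsedJSON = parse_json(column)
| project Time, nested = parse_json(tostring(myColumnParsedJSON.nestedJSONDataKey))
| evaluate bag_unpack(nested)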
I want to detect column data types of any SELECT query in SQLite.
In the C API, there is const char *sqlite3_column_decltype(sqlite3_stmt*,int) for this purpose. But that only works for columns in a real table. Expressions, such as LOWER('ABC'), or columns from queries like PRAGMA foreign_key_list("mytable"), always return null here.
I know there is also typeof(col), but I don't have control over the SQL being executed, so I need a way to extract the data type from the prepared statement.
You're looking for sqlite3_column_type():
The sqlite3_column_type() routine returns the datatype code for the initial data type of the result column. The returned value is one of SQLITE_INTEGER, SQLITE_FLOAT, SQLITE_TEXT, SQLITE_BLOB, or SQLITE_NULL. The return value of sqlite3_column_type() can be used to decide which of the first six interfaces should be used to extract the column value.
And remember that in SQLite, type is for the most part associated with the value, not the column: different rows can have different types stored in the same column.
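A minimal sketch of how that might be used (note that sqlite3_column_type() reflects the value in the current row, so it is only meaningful after sqlite3_step() has returned SQLITE_ROW):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;

    sqlite3_open(":memory:", &db);
    sqlite3_prepare_v2(db, "SELECT 1, 2.5, LOWER('ABC'), NULL", -1, &stmt, NULL);

    /* Fetch the first row, then inspect the runtime type of each column. */
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        for (int i = 0; i < sqlite3_column_count(stmt); i++) {
            switch (sqlite3_column_type(stmt, i)) {
            case SQLITE_INTEGER: printf("col %d: INTEGER\n", i); break;
            case SQLITE_FLOAT:   printf("col %d: FLOAT\n", i);   break;
            case SQLITE_TEXT:    printf("col %d: TEXT\n", i);    break;
            case SQLITE_BLOB:    printf("col %d: BLOB\n", i);    break;
            case SQLITE_NULL:    printf("col %d: NULL\n", i);    break;
            }
        }
    }

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}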
Long story short, I can't use PIVOT for this task because of the long values I need to include in the columns, so I tried to create a Classic Report based on a function in Oracle APEX. The query is generated correctly, but it doesn't work in the Classic Report.
A general hint first: output your variable l_sql to your console using dbms_output.put_line, or use some kind of debugging table you can insert it into. Also be careful about the data type of that variable: as the generated SQL grows, you can reach a point where you need to use a CLOB variable instead of varchar2.
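A minimal sketch of such a PL/SQL Function Body returning SQL Query with that kind of debugging output (table and column names are illustrative):

declare
  l_sql clob; -- prefer CLOB over varchar2 once the generated query can grow large
begin
  l_sql := 'select ename, job, sal from emp';
  dbms_output.put_line(l_sql); -- inspect the generated statement in the console
  return l_sql;
end;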
You would need to supply table structures and test data if you'd like to have your problem analyzed completely, so I will first give some general explanations:
Use Generic Column Names is fine if you have a fixed, unchanging number of columns. But if the order of your columns or even their number can change, this is a bad idea: your page will show an error if your query returns more columns than Generic Column Count.
Option 1: Use column aliases in your query
Enhance your PL/SQL Function Body returning SQL Query so that it outputs verbose display names, like this:
return 'select 1 as "Your verbose column name", 2 as "Column #2", 3 as "Column #3" from dual';
It has the disadvantage that the column names also appear this way in the designer, and APEX will only update these column names if you re-validate the function. You will have a hard time referencing a column with an internal name like Your verbose column name in process code or dynamic actions.
However, it still works even if you change the column names without telling APEX, for example by externalizing the PL/SQL Function Body into a real function.
Option 2: Use custom column headings
A little bit hidden, but there is also the option of completely custom column headings. It is almost at the end of the attributes page of your report region.
Here you can also supply a function that returns your column names. Be careful: this function is not supposed to return an SQL query that itself returns column names; instead, it must return the column names separated by colons.
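For example, a heading function matching the query from option 1 might look like this (a sketch):

begin
  return 'Your verbose column name:Column #2:Column #3';
end;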
With this method, it is easier to identify and reference your columns in the designer.
Option 3: Both of them
Turn off Generic Column Names, let your query return column names that can be easily identified and referenced, and use the custom column headings function to return verbose names for your users.
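Sketched with illustrative names, option 3 could look like this:

-- Region source: PL/SQL Function Body returning SQL Query
begin
  return 'select ename as C1, job as C2, sal as C3 from emp';
end;

-- Heading: function returning colon-delimited headings
begin
  return 'Employee:Job:Salary';
end;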
My personal opinion
I'm using the 3rd option in a production application where people can change the number and order of columns themselves, using shuttle items on the report page. It took some time, but now it works like a charm, like some dynamic PIVOT without PIVOT.
I want to create an MVCCKey with a timestamp and pretty value I know. But I realize a roachpb.Key is not very straightforward; is there some prefix/suffix involved? Is the database name also encoded in the roachpb.Key?
Can anyone please tell me how an MVCCKey is formed? What information does it contain? The documentation just says that it looks like /table/primary/key/column.
An engine.MVCCKey combines a regular key with a timestamp. MVCCKeys are encoded into byte strings for use as RocksDB keys (RocksDB is configured with a custom comparator so that MVCCKeys sort correctly even though the timestamp uses a variable-width encoding).
Regular keys are byte strings of type roachpb.Key. For ordinary data records, the keys are constructed from table, column, and index IDs, along with the values of indexed columns. (The database ID is not included here; the database to which a table belongs can be found in the system.descriptors table.)
The function keys.PrettyPrint can convert a roachpb.Key to a human-readable form.
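Putting that together, a rough sketch in Go of constructing an MVCCKey and pretty-printing its key part. This is based on CockroachDB's internal packages at the time of this answer; import paths and signatures have changed across versions, so treat it as illustrative rather than definitive:

package main

import (
    "fmt"

    "github.com/cockroachdb/cockroach/keys"
    "github.com/cockroachdb/cockroach/roachpb"
    "github.com/cockroachdb/cockroach/storage/engine"
    "github.com/cockroachdb/cockroach/util/hlc"
)

func main() {
    // An MVCCKey is simply a roachpb.Key plus an hlc.Timestamp; the raw key
    // bytes are normally produced by the keys package from table/index IDs
    // and the values of the indexed columns.
    k := engine.MVCCKey{
        Key:       roachpb.Key("\xbb\x89\x8a"), // illustrative raw bytes, not a meaningfully encoded key
        Timestamp: hlc.Timestamp{WallTime: 1},
    }
    // PrettyPrint renders the key part in the human-readable /Table/... form.
    fmt.Println(keys.PrettyPrint(k.Key))
}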
Is there some way to create a view that returns a pzPVStream that can be natively parsed by Pega when it executes an RDB?
For instance, maybe a query (in MS SQL Server) that resembled:
SELECT test_tbl_outer.ID, (
    SELECT *, 'My-Int-TestClass' AS "pxObjClass"
    FROM {class:My-Int-TestClass} AS test_tbl_inner
    WHERE test_tbl_inner.ID = test_tbl_outer.ID
    FOR XML RAW('pagedata'), TYPE, ELEMENTS
) AS pzPVStream
FROM {class:My-Int-TestClass} AS test_tbl_outer
This gets an invalid-signature error (the SQL query does work when run directly, however), and if I try to shove a signature string onto the column ('PR6d' or previous) I just get a different error regarding headers.
So at this point, I do realize that the pzPVStream is not stored as XML but as some sort of packed and compressed string. Is there a way for me to create a valid pzPVStream on the fly? Maybe something similar to what pr_read_from_stream does, but in reverse?
The use case is that we'd like to pull a whole mess of data from an existing data warehouse, and it would be nice if we could pull all the multi-value data (many, many joins deep) over in one trip. We are not too concerned with the size of this object, as we plan on pulling this data one way or another.
The pzPVStream is a compressed blob that represents a work object. It is compressed and stored as a single column in a table.
When it is read using obj-browse or obj-open activities, the blob is decompressed and all the encompassed properties are mapped to the clipboard.
This value has a proprietary format; the values are obfuscated.
I'm trying to run

SELECT DISTINCT description FROM products;

which outputs the error "The field is too small to accept the amount of data you attempted to add." This is odd because I'm not inserting data, I'm pulling data with a query.
However, running the following doesn't produce the error:
SELECT description FROM products;
So I'm confused as to what the issue would be.
I'm using OleDbDataReader and taking data out of an mdb database file.
This might be related to: http://support.microsoft.com/kb/896950/us
This problem occurs because when you set the UniqueValues query property to Yes, a DISTINCT keyword is added to the resulting SQL statement. The DISTINCT keyword directs Access to perform a comparison between records. When Access performs a comparison between two Memo fields, Access treats the fields as Text fields that have a 255-character limit. Sometimes Memo field data that is larger than 255 characters will generate the error message that is mentioned in the "Symptoms" section. Sometimes only 255 characters are returned from the Memo field.
Workaround:

To work around this problem, modify the original query by removing the Memo field. Then, create a second query that is based on both the table and the original query. This new query uses all the fields from the original query, and this new query uses the Memo field from the table. When you run the second query, the first query runs. Then, this data is used to run the second query. This behavior returns the Memo field data based on the returned data of the first query.
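Translated into SQL, the KB workaround might look something like this (assuming the real query involves more columns than just the Memo field; table, column, and query names are illustrative):

-- Query 1 (saved as qryProductsDistinct): DISTINCT over everything except the Memo field
SELECT DISTINCT id
FROM products;

-- Query 2: join the first query back to the table to pull in the Memo field
SELECT qryProductsDistinct.id, products.description
FROM qryProductsDistinct
INNER JOIN products ON products.id = qryProductsDistinct.id;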