SQLite C API equivalent to typeof(col)

I want to detect column data types of any SELECT query in SQLite.
In the C API, there is const char *sqlite3_column_decltype(sqlite3_stmt*, int) for this purpose, but it only works for columns of a real table. Expressions, such as LOWER('ABC'), or columns from queries like PRAGMA foreign_key_list("mytable"), always return NULL here.
I know there is also typeof(col), but I don't have control over the SQL that gets executed, so I need a way to extract the data type from the prepared statement.

You're looking for sqlite3_column_type():
The sqlite3_column_type() routine returns the datatype code for the initial data type of the result column. The returned value is one of SQLITE_INTEGER, SQLITE_FLOAT, SQLITE_TEXT, SQLITE_BLOB, or SQLITE_NULL. The return value of sqlite3_column_type() can be used to decide which of the first six interfaces should be used to extract the column value.
And remember that in SQLite, type is for the most part associated with a value, not a column: different rows can have different types stored in the same column.
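A minimal sketch of how this looks per row (assuming db is an already-open sqlite3 handle; error handling trimmed for brevity):

#include <sqlite3.h>
#include <stdio.h>

/* Print the type of every value in every result row of an arbitrary query. */
static void print_result_types(sqlite3 *db, const char *sql)
{
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        int n = sqlite3_column_count(stmt);
        for (int i = 0; i < n; i++) {
            /* The type belongs to this row's value, not to the column. */
            switch (sqlite3_column_type(stmt, i)) {
            case SQLITE_INTEGER: printf("INTEGER "); break;
            case SQLITE_FLOAT:   printf("FLOAT ");   break;
            case SQLITE_TEXT:    printf("TEXT ");    break;
            case SQLITE_BLOB:    printf("BLOB ");    break;
            case SQLITE_NULL:    printf("NULL ");    break;
            }
        }
        printf("\n");
    }
    sqlite3_finalize(stmt);
}

Because the type is taken from the actual values the statement produces, this also works for expressions like LOWER('ABC') and for PRAGMA results, where sqlite3_column_decltype() returns NULL.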

Related

Parsing nested JSON data within a Kusto column

After parsing the JSON data in a column within my Kusto Cluster using parse_json, I'm noticing there is still more data in JSON format nested within the resulting projected value. I need to access that information and make every piece of the JSON data its own column.
I've attempted to follow the answer from this SO post (Parsing json in kusto query) but haven't been successful in getting the syntax correct.
myTable
| project
Time,
myColumnParsedJSON = parse_json(column)
| project myColumnParsedNestedJSON = parse_json(myColumnParsedJSON.nestedJSONDataKey)
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
Please see the note at the bottom of this doc:
It is somewhat common to have a JSON string describing a property bag in which one of the "slots" is another JSON string. In such cases, it is not only necessary to invoke parse_json twice, but also to make sure that in the second call, tostring will be used. Otherwise, the second call to parse_json will simply pass on the input to the output as-is, because its declared type is dynamic.
Once you're able to get parse_json to properly parse your payload, you could use the bag_unpack plugin (doc) to achieve the requirement you mentioned:
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
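Putting the two together, a sketch of the full query might look like this (column and key names are taken from your snippet; the columns bag_unpack produces depend on the keys actually present in the data):

myTable
| project Time, parsedNested = parse_json(tostring(parse_json(column).nestedJSONDataKey))
| evaluate bag_unpack(parsedNested)

bag_unpack then turns each top-level key of parsedNested into its own column, one per key, alongside Time.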

Is there a way to display dynamic columns in Oracle APEX

Long story short, I can't use PIVOT for this task due to the long elements that I need to include in the columns. Instead, I tried to create a Classic Report based on a function in Oracle APEX. The query is generated correctly, but it's not working in the Classic Report.
A general hint first: output your variable l_sql to your console using dbms_output.put_line, or use some kind of debugging table you can insert it into. Also be careful about the data type of that variable: if the SQL can grow, you may reach a point where you need a CLOB variable instead of varchar2.
You would need to supply table structures and test data to have your problem analyzed completely, so for now I will give you some general explanations:
Use Generic Column Names is fine if you have a permanent, unchanging number of columns. But if the order of your columns or even their number can change, then this is a bad idea, as your page will show an error if your query results in more columns than Generic Column Count.
Option 1: Use column aliases in your query
Enhance your PL/SQL Function Body returning SQL Query so that it outputs verbose display names, like this:
return 'select 1 as "Your verbose column name", 2 as "Column #2", 3 as "Column #3" from dual';
The report then renders these aliases as its column headings.
It has the disadvantage that the column names also appear this way in the designer, and APEX will only update these column names if you re-validate the function. You will have a hard time referencing a column with the internal name Your verbose column name in process code or a dynamic action.
However, it still works even if you change the column names without telling APEX, for example by externalizing the PL/SQL Function Body into a real function.
Option 2: Use custom column headings
A little bit hidden, but there is also the option of completely custom column headings. It is almost at the end of the attributes page of your report region.
Here you can also supply a function that returns your column names. Note that this function is not supposed to return an SQL query that itself returns column names; instead, it must return the column names themselves, separated by colons.
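For example, a headings function body matching the query from option 1 might simply be this (the names are illustrative):

return 'Your verbose column name:Column #2:Column #3';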
With this method, it is easier to identify and reference your columns in the designer.
Option 3: Both of them
Turn off Generic Column Names, let your query return column names that can be easily identified and referenced, and use the custom column headings function to return verbose names for your users.
My personal opinion
I'm using the third option in a production application where people can change the number and order of columns themselves, using shuttle items on the report page. It took some time, but now it works like a charm, like a dynamic PIVOT without PIVOT.

How is an MVCCKey formed in CockroachDB?

I want to create an MVCCKey with a timestamp and pretty value I know. But I realize a roachpb.Key is not very straightforward; is there some prefix/suffix involved? Is the database name also encoded in the roachpb.Key?
Can anyone please tell me how an MVCCKey is formed? What information does it contain? The documentation just says that it looks like /table/primary/key/column.
An engine.MVCCKey combines a regular key with a timestamp. MVCCKeys are encoded into byte strings for use as RocksDB keys (RocksDB is configured with a custom comparator so that MVCCKeys sort correctly even though the timestamp uses a variable-width encoding).
Regular keys are byte strings of type roachpb.Key. For ordinary data records, the keys are constructed from table, column, and index IDs, along with the values of indexed columns. (The database ID is not included here; the database to which a table belongs can be found in the system.descriptors table.)
The function keys.PrettyPrint can convert a roachpb.Key to a human-readable form.
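As a rough Go sketch of the structure (the package paths and field names below follow the pkg/storage/engine layout and are assumptions; the CockroachDB tree is reorganized periodically):

package main

import (
    "fmt"

    "github.com/cockroachdb/cockroach/pkg/roachpb"
    "github.com/cockroachdb/cockroach/pkg/storage/engine"
    "github.com/cockroachdb/cockroach/pkg/util/hlc"
)

func main() {
    // An MVCCKey is just a raw roachpb.Key plus an HLC timestamp; all the
    // table/index/column structure lives inside the Key bytes themselves.
    k := engine.MVCCKey{
        Key:       roachpb.Key("some-raw-key-bytes"),
        Timestamp: hlc.Timestamp{WallTime: 1504000000000000000, Logical: 0},
    }
    // keys.PrettyPrint (mentioned above) renders the structured /table/... form.
    fmt.Println(k.Key, k.Timestamp)
}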

SINGLEVALUEQUERY and MULTIVALUEQUERY with Pentaho Report Designer

I have multiple data sets that drive the Pentaho report. The data is derived from a handful of stored procedures. I need to access multiple data sources within the report without using subreports, and I believe the best solution is to create open formulas. SINGLEVALUEQUERY, I believe, will only return the first column or row; I need to return multiple columns.
As an example, my stored procedure, which is named HEADER in Pentaho (CALL Stored_procedure_test(2014, HEADER)), returns three values: HEADER_1, HEADER_2, and HEADER_3. I'm uncertain of the correct syntax to return all three values from the open formula. Below is what I tried, without success.
=MULTIVALUEQUERY("HEADER";?;?)
The second parameter denotes the column that contains the result.
If you don't give a column name here, the reporting engine will simply take the first column of the result. In the case of the MULTIVALUEQUERY function, the various values of the result set are then aggregated into an array of values that is suitable to be passed into a multi-select parameter or to be used in an IN clause in a SQL data-factory.
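For your example, that means naming the column explicitly, along these lines (hypothetical, reusing the column names from your question):

=MULTIVALUEQUERY("HEADER"; "HEADER_1")

Since each formula yields the values of a single column, you would presumably need one formula per column to get HEADER_1, HEADER_2, and HEADER_3.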
For more details see https://www.on-reporting.com/blog/using-queries-in-formulas-in-pentaho/

Teradata: Is it possible to generate an identity column value without creating a record?

In Oracle, I used to use sequences to generate value for a table's unique identifier. In a stored procedure, I'd call sequencename.nextval and assign that value to a variable. After that, I'd use that variable for the procedure's insert statement and the procedure's out param so I could deliver the newly-generated ID to the .NET client.
I'd like to do the same thing with Teradata, but I am thinking the only way to accomplish this is to create a table that holds a value that is sequentially incremented. Ideally, however, I'd really like to be able to acquire the value that will be used for an identity column's next value without actually creating a new record in the database.
No, it is not possible with Teradata, because identity values are cached at either the parsing engine (PE) or AMP level, based on the type of operation being performed. My understanding is that the DBC.IdCol table shows the next value that will be used to seed the next batch of IDENTITY values needed by the PE or AMP.
Another solution would be to avoid using IDENTITY in this manner for your UPI. You could always use the ROW_NUMBER() window aggregate function partitioned by your logical primary key to seed the next range of values for your surrogate key.
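A sketch of that idea (all table and column names here are hypothetical):

INSERT INTO target_table (surrogate_id, natural_key, payload)
SELECT m.max_id + ROW_NUMBER() OVER (ORDER BY src.natural_key),
       src.natural_key,
       src.payload
FROM staging_table src
CROSS JOIN (
    SELECT COALESCE(MAX(surrogate_id), 0) AS max_id
    FROM target_table
) m;

Because the offset m.max_id is known before the insert runs, the client can compute the assigned IDs up front, though this assumes no concurrent writers are competing for the same range.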
