Does tSQLt support comparing two JSON strings?

I am using tSQLt to test stored procedures which output JSON string results. The problem I have is that the key/value pairs in the JSON are not always in the same order. For example,
JSON1: {"A":1, "B":2} and JSON2: {"B":2, "A":1}
tSQLt.AssertEqualsString fails when comparing JSON1 with JSON2. I expect the test to pass if only the order is different.
Any help will be appreciated.
Thanks in advance!

There's currently no direct way to compare JSON in tSQLt.
However, one way you could deal with that is by splitting the JSON into a key/value pair table; you can, for example, use OPENJSON for that.
Once the values are in rows, you can use tSQLt.AssertEqualsTable to compare the expected result to the actual result.
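A minimal sketch of that approach (the test class name MyTests and the inline JSON literals are placeholders; in a real test, @actual would come from the procedure under test; OPENJSON requires SQL Server 2016+ with compatibility level 130 or higher):

```sql
CREATE PROCEDURE MyTests.[test JSON output ignores key order]
AS
BEGIN
    -- Placeholder values for illustration.
    DECLARE @expected NVARCHAR(MAX) = N'{"A":1, "B":2}';
    DECLARE @actual   NVARCHAR(MAX) = N'{"B":2, "A":1}';

    -- Shred each JSON object into key/value rows.
    SELECT [key], [value] INTO #Expected FROM OPENJSON(@expected);
    SELECT [key], [value] INTO #Actual   FROM OPENJSON(@actual);

    -- AssertEqualsTable compares the two row sets without regard to row order,
    -- so the original key order no longer matters.
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
```

This handles a flat object; nested JSON would need recursive shredding, since OPENJSON without a schema only expands one level.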

How to insert an element into the middle of an array (json) in SQLite?

I found a method json_insert in the JSON section of the SQLite documentation, but it doesn't seem to work the way I expected.
e.g. select json_insert('[3,2,1]', '$[3]', 4) as result;
The result column returns '[3,2,1,4]', which is correct.
But for select json_insert('[3,2,1]', '$[1]', 4) as result;
I am expecting something like '[3,2,4,1]' to be returned, instead of '[3,2,1]'.
Am I missing something? I don't see an alternative method to json_insert.
P.S. I am playing it on https://sqlime.org/#demo.db, the SQLite version is 3.37.2.
The documentation states that json_insert() will not overwrite values ("Overwrite if already exists? - No"). That means you can't insert elements in the middle of the array.
My interpretation: The function is primarily meant to insert keys into an object, where this kind of behavior makes more sense - not changing the length of an array is a sacrifice for consistency.
You could shoehorn it into SQLite by turning the JSON array into a table, appending your element, sorting the result, and turning it all back into a JSON array:
select json_group_array(x.value) from (
  select key, value from json_each('[3,2,1]')
  union
  select 1.5, 4 -- key 1.5 sorts after key 1, before key 2
  order by 1
) x;
This will produce '[3,2,4,1]'.
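As a sanity check, the same query can be run from Python's bundled sqlite3 module (this assumes the JSON1 functions are available, which they are in most modern SQLite builds):

```python
import sqlite3

# Run the array-insertion query against an in-memory database.
conn = sqlite3.connect(":memory:")
row = conn.execute("""
    select json_group_array(x.value) from (
        select key, value from json_each('[3,2,1]')
        union
        select 1.5, 4   -- key 1.5 sorts after key 1, before key 2
        order by 1
    ) x
""").fetchone()
print(row[0])  # -> [3,2,4,1]
```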
But you can probably see that this won't scale, and even if there were a built-in function that did this for you, it wouldn't scale either. String manipulation is slow. It might work well enough for one-offs, or when done infrequently.
In the long run, I would recommend properly normalizing your database structure instead of storing "non-blob" data in JSON blobs. Manipulating normalized data is much easier than manipulating JSON, not to mention faster by probably orders of magnitude.

SQLite C API equivalent to typeof(col)

I want to detect column data types of any SELECT query in SQLite.
In the C API, there is const char *sqlite3_column_decltype(sqlite3_stmt*,int) for this purpose. But that only works for columns in a real table. Expressions, such as LOWER('ABC'), or columns from queries like PRAGMA foreign_key_list("mytable"), always return null here.
I know there is also typeof(col), but I don't have control over the fired SQL, so I need a way to extract the data type out of the prepared statement.
You're looking for sqlite3_column_type():
The sqlite3_column_type() routine returns the datatype code for the initial data type of the result column. The returned value is one of SQLITE_INTEGER, SQLITE_FLOAT, SQLITE_TEXT, SQLITE_BLOB, or SQLITE_NULL. The return value of sqlite3_column_type() can be used to decide which of the first six interfaces should be used to extract the column value.
And remember that in SQLite, type is for the most part associated with the value, not the column: different rows can have different types stored in the same column.
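That per-value typing is easy to demonstrate from any driver; here is an illustrative sketch (table and column names are made up) using Python's bundled sqlite3 module and the SQL-level typeof() function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(col)")  # no declared type at all
conn.executemany("INSERT INTO t VALUES (?)", [(1,), ("abc",), (None,)])

# The same column holds a different storage class in every row.
types = [row[0] for row in conn.execute("SELECT typeof(col) FROM t")]
print(types)  # -> ['integer', 'text', 'null']
```

In the C API, sqlite3_column_type() reports the same storage class for the value in the current row of a stepped statement, which is why it works for expressions and PRAGMA results where sqlite3_column_decltype() returns NULL.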

Parsing nested JSON data within a Kusto column

After parsing the JSON data in a column within my Kusto Cluster using parse_json, I'm noticing there is still more data in JSON format nested within the resulting projected value. I need to access that information and make every piece of the JSON data its own column.
I've attempted to follow the answer from this SO post (Parsing json in kusto query) but haven't been successful in getting the syntax correct.
myTable
| project
Time,
myColumnParsedJSON = parse_json(column)
| project myColumnParsedNestedJSON = parse_json(myColumnParsedJSON.nestedJSONDataKey)
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
Please see the note at the bottom of this doc:
It is somewhat common to have a JSON string describing a property bag in which one of the "slots" is another JSON string. In such cases, it is not only necessary to invoke parse_json twice, but also to make sure that in the second call, tostring will be used. Otherwise, the second call to parse_json will simply pass-on the input to the output as-is, because its declared type is dynamic
Once you're able to get parse_json to properly parse your payload, you can use the bag_unpack plugin (doc) to achieve the requirement you mentioned:
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
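Putting both pieces together, a sketch using the table and column names from the question (note the tostring around the nested property, per the doc's note quoted above):

```kusto
myTable
| project Time, parsed = parse_json(column)
| extend nested = parse_json(tostring(parsed.nestedJSONDataKey))
| evaluate bag_unpack(nested)
```

bag_unpack turns each top-level key of the dynamic value into its own column, one row per record, which is the shape of output the question asks for.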

Oracle 11g: Comparing two *LOB columns of different types

I've got read-only access to a database containing two schema with tables like this:
schema1.A.unique_id, schema1.A.content
schema2.B.unique_id, schema2.B.content
A.unique_id and B.unique_id will match while A.content and B.content are *LOB columns that should match (wasn't my idea lol). What I'd like to do is compare the contents of the content fields and see how many are equal. However, one is a CLOB and one is a BLOB.
DBMS_LOB.COMPARE() is an obvious helper, however it only compares two *LOBs of the same type (e.g. CLOB vs. CLOB).
In lieu of writing a script to get the content of the fields and compare them in memory, how can I perform this comparison in straight-up PL/SQL? Is there some way I can convert one of the fields on-the-fly so that the types match (again keep in mind I only have read-only access)?
Thanks!
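One possible approach, sketched under the assumption that the BLOB holds character data in the database character set: convert each BLOB to a session-local temporary CLOB with DBMS_LOB.CONVERTTOCLOB (read-only table access is enough, since the temporary LOB lives in your own session), then use DBMS_LOB.COMPARE on the two CLOBs:

```sql
DECLARE
  v_converted  CLOB;
  v_dest_off   INTEGER;
  v_src_off    INTEGER;
  v_lang_ctx   INTEGER;
  v_warning    INTEGER;
  v_matches    PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT a.content AS clob_content, b.content AS blob_content
              FROM schema1.A a
              JOIN schema2.B b ON b.unique_id = a.unique_id) LOOP
    -- Convert the BLOB to a temporary CLOB in this session.
    DBMS_LOB.CREATETEMPORARY(v_converted, TRUE);
    v_dest_off := 1;
    v_src_off  := 1;
    v_lang_ctx := DBMS_LOB.DEFAULT_LANG_CTX;
    DBMS_LOB.CONVERTTOCLOB(v_converted, r.blob_content,
                           DBMS_LOB.LOBMAXSIZE, v_dest_off, v_src_off,
                           DBMS_LOB.DEFAULT_CSID, v_lang_ctx, v_warning);
    -- COMPARE returns 0 when the two CLOBs are identical.
    IF DBMS_LOB.COMPARE(r.clob_content, v_converted) = 0 THEN
      v_matches := v_matches + 1;
    END IF;
    DBMS_LOB.FREETEMPORARY(v_converted);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Matching rows: ' || v_matches);
END;
```

If the BLOBs are in a different character set than the database default, pass the appropriate character-set ID instead of DBMS_LOB.DEFAULT_CSID, or the comparison will report spurious mismatches.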

How to return multiple values from unconnected lookup?

In my mapping, I am using flat files as source and target, and I have to use an unconnected lookup. Can somebody tell me how to return multiple values from an unconnected lookup, especially when the source and target are flat files?
I know how to return multiple values when we use relational tables. In that case, we just concatenate the values and return them as a single value, then split them apart again.
Please help me.
If the unconnected lookup is on a relational table: in the lookup override we can concatenate two or more ports and return that concatenated port to an Expression transformation, then extract the individual values in the Expression transformation.
For a flat file, I think you can replace the first delimiter with some other delimiter (say &) in the source file. Using "&" as the delimiter, you can create the lookup and use it to retrieve the concatenated return field, which will give you multiple return values for the match.
