Dynamically generating pzPVStream from a view - pega

Is there some way to create a view that returns a pzPVStream that can be natively parsed by Pega when it executes an RDB?
For instance, maybe a query (in MS SQL Server) that resembled:
SELECT test_tbl_outer.ID, (
select *, 'My-Int-TestClass' as "pxObjClass"
from {class:My-Int-TestClass} as test_tbl_inner
where test_tbl_inner.ID=test_tbl_outer.ID
FOR XML RAW('pagedata'), TYPE, ELEMENTS
) as pzPVStream
from {class:My-Int-TestClass} as test_tbl_outer
This gets an invalid signature error (the SQL query does work directly however), and if I try to shove a signature string onto the column ('PR6d' or previous) I just get a different error regarding headers.
So at this point, I do realize that the pzPVStream is not stored as XML but as some sort of packed and compressed string. Is there a way for me to create a valid pzPVStream on the fly? Maybe something similar to what pr_read_from_stream does, but in reverse?
The use case is that we'd like to pull a whole mess of data from an existing data warehouse, and it would be nice if we could pull all the multi-value data (many, many joins deep) over in one trip. We are not too concerned with the size of this object, as we plan on pulling this data one way or another.

The pzPVStream is a compressed BLOB that represents a work object. It is compressed and stored as a single column in a table.
When it is read using Obj-Browse or Obj-Open activities, the blob is decompressed and all the encompassed properties are mapped to the clipboard.
This value has a proprietary format; the values are obfuscated.
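Since the blob format is proprietary, one workaround (a sketch only; the column names and aliases below are hypothetical) is to skip pzPVStream altogether and have the RDB query return plain exposed columns, which Pega can then map to properties of the results class:
-- sketch: return discrete columns instead of a blob, aliased to match the
-- target property names; nested data is rebuilt on the clipboard afterwards
SELECT
    src.ID          AS "ID",
    src.ORDER_ID    AS "OrderID",
    src.LINE_AMOUNT AS "LineAmount"
FROM {class:My-Int-TestClass} AS src
The deeply joined, multi-value data would then arrive as flat rows to be reassembled on the clipboard, rather than pre-packed in the blob.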

Related

Parsing nested JSON data within a Kusto column

After parsing the JSON data in a column within my Kusto Cluster using parse_json, I'm noticing there is still more data in JSON format nested within the resulting projected value. I need to access that information and make every piece of the JSON data its own column.
I've attempted to follow the answer from this SO post (Parsing json in kusto query) but haven't been successful in getting the syntax correct.
myTable
| project
Time,
myColumnParsedJSON = parse_json(column)
| project myColumnParsedNestedJSON = parse_json(myColumnParsedJSON.nestedJSONDataKey)
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
Please see the note at the bottom of this doc:
It is somewhat common to have a JSON string describing a property bag in which one of the "slots" is another JSON string. In such cases, it is not only necessary to invoke parse_json twice, but also to make sure that in the second call, tostring will be used. Otherwise, the second call to parse_json will simply pass on the input to the output as-is, because its declared type is dynamic.
Once you're able to get parse_json to parse your payload properly, you can use the bag_unpack plugin (doc) to achieve the requirement you mentioned:
I expect the results to be projected columns, each named as each of the keys, with their respective values displayed in one row record.
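Putting the note and the plugin together, a sketch of the adjusted query could look like the following (it reuses the names from the question and assumes nestedJSONDataKey holds the inner JSON string):
myTable
| project Time, myColumnParsedJSON = parse_json(column)
// the nested slot is itself a JSON string, so wrap it in tostring()
// before calling parse_json a second time
| extend nestedBag = parse_json(tostring(myColumnParsedJSON.nestedJSONDataKey))
// bag_unpack turns each key of the dynamic value into its own column
| evaluate bag_unpack(nestedBag)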

How is an MVCCKey formed in CockroachDB?

I want to create an MVCCKey with a timestamp and pretty value I know. But I realize a roachpb.Key is not very straightforward; is there some prefix/suffix involved? Is the database name also encoded in the roachpb.Key?
Can anyone please tell me how a MVCCKey is formed? What information does it have? In the documentation, it just says that it looks like /table/primary/key/column.
An engine.MVCCKey combines a regular key with a timestamp. MVCCKeys are encoded into byte strings for use as RocksDB keys (RocksDB is configured with a custom comparator so MVCCKeys are sorted correctly even though the timestamp uses a variable-width encoding).
Regular keys are byte strings of type roachpb.Key. For ordinary data records, the keys are constructed from table, column, and index IDs, along with the values of indexed columns. (The database ID is not included here; the database to which a table belongs can be found in the system.descriptors table.)
The function keys.PrettyPrint can convert a roachpb.Key to a human-readable form.

Having trouble with FIELDPROC on a database (Column Encryption on iSeries)

I used Listing 3 in the following link to create a FIELDPROC program QGPL/MOBHOMEPAS, which should encrypt a variable-length character column: Field Encryption in DB2 for i
I compiled the RPGLE program and created a separate database file DBMLIB/UMAAAP00 as follows:
A R UMAAAF00 TEXT('-
A TEST ENCRYPTION')
A*
A IPIAAA 20A VARLEN(20)
A KYGAAA 11S 2 COLHDG('SALARY')
I then used STRSQL to alter the table and protect IPIAAA:
ALTER TABLE DBMLIB/UMAAAP00 alter column IPIAAA set FIELDPROC
QGPL.MOBHOMEPAS
ALTER COMPLETED FOR TABLE UMAAAP00 IN DBMLIB.
For some reason, when I go in to add entries through UPDDTA directly to the file itself and then do a WRKQRY to query the file and view them, I don't see them as encrypted.
Is this not how it's supposed to work? Is anyone able to assist me with the logic? Ultimately, I'd like to create a simple table from scratch that has a single 20-character-or-so password column stored encrypted.
If the code used for the named FieldProc program QGPL.MOBHOMEPAS was modeled after the source code found at the URL in the question, then that code implements only the base level of the DB2 for IBM i 7.1 SQL FieldProc support, not the later, enhanced level in which the masking feature was added. That is, every invocation other than function-code=8 will necessarily be an Encode or a Decode operation for which any masking of the data is unsupported, because changing the data at that level of support would corrupt the data in the TABLE.
Note [from http://www.mcpressonline.com/rpg/db2-field-procedures-finally-support-conditional-masking.html] the differences in the coding requirements between the pre-masking support (eight parameters) and the masking support (nine parameters); the nine-parameter interface is the prerequisite for having Run Query (RUNQRY) and Update Data (UPDDTA) mask the data that is presented to the user:
The new FieldProc Masking support revolves around two main components.
The first component is a new parameter that was added to the parameter
lists that the DB2 engine passes to the FieldProc program on each
decode call. This new parameter controls whether or not the FieldProc
program can return a masked value. There are some DB2 operations—such
as the RGZPFM (Reorganize Physical File Member) command and trigger
processing—that always require the clear-text version of the data to
be returned. The second component is a new special SQLState value
('09501') that is to be returned by the FieldProc program whenever it
is passed a masked value on the encode call. This prevents the masked
value from being encoded, which would result in the original data
value being lost. When this special SQLState value is returned, DB2
will ignore the encoded value that is passed back by the FieldProc
program and instead use the value that's currently stored in the
record image for that column.
For some reason, when I go in to add entries through UPDDTA directly to
the file itself and then do a WRKQRY to query the file and view them, I
don't see them as encrypted. Is this not how it's supposed to work?
No, that's not how it's supposed to work. The data will be encoded on disk only.
When you view the data it will be decoded automatically by the FIELDPROC program no matter what you're using to view it (WRKQRY [yuck], DFU, STRSQL, whatever). This is how it works regardless of field masking (which is different/additional functionality).
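As for the simple from-scratch table mentioned at the end of the question, a minimal sketch (library, file, and column names here are made up) follows the same ALTER ... SET FIELDPROC pattern already shown above:
-- create the table, then attach the FieldProc to the column to be encoded
CREATE TABLE DBMLIB/PWTEST01
    (USERNAME VARCHAR(20) NOT NULL,
     USERPASS VARCHAR(20) NOT NULL)

ALTER TABLE DBMLIB/PWTEST01
    ALTER COLUMN USERPASS SET FIELDPROC QGPL.MOBHOMEPAS

-- any SELECT (STRSQL, RUNQRY, query tools) will still show the clear-text
-- value, because the FieldProc decodes it on every read; only the bytes
-- stored on disk are encoded
SELECT USERNAME, USERPASS FROM DBMLIB/PWTEST01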

BizTalk SQL Adapter returns data but after I map it to another schema the transformation returns empty xml document

Has anyone faced this issue before? I am trying to use the BizTalk SQL Adapter to read data from a stored procedure using (FOR XML AUTO, ELEMENTS, XMLDATA). The data is returned normally (I push it to a physical file and the data is there), but after I map the returned data to another schema (with a similar structure), the file generated by the transform is always an empty file. I want to know what is wrong.
Generate the schema with xmldata; after that, replace xmldata with root('rootName'), then update the schema by adding a child element rootName and make it the parent element for the XML data right after the auto-generated root.
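For illustration, a sketch of the stored procedure's SELECT at the two stages (procedure, table, and root names here are hypothetical):
-- while generating the BizTalk schema: XMLDATA lets the wizard infer the schema
SELECT OrderID, CustomerName
FROM dbo.Orders
FOR XML AUTO, ELEMENTS, XMLDATA

-- at runtime, once the schema exists: replace XMLDATA with a named root so the
-- adapter's output matches the schema's root element
SELECT OrderID, CustomerName
FROM dbo.Orders
FOR XML AUTO, ELEMENTS, ROOT('rootName')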

storing and retrieving large strings in SQLite with ADO and VBScript

I am using VBScript, ADO, and the SQLite ODBC driver to store and retrieve large strings (~5 KB). Storing them works fine, maybe because I am able to specify a size when I bind the parameters of the insert statement. When I try to retrieve those strings, however, I correctly get the first 256 (or 255) characters, but the rest seems to come from a random memory area. What am I doing wrong (besides using VBScript and ADO...)?
I'm open to the idea of storing the text as binary data. But the functions I tried, to retrieve it later, didn't work.
GetChunk will not work on every record field; as noted on MSDN, the adFldLong field attribute states whether GetChunk can be used on that field.
For some fields you must use a SQL query to retrieve the length of the data instead of using the ActualSize attribute.
There is a good example here: http://kek.ksu.ru/eos/ecommerce/masteringasp/18-06.html
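A sketch of that length-first approach (table and column names are hypothetical), selecting the length alongside the text so the client code doesn't have to trust Field.ActualSize:
-- SQLite: length() returns the number of characters of a TEXT value, so the
-- caller knows how much data to expect before reading the field
SELECT length(big_text) AS big_text_len, big_text
FROM notes
WHERE id = 42;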
