I used Listing 3 from the following article to create a FIELDPROC program, QGPL/MOBHOMEPAS, which should encrypt a variable-length character column: Field Encryption in DB2 for i
I compiled the RPGLE program and created a separate database file, DBMLIB/UMAAAP00, as follows:
A          R UMAAAF00                  TEXT('-
A                                      TEST ENCRYPTION')
A*
A            IPIAAA        20A         VARLEN(20)
A            KYGAAA        11S 2       COLHDG('SALARY')
I then used STRSQL to alter the table and protect IPIAAA:
ALTER TABLE DBMLIB/UMAAAP00 alter column IPIAAA set FIELDPROC
QGPL.MOBHOMEPAS
ALTER COMPLETED FOR TABLE UMAAAP00 IN DBMLIB.
For some reason, when I add entries directly to the file through UPDDTA and then run WRKQRY to query the file and view them, I don't see them as encrypted.
Is this not how it's supposed to work? Is anyone able to assist me with the logic? Ultimately, I'd like to create a simple table from scratch that has a single 20-character (or so) password column stored encrypted.
If the code used for the named FieldProc program QGPL.MOBHOMEPAS was modeled after a working copy of the source code found at the URL in the original post, then that code implements the base level of the DB2 for IBM i 7.1 SQL FieldProc support, not the later enhanced level in which the masking feature was added. That is, every invocation other than for function-code=8 will always be an Encode or a Decode operation, for which any masking of the data is unsupported, because changing the data at that level of support would corrupt the data in the TABLE.
Note [from http://www.mcpressonline.com/rpg/db2-field-procedures-finally-support-conditional-masking.html] the differences in the coding requirements described for the pre-masking support (eight parameters) and the masking support (nine parameters); implementing the latter is the prerequisite for having Run Query (RUNQRY) and Update Data (UPDDTA) mask the data that is presented to the user:
The new FieldProc Masking support revolves around two main components.
The first component is a new parameter that was added to the parameter
lists that the DB2 engine passes to the FieldProc program on each
decode call. This new parameter controls whether or not the FieldProc
program can return a masked value. There are some DB2 operations—such
as the RGZPFM (Reorganize Physical File Member) command and trigger
processing—that always require the clear-text version of the data to
be returned. The second component is a new special SQLState value
('09501') that is to be returned by the FieldProc program whenever it
is passed a masked value on the encode call. This prevents the masked
value from being encoded, which would result in the original data
value being lost. When this special SQLState value is returned, DB2
will ignore the encoded value that is passed back by the FieldProc
program and instead use the value that's currently stored in the
record image for that column.
For some reason when I go in to add entries through upddta directly to
the file itself and then do a wrkqry to query and file and view them I
don't see them as encrypted. Is this not how it's supposed to work?
No, that's not how it's supposed to work. The data will be encoded on disk only.
When you view the data it will be decoded automatically by the FIELDPROC program no matter what you're using to view it (WRKQRY [yuck], DFU, STRSQL, whatever). This is how it works regardless of field masking (which is different/additional functionality).
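Regarding the last part of the question (a simple table with a single encrypted password column), here is a minimal sketch, assuming the same FieldProc program is used; the table and column names are illustrative:

-- create the table first (names are hypothetical)
CREATE TABLE DBMLIB/UMAAAP01 (PASSWRD VARCHAR(20) NOT NULL)

-- then attach the FieldProc; from then on the column is encoded on disk
-- but decoded automatically whenever it is read
ALTER TABLE DBMLIB/UMAAAP01 ALTER COLUMN PASSWRD SET FIELDPROC QGPL.MOBHOMEPAS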
I am trying to implement a use-case in Mule4 where a tour needs to be assigned to a user if it has not already been assigned.
I was hoping that I could implement it using the Mule db:insert component with an INSERT ... WHERE NOT EXISTS SQL script, as below.
INSERT INTO TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM) select :tourno,:tlid,:system from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO=:tourno and TLID=:tlid and SYSTEM=:system))
However, this results in a Mule exception:
Message : ORA-01722: invalid number
Error type : DB:BAD_SQL_SYNTAX
The TL_MAPPING_TOUR table has an id column (primary key), but that is auto-generated by a sequence.
The same script, modified to run directly in SQL Developer as shown below, works fine.
INSERT into TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM)
select 'CLLO001474','123456789','AS400'
from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO='CLLO001474' and TLID='123456789' and SYSTEM='AS400'));
Clearly the Mule db:insert component doesn't like the syntax, but it's not very clear to me what is wrong here. I can't find any INSERT WHERE NOT EXISTS example implementation for the Mule4 Database component either.
The Stack Overflow page https://stackoverflow.com/questions/54910330/insert-record-into-sql-server-when-it-does-not-already-exist-using-mule leads to "page not found".
Any idea what is wrong here and how to implement this in Mule4 without using another Mule4 db:select component before db:insert?
I don't know "mule4", but this:
Message : ORA-01722: invalid number
doesn't mean that the syntax is wrong (as you already tested it: the same statement works OK in another tool).
Cause: You executed a SQL statement that tried to convert a string to a number, but it was unsuccessful.
Resolution:
The option(s) to resolve this Oracle error are:
Option #1: Only numeric fields or character fields that contain numeric values can be used in arithmetic operations. Make sure that all expressions evaluate to numbers.
Option #2: If you are adding or subtracting from dates, make sure that you added/subtracted a numeric value from the date.
In other words, it seems that one of the columns is declared as NUMBER, while you passed something that is a string. Oracle performed an implicit conversion when you tested the statement in SQL Developer, but it seems that Mule 4 didn't, hence the error.
The most obvious cause (based on what you posted) is putting '123456789' into TLID, as the other values are obviously strings. Therefore, pass 123456789 (a number, no single quotes around it) and see what happens. It should work.
SQL Developer is too forgiving. It will convert strings to numbers and vice versa automatically when it can, and it can do a lot.
The Mulesoft DB connector tries the same, but it is not as successful as native tools. Quite often it fails to convert, especially on dates, but that is not your case.
In short, do not put too much trust in Mulesoft's data-type intelligence. If it works, great! Otherwise, take the guessing out of its hands and do all conversions in the query itself, preferably from strings. Usually a plain number works fine, but if it doesn't, use the TO_NUMBER function to state explicitly that the value is a number.
More about this is here https://simpleflatservice.com/mule4/AvoidCoversionsOrMakeThemNative.html
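For example, here is a sketch of the original statement with the conversion made explicit in the query, assuming TLID is the NUMBER column:

INSERT INTO TL_MAPPING_TOUR (TOURNO, TLID, SYSTEM)
SELECT :tourno, TO_NUMBER(:tlid), :system
FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM TL_MAPPING_TOUR
                  WHERE TOURNO = :tourno
                    AND TLID = TO_NUMBER(:tlid)
                    AND SYSTEM = :system)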
Description:
Recently I've been trying to automate some tasks at work using Pentaho Data Integration (PDI), and I've come upon a problem that I've had no luck solving or finding a solution for (I researched for many hours and tried to solve it on my own as well). My aim is to load a text file containing the name of a PL/SQL procedure stored on the server, plus a custom amount of parameters for the procedure. For example, the source text file might contain the following text:
Test_schema.job_pkg.run_job;12345
It should run the job_pkg.run_job procedure over the defined connection and use 12345 as its single parameter.
The problem:
The Call DB Procedure transformation step only accepts a SET amount of parameters. For example, I configured the step to accept 4 parameters, but the procedure I'm calling accepts only 1; I want to be able to IGNORE the other parameters defined in the step. When I try to send just one parameter while the step is set to accept 4, it throws:
Call DB Procedure.0 - ORA-06550: row 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'RUN_JOB'
ORA-06550: row 1, column 7: PL/SQL: Statement ignored
What I have so far:
I've made a job that starts a transformation which loads the contents of the source file, splits it into the correct fields using a Modified Java Script Value step, and sets Pentaho variables with the extracted values. A second transformation is then run, which reads these variables and passes them as fields to the Call DB Procedure step. The last step always fails unless I manually remove all unused arguments.
Solution:
Based on AlainD's answer, I tried the Switch / Case step, which solved the problem. Now there is a different problem regarding the conversion of values: if I pass a number but set it as STRING in the Call DB Procedure step's parameters, it throws
ORA-01403: no data found
This can be solved by converting the data into the correct format beforehand, using a Modified Java Script Value step or any other suitable step.
What I do in cases like that is build the SQL command in a String, something like Test_schema.job_pkg.run_job(12345), and execute it with an Execute SQL script step.
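For instance, here is a sketch of the statement such a String could contain, assembled from the sample input line (the procedure name and parameter are taken from the parsed file):

-- built dynamically from 'Test_schema.job_pkg.run_job;12345'
BEGIN
  Test_schema.job_pkg.run_job(12345);
END;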
Another workaround would be to count the number of parameters in the Modified Java Script step, and use a Switch/Case step to redirect the flow onto a sequence of Call DB Procedure steps: one with 0 parameters, one with 1 parameter, one with 2 parameters, and so on. This method assumes that the maximum number of parameters is small.
Is there some way to create a view that returns a pzPVStream that can be natively parsed by Pega when it executes an RDB?
For instance, maybe a query (in MS SQL Server) that resembled:
SELECT test_tbl_outer.ID, (
select *, 'My-Int-TestClass' as "pxObjClass"
from {class:My-Int-TestClass} as test_tbl_inner
where test_tbl_inner.ID=test_tbl_outer.ID
FOR XML RAW('pagedata'), TYPE, ELEMENTS
) as pzPVStream
from {class:My-Int-TestClass} as test_tbl_outer
This gets an invalid signature error (the SQL query does work directly however), and if I try to shove a signature string onto the column ('PR6d' or previous) I just get a different error regarding headers.
So at this point, I do realize that the pzPVstream is not stored as xml but as some sort of packed & compressed string. Is there a way for me to create a valid pzPVstream on the fly? Maybe something similar to what pr_read_from_stream does but in reverse?
The use case is that we'd like to pull a whole mess of data from an existing data warehouse. And it would be nice if we could pull all the multi-value data (many,many joins deep) over in one trip. We are not too concerned with the size of this object as we plan on pulling this data one way or another.
The pzPVStream is a compressed blob that represents a work object. It is compressed and stored as a single column in a table.
When it is read using the Obj-Browse or Obj-Open activities, the blob is decompressed and all the encompassed properties are mapped to the clipboard.
This value has a proprietary format; the values are obfuscated.
I am using VBScript, ADO, and the SQLite ODBC driver to store and retrieve large strings (~5 KB). Storing them works fine, maybe because I am able to specify a size while I bind the parameters of the insert statement. When I try to retrieve those strings, however, I correctly get the first 256 (or 255) characters, but the rest seems to come from a random memory area. What am I doing wrong (besides using VBScript and ADO...)?
I'm open to the idea of storing the text as binary data. But the functions I tried, to retrieve it later, didn't work.
GetChunk will not work on a record field, as noted on MSDN; the adFldLong field attribute also states whether GetChunk can be used on that field.
For some fields you must use an SQL query to retrieve the length of the data instead of using the ActualSize attribute.
There is a good example here: http://kek.ksu.ru/eos/ecommerce/masteringasp/18-06.html
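For instance, here is a sketch of the SQL-side approach in SQLite (table and column names are illustrative): retrieve the length explicitly, and read the text in fixed-size pieces.

SELECT length(big_text)           AS text_len,
       substr(big_text, 1, 255)   AS part1,
       substr(big_text, 256, 255) AS part2
FROM atable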
I need to be able to run a query such as
SELECT * FROM atable WHERE MyFunc(afield) = "some text"
I've written MyFunc in a VB module, but the query results in "Undefined function 'MyFunc' in expression." when executed from .NET.
From what I've read so far, functions in Access VB modules aren't available to .NET due to security concerns. There isn't much information on the subject, but this avenue seems like a dead end.
The other possibility is through the CREATE PROCEDURE statement which also has precious little documentation: http://msdn.microsoft.com/en-us/library/bb177892%28v=office.12%29.aspx
The following code does work and creates a query in Access:
CREATE PROCEDURE test AS SELECT * FROM atable
However I need more than just a simple select statement - I need several lines of VB code.
While experimenting with the CREATE PROCEDURE statement, I executed the following code:
CREATE PROCEDURE test AS
Which produced the error "Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'."
This seems to indicate that there's a SQL 'PROCEDURE' statement, so then I tried
CREATE PROCEDURE TEST AS PROCEDURE
Which resulted in "Syntax error in PROCEDURE clause."
I can't find any information on the SQL 'PROCEDURE' statement - maybe I'm just reading the error message incorrectly and there's no such beast. I've spent some time experimenting with the statement but I can't get any further.
In response to the suggestions to add a field to store the value, I'll expand on my requirements:
I have two scenarios where I need this functionality.
In the first scenario, I needed to enable the user to search on the soundex of a field, and since there's no soundex SQL function in Access, I added a field to store the soundex value for every field in every table where the user wants to be able to search for a record that "sounds like" an entered value. I update the soundex value whenever the parent field value changes. It's a fair bit of overhead, but I considered it necessary in this instance.
For the second scenario, I want to normalize the spacing of a space-concatenation of field values and optionally strip out user-defined characters. I can come very close to achieving the desired value with a combination of TRIM and REPLACE functions. The value would only differ if three or more spaces appeared between words in the value of one of the fields (an unlikely scenario). It's hard to justify the overhead of an extra field on every field in every table where this functionality is needed. Unless I get specific feedback from users about the issue of extra spaces, I'll stick with the TRIM & REPLACE value.
My application is database agnostic (or just not very religious... I support 7). I wrote a UDF for each of the other 6 databases that does the space normalization and character stripping much more efficiently than the built-in database functions. It really annoys me that I can write the UDF in Access as a VB macro and use that macro within Access but I can't use it from .NET.
I do need to be able to index on the value, so pulling the entire column(s) into .NET and then performing my calculation won't work.
I think you are running into the ceiling of what Access can do (and trying to go beyond). Access really doesn't have the power to do really complex TSQL statements like you are attempting. However, there are a couple ways to accomplish what you are looking for.
First, if the results of MyFunc don't change often, you could create a function in a module that loops through each record in atable and runs your MyFunc against it. You could either store that data in the table itself (in a new column) or you could build an in-memory dataset that you use for whatever purposes you want.
The second way of doing this is to do the manipulation in .NET since it seems you have the ability to do so. Do the SELECT statement and pull out the data you want from Access (without trying to run MyFunc against it). Then run whatever logic you want against the data and either use it from there or put it back into the Access database.
Why don't you want to create an additional field in your atable, such that atable.afieldX = MyFunc(atable.afield)? All you need is to run an UPDATE command once.
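For example (run from inside Access, where the VBA function is visible to the query engine; afieldX is the hypothetical new column):

UPDATE atable SET afieldX = MyFunc(afield);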
You should try writing a SQL Server function MyFunc. This way you will be able to run the same query in SQL Server and in Access.
A few useful links so you can get started:
MSDN article about user defined functions: http://msdn.microsoft.com/en-us/magazine/cc164062.aspx
SQLServer user defined functions: http://www.sqlteam.com/article/intro-to-user-defined-functions-updated
SQLServer string functions: http://msdn.microsoft.com/en-us/library/ms181984.aspx
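As a sketch of that approach, assuming SQL Server as the backend (all names are illustrative; the body mirrors the TRIM/REPLACE space-normalization described in the question):

-- collapse runs of spaces to a single space, then trim the ends
CREATE FUNCTION dbo.MyFunc (@afield NVARCHAR(255))
RETURNS NVARCHAR(255)
AS
BEGIN
    -- classic REPLACE trick: ' ' -> '<>', drop '><' pairs, '<>' -> ' '
    RETURN LTRIM(RTRIM(
        REPLACE(REPLACE(REPLACE(@afield, ' ', '<>'), '><', ''), '<>', ' ')
    ));
END;

With a function like that in place, the WHERE MyFunc(afield) = 'some text' query could run unchanged from .NET against SQL Server.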
What version of JET (now called Ace) are you using?
I mean, it should come as no surprise that if you're going to use some Access VBA code, then you need the VBA library and a copy of MS Access loaded and running.
However, in Access 2010 we now have table triggers and stored procedures. These stored procedures do NOT require VBA and in fact run at the engine level. I have a table trigger and soundex routine here that shows how this works:
http://www.kallal.ca/searchw/WebSoundex.htm
The above means that if Access, or VB.NET, or even FoxPro via ODBC modifies a row, the table trigger code will fire and save the soundex value in a column for you. This feature also works if you use the new web publishing feature in Access 2010. So, while the above article is written from the point of view of using Access Web services (available in Office 365 and SharePoint), the above soundex table trigger will also work in a standalone Access and JET (ACE)-only application.