DB2 ENCRYPT function - inserting encrypted data

I am trying to insert data into a table from an application. The statement should be:
insert into table1 (memberid) values (encrypt('1111','abcdef'))
However, the application is preparing it as:
insert into table1 (memberid) values ('encrypt('1111','abcdef')')
and the rows are getting inserted. But when I then run
select decrypt_char(memberid,'abcdef') from table1
I get SQL20146N - The decryption function failed. The data is not encrypted.

You get this error because the application has wrapped the function call in single quotes, so the literal string encrypt('1111','abcdef') is being stored rather than the result of actually calling ENCRYPT(). You can also hit it if the column declaration is wrong: the target column should be declared FOR BIT DATA.
Note that you should avoid ENCRYPT() and DECRYPT_CHAR() anyway: they are considered insecure and are deprecated, so they might be removed from a future release of Db2. Instead, use Db2 native encryption or underlying file-system encryption.
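For reference, a rough sketch of enabling Db2 native encryption from the command line (the keystore path, password, and database name here are made up for illustration; check the documentation for your Db2 version):

gsk8capicmd_64 -keydb -create -db /home/db2inst1/keystore.p12 -pw "StrongPw1" -type pkcs12 -stash
db2 update dbm cfg using keystore_type pkcs12 keystore_location /home/db2inst1/keystore.p12
db2 "create database mydb encrypt"

The first command creates a PKCS#12 keystore with GSKit, the second points the instance at it, and the ENCRYPT clause then encrypts the whole database transparently, with no ENCRYPT()/DECRYPT_CHAR() calls in your SQL at all.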
Below is a worked example for your case, on Db2-LUW:
create table table1 (memberid varchar(64) for bit data)
insert into table1 (memberid) values (encrypt('1111','abcdef'))
select hex(memberid) from table1

1
--------------------------------------------------------------------------------------------------------------------------------
0828D8FFB804AFD51CFBD754BD9D234F

select decrypt_char(memberid, 'abcdef') from table1

1
--------------------------------------------------------
1111
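For contrast, here is a sketch of what the misbehaving application statement effectively does (note the doubled inner quotes needed to make the literal even parse; a string, not encrypted data, gets stored, so decryption fails exactly as in the question):

insert into table1 (memberid) values ('encrypt(''1111'',''abcdef'')')
select decrypt_char(memberid, 'abcdef') from table1
SQL20146N  The decryption function failed. The data is not encrypted.

So the fix on the application side is to make sure the prepared statement passes ENCRYPT() as a function call, not as a quoted string value.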

Related

DELETE on DB2 using OPENQUERY returns an error about insufficient key column information

I am trying to delete rows on DB2 for i (iSeries) using a linked server, but am getting this error message:
Key column information is insufficient or incorrect. Too many rows were affected by update.
This is the query:
DELETE FROM DB2
FROM OPENQUERY(TEST1, 'SELECT FIELD1 FROM LIBRARY1.FILE1') DB2
INNER JOIN #DLT_FILE1 DLT ON
DB2.FIELD1 = DLT.FIELD1
There is one column in both the temp table #DLT_FILE1 and the DB2 table LIBRARY1.FILE1.
Db2 for IBM i (aka DB2/400) doesn't allow positioned deletes (i.e., deletes via a cursor) that use joins.
AMarc's suggestion might work once you fix the syntax... I believe this is correct:
DELETE
FROM OPENQUERY(TEST1,
    'SELECT FIELD1 FROM LIBRARY1.FILE1 DB2
     WHERE EXISTS (SELECT 1
                   FROM #DLT_FILE1 DLT
                   WHERE DB2.FIELD1 = DLT.FIELD1)
    ')
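One caveat worth flagging (my assumption, not part of the original answer): the string passed to OPENQUERY is executed on the remote Db2 server, which cannot see the local SQL Server temp table #DLT_FILE1. If the statement above fails for that reason, a hedged alternative is to stage the keys in a table on the Db2 side first (LIBRARY1.STAGE_KEYS below is a hypothetical name):

-- 1) copy the keys from the local temp table to the Db2 side
INSERT INTO OPENQUERY(TEST1, 'SELECT FIELD1 FROM LIBRARY1.STAGE_KEYS')
SELECT FIELD1 FROM #DLT_FILE1;

-- 2) delete using a pass-through that references only Db2 objects
DELETE FROM OPENQUERY(TEST1,
    'SELECT FIELD1 FROM LIBRARY1.FILE1 F
     WHERE EXISTS (SELECT 1 FROM LIBRARY1.STAGE_KEYS S
                   WHERE S.FIELD1 = F.FIELD1)');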

Is there any way to force SQLite constraint checks?

For example, let's say the DB has a foreign key A.b_id -> B.id with SET NULL on delete.
If a record with some B.id gets deleted, all b_id references to it will be set to NULL.
But if A already contains a record where A.b_id has a value that is not in B.id (it was inserted without foreign key support), is there a way to force SQLite to check the foreign keys and set such data to NULL?
In fact, what I'm really solving is a DB upgrade task.
On start, the app checks whether the internal DB (a resource) has a higher version than the user DB.
If so, it backs up the user DB and copies the internal empty DB to user storage. Then it turns off foreign key support and fills the new DB with data from the backup, inserting automatically in a loop, table by table, for all columns with the same name. Finally it turns foreign key support back on.
Everything works fine, but if some table in the old DB previously had no foreign key constraint while the new DB has one, the data will be inserted as-is and a link can point nowhere (possibly wrong links are unavoidable and not related to the question).
Yes, I understand there is a way to insert without turning off foreign key support, but it would require knowing the tables' dependency order, which I would like to avoid.
Thanks for any help in advance!
Although I don't know of a way that will automatically set to NULL all orphaned values of a column that (should) reference a column in another table, there is a way to get a report of all such cases and then act accordingly.
This is the PRAGMA statement foreign_key_check:
PRAGMA schema.foreign_key_check;
or for a single table check:
PRAGMA schema.foreign_key_check(table-name);
From the documentation:
The foreign_key_check pragma checks the database, or the table called "table-name", for foreign key constraints that are violated. The foreign_key_check pragma returns one row of output for each foreign key violation. There are four columns in each result row. The first column is the name of the table that contains the REFERENCES clause. The second column is the rowid of the row that contains the invalid REFERENCES clause, or NULL if the child table is a WITHOUT ROWID table. The third column is the name of the table that is referred to. The fourth column is the index of the specific foreign key constraint that failed. The fourth column in the output of the foreign_key_check pragma is the same integer as the first column in the output of the foreign_key_list pragma. When a "table-name" is specified, the only foreign key constraints checked are those created by REFERENCES clauses in the CREATE TABLE statement for table-name.
Check a simplified demo of the way to use this PRAGMA statement, or its function counterpart pragma_foreign_key_check().
You can get a list of the rowids of all the problematic rows of each table.
In your case, you can execute an UPDATE statement that will set to NULL all the orphaned b_ids:
UPDATE A
SET b_id = NULL
WHERE rowid IN (SELECT rowid FROM pragma_foreign_key_check() WHERE "table" = 'A')
In later versions of SQLite, this also works:
UPDATE A
SET b_id = NULL
WHERE rowid IN (SELECT rowid FROM pragma_foreign_key_check('A'))
but it does not seem to work in versions up to SQLite 3.27.0.
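Here is a minimal end-to-end sketch of the whole flow (the table names A and B follow the question; the concrete values are made up):

PRAGMA foreign_keys = OFF;
CREATE TABLE B (id INTEGER PRIMARY KEY);
CREATE TABLE A (id INTEGER PRIMARY KEY,
                b_id INTEGER REFERENCES B(id) ON DELETE SET NULL);
INSERT INTO B (id) VALUES (1);
INSERT INTO A (id, b_id) VALUES (10, 1);   -- valid reference
INSERT INTO A (id, b_id) VALUES (11, 99);  -- orphan: 99 is not in B
PRAGMA foreign_key_check;                  -- reports: A | 11 | B | 0
UPDATE A
SET b_id = NULL
WHERE rowid IN (SELECT rowid FROM pragma_foreign_key_check() WHERE "table" = 'A');
PRAGMA foreign_key_check;                  -- now reports nothing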

Insert on a constraint-less table causes ORA-02449: unique/primary keys in table referenced by foreign keys

We have an old batch file which does only this statement:
insert into table_a select * from table_b;
table_a is a bulk table with no indexes and no constraints.
After a few years, as record counts grew, this batch became slow,
but suddenly, for the last few days, we have been getting this error every time we try to run the batch:
ORA-00604: error occurred at recursive SQL level 1
ORA-02449: unique/primary keys in table referenced by foreign keys
Our only option is to split the data into chunks and insert them part by part, which fixes the batch output, but the underlying problem still exists.
We are not dropping any table or object here.
Can you help us find the cause of the problem?
I've checked database-level triggers, but there is no trigger for insert at the database level.

Is it possible to run a Teradata query in Excel that uses Volatile tables?

My Teradata query creates a volatile table that is used to join to existing views. When linking the query to Excel, the following error pops up: "Teradata: [Teradata Database] [3932] Only an ET or null statement is legal after a DDL Statement". Is there a workaround for someone who does not have write permissions in Teradata to create a real view or table? I want to avoid linking to Teradata in SQL and running an open query to pull in the data needed.
This is on Excel 2016 64-bit with Teradata version 15.10.1.12.
Normally this error occurs if you are using ANSI mode or have issued a BT (Begin Transaction) in BTET mode.
Here are a few workarounds to try:
1. Issue an ET; statement (commit) after the CREATE VOLATILE TABLE statement. If you are using ANSI mode, use COMMIT; instead of ET;. If you are unsure, try each one in turn; only one will be valid, but both do the same thing. Make sure your volatile table includes ON COMMIT PRESERVE ROWS. (A sketch follows this list.)
2. Try using BTET mode (a.k.a. Teradata mode) when establishing the session. I do not remember where, but there will be a setting for this in the ODBC configuration.
3. Try using a Global Temporary Table. These work similarly to volatile tables, except you define them once and the definition sticks around. That is, you can create it in, say, BTEQ or SQL Assistant. The definition is common to all users and sessions (i.e. your Excel session), but the content is transient and unique to each session (like a volatile table). If you do not have CREATE Global Temporary Table permissions, ask your DBA.
4. Move the SELECT part of your INSERT into the volatile table into the query that selects the data from the volatile table. See the simple example below.
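A minimal sketch of point 1, reusing the names from the example below (whether ET; or COMMIT; is required depends on your session mode, so treat this as illustrative):

create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
ET;                        -- commit the DDL (use COMMIT; in ANSI mode)

insert into tmp
select customer_number
from customer
where X = Y and yr = 2019;

select a,b,c
from another_tbl A join tmp T ON A.id = T.id;

For point 3, a one-time CREATE GLOBAL TEMPORARY TABLE tmp (id Integer) ON COMMIT PRESERVE ROWS; (run ahead of time in, say, SQL Assistant) would replace the volatile DDL here entirely.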
Here is a simple example to illustrate point 4.
Current:
create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
insert into tmp
select customer_number
from customer
where X = Y and yr = 2019
;
select a,b,c
from another_tbl A join TMP T ON
A.id = T.id
;
Becomes:
select a,b,c
from another_tbl A join (
select customer_number
from customer
where X = Y and yr = 2019
) AS T
ON
A.id = T.id
;
Or better yet, just join your tables directly.
Note: the first sequence (CREATE TABLE, INSERT INTO, and SELECT) is a three-statement series and will return three "result sets". The first two are row counts; the last is the actual data. Most programs (including, I think, Excel) cannot process multiple result-set responses. This is one of the reasons it is difficult to use Teradata macros with client tools like Excel.
The latter solution (a single SELECT) avoids this potential problem.

How to merge N SQLite database files into one if the db has a primary key field?

I have a bunch of SQLite db files, and I need to merge them into one big db file.
How can I do that?
Added
Based on this, I guess these three commands should merge two dbs into one:
attach './abc2.db' as toMerge;
insert into test select * from toMerge.test;
detach database toMerge;
The problem is that the table has a PRIMARY KEY field, and I get this message: "Error: PRIMARY KEY must be unique".
This is the test table for the db:
CREATE TABLE test (id integer PRIMARY KEY AUTOINCREMENT, value text, goody text)
I'm just thinking off the top of my head here... (and probably after everybody else has moved on, too).
Mapping the primary key to NULL should yield the wanted result (no good if you use it as a foreign key somewhere else, since the key probably already exists in the target but with different contents):
attach './abc2.db' as toMerge;
insert into test select NULL, value, goody from toMerge.test;
detach database toMerge;
actual test:
sqlite> insert into test select * from toMerge.test;
Error: PRIMARY KEY must be unique
sqlite> insert into test select NULL, value, goody from toMerge.test;
sqlite> detach database toMerge;
I'm not 100% sure, but it seems that I should read all the elements and insert them (except the PRIMARY KEY) one by one into the new database.
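To extend this to N files, the same attach/insert/detach block can simply be repeated once per source db (the file names below are hypothetical). Naming the columns explicitly is equivalent to the NULL trick: omitting the AUTOINCREMENT column makes SQLite assign fresh ids automatically.

attach './abc2.db' as toMerge;
insert into test (value, goody) select value, goody from toMerge.test;
detach database toMerge;

attach './abc3.db' as toMerge;
insert into test (value, goody) select value, goody from toMerge.test;
detach database toMerge;

-- ...repeat for each remaining file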
