I have a plain string, athakur#test.com. It is stored encrypted in an Oracle DB with some encryption key. The algorithm used is not available in DB2, and I want the same data in DB2.
I can't transfer the data directly by copy-paste because the characters come out different when I paste from SQL Developer into Data Studio. So I am converting the encrypted data to hex in Oracle and then trying to convert the hex back to data in DB2, but that does not seem to work.
The encrypted data in hex, obtained with RAWTOHEX, is 1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306, but in DB2 when I run
select x'1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306' from dual;
I am getting null.
Any idea what am I missing or any other way to replicate data?
What version and platform of DB2?
Your statement should work, assuming you're on a version that supports Oracle's DUAL table rather than requiring the SYSIBM.SYSDUMMY1 equivalent.
It does work for me, though the display value is unreadable of course. I suspect you really want
select hex(x'1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306')
from dual;
You can't display the encrypted value directly, as it isn't made of valid displayable characters. The best you can do is:
insert into mytbl
values (x'1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306');
select hex(myfld)
from mytbl;
Make sure that you define myfld as CHAR(24) FOR BIT DATA
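The hex string itself is a lossless carrier of the ciphertext bytes, which is why the RAWTOHEX route is sound in principle. A minimal Python sketch of the round trip, using the hex value from the question, also shows where the CHAR(24) figure comes from:

```python
# Hex string produced by Oracle's RAWTOHEX for the encrypted value
hex_str = "1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306"

# Decode the hex text back into the raw ciphertext bytes
cipher_bytes = bytes.fromhex(hex_str)
assert len(cipher_bytes) == 24  # 48 hex digits = 24 bytes, hence CHAR(24) FOR BIT DATA

# Re-encoding yields the identical hex string, so nothing is lost in transit
assert cipher_bytes.hex().upper() == hex_str
```

If the DB2 column ends up holding different bytes, the corruption happened in the transfer step (e.g. copy-paste re-encoding), not in the hex representation itself.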
I have an excel spreadsheet with multiple entries that I want to insert into an SQLite DB from UIPath. How do I do this?
You could do it one of two ways. For both methods, you will need to use the Excel Read Range activity to read the spreadsheet into a DataTable.
Scenario 1: You could read the table in a For Each Row loop, line by line, converting each row to SQL and running an Execute Non Query activity. This is slow, and if you like big-O notation, it is an O(n) solution.
Scenario 2: You could upload the entire table (as long as it's compatible with the DB table) to the database in one step.
You will need the Database > Insert activity.
You will need to provide the DB connection (which I explain how to create in another post).
Then enter, in quotes, the SQLite database table you want to insert into.
And then, in the last field, enter the table you created or pulled from another source.
The output will be an integer (the number of affected records).
In big-O terms, this is an O(1) solution, at least from our coding perspective.
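Outside UIPath, the same two approaches can be sketched with Python's sqlite3 module (the table, columns, and data here are hypothetical stand-ins): handing the whole row set to the driver in one call is the analogue of the single Insert activity, as opposed to looping with one statement per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the SQLite DB file
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")

# Rows as read from the spreadsheet (hypothetical data)
rows = [("alice", 30), ("bob", 25), ("carol", 41)]

# Scenario 1 would loop: for row in rows: conn.execute("INSERT ...", row)
# Scenario 2: hand the whole set to the driver in one call
cur = conn.executemany("INSERT INTO people VALUES (?, ?)", rows)
affected = cur.rowcount  # the "Affected Records" integer, like the activity's output
conn.commit()
```

The driver still inserts each row, but the per-call overhead disappears from your code, which is what the "O(1) from our coding perspective" remark is getting at.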
I need to find out how to implement in MariaDB some features I use in Oracle. I have to:
Load a file: in Oracle I use an external table. Is there a fast and efficient way to load a file into a table? Does MariaDB have a plugin that can load a specific file format well?
In my existing Oracle code I developed Java wrapper functions that provide the following features (is there a way to do this in MariaDB?), specifically:
1- Search for files in an OS directory and insert them into a table,
2- Send an SNMP trap,
3- Send a mail via SMTP.
Is there an equivalent to an Oracle job in MariaDB?
Is there an equivalent to Oracle TDE (Transparent Data Encryption)?
Is there an equivalent to VPD (Virtual Private Database)?
What is the maximum length of a VARCHAR column/variable? (In Oracle we can use CLOBs.)
Many Thanks and Best Regards
MariaDB (and MySQL) can do a LOAD DATA on a CSV file. It is probably the most efficient way to convert external data to a table. (There is also ENGINE=CSV, which requires no conversion, but is limited in that it has no indexes, etc.)
MariaDB cannot, for security reasons, issue any arbitrary system calls. No emails, no 'exec', etc.
No Job, TDE, VPD.
Network transmissions can (optionally) use SSL for encryption at that level.
There is a family of virtually identical datatypes for characters:
CHAR(n) -- where n is up to 255; VARCHAR(n) -- where n is up to 65535. n is the limit in _characters_, not _bytes_.
TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT -- of various limits; the last is limited to 4GB.
For non-character storage (e.g., images), there is a similar set of datatypes:
BINARY(n), VARBINARY(n)
TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB
The various sizes of TEXT and BLOB indicate whether that is a 1-, 2-, 3-, or 4-byte length field in the implementation.
NVARCHAR is a synonym for VARCHAR. Character sets are handled by declaring a column to be, for example, CHARACTER SET utf8 COLLATE utf8_unicode_ci. Such can be defaulted at the database (schema) level, defaulted at the table level, or specified differently for different columns (even in the same table).
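The size limits above follow directly from the length prefix: an n-byte prefix can record lengths up to 2^(8n) - 1 bytes. A quick arithmetic check in Python:

```python
# Maximum payload length for a 1-, 2-, 3-, or 4-byte length prefix
limits = {n: 2 ** (8 * n) - 1 for n in (1, 2, 3, 4)}

assert limits[1] == 255            # TINYTEXT / TINYBLOB
assert limits[2] == 65535          # TEXT / BLOB (and the VARCHAR cap)
assert limits[3] == 16777215       # MEDIUMTEXT / MEDIUMBLOB
assert limits[4] == 4294967295     # LONGTEXT / LONGBLOB (~4GB)
```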
I'm using ROracle to query a database with a VARCHAR2 field containing some Unicode characters. When I access the database directly or via RJDBC, I have no issues with pulling this data.
When I pull the data with ROracle, I get ????? instead of the text.
In OCI you have to set the environment variable NLS_LANG. For example:
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
will make the OCI client return all strings in UTF-8. This should work if the internal string representation in R also uses UTF-8; ROracle can then make a simple binary copy from one buffer into the other.
Oracle uses question marks when it cannot translate a character into the target code page.
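The question-mark substitution is ordinary lossy transcoding: when a character has no representation in the target code page, the converter emits a replacement character. Python's codecs show the same behaviour:

```python
text = "naïve – café"  # contains characters outside ASCII

# Transcode to a code page that cannot represent them; the converter
# substitutes '?' for each untranslatable character, as Oracle does
ascii_bytes = text.encode("ascii", errors="replace")
assert ascii_bytes == b"na?ve ? caf?"

# With a UTF-8 character set (the AL32UTF8 setting), the round trip is lossless
utf8_bytes = text.encode("utf-8")
assert utf8_bytes.decode("utf-8") == text
```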
I am having an issue with an SQLite database. I am using the SQLite ODBC driver from http://www.ch-werner.de/sqliteodbc/ . I installed the 64-bit version and created the ODBC DSN with these settings:
I open my Access database and link to the datasource. I can open the table, add records, but cannot delete or edit any records. Is there something I need to fix on the ODBC side to allow this? The error I get when I try to delete a record is:
The Microsoft Access database engine stopped the process because you and another user are attempting to change the same data at the same time.
When I edit a record I get:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made.
Save record is disabled. Only copy to clipboard or drop changes is available.
My initial attempt to recreate your issue was unsuccessful. I used the following on my 32-bit test VM:
Access 2010
SQLite 3.8.2
SQLite ODBC Driver 0.996
I created and populated the test table [tbl1] as documented here. I created an Access linked table and when prompted I chose both columns ([one] and [two]) as the Primary Key. When I opened the linked table in Datasheet View I was able to add, edit, and delete records without incident.
The only difference I can see between my setup and yours (apart from the fact that I am on 32-bit and you are on 64-bit) is that in the ODBC DSN settings I left the Sync.Mode setting at its default value of NORMAL, whereas yours appears to be set to OFF.
Try setting your Sync.Mode to NORMAL and see if that makes a difference.
Edit re: comments
The solution in this case was the following:
One possible workaround would be to create a new SQLite table with all the same columns plus a new INTEGER PRIMARY KEY column, which Access will "see" as AutoNumber. You can create a unique index on (what are currently) the first four columns to ensure that they remain unique, but the new "identity" (ROWID) column is what Access would use to identify rows for CRUD operations.
I had this problem too. I have a table with a primary key on a VARCHAR(30) (TEXT) field.
Adding an INTEGER PRIMARY KEY column didn't help at all. After lots of testing I found the issue was with a DATETIME field I had in the table. I removed the DATETIME field and I was able to update record values in MS-Access datasheet view.
So now, for any DATETIME fields I need in SQLite, I declare them as VARCHAR(19) so they come into Access via ODBC as text. Not perfect, but it works. (And of course SQLite doesn't have a real DATETIME field type anyway, so TEXT is just fine and will convert OK.)
I confirmed it's a number conversion issue. With an empty DATETIME field, I can add a time of 01-01-2014 12:01:02 via Access's datasheet view, if I then look at the value in SQLite the seconds have been rounded off:
sqlite> SELECT three from TEST where FLoc='1020';
2014-01-01 12:01:00.000
Sync.Mode should also be NORMAL, not OFF.
Update:
If you have any text fields with a defined length (e.g. foo VARCHAR(10)) and the field contents contain more characters than the field definition allows (which SQLite permits), MS Access will also barf when trying to update any of the fields on that row.
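A minimal sketch of the VARCHAR(19)-as-text workaround using Python's sqlite3 (table and column names are hypothetical): because the timestamp is stored as 19-character text, it round-trips exactly, seconds included.

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (stamp VARCHAR(19))")  # date/time stored as text

ts = datetime(2014, 1, 1, 12, 1, 2)
stored = ts.strftime("%Y-%m-%d %H:%M:%S")  # exactly 19 characters
assert len(stored) == 19

conn.execute("INSERT INTO log VALUES (?)", (stored,))
fetched = conn.execute("SELECT stamp FROM log").fetchone()[0]

# The seconds survive intact, unlike the rounded DATETIME in the question
assert fetched == "2014-01-01 12:01:02"
```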
I've searched all similar posts as I had a similar issue with SQLite linked via ODBC to Access. I had three tables, two of them allowed edits, but the third didn't. The third one had a DATETIME field and when I changed the data type to a TEXT field in the original SQLite database and relinked to access, I could edit the table. So for me it was confirmed as an issue with the DATETIME field.
After running into this problem, not finding a satisfactory answer, and wasting a lot of time trying other solutions, I eventually discovered that what others have mentioned about DATETIME fields is accurate but another solution exists that lets you keep the proper data type. The SQLite ODBC driver can convert Julian day values into the ODBC SQL_TIMESTAMP / SQL_TYPE_TIMESTAMP types by looking for floating point values in the column, if you have that option enabled in the driver. Storing dates in this manner gives the ODBC timestamp value enough precision to avoid the write conflict error, as well as letting Access see the column as a date/time field.
Even storing sub-second precision in the date string doesn't work, which is possibly a bug in the driver because the resulting TIMESTAMP_STRUCT contains the same values, but the fractional seconds must be lost elsewhere.
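For reference, the Julian day representation the driver can detect is just a floating-point day count: the Unix epoch, 1970-01-01 00:00:00 UTC, is Julian day 2440587.5, so the conversion is one line each way. A sketch, assuming the driver's Julian-day option is enabled (note that a double at modern dates resolves to a few tens of microseconds, so the comparison uses a tolerance):

```python
from datetime import datetime, timezone

UNIX_EPOCH_JD = 2440587.5  # Julian day of 1970-01-01 00:00:00 UTC

def to_julian_day(dt: datetime) -> float:
    """Convert an aware datetime to a Julian day float."""
    return dt.timestamp() / 86400.0 + UNIX_EPOCH_JD

def from_julian_day(jd: float) -> datetime:
    """Convert a Julian day float back to a UTC datetime."""
    return datetime.fromtimestamp((jd - UNIX_EPOCH_JD) * 86400.0, tz=timezone.utc)

dt = datetime(2014, 1, 1, 12, 1, 2, tzinfo=timezone.utc)
jd = to_julian_day(dt)
# Seconds survive the round trip to within floating-point resolution
assert abs((from_julian_day(jd) - dt).total_seconds()) < 0.001
```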
I am using VBScript, ADO, and the SQLite ODBC driver to store and retrieve large strings (~5KB). Storing them works fine, maybe because I am able to specify a size when I bind the parameters of the insert statement. When I try to retrieve those strings, however, I correctly get the first 256 (or 255) characters, but the rest seems to come from a random memory area. What am I doing wrong (besides using VBScript and ADO...)?
I'm open to the idea of storing the text as binary data. But the functions I tried, to retrieve it later, didn't work.
GetChunk will not work on every record field, as noted on MSDN; the adFldLong field attribute states whether GetChunk can be used on that field.
For some fields you must use a SQL query to retrieve the length of the data instead of using the ActualSize attribute.
There is a good example here: http://kek.ksu.ru/eos/ecommerce/masteringasp/18-06.html
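To confirm that the 5KB strings survive storage and that the truncation happens on the ADO side rather than in SQLite, a quick check with Python's sqlite3 (table name hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the on-disk DB file
conn.execute("CREATE TABLE docs (body TEXT)")

big = "x" * 5000  # ~5KB string, comparable to the question's data
conn.execute("INSERT INTO docs VALUES (?)", (big,))

fetched = conn.execute("SELECT body FROM docs").fetchone()[0]
# SQLite itself imposes no 255/256-character limit on TEXT values,
# so the truncation described in the question occurs in the ADO/ODBC layer
assert len(fetched) == 5000
assert fetched == big
```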