I have to find out how to implement in MariaDB some features I use in Oracle. I have:
Load a file: in Oracle I use an external table. Is there a (fast and efficient) way to load a file into a table? Does MariaDB have a plugin that can load specific file formats well?
In my existing Oracle code I developed Java wrapper functions for these features (is there a way to do this in MariaDB?), specifically:
1- Search for files in an OS directory and insert them into a table,
2- Send an SNMP trap,
3- Send a mail via SMTP.
Is there an equivalent to an Oracle job in MariaDB?
Is there an equivalent to Oracle TDE (Transparent Data Encryption)?
Is there an equivalent to VPD (Virtual Private Database)?
What is the maximum length of a VARCHAR column/variable? (In Oracle we can use CLOBs.)
Many Thanks and Best Regards
MariaDB (and MySQL) can do a LOAD DATA on a CSV file. It is probably the most efficient way to convert external data to a table. (There is also ENGINE=CSV, which requires no conversion, but is limited in that it has no indexes, etc.)
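For example, a minimal sketch of LOAD DATA (the file path, table name, and CSV layout here are assumptions):

LOAD DATA INFILE '/tmp/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row, if the file has one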
MariaDB cannot, for security reasons, issue any arbitrary system calls. No emails, no 'exec', etc.
No Job, TDE, VPD.
Network transmissions can (optionally) use SSL for encryption at that level.
There is a family of virtually identical datatypes for characters:
CHAR(n), VARCHAR(n) -- where n is up to 255 for CHAR and up to 65535 for VARCHAR; n is a limit in _characters_, not _bytes_.
TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT -- with limits of 255 bytes, 64KB, 16MB, and 4GB respectively.
For non-character storage (eg, images), there is a similar set of datatypes
BINARY(n), VARBINARY(n)
TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB
The various sizes of TEXT and BLOB indicate whether that is a 1-, 2-, 3-, or 4-byte length field in the implementation.
NVARCHAR is a synonym for VARCHAR. Character sets are handled by declaring a column to be, for example, CHARACTER SET utf8 COLLATE utf8_unicode_ci. Such a setting can be defaulted at the database (schema) level, defaulted at the table level, or specified per column (even within the same table).
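For example (the table and column names are just for illustration):

CREATE TABLE t (
  name VARCHAR(100) CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  note TEXT CHARACTER SET latin1   -- a different character set in the same table
) DEFAULT CHARACTER SET utf8;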
I'm developing a Rust application for user registration via SSH (like the one working for SDF).
I'm using the SQLite3 database as a backend to store the information about users.
I'm opening the database file (or creating it if it does not exist) but I don't know the approach for checking if the necessary tables with expected structure are present in the database.
I tried to use PRAGMA schema_version for versioning purposes, but this approach is unreliable.
I found that there are posts with answers that are heavily related to my question:
How to list the tables in a SQLite database file that was opened with ATTACH?
How do I retrieve all the tables from database? (Android, SQLite)
How do I check in SQLite whether a table exists?
"I'm opening the database file (or creating it if it does not exist) but I don't know the approach for checking if the necessary tables"
You can query sqlite_master to check for tables, indexes, triggers and views, and use PRAGMA table_info(the_table_name) to check for columns.
e.g. the following would let you get the core information and then process it with relative ease (shown just for tables):-
SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name LIKE 'my%';
"with expected structure"
PRAGMA table_info(mytable);
The first returns, for each matching table, its name and the SQL used to create it. The second returns one row per column of mytable, with the columns cid, name, type, notnull, dflt_value and pk.
Note that in the example type is blank/null for all columns because the SQL that created the table doesn't specify column types.
If you are using SQLite 3.16.0 or greater you can use PRAGMA functions (e.g. pragma_table_info(table_name)) rather than the two-step approach needed prior to 3.16.0.
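For example, a sketch that combines both lookups in one query (assumes 3.16.0+; the LIKE pattern is illustrative):

SELECT m.name AS table_name, p.name AS column_name, p.type
FROM sqlite_master AS m
JOIN pragma_table_info(m.name) AS p
WHERE m.type = 'table' AND m.name LIKE 'my%';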
I am having problems inserting values into SQL Server columns of type decimal(38, 20) from BizTalk 2013 using the WCF-SQL adapter. I get an InvalidCastException with the message: "System.InvalidCastException: Specified cast is not valid"
If I test with column type decimal(18,18) it works.
It seems the WCF-SQL adapter does not handle decimals with very high precision. What exactly is the limitation? And is there a workaround?
When I generate the XSD from the database table information, decimal(38,20) turns into xs:string with a length restriction of 40. Maybe this is a sign that the WCF-SQL adapter cannot handle such precision? I have also tried altering the XSD to xs:decimal, but it makes no difference.
Anyone?
ADDITION:
Did not find any "good" way to handle this limitation.
Final setup is: XML => WCF-SQL adapter => stored procedure with a table type parameter containing varchar(40) columns => CAST the table variable columns to decimal(38,20) one by one => INSERT into the destination table.
So the solution was to modify the table type to accept varchar, and manually convert in the stored procedure, as sketched below.
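A minimal sketch of that stored procedure (all object names here are made up; dbo.Destination stands for the real target table):

CREATE TYPE dbo.DecimalRows AS TABLE (Amount varchar(40));
GO
CREATE PROCEDURE dbo.InsertDecimals @rows dbo.DecimalRows READONLY
AS
BEGIN
    -- convert the textual values to the real column type on the way in
    INSERT INTO dbo.Destination (Amount)
    SELECT CAST(Amount AS decimal(38, 20)) FROM @rows;
END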
Would be happy if someone could explain a better solution!
Decimal precision is limited by the .NET Framework's System.Decimal type, which holds at most 28-29 significant digits.
This is also described in the BizTalk documentation: "Decimal if precision <= 28. String if precision > 28".
So your way of handling it with strings is an option. Alternatively, use the Round functoid in your map to the SQL schema if you don't really need more than 28 significant digits.
Another option you could consider is changing the regional settings for the BizTalk host user running the send port: the current setting of the decimal separator may be a comma instead of a dot (or the other way around), which does not match the SQL Server data type. For this option you keep the type as string in your schema and keep it decimal in your SQL Server table.
I have a normal string, athakur#test.com. It is stored encrypted in an Oracle DB with some encryption key. The algorithm used is not available in DB2, and I want the same data in DB2.
I am not able to transfer the data directly by copy-paste, as the encrypted characters do not survive the trip: pasting from SQL Developer into Data Studio gives different characters. So I am trying to convert the encrypted data to hex in Oracle and then convert the hex back to data in DB2, but that does not seem to work.
The encrypted data in hex, using rawtohex, is 1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306, but in DB2 when I do
select x'1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306' from dual;
I am getting null.
Any idea what am I missing or any other way to replicate data?
What version and platform of DB2?
Your statement should work, assuming you're on a version that has Oracle's dual table instead of the sysibm.sysdummy1 equivalent.
It does work for me, though the display value is unreadable of course. I suspect you really want
select hex(x'1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306')
from dual;
You can't display the encrypted value directly, as it isn't made up of valid displayable characters. The best you can do is:
insert into mytbl
  values (x'1E70A8495CEC19EEBDBA7A652344C850B1266E74247A9306');
select hex(myfld)
from mytbl;
Make sure that you define myfld as CHAR(24) FOR BIT DATA (the 48 hex digits make up 24 bytes, and FOR BIT DATA keeps DB2 from applying codepage conversion).
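For completeness, a sketch of that table definition (names as used above):

create table mytbl (myfld char(24) for bit data);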
I don't know the difference between the SQLite NVARCHAR and NVARCHAR2 column types.
I know that NVARCHAR is a Unicode-only text column, but what about NVARCHAR2?
There is a difference. In a way...
HereĀ“s the thing:
As Lasse V. Karlsen says, SQLite does not act on the types you mentioned nor does it restrict the length by an argument passed in like in NVARCHAR(24) (but you could do check constraints to restrict length).
So why are these available in SQLite Expert (and other tools)?
This info will be saved in the database schema (see https://www.sqlite.org/datatype3.html#affinity and http://www.sqlite.org/pragma.html#pragma_table_info). So should you bother to set these when creating a SQLite db, if SQLite will not use them?
Yes, if you will be using any tool to generate code from the schema! Maybe somebody will ask you to transfer the db to MSSQL; there are some great tools that will use the schema and map your SQLite types to MSSQL in a blink. Or maybe you will use some .NET tool to map the tables into POCO classes; these can also use the schema to map to the correct type, and they will turn the restrictions into data annotations on the properties the columns map to. And Entity Framework 7 will have built-in support for SQLite, and its code generation will surely make use of the schema.
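For example (a sketch; the table and column names are made up):

CREATE TABLE users (
  name NVARCHAR(24) CHECK (length(name) <= 24)  -- the CHECK, not the (24), enforces the length
);
SELECT sql FROM sqlite_master WHERE name = 'users';  -- the declared type is kept verbatim
PRAGMA table_info(users);                            -- the type column reports NVARCHAR(24)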
There is no difference.
SQLite does not operate with strict data types like that; it has "storage classes" and type affinities.
If you check the official documentation you'll find this rule, one of five used to determine which type affinity to assign to a column based on the data type you specify:
If the declared type of the column contains any of the strings "CHAR", "CLOB", or "TEXT" then that column has TEXT affinity. Notice that the type VARCHAR contains the string "CHAR" and is thus assigned TEXT affinity.
There are 5 rules in total, but rule 2 covers both NVARCHAR and NVARCHAR2, and both give the column TEXT affinity.
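A quick way to see this in action (illustrative table; the inserted values deliberately exceed the declared length):

CREATE TABLE demo (a NVARCHAR(10), b NVARCHAR2(10));
INSERT INTO demo VALUES (12345678901234, 12345678901234);  -- no error: the length is not enforced
SELECT typeof(a), typeof(b) FROM demo;  -- both report 'text' (the values were coerced by TEXT affinity)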
Is there a way to add "additional info" to a sqlite database? Something like the date of creation of the database, the number of entries, or the name of the user who created it. I don't want to create special tables to store all this info, especially since there will only be one value of each type.
Thank you in advance.
Why not use one special table and store each special value as a name-value pair?
CREATE TABLE SpecialInfoKeyValues (
Key VARCHAR UNIQUE COLLATE NOCASE,
Value
);
Since SQLite uses "manifest typing," you can store any kind of value you want in there.
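For example (the keys and values here are only illustrations):

INSERT INTO SpecialInfoKeyValues VALUES ('created', datetime('now'));
INSERT INTO SpecialInfoKeyValues VALUES ('creator', 'some user name');
SELECT Value FROM SpecialInfoKeyValues WHERE Key = 'created';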
In short, no. SQLite has no concept of users, and doesn't store creation metadata.
No, there is no way to do that, you will have to use a "special" table to carry data within the file, or you will have to use external means.
There are, however, two version counters stored within the database itself: the schema_version and the user_version (see Pragmas to query/modify version values for details.) Perhaps you could abuse those. Please keep in mind, though, that by default the sqlite3 shell application does not store those when you use the .dump command to dump the database into a textual representation.
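For example, using user_version (the value is whatever your application chooses):

PRAGMA user_version = 1;  -- stamp the file when you create your schema
PRAGMA user_version;      -- read the stamp back later; returns 1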