Snowflake ODBC driver does not recognize TIMESTAMP_TZ

(1) Describing a table through ODBC returns a TIMESTAMP_TZ column in Snowflake as sqltype = 93 (SQL_TYPE_TIMESTAMP).
The driver returns exactly the same attributes for a TIMESTAMP_TZ column as for a TIMESTAMP_NTZ column.
SELECT get_ddl('TABLE', 'TS_TEST');
create or replace TABLE TS_TEST (
    TS TIMESTAMP_TZ(9),
    ID NUMBER(38,0)
);
SELECT column_name, data_type, datetime_precision
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_schema = 'PUBLIC'
and table_name = 'TS_TEST'
and column_name = 'TS';
COLUMN_NAME  DATA_TYPE     DATETIME_PRECISION
-----------  ------------  ------------------
TS           TIMESTAMP_TZ  9
sqlstmt = 0x000000a220befd60 L"SELECT * FROM TS_TEST LIMIT 1"
rc = SQLDescribeColW(*cursor_ptr,
                     column_index,
                     (SE_WCHAR FAR *) column_name,
                     (SNOW_MAX_IDENTIFIER_LEN * sizeof(SE_WCHAR)), /* BufferLength */
                     &name_length,
                     &sqltype,
                     (SQLULEN *) &precision_size,
                     &scale,
                     &nulls);
column_name = 0x000000a220bef670 L"TS"
name_length = 2
sqltype = 93 // #define SQL_TYPE_TIMESTAMP 93; // C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um\sql.h
precision_size = 29 // #define SQL_SF_TIMESTAMP_COLUMN_SIZE 29; C:\Program Files\Snowflake ODBC Driver\include\sf_odbc.h
scale = 9
nulls = 1
(2) The Snowflake ODBC driver documentation is very sparse regarding TIMESTAMP_TZ.
There are no examples of binding input or output for TIMESTAMP_TZ with ODBC.
What is the data structure provided by Snowflake (Simba) ODBC to bind input/output
to a TIMESTAMP_TZ column when the value includes time zone offset information?
Where is the structure defined?
For example:
MS SqlServer defines SQL_SS_TIMESTAMPOFFSET_STRUCT for binding a DATETIMEOFFSET column in
C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\sqltypes.h
typedef struct tagSS_TIMESTAMPOFFSET_STRUCT {
    SQLSMALLINT  year;
    SQLUSMALLINT month;
    SQLUSMALLINT day;
    SQLUSMALLINT hour;
    SQLUSMALLINT minute;
    SQLUSMALLINT second;
    SQLUINTEGER  fraction;
    SQLSMALLINT  timezone_hour;
    SQLSMALLINT  timezone_minute;
} SQL_SS_TIMESTAMPOFFSET_STRUCT;
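For comparison, the Microsoft documentation has DATETIMEOFFSET columns bound as SQL_C_BINARY into that struct. A minimal sketch against the SQL Server driver (not Snowflake); hstmt and the column number are assumed:
/* SQL Server comparison: per the Microsoft documentation, a DATETIMEOFFSET
 * column is bound as SQL_C_BINARY into SQL_SS_TIMESTAMPOFFSET_STRUCT. */
SQL_SS_TIMESTAMPOFFSET_STRUCT dto;
SQLLEN dto_ind = 0;

rc = SQLBindCol(hstmt,        /* statement handle (assumed) */
                1,            /* column number of the DATETIMEOFFSET column */
                SQL_C_BINARY, /* binary binding fills the struct directly */
                &dto,
                sizeof(dto),
                &dto_ind);
/* After SQLFetch, dto.timezone_hour and dto.timezone_minute carry the offset. */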
Are we expected to bind TIMESTAMP_TZ columns as binary (SQL_C_BINARY) or as a string (SQL_C_WCHAR)?
That workaround should only be applicable to ODBC 3.5 and should not be required with ODBC 3.8.
Even that is not feasible currently, because the function SQLDescribeColW() in the Snowflake ODBC driver
describes TIMESTAMP_TZ columns as SQL_TYPE_TIMESTAMP, i.e. the identical type code as for a TIMESTAMP_NTZ column.
Therefore, an ODBC application has no way to distinguish between TIMESTAMP_TZ and TIMESTAMP_NTZ columns.
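For what it's worth, binding the column as text is mechanically straightforward; whether the Snowflake driver includes the offset in the returned string is exactly what is unverified. A minimal sketch, assuming an executed statement handle hstmt where column 1 is the TS column:
/* Sketch: retrieve the TIMESTAMP_TZ column as a wide string. Whether the
 * returned text carries the time zone offset (e.g. "-08:00") is unverified. */
SQLWCHAR ts_text[64]; /* 64 is an arbitrary size; TIMESTAMP_TZ(9) text should fit */
SQLLEN ts_ind = 0;

rc = SQLBindCol(hstmt,
                1,               /* column number of TS */
                SQL_C_WCHAR,     /* retrieve as wide string */
                ts_text,
                sizeof(ts_text), /* BufferLength in bytes */
                &ts_ind);
if (SQL_SUCCEEDED(rc))
    rc = SQLFetch(hstmt);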
(3) The following topic in the Snowflake ODBC documentation alludes to custom SQL Data Types, but
does NOT provide an example of binding a TIMESTAMP_TZ value, nor an appropriate data structure:
https://docs.snowflake.com/en/user-guide/odbc-api.html
"Some SQL data types supported by Snowflake have no direct mapping in
ODBC (e.g. TIMESTAMP_*tz, VARIANT). To enable the ODBC driver to
work with the unsupported data types, the header file shipped with the
driver includes definitions for the following custom data types:"
////////////////////////////////////////////////////////////////////////////////////////////////////
/// Custom SQL Data Type Definition
////////////////////////////////////////////////////////////////////////////////////////////////////
#define SQL_SF_TIMESTAMP_LTZ 2000
#define SQL_SF_TIMESTAMP_TZ 2001
#define SQL_SF_TIMESTAMP_NTZ 2002
#define SQL_SF_ARRAY 2003
#define SQL_SF_OBJECT 2004
#define SQL_SF_VARIANT 2005
Refer to the topic "C Data Type Extensibility" in the ODBC documentation
https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/c-data-types-in-odbc?redirectedfrom=MSDN&view=sql-server-ver15
"In ODBC 3.8, you can specify driver-specific C data types. This
enables you to bind a SQL type as a driver-specific C type in ODBC
applications when you call SQLBindCol, SQLGetData, or
SQLBindParameter. This can be useful for supporting new server
types, because existing C data types might not correctly represent
the new server data types. Using driver-specific C types can increase
the number of conversions that drivers can perform."
https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/driver-specific-data-types-descriptor-information-diagnostic?view=sql-server-2017
Note: "Driver-specific data types, descriptor fields, diagnostic
fields, information types, statement attributes,
and connection attributes must be described in the driver documentation. When any of these values is passed
to an ODBC function, the driver must check whether the value is valid. Drivers return
SQLSTATE HYC00 (Optional feature not implemented) for driver-specific values that apply to other drivers."
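To be clear about what is missing: sf_odbc.h, as shipped, defines the custom SQL type codes quoted above, but, as far as I can tell, no corresponding driver-specific C type or struct. If Snowflake exposed one through the ODBC 3.8 mechanism, usage would presumably look like the sketch below; the struct layout and the constant SQL_C_SF_TIMESTAMP_TZ are invented for illustration and do not exist in the driver today.
/* HYPOTHETICAL: neither this struct nor SQL_C_SF_TIMESTAMP_TZ exists in
 * sf_odbc.h; this only illustrates the ODBC 3.8 driver-specific C data
 * type mechanism described in the Microsoft documentation. */
typedef struct tagSF_TIMESTAMP_TZ_STRUCT {
    SQLSMALLINT  year;
    SQLUSMALLINT month;
    SQLUSMALLINT day;
    SQLUSMALLINT hour;
    SQLUSMALLINT minute;
    SQLUSMALLINT second;
    SQLUINTEGER  fraction;        /* nanoseconds */
    SQLSMALLINT  timezone_hour;   /* offset hours from UTC */
    SQLSMALLINT  timezone_minute; /* offset minutes from UTC */
} SF_TIMESTAMP_TZ_STRUCT;

#define SQL_C_SF_TIMESTAMP_TZ 0x4001 /* invented driver-specific C type code */

SF_TIMESTAMP_TZ_STRUCT ts_tz;
SQLLEN ts_tz_ind = 0;

/* With a real driver-specific C type, the bind would be direct: */
rc = SQLBindCol(hstmt, 1, SQL_C_SF_TIMESTAMP_TZ,
                &ts_tz, sizeof(ts_tz), &ts_tz_ind);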
(4) Is there any registry key, or key to set in odbc.ini, or another attribute to enable
on the ODBC connection handle, that controls behavior pertaining to Snowflake custom data types?
I'm specifically interested in TIMESTAMP_TZ, TIMESTAMP_NTZ, TIMESTAMP_LTZ.
I tried configuring the parameter ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE
in accordance with the following topic in the Snowflake ODBC documentation:
https://docs.snowflake.com/en/user-guide/odbc-parameters.html
Additional Connection Parameters
ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE
"This boolean parameter affects the column size (in characters)
returned for SQL_TYPE_TIMESTAMP. When this parameter is set to true,
the driver returns 29, following the ODBC standard. When this
parameter is set to false, the driver returns 35, which allows room
for the timezone offset (e.g. “-08:00”).
This value can be set via not only the odbc.ini file (Linux or macOS)
or the Microsoft Windows registry, but also the connection string."
However, none of the following has any impact on the behavior.
A) Setting the registry value ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE under the DSN name:
   - as a string value, to FALSE/TRUE
   - as a DWORD (32-bit) value, to 0/1
B) Appending to the ODBC connection string (DSN-based or DSN-less; see the sketch after this list):
   - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=FALSE"
   - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=TRUE"
   - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=0"
   - "ODBC_USE_STANDARD_TIMESTAMP_COLUMNSIZE=1"
NOTE: The parameter made no difference to the behavior in any of these forms.
SQLDescribeColW always returns exactly the same attributes
for both TIMESTAMP_TZ and TIMESTAMP_NTZ columns:
sqltype = 93         // #define SQL_TYPE_TIMESTAMP 93
                     // C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um\sql.h
precision_size = 29  // #define SQL_SF_TIMESTAMP_COLUMN_SIZE 29
                     // C:\Program Files\Snowflake ODBC Driver\include\sf_odbc.h
scale = 9
nulls = 1
One would expect TIMESTAMP_TZ columns to be described back as the type defined in
C:\Program Files\Snowflake ODBC Driver\include\sf_odbc.h, namely SQL_SF_TIMESTAMP_TZ (2001),
and TIMESTAMP_NTZ columns to be described back as SQL_SF_TIMESTAMP_NTZ (2002).
(5) NOTE: The installed version of SnowflakeDSII.dll in C:\Program Files\Snowflake ODBC Driver is 2.22.4.0.
NOTE: Since the time of the original post, the Snowflake ODBC driver has been upgraded to the latest version, 2.24.5.0, without any change in behavior.
/* In the connection, the target ODBC version is set to ODBC 3.8 */
rc = SQLSetEnvAttr (connection->henv,
SQL_ATTR_ODBC_VERSION,
(void *)SQL_OV_ODBC3_80,
0);
rc = SQLGetInfoW (connection->hdbc, SQL_DRIVER_ODBC_VER,
&odbc_ver, SE_MAX_MESSAGE_LENGTH, NULL);
odbc_ver = 0x00000000022cae40 L"03.80"
(6) The parameter CLIENT_TIMESTAMP_TYPE_MAPPING is not set to anything.
In any case, it only pertains to TIMESTAMP_LTZ and TIMESTAMP_NTZ;
I'm interested specifically in binding TIMESTAMP_TZ.
https://docs.snowflake.com/en/sql-reference/parameters.html#client-timestamp-type-mapping
The parameter TIMESTAMP_TYPE_MAPPING is set to its default value. In any case, it only specifies
which TIMESTAMP_* variation the TIMESTAMP data type alias maps to, and the test scenario
explicitly creates a TIMESTAMP_TZ column rather than using the alias.
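For anyone reproducing this, the effective session values can be confirmed over the same connection. A minimal sketch; SHOW PARAMETERS is standard Snowflake SQL, and hstmt is assumed to be an allocated statement handle:
/* Verify the effective session-level timestamp parameters over ODBC. */
SQLWCHAR show_stmt[] =
    L"SHOW PARAMETERS LIKE 'TIMESTAMP_TYPE_MAPPING' IN SESSION";
rc = SQLExecDirectW(hstmt, show_stmt, SQL_NTS);
/* Fetch the result rows; the "value" column holds the effective setting. */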

I get the correct Snowflake data types for all three timestamp variants by using:
SQLLEN nDataType = SQL_UNKNOWN_TYPE;
rc = ::SQLColAttribute(hstmt,
                       nCol,
                       SQL_DESC_CONCISE_TYPE,
                       NULL,
                       0,
                       NULL,
                       &nDataType);
There seems to be no data-type-specific structure for SQL_SF_TIMESTAMP_TZ that provides the time zone stored for a record. I am not sure whether the Snowflake driver would return the time zone if you were to bind SQL_SF_TIMESTAMP_TZ data as regular text, but it may be worth trying.
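Building on that, a sketch of how an application could branch on the concise type to tell the timestamp variants apart; the type codes are as defined in sf_odbc.h:
/* Type codes from C:\Program Files\Snowflake ODBC Driver\include\sf_odbc.h */
#define SQL_SF_TIMESTAMP_LTZ 2000
#define SQL_SF_TIMESTAMP_TZ  2001
#define SQL_SF_TIMESTAMP_NTZ 2002

SQLLEN nDataType = SQL_UNKNOWN_TYPE;
SQLRETURN rc = SQLColAttribute(hstmt, nCol, SQL_DESC_CONCISE_TYPE,
                               NULL, 0, NULL, &nDataType);
if (SQL_SUCCEEDED(rc)) {
    switch (nDataType) {
    case SQL_SF_TIMESTAMP_TZ:
        /* Time-zone-aware column: e.g. fall back to SQL_C_WCHAR and
         * parse the offset out of the text (unverified, as noted above). */
        break;
    case SQL_SF_TIMESTAMP_NTZ:
    case SQL_SF_TIMESTAMP_LTZ:
        /* No per-value offset: SQL_C_TYPE_TIMESTAMP binding suffices. */
        break;
    default:
        break;
    }
}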

Related

sqlite ODBC driver and "attach" statement

I want to use the sqlite ODBC driver from http://www.ch-werner.de/sqliteodbc/ and start with an ATTACH statement, as I need to join data from two databases. However, the driver returns no data when provided the following SQL:
attach 'my.db' as mydb; select 1
It does, however, correctly complain with "only one SQL statement allowed" when the first statement is indeed a SELECT:
select 2;attach 'my.db' as mydb; select 1
Checking the source, a checkddl() function analyzes whether the provided request contains a DDL (Data Definition Language) statement. Before digging into the complete code, the question is:
did anyone manage to issue a SELECT after an ATTACH with this driver?

tbl with in_schema returns "Invalid object name" error

After a connection to the SQL server, the databases inside it can be listed.
con = dbConnect(odbc(),
Driver = "ODBC Driver 17 for SQL Server",
Server = "xxxxxxxxxxxx",
UID = "xxxxxxxxxxxx",
PWD = "xxxxxxxxxxxx",
Port = xxxxxxxxxxxx)
The connection is successful.
After, I just would like to list the databases within this SQL server
databases = dbGetQuery(con, "SELECT name FROM master..sysdatabases")
Since I am not familiar with SQL, it is a little bit strange for me to see that there is an already-assigned database, DB01CWE5462, within con. This database also appears in the result of dbGetQuery. I guess that this database is automatically assigned to con.
However, I would like to access a table that lives in a different database, DB01WEA84103. The code below was successful before (one month ago), but now it returns an error.
tbl(con, in_schema("DB01WEA84103.dbo","Ad10Min1_Average"))
Error: nanodbc/nanodbc.cpp:1655: 42000: [Microsoft][ODBC Driver 17 for
SQL Server][SQL Server]Invalid object name
'DB01WEA84103.dbo.Ad10Min1_Average'. [Microsoft][ODBC Driver 17 for
SQL Server][SQL Server]Statement(s) could not be prepared.
'SELECT *
FROM "DB01WEA84103.dbo"."Ad10Min1_Average" AS "q13"
WHERE (0 = 1)'
After a little searching, I found a workaround, but it is quite slow compared with the previous successful runs of the code above.
dbReadTable(con, 'Ad10Min1_Average', schema='DB01WEA84103.dbo')
So, what is the thing that I am missing? What should I do for the con and in_schema code which produces an error to work again?
The difference in speed is because tbl(con, ...) is creating an access point to a remote table, while dbReadTable(con, ...) is reading/copying the table from SQL into R.
The approach you were using has been the standard work-around for specifying both database and schema. I would guess there has been an update to the dbplyr package that means this work-around now requires an additional step.
Taking a close look at the SQL from the error message reveals the cause:
SELECT * FROM "DB01WEA84103.dbo"."Ad10Min1_Average"
Note the double quotes around "DB01WEA84103.dbo". The double quotes tell SQL to treat this as a single object: a schema with name DB01WEA84103.dbo, instead of two objects: a database with name DB01WEA84103 and a schema with name dbo.
Ideally this query would read:
SELECT * FROM "DB01WEA84103"."dbo"."Ad10Min1_Average"
Now the full stop is not included in the double quotes.
Reading the dbplyr documentation for in_schema, it specifies that the names of schema and table "... will be automatically quoted; use sql() to pass a raw name that won’t get quoted."
Hence I recommend you try:
tbl(con, in_schema(sql("DB01WEA84103.dbo"),"Ad10Min1_Average"))
Notes:
Double quotes in SQL are used to indicate a single object, ignoring special characters. Square brackets are often used in SQL for the same purpose.
Whether you use single or double quotes in R does not affect whether or not the SQL code will contain double quotes. This is controlled by dbplyr's translation methods.
If your database or schema name contains special characters, try enclosing it in square brackets instead: for example, [my!odd#database#name].[my%unusual&schema*name].

Is there any way to derive parameters of a stored procedure in .net core?

SqlCommandBuilder.DeriveParameters is not available in .net core (even in .NETCoreApp 2.0), so is there any way to retrieve the parameter list of a stored procedure in .net core?
You could always just inspect the SQL Server catalog views to get this information - for example, with this query:
SELECT
pr.name, -- Procedure name
pa.name, -- Parameter name
pa.parameter_id,
ty.name, -- Parameter datatype name
pa.max_length, -- Max length (of string parameters)
pa.precision, -- Precision (total num of digits) for numeric parameters
pa.scale, -- Scale (num of digits after decimal point) for numeric parameters
pa.is_output,
pa.has_default_value,
pa.is_readonly,
pa.is_nullable
FROM
sys.procedures pr
INNER JOIN
sys.parameters pa ON pa.object_id = pr.object_id
INNER JOIN
sys.types ty ON ty.user_type_id = pa.user_type_id
This can be extended; there are quite a few more pieces of information available if you're interested.
See the SQL Server catalog views documentation for a great deal more information and detail about these indispensable views.

Getting datatypes from Paradox DB over ODBC into SQLite [Delphi]

I'm connecting to a .dbf file using ODBC in Delphi with FireDAC. I've set up an ODBC connection, dBase 5.0, using the 32-bit "Driver do Microsoft dBase (.dbf)" driver.
In my IDE (RAD Studio 10.1 Berlin), I've set up the ODBC connection as a data source. The ODBCAdvanced connection string is DefaultDir=%s;DriverId=533;MaxBufferSize=2048;PageTimeout=5, where %s is the correct directory.
I managed to copy a table's structure to a SQLite db using TFields (code roughly as follows).
FieldNames := TStringList.Create;
PDOXTable.GetFieldNames(FieldNames);
FieldNames.Delimiter := ';';
FieldList := TList<TField>.Create;
PDOXTable.GetFieldList(FieldList, FieldNames.DelimitedText);
TempTable := TFDTable.Create(nil);
TempTable.Connection := TempConn;
TempTable.TableName := DataTable.TableName;
for I := 0 to FieldList.Count - 1 do TempTable.Fields.Add(FieldList.Items[I]);
TempTable.CreateTable(true, [tpTable, tpTriggers, tpIndexes]);
However, the data types come out different, and I don't get primary keys, NOT NULL constraints, or 'dflt_value' defaults, all of which I got when I manually exported these same tables using an application called Exportizer (http://www.vlsoftware.net/exportizer/). Although Exportizer has a command-line client, I'm not sure I'll be able to bundle it with my application.
What is a reasonable way of copying a table from a paradox .dbf to a SQLite while saving as much of the datatypes and parameters as possible?
Use TFDBatchMove. SQLite is typeless, but FireDAC has its own pseudo data type mapping, with which you may be able to preserve much of the original data types. And if the resulting types are not exactly to your liking, you can define custom Mappings.

Oracle DB vs Mariadb

I have to find out how to implement in MariaDB some features I use in Oracle. I have:
Loading a file: in Oracle I use an external table. Is there a fast and efficient way to load a file into a table? Does MariaDB have a plugin that handles loading specific file formats well?
In my existing Oracle code I developed Java wrapper functions that provide these features (is there a way to do this in MariaDB?), specifically:
1. searching for files in an OS directory and inserting them into a table,
2. sending an SNMP trap,
3. sending mail via SMTP.
Is there an equivalent to an Oracle job in MariaDB?
Is there an equivalent to Oracle TDE (Transparent Data Encryption)?
Is there an equivalent to VPD (Virtual Private Database)?
What is the maximum length of a VARCHAR column/variable? (In Oracle we can use CLOBs.)
Many thanks and best regards.
MariaDB (and MySQL) can do a LOAD DATA INFILE on a CSV file. It is probably the most efficient way to get external data into a table. (There is also ENGINE=CSV, which requires no conversion, but is limited in that it has no indexes, etc.)
MariaDB cannot, for security reasons, issue arbitrary system calls. No emails, no exec, etc.
There is no equivalent of Oracle jobs, TDE, or VPD.
Network transmissions can (optionally) use SSL for encryption at that level.
There is a family of virtually identical data types for characters:
CHAR(n), VARCHAR(n) -- where n is up to 65535; n is the limit of _characters_, not _bytes_.
TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT -- of various limits; the last is limited to 4GB.
For non-character storage (e.g., images), there is a similar set of data types:
BINARY(n), VARBINARY(n)
TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB
The various sizes of TEXT and BLOB indicate whether that is a 1-, 2-, 3-, or 4-byte length field in the implementation.
NVARCHAR is a synonym for VARCHAR. Character sets are handled by declaring a column to be, for example, CHARACTER SET utf8 COLLATE utf8_unicode_ci. Such can be defaulted at the database (schema) level, defaulted at the table level, or specified differently for different columns (even in the same table).
