OLE DB provider for linked server returned data that does not match expected data length

I get an error when querying a remote PostgreSQL server from my SQL Server 2017 Standard via a linked server.
This is the query:
SELECT CAST(test AS VARCHAR(MAX)) FROM OpenQuery(xxxx,
'SELECT corpo::TEXT as test From public.notification')
and this is the error message:
Msg 7347, Level 16, State 1, Line 57
OLE DB provider 'MSDASQL' for linked server 'xxx' returned data that does not match expected data length for
column '[MSDASQL].test'. The (maximum) expected data length is 1024, while the returned data length is 7774.
Even without the conversions, the error persists.
For the ODBC and linked server setup I followed this handy guide.

In my case, I was reading the data through a view. Apparently, the data size of one column had been changed in the underlying table, but the view still reported the original, smaller column size to the linked server. The solution was to open the view in SSMS and save it again.
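If re-opening the view in SSMS is not convenient, the same metadata refresh can, as far as I know, also be done from T-SQL with sp_refreshview. A minimal sketch, assuming a view named dbo.notification_view (the real view name is not given in the question):
-- Refresh the cached column metadata of the view after the underlying table changed
EXEC sp_refreshview N'dbo.notification_view';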

Can you try this?
SELECT *
FROM OPENQUERY(xxxx, '
SELECT TRIM(corpo) AS test
FROM public.notification;
') AS oq
I prefer using OPENQUERY since it will send the exact query to the linked server for it to execute.
MySQL currently has a problem with casting to the VARCHAR data type, so I use the TRIM() function to work around it.

tbl with in_schema returns "Invalid object name" error

After connecting to the SQL Server, the databases inside it can be listed.
library(DBI)
library(odbc)
con = dbConnect(odbc(),
Driver = "ODBC Driver 17 for SQL Server",
Server = "xxxxxxxxxxxx",
UID = "xxxxxxxxxxxx",
PWD = "xxxxxxxxxxxx",
Port = xxxxxxxxxxxx)
This connection is successful.
Next, I would like to list the databases within this SQL Server:
databases = dbGetQuery(con, "SELECT name FROM master..sysdatabases")
Since I am not familiar with SQL, it seems a little strange to me that a database, "DB01CWE5462", is already assigned within con. This database also appears in the result of dbGetQuery. I guess that this database is automatically assigned to the connection.
However, I would like to read one particular table, Ad10Min1_Average in the DB01WEA84103 database. The code below was successful before (about a month ago), but now it returns an error.
tbl(con, in_schema("DB01WEA84103.dbo","Ad10Min1_Average"))
Error: nanodbc/nanodbc.cpp:1655: 42000: [Microsoft][ODBC Driver 17 for
SQL Server][SQL Server]Invalid object name
'DB01WEA84103.dbo.Ad10Min1_Average'. [Microsoft][ODBC Driver 17 for
SQL Server][SQL Server]Statement(s) could not be prepared.
'SELECT *
FROM "DB01WEA84103.dbo"."Ad10Min1_Average" AS "q13"
WHERE (0 = 1)'
After a little searching, I found a workaround, but it is quite slow compared with the previous successful runs of the code above.
dbReadTable(con, 'Ad10Min1_Average', schema='DB01WEA84103.dbo')
So what am I missing? What should I do to make the tbl() and in_schema() code work again instead of producing an error?
The difference in speed is because tbl(con, ...) is creating an access point to a remote table, while dbReadTable(con, ...) is reading/copying the table from SQL into R.
The approach you were using has been the standard work-around for specifying both database and schema. I would guess there has been an update to the dbplyr package that means this work-around now requires an additional step.
Taking a close look at the SQL from the error message reveals the cause:
SELECT * FROM "DB01WEA84103.dbo"."Ad10Min1_Average"
Note the double quotes around "DB01WEA84103.dbo". The double quotes tell SQL to treat this as a single object: a schema with name DB01WEA84103.dbo, instead of two objects: a database with name DB01WEA84103 and a schema with name dbo.
Ideally this query would read:
SELECT * FROM "DB01WEA84103"."dbo"."Ad10Min1_Average"
Note that the full stop is no longer included inside the double quotes.
The dbplyr documentation (link) for in_schema specifies that the names of schema and table "... will be automatically quoted; use sql() to pass a raw name that won’t get quoted."
Hence I recommend you try:
tbl(con, in_schema(sql("DB01WEA84103.dbo"),"Ad10Min1_Average"))
Notes:
Double quotes in SQL are used to treat the enclosed text as a single identifier, even if it contains special characters. Square brackets are often used in SQL Server for the same purpose.
Whether you use single or double quotes in R does not affect whether or not the SQL code will contain double quotes. This is controlled by dbplyr's translation methods.
If your database or schema name contains special characters, try enclosing each part in square brackets instead, for example [my!odd#database#name].[my%unusual&schema*name].
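For example, in SQL Server the following two spellings of the three-part name are equivalent (assuming the default QUOTED_IDENTIFIER ON setting); the bracketed form can be easier to read when names contain unusual characters:
-- Double-quoted identifiers (ANSI style)
SELECT TOP 1 * FROM "DB01WEA84103"."dbo"."Ad10Min1_Average";
-- Square-bracketed identifiers (T-SQL style)
SELECT TOP 1 * FROM [DB01WEA84103].[dbo].[Ad10Min1_Average];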

SQL Server Polybase | Cosmos Document DB Date conversion issue

I'm new to PolyBase. I have linked my SQL Server 2019 instance to a third party's Azure Cosmos DB and I am able to query data out of my collection. However, I get an error when I try to query date fields. In the documents the dates are defined as:
"created" : {
"$date" : 1579540834768
},
In my external table I have the column defined as:
[created] DATE,
I have tried to create the column as INT and NVARCHAR(128), but the schema detection rejects it each time. (I have also tried to create a field called created_date, but the schema detection disagrees that this is correct.)
When I try a query that returns any of the date fields, I get this error:
Msg 105082, Level 16, State 1, Line 8
105082;Generic ODBC error: [Microsoft][Support] (40460) Fractional data truncated while performing conversion. .
OLE DB provider "MSOLEDBSQL" for linked server "(null)" returned message "Unspecified error".
Msg 7421, Level 16, State 2, Line 8
Cannot fetch the rowset from OLE DB provider "MSOLEDBSQL" for linked server "(null)". .
This also happens if I try to exclude NULL values in my query, even when filtering to specific records where the date is populated (validated using the Azure portal interface).
Is there something I should be doing to handle the integer date from the JSON records, or another type I can use to get my external table to work?
I found a solution. SQL Server recommends the wrong type for MongoDB dates in the schema. Using DATETIME2 resolved the issue. I found this on a PolyBase type mapping page on MSDN.
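In other words, the external table column that maps to the BSON date should be declared as DATETIME2 instead of DATE. A rough sketch, where the external table, location and data source names are placeholders rather than the actual ones from the question:
-- Hypothetical PolyBase external table over the Cosmos DB (Mongo API) collection;
-- the point is the DATETIME2 column where DATE was rejected.
CREATE EXTERNAL TABLE dbo.notification_ext (
    [_id] NVARCHAR(24),
    [created] DATETIME2
)
WITH (
    LOCATION = 'mydb.notification',      -- placeholder database.collection
    DATA_SOURCE = cosmos_mongo_source    -- placeholder external data source
);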

MariaDB: SELECT INSERT from ODBC CONNECT engine from SQL Server keeps causing "error code 1406 data too long"

Objective: using MariaDB, I want to read some data from MS SQL Server (via the ODBC CONNECT engine) and INSERT ... SELECT it into a local table.
Issue: I keep getting "error code 1406 data too long", even if the source and destination VARCHAR fields have the very same size (see further details).
Details:
The query which I'm trying to execute is in the form:
INSERT INTO DEST_TABLE(NUMERO_DOCUMENTO)
SELECT SUBSTR(TRIM(NUMERO_DOCUMENTO),0,5)
FROM CONNECT_SRC_TABLE
The above is the minimal subset of fields that causes the problem.
The source CONNECT table is actually a view inside SQL Server. The destination table has been defined to be identical to the ODBC CONNECT table (same field names, same NULL constraints, same field types and sizes).
There is no issue with a couple of other VARCHAR fields.
The issue is happening with a field NUMERO_DOCUMENTO VARCHAR(14) DEFAULT NULL, where the max length in the input table is 14.
The same issue is also happening with 2 other fields on the same table.
All in all, it seems to be an issue with the source data rather than the destination table.
Attempted workarounds:
I tried to force silent truncation but, reasonably, this does not make any difference: Error Code: 1406. Data too long for column - MySQL
I tried enlarging the destination field, with no appreciable effect: NUMERO_DOCUMENTO VARCHAR(100) DEFAULT NULL
I tried to TRIM the source field (hidden spaces?) and to limit its size at the source, to no avail: INSERT INTO DEST_TABLE(NUMERO_DOCUMENTO) SELECT SUBSTR(TRIM(NUMERO_DOCUMENTO),0,5) FROM CONNECT_SRC_TABLE, but the very same error is always returned.
Workaround:
I tried performing the same thing using a FOR x IN (src_query) DO INSERT ... END FOR loop, and this solution seems to work: this means that the problem is not in the data itself but in how the engine performs the INSERT ... SELECT query.
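For reference, the row-by-row workaround looks roughly like this; it is only a sketch, the procedure name is made up, and it assumes MariaDB 10.3 or later, where FOR loops over a query are available:
-- Hypothetical procedure copying rows one at a time instead of one INSERT ... SELECT
DELIMITER //
CREATE PROCEDURE copy_numero_documento()
BEGIN
    FOR rec IN (SELECT SUBSTR(TRIM(NUMERO_DOCUMENTO), 1, 5) AS doc
                FROM CONNECT_SRC_TABLE) DO
        INSERT INTO DEST_TABLE (NUMERO_DOCUMENTO) VALUES (rec.doc);
    END FOR;
END //
DELIMITER ;
CALL copy_numero_documento();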

Get error message with date field in Informatica when the workflow is run

I am getting the following error when I try to link a date field from Source Qualifier to Target table in Informatica:
ERROR 7/19/2019 9:05:26 AM node01_dev WRITER_1_*_1 WRT_8229 Database errors occurred:
FnName: Execute -- [Informatica][ODBC SQL Server Wire Protocol driver]Timestamp parameters with zero scale must have a precision of 13, 16, or 19. Parameter number: 1, precision: 12.
FnName: Execute -- [DataDirect][ODBC lib] Function sequence error
I have done the same thing (used datetime for a target) with another workflow and it ran successfully.
I have searched the internet for this error message, but none of the solutions I found resolved the problem.
The target table SA_Cases needs to have the data inserted into it. Right now, the Monitor shows that all of the rows are rejected.
The source is a table in Oracle. The target is a table in Microsoft SQL Server.
Here is the mapping that works (screenshot).
The SA_Cases table, which is the target table, has fields with spaces in their names. I replaced the spaces with underscores and now it works. The problem was the spaces in the field names.
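If the columns can be renamed directly on the SQL Server side, sp_rename works for that; the column name below is made up for illustration, since the real SA_Cases column names are not shown in the question:
-- Hypothetical example: replace a space with an underscore in a target column name
EXEC sp_rename 'dbo.SA_Cases.[Case Number]', 'Case_Number', 'COLUMN';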

Having trouble specifying a schema name from MonetDB.R

I am trying to connect to a table that is not in the sys schema. The code below works if sys.tablea exists.
library(MonetDB.R)
conn <- dbConnect(dbDriver("MonetDB"), "monetdb://localhost/demo")
frame <- monet.frame(conn,"tablea")
If I define tablea in a different schema, e.g. xyz.tablea, then I get the error message
Server says 'SELECT: no such table 'tablea'' [#NA]
The account used to connect has rights to the table.
In a related question, is it possible to use camel-case from MonetDB.R? When I change the table name to TableA, the server again responds with
Server says 'SELECT: no such table 'tablea'' [#NA]
where the table name is all lower-case.
Using tables in other schemata is not possible with the current constructor of monet.frame. However, you can work around the issue as follows:
frame <- monet.frame(conn,"select * from xyz.tablea")
This trick also works with CamelCased table names.
For the next version, I am planning to fix the issue.
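As background for the camel-case part of the question: MonetDB folds unquoted identifiers to lower case, so an unquoted TableA is looked up as tablea; if the table was created with a quoted mixed-case name, the query text passed to monet.frame needs to quote it as well. A small sketch (schema and table names as in the question):
-- Unquoted identifiers are folded to lower case, so this looks for xyz.tablea
SELECT * FROM xyz.TableA;
-- Double quotes preserve the case and match a table created as "TableA"
SELECT * FROM xyz."TableA";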
