mload with bigint column in informatica mapping - teradata

Has anyone built a mapping that uses MultiLoad (mload) as the connection, where one of the columns in the source and target tables (Teradata DB) has the BIGINT datatype? I am getting a precision float error.
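For reference, a minimal sketch of the setup; the table and column names below are hypothetical, and the DECIMAL(18,0) cast is only a commonly suggested workaround to test, not a confirmed fix:

-- Hypothetical Teradata target with the problematic column:
CREATE MULTISET TABLE tgt_db.tgt_table (
big_id BIGINT,
note VARCHAR(50)
) PRIMARY INDEX (big_id);

-- DECIMAL(18,0) holds integers up to 18 digits, so when a loader
-- mishandles BIGINT, carrying the value as DECIMAL(18,0) end to end
-- is one thing to try (values beyond 18 digits would not fit):
SELECT CAST(big_id AS DECIMAL(18,0)) AS big_id
FROM tgt_db.tgt_table;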

Related

Informatica creates duplicate records on a Teradata SET table with a primary index (direct SQL cannot)

Very simple source->target insert: source records are treated as "Insert", and the target transformation uses the "Insert" function only. The target Teradata table is a SET table with a primary index defined. The Informatica target transformation also has a primary key defined. The Teradata Informatica relational connection does not have a "bulk" option.
So how come Informatica can create a duplicate record, when even a direct insert into Teradata cannot?
Any ideas?
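For what it's worth, a SET table rejects exact duplicate rows at the row level, so the first thing to verify is that the real table is in fact SET rather than MULTISET. A minimal sketch with hypothetical names:

-- A SET table raises error 2802 on a single-row duplicate INSERT
-- (and silently discards duplicates on INSERT ... SELECT):
CREATE SET TABLE demo_set (
id INTEGER,
val VARCHAR(10)
) PRIMARY INDEX (id);

INSERT INTO demo_set VALUES (1, 'a');  -- ok
INSERT INTO demo_set VALUES (1, 'a');  -- fails with 2802: duplicate row

-- Check the real table's definition:
SHOW TABLE demo_set;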

MariaDB: SELECT INSERT from ODBC CONNECT engine from SQL Server keeps causing "error code 1406 data too long"

Objective: Using MariaDB, I want to read some data from MS SQL Server (via the ODBC CONNECT engine) and SELECT INSERT it into a local table.
Issue: I keep getting "error code 1406 data too long" even though the source and destination varchar fields have the very same size (see further details).
Details:
The query which I'm trying to execute is in the form:
INSERT INTO DEST_TABLE(NUMERO_DOCUMENTO)
SELECT SUBSTR(TRIM(NUMERO_DOCUMENTO),0,5)
FROM CONNECT_SRC_TABLE
The above is the very minimal subset of fields which causes the problem.
The source CONNECT table is actually a view inside SQL Server. The destination table has been defined to be identical to the ODBC CONNECT table (same field names, same NULL constraints, same field types and sizes).
There's no issue on a couple of other VARCHAR fields.
The issue is happening with a field NUMERO_DOCUMENTO VARCHAR(14) DEFAULT NULL, where the max length in the input table is 14.
The same issue is also happening with 2 other fields on the same table.
All in all it seems to be an issue with the source data rather than the destination table.
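For reference, a CONNECT ODBC source table is typically declared along these lines; the DSN and remote object name here are placeholders, not the actual ones from this setup:

-- Hypothetical CONNECT engine declaration pointing at the SQL Server view:
CREATE TABLE CONNECT_SRC_TABLE
ENGINE=CONNECT
TABLE_TYPE=ODBC
CONNECTION='DSN=mssql_dsn'
TABNAME='dbo.SourceView';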
Attempted workarounds:
I tried to force silent truncation but, reasonably, this does not make any difference: Error Code: 1406. Data too long for column - MySQL
I tried enlarging the destination field, with no appreciable effect: NUMERO_DOCUMENTO VARCHAR(100) DEFAULT NULL
I tried to TRIM the source field (hidden spaces?) and to limit its size at the source, to no avail: INSERT INTO DEST_TABLE(NUMERO_DOCUMENTO) SELECT SUBSTR(TRIM(NUMERO_DOCUMENTO),0,5) FROM CONNECT_SRC_TABLE, but the very same error is always returned
Workaround:
I tried performing the same thing using a FOR x IN (src_query) DO INSERT .... END FOR loop, and this solution seems to work: this means that the problem is not in the data itself but in how the engine performs the INSERT ... SELECT query.
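For completeness, a minimal sketch of that cursor-based workaround, assuming MariaDB 10.3+ (which introduced FOR loops in compound statements); copy_documents is a hypothetical procedure name:

DELIMITER //
CREATE PROCEDURE copy_documents()
BEGIN
  -- Row-by-row copy: avoids whatever the engine does differently
  -- for a set-based INSERT ... SELECT from a CONNECT table
  FOR rec IN (SELECT TRIM(NUMERO_DOCUMENTO) AS NUMERO_DOCUMENTO
              FROM CONNECT_SRC_TABLE) DO
    INSERT INTO DEST_TABLE (NUMERO_DOCUMENTO)
    VALUES (rec.NUMERO_DOCUMENTO);
  END FOR;
END //
DELIMITER ;

CALL copy_documents();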

How to create a Kudu table in the Cloudera QuickStart VM

I have been trying to create a Kudu table in Impala using the Cloudera QuickStart VM, following this example:
https://kudu.apache.org/docs/quickstart.html
CREATE TABLE sfmta
PRIMARY KEY (report_time, vehicle_tag)
PARTITION BY HASH(report_time) PARTITIONS 8
STORED AS KUDU
AS SELECT
UNIX_TIMESTAMP(report_time, 'MM/dd/yyyy HH:mm:ss') AS report_time,
vehicle_tag,
longitude,
latitude,
speed,
heading
FROM sfmta_raw;
I am getting the following error:
ERROR: AnalysisException: Table property 'kudu.master_addresses' is required when the impalad startup flag -kudu_master_hosts is not used.
The VM used is cloudera-quickstart-vm-5.13.0-0-virtualbox. Thanks in advance for your help.
From the documentation
If the -kudu_master_hosts configuration property is not set, you can
still associate the appropriate value for each table by specifying a
TBLPROPERTIES('kudu.master_addresses') clause in the CREATE TABLE
statement or changing the TBLPROPERTIES('kudu.master_addresses') value
with an ALTER TABLE statement.
So your table creation should look like:
CREATE TABLE sfmta
PRIMARY KEY (report_time, vehicle_tag)
PARTITION BY HASH(report_time) PARTITIONS 8
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='localhost:7051')
AS SELECT
UNIX_TIMESTAMP(report_time, 'MM/dd/yyyy HH:mm:ss') AS report_time,
vehicle_tag,
longitude,
latitude,
speed,
heading
FROM sfmta_raw;
7051 is the default port for the Kudu master.
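As the quoted documentation mentions, the property can also be set on an existing table with ALTER TABLE; reusing the table name and default port from above:

ALTER TABLE sfmta
SET TBLPROPERTIES ('kudu.master_addresses' = 'localhost:7051');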

Using Teradata Volatile Table in SSIS ADO NET Source

Put simply, can I use an ADO NET Source task to query a Teradata VOLATILE TABLE?

For context, using Teradata SQL Assistant, I can easily create a Teradata VOLATILE TABLE, insert data into it, and select data from it. In Visual Studio, using SSIS SQL Tasks, I am also able to create and insert data into a Teradata VOLATILE TABLE. However, because the table does not actually exist yet, it appears we cannot use a separate ADO NET Source task to select data from it, meaning we also cannot map the columns. We get the error "[Teradata Database][3807] Object 'TABLE_NAME' does not exist."

If the data in a VOLATILE TABLE, and more accurately the VOLATILE TABLE column definitions, are only available at run time, can an ADO NET Source task be used to query a Teradata VOLATILE TABLE? If so, how?
Really old question, and I'm not sure if it will work, but you can set validation to false; that might do what you are wanting.
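A related point worth checking: a volatile table exists only within the session that created it, so the ADO NET Source must run on the very same connection as the SQL Task that created the table (in SSIS terms, something like RetainSameConnection=True on the connection manager; treat that as an assumption to verify). Illustrative Teradata DDL:

-- Hypothetical table; ON COMMIT PRESERVE ROWS keeps the rows for the
-- rest of the session instead of discarding them at commit:
CREATE VOLATILE TABLE vt_demo (
id INTEGER,
val VARCHAR(20)
) ON COMMIT PRESERVE ROWS;

INSERT INTO vt_demo VALUES (1, 'x');
SELECT * FROM vt_demo;  -- only visible from the creating session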

Informatica giving error while loading Teradata Number column

In a Teradata (13.5) target table we have a column with datatype Number; when we try to load this table using an Informatica flow, it gives the following error:
Severity Timestamp Node Thread Message Code Message
ERROR 4/1/2015 3:08:52 PM node01_<host_name> WRITER_1_*_1 WRT_8229 Database errors occurred:
FnName: Execute -- [Teradata][ODBC Teradata Driver] Illegal data conversion
We have tried everything including:
1. Changing Informatica target datatype to decimal, bigint, integer, varchar
2. Importing the target table into Informatica using the Informatica Target Designer, but this Number field is imported as Varchar(0)
Please suggest how to solve this, as changing the target datatype is not an option for us.
You can import the target table into Informatica using the Informatica Target Designer, and then edit the datatype for the column to anything you want. It can be altered without issues.
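If editing the imported datatype still trips the ODBC conversion, one possible sketch of a workaround (all names and the chosen precision are assumptions, since the real column definition isn't shown) is to land the data in a DECIMAL staging table and convert inside Teradata, where DECIMAL-to-NUMBER conversion is implicit:

-- Hypothetical staging table with a type the ODBC driver handles cleanly:
CREATE MULTISET TABLE stg_db.target_stg (
number_col DECIMAL(18,4)
) PRIMARY INDEX (number_col);

-- Push into the NUMBER target inside Teradata:
INSERT INTO tgt_db.target_table (number_col)
SELECT number_col
FROM stg_db.target_stg;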
