I have a problem loading my data into a Teradata table using MultiLoad (MLoad). I have a text file containing data that is the output of a SQL query:
851|73214|2019-01-03|2019-01-03|98.081270|RFF|249872083.40
854|73215|2019-01-03|2019-01-03|98.081270|RFF|355015298.0400
881|96634|2017-05-22|2017-05-22|97.697560|RFF|-6961747.270
and I'm trying to load it using this .mld file:
.LOGTABLE dss.load_DEALS7_log;
.RUN FILE "C:\PASS.TXT";
RELEASE MLOAD dss.DEALS;
drop table dss.DEALS;
create multiset table dss.DEALS
(
FILE_ROWNUM INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 MINVALUE -2147483647 MAXVALUE 100000000 NO CYCLE),
DEAL INTEGER,
EFFECT_DATE VARCHAR(80),
MAT_DATE VARCHAR(80),
CON DECIMAL(28,12),
C_CODE VARCHAR(80),
QUALITY DECIMAL(19,4),
LOAD_DTE DATE FORMAT 'YYYY-MM-DD'
)primary index(FILE_ROWNUM);
.BEGIN MLOAD TABLES dss.DEALS SESSIONS 2;
.LAYOUT FILE;
.FIELD DEAL * VARCHAR(80);
.FIELD EFFECT_DATE * VARCHAR(80);
.FIELD MAT_DATE * VARCHAR(80);
.FIELD CON * VARCHAR(80);
.FIELD C_CODE * VARCHAR(80);
.FIELD QUALITY * VARCHAR(80);
.DML LABEL LOAD;
INSERT INTO dss.DEALS VALUES
('',
:DEAL,
:EFFECT_DATE,
:MAT_DATE,
:CON,
:C_CODE,
:QUALITY,
CURRENT_DATE
);
.IMPORT
INFILE "C:\deals.txt"
LAYOUT FILE
format VARTEXT '|' DISPLAY ERRORS NOSTOP
APPLY LOAD
;
.END MLOAD;
.LOGOFF;
The problem is that the final table is empty and every row ends up in the dss.ET_DEALS table with ErrorCode 2679. I know that the ErrorField is QUALITY, but I don't know why it won't load; the data looks fine. The Teradata docs say: "This error occurs when the user submits a numeric-to-character conversion with an illegal format, or when, in converting characters to numeric, either the data or the format contains a bad character." At first I thought it was because of the negative numbers, but then only a few rows would be in the ET table, not the whole input. Any help would be greatly appreciated!
I deleted and re-created the tables from the ground up and also added
.FILLER * VARCHAR(80);
as the first entry of the layout in my mld file, and it works fine now. The input file has seven fields, but the layout only defined six, so every value was shifted one column to the left (e.g. the non-numeric C_CODE value "RFF" landed in QUALITY and failed the character-to-decimal conversion on every row); the filler consumes the file's leading row-number field and realigns the rest.
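For reference, the full corrected layout would look like this (a sketch; placing the filler first, to cover the file's leading row-number field, is my inference, since the generated FILE_ROWNUM column is not loaded from the file):

```
.LAYOUT FILE;
.FILLER * VARCHAR(80);   /* skips the file's leading row-number field */
.FIELD DEAL * VARCHAR(80);
.FIELD EFFECT_DATE * VARCHAR(80);
.FIELD MAT_DATE * VARCHAR(80);
.FIELD CON * VARCHAR(80);
.FIELD C_CODE * VARCHAR(80);
.FIELD QUALITY * VARCHAR(80);
```

With the filler in place, DEAL receives 73214, CON receives 98.081270, and QUALITY receives 249872083.40, matching the sample rows above.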
Related
I want to update a table in PostgreSQL from a newData dataframe on my local machine, looping over the rows where the id matches in both tables. However, the text values do not reach the database exactly as they are in newData. Numbers update correctly, but there are two issues with the text:
1) I have a column house_nbr that can be '120-12', but somehow it was calculated and updated as '108', when it should really be the text '120-12'.
2) I have a column street_name that can be 'Main Street', but I received an error that I couldn't resolve:
(Error in { :
task 1 failed - "Failed to prepare query: ERROR: syntax error at or near "Street")
The database column type is char. Something seems to go wrong with special characters in the text, such as the hyphen and the space. Please advise how to retain the text as-is when updating a Postgres database. Below is the code I am using. Thanks!
Update <- function(i) {
  con <- dbConnect(RPostgres::Postgres(),
                   user = "xxx",
                   password = "xxx",
                   host = "xxx",
                   dbname = "xxx",
                   port = 5432)
  text <- paste("UPDATE dbTable SET house_nbr=", newData$house_nbr[i],
                ",street_name=", newData$street_name[i],
                "where id=", newData$id[i])
  dbExecute(con, text)
  dbDisconnect(con)
}
foreach(i = 1:length(newData$id), .inorder = FALSE, .packages = "RPostgreSQL") %dopar% {
  Update(i)
}
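Both symptoms follow directly from pasting the values into the SQL string without quotes. A minimal Python sketch (hypothetical values standing in for one row of newData) shows what the server actually receives:

```python
# Rebuild, in Python, the statement the R paste() call produces:
# the values are interpolated into the SQL text unquoted.
house_nbr = "120-12"
street_name = "Main Street"
row_id = 1

sql = (f"UPDATE dbTable SET house_nbr= {house_nbr} "
       f",street_name= {street_name} where id= {row_id}")
print(sql)
# -> UPDATE dbTable SET house_nbr= 120-12 ,street_name= Main Street where id= 1

# Unquoted, 120-12 is an arithmetic expression the server evaluates:
print(120 - 12)  # -> 108, the value that ended up in house_nbr
# Unquoted, Main Street is two bare tokens, so the SQL parser fails at
# the second one: syntax error at or near "Street".
```

The fix is to let the driver do the quoting via a parameterized query, e.g. with DBI/RPostgres something along the lines of `dbExecute(con, "UPDATE dbTable SET house_nbr = $1, street_name = $2 WHERE id = $3", params = list(newData$house_nbr[i], newData$street_name[i], newData$id[i]))` (placeholder syntax assumed from the RPostgres interface).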
Error(7,1): PLS-00103: Encountered the symbol "CREATE". I tried to put a / before the CREATE, but then the error was Error(6,1): PLS-00103: Encountered the symbol "/".
I am new to PL/SQL programming; could you please help with this?
CREATE OR REPLACE PACKAGE EMP_BULK_INSERT AS
  PROCEDURE Bulk_Insert;
END EMP_BULK_INSERT;
/* package body */
CREATE OR REPLACE PACKAGE BODY EMP_BULK_INSERT AS
  PROCEDURE Bulk_Insert
  AS
    /* select all records from the source table */
    CURSOR kt_test_cur IS
      SELECT empid
           , empband
           , empname
           , workexp
           , salary
        FROM kt_test;
    /* nested table type and variable that will hold kt_test's records */
    TYPE kt_test_ntt IS TABLE OF kt_test_cur%ROWTYPE;
    l_kt_test kt_test_ntt;
  BEGIN
    /* open pointer to the SELECT statement */
    OPEN kt_test_cur;
    /* collect the data in the collection */
    FETCH kt_test_cur BULK COLLECT INTO l_kt_test;
    /* close the pointer */
    CLOSE kt_test_cur;
    /* print the size of the collection */
    DBMS_OUTPUT.PUT_LINE('Nested table holds: ' || TO_CHAR(l_kt_test.COUNT) || ' records.');
    /* write the data to the target table */
    FORALL indx IN l_kt_test.FIRST..l_kt_test.LAST
      INSERT INTO kt1_test (empid, empband, empname, workexp, salary)
      VALUES (l_kt_test(indx).empid, l_kt_test(indx).empband, l_kt_test(indx).empname,
              l_kt_test(indx).workexp, l_kt_test(indx).salary);
    DBMS_OUTPUT.PUT_LINE('Number of rows inserted: ' || SQL%ROWCOUNT);
    COMMIT;
  END Bulk_Insert;
END EMP_BULK_INSERT;
The "/" needs to be on a blank line, all by itself.
Like this:
CREATE OR REPLACE PACKAGE abc AS
...
END;
/
CREATE OR REPLACE PACKAGE BODY abc AS
...
END;
/
The meaning of the "/" is to execute the command buffer. In this case it executes the previous PL/SQL block.
For a more in-depth discussion of this topic, see the related question on Stack Overflow.
If you want to manipulate the package via the dedicated object views of SQL Developer, you have to separate the package spec and body and use the "/" in neither:
First compile the package spec (without "/"). Then open the body in a separate tab page with the small package icon (to the left of the compile icon). Edit your body code and compile there (again, without the "/").
I am trying to export the record count of all tables to an Excel or text file.
Is there any program or query that will help me?
I am trying to write code that exports the count of the data available in each table.
code:
define stream table t1.
output stream t1 to t1.csv.
&scope-define display-fields count(*)
select count(*) from emp.
export stream t1 delimiter ",".
This code creates the CSV file with empty values but displays the result on screen. I want the output in Excel.
Unsure what you want to do. Something like this if you want to count the number of tables in a database:
DEFINE VARIABLE icount AS INTEGER NO-UNDO.
FOR each _file NO-LOCK WHERE _file._owner = "PUB":
/* Skip "hidden" virtual system tables */
IF _file._file-name BEGINS "_" THEN NEXT.
iCount = iCount + 1.
END.
MESSAGE iCount "tables in the database"
VIEW-AS ALERT-BOX INFORMATION.
If you have several DBs connected you need to prepend the _file table with the database name, i.e. database._file.
However: since you say "export to excel" perhaps what you mean is that you want to know the number of records for each table?
To count number of records in a table you can use FOR or SELECT.
SELECT COUNT(*) FROM tablename.
or
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.
FOR EACH tablename NO-LOCK TABLE-SCAN:
iCount = iCount + 1.
END.
DISPLAY iCount.
If you don't want to code this for each table you need to combine it with a dynamic query counting all records.
DEFINE VARIABLE hQuery AS HANDLE NO-UNDO.
DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.
DEFINE VARIABLE cTable AS CHARACTER NO-UNDO.
/* Insert tablename here */
cTable = "TableName".
CREATE QUERY hQuery.
CREATE BUFFER hBuffer FOR TABLE cTable.
hQuery:SET-BUFFERS(hBuffer).
hQuery:QUERY-PREPARE(SUBSTITUTE("FOR EACH &1", cTable)).
hQuery:QUERY-OPEN.
queryLoop:
REPEAT:
hQuery:GET-NEXT().
IF hQuery:QUERY-OFF-END THEN LEAVE queryLoop.
iCount = iCount + 1.
END.
DELETE OBJECT hQuery.
DELETE OBJECT hBuffer.
MESSAGE iCount "records in the table".
Combine those two and you have a solution. It might be slow, however, since it will count every record of every table.
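Combining the two could look roughly like this (a sketch only: the _file loop and the dynamic query are lifted from the snippets above, and the output file name tablecounts.csv is made up), writing one table name and count per line:

```
DEFINE VARIABLE hQuery  AS HANDLE  NO-UNDO.
DEFINE VARIABLE hBuffer AS HANDLE  NO-UNDO.
DEFINE VARIABLE iCount  AS INTEGER NO-UNDO.

OUTPUT TO "tablecounts.csv".
FOR EACH _file NO-LOCK WHERE _file._owner = "PUB":
    /* Skip "hidden" virtual system tables */
    IF _file._file-name BEGINS "_" THEN NEXT.
    iCount = 0.
    CREATE QUERY hQuery.
    CREATE BUFFER hBuffer FOR TABLE _file._file-name.
    hQuery:SET-BUFFERS(hBuffer).
    hQuery:QUERY-PREPARE(SUBSTITUTE("FOR EACH &1", _file._file-name)).
    hQuery:QUERY-OPEN.
    REPEAT:
        hQuery:GET-NEXT().
        IF hQuery:QUERY-OFF-END THEN LEAVE.
        iCount = iCount + 1.
    END.
    hQuery:QUERY-CLOSE.
    DELETE OBJECT hQuery.
    DELETE OBJECT hBuffer.
    PUT UNFORMATTED _file._file-name "," STRING(iCount) SKIP.
END.
OUTPUT CLOSE.
```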
A quick and dirty way is to run "tabanalys" on the database instead, if you have access to it via the prompt:
proutil DatabaseName -C tabanalys > tabanalys.txt
This can be run online, but it might have an impact on file I/O etc., so run it the first time during off-peak hours just to make sure. Then look into that file; you will see record counts, sizes etc. for all tables: system tables as well as user tables.
Proutil run online might not be 100% correct, but most likely "good enough".
What does this cryptic error mean?
> odbc load, exec("
> CREATE VOLATILE MULTISET TABLE vol_tab AS (
> SELECT TOP 10 user_id FROM dw_users
> )
> WITH DATA
> PRIMARY INDEX(user_id)
> ON COMMIT PRESERVE ROWS;
> ") clear dsn("mozart");
The ODBC driver reported the following diagnostics
[Teradata][ODBC Teradata Driver] Invalid cursor state.
SQLSTATE=24000
r(693);
You are getting this error because you are telling Stata to load something, but your code does not contain a SELECT statement outside of the table creation, so there is no result set to load. If you add a SELECT clause at the bottom, it will work.
Alternatively, you can use the odbc exec("SqlStmt") syntax if you just want to create a table.
Here's an example:
/* Works */
odbc load, exec("
CREATE VOLATILE MULTISET TABLE vol_tab AS (
SELECT TOP 10 user_id FROM dw_users
)
WITH DATA
PRIMARY INDEX(user_id)
ON COMMIT PRESERVE ROWS;
SELECT * FROM vol_tab;
") clear dsn("mozart") lowercase multistatement;
format user_id %20.0fc;
sort user_id;
list, clean noobs;
/*Also Works */
odbc exec("
CREATE VOLATILE MULTISET TABLE vol_tab AS (
SELECT TOP 10 user_id FROM dw_users
)
WITH DATA
PRIMARY INDEX(user_id)
ON COMMIT PRESERVE ROWS;
"), dsn("mozart");
/* Fails: Load With No Bottom Select Clause */
odbc load, exec("
CREATE VOLATILE MULTISET TABLE vol_tab AS (
SELECT TOP 10 user_id FROM dw_users
)
WITH DATA
PRIMARY INDEX(user_id)
ON COMMIT PRESERVE ROWS;
") clear dsn("mozart");
mload step
.LOGTABLE truser2.logtable;
.LOGON 127.0.0.1/truser2,trpass2;
.begin import mload tables coldata_test
WORKTABLES wt_test1
ERRORTABLES wt_test1 uv_test1
ERRLIMIT 1000
CHECKPOINT 100000
AMPCHECK NONE;
.layout test_layout;
.field col01 * varchar(255);
.field col02 * varchar(255);
.field col03 * varchar(255);
.DML LABEL test_insert
IGNORE MISSING UPDATE ROWS
DO INSERT FOR MISSING UPDATE ROWS;
UPDATE coldata_test SET col02 =:col02
,col03 = :col03
where col01 = :col01;
insert into coldata_test
values(
:col01,
:col02,
:col03);
.IMPORT INFILE /home/tdatuser/bin/data2.dat
LAYOUT test_layout FORMAT VARTEXT ','
APPLY test_insert;
.END MLOAD;
.LOGOFF;
data2.dat : abdfe,aasf,xxcvf
result: This MultiLoad import task cannot proceed: an unexpected MultiLoad phase, data acquisition, was reported by the RDBMS.
What is the problem?
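Two things stand out, though neither is certain without the logs. First, wt_test1 is listed both as the work table and as the first error table (ERRORTABLES wt_test1 uv_test1), so the same name is used twice; giving the acquisition error table its own name (e.g. ERRORTABLES et_test1 uv_test1) avoids that clash. Second, this message usually means a previous run left coldata_test stuck mid-load, in which case the lock has to be released and the leftover tables dropped before rerunning (a sketch, table names taken from the script above):

```sql
/* Run from BTEQ or SQL Assistant, not inside the MultiLoad script */
RELEASE MLOAD coldata_test;   /* releases the MLoad lock on the target */
/* If the job died in the apply phase, this variant may be needed instead:
   RELEASE MLOAD coldata_test IN APPLY; */
DROP TABLE wt_test1;          /* leftover work table  */
DROP TABLE uv_test1;          /* leftover error table */
```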