Inserting into a Teradata table with a generated UPI

Hoping you all can help. I have created a table with a UPI (an incremental identity column), and when I run the macro to insert, it continuously gives me the error "the positional assignment list has too few values". I have verified that the two tables match except for the UPI ID column. How do you account for that field in the insert macro so that the table and the macro have the same number of assignments?

The list of values specified by the INSERT statement is shorter than the list of columns in the table; the error occurs on the INSERT statement.
Check that the columns of the source table match those of the destination table.
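One common fix, as a minimal sketch (table and column names here are hypothetical): have the macro name every column except the identity-generated UPI, so Teradata generates that value itself and the positional list lines up with the column list.
INSERT INTO target_table (cust_id, cust_name, load_date)  -- identity UPI column omitted
VALUES (:cust_id, :cust_name, :load_date);                -- macro parameters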

Related

REPLACE is not working inside a PL/SQL procedure

I am writing a procedure with a cursor that takes a table name and a column name from a lookup table we have in the DB. This table has a mapping of table names and columns.
I collect these two values in variables and run a SELECT query to see whether the data in that column is a number or not. For example:
SELECT TO_NUMBER(REGEXP_REPLACE(v_landing_column, ',', ''))
FROM v_landing_table
Here v_landing_column is a column name and v_landing_table is a table name. One of the values in v_landing_column is 12,300, which is a number, but because of the comma the flow goes into the exception handler, where I dump the error record into a separate table.
I tried using REPLACE with the above syntax as well, but the flow still dumps this record with the value 12,300 into the error table. How do I remove the comma from 12,300 inside a PL/SQL procedure using EXECUTE IMMEDIATE?
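A minimal sketch of the dynamic-SQL approach (the literal table and column values below are hypothetical; in the procedure they come from the lookup cursor). The key point is that in static SQL, REPLACE applied to v_landing_column edits the name stored in the variable, not the column's data, so the query text has to be built and run with EXECUTE IMMEDIATE:
DECLARE
  v_landing_table  VARCHAR2(128) := 'STG_SALES';   -- hypothetical
  v_landing_column VARCHAR2(128) := 'AMOUNT_TXT';  -- hypothetical
  v_sql            VARCHAR2(4000);
  v_num            NUMBER;
BEGIN
  -- Builds: SELECT TO_NUMBER(REPLACE(AMOUNT_TXT, ',', '')) FROM STG_SALES ...
  v_sql := 'SELECT TO_NUMBER(REPLACE(' || v_landing_column
        || ', '','', '''')) FROM ' || v_landing_table
        || ' WHERE ROWNUM = 1';  -- single row, so INTO works for the demo
  EXECUTE IMMEDIATE v_sql INTO v_num;
END;
/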

In Teradata, how to get the columns/fields used in join and WHERE conditions, and the respective tables, without parsing the query

I am trying to automate some performance checks on queries in Teradata.
As part of that, I want to check whether the columns used in join conditions are the primary index of the respective table, and similarly whether the columns used in WHERE conditions are partitioning columns of the respective table. Is there any direct Teradata query that can give this without parsing the whole query?
Yes, there are two DBC views you can query:
dbc.ColumnsV
dbc.IndicesV
Primary index information is stored in the second view; just search with your table name and database name.
Partitioning information is stored in dbc.ColumnsV; the PartitioningColumn column holds the flag value 'Y' for partitioning columns.
Example:
SELECT DatabaseName, TableName, ColumnName FROM dbc.ColumnsV WHERE PartitioningColumn='Y' AND TableName=<> AND DatabaseName=<>;
SELECT * FROM dbc.IndicesV WHERE TableName=<> AND DatabaseName=<>;
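For the join-column check, a sketch along these lines (database, table, and column names are placeholders) tests whether a given column belongs to a table's primary index; in dbc.IndicesV, IndexType 'P' marks a nonpartitioned primary index and 'Q' a partitioned one:
SELECT ColumnName
FROM dbc.IndicesV
WHERE DatabaseName = 'MyDb'        -- placeholder
  AND TableName = 'MyTable'        -- placeholder
  AND IndexType IN ('P','Q')       -- (partitioned) primary index
  AND ColumnName = 'MyJoinColumn'; -- placeholder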

SQLite INSERT INTO ... SELECT using ORDER BY

I have a table as below:
CREATE TABLE IF NOT EXISTS TRACE_TABLE (
[TRACE_NUM] INTEGER NOT NULL PRIMARY KEY,
[TRACE_ID] INTEGER NOT NULL,
[TRACE_TIME_DELTA] TEXT NOT NULL,
[TRACE_TIME_HEX] INTEGER NOT NULL,
[TRACE_TIME_AHB] INTEGER NOT NULL,
[TRACE_PARAM_TEXT] TEXT NOT NULL,
[TRACE_PARAM_TEXT_DECODED] TEXT);
Now I want to sort this table using a column. To do this I do the following:
Create a new table TRACE_TABLE_TEMP using the above statement, if it does not exist.
Then delete all rows (in case any exist from earlier operations).
Then copy all rows from TRACE_TABLE to TRACE_TABLE_TEMP, but in sorted order using a column.
I tried to execute the statement in SQLite DB Browser, but I am not getting the expected result: the TRACE_NUM column is not sorted as DESC.
How do I copy the table to another in sorted order?
The documentation says:
If a SELECT statement that returns more than one row does not have an ORDER BY clause, the order in which the rows are returned is undefined.
So it does not make much sense to change the order in which rows are stored, because you'd have to put the same ORDER BY on the queries used to read the data later.
Anyway, the error is that 'TRACE_NUM' (in single quotes) is a constant string, so the ORDER BY sorts every row by the same constant. To refer to the contents of the column, use TRACE_NUM without quotes.
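With that fix, the copy becomes (assuming the intent is descending TRACE_NUM, as in the question):
INSERT INTO TRACE_TABLE_TEMP
SELECT * FROM TRACE_TABLE
ORDER BY TRACE_NUM DESC;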

Insert returns error: (sub-select returns 6 columns - expected 1)

I have a table SIGHTINGS(NAME, PERSON, LOCATION, SIGHTED), and I'm trying to insert a new row into that table with the following query:
INSERT INTO SIGHTINGS (NAME, PERSON, LOCATION, SIGHTED)
VALUES ('Douglas dustymaiden', 'Person B', 'Double Mountain', '2005-11-28');
But it's returning this error:
[2017-12-04 17:08:18] [1] [SQLITE_ERROR] SQL error or missing database (sub-select returns 6 columns - expected 1)
I've looked up the correct syntax for SQLite inserts, and from what I can tell, the insert is written correctly. Can someone tell me why it's throwing this error instead of doing the insert? I'm using DataGrip 2017, if that helps identify any issues.
EDIT:
Here's the trigger I added to the database. The insert works without the trigger.
CREATE TRIGGER SightingLocationError
BEFORE INSERT ON SIGHTINGS
FOR EACH ROW
WHEN NEW.LOCATION NOT IN FEATURES
BEGIN
SELECT RAISE(ABORT, 'Error: Insert into the SIGHTINGS table references location that is not found in the database.');
END;
WHEN NEW.LOCATION NOT IN FEATURES
The FEATURES table has six columns, so the database does not know how it should search for the location value.
Use an explicit subquery to return the column you want to use for this:
WHEN NEW.LOCATION NOT IN (SELECT xxx FROM FEATURES)
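Applied to the trigger above, this would read as follows (assuming the location column in FEATURES is named NAME; substitute the actual column):
CREATE TRIGGER SightingLocationError
BEFORE INSERT ON SIGHTINGS
FOR EACH ROW
WHEN NEW.LOCATION NOT IN (SELECT NAME FROM FEATURES) -- NAME is a placeholder
BEGIN
SELECT RAISE(ABORT, 'Error: Insert into the SIGHTINGS table references location that is not found in the database.');
END;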

BTEQ: The activity count returned by DBS does not match the actual number of rows returned

When I export a table from Teradata using BTEQ, the output row count does not match the SELECT query count. The following is the warning shown by BTEQ:
Warning: The activity count returned by DBS does not match
the actual number of rows returned.
Activity Count=495294, Total Rows Returned=495286
Here is the select query,
SELECT CUST_ID, SPEC1_CODE FROM Table
GROUP BY 1,2
Here is the create table script,
CREATE MULTISET TABLE Table ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT
(
RECORD_KEY DECIMAL(20,0) NOT NULL,
CUST_ID VARCHAR(40) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
SPEC1_CODE VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC)
PRIMARY INDEX ( RECORD_KEY );
When we contacted Teradata support, they asked us to run the following statement:
DIAGNOSTIC NOAGGRENH ON FOR SESSION;
If we run the above statement and then run our SELECT/BTEQ export, it works fine.
I was hoping you would answer my questions in the comments sooner, but I'm going to throw this out as a possible reason for the discrepancy you are seeing in the warning message.
Your table is defined as MULTISET with a non-unique primary index, or possibly as a NOPI table in Teradata 13.x. There are no additional unique constraints or unique indexes on the table, and the table has been loaded with 8 duplicate rows of data.
For reasons that I cannot pinpoint based on your description, BTEQ returned a unique set of records although the optimizer indicates that the activity count for the statement was greater, hence the warning message you are seeing.
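A quick way to test the duplicate-row hypothesis is to count fully duplicate rows directly (table and column names as in the CREATE script above); if the extra copies sum to 8, that accounts for the gap between Activity Count=495294 and Total Rows Returned=495286:
SELECT RECORD_KEY, CUST_ID, SPEC1_CODE, COUNT(*) AS dup_count
FROM Table
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1;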
