Teradata ALTER TABLE to CURRENT taking too long

I am running a procedure in Teradata which contains the statement ALTER TABLE TABLE_NAME TO CURRENT.
It is taking too long to run, even though I have collected stats on the given table.
The table size is 130 GB.
Can someone help me figure out what the issue might be?
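A note on what TO CURRENT does, since it may explain the runtime: in Teradata, ALTER TABLE ... TO CURRENT re-resolves a partitioning (or join index) expression that references CURRENT_DATE or CURRENT_TIMESTAMP, and every row whose partition changes as a result is physically moved. On a 130 GB table the duration therefore depends on how many rows shift partitions, and collected statistics do not speed that up. A sketch of the kind of table where TO CURRENT has heavy work to do (hypothetical names, untested):

CREATE TABLE sales_hist (
  sale_id INTEGER,
  sale_dt DATE
)
PRIMARY INDEX (sale_id)
PARTITION BY RANGE_N(
  sale_dt BETWEEN CURRENT_DATE - INTERVAL '2' YEAR AND CURRENT_DATE
          EACH INTERVAL '1' MONTH,
  NO RANGE
);

-- Re-resolves CURRENT_DATE; rows whose partition changed since the last
-- resolution are moved, which is the expensive part:
ALTER TABLE sales_hist TO CURRENT;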

Related

How to modify column type fast in SQLite?

I have a 3 GB SQLite database. I want to modify the column type of one of the table's columns.
I know that SQLite does not support altering columns and that this can only be done by recreating the table.
Here is how I do it:
BEGIN TRANSACTION;
ALTER TABLE tbl RENAME TO tbl_;
CREATE TABLE tbl (a INTEGER, b TEXT, c TEXT);
INSERT INTO tbl SELECT * FROM tbl_;
DROP TABLE tbl_;
COMMIT;
I thought that since I use a transaction for this process, the database size would not increase. But it did, and there is not enough space on my disk to double the database size. Is it normal for the database size to increase within a transaction? Is there any other way of modifying a column type without increasing the database size?
This process also takes a lot of time. Unexpectedly, most of the time is taken by the DROP TABLE statement; it takes even longer than the INSERT statement. Why does dropping the table take longer than copying the data from one table to another?
Thanks in advance!
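One workaround worth sketching (it assumes a second disk or partition with free space is available; the file name is hypothetical): copy the rebuilt table into a separate database file via ATTACH, so the original file never has to grow, then swap the files at the filesystem level.

ATTACH DATABASE '/other/disk/new.db' AS newdb;
CREATE TABLE newdb.tbl (a INTEGER, b TEXT, c TEXT);
INSERT INTO newdb.tbl SELECT * FROM tbl;  -- the old file is only read here
DETACH DATABASE newdb;
-- Then replace the original file with /other/disk/new.db outside SQLite
-- (assuming tbl is the only table; otherwise copy the remaining tables too).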

Insert on constraint-less table caused ORA-02449: unique/primary keys in table referenced by foreign keys

We have an old batch file which does only this statement:
INSERT INTO table_a SELECT * FROM table_b;
table_a is a bulk table with no indexes and no constraints.
After a few years, as record counts grew, this batch became slow.
But suddenly, for the past few days, we have gotten this error every time we try to run the batch:
ORA-00604: error occurred at recursive SQL level 1
ORA-02449: unique/primary keys in table referenced by foreign keys
Our only option is to split the data into chunks and insert them part by part, which fixes the batch output, but the problem still exists.
We are not dropping any table or object here.
Can you help us find the cause of the problem?
I have checked database-level triggers, but there is no insert trigger at the database level.
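Since ORA-02449 normally accompanies a DROP or TRUNCATE of a table whose keys are referenced, one first step worth trying (a sketch using Oracle's standard dictionary views; the owner name is hypothetical, adjust to yours) is to verify whether any foreign keys actually reference keys on table_a, which would suggest recursive SQL is touching the table behind the scenes:

-- List foreign-key constraints that reference primary/unique keys on TABLE_A
SELECT c.owner, c.table_name, c.constraint_name
FROM all_constraints c
WHERE c.constraint_type = 'R'
  AND c.r_owner = 'MYSCHEMA'              -- hypothetical owner
  AND c.r_constraint_name IN (
        SELECT p.constraint_name
        FROM all_constraints p
        WHERE p.owner = 'MYSCHEMA'
          AND p.table_name = 'TABLE_A'
          AND p.constraint_type IN ('P', 'U')
      );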

How to select first 200 rows in Oracle without a full table scan

I need to select the first 200 rows from my table without a full table scan. If I scan the full table it takes too much time, because my table contains 160 million records. I am using Oracle 11g.
Do you really need to avoid a full table scan in this case? I expect that
SELECT * FROM table WHERE ROWNUM <= 200;
runs pretty fast and starts returning results immediately, despite the FTS, even with a table containing millions of rows.
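One caveat worth adding (table and column names here are hypothetical): ROWNUM is assigned before sorting, so if a specific ordering matters, the filter has to sit outside the sort, which does force Oracle to read the candidate rows. On 11g, which predates FETCH FIRST, that looks like:

SELECT *
FROM (
  SELECT * FROM my_table ORDER BY created_at DESC
)
WHERE ROWNUM <= 200;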

Load historical data to Teradata temporal table

I have a task to load an existing SQL Server table into a Teradata temporal table. The existing table is a type 2 table and has many versions of each record. I need to load them into the Teradata temporal table. I am planning to load version 1 first and then apply all other versions one by one as updates.
The difficulty I am having is that in the existing table every record has a start time and an end time, and I need to carry that time range into the Teradata temporal table as the validity period.
First I am trying the insert, and I am not able to insert an end time earlier than the current time; it reports the error "Check constraint violation". Below is a sample piece of code for creating the table and inserting.
I have yet to test the updates, as I am not able to complete this first step.
CREATE multiset TABLE EDW_T.edw_Contracts_History_Test
(
ID INTEGER,
Validity PERIOD(TIMESTAMP(3)) NOT NULL AS VALIDTIME
);
insert into EDW_T.edw_Contracts_History_Test(id,Validity) values(
1,period(cast('1996-01-20 05.00.00.000' as TIMESTAMP(3)), cast('2016-06-23 21.52.20.000' as TIMESTAMP(3))))
-- this passes, as the 2016 end time is later than the current date
insert into EDW_T.edw_Contracts_History_Test(id,Validity) values(
1,period(cast('1996-01-20 05.00.00.000' as TIMESTAMP(3)), cast('2015-06-23 21.52.20.000' as TIMESTAMP(3))))
-- this fails, as the end time is earlier than the current date
Is there any way to give an end time earlier than the current date, or any way to disable this constraint temporarily and then re-enable it?
Please help. Thanks!
To insert history rows you should use the SEQUENCED VALIDTIME modifier.
E.g.:
SEQUENCED VALIDTIME
insert into EDW_T.edw_Contracts_History_Test(id,Validity) values(
1,period(cast('1996-01-20 05.00.00.000' as TIMESTAMP(3)), cast('2015-06-23 21.52.20.000' as TIMESTAMP(3))));
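For the later update step the asker plans, Teradata's temporal syntax also allows a period of applicability on a sequenced update. A rough, untested sketch (it assumes the real table carries a data column beyond the ID shown above, here a hypothetical contract_status):

SEQUENCED VALIDTIME PERIOD '(1996-01-20, 2015-06-23)'
UPDATE EDW_T.edw_Contracts_History_Test
SET contract_status = 'EXPIRED'
WHERE ID = 1;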

Strange result in DB2: diverging query results

Something strange, whose cause I don't know, is happening when trying to collect results from a DB2 database.
The query is the following:
SELECT COUNT(*)
FROM MYSCHEMA.TABLE1 T1
WHERE NOT EXISTS (
    SELECT *
    FROM MYSCHEMA.TABLE2 T2
    WHERE T2.PRIMARY_KEY_PART_1 = T1.PRIMARY_KEY_PART_2
      AND T2.PRIMARY_KEY_PART_2 = T1.PRIMARY_KEY_PART_2
)
It is a very simple one.
The strange thing is that if I change COUNT(*) to *, I get 8 rows, whereas with COUNT(*) I get only 2. I have repeated the process several times and the strange result persists.
In this example, TABLE2 is a parent table of TABLE1: the primary key of TABLE1 is (PRIMARY_KEY_PART_1, PRIMARY_KEY_PART_2), and the primary key of TABLE2 is (PRIMARY_KEY_PART_1, PRIMARY_KEY_PART_2, PRIMARY_KEY_PART_3).
There is no foreign key between them (they are legacy tables) and they hold a huge amount of data.
The DB2 query SELECT VERSIONNUMBER FROM SYSIBM.SYSVERSIONS returns:
7020400
8020400
9010600
The client used is SQuirreL SQL 3.6 (with no row limit set).
So, what is the explanation for this strange result?
Without more details (including, at least, the exact Db2 version and the DDL for both tables and their indexes) it could be almost anything, and even with those details only IBM support will really be able to say what the actual reason is.
Generally this looks like damaged data (e.g. differences between index and table data).
It is worth opening a support case with IBM.
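If opening a case is slow, one SQL-only check that might narrow things down (a sketch reusing the predicates above): wrap the row-returning form in a derived table and count that. If this count disagrees with the direct COUNT(*), the two forms are almost certainly taking different access paths (for example an index-only scan versus a table scan), which supports the damaged-index theory.

SELECT COUNT(*)
FROM (
    SELECT T1.*
    FROM MYSCHEMA.TABLE1 T1
    WHERE NOT EXISTS (
        SELECT 1
        FROM MYSCHEMA.TABLE2 T2
        WHERE T2.PRIMARY_KEY_PART_1 = T1.PRIMARY_KEY_PART_2
          AND T2.PRIMARY_KEY_PART_2 = T1.PRIMARY_KEY_PART_2
    )
) AS X;
-- If the counts differ, check index/table consistency (e.g. with the
-- INSPECT utility) before assuming an optimizer bug.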
