Load historical data to a Teradata temporal table - teradata

I have a task to load an existing SQL Server table into a Teradata temporal table. The existing table is a Type 2 table and holds many versions of each record. I am planning to load version 1 first and then apply all the other versions one by one as updates.
The difficulty I am having is that in the existing table every record has a start time and an end time, and I need to carry those times into the Teradata temporal table as the validity period.
First I am trying the insert, and during the insert I am not able to supply an end time that is less than the current time; it fails with a "Check constraint violation" error. Below is a sample piece of code for creating the table and inserting.
I have not yet tested the updates, as I cannot get past this first step.
CREATE multiset TABLE EDW_T.edw_Contracts_History_Test
(
ID INTEGER,
Validity PERIOD(TIMESTAMP(3)) NOT NULL AS VALIDTIME
);
insert into EDW_T.edw_Contracts_History_Test(id,Validity) values(
1,period(cast('1996-01-20 05:00:00.000' as TIMESTAMP(3)), cast('2016-06-23 21:52:20.000' as TIMESTAMP(3))))
-- this passes, as 2016 is later than the current date
insert into EDW_T.edw_Contracts_History_Test(id,Validity) values(
1,period(cast('1996-01-20 05:00:00.000' as TIMESTAMP(3)), cast('2015-06-23 21:52:20.000' as TIMESTAMP(3))))
-- this fails, as I am trying to give an end time earlier than the current date.
Is there any way to give an end time earlier than the current date, or a way to disable this constraint temporarily and then re-enable it?
Please help. Thanks!

To insert history rows you should use the SEQUENCED VALIDTIME modifier.
E.g.:
SEQUENCED VALIDTIME
insert into EDW_T.edw_Contracts_History_Test(id,Validity) values(
1,period(cast('1996-01-20 05:00:00.000' as TIMESTAMP(3)), cast('2015-06-23 21:52:20.000' as TIMESTAMP(3))));
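Building on that, the remaining history versions could be loaded the same way instead of as updates. A minimal sketch, assuming a hypothetical staging table EDW_T.edw_Contracts_History_Stg that holds the SQL Server start/end times as start_ts and end_ts:
SEQUENCED VALIDTIME
insert into EDW_T.edw_Contracts_History_Test(id,Validity)
select id, period(start_ts, end_ts)
from EDW_T.edw_Contracts_History_Stg; -- hypothetical staging table
Each staged version then lands with its original validity period, so no follow-up updates are needed.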

Related

Insert on constraint-less table caused ORA-02449 unique/primary keys in table referenced by foreign keys

We have an old batch file which runs only this statement:
Insert into table_a select * from table_b;
table_a is a bulk table with no indexes or constraints.
After a few years, as record counts grew, this batch became slow, but suddenly, for the last few days, we have been getting this error every time we try to run the batch:
ORA-00604: error occurred at recursive SQL level 1
ORA-02449: unique/primary keys in table referenced by foreign keys
Our only option is to split the data into chunks and insert them part by part, which fixes the batch output, but the problem still exists.
We are not dropping any table or object here.
Can you help us find the cause of the problem?
I've checked database-level triggers, but there is no trigger for insert at the database level.
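Since ORA-02449 is the error normally raised when a DROP or TRUNCATE hits a table whose primary/unique key is referenced by foreign keys, one sanity check is to list any foreign keys that point at table_a. A sketch, assuming the table is named TABLE_A and is visible to the current user:
SELECT owner, constraint_name, table_name
FROM all_constraints
WHERE constraint_type = 'R'
AND r_constraint_name IN (SELECT constraint_name
                          FROM all_constraints
                          WHERE table_name = 'TABLE_A'
                          AND constraint_type IN ('P', 'U'));
If rows come back even though table_a is supposed to be constraint-less, the recursive SQL in the batch is a likely place to look next.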

Need to delete/insert the corresponding row in a table, as duplicate data is coming up - PL/SQL

I need help writing a PL/SQL procedure to insert/delete the rows of a table, because when I used the update functionality I got duplicates for a particular sequence ID field.
So for a particular sequence ID row, whenever I insert the data, it should be the latest in that table.
The last sentence you wrote suggests that you have to:
delete the row(s) whose ID equals that particular "sequence ID" value
then insert a new row
If you expected some code to be written, you should have posted more info (CREATE TABLE and INSERT INTO sample data, as well as the way you manipulate it by inserting a new row, showing what you expect to happen with the old one(s)). It is difficult to write code based on an unknown data model.
A guess...
INSERT INTO schema_name.table_name(
primary_key_column
, other_column
)
VALUES(
(SELECT max(primary_key_column)+1 FROM schema_name.table_name)
, 'other_value'
);
COMMIT;
This is the procedure I am using:
https://drive.google.com/file/d/1eGbxSppjexpICKh6pzuW0ZzckVxA6BB0/view?usp=sharing
My requirement is that when we insert new data, the previous data should be deleted for the corresponding ID.
In the above procedure I am updating the data instead.
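Given that requirement, the delete-then-insert pattern described above could look like this. A minimal sketch, with a hypothetical table my_table and columns seq_id and payload standing in for the unknown data model:
CREATE OR REPLACE PROCEDURE upsert_latest (
    p_seq_id  IN my_table.seq_id%TYPE,
    p_payload IN my_table.payload%TYPE
) AS
BEGIN
    -- remove any previous version(s) of this sequence ID
    DELETE FROM my_table
    WHERE seq_id = p_seq_id;
    -- insert the new row so it is the latest one for that ID
    INSERT INTO my_table (seq_id, payload)
    VALUES (p_seq_id, p_payload);
END upsert_latest;
/
Whether the COMMIT belongs inside the procedure or with the caller depends on the surrounding transaction logic.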

Teradata: How to extend the range partition of a non-empty partitioned table?

I created a table, mydb.mytable, with essentially the following SQL, say last week:
CREATE MULTISET TABLE mydb.mytable ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
master_transaction_header VARCHAR(64) CHARACTER SET LATIN NOT CASESPECIFIC,
demand_date DATE FORMAT 'YY/MM/DD',
item_id BIGINT,
QTY INTEGER,
price DECIMAL(15,2))
PRIMARY INDEX ( master_transaction_header )
PARTITION BY RANGE_N(demand_date BETWEEN DATE '2018-01-01' AND CURRENT_DATE EACH INTERVAL '1' DAY );
When I try to insert data into it, for, say, yesterday, Teradata gives me the following error message:
Partitioning violation for table mydb.mytable
When I try to extend the partition using:
ALTER TABLE mydb.mytable MODIFY PRIMARY INDEX (master_transaction_header) ADD RANGE BETWEEN DATE '2018-03-15' AND CURRENT_DATE EACH INTERVAL '1' DAY;
I get the following error message from Teradata:
The altering of RANGE_N definition with CURRENT_DATE/CURRENT_TIMESTAMP is not allowed.
I understand that I could:
Create a copy with PARTITION BY RANGE_N(demand_date BETWEEN DATE '2018-01-01' AND DATE '9999-12-31' EACH INTERVAL '1' DAY ), as sketched below
Insert all the data from the old table into the new one
Drop the old table
Rename the new table
but I am hoping that Teradata provides a more elegant way to add partitions to an existing partitioned table.
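For concreteness, a rough sketch of that copy-and-swap, after recreating the table as mydb.mytable_new (a hypothetical name) with the wider partition range:
INSERT INTO mydb.mytable_new SELECT * FROM mydb.mytable;
DROP TABLE mydb.mytable;
RENAME TABLE mydb.mytable_new TO mydb.mytable;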
I have already consulted the following stackoverflow posts:
Range partition table creation with large number of partitions
Teradata: How to add range partition to non empty table?
They were enlightening, but I could not conjure an answer from the discussion therein.
Using CURRENT_DATE for partitioning is possible, but I never found a use case for it.
When you create the table, CURRENT_DATE is resolved to that day's date and is not changed afterwards; check the ResolvedCurrent_Date column in dbc.PartitioningConstraintsV. When you submit an ALTER TABLE mydb.mytable TO CURRENT, it is resolved again and the range is modified.
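For reference, a quick way to see the resolved date and then re-resolve it, using the view and statement named above (a sketch):
SELECT ResolvedCurrent_Date
FROM dbc.PartitioningConstraintsV
WHERE DatabaseName = 'mydb'
AND TableName = 'mytable';
ALTER TABLE mydb.mytable TO CURRENT;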
But there's no reason to do this; simply define the range large enough that you never have to modify it again, e.g.
PARTITION BY RANGE_N(demand_date
BETWEEN DATE '2018-01-01'
AND DATE '2040-01-01' EACH INTERVAL '1' DAY);
Unused partitions have zero overhead in Teradata.

How to assign an ID but then delete if not used

I am unsure how to do this 'best practice' wise.
I have a web application (ASP.NET VB) that connects to an MS SQL Server 2012 database. Currently, when the page loads, the app reads the last ID from a DB table, adds 1 to it, and displays the result to the user. When the user submits the form, the new ID is saved to the DB.
The problem is that the app may be opened by 2 users at the same time, and they will then be assigned the same ref number, which causes problems when the data is saved.
How can I assign different numbers to different users if the app is opened at the same time, without saving unnecessary data?
You have multiple solutions for this; I'll try to outline a few approaches. (I'll assume that you need to insert things into a DB table that I'll call "Orders".)
First of all, you can move the ID generation to the moment when the order is actually inserted, not the moment when the user starts to enter the data. That way, you do not generate an ID for a user who never completes the form. This scenario is also easy to accomplish using autoincrementing values in SQL Server. For example, you can do:
-- create a table with an identity column
create table Orders (
ID int identity(1,1) not null,
Description nvarchar(max) not null
);
-- insert values, without specifying the ID column
insert into Orders (Description) values ('My First Order');
-- select the row back
-- returns 1, 'My First Order'
select * from Orders;
Another way to do this is to use SQL Server sequences. These are objects that do nothing except generate numbers in order. They guarantee that the numbers won't be repeated and always keep track of the current value, e.g.:
-- create a sequence
create sequence OrderIdSequence
start with 1
increment by 1;
-- get the next sequence value
select next value for OrderIdSequence
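The sequence can then supply the ID at insert time. A sketch against the Orders table above, assuming its ID is a plain int column rather than an identity:
-- assumes Orders.ID is a plain int column, not an identity
insert into Orders (ID, Description)
values (next value for OrderIdSequence, 'My Second Order');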

SQL Server Stored Procedure Creating Duplicates

I am running a website using SQL Server 2008 and ASP.NET 4.0. I am trying to track down an issue where my stored procedure is creating duplicate entries for the same date. Originally I thought this might be a double-post issue, but the duplicates record the same date down to the milliseconds. One of the duplicates is at '2013-04-26 15:48:28.323'. All of the data is exactly the same except for the ID.
@check_date is an input to the stored procedure which gives us the particular date we are looking at (entries are made daily).
@formHeaderId is grabbed earlier in the stored procedure, getting the header ID, as this is a detail table with a 1-to-many relationship with the header.
The getdate() entry is where I found the duplicates; there are entries with the exact same getdate() values for different rows.
This doesn't occur with every entry either; it happens randomly in the application.
select @formHeaderId=stage2_checklist_header_id
from stage2_checklist_header
where environmental_forms_id=@envFormId
and checklist_monthyear=@inspected_month
order by start_date desc
if @formHeaderId = 0 begin
insert into stage2_checklist_header(
environmental_forms_id
,start_date
,checklist_monthyear
,st2h_load_date )
values( @envFormId
,@check_date
,@inspected_month
,getdate())
set @formHeaderId = scope_identity()
print 'inserted new header record ' + cast(@formHeaderId as varchar(50))
end
IF (NOT EXISTS(
SELECT *
FROM stage2_checklist_detail
WHERE stage2_checklist_header_id = @formHeaderId
AND check_date = @check_date
))
INSERT INTO stage2_checklist_detail
(stage2_checklist_header_id, check_date, st2_chk_det_load_date,
inspected_by)
VALUES
(@formHeaderId, @check_date, GETDATE(), @inspected_by)
SET @form_detail_id = SCOPE_IDENTITY()
PRINT 'inserted detail record ' + CAST(@form_detail_id AS VARCHAR(50))
Here is a similar case where the developer was able to track the duplicate entries to simultaneous calls from different spids (which sidestepped the EXISTS check). After experimenting with isolation levels and transactions - and trying to avoid deadlocks - it sounds like the solution in that case was to use sp_getapplock and sp_releaseapplock.
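Applied to the procedure above, that serialization could look roughly like this (the lock resource name 'stage2_detail_insert' is an arbitrary label chosen for illustration):
BEGIN TRAN;
-- take an exclusive application lock so concurrent spids queue up
-- instead of racing past the NOT EXISTS check
EXEC sp_getapplock @Resource = 'stage2_detail_insert',
     @LockMode = 'Exclusive', @LockTimeout = 10000;
IF NOT EXISTS (SELECT * FROM stage2_checklist_detail
               WHERE stage2_checklist_header_id = @formHeaderId
               AND check_date = @check_date)
    INSERT INTO stage2_checklist_detail
        (stage2_checklist_header_id, check_date, st2_chk_det_load_date, inspected_by)
    VALUES (@formHeaderId, @check_date, GETDATE(), @inspected_by);
COMMIT; -- the app lock, owned by the transaction, is released here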
In the NOT EXISTS check, you are looking for records that have both the same ID and the same date. So, if the combination of ID AND date does not exist in the table, the row will be inserted.
In your description of the problem you state "All of the data is exactly the same except for the id". The ID being different will always cause an INSERT based on the logic you are using to check for existence.