Teradata LOCK ROW FOR ACCESS on insert query into a VOLATILE TABLE

I have a VOLATILE TABLE in Teradata that I created with the code below:
CREATE VOLATILE TABLE Temp
(
ID VARCHAR(30),
has_cond INT
) ON COMMIT PRESERVE ROWS;
I want to insert records from a select statement I have created. It is a pretty big SQL statement and definitely requires a row lock before proceeding:
INSERT INTO Temp
(ID ,has_cond)
SELECT * FROM....
Can anyone tell me how to safely lock the rows so I can insert the records into my VOLATILE TABLE? The source tables are production tables and I don't want to lock out some ETL that might be happening in the background.

I don't think you can apply a row-level access lock to an INSERT ... SELECT unless you put the SELECT in a view.
Or you switch to a table-level lock (LOCKING TABLE ... FOR ACCESS), but don't forget to include all of the source tables...
But in most production environments there's a database with 1:1 views that already include LOCKING ROW FOR ACCESS; you can use those (or you might already, check the Explain). A sketch of the view approach follows.
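This is a minimal sketch with made-up database, view, and source-table names (the columns are the ones from the volatile table above); the same view pattern appears in a question further down:
REPLACE VIEW MyDb.V_Source AS
LOCKING ROW FOR ACCESS
SELECT ID, has_cond
FROM MyDb.SourceTable;

INSERT INTO Temp (ID, has_cond)
SELECT ID, has_cond
FROM MyDb.V_Source;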

Related

Is it possible to run a Teradata query in Excel that uses Volatile tables?

My Teradata query creates a volatile table that is used to join to existing views. When linking the query to Excel, the following error pops up: "Teradata: [Teradata Database] [3932] Only an ET or null statement is legal after a DDL Statement". Is there a workaround for this for someone who does not have write permissions in Teradata to create a real view or table? I want to avoid linking to Teradata in SQL and running an open query to pull in the data needed.
This is for Excel 2016 64-bit, using Teradata version 15.10.1.12.
Normally this error will occur if you are using ANSI mode or have issued a BT (Begin Transaction) in BTET mode.
Here are a few workarounds to try:
1. Issue an ET; statement (commit) after the CREATE VOLATILE TABLE statement. If you are using ANSI mode, use COMMIT; instead of ET;. If you are unsure, try each one in turn; only one will be valid, but both do the same thing. Make sure your volatile table includes ON COMMIT PRESERVE ROWS.
2. Try using BTET mode (a.k.a. Teradata mode) when establishing the session. I do not remember where, but there will be a setting for this in the ODBC configuration.
3. Try using a Global Temporary table (there is a sketch after this list). These work similarly to volatile tables, except that you define them once and the definition sticks around. That is, you can create it in, say, BTEQ or SQL Assistant. The definition is common to all users and sessions (i.e. your Excel session), but the content is transient and unique to each session (like a volatile table).
4. Move the select part of your insert into the volatile table into the query that selects the data from the volatile table. See the simple example below.
5. If you do not have CREATE GLOBAL TEMPORARY TABLE permissions, ask your DBA.
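As a rough sketch of point 3 (made-up table name, same columns as the example below): create the Global Temporary table once, e.g. in SQL Assistant or BTEQ, so the query you link to Excel contains no DDL:
-- Run once in SQL Assistant or BTEQ:
CREATE GLOBAL TEMPORARY TABLE gtt_tmp (id INTEGER)
ON COMMIT PRESERVE ROWS;

-- Then the Excel query only needs DML:
insert into gtt_tmp
select customer_number
from customer
where X = Y and yr = 2019
;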
Here is a simple example to illustrate point 4.
Current:
create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
insert into tmp
select customer_number
from customer
where X = Y and yr = 2019
;
select a,b,c
from another_tbl A join TMP T ON
A.id = T.id
;
Becomes:
select a,b,c
from another_tbl A join (
select customer_number
from customer
where X = Y and yr = 2019
) AS T
ON
A.id = T.id
;
Or better yet, just join your tables directly.
Note: The first sequence (create table, insert into, and select) is a three-statement series. It will return 3 "result sets": the first two will be row counts, and the last will be the actual data. Most programs (including, I think, Excel) cannot process multiple result-set responses. This is one of the reasons it is difficult to use Teradata macros with client tools like Excel.
The latter solution (a single select) avoids this potential problem.

Teradata - how to select without locking writers? (LOCKING ROW FOR ACCESS vs. LOCKING TABLE FOR ACCESS)

I am developing an application which fetches some data from a Teradata DWH. The DWH developers told me to use LOCK ROW FOR ACCESS before all SELECT queries to avoid delaying writes to those tables.
Being very familiar with MS SQL Server's WITH(NOLOCK) hint, I see LOCK ROW FOR ACCESS as its equivalent. However, INSERT and UPDATE statements do not allow using LOCK ROW FOR ACCESS (it is not clear to me why this fails, since it should apply to the table(s) the statement selects from, not to the one I insert into):
-- this works
LOCK ROW FOR ACCESS
SELECT Cols
FROM Table
-- this does not work
LOCK ROW FOR ACCESS
INSERT INTO SomeVolatile
SELECT Cols
FROM PersistentTable
I have seen that LOCKING TABLE ... FOR ACCESS can be used, but it is unclear if it fits my need (NOLOCK equivalent - do not block writes).
Question: What hint should I use to minimize the delaying of writes when selecting within an INSERT statement?
You can't use LOCK ROW FOR ACCESS on an INSERT-SELECT statement. The INSERT statement will put a WRITE lock on the table to which it's writing and a READ lock on the tables from which it's selecting.
If it's absolutely imperative that you get LOCK ROW FOR ACCESS on the INSERT-SELECT, then consider creating a view like:
CREATE VIEW tmpView_PersistentTable AS
LOCK ROW FOR ACCESS
SELECT Cols FROM PersistentTable;
And then perform your INSERT-SELECT from the view:
INSERT INTO SomeVolatile
SELECT Cols FROM tmpView_PersistentTable;
Not a direct answer, but it's always been my understanding that this is one of the reasons your users/applications/etc. should access data through views. Views defined with an access lock do not block inserts/updates, whereas selecting directly from a table uses read locks, which will block inserts/updates.
The downside is that with access locks, the possibility of dirty reads exists.
Change your query as below and you should be good.
LOCKING TABLE PersistentTable FOR ACCESS
INSERT INTO SomeVolatile
SELECT Cols
FROM PersistentTable ;

Trigger in SQLite across different databases

I have 2 different databases, 'A' and 'B'. I need to create a trigger so that when I insert an entry into table 'T1' of database 'A', the entries of table 'T2' of database 'B' get deleted.
Kindly suggest a way!
This is not possible.
In SQLite, DML inside a trigger body can only modify tables in the same database as the trigger (see the CREATE TRIGGER documentation); you cannot modify tables of an attached database.
Similarly, you cannot declare triggers on tables of an attached database (to do it the other way around) unless you declare them TEMPORARY.
Hence, (only) the following is possible:
For A.sqlite:
create table T1(id integer primary key);
For B.sqlite:
create table T2(id integer primary key);
attach 'A.sqlite' as A;
create temporary trigger T1_ins after insert on A.T1
begin
delete from T2 where id = NEW.id;
end;
But that trigger exists only within the connection that declared it, so the deletes in T2 only happen for inserts into A.T1 made through that connection. If you opened A.sqlite separately, the trigger would not be there.
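To see it work, a quick sketch for the sqlite3 shell (the id value 42 is made up), run in the same connection that was opened on B.sqlite, attached A, and created the trigger:
insert into A.T1 values (42);           -- fires the temporary trigger
select count(*) from T2 where id = 42;  -- any matching row in B's T2 is gone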

Not sure about the type of SQL Server lock to use for synchronization

I have an ASP.NET web application that populates the SQL Server 2008 database table like this:
INSERT INTO tblName1 (col1, col2, col3)
VALUES(1, 2, 3)
I also have a separate service application that processes the contents of that table (on the background) by first renaming that table, and then by creating an empty table as such:
SET XACT_ABORT ON
BEGIN TRANSACTION
--Rename table
EXEC sp_rename 'tblName1', 'temp_tblName1'
--Create new table
CREATE TABLE tblName1(
id INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
col1 INT,
col2 INT,
col3 INT
)
COMMIT
SET XACT_ABORT OFF
--Begin working with the 'temp_tblName1' table
What I am not sure about is which SQL lock I need to use on the tblName1 table in this situation.
PS. To give you an idea of the frequency with which these two code samples run: the first may run several times a second (although most of the time less frequently), and the second one twice a day.
As some of the comments have suggested, consider doing this differently. You may benefit from using the snapshot isolation level. Using snapshot isolation requires ALLOW_SNAPSHOT_ISOLATION to be set to ON on the database. This setting is off by default, so you'll want to check whether you can turn it on.
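For reference, checking and enabling the setting looks roughly like this (the database name is a placeholder):
-- Check the current setting:
SELECT name, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'YourDatabase';

-- Turn it on (requires ALTER DATABASE permission):
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;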
Once you are able to use snapshot isolation, you would not need to change your INSERT statement, but your other process could change to something like:
SET XACT_ABORT ON
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
-- Do whatever this process does, but don't rename the table.
-- If you want to get rid of the old records:
DELETE [tblName1] WHERE 1 = 1
-- Then
COMMIT TRANSACTION
In case you really do need to create a new non-temporary table for some reason, you may need to do so before entering the transaction, as there are some limits on what you are allowed to do during snapshot isolation.

Database Isolation Models

I have a database with an "ID" column. Whenever there is a new entry for the database, I fetch the last ID from the database, increment the value, and then use it in the Insert statement.
EDIT: I need to use the ID in multiple Insert statements. I will fetch this ID from the primary table and use it to insert values into the related tables.
NextID = Select Max(ID) + 1 From Table
INSERT INTO Table1(ID, Col1, Col2...) Values(NextId, Value1, Value2...)
INSERT INTO Table2 (ID,col1,col2....) Values (NextID, Value1, Value2...)
I don't know if this is a good way, because I know there will be concurrency issues.
When my application tries to read the NextID, there is a chance that another instance of the application is also trying to read the same value and thus concurrency issues may arise.
Is there a proper way to deal with this situation? I mean, there are ways to set the database isolation level; which would be the proper isolation level for this situation?
Also, if anybody could suggest an alternate way to manually maintain and increment the ID in the database, I'm open to that.
If this information is not enough, please let me know what you require.
I am working with ASP.Net with VB and MS Sql Server 2008. I do not want to use the built-in "Identity" of SQL Server.
The only way to get the next ID is to actually insert the row, and use identity. Everything else will fail. So you must start by inserting into the parent table:
begin transaction;
declare @Id int;
insert into Table (col1, col2, col3) values (value1, value2, value3);
set @Id = scope_identity();
insert into Table1 (ID, col1, col2) values (@Id, ...);
insert into Table2 (ID, col1, col2) values (@Id, ...);
commit;
This is atomic and concurrency safe.
I do not want to use the built-in "Identity" of SQL Server.
tl;dr: What you 'want' matters little unless you can make a clear justification why. You can do it correctly, or you can spend time till oblivion reinventing the wheel.
Essentially you have a batch of three SQL statements: one select and two inserts. The database engine can execute another statement from a different session anywhere between them, thus breaking your data consistency: some other session can get the same MAX() value that you've got and use it for its own insert statements. The way to prevent the DB engine from doing that is to wrap your batch in a transaction whose locking is strict enough to block concurrent readers of the same MAX() value; a plain BEGIN TRANSACTION ... COMMIT at the default isolation level is not enough on its own. A sketch follows.
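For example, a minimal sketch (keeping the asker's placeholder table, column, and value names) that serializes concurrent readers of MAX(ID) with locking hints:
BEGIN TRANSACTION;

DECLARE @NextID INT;

-- UPDLOCK + HOLDLOCK make other sessions running this same SELECT wait
-- until this transaction commits, so two sessions cannot get the same MAX(ID).
SELECT @NextID = MAX(ID) + 1
FROM [Table] WITH (UPDLOCK, HOLDLOCK);

INSERT INTO Table1 (ID, Col1, Col2) VALUES (@NextID, Value1, Value2);
INSERT INTO Table2 (ID, Col1, Col2) VALUES (@NextID, Value1, Value2);

COMMIT TRANSACTION;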
Your way of doing this is fine; what you would need is transaction handling:
BEGIN TRANSACTION
begin try
DECLARE @NextID INT;
SELECT @NextID = Max(ID) + 1 From [Table];
INSERT INTO Table1 (ID, Col1, Col2...) Values (@NextID, Value1, Value2...)
INSERT INTO Table2 (ID, col1, col2....) Values (@NextID, Value1, Value2...)
COMMIT TRANSACTION
end try
begin catch
ROLLBACK TRANSACTION
--exception logging goes here
end catch
