How to create a Teradata table by copying and modifying another Table? - teradatasql

I am trying to create a new Teradata table by copying another table, but I also need to add one new column whose value is derived from a condition on an existing column of the old table. Can you help me with the code?
create Table new_table as
(select *
from old_table) with data;
ALTER TABLE new_table ADD new_col varchar(20) check(new_col in ('National', 'Local'));
-- There is a column in old_table with values 'Y'/'N'. How can I populate the new column in
-- new_table with this condition: if 'Y' then new_col = 'National', if 'N' then new_col = 'Local'?
Thank you.

You can't create a check constraint that will immediately be violated. Also note that CREATE TABLE AS (SELECT ...) results in all columns being nullable, and if you don't explicitly specify a Primary Index the new table will use the system default, e.g. the first column alone as the PI. A CASE expression can be used to populate the new column.
One possible sequence:
create Table new_table as old_table with no data; -- copies index definitions and NOT NULL attributes
ALTER TABLE new_table ADD new_col varchar(20) check(new_col in ('National', 'Local'));
INSERT INTO new_table
SELECT o.*,
       CASE WHEN o.old_col = 'Y' THEN 'National' ELSE 'Local' END
FROM old_table o;
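As a quick sanity check after the load (assuming old_col is the Y/N column referenced above; this query is illustrative, not part of the original answer), you can confirm the mapping:
SELECT new_col, COUNT(*)
FROM new_table
GROUP BY new_col;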

Related

Update statement for a list of strings

I'm using sqlite3 and my table has a text field that actually has a list of strings.
So a sample select (select * from table where id=1) would return for example
1|foo#bar.com|21-03-2015|["foo", "bar", "foobar"]
I couldn't figure out what the sqlite statement for updating the list should be, though. I tried
update table set list="["foo", "bar"] where id=1;
update table set list=["foo", "bar"] where id=1;
update table set list="\["foo", "bar"\]" where id=1;
update table set list=(value) where id=1 VALUES (["foo", "bar"])
This is the statement you need. The value is an ordinary single-quoted SQL string literal, so the inner double quotes need no backslash escaping (SQLite would store the backslashes literally):
UPDATE table SET list = '["foo", "bar"]' WHERE id = 1;
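Alternatively, if your SQLite build includes the JSON1 functions, json_array builds an equivalent JSON array and handles the quoting for you (apart from minor whitespace differences). This is a sketch, not part of the original answer; "table" stands for your actual table name:
UPDATE table SET list = json_array('foo', 'bar') WHERE id = 1;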

How to write a delete trigger in Oracle PL/SQL - stuck on identifying "matching rows"

I have a trigger I'm writing whereby, once I delete a row, I want to delete the corresponding row in another table (which is common_cis.security_function); the source table is party.security_function.
Here are the columns in common_cis.security_function:
URL
SCRTY_FUNC_NAME
SCRTY_FUNC_DESC
IDN
CREATE_TMSTMP
CNCRCY_USER_IDN
Here are the columns in party.security_function:
UPDATE_USER_SRC_SYS_CD
UPDATE_USER_ID
UPDATE_TS
SCRT_FUNC_NM
SCRT_FUNC_DESC
CREAT_USER_SRC_SYS_CD
CREAT_USER_ID
CREAT_TS
What I have so far is:
delete from common_cis.security_function CCSF
where CCSF.SCRTY_FUNC_NAME = :new.SCRT_FUNC_NM;
Is this the right idea? Or do I use some kind of row ID?
thanks
I think you should use integrity constraints for that, namely a foreign key constraint with the "ON DELETE CASCADE" option.
Here is an example, but first check whether your schema already contains tables with the names I used:
-- create tables:
create table master_table(
URL varchar2(1000),
SCRTY_FUNC_NAME varchar2(100),
SCRTY_FUNC_DESC varchar2(1000));
create table detail_table(
SCRT_FUNC_NM varchar2(100),
SCRT_FUNC_DESC varchar2(1000),
UPDATE_USER_ID number,
UPDATE_TS varchar2(100));
-- add primary key and foreign key constraints:
alter table master_table add constraint function_pk primary key (SCRTY_FUNC_NAME);
alter table detail_table add constraint function_fk foreign key (SCRT_FUNC_NM) references master_table (SCRTY_FUNC_NAME) on delete cascade;
-- fill tables with data:
insert into master_table
values ('url number 1', 'sec function #1', 'description of function #1');
insert into detail_table
values('sec function #1', 'description', 1, '123abc');
insert into detail_table
values('sec function #1', 'description', 2, '456xyz');
-- check tables: the first contains 1 row and the second contains 2 rows
select count(*) from master_table;
select count(*) from detail_table;
-- delete rows from first table only:
delete from master_table;
-- check tables once again - both are empty:
select count(*) from master_table;
select count(*) from detail_table;
-- clear test tables:
drop table detail_table;
drop table master_table;
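If you do want the trigger itself rather than a cascading foreign key, note that in a DELETE trigger only the :old values are populated, so the :new reference in your snippet would always be NULL. A minimal sketch along the lines of the question (the trigger name is illustrative, and privileges on both schemas are assumed):
create or replace trigger party_sec_func_after_delete
after delete on party.security_function
for each row
begin
  delete from common_cis.security_function ccsf
  where ccsf.SCRTY_FUNC_NAME = :old.SCRT_FUNC_NM;
end;
/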

How to select a column based on its order in a table

I want to select a column from a table based on its order, like:
create Table Products
(
ProductId Int,
ProductName varchar(50)
)
Let's say I don't know the name of the second column. How can I get it, like:
Select Col1,Col2 From Product
For SQL Server:
You can't do this in the SELECT clause. You can't select a column based on its ordinal number; you have to list the column names you need explicitly, or else use SELECT * to return them all. If you are using a data reader or other ADO.NET methods to fetch the data, you can access columns by position, but that position is still based on the column list in your SQL statement.
However, you can do something like this dynamically by reading the ordinal_position column metadata from information_schema.columns, as explained in the following answer:
Is it possible to select sql server data using column ordinal position?
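For illustration, the metadata lookup itself is a plain query (using the Products table from the question as an assumed example):
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'Products'
  AND ordinal_position = 2; -- returns 'ProductName'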
But, you can do this in the ORDER BY clause. You can ORDER BY column number:
SELECT *
FROM TableName
ORDER BY 2; -- for col2
But using column numbers is not recommended, in ORDER BY or anywhere else. Furthermore, column order is not significant in the relational model.
Update: if you want to select the first three columns of any table whose name is passed to your stored procedure, try the following. The stored procedure is supposed to receive a parameter @tableNameParam; the code below builds and runs a SELECT of the first three columns of that table:
DECLARE @col1 AS VARCHAR(100);
DECLARE @col2 AS VARCHAR(100);
DECLARE @col3 AS VARCHAR(100);
DECLARE @tableNameParam AS VARCHAR(50) = 'Tablename';
DECLARE @sql AS VARCHAR(MAX);

SELECT @col1 = column_name FROM information_schema.columns
WHERE table_name = @tableNameParam
  AND ordinal_position = 1;

SELECT @col2 = column_name FROM information_schema.columns
WHERE table_name = @tableNameParam
  AND ordinal_position = 2;

SELECT @col3 = column_name FROM information_schema.columns
WHERE table_name = @tableNameParam
  AND ordinal_position = 3;

SET @sql = 'SELECT ' + @col1 + ', ' + @col2 + ', ' + @col3 + ' FROM ' + @tableNameParam;
EXEC (@sql);
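One design note (not part of the original answer): because the statement is assembled by string concatenation, it is safer to wrap the identifiers with QUOTENAME before building the dynamic SQL:
SET @sql = 'SELECT ' + QUOTENAME(@col1) + ', ' + QUOTENAME(@col2) + ', ' + QUOTENAME(@col3)
         + ' FROM ' + QUOTENAME(@tableNameParam);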
You can always do:
select * from Product
I'd like to share the following code as a solution to CRUD processing by ordinal position within a table. I had this problem today and it took me quite a long time to research and find a working solution. Many of the posted answers indicated that it was not possible to interact with a table's columns on an ordinal basis, but as indicated in the post above, the information_schema views do let you work with column position.
My situation was a table populated from a pivot view, so the columns are always changing based on the data. That is fine in a view result, but when the dataset is stored into a table the columns are dynamic. The column names are year-month combinations such as 201801 and 201802, with an item number as the primary key. The pivot table holds manufacturing quantities by year-month over a rolling 12-month period, so each month the column names shift, which changes their ordinal position when the table is rebuilt.
The pivot view is used to build the staging table, and the staging table is used to build the target table, so the same ordinal positions line up between the staging and target tables.
Declare @colname Varchar(55) -- Column Name
Declare @ordpos INT          -- Ordinal Position
Declare @Item Varchar(99)    -- PK
Declare @i INT               -- Counter
Declare @cnt INT             -- Count
Declare @ids table(idx int identity(1,1), Item Varchar(25))
-- Item List
Insert INTO @ids Select Item From DBName.Schema.TableName
select @i = min(idx) - 1, @cnt = max(idx) from @ids
-- Row Loop
While @i < @cnt
Begin
    Select @i = @i + 1
    Set @ordpos = 3
    Set @Item = (select Item from @ids where idx = @i)
    -- Column Loop
    While @ordpos < 27
    Begin
        Select @colname = column_name From INFORMATION_SCHEMA.Columns Where table_name = 'TargetTable' and ordinal_position = @ordpos
        Exec ('Update TargetTable set [' + @colname + '] = (Select [' + @colname + '] From StagingTable Where Item = ''' + @Item + ''') where Item = ''' + @Item + '''')
        Set @ordpos = @ordpos + 1
    End -- End Column Loop
End -- End Row Loop
The code here loops through the item matrix by rows and by columns and uses dynamic SQL to build the action, in this case an update, but it could just as easily be a select. Each column is processed by the inner While loop, and then the outer loop moves on to the next row. This allows updates to a specific cell in the matrix by (Item x YearMonth) without actually knowing the column name at a given position.
The one concern is that, depending on the size of the data in this matrix, it can be SLOW. I just wanted to show this as a way to use unknown column names at an ordinal position.

Create table in SQLite only if it doesn't exist already

I want to create a table in a SQLite database only if it doesn't exist already. Is there any way to do this? I don't want to drop the table if it exists, only create it if it doesn't.
From http://www.sqlite.org/lang_createtable.html:
CREATE TABLE IF NOT EXISTS some_table (id INTEGER PRIMARY KEY AUTOINCREMENT, ...);
I am going to try to add value to this very good question and to build on @BrittonKerin's question in one of the comments under @David Wolever's fantastic answer. I wanted to share here because I had the same challenge as @BrittonKerin and got something working (i.e. I just want to run a piece of code only if the table doesn't exist).
import sqlite3

# for completeness, let's do the routine thing of connections and cursors
conn = sqlite3.connect(db_file, timeout=1000)
cursor = conn.cursor()

# get the count of tables with the name
tablename = 'KABOOM'
cursor.execute("SELECT count(name) FROM sqlite_master WHERE type='table' AND name=? ", (tablename,))
row = cursor.fetchone()  # a tuple containing the count(name) integer
print(row)

# check if the db has an existing table named KABOOM
# if the count is 1, then the table exists
if row[0] == 1:
    print('Table exists. I can do my custom stuff here now.... ')
else:
    # the table doesn't exist
    custRET = myCustFunc(foo, bar)  # replace this with your custom logic

Add not null DateTime column to SQLite without default value?

I can't add a NOT NULL constraint or remove a default constraint. I would like to add a datetime column to a table and have all the values set to anything (perhaps 1970 or the year 2000), but it seems like I can't use NOT NULL without a default, and I can't remove a default once it's added. So how can I add this column? (Once again, just a plain datetime NOT NULL.)
Instead of using ALTER TABLE ADD COLUMN, create a new table that has the extra column, and copy your old data. This will free you from the restrictions of ALTER TABLE and let you have a NOT NULL constraint without a default value.
ALTER TABLE YourTable RENAME TO OldTable;
CREATE TABLE YourTable (/* old cols */, NewColumn DATETIME NOT NULL);
INSERT INTO YourTable SELECT *, '2000-01-01 00:00:00' FROM OldTable;
DROP TABLE OldTable;
Edit: The official SQLite documentation for ALTER TABLE now warns against the above procedure because it “might corrupt references to that table in triggers, views, and foreign key constraints.” The safe alternative is to use a temporary name for the new table, like this:
CREATE TABLE NewTable (/* old cols */, NewColumn DATETIME NOT NULL);
INSERT INTO NewTable SELECT *, '2000-01-01 00:00:00' FROM YourTable;
DROP TABLE YourTable;
ALTER TABLE NewTable RENAME TO YourTable;
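If other tables reference YourTable through foreign keys, the documented procedure additionally wraps the rebuild in a transaction with foreign keys disabled. Roughly, following the steps on the ALTER TABLE page (a sketch, not code from the original answer):
PRAGMA foreign_keys = OFF;
BEGIN TRANSACTION;
CREATE TABLE NewTable (/* old cols */, NewColumn DATETIME NOT NULL);
INSERT INTO NewTable SELECT *, '2000-01-01 00:00:00' FROM YourTable;
DROP TABLE YourTable;
ALTER TABLE NewTable RENAME TO YourTable;
PRAGMA foreign_key_check;
COMMIT;
PRAGMA foreign_keys = ON;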
