We have an application that creates a table with a randomly generated name. I would like to create a trigger on this table. Since I do not know the name of the table, I would like to get it from the ALL_TABLES view. How can I go about achieving something like this?
create or replace trigger t1
after insert or update on (select table_name from all_tables where owner = 'CustomAPP' and table_name like 'STAGE_%')
-- for each row
declare
-- local variables here
begin
end t1;
The SQL above obviously gives an error because of the select clause after the create trigger instead of a table name. Please advise
You would need to make the entire CREATE TRIGGER statement dynamic in order to do this. Something like this should work. You probably want to make the trigger name depend on the name of the table, since your query against ALL_TABLES might return multiple rows. And you certainly want the trigger to do something rather than having an empty body.
SQL> create table stg_12345( col1 number );
Table created.
SQL> begin
2 for x in (select *
3 from user_tables
4 where table_name like 'STG%')
5 loop
6 execute immediate
7 'create or replace trigger trg_foo ' ||
8 ' before insert on ' || x.table_name ||
9 ' for each row ' ||
10 'begin ' ||
11 ' null; ' ||
12 'end;';
13 end loop;
14 end;
15 /
PL/SQL procedure successfully completed.
SQL> select count(*) from user_triggers where trigger_name = 'TRG_FOO';
COUNT(*)
----------
1
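If you do want one trigger per table, a hedged variant is to derive the trigger name from the table name (this assumes the generated names leave room for a TRG_ prefix within the identifier length limit):
begin
  for x in (select table_name
              from user_tables
             where table_name like 'STG%')
  loop
    execute immediate
      'create or replace trigger trg_' || x.table_name ||
      ' before insert on ' || x.table_name ||
      ' for each row ' ||
      'begin ' ||
      '  null; ' ||
      'end;';
  end loop;
end;
/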
Of course, the idea of an application that creates tables on the fly is one that frightens me to the core. If you have any control over that, I would strongly suggest reconsidering the architecture.
Solution 1:
If the problem is "poor performance due to lack of statistics", perhaps changing the OPTIMIZER_DYNAMIC_SAMPLING parameter at a system or session level can help. See the Performance Tuning Guide for a more thorough discussion, but I've found the default of 2 (64 blocks) to be insufficient, especially for large data sets where keeping optimizer statistics current is impractical.
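For example (level 4 here is only an illustrative value, and STG_12345 is the table created in the example above):
alter session set optimizer_dynamic_sampling = 4;
-- or per query, via a hint on the table alias:
select /*+ dynamic_sampling(stg 4) */ count(*)
  from stg_12345 stg;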
Solution 2:
If you really want to automatically create a trigger after a table's been created, you'll need to create a DDL trigger for the schema. The SQL below demonstrates that.
CREATE OR REPLACE TRIGGER MAKE_ME_A_TRIGGER
AFTER CREATE ON CUSTOM_APP_SCHEMA.SCHEMA
DECLARE
l_trigger_sql varchar2(4000);
BEGIN
if ora_dict_obj_type = 'TABLE'
then
l_trigger_sql := 'create or replace trigger trg_' || ora_dict_obj_name ||
' before insert on ' || ora_dict_obj_name ||
' for each row ' ||
'begin ' ||
' null; ' ||
'end;';
execute immediate l_trigger_sql;
end if;
END;
/
You can use EXECUTE IMMEDIATE to dynamically execute SQL, including DDL scripts, provided the active connection has appropriate permissions on the database. Use PL/SQL to build the full DDL statement via string concatenation, and then you can execute it dynamically.
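A minimal, self-contained sketch of that pattern (the table name here is purely hypothetical):
declare
  l_ddl varchar2(200);
begin
  -- build the DDL as a string, then run it; DEMO_STAGE is a made-up name
  l_ddl := 'create table demo_stage' || ' ( id number )';
  execute immediate l_ddl;
end;
/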
Docs:
http://docs.oracle.com/cd/B12037_01/appdev.101/b10807/13_elems017.htm
More Docs:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/dynamic.htm
Related
Description of what I am trying to do:
I have 2 environments: one has data (X), the second one has no data (Y).
I have written a procedure which has an input parameter P_TableName. It should check if there is any data in this table, and if there is, then we will take the data over to the Y environment.
Mostly it works, but I have a problem with one freaking simple thing (I don't have much experience in Teradata, but in Oracle it would take 10 seconds).
I need to pass the result of SELECT COUNT(*) FROM X into a variable. How do I do that?
I was trying SET VAR = SELECT...
and INSERT INTO VAR SELECT...
I also tried putting the statement into a variable and executing it directly:
SET v_sql_stmt = 'INSERT INTO ' || VAR|| ' SELECT COUNT(*) FROM ' || P_TableName;
CALL DBC.SYSEXECSQL(v_sql_stmt);
It's probably a really simple thing, but I can't find a good solution for it. Please help.
You'll have to open a cursor to fetch the results since you are running dynamic SQL. There is a good example in the Teradata help doc on Dynamic SQL:
CREATE PROCEDURE GetEmployeeSalary
(IN EmpName VARCHAR(100), OUT Salary DEC(10,2))
BEGIN
DECLARE SqlStr VARCHAR(1000);
DECLARE C1 CURSOR FOR S1;
SET SqlStr = 'SELECT Salary FROM EmployeeTable WHERE EmpName = ?';
PREPARE S1 FROM SqlStr;
OPEN C1 USING EmpName;
FETCH C1 INTO Salary;
CLOSE C1;
END;
You can't use INTO in Dynamic SQL in Teradata.
As a workaround you need to do a cursor returning a single row:
DECLARE cnt BIGINT;
DECLARE cnt_cursor CURSOR FOR S;
SET v_sql_stmt = ' SELECT COUNT(*) FROM ' || P_TableName;
PREPARE S FROM v_sql_stmt;
OPEN cnt_cursor;
FETCH cnt_cursor INTO cnt;
CLOSE cnt_cursor;
Let's have a look at my source code:
CREATE OR REPLACE PROCEDURE MAKE_COPY_OF_CLASSROOMS AUTHID CURRENT_USER AS
TYPE classrooms_table_type IS TABLE OF classrooms%ROWTYPE INDEX BY PLS_INTEGER;
classrooms_backup classrooms_table_type;
CURSOR classrooms_cursor IS
SELECT *
FROM classrooms
WHERE year = 1
ORDER BY name;
v_rowcnt PLS_INTEGER := 0;
BEGIN
OPEN classrooms_cursor;
FETCH classrooms_cursor
BULK COLLECT INTO classrooms_backup;
CLOSE classrooms_cursor;
EXECUTE IMMEDIATE 'CREATE TABLE classrooms_copy AS (SELECT * FROM classrooms WHERE 1 = 2)';
--COPY ALL STORED DATA FROM classrooms_backup TO classrooms_copy
END MAKE_COPY_OF_classrooms;
I've been stuck for hours trying to insert data from "classrooms_backup" into the table "classrooms_copy", which is created by the EXECUTE IMMEDIATE command. It's necessary to create the table "classrooms_copy" via the EXECUTE IMMEDIATE command. I tried to create another EXECUTE IMMEDIATE command with a FOR loop in it:
EXECUTE IMMEDIATE 'FOR i IN classrooms_backup.FIRST..classrooms_backup.LAST LOOP
INSERT INTO classrooms_copy(id,room_id,year,name)
VALUES(classrooms_backup(i).id,classrooms_backup(i).room_id,classrooms_backup(i).year,classrooms_backup(i).name);
END LOOP;';
But it's the road to hell. I'm getting an "invalid SQL statement" error.
Thanks for your help!
There's no need for much PL/SQL here. Also, try to avoid the keyword CURSOR - there's almost always a better way to do it.
create or replace procedure make_copy_of_classrooms authid current_user as
begin
execute immediate '
create table classrooms_copy as
select *
from classrooms
where year = 1
order by name
';
end make_copy_of_classrooms;
/
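A quick way to exercise it (assuming a CLASSROOMS table with a YEAR column exists and you have the CREATE TABLE privilege):
begin
  make_copy_of_classrooms;
end;
/
select count(*) from classrooms_copy;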
I need to run some PL/SQL blocks to test them; is there an online app where I can paste in the code and see what output it produces?
Thanks a lot!
More specific question below:
<<block1>>
DECLARE
var NUMBER;
BEGIN
var := 3;
DBMS_OUTPUT.PUT_LINE(var);
<<block2>>
DECLARE
var NUMBER;
BEGIN
var := 200;
DBMS_OUTPUT.PUT_LINE(block1.var);
END block2;
DBMS_OUTPUT.PUT_LINE(var);
END block1;
Is the output:
3
3
200
or is it:
3
3
3
I read that the variable's value is the one assigned in the most recent block, so is the second answer the correct one? I'd love to test these online somewhere if there is a possibility.
Also, is <<block2>> really the correct way to name a block?
Later edit:
I tried this with SQL Fiddle, but I get a "Please build schema" error message.
Thank you very much, Dave! Any idea why this happens?
create table log_table
( message varchar2(200)
)
<<block1>>
DECLARE
var NUMBER;
BEGIN
var := 3;
insert into log_table(message) values (var)
select * from log_table
<<block2>>
DECLARE
var NUMBER;
BEGIN
var := 200;
insert into log_table(message) values (block1.var || ' 2nd')
select * from log_table
END block2;
insert into log_table(message) values (var || ' 3rd')
select * from log_table
END block1;
In answer to your three questions.
You can use SQL Fiddle with Oracle 11g R2: http://www.sqlfiddle.com/#!4. However, this does not allow you to use dbms_output. You will have to insert into / select from tables to see the results of your PL/SQL scripts.
The answer is 3 3 3. The assignment var := 200 only changes block2's local var, which shadows block1's var; block1.var therefore stays 3, and once the inner block is END-ed block2's var no longer exists, so the final line prints block1's var, which is still 3.
The block naming is correct. However, you aren't required to name blocks; they can be completely anonymous.
EDIT:
So after playing with SQL Fiddle a bit, it seems like it doesn't actually support named blocks (although I do have an actual Oracle database on which I confirmed what I said earlier).
You can, however, basically demonstrate the way variable scope works using stored procedures and inner procedures (which are incidentally two very important PL/SQL features).
Before I get to that, I noticed three issues with your code:
You need to terminate the INSERT statements with a semicolon.
You need to commit the transaction after the third insert.
In PL/SQL you can't simply run a SELECT statement and get a result; you need to SELECT the value INTO a variable (a minimal sketch follows below). That would be a simple change, but because we can't use dbms_output to view the variable it doesn't help us here. Instead, do the inserts, commit, and afterwards select from the table.
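For reference, a minimal SELECT ... INTO sketch against the log_table above (the variable name is mine):
declare
  l_cnt number;
begin
  select count(*)
    into l_cnt
    from log_table;
  -- l_cnt now holds the row count, but without dbms_output there is no easy
  -- way to display it in SQL Fiddle, hence the insert-then-select approach below
end;
/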
In the left hand pane of SQL Fiddle set the query terminator to '//' then paste in the below and 'build schema':
create table log_table
( message varchar2(200)
)
//
create or replace procedure proc1 as
var NUMBER;
procedure proc2 as
var number;
begin
var := 200;
insert into log_table(message) values (proc1.var || ' 2nd');
end;
begin
var := 3;
insert into log_table(message) values (var || ' 1st');
proc2;
insert into log_table(message) values (var || ' 3rd');
commit;
end;
//
begin
proc1;
end;
//
Then in the right hand panel run this SQL:
select * from log_table
You can see that proc2's var has no scope outside of proc2. Furthermore, if you were to explicitly refer to proc2.var outside of proc2, the code would not even compile, because it is out of scope.
create or replace procedure sample
as
ID VARCHAR(20);
BEGIN
execute immediate
'CREATE GLOBAL TEMPORARY TABLE UPDATE_COLUMN_NO_TP
(
NAME VARCHAR2(256)
)';
INSERT INTO UPDATE_COLUMN_NO_TP
SELECT SRC_PK_COLUMNS.PK_KEY
FROM SRC_PK_COLUMNS
WHERE NOT EXISTS (
SELECT 1
FROM TGT_PK_COLUMNS
WHERE TGT_PK_COLUMNS.ID = SRC_PK_COLUMNS.ID);
END;
The error is:
the table does not exist.
So I want the best solution for this scenario. In my stored procedure I have 10 temporary tables; all of them are created and populated dynamically.
The table UPDATE_COLUMN_NO_TP does not exist at compile time, so you get the error.
If you create a table dynamically, you have to access it dynamically as well.
And pay attention to Mat's comment about the nature of global temporary tables (GTTs).
execute immediate '
INSERT INTO UPDATE_COLUMN_NO_TP
SELECT SRC_PK_COLUMNS.PK_KEY
FROM SRC_PK_COLUMNS
WHERE NOT EXISTS (
SELECT 1
FROM TGT_PK_COLUMNS
WHERE TGT_PK_COLUMNS.ID = SRC_PK_COLUMNS.ID
)
';
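The same goes for any later statement that touches the dynamically created table; for example, reading it back (l_cnt here is a local variable I'm assuming for illustration):
execute immediate
  'select count(*) from UPDATE_COLUMN_NO_TP'
  into l_cnt;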
Ok - I have a situation in which I must execute a dynamically built stored procedure against tables that may or may not be in the database. The data retrieved is then shunted to a VB.Net-backed, ASP-based report page. By design, if the tables are not present in the database, the relevant data is automatically hidden on the report page. Currently, I'm doing this by checking for the inevitable error and hiding the div in the catch block. A bit kludgy, but it worked.
I can't include the VB code-behind, but the relevant stored procedure is included below.
However, a problem with this method was recently brought to my attention when, for no apparent reason, the div was being hidden even though the proper data was available. As it turned out, the user trying to select the table in the dynamic SQL call didn't have the proper select permissions, an easy enough fix once I could track it down.
So, a two-fold question. First and foremost: is there a better way to check for a missing table than catching the error in the VB.Net code-behind? All things considered, I'd rather save the error handling for an actual error. Secondly, is there a preferred method to pull a particular OLE DB error out of the general exception object caught by the try/catch block, other than just checking the actual stack trace string?
SQL Query - The main gist of the code is that, due to the design of the database, I have to determine the name of the actual table being targeted manually. The database records jobs in a single table, but each job also gets its own table for processing data on the items processed in that job, and it's data from those tables I have to retrieve. Absolutely nothing I can do about this setup, unfortunately.
DECLARE @sql NVarChar(Max),
@params NVarChar(Max),
@where NVarChar(Max)
-- Retained for live testing of stored procedure.
-- DECLARE @Table NvarChar(255) SET @Table = N'tblMSGExportMessage_10000'
-- DECLARE @AcctID Integer SET @AcctID = 10000
-- DECLARE @Type Integer SET @Type = 0 -- 0 = Errors only, 1 = All Messages
-- DECLARE @Count Integer
-- Sets our parameters for our two dynamic SQL calls.
SELECT @params = N'@MsgExportAccount INT, @cnt INT OUTPUT'
-- Sets our where clause dependent upon whether we want all results or just errors.
IF @Type = 0
BEGIN
SELECT @where =
N' AND ( mem.[MSGExportStatus_OPT_CD] IN ( 11100, 11102 ) ' +
N' OR mem.[IngestionStatus_OPT_CD] IN ( 11800, 11802, 11803 ) ' +
N' OR mem.[ShortcutStatus_OPT_CD] IN ( 11500, 11502 ) ) '
END
ELSE
BEGIN
SELECT @where = N' '
END
-- Retrieves a count of messages.
SELECT @sql =
N'SELECT @cnt = Count( * ) FROM dbo.' + QuoteName( @Table ) + N' AS mem ' +
N'WHERE mem.[MSGExportAccount_ID] = @MsgExportAccount ' + @where
EXEC sp_executesql @sql, @params, @AcctID, @cnt = @Count OUTPUT
To avoid an error you could query the sysobjects table to find out if the table exists. Here's the SQL (replace YourTableNameHere). If it returns > 0 then the table exists. Create a stored procedure on the server that runs this query.
select count(*)
from sysobjects a with(nolock)
where a.xtype = 'U'
and a.name = 'YourTableNameHere'
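If you want to follow the stored-procedure suggestion, a hedged sketch (the procedure name and parameters are my own invention) could look like this:
create procedure dbo.usp_TableExists
    @TableName sysname,
    @TableExists bit output
as
begin
    set nocount on;
    -- xtype = 'U' means a user table in sysobjects
    if exists ( select 1
                from sysobjects with (nolock)
                where xtype = 'U'
                  and name = @TableName )
        set @TableExists = 1;
    else
        set @TableExists = 0;
end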