I have a list of queries stored in an Oracle DB table. My requirement is to fetch each of those queries one by one, run them from a procedure, and log their start, end, and elapsed times in another table.
My problem is how to handle the column list, since it is different for each query and the number of columns and their datatypes are not known in advance.
Please suggest a way out.
For now, I have written the code below. Here I wrap each fetched query in a COUNT(*) to avoid the problem. However, the actual time taken for the COUNT(*) query will differ from the time taken for the original query to execute.
Thanks a lot!
DECLARE
  before_time        TIMESTAMP;
  after_time         TIMESTAMP;
  elapsed_time_in_ms NUMBER;
  stmnt              CLOB; --varchar(32000);
  counts             NUMBER;
  sql_no             NUMBER;
  err_mess           VARCHAR2(100);
  CURSOR get_queries IS
    SELECT * FROM SLOW_RUNNING_SQL WHERE curr_ind = 1;
  FUNCTION get_elapsed_time(
    start_time_in TIMESTAMP,
    end_time_in   TIMESTAMP)
    RETURN NUMBER
  AS
    l_days       NUMBER;
    hours        NUMBER;
    minutes      NUMBER;
    seconds      NUMBER;
    milliseconds NUMBER;
  BEGIN
    <calculates elapsed time in milliseconds and returns that>
    RETURN milliseconds;
  END;
BEGIN
  dbms_output.put_line(CURRENT_TIMESTAMP);
  FOR i IN get_queries
  LOOP
    stmnt  := i.SQL_DESC;
    sql_no := i.sql_no;
    -- wrap the fetched query in COUNT(*) so no select list has to be handled
    stmnt := 'SELECT count(*) FROM ('||stmnt||') a';
    dbms_output.put_line(stmnt);
    before_time := SYSTIMESTAMP; -- start the clock per query, not once for the whole run
    EXECUTE IMMEDIATE stmnt INTO counts;
    after_time := SYSTIMESTAMP;
    elapsed_time_in_ms := get_elapsed_time(before_time, after_time);
    dbms_output.put_line(elapsed_time_in_ms);
    INSERT INTO query_performance_log VALUES
      ( i.sql_no,
        stmnt,
        counts,
        before_time,
        after_time,
        elapsed_time_in_ms/1000,
        'No exception',
        elapsed_time_in_ms );
    dbms_output.put_line(stmnt);
    dbms_output.put_line(counts);
    dbms_output.put_line(after_time);
    dbms_output.put_line(TO_CHAR(after_time - before_time));
    COMMIT;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    err_mess := SQLERRM;
    ROLLBACK; -- roll back the failed work first so the error row below is not lost
    INSERT INTO query_performance_log VALUES
      ( sql_no,
        stmnt,
        0,
        NULL,
        NULL,
        0,
        err_mess,
        0 );
    COMMIT;
    dbms_output.put_line(SQLERRM);
END;
A solution that might suit you is to select a constant for every row returned by your query and BULK COLLECT it into a collection of VARCHAR2 variables.
Here is what you're looking for:
-- declare a list of varchar2:
CREATE OR REPLACE TYPE t_my_list AS TABLE OF VARCHAR2(100);
-- then use this type in your proc.:
[..]
declare
v_res t_my_list;
[..]
-- then run the query
execute immediate 'SELECT ''x'' FROM ('||stmnt||') '
bulk collect into v_res;
If the columns selected by your queries are "simple", the above should work well enough to evaluate performance. But if your queries call other functions or procedures to produce the columns they select, then replacing the select list with a constant no longer measures the same work, and it gets more complicated.
In that case, you would have to build a concatenation of the columns actually returned (and enlarge the VARCHAR2(100) in the declaration of t_my_list). That means working on stmnt to extract the column list, for instance by replacing the commas between columns with concatenation operators (||).
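For example, a minimal sketch of the constant-selecting approach wired into the timing loop from the question might look like the block below (SLOW_RUNNING_SQL, CURR_IND, SQL_NO and SQL_DESC are taken from the question; the rest is illustrative):
DECLARE
  TYPE t_dummy_list IS TABLE OF VARCHAR2(1); -- a local scalar collection is enough here
  v_res       t_dummy_list;
  before_time TIMESTAMP;
  after_time  TIMESTAMP;
BEGIN
  FOR i IN (SELECT sql_no, sql_desc FROM slow_running_sql WHERE curr_ind = 1)
  LOOP
    before_time := SYSTIMESTAMP;
    -- select one constant per row, so the statement is fully executed
    -- without having to know anything about its select list
    EXECUTE IMMEDIATE 'SELECT ''x'' FROM ('||i.sql_desc||')'
      BULK COLLECT INTO v_res;
    after_time := SYSTIMESTAMP;
    dbms_output.put_line(i.sql_no||': '||v_res.COUNT||' rows, elapsed '||
                         TO_CHAR(after_time - before_time));
  END LOOP;
END;
/
Note that this holds the whole result set in memory; for queries returning millions of rows, an OPEN ... FOR with a FETCH ... BULK COLLECT LIMIT loop would be safer.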
CREATE OR REPLACE TRIGGER POSITION_NUMBER
BEFORE UPDATE OR INSERT OR DELETE ON APPLIES
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
NUMBER_OF_POSITIONS NUMBER;
BEGIN
SELECT count(pnumber) INTO NUMBER_OF_POSITIONS
FROM APPLIES WHERE anumber = :NEW.anumber;
IF( NUMBER_OF_POSITIONS > 2 AND count(APPDATE) > 2 )
THEN
RAISE_APPLICATION_ERROR(-20000,'an Employee cannot apply for
more than two positions');
END IF;
END;
/
I'm attempting to create a trigger that fires if an applicant applies for more than two positions on the same day, but I'm not sure how I would implement the date side of it. Below is the set of relational schemas.
You can use the TRUNC function to remove the time portion and then see if the application date matches today's date, regardless of time.
Also, there is no need for the autonomous transaction pragma. You are not executing any DML.
CREATE OR REPLACE TRIGGER position_number
BEFORE UPDATE OR INSERT OR DELETE
ON applies
DECLARE
number_of_positions NUMBER;
BEGIN
SELECT COUNT (pnumber)
INTO number_of_positions
FROM applies
WHERE anumber = :new.anumber AND TRUNC (appdate) = TRUNC (SYSDATE);
IF number_of_positions > 2
THEN
raise_application_error (
-20000,
'An Employee cannot apply for more than two positions on the same day');
END IF;
END;
/
Here is my PL/SQL code:
declare
headerStr varchar2(1000):='C1~C2~C3~C5~C6~C7~C8~C9~C10~C11~C12~C16~C17~C18~C19~RN';
mainValStr varchar2(32000):='1327~C010802~9958756666~05:06AM~NO~DISPOSAL~NDL~4~P32~HELLO~U~28-OCT-2017~28-OCT-2017~Reject~C010741~1;1328~C010802~9958756666~06:07AM~MH~DROP~NDL~1~P32~~U~28-OCT-2017~28-OCT-2017~Reject~C010741~2;1329~C010802~9999600785~01:08AM~BV~DROP~NDL~2~P32~MDFG~U~28-OCT-2017~28-OCT-2017~Reject~C010741~3';
valStr varchar2(4000);
headerCur sys_refcursor;
mainValCur sys_refcursor;
valCur sys_refcursor;
header varchar2(1000);
val varchar2(1000);
iterator number:=1000;
strIdx number;
strLen number;
idx number;
TYPE T_APPROVAL_RECORD IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(1000);
headerTable T_APPROVAL_RECORD;
cnt number;
begin
open headerCur for select * from table(split_str(headerStr,'~'));
open mainValCur for select * from table(split_str(mainValStr,';'));
loop
fetch mainValCur into valStr;
exit when mainValCur%notfound;
insert into header_test values(cnt, valStr); -- for testing purpose
open valCur for select * from table(split_str(valStr,'~'));
loop
fetch valCur into val;
fetch headerCur into header;
exit when valCur%notfound;
exit when headerCur%notfound;
insert into header_test values(header, val);
headerTable(header):= val;
end loop;
idx := headerTable.FIRST; -- Get first element of array
WHILE idx IS NOT NULL LOOP
insert into header_test values (idx, headerTable(idx));
idx := headerTable.NEXT(idx); -- Get next element of array
END LOOP;
headerTable.delete;
end loop;
commit;
end;
C1, C2, ..., C19 are column names and RN is the row number; the data for each row is in mainValStr, separated by ';'.
Why am I getting ORA-14551 when I am trying to access the collection "headerTable"?
Please help.
The problem is with this line:
idx := headerTable.FIRST;
The index of headerTable is of type VARCHAR2, whereas idx is defined as a NUMBER.
Declare idx as VARCHAR2(1000) and it should work.
Having said that, ORA-14551 (cannot perform a DML operation inside a query) is not related to this error. It is unclear to me why you should encounter it.
Oh but it does:
EXCEPTION WHEN OTHERS THEN
  v_msg := sqlcode||sqlerrm;
  insert into err_data_transfer values('SPLIT_STR',v_msg,sysdate,null);
It may only happen during an exception, but it is still DML executed during a SELECT statement. You could create another procedure declared with PRAGMA AUTONOMOUS_TRANSACTION to write the error log. Also, you should either re-raise the exception or call raise_application_error afterward. If you don't, your code will continue as though the error never occurred, which leads to more confusion about why the main process does not work (including running to completion but doing the wrong thing).
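A rough sketch of such an autonomous logger, reusing the ERR_DATA_TRANSFER insert from the snippet above (the procedure name log_split_str_error is just an example):
CREATE OR REPLACE PROCEDURE log_split_str_error (p_msg IN VARCHAR2)
AS
  PRAGMA AUTONOMOUS_TRANSACTION; -- runs in its own transaction, so the insert is legal even when called from SQL
BEGIN
  insert into err_data_transfer values('SPLIT_STR', p_msg, sysdate, null);
  COMMIT; -- the autonomous transaction must be ended before returning
END log_split_str_error;
/
-- then, inside split_str:
-- EXCEPTION WHEN OTHERS THEN
--   log_split_str_error(sqlcode||sqlerrm);
--   RAISE; -- re-raise so the caller still sees the failure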
I am using a FOR loop to pass different values to a cursor, bulk collecting the data and appending it to the same nested table using the MULTISET UNION operator. However, to avoid duplicate data, I tried to use MULTISET UNION DISTINCT and it throws the error PLS-00306: wrong number or types of arguments in call to 'MULTISET_UNION_DISTINCT'. The code works well without DISTINCT. Please let me know if I'm missing anything here.
I'm using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
My code is as follows:
DECLARE
TYPE t_search_rec
IS
RECORD
(
search_id VARCHAR2(200),
search_name VARCHAR2(240),
description VARCHAR2(240) );
TYPE t_search_data
IS
TABLE OF t_search_rec;
in_user_id NUMBER;
in_user_role VARCHAR2(20);
in_period VARCHAR2(20) := 'Sep-15';
in_search_string VARCHAR2(20) := 'v';
l_search_tt t_search_data;
x_search_tt t_search_data;
v_entity_gl VA_LOGIN_API.t_entity_list ;
X_RETURN_CODE VARCHAR2(20);
X_RETURN_MSG VARCHAR2(2000);
CURSOR c_vendors_gl(v_entity VARCHAR2)
IS
SELECT UNIQUE vendor_id,
vendor,
NULL description
FROM XXPOADASH.XX_VA_PO_LINES
WHERE period = in_period
AND entity = v_entity
AND upper(vendor) LIKE upper(in_search_string)
||'%';
BEGIN
in_user_role := 'ROLE';
in_user_id := 4359;
VA_LOGIN_API.DATA_ACCESS_PROC( X_RETURN_CODE => X_RETURN_CODE, X_RETURN_MSG => X_RETURN_MSG, IN_PERSON_ID => in_user_id, IN_PERSON_ROLE => in_user_role, X_ACCESSED_ENTITY_LIST => v_entity_gl );
IF( v_entity_gl.COUNT >0) THEN
x_search_tt := t_search_data();
FOR I IN v_entity_gl.FIRST..v_entity_gl.COUNT
LOOP
OPEN c_vendors_gl(v_entity_gl(i));
FETCH c_vendors_gl BULK COLLECT INTO l_search_tt;
CLOSE c_vendors_gl;
x_search_tt := x_search_tt MULTISET
UNION DISTINCT l_search_tt;
l_search_tt.delete;
END LOOP;
IF x_search_tt.count = 0 THEN
x_return_msg := 'No lines found';
END IF;
END IF;
DBMS_OUTPUT.PUT_LINE(x_return_msg);
DBMS_OUTPUT.PUT_LINE(x_search_tt.count);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(x_return_msg);
DBMS_OUTPUT.PUT_LINE(x_search_tt.count);
DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
MULTISET UNION DISTINCT requires the elements of the collection to be comparable. In your case the elements are PL/SQL records, which unfortunately are not comparable data structures (i.e. PL/SQL provides no built-in mechanism to compare PL/SQL records).
MULTISET UNION works because it doesn't need to compare the elements.
One possible workaround is to use an Oracle object type instead of a PL/SQL record. An object type lets you define the MAP comparison method that MULTISET UNION DISTINCT needs.
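A rough sketch of that workaround, using the record's fields from the question (the names t_search_obj, t_search_obj_tab and sort_key are illustrative, not part of the original code):
-- a schema-level object type with a MAP member function, so elements become comparable
CREATE OR REPLACE TYPE t_search_obj AS OBJECT (
  search_id   VARCHAR2(200),
  search_name VARCHAR2(240),
  description VARCHAR2(240),
  MAP MEMBER FUNCTION sort_key RETURN VARCHAR2
);
/
CREATE OR REPLACE TYPE BODY t_search_obj AS
  MAP MEMBER FUNCTION sort_key RETURN VARCHAR2 IS
  BEGIN
    RETURN search_id||'~'||search_name||'~'||description;
  END;
END;
/
CREATE OR REPLACE TYPE t_search_obj_tab AS TABLE OF t_search_obj;
/
DECLARE
  l_search_tt t_search_obj_tab := t_search_obj_tab(t_search_obj('1', 'Vendor A', NULL));
  x_search_tt t_search_obj_tab := t_search_obj_tab(t_search_obj('1', 'Vendor A', NULL),
                                                   t_search_obj('2', 'Vendor B', NULL));
BEGIN
  -- the duplicate 'Vendor A' element is removed thanks to the MAP function
  x_search_tt := x_search_tt MULTISET UNION DISTINCT l_search_tt;
  dbms_output.put_line(x_search_tt.COUNT); -- prints 2
END;
/
In the original code, the cursor would then select t_search_obj(vendor_id, vendor, NULL) and BULK COLLECT into a variable of t_search_obj_tab before applying MULTISET UNION DISTINCT.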
My requirement with a PL/SQL nested table is the following:
I have a nested table collection type declared and I populate the elements based on a lookup from a table.
In cases where the lookup yields more than one row (more than one value for code), I need to add all of those values to the nested table and proceed. Here is where I am stuck.
I am not able to increment the parent counter "indx" inside the exception handler to process those multiple rows. Since I cannot, only the latest value ends up in the nested table instead of all of them.
declare
  TYPE final_coll_typ IS TABLE OF varchar2(100);
  l_final_coll final_coll_typ;
  MULTI_FETCH EXCEPTION;
  PRAGMA EXCEPTION_INIT(MULTI_FETCH, -1422); -- ORA-01422: exact fetch returns more than requested number of rows
begin
  for indx in 1..<count> loop
    <some processing logic here>
    begin
      select code into l_final_coll(indx) from lookup_tbl where <some filter>;
    exception
      when MULTI_FETCH then
        for p in (select code from lookup_tbl where <some filter>)
        loop
          l_final_coll(indx) := p.code;
          dbms_output.put_line(l_final_coll(indx));
        end loop;
        continue; -- this is for further processing after the loop
    end;
  end loop;
end;
Let's say the first iteration of the counter indx produced only one row of data for code. That gets stored in l_final_coll(indx).
Let's say the next iteration of indx in the main for loop produces two rows of values for code. My thought was to catch the exception (ORA-01422) and keep adding these two code values to the existing nested table.
So, in effect, my nested table should now have three values of code. But currently I can only get it to populate two of them (the single value from the first iteration and the latest value from the next).
Any pointers would be appreciated on how I can accomplish this.
PS: I tried manipulating the counter variables indx and p, but PL/SQL does not allow that for a FOR loop.
You don't need to do a single select at all, just use the cursor loop to start with, and append to the collection (which you'd have to initialise):
declare
type final_coll_typ is table of varchar2(100);
l_final_coll final_coll_typ;
begin
l_final_coll := final_coll_typ();
for indx in 1..<count> loop
<some processing logic here>
for p in (select code from lookup_tbl where <some filter>) loop
l_final_coll.extend(1);
l_final_coll(l_final_coll.count) := p.code;
end loop;
end loop;
dbms_output.put_line('Final size: ' || l_final_coll.count);
end;
/
For each row found by the cursor, the collection is extended by one (which isn't very efficient), and the cursor value is put into the last, empty element, which is located via the current count.
As a demo, if I create a dummy table with a duplicate value:
create table lookup_tbl(code varchar2(100));
insert into lookup_tbl values ('Code 1');
insert into lookup_tbl values ('Code 2');
insert into lookup_tbl values ('Code 2');
insert into lookup_tbl values ('Code 3');
... then with a specific counter and filter:
declare
type final_coll_typ is table of varchar2(100);
l_final_coll final_coll_typ;
begin
l_final_coll := final_coll_typ();
for indx in 1..3 loop
for p in (select code from lookup_tbl where code = 'Code ' || indx) loop
l_final_coll.extend(1);
l_final_coll(l_final_coll.count) := p.code;
end loop;
end loop;
dbms_output.put_line('Final size: ' || l_final_coll.count);
end;
/
... I get:
anonymous block completed
Final size: 4
As a slightly more complicated option, you could bulk-collect all the matching data into a temporary collection, then loop over that to append those values into the real collection. Something like:
declare
type final_coll_typ is table of varchar2(100);
l_final_coll final_coll_typ;
l_tmp_coll sys.dbms_debug_vc2coll;
begin
l_final_coll := final_coll_typ();
for indx in 1..<count> loop
<some processing logic here>
select code bulk collect into l_tmp_coll from lookup_tbl where <some filter>;
for cntr in 1..l_tmp_coll.count loop
l_final_coll.extend(1);
l_final_coll(l_final_coll.count) := l_tmp_coll(cntr);
end loop;
end loop;
end;
/
There may be a quicker way to combine two collections but I'm not aware of one. Bulk-collect has to be into a schema-level collection type, so you can't use your local final_coll_typ. You can create your own schema-level type, and then use it for both the temporary and final collection variables; but I've used a built-in one, sys.dbms_debug_vc2coll, which is defined as table of varchar2(1000).
As a demo, with the same table/data as above, and the same specific count and filter:
declare
type final_coll_typ is table of varchar2(100);
l_final_coll final_coll_typ;
l_tmp_coll sys.dbms_debug_vc2coll;
begin
l_final_coll := final_coll_typ();
for indx in 1..3 loop
select code bulk collect into l_tmp_coll
from lookup_tbl where code = 'Code ' || indx;
for cntr in 1..l_tmp_coll.count loop
l_final_coll.extend(1);
l_final_coll(l_final_coll.count) := l_tmp_coll(cntr);
end loop;
end loop;
dbms_output.put_line('Final size: ' || l_final_coll.count);
end;
/
... I again get:
anonymous block completed
Final size: 4
I have a procedure that runs every hour, populating a table. The procedure handles many records, so each execution takes approximately 12-17 minutes.
Do you know if there is a way (e.g. a trigger) to record the duration of each execution (e.g. into a table)?
I don't know of a trigger that would allow this to be done automatically. One way to do this would be something like
PROCEDURE MY_PROC IS
tsStart TIMESTAMP;
tsEnd TIMESTAMP;
BEGIN
tsStart := SYSTIMESTAMP;
-- 'real' code here
tsEnd := SYSTIMESTAMP;
INSERT INTO PROC_RUNTIMES (PROC_NAME, START_TIME, END_TIME)
VALUES ('MY_PROC', tsStart, tsEnd);
END MY_PROC;
If you only need this for a few procedures this might be sufficient.
Share and enjoy.
I typically use a log table with a date or timestamp column that uses a default value of sysdate/systimestamp. Then I call an autonomous procedure that does the log inserts at certain places I care about (starting/ending a procedure call, after a commit, etc):
See here (look for my answer).
If you are inserting millions of rows, you can control when (how often) you insert to the log table. Again, see my example.
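A minimal sketch of that pattern (the table and procedure names below are invented for the example; adapt the columns to whatever you want to record):
CREATE TABLE proc_event_log (
  log_time  TIMESTAMP DEFAULT SYSTIMESTAMP, -- filled automatically on insert
  proc_name VARCHAR2(30),
  message   VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE write_log (p_proc_name IN VARCHAR2,
                                       p_message   IN VARCHAR2)
AS
  PRAGMA AUTONOMOUS_TRANSACTION; -- commits independently of the caller's transaction
BEGIN
  INSERT INTO proc_event_log (proc_name, message)
  VALUES (p_proc_name, p_message);
  COMMIT;
END write_log;
/
-- in the hourly procedure:
--   write_log('MY_PROC', 'start');
--   ... the real work ...
--   write_log('MY_PROC', 'end');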
To add to the first answer, once you have start and end timestamps, you can use this function to turn them into a number of milliseconds. That helps with readability if nothing else.
function timestamp_diff(
start_time_in timestamp,
end_time_in timestamp) return number
as
l_days number;
l_hours number;
l_minutes number;
l_seconds number;
l_milliseconds number;
begin
select extract(day from end_time_in-start_time_in)
, extract(hour from end_time_in-start_time_in)
, extract(minute from end_time_in-start_time_in)
, extract(second from end_time_in-start_time_in)
into l_days, l_hours, l_minutes, l_seconds
from dual;
l_milliseconds := l_seconds*1000 + l_minutes*60*1000
+ l_hours*60*60*1000 + l_days*24*60*60*1000;
return l_milliseconds;
end;
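Assuming timestamp_diff has been created as a stand-alone function, usage could look like this:
declare
  ts_start timestamp := systimestamp;
  ts_end   timestamp;
begin
  -- 'real' code here
  ts_end := systimestamp;
  dbms_output.put_line('Elapsed: ' || timestamp_diff(ts_start, ts_end) || ' ms');
end;
/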