Track execution time for each procedure and function in package - plsql

I have a package in Oracle 10g with multiple procedures and functions.
Is there any way to track the execution time of each procedure and function in the package without calling dbms_utility.get_time in each one of them?

You can use the Oracle PL/SQL profiler (DBMS_PROFILER) to do this. For example, given a procedure that sleeps for three seconds:
CREATE OR REPLACE PROCEDURE proc_a IS
BEGIN
  dbms_lock.sleep(seconds => 3);
END proc_a;
/
Start and stop the profiler around your code.
DECLARE
  v_run_number BINARY_INTEGER;
BEGIN
  dbms_profiler.start_profiler(run_number => v_run_number);
  proc_a;
  dbms_profiler.stop_profiler;
END;
/
-- 3 seconds
You will get data in the plsql_profiler_% tables. A run is a single invocation of the profiler, units are the various procedures, functions and packages, and the data is the run time, number of calls, etc. of each line of each unit.
SELECT *
  FROM plsql_profiler_runs r
 ORDER BY r.run_date DESC,
          r.runid DESC;
-- runid = 3006
-- plsql_profiler_data
-- plsql_profiler_units
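To see where the time went per program unit, a query along these lines can aggregate the profiler data. This is a sketch: the run ID 3006 is just the example value from above, and plsql_profiler_data.total_time is stored in nanoseconds.

```sql
-- Sketch: total profiled time per unit for one run.
-- Substitute your own run ID from plsql_profiler_runs.
SELECT u.unit_owner,
       u.unit_type,
       u.unit_name,
       ROUND(SUM(d.total_time) / 1e9, 3) AS seconds
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
       ON  d.runid       = u.runid
       AND d.unit_number = u.unit_number
 WHERE u.runid = 3006
 GROUP BY u.unit_owner, u.unit_type, u.unit_name
 ORDER BY seconds DESC;
```

One row per procedure, function or package body, with the slowest units first.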
We use Allround Automations' PL/SQL Developer (a great tool, by the way), but I believe Oracle SQL Developer and Quest Toad also support viewing this data (and perhaps even handle starting and stopping the profiler for you: I know the tool we use does).
For a simple SQL-based way to see the data, I would recommend the following Metalink article. It provides a script that takes your run ID and generates a nice HTML report of your program(s).
Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1)


Table records to Xml in PL/SQL

We currently use Oracle/PL/SQL and Oracle Forms WEB as User Interface.
The thing is that we decided to migrate the UI from Forms to another UI (probably HTML5/ Angular...).
Our system architecture is layered in a way that the batch code will remain untouched; all we have to do is access the GUI Façade from the new technology (still to be chosen). The problem is that all the data this GUI Façade provides (currently to Oracle Forms) is structured in collections like:
TYPE tp_rc_cod IS RECORD(
  -- Return code
  cd_return NUMBER(2),
  -- Name
  cd_name   some_table.name%TYPE
);
TYPE tp_table_rc_cod IS TABLE OF tp_rc_cod INDEX BY PLS_INTEGER;
So, is there any way to quickly convert the returns of our current GUI Façade from table records to XML or JSON?
We thought about building a wrapper between the new UI and the current GUI Façade, but the system is not small, so it could become hard to build and might have performance issues.
I already know that Oracle JDBC drivers do not support calling arguments or return values of the PL/SQL RECORD or BOOLEAN types, or tables with non-scalar element types; they do support PL/SQL index-by tables of scalar element types. Given that, how does Oracle Forms, for instance, do it? Does it build a wrapper itself?
Any suggestions?
If your types are actual Oracle types (created at the SQL level, not package types), you can convert them to a CLOB containing the XML output with code similar to:
declare
  l_tab        tp_table_rc_cod := tp_table_rc_cod();
  -- new variables
  l_rc         SYS_REFCURSOR;
  l_xmlctx     number;
  l_new_retval CLOB;
begin
  l_tab.extend(1);
  l_tab(1) := tp_rc_cod(1, 'testname');
  -- TABLE RETURN
  --return l_tab;
  -- XML RETURN
  open l_rc for select * from table(l_tab);
  l_xmlctx := SYS.DBMS_XMLGEN.NEWCONTEXT(l_rc);
  l_new_retval := dbms_xmlgen.getXML(l_xmlctx);
  DBMS_XMLGEN.closeContext(l_xmlctx);
  --return l_new_retval;
end;
/
But as you can see, it is still some effort, and there are other DBMS_XMLGEN options you would probably want to set.
I also think Oracle 12c removes the "Oracle type" requirement, but I am not sure.
I'm not sure this exactly answers your question, but I hope it helps.

Is there a way to query Oracle DB server name and use in conditional compilation?

I got bitten maintaining code packages that run on two different Oracle 11gR2 systems when a line of code that needed to be changed slipped by me. We develop on one system with a specific data set and then test on another system with a different data set.
The differences aren't tremendous, but they include needing to change a single field name in two queries in two different packages for the packages to run: on one system we use one field, on the other system a different one. The databases have the same schema name, object names, and field names, but the host server names are different.
The change is literally as simple as
INSERT INTO PERSON_HISTORY
  ( RECORD_NUMBER,
    UNIQUE_ID,
    SERVICE_INDEX,
    [... 140 more fields... ]
  )
SELECT LOD.ID RECORD_NUMBER,
       -- for Mgt system, use MD5 instead of FAKE_SSN
       -- Uncomment below, and comment out the Dev system statement
       -- MD5 UNIQUE_ID,
       -- for Dev system, use below
       '00000000000000000000' || LOD.FAKE_SSN UNIQUE_ID,
       NULL SERVICE_INDEX,
       [... 140 more fields... ]
  FROM LEGACY_DATE LOD
 WHERE (conditions follow);
I missed one of the field name changes in one of the queries, and our multi-day run is crap.
For stupid reasons I won't go into, I wind up maintaining all of the code, including having to translate and reprocess developer changes manually between versions, then transfer and update the required changes between systems.
I'm trying to reduce the repetitive input I have to provide to swap out code -- I want to automate this step so I don't overlook it again.
I wanted to implement conditional compilation, pulling the name of the database system from Oracle and having the single line swap automatically. But Oracle conditional compilation requires a static package constant (a boolean in this case): I can't use the sys_context function to populate the value, or at least it doesn't seem to let ME pull data from sys_context, evaluate it conditionally, and assign the result to a constant. Oracle isn't having any of it. DB_DOMAIN, DB_NAME, or SERVER_HOST might work to differentiate the systems, but I can't find a way to USE the information.
An option is to create a global constant that I set manually when I move the code to the other system, but at this point, I have so many steps to do for a transfer that I'm worried that I'd even screw that up. I would like to make this independent of other packages or my own processes.
Is there a good way to do this?
-------- edit
I will try the procedure and try to figure out the view over the weekend. Ultimately, the project will be turned over to a customer who expects to "just run it", so they won't understand what any switches are meant to do, or why I have "special" code in a package. And, they won't need to... I don't even know if they'll look at the comments.
Thank you
As Mat says in the comments, this specific example can be solved with a view; however, there are other approaches for more complex situations.
If you're compiling from a filesystem or using any automated system, you can create a separate PL/SQL block/procedure that you execute in the same session prior to compilation. I'd do something like this:
declare
  l_db varchar2(30) := sys_context('userenv', 'instance_name');
begin
  if l_db = 'MY_DB' then
    execute immediate 'alter session set plsql_ccflags = ''my_db:true''';
  end if;
end;
/
One important point: conditional compilation does not involve a "package static constant" but a session-level setting, so you need to ensure that your compilation flags are consistent across packages/sessions.
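The flag set in the block above would then be consumed in your code with an inquiry directive. A minimal sketch follows; the procedure, table and column names are illustrative (loosely based on the question), and the $ELSE branch is the dev-system default taken whenever the flag is unset:

```sql
CREATE OR REPLACE PROCEDURE load_person_history AS
BEGIN
  INSERT INTO person_history (unique_id)
  SELECT
    $IF $$my_db $THEN
      lod.md5                                  -- Mgt system variant
    $ELSE
      '00000000000000000000' || lod.fake_ssn   -- Dev system variant
    $END
    FROM legacy_date lod;
END load_person_history;
/
```

The preprocessor picks one branch at compile time, so only the correct column reference ever reaches the SQL engine on each system.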

Oracle Stored Procedure benchmarking

I am a .NET web developer working with a legacy Oracle database. In the past I have worked with ORM tools like NHibernate, but all database communication here is required to be done via stored procedures. Our DBA is asking us to pass a bunch of administrative info to every procedure we call, including the username/domain/IP of the end user. This data is then used to call another stored procedure that logs usage info each time a procedure is called.
I am not that well versed in Oracle or PL/SQL, and I am trying to write my .NET code in a clean way that meets best practices whenever possible. It seems to me that this process of passing extra data through to every procedure is messy and tedious on both the .NET and Oracle ends.
Does anyone know of a better way to accomplish the DBA's goal without all the overhead? Or is this a standard way of doing things that I should get used to?
I'd use a context rather than passing additional parameters to every stored procedure call. A context is a convenient place to store arbitrary session-level state data that the stored procedures can all reference.
For example, I can create a context MYAPP_CTX for my application and create a simple package that lets me set whatever values I want in the context.
SQL> create context myapp_ctx using ctx_pkg;
Context created.
SQL> create package ctx_pkg
2 as
3 procedure set_value( p_key in varchar2, p_value in varchar2 );
4 end;
5 /
Package created.
SQL> create package body ctx_pkg
2 as
3 procedure set_value( p_key in varchar2, p_value in varchar2 )
4 as
5 begin
6 dbms_session.set_context( 'MYAPP_CTX', p_key, p_value );
7 end;
8 end;
9 /
Package body created.
When the application gets a connection from the connection pool, it would simply set all the context information once.
SQL> begin
2 ctx_pkg.set_value( 'USERNAME', 'JCAVE' );
3 ctx_pkg.set_value( 'IP_ADDRESS', '192.168.17.34' );
4 end;
5 /
PL/SQL procedure successfully completed.
Subsequent calls and queries in the same session can then just ask for whatever values are stored in the context.
SQL> select sys_context( 'MYAPP_CTX', 'USERNAME' )
2 from dual;
SYS_CONTEXT('MYAPP_CTX','USERNAME')
--------------------------------------------------------------------------------
JCAVE
Realistically, you'd almost certainly want to add a clear_context procedure to the package that called dbms_session.clear_context( 'MYAPP_CTX' ) to clear whatever values had been set in the context when a connection was returned to the connection pool to avoid inadvertently allowing context information from one session to bleed over into another. You would probably also design the package with separate procedures to set and to get at least the common keys (username, ip address, etc.) rather than having 'USERNAME' hard-coded multiple places. I used a single generic set_value method just for simplicity.
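The clear_context procedure described above might look like this sketch, added to the body of the same ctx_pkg (and declared in its spec):

```sql
-- Sketch: wipe every value in MYAPP_CTX before the connection
-- is returned to the pool, so state cannot bleed between users.
procedure clear_context
as
begin
  dbms_session.clear_context( 'MYAPP_CTX' );
end;
```

It lives in ctx_pkg deliberately: only the package named in CREATE CONTEXT is trusted to modify that context's values.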

MS Access CREATE PROCEDURE Or use Access Macro in .NET

I need to be able to run a query such as
SELECT * FROM atable WHERE MyFunc(afield) = "some text"
I've written MyFunc in a VB module but the query results in "Undefined function 'MyFunc' in expression." when executed from .NET
From what I've read so far, functions in Access VB modules aren't available in .NET due to security concerns. There isn't much information on the subject, but this avenue seems like a dead end.
The other possibility is through the CREATE PROCEDURE statement which also has precious little documentation: http://msdn.microsoft.com/en-us/library/bb177892%28v=office.12%29.aspx
The following code does work and creates a query in Access:
CREATE PROCEDURE test AS SELECT * FROM atable
However I need more than just a simple select statement - I need several lines of VB code.
While experimenting with the CREATE PROCEDURE statement, I executed the following code:
CREATE PROCEDURE test AS
Which produced the error "Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'."
This seems to indicate that there's a SQL 'PROCEDURE' statement, so then I tried
CREATE PROCEDURE TEST AS PROCEDURE
Which resulted in "Syntax error in PROCEDURE clause."
I can't find any information on the SQL 'PROCEDURE' statement - maybe I'm just reading the error message incorrectly and there's no such beast. I've spent some time experimenting with the statement but I can't get any further.
In response to the suggestions to add a field to store the value, I'll expand on my requirements:
I have two scenarios where I need this functionality.
In the first scenario, I needed to enable the user to search on the soundex of a field, and since there's no soundex SQL function in Access, I added a field to store the soundex value for every field in every table where the user wants to be able to search for a record that "sounds like" an entered value. I update the soundex value whenever the parent field value changes. It's a fair bit of overhead, but I considered it necessary in this instance.
For the second scenario, I want to normalize the spacing of a space-concatenation of field values and optionally strip out user-defined characters. I can come very close to achieving the desired value with a combination of TRIM and REPLACE functions. The value would only differ if three or more spaces appeared between words in the value of one of the fields (an unlikely scenario). It's hard to justify the overhead of an extra field on every field in every table where this functionality is needed. Unless I get specific feedback from users about the issue of extra spaces, I'll stick with the TRIM & REPLACE value.
My application is database agnostic (or just not very religious... I support 7). I wrote a UDF for each of the other 6 databases that does the space normalization and character stripping much more efficiently than the built-in database functions. It really annoys me that I can write the UDF in Access as a VB macro and use that macro within Access but I can't use it from .NET.
I do need to be able to index on the value, so pulling the entire column(s) into .NET and then performing my calculation won't work.
I think you are running into the ceiling of what Access can do (and trying to go beyond it). Access really doesn't have the power to handle complex SQL statements like the one you are attempting. However, there are a couple of ways to accomplish what you are looking for.
First, if the results of MyFunc don't change often, you could create a function in a module that loops through each record in atable and runs your MyFunc against it. You could either store that data in the table itself (in a new column) or you could build an in-memory dataset that you use for whatever purposes you want.
The second way of doing this is to do the manipulation in .NET since it seems you have the ability to do so. Do the SELECT statement and pull out the data you want from Access (without trying to run MyFunc against it). Then run whatever logic you want against the data and either use it from there or put it back into the Access database.
Why not create an additional field in your atable, where atable.afieldX = MyFunc(atable.afield)? All you need to do is run an UPDATE command once.
You should try to write a SQL Server function MyFunc. This way you will be able to run the same query in SQLserver and in Access.
A few useful links to get you started:
MSDN article about user defined functions: http://msdn.microsoft.com/en-us/magazine/cc164062.aspx
SQLServer user defined functions: http://www.sqlteam.com/article/intro-to-user-defined-functions-updated
SQLServer string functions: http://msdn.microsoft.com/en-us/library/ms181984.aspx
What version of JET (now called ACE) are you using?
I mean, it should come as no surprise that if you're going to use some Access VBA code, then you need the VBA library and a copy of MS Access loaded and running.
However, in Access 2010 we now have table triggers and stored procedures. These stored procedures do NOT require VBA and in fact run at the engine level. I have a table trigger and soundex routine here that shows how this works:
http://www.kallal.ca/searchw/WebSoundex.htm
The above means that if Access, or VB.NET, or even FoxPro via ODBC modifies a row, the table trigger code will fire, run, and save the soundex value in a column for you. This feature also works if you use the new web publishing feature in Access 2010. So, while the above article is written from the point of view of using Access web services (available in Office 365 and SharePoint), the soundex table trigger will also work in a stand-alone Access and JET (ACE) only application.

PL/SQL parser to identify the operation on table

I am writing a PL/SQL parser to identify the operations (SELECT, INSERT, DELETE) performed on tables when I run a procedure, function, or package.
GOAL: The goal of this tool is to identify which tables will be affected by running a procedure or function, in order to prepare better test cases.
Any better ideas or tools would really help a lot.
INPUT:
A SQL file containing a procedure,
or a proc file.
OUTPUT required is:
SELECT from: First_table, secondTable
-> In procedure XYZ --This is if the procedure is calling one more procedure
INSERT into: SomeTable
INSERT into: SomeDiffTable
-> END of procedure XYZ --End of one more procedure.
DELETE from: xyzTable
INSERT into: OnemoreTable
My requirement: when I am parsing proc1 and it calls another proc2, I have to go inside proc2 to find out all the operations performed there, then come back to proc1 and continue.
For this I have to store all the procedures somewhere and, while parsing, check each token (word delimited by spaces) against that temporary storage to find out whether it is a procedure or not.
As my logic takes a lot of time, can anybody suggest better logic to achieve my GOAL?
There's also the possibility of triggers being involved, which adds an additional layer of complexity.
I'd say you're better off mining DBA_DEPENDENCIES with a recursive query to determine impact analysis in the abstract; it won't capture dynamic SQL, but nothing will 100% of the time. In your case, proc1 depends on proc2, and proc2 depends on whatever it depends on, and so forth. It won't tell you the nature of the dependency - INSERT, UPDATE, DELETE, SELECT - but it's a beginning.
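A hierarchical query over DBA_DEPENDENCIES might look like the following sketch (the APP owner and PROC1 name are placeholders; NOCYCLE guards against mutually dependent objects):

```sql
SELECT LEVEL,
       owner, name, type,
       referenced_owner, referenced_name, referenced_type
  FROM dba_dependencies
 START WITH owner = 'APP' AND name = 'PROC1'
CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                AND PRIOR referenced_name = name
                AND PRIOR referenced_type = type
 ORDER BY LEVEL;
```

Filter the result on referenced_type = 'TABLE' to get just the tables the procedure could possibly touch.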
If you're really interested in determining the actual impact of a single-variable-value run of a procedure, implement it in a non-production system, and then turn auditing on your system up to 11:
begin
  for i in (select owner, object_type, object_name
              from dba_objects
             where owner in ([list of application schemas])
               and object_type in ('TABLE', 'PACKAGE', 'PROCEDURE', 'FUNCTION', 'VIEW'))
  loop
    execute immediate 'AUDIT ALL ON ' || i.owner || '.' || i.object_name ||
                      ' BY SESSION';
  end loop;
end;
/
Run your test, and see which objects got touched as a result of the execution by mining the audit trail. It's not bulletproof, as it only audits objects that were touched by that execution, but it does tell you how they were touched.
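Mining the trail afterwards could be as simple as the following sketch (assuming database auditing is enabled, i.e. AUDIT_TRAIL=DB, and that you capture the test session's SESSIONID via SYS_CONTEXT('USERENV','SESSIONID') before running):

```sql
SELECT timestamp, owner, obj_name, action_name, ses_actions
  FROM dba_audit_trail
 WHERE sessionid = :test_audsid   -- the audited test session
 ORDER BY timestamp;
```

ACTION_NAME / SES_ACTIONS show how each object was touched (SELECT, INSERT, UPDATE, DELETE, EXECUTE, ...).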
