Oracle stored procedure benchmarking - ASP.NET

I am a .NET web developer working with a legacy Oracle database. In the past I have worked with ORM tools like NHibernate, but all database communication here is required to be done via stored procedures. Our DBA is asking us to pass a bunch of administrative info to every procedure we call, including the username/domain/IP of the end user. This data is then used to call another stored procedure that logs usage info each time a procedure is called.
I am not that well versed in Oracle or PL/SQL, and I am trying to write my .NET code in a clean way that meets best practices whenever possible. It seems to me that this process of passing extra data through to every procedure is messy and tedious on both the .NET and Oracle ends.
Does anyone know of a better way to accomplish the DBA's goal without all the overhead? Or is this a standard way of doing things that I should get used to?

I'd use a context rather than passing additional parameters to every stored procedure call. A context is a convenient place to store arbitrary session-level state data that the stored procedures can all reference.
For example, I can create a context MYAPP_CTX for my application and create a simple package that lets me set whatever values I want in the context.
create context myapp_ctx using ctx_pkg;

create package ctx_pkg
as
    procedure set_value( p_key in varchar2, p_value in varchar2 );
end;
/

create package body ctx_pkg
as
    procedure set_value( p_key in varchar2, p_value in varchar2 )
    as
    begin
        dbms_session.set_context( 'MYAPP_CTX', p_key, p_value );
    end;
end;
/
When the application gets a connection from the connection pool, it would simply set all the context information once.
begin
    ctx_pkg.set_value( 'USERNAME', 'JCAVE' );
    ctx_pkg.set_value( 'IP_ADDRESS', '192.168.17.34' );
end;
/
Subsequent calls and queries in the same session can then just ask for whatever values are stored in the context.
select sys_context( 'MYAPP_CTX', 'USERNAME' )
  from dual;

SYS_CONTEXT('MYAPP_CTX','USERNAME')
--------------------------------------------------------------------------------
JCAVE
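For example, the DBA's usage-logging procedure could read those values from the context itself rather than receiving them as parameters. A minimal sketch, assuming a hypothetical usage_log table (the table and its columns are illustrative, not part of the original setup):

create or replace procedure log_usage( p_proc_name in varchar2 )
as
begin
    -- The administrative values come from the context, not from parameters
    insert into usage_log( proc_name, username, ip_address, logged_at )
    values ( p_proc_name,
             sys_context( 'MYAPP_CTX', 'USERNAME' ),
             sys_context( 'MYAPP_CTX', 'IP_ADDRESS' ),
             systimestamp );
end;
/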
Realistically, you'd almost certainly want to add a clear_context procedure to the package that calls dbms_session.clear_context( 'MYAPP_CTX' ) to clear whatever values had been set when a connection is returned to the connection pool; otherwise context information from one session could inadvertently bleed over into another. You would probably also design the package with separate procedures to set and to get at least the common keys (username, IP address, etc.) rather than hard-coding 'USERNAME' in multiple places. I used a single generic set_value method just for simplicity.
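A hedged sketch of what that fuller package might look like (the typed accessors are illustrative; only clear_context and the context name come from the discussion above):

create or replace package ctx_pkg
as
    procedure set_username( p_username in varchar2 );
    function  username return varchar2;
    procedure clear_context;
end;
/

create or replace package body ctx_pkg
as
    procedure set_username( p_username in varchar2 )
    as
    begin
        dbms_session.set_context( 'MYAPP_CTX', 'USERNAME', p_username );
    end;

    function username return varchar2
    as
    begin
        return sys_context( 'MYAPP_CTX', 'USERNAME' );
    end;

    -- Call this before returning the connection to the pool
    procedure clear_context
    as
    begin
        dbms_session.clear_context( 'MYAPP_CTX' );
    end;
end;
/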

Related

Lock a table during multiple statements in a stored procedure

I am looking to implement the equivalent of snapshot isolation with a Teradata transaction. Oracle supports this type of isolation, but Teradata does not (at least not in versions 14 or prior, as far as I am aware). The goal is to create a procedure that deletes a table's contents and then repopulates it, all while preventing other users from reading from or writing to the table.
I came across the BEGIN REQUEST statement which, according to my understanding, allows the optimizer to know about all the various table locks within the request.
I wrote the procedure below, but I don't know how to debug it as reliably as I could test thread locking in a .NET application (where it is easy to set breakpoints and monitor other threads). In Teradata, I am not sure whether what I wrote here will properly lock mydb.destinationtable exclusively for the duration of the procedure. Is this correct?
Edit: I'll add that the procedure does work. The difficulty is just being able to properly time a SELECT from another session while the procedure is doing its DELETE/INSERT.
replace procedure mydb.myproc()
begin
    begin request
        locking mydb.destinationtable for exclusive
        delete mydb.destinationtable;

        locking mydb.destinationtable for exclusive
        insert into mydb.destinationtable
        select * from mydb.sourcetable;
    end request;
end;
BEGIN REQUEST/END REQUEST creates a so-called Multi-Statement Request (MSR), which is the same as submitting both requests in SQL Assistant using F9.
To see the plan, run this with F9:
EXPLAIN
locking mydb.destinationtable for exclusive
delete mydb.destinationtable;
insert into mydb.destinationtable
select * from mydb.sourcetable;
or in BTEQ:
EXPLAIN
locking mydb.destinationtable for exclusive
delete mydb.destinationtable
;insert into mydb.destinationtable
select * from mydb.sourcetable;
Btw, the 2nd lock is redundant.
But: when you run the DELETE and the INSERT...SELECT as a single transaction, both will be transient journalled, which is much slower than running them as separate requests.
A more common way to do this is to use two copies of the target table and base access on views, not tables:
-- no BEGIN/END REQUEST
insert into mydb.destinationtable_2
select * from mydb.sourcetable;
-- there's just a short dictionary lock
-- all requests against the view submitted before the replace use the old data
-- and all submitted after the new data
replace view myview as
select * from mydb.destinationtable_2;
delete from mydb.destinationtable_1;
Now your SP only needs the logic to switch between 1 and 2 (based on which table is [not] empty), roughly as sketched below.
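An untested sketch of that switching logic (assuming the convention that the currently empty copy is the one to load next; Teradata's rules for DDL inside stored procedures may force the REPLACE VIEW into its own request, so treat this as a starting point only):

replace procedure mydb.switch_and_load()
begin
    declare cnt integer;

    -- Find out which copy is currently empty
    select count(*) into :cnt from mydb.destinationtable_1;

    if cnt = 0 then
        insert into mydb.destinationtable_1
        select * from mydb.sourcetable;

        replace view mydb.myview as
        select * from mydb.destinationtable_1;

        delete mydb.destinationtable_2;
    else
        insert into mydb.destinationtable_2
        select * from mydb.sourcetable;

        replace view mydb.myview as
        select * from mydb.destinationtable_2;

        delete mydb.destinationtable_1;
    end if;
end;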

Table records to XML in PL/SQL

We currently use Oracle PL/SQL, with Oracle Forms Web as the user interface.
The thing is that we have decided to migrate the UI from Forms to another technology (probably HTML5/Angular...).
Our system architecture is layered in such a way that the batch code will remain untouched, and all we have to do is access the GUI Façade from the new technology (still to be chosen). The problem is: all the data this GUI Façade provides (currently to Oracle Forms) is structured in collections like:
TYPE tp_rc_cod IS RECORD(
    -- Return code
    cd_return NUMBER(2),
    -- Name
    cd_name   some_table.name%TYPE
);

TYPE tp_table_rc_cod IS TABLE OF tp_rc_cod INDEX BY PLS_INTEGER;
So, is there any way to quickly convert the return values of our current GUI Façade from table records to XML or JSON?
We thought about building a wrapper between the new UI and the current GUI Façade; however, the system is not small, so it could become hard to build and might have performance issues.
I already know that Oracle JDBC drivers do not support calling arguments or return values of the PL/SQL RECORD or BOOLEAN types, or tables with non-scalar element types. However, Oracle JDBC drivers do support PL/SQL index-by tables of scalar element types. Given that, how can Oracle Forms, for instance, do it? Does it build a wrapper itself?
Any suggestions?
If your types are actual Oracle types (not package types), you can convert them to a CLOB containing the XML output with code similar to:
declare
    l_tab        tp_table_rc_cod := tp_table_rc_cod();
    -- new variables
    l_rc         SYS_REFCURSOR;
    l_xmlctx     number;
    l_new_retval CLOB;
begin
    l_tab.extend(1);
    l_tab(1) := tp_rc_cod( 1, 'testname' );

    -- TABLE RETURN
    -- return l_tab;

    -- XML RETURN
    open l_rc for select * from table(l_tab);
    l_xmlctx := SYS.DBMS_XMLGEN.NEWCONTEXT(l_rc);
    l_new_retval := dbms_xmlgen.getXML(l_xmlctx);
    DBMS_XMLGEN.closeContext(l_xmlctx);
    -- return l_new_retval;
end;
/
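For reference, "actual Oracle types" means schema-level SQL types rather than package-declared ones; something like the following (note that %TYPE is not allowed in SQL types, so a concrete datatype has to be substituted - the VARCHAR2 length here is an assumption):

create or replace type tp_rc_cod as object (
    cd_return number(2),
    cd_name   varchar2(100)  -- concrete type instead of some_table.name%TYPE
);
/

create or replace type tp_table_rc_cod as table of tp_rc_cod;
/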
But as you can see, it is still some effort, and there are other DBMS_XMLGEN options you would probably want to set.
I also think Oracle 12c removes the "actual Oracle type" requirement, but I am not sure.
I'm not sure that exactly answers your question, but I hope it helps.

Stored Procedure works fine from SQL Mgt Studio but throws Invalid Object name #AllActiveOrders from MVC app

I can run the 'guts' of my stored procedure as a giant query just fine from SQL Management Studio. Furthermore, I can even right-click and 'Execute' the stored procedure - y'know, run it as a stored procedure - from SQL Management Studio.
When my ASP.NET MVC app goes to run this stored procedure, I get issues:
System.Data.SqlClient.SqlException: Invalid object name '#AllActiveOrders'.
Does the impersonation account that ASP.NET runs under need special permissions? That can't be it; even when I run it locally from my Visual Studio (under my own login account) I get the same temp table error message.
EDIT: Furthermore, it seems to work fine when called from one ASP.NET app (which uses a WCF service / ADO.NET to call the stored procedure) but does not work from a different ASP.NET app (which calls the stored proc directly using ADO.NET).
FURTHERMORE: The MVC app that doesn't crash passes some parameters to the stored procedure, while the crashing app runs the stored proc with default parameters (doesn't pass any in). FWIW, when I run the stored procedure in SQL Mgt. Studio, it's with default parameters (and it doesn't crash).
If it's of any worth, I did have to fix a 'String or binary data would be truncated' issue just prior to this situation. I went into this massive query and fixed the temp table definition (a different one) that I knew to be the problem (since I had just edited it a day or so ago). I was able to see the 'String/Binary truncation' issue in SQL Mgt. Studio, as well as resolve it there, but I'm really stumped as to why I cannot see the 'Invalid object name' issue in SQL Mgt. Studio.
Stored procedures and temp tables generally don't mix well with strongly typed implementations of database objects (ADO, DataSets; I'm sure there are others).
If you change your #temp table to a @table variable, that should fix your issue.
(Apparently) this works in some cases:
IF 1=0 BEGIN
SET FMTONLY OFF
END
Although according to http://msdn.microsoft.com/en-us/library/ms173839.aspx, the functionality is considered deprecated.
An example of how to change from a temp table to a table variable:
create table #tempTable (id int, someVal varchar(50))
to:
declare @tempTable table (id int, someVal varchar(50))
There are a few differences between temp and var tables you should consider:
What's the difference between a temp table and table variable in SQL Server?
When should I use a table variable vs temporary table in sql server?
Ok. Figured it out with the help of my colleague, who did some better Google-fu than I had done prior.
First, we CAN indeed make SQL Management Studio puke on my stored procedure by adding the FMTONLY option:
SET FMTONLY ON;
EXEC [dbo].[My_MassiveStackOfSubQueriesToProduceADigestDataSet]
GO
Now, on to my two competing ASP.NET applications, and why one of them worked while the other didn't. Under the covers, both essentially used an ADO.NET System.Data.SqlClient.SqlDataAdapter to go get the data, and each performed a .Fill(DataSet1).
However, the one that was crashing was trying to get the schema in advance of the data, instead of just deriving the schema after the fact. So, it was this line of code that was killing it:
da.FillSchema(DataSet1, SchemaType.Mapped)
If you're struggling with this same issue, you may have come across forums like this one from MSDN, which are all over the internet and explain the details of what's going on quite adequately. It had just never occurred to me that when I called FillSchema I was essentially tripping over this same issue.
Now I know!!!
Following on from bkwdesign's answer, which traced the problem to ADO.NET DataAdapter.FillSchema using SET FMTONLY ON: I had a similar problem, and this is how I dealt with it.
I found the simplest solution was to short-circuit the stored proc, returning a dummy recordset that FillSchema could use. So at the top of the stored proc I added something like:
IF 1 = 0
BEGIN
    SELECT CAST(0 AS INT)            AS ID,
           CAST(NULL AS VARCHAR(10)) AS SomeTextCol,
           ...;
    RETURN 0;
END;
The columns of the select statement are identical in name, data type and order to the schema of the recordset that will be returned from the stored proc when it executes normally.
The RETURN ensures that FillSchema doesn't look at the rest of the stored proc, and so avoids problems with temp tables.

Stored procedure slow when called from web, fast from Management Studio

I have a stored procedure that times out every single time it's called from the web application.
I fired up SQL Profiler and traced the calls that time out, and finally found out these things:
When I executed the statements from within MS SQL Management Studio, with the same arguments (in fact, I copied the procedure call from the SQL Profiler trace and ran it), it finishes in 5-6 seconds on average.
But when called from the web application, it takes in excess of 30 seconds (in the trace), so my webpage actually times out by then.
Apart from the fact that my web application has its own user, everything is the same (same database, connection, server, etc.).
I also tried running the query directly in Management Studio as the web application's user, and it doesn't take more than 6 seconds.
How do I find out what is happening?
I am assuming it has nothing to do with the fact that we use BLL > DAL layers or table adapters, as the trace clearly shows the delay is in the actual procedure. That is all I can think of.
EDIT: I found out from this link that SSMS runs with SET ARITHABORT ON by default while ADO.NET connections do not - which makes no difference most of the time, but sometimes this happens, and the suggested work-around is to add the WITH RECOMPILE option to the stored proc. In my case, it's not working, but I suspect the cause is something very similar. Does anyone know what else ADO.NET sets differently, or where I can find the spec?
I've had a similar issue arise in the past, so I'm eager to see a resolution to this question. Aaron Bertrand's comment on the OP led to Query times out when executed from web, but super-fast when executed from SSMS, and while the question is not a duplicate, the answer may very well apply to your situation.
In essence, it sounds like SQL Server may have a corrupt cached execution plan. You're hitting the bad plan with your web server, but SSMS lands on a different plan since there is a different setting on the ARITHABORT flag (which would otherwise have no impact on your particular query/stored proc).
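One quick way to test this theory is to make SSMS run with the same setting the application gets. A sketch (the procedure name and parameter are placeholders):

-- SSMS normally runs with ARITHABORT ON; turning it OFF makes SSMS
-- hit the same plan cache entry as the ADO.NET client
SET ARITHABORT OFF;
EXEC dbo.MySlowProc @SomeParam = 123;
SET ARITHABORT ON;

If the proc is suddenly slow in SSMS too, you're looking at two cached plans that differ only by session settings.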
See ADO.NET calling T-SQL Stored Procedure causes a SqlTimeoutException for another example, with a more complete explanation and resolution.
I also experienced queries running slowly from the web and fast in SSMS, and I eventually found out that the problem was something called parameter sniffing.
The fix for me was to change all the parameters that are used in the sproc to local variables.
e.g. change:
ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
SELECT * FROM [Table] WHERE ID = @param1
to:
ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
DECLARE @param1a int
SET @param1a = @param1
SELECT * FROM [Table] WHERE ID = @param1a
Seems strange, but it fixed my problem.
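As an aside, a common alternative to the local-variable trick is OPTION (RECOMPILE), which makes SQL Server build a fresh plan for the statement on every execution, at the cost of compiling each time. A sketch against the same hypothetical proc:

ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
-- Recompiling per execution lets the optimizer use the actual
-- parameter value instead of a sniffed one
SELECT * FROM [Table] WHERE ID = @param1
OPTION (RECOMPILE)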
Not to spam, but as a hopefully helpful solution for others: our system saw a high degree of timeouts.
I tried setting the stored procedure to be recompiled by using sp_recompile, and this resolved the issue for the one SP.
Ultimately there was a larger number of SPs that were timing out, many of which had never done so before. By using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, the incidence of timeouts has plummeted significantly; there are still isolated occurrences, some where I suspect the plan regeneration is taking a while, and some where the SPs are genuinely under-performant and need re-evaluation.
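For anyone wanting to try the same first step, the recompile call looks roughly like this (the procedure name is hypothetical):

-- Marks the procedure so its cached plan is discarded and
-- recompiled the next time it runs
EXEC sp_recompile N'dbo.MySlowProc';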
Could it be that some other DB call made before the web application calls the SP is keeping a transaction open? That could be a reason for this SP to wait when called by the web application. I'd say isolate the call in the web application (put it on a new page) to ensure that some prior action in the web application isn't causing this issue.
You can target specific cached execution plans via:
SELECT cp.plan_handle, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE [text] LIKE N'%your troublesome SP or function name etc%'
And then remove only the execution plans causing issues via, for example:
DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
I've now got a job running every 5 minutes that looks for slow-running procedures or functions and automatically clears down those execution plans if it finds any:
if exists (
    SELECT cpu_time, *
    FROM sys.dm_exec_requests req
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
    --order by req.total_elapsed_time desc
    WHERE ([text] LIKE N'%your troublesome SP or function name etc%')
    and cpu_time > 8000
)
begin
    SELECT cp.plan_handle, st.[text]
    into #results
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
    WHERE [text] LIKE N'%your troublesome SP or function name etc%'

    delete #results where [text] like 'SELECT cp.plan_handle%'
    --select * from #results

    declare @handle varbinary(max)
    declare @handleconverted varchar(max)
    declare @sql varchar(1000)

    DECLARE db_cursor CURSOR FOR
    select plan_handle from #results

    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @handle

    WHILE @@FETCH_STATUS = 0
    BEGIN
        --e.g. DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
        print @handle

        set @handleconverted = '0x' + CAST('' AS XML).value('xs:hexBinary(sql:variable("@handle"))', 'VARCHAR(MAX)')
        print @handleconverted

        set @sql = 'DBCC FREEPROCCACHE (' + @handleconverted + ')'
        print 'DELETING: ' + @sql
        EXEC(@sql)

        FETCH NEXT FROM db_cursor INTO @handle
    END

    CLOSE db_cursor
    DEALLOCATE db_cursor

    drop table #results
end
Simply recompiling the stored procedure (a table function in my case) worked for me.
Like @Zane said, it could be due to parameter sniffing. I experienced the same behaviour, so I took a look at the execution plan of the procedure and of all the SP's statements run in a row (I copied all the statements from the procedure, declared the parameters as variables, and assigned the variables the same values the parameters had). The execution plans looked completely different: the SP execution took 3-4 seconds, while the statements run in a row with the exact same values returned instantly.
After some googling I found an interesting read about that behaviour: Slow in the Application, Fast in SSMS?
When compiling the procedure, SQL Server does not know that the value of @fromdate changes, but compiles the procedure under the assumption that @fromdate has the value NULL. Since all comparisons with NULL yield UNKNOWN, the query cannot return any rows at all, if @fromdate still has this value at run-time. If SQL Server would take the input value as the final truth, it could construct a plan with only a Constant Scan that does not access the table at all (run the query SELECT * FROM Orders WHERE OrderDate > NULL to see an example of this). But SQL Server must generate a plan which returns the correct result no matter what value @fromdate has at run-time. On the other hand, there is no obligation to build a plan which is the best for all values. Thus, since the assumption is that no rows will be returned, SQL Server settles for the Index Seek.
The problem was that I had parameters which could be left null, and if they were passed as null they would be initialised with a default value.
create procedure dbo.[procedure]
    @dateTo datetime = null
as
begin
    if (@dateTo is null)
    begin
        select @dateTo = GETUTCDATE()
    end

    select foo
    from dbo.[table]
    where createdDate < @dateTo
end
After I changed it to
create procedure dbo.[procedure]
    @dateTo datetime = null
as
begin
    declare @to datetime = coalesce(@dateTo, getutcdate())

    select foo
    from dbo.[table]
    where createdDate < @to
end
it worked like a charm again.
--BEFORE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
    @ToUserId bigint = null
)
AS
BEGIN
    SELECT * FROM tbl_Logins WHERE LoginId = @ToUserId
END

--AFTER CHANGING TO THIS, IT WORKS FINE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
    @ToUserId bigint = null
)
AS
BEGIN
    DECLARE @Toid bigint = null
    SET @Toid = @ToUserId

    SELECT * FROM tbl_Logins WHERE LoginId = @Toid
END

PL/SQL parser to identify the operations on tables

I am writing a PL/SQL parser to identify the operations (SELECT, INSERT, DELETE) performed on tables when I run a procedure, function or package.
GOAL: The goal of this tool is to identify which tables will be affected by running a procedure or function, so as to prepare better test cases.
Any better ideas or tool will really help a lot.
INPUT:
some SQL file with a procedure, or a .prc file.
OUTPUT required is:
SELECT from: First_table, secondTable
-> In procedure XYZ --This is if the procedure is calling one more procedure
INSERT into: SomeTable
INSERT into: SomeDiffTable
-> END of procedure XYZ --End of one more procedure.
DELETE from: xyzTable
INSERT into: OnemoreTable
My requirement is: when I am parsing proc1 and it calls another proc2, I have to go inside proc2 to find out all the operations performed in it, then come back to proc1 and continue.
For this I have to store all the procedures somewhere, and while parsing I have to check each token (word delimited by spaces) against that temp storage to find out whether it is a procedure or not.
As my logic takes a lot of time, can anybody suggest better logic to achieve my GOAL?
There's also the possibility of triggers being involved. That adds an additional layer of complexity.
I'd say you're better off mining DBA_DEPENDENCIES with a recursive query to determine impact analysis in the abstract; it won't capture dynamic SQL, but nothing will 100% of the time. In your case, proc1 depends on proc2, and proc2 depends on whatever it depends on, and so forth. It won't tell you the nature of the dependency - INSERT, UPDATE, DELETE, SELECT - but it's a beginning.
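A starting point for that recursive query might look like this (a sketch; the schema and procedure names are placeholders):

-- Walk the dependency tree downward from PROC1; NOCYCLE guards
-- against mutually dependent objects
select lpad(' ', 2 * (level - 1)) || owner || '.' || name as dependent,
       referenced_owner || '.' || referenced_name        as referenced,
       referenced_type
  from dba_dependencies
 start with owner = 'MYSCHEMA' and name = 'PROC1'
connect by nocycle prior referenced_owner = owner
               and prior referenced_name = name
               and prior referenced_type = type;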
If you're really interested in determining the actual impact of running a procedure with particular input values, implement it in a non-production system, and then turn auditing on your system up to 11:
begin
    for i in (select owner, object_type, object_name
                from dba_objects
               where owner in ([list of application schemas])
                 and object_type in ('TABLE', 'PACKAGE', 'PROCEDURE', 'FUNCTION', 'VIEW'))
    loop
        execute immediate 'AUDIT ALL ON ' || i.owner || '.' || i.object_name ||
                          ' BY SESSION';
    end loop;
end;
/
Run your test, and see what objects got touched as a result of the execution by mining the audit trail. It's not bulletproof, as it only audits objects that got touched by that execution, but it does tell you how they got touched.
