Stored procedure slow when called from web, fast from Management Studio - asp.net

I have a stored procedure that times out every single time it's called from the web application.
I fired up SQL Profiler, traced the calls that time out, and finally found out these things:
When I executed the statements from within SQL Server Management Studio, with the same arguments (in fact, I copied the procedure call from the SQL Profiler trace and ran it), it finishes in 5-6 seconds on average.
But when called from the web application, it takes in excess of 30 seconds (in the trace), so my web page actually times out by then.
Apart from the fact that my web application has its own user, everything else is the same (same database, connection, server, etc.).
I also tried running the query directly in Management Studio as the web application's user, and it doesn't take more than 6 seconds.
How do I find out what is happening?
I am assuming it has nothing to do with the fact that we use BLL > DAL layers or table adapters, as the trace clearly shows the delay is in the actual procedure. That is all I can think of.
EDIT: I found out from this link that ADO.NET and SSMS use different ARITHABORT settings (SSMS runs with ARITHABORT ON, while ADO.NET connections leave it OFF), which is fine most of the time, but sometimes this happens, and the suggested workaround is to add the WITH RECOMPILE option to the stored proc. In my case it's not working, but I suspect it's something very similar to this. Does anyone know what else ADO.NET does, or where I can find the spec?
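(For reference, that workaround looks roughly like this - a sketch with placeholder procedure and table names, not my real ones:)
ALTER PROCEDURE dbo.MySlowProc
    @SomeParam int
WITH RECOMPILE  -- forces a fresh plan on every execution
AS
SELECT col1, col2
FROM dbo.SomeTable
WHERE SomeColumn = @SomeParam;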

I've had a similar issue arise in the past, so I'm eager to see a resolution to this question. Aaron Bertrand's comment on the OP led to Query times out when executed from web, but super-fast when executed from SSMS, and while the question is not a duplicate, the answer may very well apply to your situation.
In essence, it sounds like SQL Server may have a bad cached execution plan. You're hitting the bad plan from your web server, but SSMS lands on a different plan since its ARITHABORT setting differs (a flag which would otherwise have no impact on your particular query/stored proc).
See ADO.NET calling T-SQL Stored Procedure causes a SqlTimeoutException for another example, with a more complete explanation and resolution.
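One quick way to test this theory from SSMS is to run the call under ADO.NET's default setting and see whether you land on the same slow plan (a sketch; substitute the actual procedure call from your trace):
SET ARITHABORT OFF;  -- mimic the ADO.NET connection's default
EXEC dbo.MySlowProc @SomeParam = 42;  -- hypothetical call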

I also experienced queries running slowly from the web and fast in SSMS, and I eventually found out that the problem was something called parameter sniffing.
The fix for me was to copy all the parameters used in the sproc into local variables.
e.g. change:
ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
SELECT * FROM [Table] WHERE ID = @param1
to:
ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
DECLARE @param1a int
SET @param1a = @param1
SELECT * FROM [Table] WHERE ID = @param1a
Seems strange, but it fixed my problem.

Not to spam, but as a hopefully helpful solution for others: our system saw a high rate of timeouts.
I tried marking the stored procedure for recompilation using sp_recompile, and this resolved the issue for the one SP.
Ultimately there was a larger number of SPs that were timing out, many of which had never done so before. Using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, the incidence of timeouts has plummeted significantly. There are still isolated occurrences, some where I suspect the plan regeneration is taking a while, and some where the SPs are genuinely under-performant and need re-evaluation.
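For anyone reaching for the same tools, this is roughly what that looks like (the procedure name is a placeholder):
EXEC sp_recompile N'dbo.YourSlowProc';  -- mark one proc so its plan is rebuilt on next execution
DBCC FREEPROCCACHE;      -- heavier hammer: drop every cached plan on the instance
DBCC DROPCLEANBUFFERS;   -- also empty the buffer pool; use with care in production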

Could it be that some other DB call made before the web application calls the SP is keeping a transaction open? That could be a reason for this SP to wait when called by the web application. I suggest isolating the call in the web application (put it on a new page) to ensure that some prior action in the web application isn't causing this issue.
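If you want to check for a lingering open transaction while the call hangs, a sketch like this may help:
DBCC OPENTRAN;  -- reports the oldest active transaction in the current database
-- or list every session currently holding an open transaction:
SELECT s.session_id, s.host_name, s.program_name, t.transaction_id
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s ON s.session_id = t.session_id;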

You can target specific cached execution plans via:
SELECT cp.plan_handle, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE [text] LIKE N'%your troublesome SP or function name etc%'
And then remove only the execution plans causing issues via, for example:
DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
I've now got a job running every 5 minutes that looks for slow-running procedures or functions and automatically clears down those execution plans if it finds any:
if exists (
    SELECT cpu_time
    FROM sys.dm_exec_requests req
    CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS sqltext
    --order by req.total_elapsed_time desc
    WHERE ([text] LIKE N'%your troublesome SP or function name etc%')
    and cpu_time > 8000
)
begin
    SELECT cp.plan_handle, st.[text]
    into #results
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE [text] LIKE N'%your troublesome SP or function name etc%'

    -- don't clear the plan belonging to this monitoring query itself
    delete #results where [text] like 'SELECT cp.plan_handle%'
    --select * from #results

    declare @handle varbinary(64)
    declare @handleconverted varchar(max)
    declare @sql varchar(1000)

    DECLARE db_cursor CURSOR FOR
        select plan_handle from #results
    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @handle
    WHILE @@FETCH_STATUS = 0
    BEGIN
        --e.g. DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
        -- convert the binary plan_handle to a hex string for use in dynamic SQL
        set @handleconverted = '0x' + CAST('' AS XML).value('xs:hexBinary(sql:variable("@handle"))', 'VARCHAR(MAX)')
        set @sql = 'DBCC FREEPROCCACHE (' + @handleconverted + ')'
        print 'DELETING: ' + @sql
        EXEC(@sql)
        FETCH NEXT FROM db_cursor INTO @handle
    END
    CLOSE db_cursor
    DEALLOCATE db_cursor

    drop table #results
end

Simply recompiling the stored procedure (a table function in my case) worked for me.

Like @Zane said, it could be due to parameter sniffing. I experienced the same behaviour, so I took a look at the execution plan of the procedure and at all of the sp's statements run as a plain batch (I copied all the statements from the procedure, declared the parameters as variables, and assigned the variables the same values the parameters had). The execution plans looked completely different: the sp execution took 3-4 seconds, while the same statements as a batch with the exact same values returned instantly.
After some googling I found an interesting read about that behaviour: Slow in the Application, Fast in SSMS?
When compiling the procedure, SQL Server does not know that the value of @fromdate changes, but compiles the procedure under the assumption that @fromdate has the value NULL. Since all comparisons with NULL yield UNKNOWN, the query cannot return any rows at all, if @fromdate still has this value at run-time. If SQL Server would take the input value as the final truth, it could construct a plan with only a Constant Scan that does not access the table at all (run the query SELECT * FROM Orders WHERE OrderDate > NULL to see an example of this). But SQL Server must generate a plan which returns the correct result no matter what value @fromdate has at run-time. On the other hand, there is no obligation to build a plan which is the best for all values. Thus, since the assumption is that no rows will be returned, SQL Server settles for the Index Seek.
The problem was that I had parameters which could be left null, and if they were passed as null they would be initialised with a default value.
create procedure dbo.[procedure]
    @dateTo datetime = null
as
begin
    if (@dateTo is null)
    begin
        select @dateTo = GETUTCDATE()
    end
    select foo
    from dbo.[table]
    where createdDate < @dateTo
end
After I changed it to
create procedure dbo.[procedure]
    @dateTo datetime = null
as
begin
    declare @to datetime = coalesce(@dateTo, getutcdate())
    select foo
    from dbo.[table]
    where createdDate < @to
end
it worked like a charm again.

--BEFORE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
    @ToUserId bigint = null
)
AS
BEGIN
    SELECT * FROM tbl_Logins WHERE LoginId = @ToUserId
END
--AFTER CHANGING IT TO THIS, IT WORKS FINE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
    @ToUserId bigint = null
)
AS
BEGIN
    DECLARE @Toid bigint = null
    SET @Toid = @ToUserId
    SELECT * FROM tbl_Logins WHERE LoginId = @Toid
END

Related

PLSQL: No output displayed when using dynamic query inside Stored Procedure

I have been asked to create an SP which creates a temporary table and inserts some records.
I am preparing some sample code for this, shown below, but the output is not displayed.
create or replace procedure Test
is
stmt varchar2(1000);
stmt2 varchar2(1000);
begin
stmt := 'create global temporary table temp_1(id number(10))';
execute immediate stmt;
insert into temp_1(id) values (10);
execute immediate 'Select * from temp_1';
execute immediate 'Drop table temp_1';
commit;
end;
When I execute the SP with Exec Test, the desired output is not displayed.
I am expecting the output of "Select * from temp_1" to be displayed, but it is not happening. Please suggest where I am going wrong.
In particular, I am interested in knowing why execute immediate 'Select * from temp_1'; does not yield any result.
For two reasons. Firstly, because as @a_horse_with_no_name said, PL/SQL won't display the result of a query. But more importantly here, perhaps, the query is never actually executed. This behaviour is stated in the documentation:
If dynamic_sql_statement is a SELECT statement, and you omit both into_clause and bulk_collect_into_clause, then execute_immediate_statement never executes.
You would have to execute immediate into a variable, or more likely a collection if your real scenario has more than one row, and then process that data - iterating over the collection in the bulk case.
There is not really a reliable way to display anything from PL/SQL; you can use dbms_output but that's more suited for debugging than real output, and you usually have no guarantee that the client will be configured to show whatever you put into its buffer.
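For illustration, a minimal sketch of the bulk-collect approach, reusing the question's temp_1 table and printing through dbms_output purely for debugging:
declare
  type t_ids is table of number;
  l_ids t_ids;
begin
  -- fetch the result set into a collection instead of discarding it
  execute immediate 'select id from temp_1' bulk collect into l_ids;
  for i in 1 .. l_ids.count loop
    dbms_output.put_line(l_ids(i));  -- requires SET SERVEROUTPUT ON in SQL*Plus
  end loop;
end;
/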
This is all rather academic since creating and dropping a GTT on the fly is not a good idea and there are better ways to accomplish whatever it is you're trying to do.
The block you showed shouldn't actually run at all: as you're creating temp_1 dynamically, the static SQL insert into temp_1 will error because that table does not yet exist when the block is compiled. The insert would have to be dynamic too. Any dynamic SQL is a bit of a warning sign you're maybe doing something wrong, though it is sometimes necessary; having to do everything dynamically suggests the whole approach needs a rethink, as does creating objects at runtime.

Why TableAdapter doesn't recognize @parameter

I am using the TableAdapter Query Configuration Wizard in Visual Studio 2013 for getting data from my database. For some queries like this:
SELECT *
FROM ItemsTable
ORDER BY date_of_creation desc, time_of_creation desc
OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
it doesn't recognize @PageNumber as a parameter and cannot generate a function with these arguments, while it works fine for queries like:
Select Top (@count) * from items_table
Why does the TableAdapter fail to generate a function with the mentioned arguments for the first query, while it can generate one fine for the second (for example: tableadapter.getDataByCount(int count))?
Am I forced to use a stored procedure? If yes, how, given that I don't know anything about them?
Update: The problem occurs in the TableAdapter Configuration Wizard in the DataSet editor (VS 2013): it doesn't generate functions with these parameters, and sometimes it says @RowsPerPage should be declared. I found that this happens when a @parameter_name is used in a clause other than SELECT and WHERE; in this query, for example, it is used in the OFFSET clause.
I can't tell you how to fix it in ASP, but here is a simple stored procedure that should do the same thing:
CREATE PROCEDURE dbo.ReturnPageOfItems
(
    @pageNumber INT,
    @rowsPerPage INT
)
AS
BEGIN
    SELECT *
    FROM dbo.ItemsTable
    ORDER BY date_of_creation DESC,
             time_of_creation DESC
    OFFSET (@pageNumber - 1) * @rowsPerPage ROWS
    FETCH NEXT @rowsPerPage ROWS ONLY;
END;
This will also perform better than simply passing the query, because SQL Server can reuse the cached query plan created for the procedure on its first execution. It is best practice not to use SELECT *, as that can cause maintenance trouble for you if there are schema changes to the table(s) involved, so I encourage you to spell out the columns in which you're actually interested. The documentation for the CREATE PROCEDURE command is available here, and it spells out the many options you have in greater detail. However, the code above should work fine as is.
If you need to grant access to your application user so they can use this proc, that code is
GRANT EXECUTE ON OBJECT::dbo.ReturnPageOfItems TO userName;

PL/SQL parser to identify the operation on table

I am writing a PL/SQL parser to identify the operations (Select, Insert, Delete) performed on tables when I run a procedure, function, or package.
GOAL: The goal of this tool is to identify which tables will be affected by running a procedure or function, in order to prepare better test cases.
Any better ideas or tools would really help a lot.
INPUT:
some SQL file with procedure
or proc file.
OUTPUT required is:
SELECT from: First_table, secondTable
-> In procedure XYZ --This is if the procedure is calling one more procedure
INSERT into: SomeTable
INSERT into: SomeDiffTable
-> END of procedure XYZ --End of one more procedure.
DELETE from: xyzTable
INSERT into: OnemoreTable
My requirement is: when I am parsing proc1 and it calls another proc2, I have to go inside proc2 to find out all the operations performed there, then come back to proc1 and continue.
For this I have to store all the procedures somewhere, and while parsing I have to check each token (word delimited by spaces) against that temporary storage to find out whether it is a procedure or not.
My logic takes a lot of time. Can anybody suggest better logic to achieve my GOAL?
There's also the possibility of triggers being involved, which adds an additional layer of complexity.
I'd say you're better off mining DBA_DEPENDENCIES with a recursive query to determine impact analysis in the abstract; it won't capture dynamic SQL, but nothing will 100% of the time. In your case, proc1 depends on proc2, and proc2 depends on whatever it depends on, and so forth. It won't tell you the nature of the dependency - INSERT, UPDATE, DELETE, SELECT - but it's a beginning.
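A minimal sketch of such a recursive query (the schema and procedure names are placeholders):
select level, owner, name, type, referenced_owner, referenced_name, referenced_type
from dba_dependencies
start with owner = 'APP' and name = 'PROC1' and type = 'PROCEDURE'
connect by prior referenced_owner = owner
       and prior referenced_name = name
       and prior referenced_type = type;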
If you're really interested in determining the actual impact of a single run of a procedure (for one set of input values), implement it in a non-production system, and then turn auditing on your system up to 11:
begin
  for i in (select owner, object_type, object_name
            from dba_objects
            where owner in ([list of application schemas])
            and object_type in ('TABLE', 'PACKAGE', 'PROCEDURE', 'FUNCTION', 'VIEW'))
  loop
    execute immediate 'AUDIT ALL ON ' || i.owner || '.' || i.object_name ||
                      ' BY SESSION';
  end loop;
end;
/
Run your test, and see what objects got touched as a result of the execution by mining the audit trail. It's not bulletproof, as it only audits objects that got touched by that execution, but it does tell you how they got touched.
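For instance, a sketch of mining the audit trail afterwards, assuming the test ran within the last hour:
select username, obj_name, action_name, timestamp
from dba_audit_object
where timestamp > sysdate - 1/24
order by timestamp;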

Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit'; that is a passthrough stored procedure to another database on the same machine, called MyOtherDB, which has a stored procedure called 'CreateAudit' as well.
In other words, in MyDB I have CreateAudit, which does the following: EXEC MyOtherDB.dbo.CreateAudit.
I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):
int openStatus = 0, closeStatus = 0;
openStatus = Convert.ToInt32(SPs.LogAccess(userId, "OPENED"));
closeStatus = Convert.ToInt32(SPs.LogAccess(userId, "CLOSED"));
This is simplified, but this is what LogAccess calls:
ALTER procedure [dbo].[LogAccess]
    @UserID uniqueidentifier,
    @Action varchar(10),
    @Status integer output
as
DECLARE @mStatus INT
EXEC [MyOtherDb].[dbo].[LogAccess]
    @UserID = @UserID,
    @Action = @Action,
    @Status = @mStatus OUTPUT
select @mStatus
In the second call, the stored procedure is supposed to mark the record that was created by the first call (the "OPENED" one) with a status of closed.
This works great if I run them independently of one another, or even if I paste them into query analyzer. However when they execute from the application, the record is not marked as "Closed".
When I run SQL profiler I see that both queries ran, and if I copy the queries out and run them from query analyzer the record gets marked as closed 100% of the time!
When I run it from the application, about once every 20 times or so, the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error!
Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record from the first is created?
When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time.
My head is spinning; any ideas why this is happening?
Thanks for any responses, very interested in hearing how this can happen.
In your 1st stored proc, try having the EXEC statement wait for a return value from the 2nd stored proc. My suspicion is that your first SP is firing off the 2nd stored proc and then immediately returning control to your .NET code, which is leading to the above commenter's concurrency issue. (That is to say, the 2nd SP hasn't finished running yet by the time your next DB call is made!)
SP1: EXEC @retval = SP2 ....
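Roughly, applied to the proc from the question (a sketch; treating the inner proc's return code as a hypothetical status value):
ALTER procedure [dbo].[LogAccess]
    @UserID uniqueidentifier,
    @Action varchar(10),
    @Status integer output
as
DECLARE @retval INT
-- wait for the inner procedure's return code before handing control back
EXEC @retval = [MyOtherDb].[dbo].[LogAccess]
    @UserID = @UserID,
    @Action = @Action,
    @Status = @Status OUTPUT
return @retval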

How to find out which package/procedure is updating a table?

Is it possible to find out which package, or which procedure within a package, is updating a table?
Due to a certain project being handed over without proper documentation (the person who handed it over has since left), data that we know we have updated keeps going back to some strange source value.
We are guessing that a database job or scheduler is running the update command without our knowledge. I am hoping there is a way to find out where the updating code is being called from, perhaps by installing a trigger on the table that we are monitoring.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the insert statement in the trigger with some condition that only makes it log when the change you're interested in is occurring - on my test server this statement runs rather slowly (1 second), so I wouldn't want to be logging all these updates. Of course, in that case, you'd need to change the trigger to be a row-level one so that you could inspect the :new or :old values. If you are really concerned about the overhead of the select, you can change it to not join against v$sql, and instead just save the SQL_ADDRESS column, then schedule a job with DBMS_JOB to go off and update the sql_text column with a second update statement, thereby offloading the work into another session and not blocking your original update.
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects where object_id = <program_id>;
In the case when the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it - you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back - you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job then you can find out what scheduled database jobs exist and look into what they do (see the sketch after this list). Other things you can do are:
look at the dependency views, e.g. ALL_DEPENDENCIES, to see what packages/triggers etc. use that table. Depending on the size of your system that may return a lot of objects to trawl through.
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently.
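On the scheduled-job angle mentioned before the list, a quick sketch for listing both the old-style jobs and the newer scheduler jobs:
select job, what from dba_jobs;
select owner, job_name, job_action from dba_scheduler_jobs;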
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
Just write an "after update" trigger and, in this trigger, log the results of DBMS_UTILITY.FORMAT_CALL_STACK in a dedicated table.
The purpose of this function is exactly to give you the complete call stack of all the stored procedures and triggers that were fired to reach your code.
I am writing from the mobile app, so I can't give you more detailed examples, but if you google for it you'll find many.
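A minimal sketch of that idea (the table and trigger names here are made up):
create table call_stack_log (logged_at date, stack varchar2(4000));

create or replace trigger trg_log_call_stack
after update on table_of_interest
begin
  -- record the PL/SQL call stack that led to this update
  insert into call_stack_log
  values (sysdate, substr(dbms_utility.format_call_stack, 1, 4000));
end;
/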
A quick and dirty option if you're working locally, and are only interested in the first thing that's altering the data, is to throw an error in the trigger instead of logging. That way, you get the usual stack trace and it's a lot less typing and you don't need to create a new table:
CREATE OR REPLACE TRIGGER raise_on_update  -- trigger name is arbitrary
AFTER UPDATE ON table_of_interest
BEGIN
    RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/
