I am attempting to spy on a procedure with an output parameter. The procedure has two parameters: one input parameter and one output parameter.
The input parameter has a default value of NULL.
CREATE PROCEDURE spExampleProcedure
@INPUTPARAM DATETIME = NULL,
@OUTPUTPARAM INT = NULL OUTPUT
AS
....
I'm attempting to test a procedure that calls spExampleProcedure. spExampleProcedure is called multiple times with a different @INPUTPARAM each time. I want to check that parameter and return a different value based on the input (a more advanced sort of mock).
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
This is not working. I would really like to be able to fake/spy a procedure the way I do a function, because it would really help when a stored procedure is called multiple times.
An option I've considered is converting spExampleProcedure to a function, but that would only sidestep the issue. Looking at SpyProcedure, I see no reason why my setup should not work, besides perhaps that the fake procedure it creates might not have a default value of NULL.
The posted example should work as pointed out by Sebastian Meine. I wanted to elaborate on why my test was not working in case that ends up helping somebody else.
My issue was related to my test data setup.
Consider:
CREATE PROCEDURE spExampleProcedure
@INPUTPARAM DATETIME = NULL,
@OUTPUTPARAM INT = NULL OUTPUT
AS
....
And
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
Procedure Under Test:
CREATE PROCEDURE spExampleProcedureUnderTest
@ID INT
AS
BEGIN
DECLARE @EXAMPLEVAR DATETIME, @OUTPUT INT
SELECT @EXAMPLEVAR = VAR FROM ExampleTable WHERE ID = @ID
EXEC spExampleProcedure @OUTPUTPARAM = @OUTPUT OUTPUT
EXEC spExampleProcedure @EXAMPLEVAR, @OUTPUT OUTPUT
...
My Test Procedure was faking ExampleTable but not putting a value in for VAR.
EXEC tSQLt.FakeTable 'dbo.ExampleTable'
INSERT INTO ExampleTable (ID) VALUES (1)
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
EXEC spExampleProcedureUnderTest 1
Instead of
EXEC tSQLt.FakeTable 'dbo.ExampleTable'
INSERT INTO ExampleTable (ID, VAR) VALUES (1, '2018-06-01')
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
EXEC spExampleProcedureUnderTest 1
Emphasis on the second line (the INSERT) of each. Notice that in the second version I added a value for VAR in my insert.
Effectively, my spied procedure was being called both times with NULL. Be careful with data coming from faked tables: FakeTable removes constraints, so it's easy to put a NULL into a table that would otherwise not allow it.
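To illustrate the trap, here is a minimal sketch (it assumes VAR is declared NOT NULL on the real table, which is not stated in the original question):
CREATE TABLE dbo.ExampleTable (ID INT NOT NULL, VAR DATETIME NOT NULL);
GO
-- FakeTable swaps in a copy of the table with all constraints removed
EXEC tSQLt.FakeTable 'dbo.ExampleTable';
-- This insert now succeeds even though VAR is omitted, so VAR is silently NULL
INSERT INTO dbo.ExampleTable (ID) VALUES (1);
SELECT ID, VAR FROM dbo.ExampleTable; -- VAR comes back as NULL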
I am working on an Oracle 11g DB and am having trouble writing Oracle syntax.
I am trying to pass a number variable to my select query and read the query's results through a cursor.
Declare yr_nr NUMBER;
Begin
yr_nr := 2014;
SELECT DCD.CCY ID, DCD.CCYCDDSC DSC
FROM CCYDCD DCD, CCYEXC EXC
WHERE DCD.CCY = EXC.CCY
AND EXC.YEARNR = yr_nr
End
This select query returns 80 records. How do I rewrite this so it works?
Ok, so what you have here is an anonymous block and everything that happens in the block stays in that block. Kinda like Vegas.
In other words there is nothing to handle the result set from your query. When you do this:
declare
[varName] [type]
begin
select foo from bar where column = var ; <--- this has no place to go!
end
When you are at a SQL*Plus prompt, SQL*Plus has a default result set handler which processes the returned result set and prints it to the screen.
When you use a third-party tool like JDBC or Oracle's own OCI library, that tool provides the result set handler and then hands the values to you through the appropriate calls to get the data, e.g.:
rs.getInt([column]) // returns the value of that column for the current row
That anonymous block is essentially a stored procedure, so you have to have something to do with the result set. This is the cause of the missing "INTO" error you are getting.
If on the other hand you did something like:
declare
[varName] [type]
result number ;
begin
select count(foo) into result from bar where column = var ;
end
The variable result would have the value of 80 since that is the number of records fetched.
declare
[varName] [type]
cursor thisCursor(p1 in number) is select foo from bar where column = p1 ;
begin
for rec in thisCursor(varName) loop
if rec.foo = [some value] then
doSomething ;
end if ;
end loop ;
end ;
Doing this allows you to do something with the result set.
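Applied to the query in the question, a minimal sketch might look like this (table and column names taken from the question; it just prints each row, which may or may not be what you ultimately want to do with it):
DECLARE
yr_nr NUMBER := 2014;
-- parameterised cursor wrapping the original query
CURSOR ccy_cur(p_year IN NUMBER) IS
SELECT DCD.CCY ID, DCD.CCYCDDSC DSC
FROM CCYDCD DCD, CCYEXC EXC
WHERE DCD.CCY = EXC.CCY
AND EXC.YEARNR = p_year;
BEGIN
FOR rec IN ccy_cur(yr_nr) LOOP
-- every fetched row now has somewhere to go
DBMS_OUTPUT.PUT_LINE(rec.ID || ' - ' || rec.DSC);
END LOOP;
END;
/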
I'm very new to tSQLt and am having some difficulty with what should really be a very simple test.
I have added a column to the SELECT statement executed within a stored procedure.
How do I test in a tSQLt test that the column is included in the resultset from that stored procedure?
Generally, when adding a column to the output of a stored procedure, you will want to test that the column both exists and is populated with the correct data. Since we're going to make sure that the column is populated with the correct data anyway, we can design a test that does exactly that:
CREATE PROCEDURE MyTests.[test stored procedure values MyNewColumn correctly]
AS
BEGIN
-- Create Actual and Expected table to hold the actual results of MyProcedure
-- and the results that I expect
CREATE TABLE MyTests.Actual (FirstColumn INT, MyNewColumn INT);
CREATE TABLE MyTests.Expected (FirstColumn INT, MyNewColumn INT);
-- Capture the results of MyProcedure into the Actual table
INSERT INTO MyTests.Actual
EXEC MySchema.MyProcedure;
-- Create the expected output
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (7, 12);
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (25, 99);
-- Check that Expected and Actual tables contain the same results
EXEC tSQLt.AssertEqualsTable 'MyTests.Expected', 'MyTests.Actual';
END;
Generally, the stored procedure you are testing relies on other tables or other stored procedures. Therefore, you should become familiar with FakeTable and SpyProcedure as well: http://tsqlt.org/user-guide/isolating-dependencies/
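For example, if MyProcedure read its data from a table and called another procedure, the test setup might start like this (the table and dependent procedure names here are hypothetical, just to show the shape):
-- Isolate the table MyProcedure reads from and seed only the rows this test needs
EXEC tSQLt.FakeTable 'MySchema.MySourceTable';
INSERT INTO MySchema.MySourceTable (FirstColumn, MyNewColumn) VALUES (7, 12);
INSERT INTO MySchema.MySourceTable (FirstColumn, MyNewColumn) VALUES (25, 99);
-- Replace a dependent procedure so this test does not exercise its real logic
EXEC tSQLt.SpyProcedure 'MySchema.MyDependentProcedure';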
Another option if you are just interested in the structure of the output and not the content (and you are running on SQL2012 or greater) would be to make use of sys.dm_exec_describe_first_result_set_for_object in your test.
This dmo (dynamic management object) returns a variety of information about the first result set returned for a given object.
In my example below, I have only used a few of the columns returned by this dmo but others are available if, for example, your output includes decimal data types.
In this test, I populate a temporary table (#expected) with information about how I expect each column to be returned - such as name, datatype and nullability.
I then select the equivalent columns from the dmo into another temporary table (#actual).
Finally I use tSQLt.AssertEqualsTable to compare the contents of the two tables.
Having said all that, whilst I frequently write tests to validate the structure of views or tables (using tSQLt.AssertResultSetsHaveSameMetaData), I have never found the need to just test the result set contract for procedures. Dennis is correct: you would typically be interested in asserting that the various columns in your result set are populated with the correct values, and by the time you've covered that functionality you will have covered every column anyway.
if object_id('dbo.myTable') is not null drop table dbo.myTable;
go
if object_id('dbo.myTable') is null
begin
create table dbo.myTable
(
Id int not null primary key
, ColumnA varchar(32) not null
, ColumnB varchar(64) null
)
end
go
if object_id('dbo.myProcedure') is not null drop procedure dbo.myProcedure;
go
create procedure dbo.myProcedure
as
begin
select Id, ColumnA, ColumnB from dbo.myTable;
end
go
exec tSQLt.NewTestClass #ClassName = 'myTests';
if object_id('[myTests].[test result set on SQL2012+]') is not null drop procedure [myTests].[test result set on SQL2012+];
go
create procedure [myTests].[test result set on SQL2012+]
as
begin
; with expectedCte (name, column_ordinal, system_type_name, is_nullable)
as
(
-- The first row sets up the data types for the #expected but is excluded from the expected results
select cast('' as nvarchar(200)), cast(0 as int), cast('' as nvarchar(200)), cast(0 as bit)
-- This is the result we are expecting to see
union all select 'Id', 1, 'int', 0
union all select 'ColumnA', 2, 'varchar(32)', 0
union all select 'ColumnB', 3, 'varchar(64)', 1
)
select * into #expected from expectedCte where column_ordinal > 0;
--! Act
select
name
, column_ordinal
, system_type_name
, is_nullable
into
#actual
from
sys.dm_exec_describe_first_result_set_for_object(object_id('dbo.myProcedure'), 0);
--! Assert
exec tSQLt.AssertEqualsTable '#expected', '#actual';
end
go
exec tSQLt.Run '[myTests].[test result set on SQL2012+]'
Input:
Package name (IN)
Procedure name (or function name) (IN)
A table indexed by integer; it will contain the values that will be used to execute the procedure (IN/OUT).
E.g., let's assume that we want to execute the procedure below:
utils.get_emp_num(emp_name IN VARCHAR,
emp_last_name IN VARCHAR,
emp_num OUT NUMBER,
result OUT VARCHAR);
The procedure that we will create will have as inputs:
package_name = utils
procedure_name = get_emp_num
table = T[1] -> name
T[2] -> lastname
T[3] -> 0 (any value)
T[4] -> N (any value)
run_procedure(package_name,
procedure_name,
table)
The main procedure should return the same table that was passed in, but updated with the execution results of the procedure:
table = T[1] -> name
T[2] -> lastname
T[3] -> 78734 (new value)
T[4] -> F (new value)
Any thoughts?
You can achieve it with EXECUTE IMMEDIATE. Basically, you build a SQL statement of the following form:
sql := 'BEGIN utils.get_emp_num(:1, :2, :3, :4); END;';
Then you execute it:
EXECUTE IMMEDIATE sql USING t(1), t(2), OUT t(3), OUT t(4);
Now here comes the tricky part: for each number of parameters and each IN/OUT combination you need a separate EXECUTE IMMEDIATE statement. And to figure out the number of parameters and their direction, you need to query the ALL_ARGUMENTS view first.
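For example, something along these lines gives you one row per parameter (a sketch; you may also need to handle the OVERLOAD column for overloaded procedures):
SELECT argument_name, position, data_type, in_out
FROM all_arguments
WHERE package_name = 'UTILS'
AND object_name = 'GET_EMP_NUM'
ORDER BY position;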
You might be able to simplify it by passing the whole table as a bind argument instead of a separate bind argument for each table element. But I haven't quite figured out how you would do that.
And the next thing you should consider: the elements of the table T you're using will have a single type: VARCHAR2, NUMBER, etc. So the current mixture, where you have both numbers and strings, won't work.
BTW: Why do you want such a dynamic call mechanism anyway?
Get the argument_name, data_type, in_out and position of each parameter from the ALL_ARGUMENTS view, then build the PL/SQL block:
DECLARE
-- loop over argument_name and create the declare section:
-- argument_name data_type := <input value> if in_out <> 'OUT', otherwise NULL
BEGIN
-- in the case of a function, declare an additional variable for the return value
function_var := package_name.procedure_name( /* loop over argument_name */ );
-- use a table of ANYDATA, declared as a global in the package
if function then
package_name.ad_table.EXTEND;
package_name.ad_table(package_name.ad_table.LAST) := function_var;
end if;
-- loop over argument_name where in_out <> 'IN'
package_name.ad_table.EXTEND;
package_name.ad_table(package_name.ad_table.LAST) :=
-- ANYDATA.ConvertVarchar2(argument_name) if data_type = 'VARCHAR2'
-- ANYDATA.ConvertNumber(...) if NUMBER, ANYDATA.ConvertDate(...) if DATE
-- ...
END;
The results are stored in that table.
To get the values back out, use the ANYDATA Access* functions.
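As a minimal sketch of that last step, this is how a value can be wrapped in and read back out of SYS.ANYDATA (the literal and variable names are illustrative):
DECLARE
ad SYS.ANYDATA;
num_out NUMBER;
BEGIN
-- wrap a NUMBER result in an ANYDATA instance (what the generated block would do)
ad := SYS.ANYDATA.ConvertNumber(78734);
-- read it back with the matching Access* function
num_out := ad.AccessNumber();
DBMS_OUTPUT.PUT_LINE('emp_num = ' || num_out);
END;
/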
In my project, EF calls a stored procedure, which is shown below. It returns either 1 or SCOPE_IDENTITY().
In the EF function imports, the stored procedure is listed with a return type of decimal.
When the stored procedure returns SCOPE_IDENTITY(), everything is OK.
But when the IF condition of the SP is satisfied, EF throws this error:
The data reader returned by the store data provider does not have enough columns for the query requested.
Please help.
This is my stored procedure:
@VendorId int,
@ueeareaCode varchar(3),
@TuPrfxNo varchar(3),
@jeeSfxNo varchar(4),
@Tjode varchar(3),
@uxNo varchar(3),
@TyufxNo varchar(4),
@Iyuy bit
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
SET NOCOUNT ON;
IF EXISTS (Select dfen_id
from dbo.efe_phfedwn_eflwn
where
[yu] = @ueeareaCode and
[uy] = @TuPrfxNo and
[yuno] = @jeeSfxNo)
return 1
ELSE
Begin
INSERT INTO dbo.yu
....................
Select Scope_Identity()
End
END
The error tells us that EF is expecting a result set, and when we use RETURN we don't get a result set. On top of that, the function import expects a decimal while the procedure would be handing back an integer, so we also CAST the selected values to decimal.
So modify the SQL so that we SELECT instead of RETURN, like so (not forgetting to use CAST):
IF EXISTS (Select cntct_ctr_phn_ln_id
from dbo.cntct_ctr_phn_ln
where
[toll_free_phn_area_cd] = @TollfreeareaCode and
[toll_free_phn_prfx_no] = @TollfreePrfxNo and
[toll_free_phn_sfx_no] = @TollfreeSfxNo)
SELECT CAST(1 AS decimal)
Then also CAST the result of SCOPE_IDENTITY() to a decimal:
SELECT CAST(SCOPE_IDENTITY() AS decimal)
Hi
I have a DAL layer from which I invoke a stored procedure to insert values into a table.
E.g.:
CREATE PROCEDURE [dbo].[DataInsert]
@DataName nvarchar(64)
AS
BEGIN
INSERT INTO
table01 (dataname)
VALUES
(@dataname)
END
Now, as the requirement has changed, per the client's request I have to add values 5 times. So what is the best practice?
Do I call this stored procedure 5 times from my DAL?
or
Pass all the values (maybe comma separated) to the stored procedure in one go and then let the stored procedure add them 5 times?
BTW, it's not always 5 times; the count varies.
You could create a user-defined table type;
CREATE TYPE [dbo].[SomeInfo] AS TABLE(
[Id] [int] NOT NULL,
[SomeValue] [int] NOT NULL )
Define your stored proc as such;
CREATE PROCEDURE [dbo].[AddSomeStuff]
@theStuff [SomeInfo] READONLY
AS
BEGIN
INSERT INTO SOMETABLE ([...columns...])
SELECT [...columns...] from @theStuff
END
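Before wiring up the .NET side, you can sanity-check the type and the procedure straight from T-SQL; a minimal sketch (assuming SOMETABLE's columns line up with the type):
DECLARE @stuff dbo.SomeInfo;
INSERT INTO @stuff (Id, SomeValue) VALUES (1, 100), (2, 200);
EXEC dbo.AddSomeStuff @theStuff = @stuff;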
Then you'll need to create a DataTable (called table below) that matches the schema and call the stored proc like so:
var cmd = new SqlCommand("AddSomeStuff", sqlConn) {CommandType = CommandType.StoredProcedure};
var param = new SqlParameter("@theStuff", SqlDbType.Structured) {Value = table};
cmd.Parameters.Add(param);
cmd.ExecuteNonQuery();
BTW, this proc works - I've just written and tested it; see the results below!
CREATE PROCEDURE [dbo].[DataInsert]
@DataName nvarchar(max) AS
BEGIN
DECLARE @pos SMALLINT, @str VARCHAR(max)
WHILE @DataName <> ''
BEGIN
SET @pos = CHARINDEX(',', @DataName)
IF @pos > 0
BEGIN
SET @str = LEFT(@DataName, @pos - 1)
SET @DataName = RIGHT(@DataName, LEN(@DataName) - @pos)
END
ELSE
BEGIN
SET @str = @DataName
SET @DataName = ''
END
INSERT INTO table01 VALUES(CONVERT(VARCHAR(100), LTRIM(@str)))
END
END
GO
Then run it:
DECLARE @return_value INT
EXEC @return_value = [dbo].[DataInsert]
@DataName = N'five, bits, of, your, data'
Rows from table01:
five
bits
of
your
data
(5 row(s) affected)
I'd either call your proc repeatedly (that would be my choice), or else you could use XML to pass in a list of values as a single parameter.
http://support.microsoft.com/kb/555266
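A minimal sketch of the XML approach (the procedure name and element names here are illustrative, not taken from the KB article):
CREATE PROCEDURE [dbo].[DataInsertXml]
@Data xml
AS
BEGIN
-- shred one <value> element per row and insert them in a single statement
INSERT INTO table01 (dataname)
SELECT v.value('.', 'nvarchar(64)')
FROM @Data.nodes('/values/value') AS t(v);
END
GO
EXEC dbo.DataInsertXml @Data = N'<values><value>five</value><value>bits</value><value>of</value><value>your</value><value>data</value></values>';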
Instead of fancy SQL code that is difficult to maintain and does not scale, I would simply go with invoking your stored procedure multiple times.
If performance or transactional behavior is an issue, you can consider sending the commands in a single batch.
You talked about 5 inserts. If the number of records to insert is much greater, you could consider a bulk insert as well.