How to ROLLBACK a transaction when testing using tSQLt

I was recently calling a procedure that contained a RAISERROR in its code. The RAISERROR was in a TRY...CATCH block, and a BEGIN TRAN was in the same TRY...CATCH block after the RAISERROR. The CATCH block is designed to ROLLBACK the transaction if the error occurred inside the transaction. The way it does this is to check @@TRANCOUNT: if it is greater than 0, I know a transaction had been started and needs to be rolled back. When testing with tSQLt, @@TRANCOUNT is always > 0, so if the CATCH block is ever hit the ROLLBACK is executed and tSQLt fails (because tSQLt itself is running in a transaction). Whenever I raise an error and the CATCH block runs, tSQLt fails the test, so I have no way to test for correct handling of the RAISERROR. How would you create a test case for code that can potentially ROLLBACK a transaction?

As you mentioned, tSQLt runs every test in its own transaction. To keep track of what is going on, it relies on that same transaction still being open when the test finishes. SQL Server does not support truly nested transactions, so your procedure rolls back everything, including the status information the framework stored for the current test. At that point tSQLt can only assume that something really bad happened, so it marks the test as errored.
SQL Server itself discourages a rollback inside a procedure by throwing an error if that procedure was called within an open transaction. For ways to deal with this situation and some additional information, check out my blog post about how to roll back in procedures.
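For illustration, here is a minimal sketch of the savepoint-based pattern such a procedure can use (the names are mine, and the doomed-transaction handling via XACT_STATE() is simplified): it rolls back only its own work when called inside an already-open transaction, which leaves an outer transaction like tSQLt's intact.
CREATE PROCEDURE dbo.SavepointExample
AS
BEGIN
    -- remember whether a transaction was already open when we were called
    DECLARE @HadOpenTran BIT = CASE WHEN @@TRANCOUNT > 0 THEN 1 ELSE 0 END;
    IF @HadOpenTran = 1
        SAVE TRANSACTION ProcStart;  -- mark a savepoint inside the caller's transaction
    ELSE
        BEGIN TRANSACTION;
    BEGIN TRY
        -- ... do the actual work here ...
        IF @HadOpenTran = 0
            COMMIT;
    END TRY
    BEGIN CATCH
        IF @HadOpenTran = 1 AND XACT_STATE() = 1
            ROLLBACK TRANSACTION ProcStart;  -- undo only this proc's work; the caller's transaction survives
        ELSE IF @@TRANCOUNT > 0
            ROLLBACK;  -- we own the transaction, or it is doomed and must be rolled back fully
        RAISERROR('SavepointExample failed', 16, 1);  -- reraise so the caller still sees the error
    END CATCH
END;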

As I'm just reading up on tSQLt, this was one of the first questions that came to mind when I learned that each test runs in a transaction. Since some of my stored procedures start transactions, and some even use nested transactions, this can become challenging. From what I've learned about nested transactions, if you apply the following rules you can keep your code clean of constant error checking and still handle errors gracefully.
Always use a TRY/CATCH block when opening a transaction.
Always commit the transaction unless an error was raised.
Always roll back the transaction when an error is raised, unless @@TRANCOUNT = 0.
Always reraise the error unless you're absolutely sure there was no transaction open at the start of the stored procedure.
Keeping those rules in mind, here is an example of a proc implementation and the test code to test it.
ALTER PROC testProc
    @IshouldFail BIT
AS
BEGIN TRY
    BEGIN TRAN
    IF @IshouldFail = 1
        RAISERROR('failure', 16, 1);
    COMMIT
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK;
    -- Do some exception handling
    -- You'll need to reraise the error to prevent exceptions about an inconsistent
    -- @@TRANCOUNT before / after execution of the stored proc.
    RAISERROR('failure', 16, 1);
END CATCH
GO
--EXEC tSQLt.NewTestClass 'tSQLt.experiments';
--GO
ALTER PROCEDURE [tSQLt.experiments].[test testProc nested transaction fails]
AS
BEGIN
    --Assemble
    DECLARE @CatchWasHit CHAR(1) = 'N';
    --Act
    BEGIN TRY
        EXEC dbo.testProc 1
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT = 0
            BEGIN TRAN --reopen a transaction
        SET @CatchWasHit = 'Y';
    END CATCH
    --Assert
    EXEC tSQLt.AssertEqualsString @Expected = N'Y', @Actual = @CatchWasHit, @Message = N'Exception was expected'
END;
GO
ALTER PROCEDURE [tSQLt.experiments].[test testProc nested transaction succeeds]
AS
BEGIN
    --Act
    EXEC dbo.testProc 0
END;
GO
EXEC tSQLt.Run @TestName = N'tSQLt.experiments'

It is better to open the BEGIN TRY block after BEGIN TRANSACTION. I did this when I had a similar problem. This is more logical, because in the CATCH block I checked IF @@TRANCOUNT > 0 ROLLBACK, and that condition never fires for an error raised before BEGIN TRANSACTION. And in this case you can test your RAISERROR functionality.
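For illustration, a minimal sketch of that ordering (the procedure name, parameter, and error texts are placeholders, not from the answer above): only the work done inside the transaction is guarded by TRY, so an error raised before BEGIN TRANSACTION never reaches the rollback logic.
CREATE PROCEDURE dbo.OrderedExample
    @Input INT
AS
BEGIN
    IF @Input IS NULL
    BEGIN
        RAISERROR('validation failed; no transaction open yet, nothing to roll back', 16, 1);
        RETURN;  -- RAISERROR alone does not stop execution at this severity
    END;
    BEGIN TRANSACTION;
    BEGIN TRY
        -- ... transactional work here ...
        COMMIT;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK;
        RAISERROR('failure inside the transaction', 16, 1);
    END CATCH
END;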

+1 to both the above answers.
However, if you don't want to use TRY...CATCH, try the following code. The part between the ----- lines represents the test; what is above and below it represents what tSQLt does before and after it calls your test. As you can see, the transaction started by tSQLt before calling the test is still in place afterwards, as the framework expects, whether or not the error occurs: @@TRANCOUNT is still 1.
You can comment out the RAISERROR to try it with and without the exception being raised.
SET NOCOUNT ON
DECLARE @Err INT -- @@ERROR is reset by every statement, so capture it right after RAISERROR
BEGIN TRANSACTION -- DONE BY tSQLt
PRINT 'Inside tSQLt before calling the test: @@TRANCOUNT = ' + CONVERT (VARCHAR, @@TRANCOUNT)
---------------------------------
PRINT ' Start of test ---------------------------'
SAVE TRANSACTION Savepoint
PRINT ' Inside the test: @@TRANCOUNT = ' + CONVERT (VARCHAR, @@TRANCOUNT)
BEGIN TRANSACTION -- PART OF THE TEST
PRINT ' Transaction in the test: @@TRANCOUNT = ' + CONVERT (VARCHAR, @@TRANCOUNT)
RAISERROR ('A very nice error', 16, 0)
SET @Err = @@ERROR
PRINT ' @@ERROR = ' + CONVERT(VARCHAR, @Err)
-- PART OF THE TEST - CLEAN-UP
IF @Err <> 0 ROLLBACK TRANSACTION Savepoint -- Not all the way, just to the save point
ELSE COMMIT TRANSACTION
PRINT ' About to finish the test: @@TRANCOUNT = ' + CONVERT (VARCHAR, @@TRANCOUNT)
PRINT ' End of test ---------------------------'
---------------------------------
ROLLBACK TRANSACTION -- DONE BY tSQLt
PRINT 'Inside tSQLt after finishing the test: @@TRANCOUNT = ' + CONVERT (VARCHAR, @@TRANCOUNT)
With acknowledgement to information and code at http://www.blackwasp.co.uk/SQLSavepoints.aspx

How to handle plsql errors and exceptions

My code structure is as below.
PKG1
  Procedure TEST1()
  BEGIN
    PKG2.TEST2;
    PKG_X.TEST_X;
    EXCEPTION HANDLING
  END;
PKG2
  Procedure TEST2
  BEGIN
    IF (Condition is met) THEN
      -- Should raise an error message
    END IF;
  END;
A procedure (TEST1) in PKG1 calls a procedure (TEST2) in PKG2. There is an exception handling part in PKG1 but not in PKG2.
In TEST2, I want to raise an error and stop the flow when a certain condition is met. When I debug the code, I noticed that the error message does not stop the flow: the 'PKG_X.TEST_X' line is also executed. The error message can be seen in the exception handling part and in the debug output of TEST1. What would be a possible way to stop the flow when calling 'PKG2.TEST2'? Assume that the syntax and the functionality are correct.
You didn't post the important part: how exactly do you raise the error? If you want to stop execution, then:
Procedure TEST2
BEGIN
  IF (Condition is met) THEN
    -- Should raise an error message
    raise_application_error(-20000, 'Stop execution'); --> this
  END IF;
END;

Oracle PLSQL Print Bulk Batch Output After Each Batch Completion [duplicate]

I have an SQL script that is called from within a shell script and takes a long time to run. It currently contains dbms_output.put_line statements at various points. The output from these print statements appears in the log files, but only once the script has completed.
Is there any way to ensure that the output appears in the log file as the script is running?
Not really. The way DBMS_OUTPUT works is this: your PL/SQL block executes on the database server with no interaction with the client. So when you call PUT_LINE, it just puts that text into a buffer in memory on the server. When your PL/SQL block completes, control is returned to the client (I'm assuming SQL*Plus in this case); at that point the client gets the text out of the buffer by calling GET_LINE and displays it.
So the only way you can make the output appear in the log file more frequently is to break a large PL/SQL block up into multiple smaller blocks, so that control is returned to the client more often. This may not be practical depending on what your code is doing.
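A minimal sketch of that idea (assuming SQL*Plus with SERVEROUTPUT on; the messages are placeholders): splitting one long block into two means the buffer is fetched and printed twice, once after each block, instead of only at the very end.
SET SERVEROUTPUT ON
BEGIN
  -- ... first half of the work ...
  DBMS_OUTPUT.PUT_LINE('phase 1 done');  -- appears when this block returns control
END;
/
BEGIN
  -- ... second half of the work ...
  DBMS_OUTPUT.PUT_LINE('phase 2 done');  -- appears when this block returns control
END;
/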
Other alternatives are to use UTL_FILE to write to a text file, which can be flushed whenever you like, or to use an autonomous-transaction procedure to insert debug statements into a database table and commit after each one.
If it is possible for you, you should replace the calls to dbms_output.put_line with your own function.
Here is the code for this function, WRITE_LOG, if you want the ability to choose between two logging solutions:
write logs to a table in an autonomous transaction
CREATE OR REPLACE PROCEDURE to_dbg_table(p_log varchar2)
-- table mode:
-- requires
--   CREATE TABLE dbg (u varchar2(200)   -- username
--                   , d timestamp       -- date
--                   , l varchar2(4000)  -- log
--                   );
AS
  pragma autonomous_transaction;
BEGIN
  insert into dbg(u, d, l) values (user, sysdate, p_log);
  commit;
END to_dbg_table;
/
or write directly to a file on the DB server that hosts your database. This uses the Oracle directory TMP_DIR:
CREATE OR REPLACE PROCEDURE to_dbg_file(p_fname varchar2, p_log varchar2)
-- file mode:
-- requires
--   CREATE OR REPLACE DIRECTORY TMP_DIR as '/directory/where/oracle/can/write/on/DB_server/';
AS
  l_file utl_file.file_type;
BEGIN
  l_file := utl_file.fopen('TMP_DIR', p_fname, 'A');
  utl_file.put_line(l_file, p_log);
  utl_file.fflush(l_file);
  utl_file.fclose(l_file);
END to_dbg_file;
/
WRITE_LOG
Then comes the WRITE_LOG procedure, which can switch between the two uses, or be deactivated to avoid performance loss (g_DEBUG := FALSE).
CREATE OR REPLACE PROCEDURE write_log(p_log varchar2) AS
  -- g_DEBUG can be set as a package variable defaulted to FALSE,
  -- then changed when debugging is required
  g_DEBUG boolean := true;
  -- the log file name can be set with several methods...
  g_logfname varchar2(32767) := 'my_output.log';
  -- choose between 2 logging solutions:
  -- file mode:
  g_TYPE varchar2(7) := 'file';
  -- table mode:
  --g_TYPE varchar2(7) := 'table';
  -----------------------------------------------------------------
BEGIN
  if g_DEBUG then
    if g_TYPE = 'file' then
      to_dbg_file(g_logfname, p_log);
    elsif g_TYPE = 'table' then
      to_dbg_table(p_log);
    end if;
  end if;
END write_log;
/
And here is how to test the above:
1) Launch this (file mode) from SQL*Plus:
BEGIN
  write_log('this is a test');
  for i in 1..100 loop
    DBMS_LOCK.sleep(1);
    write_log('iter=' || i);
  end loop;
  write_log('test complete');
END;
/
2) On the database server, open a shell and run:
tail -f -n500 /directory/where/oracle/can/write/on/DB_server/my_output.log
Two alternatives:
You can insert your logging details into a logging table using an autonomous transaction, and query that logging table from another SQL*Plus/Toad/SQL Developer session. The autonomous transaction is what makes it possible to commit your logging without interfering with the transaction handling in your main SQL script.
Another alternative is to use a pipelined function that returns your logging information; see here for an example: http://berxblog.blogspot.com/2009/01/pipelined-function-vs-dbmsoutput.html With a pipelined function you don't need another SQL*Plus/Toad/SQL Developer session.
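For illustration, a minimal sketch of such a pipelined function (the type and function names are mine, not from the linked post); note that how promptly the client shows each row still depends on its fetch/array size.
CREATE OR REPLACE TYPE t_log_lines AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE FUNCTION long_job RETURN t_log_lines PIPELINED AS
BEGIN
  FOR i IN 1 .. 5 LOOP
    -- ... do one slice of the real work here ...
    PIPE ROW('finished step ' || i);  -- each row is handed to the client as it is fetched
  END LOOP;
  RETURN;
END;
/
-- Then fetch the progress messages with:
-- SELECT * FROM TABLE(long_job);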
The DBMS_OUTPUT buffer is read when the procedure DBMS_OUTPUT.GET_LINE is called. If your client application is SQL*Plus, that means it only gets flushed once the procedure finishes.
You can apply the method described in this SO answer to write the DBMS_OUTPUT buffer to a file.
Set the session metadata MODULE and/or ACTION using the dbms_application_info package.
Monitor with OEM, for example:
Module: ArchiveData
Action: xxx of xxxx
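A minimal sketch of that approach (the module and action strings are placeholders): the values become visible immediately in V$SESSION or OEM, with no client-side buffering involved.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'ArchiveData', action_name => 'starting');
  FOR i IN 1 .. 100 LOOP
    DBMS_APPLICATION_INFO.SET_ACTION(action_name => i || ' of 100');
    -- ... work for one batch ...
  END LOOP;
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => NULL, action_name => NULL);  -- clear when done
END;
/
-- Watch progress from another session:
-- SELECT module, action FROM v$session WHERE module = 'ArchiveData';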
If you have access to a system shell from your PL/SQL environment, you can call netcat:
BEGIN RUN_SHELL('echo "'||p_msg||'" | nc '||p_host||' '||p_port||' -w 5'); END;
Here p_msg is the log message, and p_host and p_port identify a host running a Python script that reads data from a socket on that port.
I used this design when I wrote aplogr for real-time shell and PL/SQL log monitoring.

Why doesn't my SQLite database add data until I close my application?

My Delphi application is using FireDac and an SQLite database. I've noticed that updates are being saved in a journal file and the database file is not actually updated until I close my application.
The application is making lots of 'batch updates' to the database. Each individual update is inside a StartTransaction ... Commit pair on the FireDAC connection. Despite this, it seems all updates are held in the journal file until the application ends.
How can I force SQLite to update the database after each batch of updates rather than when my application finishes?
I've tried changing the SQLite db to WAL but the same thing happens.
Despite using 'StartTransaction' and 'Commit' the data stays in the journal until the application ends.
try
  Query.Connection := FDConnection1;
  FDConnection1.Open;
  FDConnection1.StartTransaction;
  Query.SQL.Text := 'select 1 from t_Manufacturers where m_Name = ' + QuotedStr(ManString);
  Query.Open;
  if Query.RecordCount = 0 then begin
    { not found, so add }
    Query.SQL.Text := 'insert into t_Manufacturers (m_Name, m_ManUID) values (:Name, null)';
    Query.ParamByName('Name').AsString := ManString;
    Query.ExecSQL;
    { save m_ManUID for logging }
    Query.SQL.Text := 'select m_ManUID from t_Manufacturers where m_Name = ' + QuotedStr(ManString);
    Query.Open;
  end;
  Result := Query.FieldByName('m_ManUID').AsInteger;
  FDConnection1.Commit;
except
  on E : EDatabaseError do begin
    MessageDlg('Database error adding manufacturer: ' + E.Message, mtError, [mbOk], 0);
    FDConnection1.Rollback;
  end;
end;
No error messages or issues. Provided the application finishes OK, the database is updated as expected, so I'm happy that my programming and SQL are doing exactly what I need in that respect.
It is very dubious that "it seems all updates are held in the journal file until the application ends". SQLite3 is very serious about writing data - more serious than most DB engines I know. Just check https://www.sqlite.org/atomiccommit.html
I suspect you are somewhat confused by the presence of the journal file. After a transaction, the journal file is still kept there on disk, ready for any new write operation. But the data is actually written in the main file.
Just write some data, then kill the application before closing it (using the task manager). Then re-open the file (re-start the app): I am almost sure you will see the data properly stored.
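One quick way to check, assuming the sqlite3 command-line tool is available, using the database path and table from the code further down:
sqlite3 C:\Testing\test.db "SELECT COUNT(*) FROM Table1;"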
FireDAC is "cheating" with the default journalization mode, for best performance. It uses some default values which may be confusing. As stated by FireDAC documentation: Set LockingMode to Normal to enable shared DB access. Set Synchronous to Normal or Full to make committed data visible to others.
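For illustration, a minimal sketch of applying those two settings before opening the connection (LockingMode and Synchronous are documented FireDAC SQLite driver parameters; FDConnection1 is the connection from the question):
// set these before FDConnection1.Open
FDConnection1.Params.Add('LockingMode=Normal');  // enable shared DB access
FDConnection1.Params.Add('Synchronous=Full');    // make committed data visible to others
FDConnection1.Open;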
You are using FDConnection1.StartTransaction. In that state the transaction is still held in memory (the cache) and has not been ended. Therefore, you need to end your transaction with a commit command, such as FDConnection1.Commit.
OK, I went back to basics and wrote a test application with all the database activity confined to a single procedure fired by a button click. In that procedure I added multiple rows to a table using a for loop.
The for loop is surrounded by StartTransaction and Commit calls. Running through the code in the debugger, the journal file is created on the first call to ExecSQL. However, the file remains there after the loop has completed and Commit has been called.
The database is only updated and the journal file deleted when Close is called at the end of the procedure.
procedure TForm1.Button1Click(Sender: TObject);
var
  Query : TFDQuery;
  Index : Integer;
begin
  FDConnection1.DriverName := 'SQLite';
  FDConnection1.Params.Values['Database'] := 'C:\Testing\test.db';
  FDConnection1.Open;
  Query := TFDQuery.Create(nil);
  Query.Connection := FDConnection1;
  try
    FDConnection1.StartTransaction;
    for Index := 1 to 10 do begin
      Query.SQL.Text := 'insert into Table1 (Name, IDNum) values (:Name, :IDNum)';
      Query.ParamByName('Name').AsString := 'Test_Manufacturer_' + IntToStr(Index);
      Query.ParamByName('IDNum').AsInteger := Index;
      Query.ExecSQL;
    end;
    FDConnection1.Commit;
  except
    on E : EDatabaseError do begin
      MessageDlg('Database error adding manufacturer: ' + E.Message, mtError, [mbOk], 0);
      FDConnection1.Rollback;
    end;
  end;
  Query.Destroy;
  FDConnection1.Close;
end;
I suspect that I have another connection to the database open within my application, and that might be delaying the update until the application closes. However, I still don't understand why the call to Commit isn't updating the database at the end of the transaction block.

Catching errors from procedures outside of processing block

I have the following code (simplified for illustration purposes). I'm creating records in different DB tables in proc1, proc2, and proc3. What I'm trying to achieve is: if I encounter an error while looping through the temp-tables at any point (even after I have already created a bunch of DB records), I want to roll everything back so no records are created. It catches errors in proc1, proc2, and proc3 with no issues, but I cannot figure out how to pass those errors to the main processing block so that it understands them and rolls everything back. In other words, the message ('error # main trans block') never pops up, so the already-created records stay in the DB. As a matter of fact, nothing gets rolled back.
DO TRANSACTION ON ERROR UNDO, THROW:
    FOR EACH tt1:
        RUN proc1.
        FOR EACH tt2 WHERE tt2.field1 EQ tt1.field1:
            RUN proc2.
            FOR EACH tt3 WHERE tt3.field2 EQ tt2.field2:
                RUN proc3.
            END.
        END.
    END.
    CATCH e AS Progress.Lang.AppError:
        MESSAGE 'error # main trans block'
            VIEW-AS ALERT-BOX INFO BUTTONS OK.
    END CATCH.
END.
PROCEDURE proc1.
    DO TRANSACTION ON ERROR UNDO, THROW:
        /* creating some DB records */
        CATCH e AS Progress.Lang.Error:
            RETURN ERROR 'Proc1 ' + e:GetMessage(1).
        END CATCH.
    END.
END PROCEDURE.
PROCEDURE proc2.
    DO TRANSACTION ON ERROR UNDO, THROW:
        /* creating some DB records */
        CATCH e AS Progress.Lang.Error:
            RETURN ERROR 'Proc2 ' + e:GetMessage(1).
        END CATCH.
    END.
END PROCEDURE.
PROCEDURE proc3.
    DO TRANSACTION ON ERROR UNDO, THROW:
        /* creating some DB records */
        CATCH e AS Progress.Lang.Error:
            RETURN ERROR 'Proc3 ' + e:GetMessage(1).
        END CATCH.
    END.
END PROCEDURE.
TIA
There are a couple of potential issues.
First, your temp-tables tt1 and tt2 need to be defined without the NO-UNDO flag.
Second, the FOR EACH blocks are using their default error handling behavior, which is ON ERROR UNDO, NEXT. So errors raised within the FOR EACH blocks will cause the current iteration to be undone, not the whole transaction.
I recommend adding
BLOCK-LEVEL ON ERROR UNDO, THROW.
to the top of the program. Or at least
ROUTINE-LEVEL ON ERROR UNDO, THROW.
in combination with an ON ERROR UNDO, THROW option on all the FOR EACH blocks (see the sketch below).
The BLOCK-LEVEL error handling option is available since OpenEdge 11.3 (or so).
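For illustration, a minimal sketch of that FOR EACH option, using tt1 and proc1 from the question: with ON ERROR UNDO, THROW on the block, an error raised in any iteration is undone and rethrown instead of quietly moving on to the next iteration.
FOR EACH tt1 ON ERROR UNDO, THROW:
    RUN proc1.
END.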

simple way to output messages from a stored procedure in Teradata

Using SQL Assistant:
REPLACE PROCEDURE test_proc()
BEGIN
  DECLARE l_msg varchar(128);
  set l_msg = 'test';
  -- PRINT is not supported
  --print l_msg;
  -- DEBUG is recognized as a special token, but doesn't work
  --debug l_msg;
  -- this does nothing
  --SIGNAL SQLSTATE '02000';
END;
Is there a simple way to output text while a procedure executes, aside from writing to a log table?
TD 14.xx
EDIT:
I'm not trying to handle exceptions, but rather to send text messages to the client as the procedure progresses, regardless of the state/condition, similar to PRINT (Sybase), DBMS_OUTPUT (Oracle), or DEBUG (SQL Server).
