Oracle PLSQL Print Bulk Batch Output After Each Batch Completion [duplicate] - plsql

I have an SQL script that is called from within a shell script and takes a long time to run. It currently contains dbms_output.put_line statements at various points. The output from these statements appears in the log file, but only once the script has completed.
Is there any way to ensure that the output appears in the log file as the script is running?

Not really. The way DBMS_OUTPUT works is this: your PL/SQL block executes on the database server with no interaction with the client. So when you call PUT_LINE, it is just putting that text into a buffer in memory on the server. When your PL/SQL block completes, control is returned to the client (I'm assuming SQL*Plus in this case); at that point the client gets the text out of the buffer by calling GET_LINE, and displays it.
So the only way you can make the output appear in the log file more frequently is to break up a large PL/SQL block into multiple smaller blocks, so control is returned to the client more often. This may not be practical depending on what your code is doing.
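For example, a minimal sketch of that approach in the calling .sql script, assuming the long-running work can be split into stages (SQL*Plus drains the DBMS_OUTPUT buffer each time a block ends):
SET SERVEROUTPUT ON
BEGIN
  -- stage 1 of the work
  dbms_output.put_line('stage 1 done');
END;
/
BEGIN
  -- stage 2 of the work
  dbms_output.put_line('stage 2 done');
END;
/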
Other alternatives are to use UTL_FILE to write to a text file, which can be flushed whenever you like, or use an autonomous-transaction procedure to insert debug statements into a database table and commit after each one.

If it is possible for you, replace the calls to dbms_output.put_line with your own procedure.
Here is the code for that procedure, WRITE_LOG, if you want the ability to choose between 2 logging solutions:
write logs to a table in an autonomous transaction
CREATE OR REPLACE PROCEDURE to_dbg_table(p_log varchar2)
-- table mode:
-- requires
--   CREATE TABLE dbg (u varchar2(200)   -- username
--                   , d timestamp       -- date
--                   , l varchar2(4000)  -- log
--                   );
AS
  pragma autonomous_transaction;
BEGIN
  insert into dbg(u, d, l) values (user, sysdate, p_log);
  commit;
END to_dbg_table;
/
or write directly to a file on the DB server that hosts your database
This uses the Oracle directory TMP_DIR
CREATE OR REPLACE PROCEDURE to_dbg_file(p_fname varchar2, p_log varchar2)
-- file mode:
-- requires
--   CREATE OR REPLACE DIRECTORY TMP_DIR as '/directory/where/oracle/can/write/on/DB_server/';
AS
  l_file utl_file.file_type;
BEGIN
  l_file := utl_file.fopen('TMP_DIR', p_fname, 'A');
  utl_file.put_line(l_file, p_log);
  utl_file.fflush(l_file);
  utl_file.fclose(l_file);
END to_dbg_file;
/
WRITE_LOG
Then the WRITE_LOG procedure, which can switch between the 2 modes, or be deactivated to avoid a performance loss (g_DEBUG := FALSE).
CREATE OR REPLACE PROCEDURE write_log(p_log varchar2) AS
  -- g_DEBUG can be set as a package variable defaulted to FALSE
  -- then change it when debugging is required
  g_DEBUG boolean := true;
  -- the log file name can be set with several methods...
  g_logfname varchar2(32767) := 'my_output.log';
  -- choose between 2 logging solutions:
  -- file mode:
  g_TYPE varchar2(7) := 'file';
  -- table mode:
  --g_TYPE varchar2(7) := 'table';
  -----------------------------------------------------------------
BEGIN
  if g_DEBUG then
    if g_TYPE = 'file' then
      to_dbg_file(g_logfname, p_log);
    elsif g_TYPE = 'table' then
      to_dbg_table(p_log);
    end if;
  end if;
END write_log;
/
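If you want g_DEBUG to be a package variable as the comment above suggests, a minimal sketch could look like this (package and variable names are only illustrative); write_log would then read dbg_cfg.g_DEBUG and friends instead of declaring its own local constants:
CREATE OR REPLACE PACKAGE dbg_cfg AS
  g_DEBUG    boolean       := FALSE;          -- flip to TRUE when debugging
  g_TYPE     varchar2(7)   := 'file';         -- or 'table'
  g_logfname varchar2(128) := 'my_output.log';
END dbg_cfg;
/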
And here is how to test the above:
1) Launch this (file mode) from your SQL*Plus:
BEGIN
  write_log('this is a test');
  for i in 1..100 loop
    DBMS_LOCK.sleep(1);
    write_log('iter=' || i);
  end loop;
  write_log('test complete');
END;
/
2) On the database server, open a shell and run:
tail -f -n500 /directory/where/oracle/can/write/on/DB_server/my_output.log
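If you use table mode instead, the equivalent is to poll the dbg table from a second session, for example:
select u, d, l
from dbg
order by d desc;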

Two alternatives:
You can insert your logging details into a logging table by using an autonomous transaction. You can then query this logging table from another SQL*Plus/Toad/SQL Developer session. You have to use an autonomous transaction so that your logging can be committed without interfering with the transaction handling in your main SQL script.
Another alternative is to use a pipelined function that returns your logging information. See here for an example: http://berxblog.blogspot.com/2009/01/pipelined-function-vs-dbmsoutput.html When you use a pipelined function you don't need another SQL*Plus/Toad/SQL Developer session.
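A minimal sketch of the pipelined-function idea (type, function, and message names are mine, not from the linked post); the client sees rows as it fetches them, so with a small fetch size they can appear before the function finishes:
CREATE OR REPLACE TYPE t_log_tab AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE FUNCTION long_job RETURN t_log_tab PIPELINED IS
BEGIN
  PIPE ROW('starting');
  FOR i IN 1 .. 10 LOOP
    -- do one unit of the real work here
    PIPE ROW('iter=' || i);
  END LOOP;
  PIPE ROW('done');
  RETURN;
END;
/
-- in SQL*Plus: SET ARRAYSIZE 1
SELECT * FROM TABLE(long_job());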

The DBMS_OUTPUT buffer is read when the procedure DBMS_OUTPUT.GET_LINE is called. If your client application is SQL*Plus, that means the buffer only gets flushed once the procedure finishes.
You can apply the method described in this SO to write the DBMS_OUTPUT buffer to a file.
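I don't know exactly which method that link describes, but a common shape of the idea is to drain the buffer with DBMS_OUTPUT.GET_LINE yourself and write each line with UTL_FILE (the directory and file name below are assumptions); note that this still only runs once control returns to the caller:
DECLARE
  l_line   varchar2(32767);
  l_status integer;
  l_file   utl_file.file_type;
BEGIN
  l_file := utl_file.fopen('TMP_DIR', 'dbms_output_dump.log', 'A');
  LOOP
    dbms_output.get_line(l_line, l_status);
    EXIT WHEN l_status <> 0;
    utl_file.put_line(l_file, l_line);
  END LOOP;
  utl_file.fclose(l_file);
END;
/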

Set the session metadata MODULE and/or ACTION using the dbms_application_info package.
Monitor with OEM, for example:
Module: ArchiveData
Action: xxx of xxxx
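A rough sketch of what that can look like inside the long-running job (the module/action strings are just examples):
BEGIN
  dbms_application_info.set_module(module_name => 'ArchiveData', action_name => 'starting');
  FOR i IN 1 .. 10000 LOOP
    -- ... one batch of work ...
    IF MOD(i, 1000) = 0 THEN
      dbms_application_info.set_action(i || ' of 10000');
    END IF;
  END LOOP;
  dbms_application_info.set_module(null, null);
END;
/
-- progress is then visible from another session, e.g.:
-- select sid, module, action from v$session where module = 'ArchiveData';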

If you have access to the system shell from your PL/SQL environment, you can call netcat:
BEGIN RUN_SHELL('echo "'||p_msg||'" | nc '||p_host||' '||p_port||' -w 5'); END;
p_msg is the log message.
p_host is a host running a Python script that reads data from a socket on port p_port.
I used this design when I wrote aplogr for real-time shell and PL/SQL log monitoring.

Related

Why doesn't my SQLite database add data until I close my application?

My Delphi application uses FireDAC and an SQLite database. I've noticed that updates are being saved in a journal file and the database file is not actually updated until I close my application.
The application is making lots of 'batch updates' to the database. Each individual update is inside a TFDQuery.StartTransaction ... TFDQuery.Commit pair. Despite this, it seems all updates are held in the journal file until the application ends.
How can I force SQLite to update the database after each batch of updates rather than when my application finishes?
I've tried changing the SQLite db to WAL but the same thing happens.
Despite using 'StartTransaction' and 'Commit' the data stays in the journal until the application ends.
try
  Query.Connection := FDConnection1;
  FDConnection1.Open;
  FDConnection1.StartTransaction;
  Query.SQL.Text := 'select 1 from t_Manufacturers where m_Name = ' + QuotedStr(ManString);
  Query.Open;
  if Query.RecordCount = 0 then begin
    { not found, so add }
    Query.SQL.Text := 'insert into t_Manufacturers (m_Name, m_ManUID) values (:Name, null)';
    Query.ParamByName('Name').AsString := ManString;
    Query.ExecSQL;
    { save m_ManUID for logging }
    Query.SQL.Text := 'select m_ManUID from t_Manufacturers where m_Name = ' + QuotedStr(ManString);
    Query.Open;
  end;
  Result := Query.FieldByName('m_ManUID').AsInteger;
  FDConnection1.Commit;
except
  on E : EDatabaseError do begin
    MessageDlg('Database error adding manufacturer: ' + E.Message, mtError, [mbOk], 0);
    FDConnection1.Rollback;
  end;
end;
No error messages or issues. Provided the application finishes OK, the database is updated as expected, so I'm happy that my programming and SQL are doing exactly what I need in that respect.
It is very dubious that "it seems all updates are held in the journal file until the application ends". SQLite3 is very serious about writing data - more serious than most DB engines I know. Just check https://www.sqlite.org/atomiccommit.html
I suspect you are somewhat confused by the presence of the journal file. After a transaction, the journal file is still kept there on disk, ready for any new write operation. But the data is actually written in the main file.
Just write some data, then kill the application before closing it (using the task manager). Then re-open the file (re-start the app): I am almost sure you will see the data properly stored.
FireDAC is "cheating" with the default journaling mode, for best performance. It uses some default values which may be confusing. As stated by the FireDAC documentation: set LockingMode to Normal to enable shared DB access, and set Synchronous to Normal or Full to make committed data visible to others.
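For reference, those FireDAC parameters correspond (as far as I understand the mapping) to SQLite pragmas you could also inspect or set directly:
PRAGMA locking_mode = NORMAL;  -- shared access instead of exclusive locking
PRAGMA synchronous = FULL;     -- committed data flushed to the main database file
PRAGMA journal_mode;           -- shows the current journal mode (e.g. delete or wal)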
You are using FDConnection1.StartTransaction. In this state the transaction is still held in memory (cache) and has not ended. Therefore, you need to end your transaction with a commit command, such as FDConnection1.Commit;
OK, I went back to basics and wrote a test application with all the database activity confined to a single procedure fired by a button click. In that procedure I added multiple rows to a table using a for loop.
The for loop is surrounded by StartTransaction and Commit calls. Running through the code in the debugger, the journal file is created on the first call to ExecSQL. However, the file remains there after the loop has completed and Commit has been called.
The database is only updated and the journal file deleted when Close is called at the end of the procedure.
procedure TForm1.Button1Click(Sender: TObject);
var
  Query : TFDQuery;
  Index : Integer;
begin
  FDConnection1.DriverName := 'SQLite';
  FDConnection1.Params.Values['Database'] := 'C:\Testing\test.db';
  FDConnection1.Open;
  Query := TFDQuery.Create(nil);
  Query.Connection := FDConnection1;
  try
    FDConnection1.StartTransaction;
    for Index := 1 to 10 do begin
      Query.SQL.Text := 'insert into Table1 (Name, IDNum) values (:Name, :IDNum)';
      Query.ParamByName('Name').AsString := 'Test_Manufacturer_' + IntToStr(Index);
      Query.ParamByName('IDNum').AsInteger := Index;
      Query.ExecSQL;
    end;
    FDConnection1.Commit;
  except
    on E : EDatabaseError do begin
      MessageDlg('Database error adding manufacturer: ' + E.Message, mtError, [mbOk], 0);
      FDConnection1.Rollback;
    end;
  end;
  Query.Destroy;
  FDConnection1.Close;
end;
I suspect that I have another connection to the database open within my application, and that might be delaying the update until the application closes. However, I still don't understand why the call to Commit isn't updating the database at the end of the transaction block.

simple way to output messages from a stored procedure in Teradata

Using SQL Assistant:
REPLACE PROCEDURE test_proc()
BEGIN
  DECLARE l_msg varchar(128);
  set l_msg = 'test';
  -- PRINT is not supported
  --print l_msg;
  -- debug is recognized as a special token, but doesn't work
  --debug l_msg;
  -- this does nothing
  --SIGNAL SQLSTATE '02000';
END;
Is there a simple way to output text during a procedure execution, aside from writing to a log table?
TD 14.xx
EDIT:
Not trying to handle exceptions, but rather to send text messages to the client as the procedure progresses, regardless of the state/condition, similar to PRINT (Sybase), DBMS_OUTPUT (Oracle), DEBUG (SQL Server).

Dumping a complete Oracle 11g database schema to a set of SQL creation statements from a script

I need to dump the complete schema (ddl only, no data) of an Oracle database to a text file or a set of text files in order to be able to systematically track revisions to the database schema using standard VCS tools like git.
Using my favorite RDBMS, PostgreSQL, this is an almost trivially easy task, using pg_dump --schema-only.
However, dumping an Oracle DB schema to an SQL file has proved to be a maddeningly difficult task with Oracle 11g. I'm interested to know about approaches that others have figured out.
Data pump export (no ☹)
Unfortunately, I cannot use the data pump export tools introduced in Oracle 10g, because these require DBA-level access and I cannot easily obtain this level of access for most of my clients' databases.
SQL developer
I've used Oracle's SQL developer GUI and it mostly does what I want with the "Separate files" setting:
Emits a syntactically correct SQL file to create each database object
Emits a summary SQL file which includes each of the individual-object files in the correct order
However there are several major issues with it:
It's a GUI only; no way to script this behavior from the command line as far as I can tell
Running as an unprivileged user, it can only emit the DDL for that user's owned objects (even when that user has been granted privileges to view other users' objects ... ##$(*&!!)
It's extremely slow, taking around 20 minutes to output about 1 MB of DDL
exp and imp
Oracle's older exp command-line tool does not require DBA access. It can export the complete DDL for a database (with DBA access), or just the DDL for an individual user's owned objects.
Unfortunately, it is even slower than SQL Developer (it takes more than an hour for the same database, even with a few performance tweaks).
However, the worst thing about exp is that it does not emit SQL, but rather a proprietary binary-format dump file (e.g. expdat.dmp).
The corresponding imp tool can "translate" these dump files into severely mangled SQL which does not contain syntactically correct end-of-statement delimiters.
Example of the horrible mangled SQL that imp show=y emits; notice the crazy line wrapping and lack of semicolons at the end of some but not all statements.
Export file created by EXPORT:V11.02.00 via direct path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing FPSADMIN's objects into FPSADMIN
"BEGIN "
"sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','"
"CURRENT_SCHEMA'), export_db_name=>'*******', inst_scn=>'371301226');"
"COMMIT; END;"
"CREATE TYPE "CLOBSTRINGAGGTYPE" TIMESTAMP '2015-06-01:13:37:41' OID '367CDD"
"7E59D14CF496B27D1B19ABF051' "
"AS OBJECT"
"("
" theString CLOB,"
" STATIC FUNCTION"
" ODCIAggregateInitialize(sctx IN OUT CLOBSTRINGAGGTYPE )"
" RETURN NUMBER,"
" MEMBER FUNCTION"
" ODCIAggregateIterate(self IN OUT CLOBSTRINGAGGTYPE, VALUE IN VARC"
"HAR2 )"
" RETURN NUMBER,"
" MEMBER FUNCTION"
" ODCIAggregateTerminate(self IN CLOBSTRINGAGGTYPE, returnValue OUT"
" CLOB, flags IN NUMBER)"
" RETURN NUMBER,"
" MEMBER FUNCTION"
" ODCIAggregateMerge(self IN OUT CLOBSTRINGAGGTYPE, ctx2 IN CLOBSTR"
"INGAGGTYPE)"
" RETURN NUMBER"
");"
"GRANT EXECUTE ON "CLOBSTRINGAGGTYPE" TO PUBLIC"
"GRANT DEBUG ON "CLOBSTRINGAGGTYPE" TO PUBLIC"
"CREATE OR REPLACE TYPE BODY CLOBSTRINGAGGTYPE"
I have written a Python script to demangle the output of imp show=y, but it cannot reliably demangle the output because it doesn't understand the complete Oracle SQL syntax.
dbms_metadata
Oracle has a dbms_metadata package which supports introspection of the database contents.
It's relatively easy to write an SQL statement which will retrieve the DDL for some but not all database objects. For example, the following statement will retrieve CREATE TABLE statements, but won't retrieve the corresponding privilege GRANTs on those tables.
select sub.*, dbms_metadata.get_ddl(sub.object_type, sub.object_name, sub.owner) sql
from (
    select
        created,
        owner,
        object_name,
        decode(object_type,
               'PACKAGE',      'PACKAGE_SPEC',
               'PACKAGE BODY', 'PACKAGE_BODY',
               'TYPE BODY',    'TYPE_BODY',
               object_type
        ) object_type
    from all_objects
    where owner = :un
        --These objects are included with other object types.
        and object_type not in ('INDEX PARTITION','LOB','LOB PARTITION','TABLE PARTITION','DATABASE LINK')
        --Ignore system-generated types that support collection processing.
        and not (object_type like 'TYPE' and object_name like 'SYS_PLSQL_%')
) sub
Attempting to fetch the complete set of objects quickly leads down a very complex rabbit hole. (See "Reverse engineering object DDL and finding object dependencies" for more gory details.)
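For example, object grants have to be fetched separately with GET_DEPENDENT_DDL, roughly like this sketch (and it raises an error for objects that have no grants at all, which is part of that rabbit hole):
select dbms_metadata.get_dependent_ddl('OBJECT_GRANT', table_name, owner) grant_sql
from all_tables
where owner = :un;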
What else?
Any advice? I'm at a total loss for a sane and maintainable way to perform this seemingly indispensable database programming task.
Combine DBMS_DATAPUMP, Oracle Copy (OCP), and a simple shell script to create a one-click solution.
Sample Schema to Export
--Create test user.
drop user test_user cascade;
create user test_user identified by test_user;
create table test_user.table1(a number);
create view test_user.view1 as select 1 a from dual;
create or replace procedure test_user.procedure1 is begin null; end;
/
Create Directory and Procedure
Run these steps as SYS. The definer's rights procedure runs as SYS, so no roles or privileges need to be granted to any users.
--Create directory that will contain SQL file.
create directory ddl_directory as 'C:\temp';
grant read on directory ddl_directory to jheller;
--Create procedure that can only export one hard-coded schema.
--This is based on René Nyffenegger's solution here:
--dba.stackexchange.com/questions/91149/how-to-generate-an-sql-file-with-dbms-datapump
create or replace procedure sys.generate_ddl authid definer is
    procedure create_export_file is
        datapump_job number;
        job_state    varchar2(20);
    begin
        datapump_job := dbms_datapump.open(
            operation   => 'EXPORT',
            job_mode    => 'SCHEMA',
            remote_link => null,
            job_name    => 'Export dump file',
            version     => 'LATEST');
        dbms_output.put_line('datapump_job: ' || datapump_job);
        dbms_datapump.add_file(
            handle    => datapump_job,
            filename  => 'export.dmp',
            directory => 'DDL_DIRECTORY',
            filetype  => dbms_datapump.ku$_file_type_dump_file);
        dbms_datapump.metadata_filter(
            handle => datapump_job,
            name   => 'SCHEMA_LIST',
            value  => '''TEST_USER''');
        dbms_datapump.start_job(
            handle       => datapump_job,
            skip_current => 0,
            abort_step   => 0);
        dbms_datapump.wait_for_job(datapump_job, job_state);
        dbms_output.put_line('Job state: ' || job_state);
        dbms_datapump.detach(datapump_job);
    end create_export_file;

    procedure create_sql_file is
        datapump_job number;
        job_state    varchar2(20);
    begin
        datapump_job := dbms_datapump.open(
            operation   => 'SQL_FILE',
            job_mode    => 'SCHEMA',
            remote_link => null,
            job_name    => 'Export SQL file',
            version     => 'LATEST');
        dbms_output.put_line('datapump_job: ' || datapump_job);
        dbms_datapump.add_file(
            handle    => datapump_job,
            filename  => 'export.dmp',
            directory => 'DDL_DIRECTORY',
            filetype  => dbms_datapump.ku$_file_type_dump_file);
        dbms_datapump.add_file(
            handle    => datapump_job,
            filename  => 'schema.sql',
            directory => 'DDL_DIRECTORY',
            filetype  => dbms_datapump.ku$_file_type_sql_file);
        dbms_datapump.start_job(
            handle       => datapump_job,
            skip_current => 0,
            abort_step   => 0);
        dbms_datapump.wait_for_job(datapump_job, job_state);
        dbms_output.put_line('Job state: ' || job_state);
        dbms_datapump.detach(datapump_job);
    end create_sql_file;
begin
    create_export_file;
    create_sql_file;
end;
/
--Grant to users.
grant execute on generate_ddl to jheller;
Setup OCP on the Client
Files on an Oracle directory can be easily transferred to a client PC using OCP as described in this answer. The setup is a bit tricky: download the precise version of the program and the instant client and unzip them into the same directory. I think I also had some problems with a VC++ redistributable or something the first time.
Commands to Run
Now the easy part - creating and moving the files is done in two simple steps:
execute sys.generate_ddl;
C:\Users\jonearles\Downloads\ocp-0.1-win32>ocp jheller/jheller@orcl12 DDL_DIRECTORY:schema.sql schema.sql
Sample Output
This output contains a lot of weird things: extra commands and options that few people will recognize. That's probably one of the reasons this seemingly obvious feature is so difficult: with thousands of odd features to account for, it's impossible to produce output that is both understandable and completely unambiguous.
CREATE TABLE "TEST_USER"."TABLE1"
( "A" NUMBER
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "USERS" ;
...
-- new object type path: SCHEMA_EXPORT/PROCEDURE/PROCEDURE
-- CONNECT TEST_USER
CREATE EDITIONABLE procedure procedure1 is begin null; end;
/
...
-- new object type path: SCHEMA_EXPORT/VIEW/VIEW
CREATE FORCE EDITIONABLE VIEW "TEST_USER"."VIEW1" ("A") AS
select 1 a from dual
;

Opening and closing database connections in multiuser environment

This is a multiuser (multithreaded) application where various departments will access their own database. The database is SQLite and I am using FireDAC. For each department I have assigned a separate ADConnection so I don't get any unexpected locks.
Which connection is activated (active) depends solely on the number produced by ADQuery3. This is done in the MainForm's OnShow because it needs to be this way (the form is shown after a successful login). I would like to be able to close every connection on FormClose, but I run into some bad issues when multiple users use the same database and log in and out. So I would like to ask whether this is the right programming logic, or whether it could be done in a better way.
Also, I have never used this many begin/end/else blocks and I am wondering how to proceed with this.
I mean, when I need to check whether the number of another department came up, like
if DataModule1.ADQuery3.FieldByName('DEPARTMENT').AsString = '12', where does the next ELSE go?
procedure TMainForm.FormShow(Sender: TObject);
begin
  if DataModule1.ADQuery3.FieldByName('DEPARTMENT').AsString = '13'
  then begin
    try
      if DataModule1.1_CONNECTION.Connected = true then
        DataModule1.1_CONNECTION.Connected := False
      else
        DataModule1.1_CONNECTION.DriverName := 'SQLite';
      DataModule1.1_CONNECTION.Params.Values['Database'] := ExtractFilePath(Application.ExeName) + 'mydatabase.db';
      DataModule1.1_CONNECTION.Connected := true;
      DataModule1.ADTable1.TableName := 'DEPT_13';
      DataModule1.DEPT_13.Active := True;
      cxGrid1.ActiveLevel.GridView := DEPT_13;
    except
      on E: Exception do begin
        ShowMessage('There was an error... : ' + E.Message);
      end;
    end;
  end;
end;

How to ignore errors from nested stored procedures in a SQL Server 2000 procedure called from ASP

I am working on a "classic" ASP application with a SQL Server 2000 database.
We have a stored procedure (let's call it SP0) that calls other stored procedures (let's say SP0.1, SP0.2 ...) which themselves call another stored procedure called SPX.
All those procedures generate errors using RAISERROR() when something goes wrong.
We want to be able to launch SP0 with a parameter @errorsInResultSet which will change its behaviour: instead of "re-raising" the errors as it does so far, each sub-procedure will log the errors in a temporary table #detectedProblems and return it at the end.
Adding errors to the temporary table is not a problem, but I cannot figure out how to ignore the errors generated by the nested stored procedures.
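For illustration, the "log instead of re-raise" branch inside a sub-procedure such as SPX would look roughly like this sketch (the #detectedProblems column name is assumed):
IF (@errorsAsResultSet = 0x1)
BEGIN
    INSERT INTO #detectedProblems (error_message)
    VALUES ('SPX: Error for table Tests')
END
ELSE
BEGIN
    RAISERROR('SPX: Error for table Tests', 16, 1)
END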
I have done this so far :
EXEC @rc = [SP0.1] @errorsAsResultSet = @errorsAsResultSet
IF (0 <> @@ERROR) OR (0 <> @rc)
BEGIN
    IF (@errorsAsResultSet <> 0x1)
    BEGIN
        RAISERROR('SP0.1: Error for table Tests in db %s.%s', 16, 1, @@SERVERNAME, @db)
    END
    GOTO FAILURE
END
This works fine, but it still generates errors from the lowest-level SPX, which prevents it from being executed by ADO in classic ASP.
How can I ignore the errors?
If you're happy that the errors are being logged and it's safe to continue, you can use ON ERROR RESUME NEXT on the line before the SP call. This will prevent the page from throwing errors.
To turn errors back on later in the page, you can use ON ERROR GOTO 0.
In the end, it looks like there is no way to "hide" messages generated by PRINT or RAISERROR statements in SP0.1, SP0.2 from the calling Stored Procedure, which means that the execution is always interpreted as "erroneous" by ASP.
I ended up writing a new stored procedure with a special parameter to configure how errors are reported.
