Simple way to output messages from a stored procedure in Teradata

Using SQL Assistant:
REPLACE PROCEDURE test_proc()
BEGIN
  DECLARE l_msg VARCHAR(128);
  SET l_msg = 'test';
  -- PRINT is not supported
  --print l_msg;
  -- DEBUG is recognized as a special token, but doesn't work
  --debug l_msg;
  -- this does nothing
  --SIGNAL SQLSTATE '02000';
END;
Is there a simple way to output text during a procedure's execution, aside from writing to a log table?
TD 14.xx
EDIT:
Not trying to handle exceptions, but rather to send text messages to the client as the procedure progresses, regardless of the state/condition, similar to PRINT (Sybase), DBMS_OUTPUT (Oracle), or DEBUG (SQL Server).
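For reference, the log-table workaround the question rules out would look something like this (a minimal sketch; proc_log is a hypothetical table that another session can poll while the procedure runs):
CREATE TABLE proc_log (ts TIMESTAMP(6), msg VARCHAR(128));

REPLACE PROCEDURE test_proc()
BEGIN
  DECLARE l_msg VARCHAR(128);
  SET l_msg = 'test';
  -- no PRINT/DBMS_OUTPUT equivalent; persist progress instead
  -- (inside a Teradata SP, local variables take a leading colon in SQL)
  INSERT INTO proc_log (ts, msg) VALUES (CURRENT_TIMESTAMP, :l_msg);
END;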

Related

Oracle PLSQL Print Bulk Batch Output After Each Batch Completion [duplicate]

I have an SQL script that is called from within a shell script and takes a long time to run. It currently contains dbms_output.put_line statements at various points. The output from these print statements appears in the log files, but only once the script has completed.
Is there any way to ensure that the output appears in the log file as the script is running?
Not really. The way DBMS_OUTPUT works is this: your PL/SQL block executes on the database server with no interaction with the client. So when you call PUT_LINE, it is just putting that text into a buffer in memory on the server. When your PL/SQL block completes, control is returned to the client (I'm assuming SQL*Plus in this case); at that point the client gets the text out of the buffer by calling GET_LINE, and displays it.
So the only way you can make the output appear in the log file more frequently is to break up a large PL/SQL block into multiple smaller blocks, so control is returned to the client more often. This may not be practical depending on what your code is doing.
Other alternatives are to use UTL_FILE to write to a text file, which can be flushed whenever you like, or use an autonomous-transaction procedure to insert debug statements into a database table and commit after each one.
If it is possible for you, you should replace the calls to dbms_output.put_line with your own procedure.
Here is the code for this WRITE_LOG procedure, if you want the ability to choose between 2 logging solutions:
write logs to a table in an autonomous transaction
CREATE OR REPLACE PROCEDURE to_dbg_table(p_log varchar2)
-- table mode:
-- requires
--   CREATE TABLE dbg (u varchar2(200)   --- username
--                   , d timestamp       --- date
--                   , l varchar2(4000)  --- log
--                   );
AS
  pragma autonomous_transaction;
BEGIN
  insert into dbg(u, d, l) values (user, sysdate, p_log);
  commit;
END to_dbg_table;
/
Or write logs directly to a file on the DB server that hosts your database.
This uses the Oracle directory TMP_DIR:
CREATE OR REPLACE PROCEDURE to_dbg_file(p_fname varchar2, p_log varchar2)
-- file mode:
-- requires
--   CREATE OR REPLACE DIRECTORY TMP_DIR as '/directory/where/oracle/can/write/on/DB_server/';
AS
  l_file utl_file.file_type;
BEGIN
  l_file := utl_file.fopen('TMP_DIR', p_fname, 'A');
  utl_file.put_line(l_file, p_log);
  utl_file.fflush(l_file);
  utl_file.fclose(l_file);
END to_dbg_file;
/
WRITE_LOG
Then the WRITE_LOG procedure, which can switch between the 2 modes, or be deactivated to avoid performance loss (g_DEBUG := FALSE).
CREATE OR REPLACE PROCEDURE write_log(p_log varchar2) AS
  -- g_DEBUG can be set as a package variable defaulted to FALSE,
  -- then changed when debugging is required
  g_DEBUG boolean := true;
  -- the log file name can be set with several methods...
  g_logfname varchar2(32767) := 'my_output.log';
  -- choose between 2 logging solutions:
  -- file mode:
  g_TYPE varchar2(7) := 'file';
  -- table mode:
  --g_TYPE varchar2(7) := 'table';
  -----------------------------------------------------------------
BEGIN
  if g_DEBUG then
    if g_TYPE = 'file' then
      to_dbg_file(g_logfname, p_log);
    elsif g_TYPE = 'table' then
      to_dbg_table(p_log);
    end if;
  end if;
END write_log;
/
And here is how to test the above:
1) Launch this (file mode) from SQL*Plus:
BEGIN
  write_log('this is a test');
  for i in 1..100 loop
    DBMS_LOCK.sleep(1);
    write_log('iter=' || i);
  end loop;
  write_log('test complete');
END;
/
2) On the database server, open a shell and run:
tail -f -n500 /directory/where/oracle/can/write/on/DB_server/my_output.log
Two alternatives:
You can insert your logging details in a logging table using an autonomous transaction, and query that table from another SQL*Plus/Toad/SQL Developer session. You have to use an autonomous transaction so that your logging can be committed without interfering with the transaction handling in your main SQL script.
Another alternative is to use a pipelined function that returns your logging information. See here for an example: http://berxblog.blogspot.com/2009/01/pipelined-function-vs-dbmsoutput.html With a pipelined function you don't need another SQL*Plus/Toad/SQL Developer session.
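The idea, as a minimal sketch (names are illustrative, not taken from the linked post): each row is streamed to the client as it is fetched, instead of after the whole block completes.
CREATE OR REPLACE TYPE t_log_tab AS TABLE OF varchar2(4000);
/
CREATE OR REPLACE FUNCTION long_job RETURN t_log_tab PIPELINED AS
BEGIN
  PIPE ROW ('starting');
  FOR i IN 1 .. 100 LOOP
    -- do a unit of work here, then report progress
    PIPE ROW ('iter=' || i);
  END LOOP;
  PIPE ROW ('done');
  RETURN;
END long_job;
/
-- each row arrives as the client fetches it:
-- SELECT * FROM TABLE(long_job);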
The DBMS_OUTPUT buffer is read when the procedure DBMS_OUTPUT.GET_LINE is called. If your client application is SQL*Plus, this means it only gets flushed once the procedure finishes.
You can apply the method described in this SO post to write the DBMS_OUTPUT buffer to a file.
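That linked method isn't reproduced here, but the underlying mechanism is GET_LINE; a minimal sketch of draining the buffer into a file yourself, reusing the TMP_DIR directory from the answer above (run in the same session, after the block that filled the buffer):
DECLARE
  l_line   varchar2(32767);
  l_status integer;
  l_file   utl_file.file_type;
BEGIN
  l_file := utl_file.fopen('TMP_DIR', 'dbms_output.log', 'A');
  LOOP
    dbms_output.get_line(l_line, l_status);
    EXIT WHEN l_status <> 0;  -- non-zero status means the buffer is empty
    utl_file.put_line(l_file, l_line);
  END LOOP;
  utl_file.fclose(l_file);
END;
/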
Set session metadata MODULE and/or ACTION using dbms_application_info().
Monitor with OEM, for example:
Module: ArchiveData
Action: xxx of xxxx
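A minimal sketch (the module/action strings are illustrative); the values are also visible in V$SESSION, not only in OEM:
BEGIN
  dbms_application_info.set_module(module_name => 'ArchiveData',
                                   action_name => 'starting');
  FOR i IN 1 .. 100 LOOP
    dbms_application_info.set_action('iter ' || i || ' of 100');
    -- do a unit of work here
  END LOOP;
  dbms_application_info.set_module(NULL, NULL);
END;
/
-- monitor from another session:
-- SELECT module, action FROM v$session WHERE username = USER;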
If you have access to the system shell from the PL/SQL environment, you can call netcat:
BEGIN RUN_SHELL('echo "'||p_msg||'" | nc '||p_host||' '||p_port||' -w 5'); END;
p_msg is the log message; p_host is a host running a Python script that reads data from a socket on port p_port.
I used this design when I wrote aplogr for real-time shell and PL/SQL log monitoring.

Why doesn't my SQLite database add data until I close my application?

My Delphi application is using FireDAC and an SQLite database. I've noticed that updates are being saved in a journal file and the database file is not actually updated until I close my application.
The application is making lots of 'batch updates' to the database. Each individual update is inside a TFDQuery.StartTransaction ... TFDQuery.Commit pair. Despite this, it seems all updates are held in the journal file until the application ends.
How can I force SQLite to update the database after each batch of updates rather than when my application finishes?
I've tried changing the SQLite db to WAL but the same thing happens.
Despite using 'StartTransaction' and 'Commit' the data stays in the journal until the application ends.
try
  Query.Connection := FDConnection1;
  FDConnection1.Open;
  FDConnection1.StartTransaction;
  Query.SQL.Text := 'select m_ManUID from t_Manufacturers where m_Name = ' + QuotedStr(ManString);
  Query.Open;
  if Query.RecordCount = 0 then begin
    { not found, so add }
    Query.SQL.Text := 'insert into t_Manufacturers (m_Name, m_ManUID) values (:Name, null)';
    Query.ParamByName('Name').AsString := ManString;
    Query.ExecSQL;
    { save m_ManUID for logging }
    Query.SQL.Text := 'select m_ManUID from t_Manufacturers where m_Name = ' + QuotedStr(ManString);
    Query.Open;
  end;
  Result := Query.FieldByName('m_ManUID').AsInteger;
  FDConnection1.Commit;
except
  on E : EDatabaseError do begin
    MessageDlg('Database error adding manufacturer: ' + E.Message, mtError, [mbOk], 0);
    FDConnection1.Rollback;
  end;
end;
No error messages or issues. Provided the application finishes OK, the database is updated as expected, so I'm happy that my programming and SQL are doing exactly what I need in that respect.
It is very dubious that "it seems all updates are held in the journal file until the application ends". SQLite3 is very serious about writing data - more serious than most DB engines I know. Just check https://www.sqlite.org/atomiccommit.html
I suspect you are somewhat confused by the presence of the journal file. After a transaction, the journal file is still kept there on disk, ready for any new write operation. But the data is actually written in the main file.
Just write some data, then kill the application before closing it (using the task manager). Then re-open the file (re-start the app): I am almost sure you will see the data properly stored.
FireDAC is "cheating" with the default journalization mode, for best performance. It uses some default values which may be confusing. As stated by FireDAC documentation: Set LockingMode to Normal to enable shared DB access. Set Synchronous to Normal or Full to make committed data visible to others.
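For example, a minimal sketch of relaxing those defaults before opening the connection (parameter names as given in the FireDAC SQLite driver documentation quoted above):
// make committed data visible to other connections
FDConnection1.Params.Values['LockingMode'] := 'Normal';
FDConnection1.Params.Values['Synchronous'] := 'Full';
FDConnection1.Open;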
You are using FDConnection1.StartTransaction. At that point the transaction is still held in memory (in the cache), with no end. You therefore need to end your transaction with a commit command such as FDConnection1.Commit.
OK, I went back to basics and wrote a test application with all the database activity confined to a single procedure fired by a button click. In that procedure I added multiple rows to a table using a for loop.
The for loop is surrounded by StartTransaction and Commit calls. Running through the code in the debugger, the journal file is created on the first call to ExecSQL. However, the file remains there after the loop has completed and Commit has been called.
The database is only updated and the journal file deleted when Close is called at the end of the procedure.
procedure TForm1.Button1Click(Sender: TObject);
var
  Query : TFDQuery;
  Index : Integer;
begin
  FDConnection1.DriverName := 'SQLite';
  FDConnection1.Params.Values['Database'] := 'C:\Testing\test.db';
  FDConnection1.Open;
  Query := TFDQuery.Create(nil);
  Query.Connection := FDConnection1;
  try
    FDConnection1.StartTransaction;
    for Index := 1 to 10 do begin
      Query.SQL.Text := 'insert into Table1 (Name, IDNum) values (:Name, :IDNum)';
      Query.ParamByName('Name').AsString := 'Test_Manufacturer_' + IntToStr(Index);
      Query.ParamByName('IDNum').AsInteger := Index;
      Query.ExecSQL;
    end;
    FDConnection1.Commit;
  except
    on E : EDatabaseError do begin
      MessageDlg('Database error adding manufacturer: ' + E.Message, mtError, [mbOk], 0);
      FDConnection1.Rollback;
    end;
  end;
  Query.Free;
  FDConnection1.Close;
end;
I suspect that I have another connection to the database open within my application, and that might be holding up the update until the application closes. However, I still don't understand why the call to Commit isn't updating the database at the end of the transaction block.

How can I log sql execution results in airflow?

I use Airflow Python operators to execute SQL queries against a Redshift/Postgres database. In order to debug, I'd like the DAG to return the results of the SQL execution, similar to what you would see if executing locally in a console.
I'm using psycopg2 to create a connection/cursor and execute the SQL. Having this logged would be extremely helpful to confirm the parsed parameterized SQL, and to confirm that data was actually inserted (I have painfully experienced issues where differences in environments caused unexpected behavior).
I do not have deep knowledge of Airflow or the low-level workings of the Python DB-API, but the psycopg2 documentation does seem to refer to some methods and connection configurations that may allow this.
I find it very perplexing that this is difficult to do, as I'd imagine it would be a primary use case of running ETLs on this platform. I've heard suggestions to simply create additional tasks that query the table before and after, but this seems clunky and ineffective.
Could anyone please explain how this may be possible, and if not, explain why? Alternate methods of achieving similar results welcome. Thanks!
So far I have tried the connection.status_message() method, but it only seems to return the first line of the SQL and not the results. I have also attempted to create a logging cursor, which produces the SQL, but not the console results.
import logging
import sys

import psycopg2 as pg
from psycopg2.extras import LoggingConnection

conn = pg.connect(
    connection_factory=LoggingConnection,
    ...
)
conn.autocommit = True

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler(sys.stdout))
conn.initialize(logger)

cur = conn.cursor()
sql = """
INSERT INTO mytable (
    SELECT *
    FROM other_table
);
"""
cur.execute(sql)
I'd like the logger to return something like:
sql> INSERT INTO mytable (
SELECT ...
[2019-07-25 23:00:54] 912 rows affected in 4 s 442 ms
Let's assume you are writing an operator that uses the Postgres hook to do something in SQL.
Anything printed inside an operator is logged.
So, if you want to log the statement, just print the statement in your operator.
print(sql)
If you want to log the result, fetch the result and print the result.
E.g.
result = cur.fetchall()
for row in result:
    print(row)
Alternatively you can use self.log.info in place of print, where self refers to the operator instance.
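Putting those pieces together, a minimal operator sketch (the class name is hypothetical, and the PostgresHook import path varies between Airflow versions):
from airflow.models.baseoperator import BaseOperator
# older Airflow versions: from airflow.hooks.postgres_hook import PostgresHook
from airflow.providers.postgres.hooks.postgres import PostgresHook

class LoggedSqlOperator(BaseOperator):  # hypothetical name
    def __init__(self, sql, postgres_conn_id="postgres_default", **kwargs):
        super().__init__(**kwargs)
        self.sql = sql
        self.postgres_conn_id = postgres_conn_id

    def execute(self, context):
        hook = PostgresHook(postgres_conn_id=self.postgres_conn_id)
        conn = hook.get_conn()
        cur = conn.cursor()
        self.log.info("Executing: %s", self.sql)        # appears in the task log
        cur.execute(self.sql)
        self.log.info("Status: %s", cur.statusmessage)  # e.g. 'INSERT 0 92380'
        if cur.description:                             # statement returned rows
            for row in cur.fetchall():
                self.log.info("Row: %s", row)
        conn.commit()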
Ok, so after some trial and error I've found a method that works for my setup and objective. To recap, my goal is to run ETL's via python scripts, orchestrated in Airflow. Referring to the documentation for statusmessage:
Read-only attribute containing the message returned by the last command:
The key is to manage logging in context with transactions executed on the server. In order for me to do this, I had to specifically set con.autocommit = False, and wrap SQL blocks with BEGIN TRANSACTION; and END TRANSACTION;. If you insert cur.statusmessage directly following a statement that deletes or inserts, you will get a response such as 'INSERT 0 92380'.
This still isn't as verbose as I would prefer, but it is a much better than nothing, and is very useful for troubleshooting ETL issues within Airflow logs.
Side notes:
- When autocommit is set to False, you must explicitly commit transactions.
- It may not be necessary to state transaction begin/end in your SQL. It may depend on your DB version.
import logging

import psycopg2 as psy

con = psy.connect(...)  # connection parameters elided
con.autocommit = False
cur = con.cursor()
try:
    cur.execute(some_sql)  # some_sql: your SQL string
    logging.info(f"Cursor statusmessage: {cur.statusmessage}")
    con.commit()  # autocommit is off, so commit explicitly
except Exception:
    con.rollback()
finally:
    con.close()
There is some buried functionality within psycopg2 that I'm sure can be utilized, but the documentation is pretty thin and there are no clear examples. If anyone has suggestions on how to utilize things such as log objects, or how to join on the backend PID to retrieve additional information, please share.

How to get AReadLinesCount for IdTCPClient

There is a certain remote server. I want to get an answer from it.
procedure TForm1.Button1Click(Sender: TObject);
begin
  Memo1.Clear;
  IdTCPClient1.Host := '163.158.182.243';
  IdTCPClient1.Port := 28900;
  IdTCPClient1.Connect;
end;

procedure TForm1.IdTCPClient1Connected(Sender: TObject);
begin
  IdTCPClient1.IOHandler.Write('001');
  IdTCPClient1.IOHandler.ReadStrings(Memo1.Lines, 25, IndyTextEncoding(IdTextEncodingType.encOSDefault));
end;
The procedure requires a parameter to specify AReadLinesCount, otherwise the program stops responding
procedure TIdIOHandler.ReadStrings(ADest: TStrings; AReadLinesCount: Integer = -1;
AByteEncoding: IIdTextEncoding = nil
{$IFDEF STRING_IS_ANSI}; ADestEncoding: IIdTextEncoding = nil{$ENDIF}
);
How can I determine AReadLinesCount from the responses received?
The server needs to tell your client when to stop reading. There are two ways it can do that:
It can send the number of lines before sending the lines themselves. You would read the number first, and then read the specified number of lines that follow.
It can send a unique terminating delimiter after sending the lines. You would read lines in a loop until you reach the terminator.
You have not provided any details about the protocol you are trying to implement, so no one can tell you exactly what to write in your code to make this work.
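Purely as an illustration, the two patterns could look like this (the count-first line and the '.' terminator are assumptions about your unspecified protocol):
procedure TForm1.IdTCPClient1Connected(Sender: TObject);
var
  LCount: Integer;
begin
  IdTCPClient1.IOHandler.Write('001');
  // option 1: the server sends the line count on its own line first
  LCount := StrToInt(IdTCPClient1.IOHandler.ReadLn);
  IdTCPClient1.IOHandler.ReadStrings(Memo1.Lines, LCount);
end;

procedure TForm1.ReadUntilTerminator;
var
  LLine: string;
begin
  // option 2: the server ends the response with a '.' line
  repeat
    LLine := IdTCPClient1.IOHandler.ReadLn;
    if LLine <> '.' then
      Memo1.Lines.Add(LLine);
  until LLine = '.';
end;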

How to ignore errors from nested stored procedures in a SQL Server 2000 procedure called from ASP

I am working on a "classic" ASP application with a SQL Server 2000 database.
We have a stored procedure (let's call it SP0) that calls other stored procedures (let's say SP0.1, SP0.2 ...) which themselves call another stored procedure called SPX.
All those procedures raise errors with RAISERROR() when something goes wrong.
We want to be able to launch SP0 with a parameter @errorsInResultSet which will change its behaviour: instead of "re-raising" the errors as it does so far, each sub-procedure will log the errors in a temporary table #detectedProblems and return it at the end.
Adding errors to the temporary table is not a problem, but I can not figure out how to ignore the errors generated by the nested stored procedures.
I have done this so far:
EXEC @rc = [SP0.1] @errorsAsResultSet = @errorsAsResultSet
IF (0 <> @@ERROR) OR (0 <> @rc)
BEGIN
  IF (@errorsAsResultSet <> 0x1)
  BEGIN
    RAISERROR('SP0.1: Error for table Tests in db %s.%s', 16, 1, @@SERVERNAME, @db)
  END
  GOTO FAILURE
END
This works fine, but it still generates errors from the lowest SPX, which prevents it from being executed by ADO in classic ASP.
How can I ignore the errors?
If you're happy that the errors are being logged and it's safe to continue, you can use ON ERROR RESUME NEXT on the line before the SP call. This will prevent the page from throwing errors.
To turn back on errors later in the page, you can use ON ERROR GOTO 0
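In the ASP page, that looks roughly like this (conn and the parameter value are placeholders):
' suppress runtime errors around the ADO call (sketch)
On Error Resume Next
Set rs = conn.Execute("EXEC SP0 @errorsInResultSet = 1")
If Err.Number <> 0 Then
    ' errors were already logged server-side; decide whether to continue
    Err.Clear
End If
On Error GoTo 0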
It looks like there is no way to "hide" messages generated by PRINT or RAISERROR statements in SP0.1, SP0.2 from the calling stored procedure, which means the execution is always interpreted as "erroneous" by ASP.
In the end, I rewrote a new stored procedure with a special parameter to configure how errors are reported.
