Too many exceptions causes error 5635? (An actual Stack Overflow!) - openedge

I've got a problem with ABL exception handling. It looks as if raising too many exceptions gets you error 5635, which would make exception trapping not entirely useful, if true.
Has anyone else seen this?
Does anyone know of a way around it, short of going back to old-style ABL code without exception handling?
Here is (some of) my actual code. Lots of weird external calls but it's the exception checking we are talking about here:
for each b-archead
    where b-archead.depot = ip-depot
      and b-archead.o-week >= ip-startwk
      and b-archead.o-week <= ip-endwk
    use-index o-week
    no-lock
    on error undo, throw:

    assign v-directory = b-archead.directory
           v-invoice   = b-archead.invoice
           v-o-date    = b-archead.o-date
           v-path      = arc_path(buffer b-archead)
           v-success   = no
           v-error     = "".

    if not file_status(v-path) begins "Y" then
        undo, throw new progress.lang.apperror
            (subst("Source file '&1' missing", v-path), 300).

    run process_one (buffer b-archead, input v-path, input ip-todir).
    v-success = yes.

    catch e2 as progress.lang.error:
        v-error = e2:getmessage(1).
        run log ( 'w', v-error ).
        next.
    end catch.

    finally:
        put stream s-out unformatted
            csv_char(v-directory)
            ',' csv_int(v-invoice)
            ',' csv_date(v-o-date)
            ',' csv_int(v-o-week)
            ',' csv_char(v-path)
            ',' csv_char(v-success)
            ',' csv_char(v-error)
            skip.
    end finally.
end.
Here is the error I get when I run it and most of the archead records result in an exception:
SYSTEM ERROR: -s exceeded. Raising STOP condition and attempting to write stack
trace to file 'procore'. Consider increasing -s startup parameter. (5635)
The code works fine with one or two exceptions; it only fails when there are a lot of them (hundreds?). -s is set to 150, which seems okay to me.

The -s error can occur when you have an infinite loop of procedure calls, for example:
run a.

procedure a:
    run b.
end.

procedure b:
    run a.
end.
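For comparison, here is the same runaway mutual recursion sketched in Python (an analogy, not OpenEdge behaviour): the call stack fills up just as with -s, but Python reports it as a catchable RecursionError rather than a raw stack overflow.

```python
# Two functions that call each other forever will eventually
# exhaust the call stack, like the ABL "run a / run b" example.
def a():
    b()

def b():
    a()

try:
    a()
except RecursionError as exc:
    print("stack exhausted:", exc)
```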
It could be a problem in the file_status function, or in the process_one or log procedures.

Regrettably -- and until I can get a more definitive answer -- it does in fact appear that there is an upper limit on the number of exceptions you can catch.
Here is the shortest code that reproduces the problem:
def var v-i as int no-undo.

do v-i = 1 to 5000 on error undo, throw:
    undo, throw new progress.lang.apperror ( "error message" ).
    catch e as progress.lang.apperror:
        message "boo". pause 0.
    end catch.
end.
For me this always falls over with the error above at the point when v-i = 4583. If my exception takes an error number, e.g. undo, throw new progress.lang.apperror ("error message", 1234)., then the number is 2293.
The simplicity of the failing code, plus the fact that the number of iterations you get depends on the size of the error object, leads me to believe that it is the error objects that are causing the overflow. In other words, e is not cleaned up with each iteration.
Whether or not this is a bug in my version of OpenEdge (10.2B), it's certainly something I will have to work around in future.
EDIT: the workaround turns out to be painfully obvious once you know it. The error object is scoped to the enclosing block, so don't use catch against a block if it iterates:
def var v-i as int no-undo.

do v-i = 1 to 5000 on error undo, throw:
    do on error undo, throw:
        undo, throw new progress.lang.apperror ( "error message" ).
        catch e as progress.lang.apperror:
            message "boo". pause 0.
        end catch.
    end.
end.

Log the activity to see what is actually going on - check the LOG-MANAGER docs for more info.

Related

R bigrquery - how to catch error messages from executed SQL?

Say I have some SQL code that refreshes a table of data, and I would like to schedule an R script to run this code daily. Is there a way to capture any potential error messages the SQL code may throw and save that error message to an R variable, instead of the error message being displayed in the R console log?
For example, assume I have a stored procedure sp_causing_error() in BigQuery that takes data from a source table source_table and refreshes a target table table_to_refresh.
CREATE OR REPLACE PROCEDURE sp_causing_error()
BEGIN
CREATE OR REPLACE TABLE table_to_refresh AS (
Select non_existent_column, x, y, z
From source_table
);
END;
Assume the schema of the source_table has changed and column non_existent_column no longer exists. When attempting to call sp_causing_error() in RStudio via:
library(bigrquery)
query <- "CALL sp_causing_error()"
bq_project_query(my_project, query)
We get an error message printed to the console (which masks the actual error message we would encounter if running in BigQuery):
Error in UseMethod("as_bq_table") : no applicable method for 'as_bq_table' applied to an object of class "NULL"
If we were to run sp_causing_error() in BigQuery, it throws an error message stating:
Query error: Unrecognized name: non_existent_column at [sp_throw_error:3:8]
Are query error messages displayed in BigQuery ever captured anywhere in bigrquery when executing SQL? My goal would be to have some sort of try/catch block in the R script that catches an error message that can then be written to an output file if the SQL code did not run successfully. Hoping there is a way we can capture the descriptive error message from BigQuery and assign it to an R variable for further processing.
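The general pattern being asked for, capturing the error object in a variable instead of letting it print, looks like this in Python terms (a sketch with a hypothetical run_query stand-in for bq_project_query, simulating the BigQuery failure):

```python
def run_query(sql):
    # Hypothetical stand-in for bq_project_query(): raises on failure,
    # simulating the error message BigQuery would report.
    raise RuntimeError("Unrecognized name: non_existent_column")

try:
    result = run_query("CALL sp_causing_error()")
    error_message = None
except Exception as exc:
    # Keep the message for logging instead of letting it hit the console.
    error_message = str(exc)

print("captured:", error_message)
```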
UPDATE
R's tryCatch() function comes in handy here to catch the R error message:
query <- "CALL sp_causing_error()"
result <- tryCatch(
bq_project_query("research-01-217611", query),
error = function(err) {
return(err)
}
)
result now contains the error message from the R console:
<simpleError in UseMethod("as_bq_table"): no applicable method for 'as_bq_table' applied to an object of class "NULL">
However, this is still not descriptive of the actual error message we see if we execute the same SQL code in BigQuery, quoted above which references an unrecognized column name. Are we able to catch that error message instead of the more generic R error message?
UPDATE/ANSWER
Wrapping the stored procedure call within R using BigQuery's Begin...Exception...End syntax lets us get at the actual error message. Example code snippet:
query <- '
BEGIN
CALL sp_causing_error();
EXCEPTION WHEN ERROR THEN
Select 1 AS error_flag, @@error.message AS error_message, @@error.statement_text AS error_statement_text, @@error.formatted_stack_trace AS stack_trace
;
END;
'
query_result <- bq_table_download(bq_project_query(<project>, query))
error_flag <- query_result["error_flag"][[1]]
if (error_flag == 0) {
print("Job ran successfully")
} else {
print("Job failed")
# Access error message variables here and take additional action as desired
}
Warning: Note that this solution could cause an R error if the stored procedure completes successfully, as error_flag will not exist unless explicitly passed at the end of the stored procedure. This can be worked around by adding one line at the end of your stored procedure in BigQuery to set the flag appropriately so the bq_table_download() function will get a value upon the stored procedure running successfully:
BEGIN
-- BigQuery stored procedure code
-- ...
-- ...
Select 0 AS error_flag;
END;

How to cause "Unable to flush stdout: Broken pipe" in Perl? [duplicate]

After upgrading to Perl 5.24.4 we repeatedly get this error in our logs (without any filename or line number):
Unable to flush stdout: Broken pipe
We have no idea what causes this error.
Is there any advice on how to track down the cause of the error?
The error comes from perl.c, line 595:
PerlIO_printf(PerlIO_stderr(), "Unable to flush stdout: %s",
Strerror(errno));
This line is part of perl_destruct, which is called to shut down the perl interpreter at the end of the program.
As part of the global shutdown procedure, all still open filehandles are flushed (i.e. all buffered output is written out). The comment above says:
/* Need to flush since END blocks can produce output */
/* flush stdout separately, since we can identify it */
The error message is not listed in perldoc perldiag, which is arguably a documentation bug. It was probably overlooked because it's not a real warn or die call, it's effectively just print STDERR $message. It's not associated with a file name or line number because it only happens after your program stops running (i.e. after a call to exit or because execution fell off the end of the main script).
This is very general advice, but
use Carp::Always;
at the top of the script, or running with
perl -MCarp::Always the_script.pl arg1 arg2 ...
will get Perl to produce stack traces with every warning and error.
Broken pipe is the error string associated with the system error EPIPE. One receives this error when writing to a closed pipe. Writing to a closed pipe usually results in the process being killed by a SIGPIPE, so getting EPIPE instead means the behaviour of SIGPIPE was changed from its default.
$ perl -e'
$SIG{PIPE} = "IGNORE";
print "foo\n"
or die("Can\x27t write to STDOUT: $!\n");
sleep(2);
close(STDOUT)
or die("Unable to flush STDOUT: $!\n");
' | perl -e'sleep(1)'
Unable to flush STDOUT: Broken pipe
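The same EPIPE condition can be reproduced outside Perl. A minimal Python sketch (assuming a POSIX system, where SIGPIPE exists and a write to a reader-less pipe fails with EPIPE once SIGPIPE is ignored):

```python
import errno
import os
import signal

# Ignore SIGPIPE so a write to a broken pipe raises an error
# instead of silently killing the process (the default action).
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

r, w = os.pipe()
os.close(r)  # the reader goes away, as when a downstream process exits early

try:
    os.write(w, b"foo\n")
except OSError as exc:
    # errno.EPIPE is the "Broken pipe" the Perl message reports
    print("write failed:", exc)
finally:
    os.close(w)
```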
As melpomene discovered, the error is automatically output if you write to a broken pipe in an END block.
$ perl -e'
$SIG{PIPE} = "IGNORE";
sleep(2);
END { print "foo\n"; }
' | perl -e'sleep(1)'
Unable to flush stdout: Broken pipe
This isn't necessarily a problem, although it could be a sign that a process is exiting prematurely.

'File not found' error on an existing file

I have sometimes a 'file not found' error on the 'DeleteFile' line of this small script:
(I guess when several clients open the script at the same time)
if objFSO.FileExists(fileName) then
    Set f = objFSO.GetFile(fileName)
    if DateDiff("d", f.DateLastModified, date()) > 3 then
        Application.Lock
        objFSO.DeleteFile(fileName)
        Application.Unlock
    end if
    Set f = nothing
end if
But shouldn't this be protected by the FileExists check on the first line?
Any idea? Thanks.
You're running into a race condition. The file attributes are cached in the second line with GetFile. If the file exists at that point, the code will continue to run. You either need to lock before that point, or refresh your attribute cache and double-check existence after Application.Lock.
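The general fix for this kind of check-then-act race is to attempt the operation and handle the failure, rather than testing first. A sketch of that pattern in Python terms (a hypothetical helper, not ASP; the VBScript equivalent would be handling the error around DeleteFile itself):

```python
import os
import time

def delete_if_stale(path, max_age_days=3):
    """Delete path if it hasn't been modified in max_age_days,
    tolerating a concurrent client deleting it first."""
    try:
        mtime = os.path.getmtime(path)
        if time.time() - mtime > max_age_days * 86400:
            os.remove(path)
    except FileNotFoundError:
        pass  # another client won the race; nothing left to do
```

The existence check and the delete can never be fully atomic from the caller's side, so the error handler, not the pre-check, is what makes this safe.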

invalid procedure call or argument left

I am facing the following error:
Microsoft VBScript runtime error '800a0005'
Invalid procedure call or argument: 'left'
/scheduler/App.asp, line 16
The line is:
point1 = left(point0,i-1)
This code works perfectly on one server, but on another server it shows this error. I can only guess it has to do with system or IIS settings, or maybe something else, but it's nothing to do with the code itself (as it works fine on the other server).
If i is equal to zero then this will call Left() with -1 as the length parameter, which results in an Invalid procedure call or argument error. Verify that i >= 0.
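In Python terms the failing call corresponds to passing a negative length, and the fix is to clamp or validate before calling. A sketch (vb_left is a hypothetical emulation of VBScript's Left(), and the i = 0 case mimics a failed string search):

```python
def vb_left(s, n):
    # Emulates VBScript Left(): a negative length is an error.
    if n < 0:
        raise ValueError("Invalid procedure call or argument: 'left'")
    return s[:n]

point0 = "12.5,47.1"
i = 0                        # e.g. a search that found no match
j = max(i - 1, 0)            # guard: never pass a negative length
point1 = vb_left(point0, j)  # empty string instead of a runtime error
```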
Just experienced this problem myself - a script running seamlessly for many months suddenly collapsed with this error. It seems that the scripting engine falls over itself for whatever reason and string functions cease being able to handle in-function calculations.
I appreciate it's been quite a while since this question was asked, but in case anyone encounters this in the future...
Replace
point1 = left(point0, i-1)
with
j = i-1
point1 = left(point0, j)
... and it will work.
Alternatively, simply re-boot the server (unfortunately, simply re-starting the WWW service won't fix it).

Vendors black box function can only be called successfully once

(first question here, sorry if I am breaking a piece of etiquette)
My site is running on an eCommerce back end provider that I subscribe to. They have everything in classic ASP. They have a black box function called import_products that I use to import a given text file into my site's database.
The problem is that if I call the function more than once, something breaks. Here is my example code:
for blah = 1 to 20
    thisfilename = "fullcatalog_" & blah & ".csv"
    Response.Write thisfilename & "<br>"
    Response.Flush
    Call Import_Products(3,thisfilename,1)
Next
Response.End
The first execution of the Import_Products function works fine. The second time I get:
Microsoft VBScript runtime error '800a0009'
Subscript out of range: 'i'
The filenames all exist. That part is fine. There are no bugs in my calling code. I have tried checking the value of "i" before each execution. The first time the value is blank, and before the second execution the value is "2". So I tried setting it to null during each loop iteration, but that didn't change the results at all.
I assume that the function is setting a variable or opening a connection during its execution, but not cleaning it up, and then not expecting it to already be set the second time. Is there any way to find out what this would be? Or somehow reset the condition back to nothing so that the function will be 'fresh'?
The function is in an unreadable include file so I can't see the code. Obviously a better solution would be to go with the company support, and I have a ticket it in with them, but it is like pulling teeth to get them to even acknowledge that there is a problem. Let alone solve it.
Thanks!
EDIT: Here is a further simplified example of calling the function. The first call works. The second call fails with the same error as above.
thisfilename = "fullcatalog_testfile.csv"
Call Import_Products(3,thisfilename,1)
Call Import_Products(3,thisfilename,1)
Response.End
The likely cause of the error is the two numeric parameters passed to the Import_Products subroutine.
Import_Products(???, FileName, ???)
The values are 3 and 1 in your example but you never explain what they do or what they are documented to do.
EDIT: Since correcting the vendor subroutine is impossible, but it always works the first time it's called, let's use an HTTP redirect instead of a FOR loop, so that it technically only gets called once per page execution.
www.mysite.tld/import.asp?current=1&end=20
curr = CInt(Request.QueryString("current"))
last = CInt(Request.QueryString("end"))  ' "end" is a reserved word in VBScript, so use another name

If curr <= last Then
    thisfilename = "fullcatalog_" & curr & ".csv"
    Call Import_Products(3,thisfilename,1)
    Response.Redirect("www.mysite.tld/import.asp?current=" & (curr + 1) & "&end=" & last)
End If
Note: the above was written in my browser and is untested, so syntax errors may exist.