Time taken to run a loop (Progress 4GL) - openedge

I wrote a query which contains multiple FOR EACH statements. The query takes more than 20 minutes to fetch the data. Is there a way to check what time each loop started and ended? (That is, how much time does each loop take to execute, and what is the total time taken to complete the program?)

You could do as you ask (just follow JensD's suggestions), but you would likely be better served by using the profiler. You can easily add profiling for a code snippet:
assign
    profiler:enabled     = yes
    profiler:description = "description of this test"
    profiler:profiling   = yes
    profiler:file-name   = "filename.prf"
    .
/* this is deliberately awful code that should take a long time to run */
for each orderline no-lock:
    for each order no-lock:
        for each customer no-lock:
            if customer.custNum = order.custNum and orderLine.orderNum = order.orderNum then
                . /* do something */
        end.
    end.
end.
/* end of test snippet */
assign
    profiler:enabled   = no
    profiler:profiling = no
    .
profiler:write-data().
You can then load that .prf file into an analysis tool. The specifics depend on your development environment: if you are using an up-to-date version of PDSOE there is a Profiler analyzer included; if not, you might want to download ProTop from
https://demo.wss.com/download.php and use the simple report included in lib/zprof_topx.p.
Ultimately what you are going to discover is that one or more of your FOR EACH statements is almost certainly using a WHERE clause that is a poor match for your available indexes.
To fix that you will need to determine which indexes are actually being selected and review the index selection rules. Some excellent material on that topic can be found here: http://pugchallenge.org/downloads2019/303_FindingData.pdf
If you don't want to go to the trouble of reading that then you should at least take a look at the actual index selection as shown by:
compile program.p xref program.xref
Do the selected indexes match your expectation? Did WHOLE-INDEX (aka "table scan") show up?
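To make the difference concrete, here is the deliberately awful snippet from above rewritten so that each inner FOR EACH joins to the outer buffer through its WHERE clause. This is just a sketch against the sports2000 demo schema that the sample table names suggest; your own tables and indexes will differ:

for each customer no-lock:
    for each order no-lock
        where order.custNum = customer.custNum:
        for each orderLine no-lock
            where orderLine.orderNum = order.orderNum:
            /* do something */
        end.
    end.
end.

With WHERE clauses like these the index selection rules can pick the indexes on custNum and orderNum and read only the matching rows, instead of scanning every combination of the three tables.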

Using ETIME you can start a counter of milliseconds. It can then be checked once or several times to tell how much time has passed since the reset (ETIME(TRUE) resets the counter).
ETIME(TRUE).
/*
Loop is here but instead I'll insert a small pause.
*/
PAUSE 0.5.
MESSAGE "This took" ETIME "milliseconds" VIEW-AS ALERT-BOX.
Milliseconds might not be useful when dealing with several minutes. In that case you can use TIME to keep track of seconds instead, but then you need to handle the start time yourself.
DEFINE VARIABLE iStart AS INTEGER NO-UNDO.
iStart = TIME.
/*
Loop is here but instead I'll insert a slightly longer pause.
*/
PAUSE 2.
MESSAGE "This took" TIME - iStart "seconds" VIEW-AS ALERT-BOX.
If you want to keep track of several times, it might be better to output to a log file instead of using a MESSAGE box that will stop execution until it's clicked.
DEFINE VARIABLE i AS INTEGER NO-UNDO.
DEFINE STREAM str.
OUTPUT STREAM str TO c:\temp\timing.txt.
ETIME(TRUE).
/*
Fake loop
*/
DO i = 1 TO 20:
    PAUSE 0.1.
    PUT STREAM str UNFORMATTED "Timing no " i " " ETIME "ms" SKIP.
END.
OUTPUT CLOSE.
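Applied to the original question, here is a minimal sketch that logs how long each FOR EACH takes as well as the grand total (the table names and empty loop bodies are placeholders for your own code):

DEFINE VARIABLE iStart AS INTEGER NO-UNDO.

OUTPUT TO c:\temp\looptimes.txt.
ETIME(TRUE). /* reset the millisecond counter once at the start */

iStart = ETIME.
FOR EACH customer NO-LOCK: /* first loop body here */
END.
PUT UNFORMATTED "customer loop took " ETIME - iStart " ms" SKIP.

iStart = ETIME.
FOR EACH order NO-LOCK: /* second loop body here */
END.
PUT UNFORMATTED "order loop took " ETIME - iStart " ms" SKIP.

PUT UNFORMATTED "total " ETIME " ms" SKIP.
OUTPUT CLOSE.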

Related

Sabre Scribe Scripting Specifically Looping

Anybody have any tips for looping and continuing? For example, I placed about 2500 PNRs on a queue, and I need to add a remark to each of them. Is it possible for a script to add the remark then move to the next PNR?
Loops are supported in Scribe but have to be built manually by creating your own iteration variables and breaking the loop manually when you know the work is complete.
What you are describing is definitely possible, but working in queues can be difficult as there are many possible responses when you try to end the PNRs. You'll have to capture the response to detect whether you need to do something else to get out of the error condition (e.g. if a PNR warning indicates you have to double-end the record).
If possible, it's likely simpler to work off the queue by collecting all PNR locators into a file and then looping through that list, adding your remarks and ending the PNRs. You'll still have to capture the response to determine whether the PNR actually ended properly, but you won't have to deal with the buggy queue behavior. A basic Scribe loop sample is below. Note that I haven't been a Scribe developer for a while and I wrote this in Notepad, so there might be some errors in here, but hopefully it's a good starting point.
DEFINE [ROW=N:8] ;iteration variable/counter
DEFINE [LOCATOR_FILE=*:60] ;File Path
DEFINE [TEMP_LOCATOR=*:6] ;pnr locator variable, read from the temp file
DEFINE [BREAK=*:1] ;loop breaking variable
OPEN F=[LOCATOR_FILE] L=0 ;open the file of locators
[BREAK] = ""
[ROW] = 0
REPEAT
[ROW] = [ROW] + 1
[TEMP_LOCATOR] = "" ;Reset temp locator variable, this will break our loop
READ F=[LOCATOR_FILE] R=[ROW] C=1 [TEMP_LOCATOR]
IF $[TEMP_LOCATOR] = 6 THEN ;test length of locator, if this is 6 chars, you have a good one, pull it up and add your remark
»"5YOUR REMARK HERE"{ENTER}«
»ER{ENTER}«
;trap errors
READ F="EMUFIND:" R=0 C=0 [TEMP_LOCATOR] ;read for the locator being present on this screen, which should indicate that the ER was successful - you'll have to trap other errors here though
IF [#SYSTEM_ERROR] = 0 THEN ;this locator was found, ER appears successful
»I{ENTER}« ;Ignore this PNR and move to the next one
ELSE
[BREAK] = "Y" ;error found afeter ER, break loop. Maybe show a popup box or something, up to you
ENDIF
ELSE ;No locator found in file, break the loop
[BREAK] = "Y"
ENDIF
UNTIL [BREAK] = "Y"
CLOSE [LOCATOR_FILE]

How to SELECT a single record in table X with the largest value for X.a WHERE values for fields X.b & X.c are specified

I am using the following query to obtain the current component serial number (tr_sim_sn) installed on the host device (tr_host_sn) from the most recent record in a transaction history table (PUB.tr_hist):
SELECT tr_sim_sn FROM PUB.tr_hist
WHERE tr_trnsactn_nbr = (SELECT max(tr_trnsactn_nbr)
FROM PUB.tr_hist
WHERE tr_domain = 'vattal_us'
AND tr_lot = '99524136'
AND tr_part = '6684112-001')
The actual table has ~190 million records. The excerpt below contains only a few sample records, and only fields relevant to the search to illustrate the query above:
tr_sim_sn |tr_host_sn* |tr_host_pn |tr_domain |tr_trnsactn_nbr |tr_qty_loc
_______________|____________|_______________|___________|________________|___________
... |
356136072015140|99524135 |6684112-000 |vattal_us |178415271 |-1.0000000000
356136072015458|99524136 |6684112-001 |vattal_us |178424418 |-1.0000000000
356136072015458|99524136 |6684112-001 |vattal_us |178628048 |1.0000000000
356136072015050|99524136 |6684112-001 |vattal_us |178628051 |-1.0000000000
356136072015836|99524137 |6684112-005 |vattal_us |178645337 |-1.0000000000
...
* = key field
The excerpt illustrates multiple occurrences of tr_trnsactn_nbr for a single value of tr_host_sn. The largest value for tr_trnsactn_nbr corresponds to the current tr_sim_sn installed within tr_host_sn.
This query works, but it is very slow: ~8 minutes.
I would appreciate suggestions to improve or refactor this query to improve its speed.
Check with your admins to determine when they last updated the SQL statistics. If the answer is "we don't know" or "never" then you might want to ask them to run the following 4gl program which will create a SQL script to accomplish that:
/* genUpdateSQL.p
*
* mpro dbName -p util/genUpdateSQL.p -param "tmp/updSQLstats.sql"
*
* sqlexp -user userName -password passWord -db dbName -S servicePort -infile tmp/updSQLstats.sql -outfile tmp/updSQLstats.log
*
*/
output to value( ( if session:parameter <> "" then session:parameter else "updSQLstats.sql" )).
for each _file no-lock where _hidden = no:
    put unformatted
        "UPDATE TABLE STATISTICS AND INDEX STATISTICS AND ALL COLUMN STATISTICS FOR PUB."
        '"' _file._file-name '"' ";"
        skip
        .
    put unformatted "commit work;" skip.
end.
output close.
return.
This will generate a script that updates statistics for all tables and all indexes. You could edit the output to only update the tables and indexes that are part of this query if you want.
Also, if the admins are nervous they could, of course, try this on a test db or a restored backup before implementing in a production environment.
I am posting this as a response to my own request for an improved query.
As it turns out, the following syntax includes two distinct changes that greatly improved the speed of the query. One is to include the tr_domain criterion in both the main and the nested portions of the query. The second is to narrow the search by increasing the number of search criteria, which in the following are all included in the nested section:
SELECT tr_sim_sn
FROM PUB.tr_hist
WHERE tr_domain = 'vattal_us'
AND tr_trnsactn_nbr IN (
SELECT MAX(tr_trnsactn_nbr)
FROM PUB.tr_hist
WHERE tr_domain = 'vattal_us'
AND tr_part = '6684112-001'
AND tr_lot = '99524136'
AND tr_type = 'ISS-WO'
AND tr_qty_loc < 0)
This syntax results in ~0.5 s response time. (Credit to my colleague, Daniel V.)
To be fair, this query uses criteria outside the parameters stated in the original post, making it difficult or impossible for others to attempt a reasonable answer. This omission was not on purpose, of course; rather it was due to my being fairly new to the fundamentals of good query design. This query is partly the result of learning that when too few fields, or non-indexed fields, are used as search criteria against a large table, it can help to narrow the search by adding criteria. The original had 3; this one has 5.
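For comparison, the same lookup can also be done from the 4GL side. This is only a sketch that assumes the field names from the question and an index leading with tr_domain; it reads the matching records in descending transaction order and stops at the first (that is, newest) one:

for each tr_hist no-lock
    where tr_hist.tr_domain = 'vattal_us'
      and tr_hist.tr_part   = '6684112-001'
      and tr_hist.tr_lot    = '99524136'
    by tr_hist.tr_trnsactn_nbr descending:
    display tr_hist.tr_sim_sn.
    leave.
end.

Note that the BY ... DESCENDING is only cheap if an index supports that ordering; otherwise the AVM has to build and sort the whole result set before returning the first row.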

How to understand a Int-Proc entry, mentioned in client.mon file?

I'm dealing with a list of errors while trying to open a *.w file in the AppBuilder. I managed to find a previous version of that file, which opens fine, and I see the following differences between the two files:
Per procedure segment information
---------------------------------
File Segment #Segments Total-Size
---- ------- --------- ----------
Good_version.w
...
Int-Proc: 19 1 26232
...
Bad_version.w
...
Int-Proc: 19 1 32712
As you can see, "Int-Proc" number 19 seems to be the one, exceeding the segment size (above 32K) and hence is the one causing the problem.
Now the obvious question: how can I know the meaning of "Int-Proc" number 19? I have some procedures inside my code but the number does not correspond with the total number of "Int-Proc" (very naïvely: I have 38 "Int-Proc" entries in client.mon but only 21 End procedure. entries in my source code).
Edit
The action to take when the 32K limit is exceeded is to split the procedure that has grown too large into smaller pieces. However, between Bad_version.w and Good_version.w it seems that in total 5 procedures have grown, and I'd like to know which one I need to split.
Disclaimer: I have never used the AppBuilder.
client.mon shows r-code statistics, so I think that instead of .w there should be a .r there. The AppBuilder has a 32000 byte (= the maximum size of a CHARACTER variable) limit for internal procedures. 32000 new lines will also break the AppBuilder view, but compile to 0 bytes (or so).
I /thought/ the AppBuilder would complain about an internal procedure being too large when you select that procedure. If it does not, you will need to get the /text/ size of each block of your .w between PROCEDURE and END PROCEDURE, and then you know which ones are your problem.
Something like:
def var lcw    as longchar  no-undo.
def var iprocs as integer   no-undo.
def var lcproc as longchar  no-undo.
def var cc     as character no-undo.
def var ic     as integer   no-undo.

cc = chr(1). /* control character used as an entry delimiter */

copy-lob from file "my.w" to lcw.

/* turn the procedure boundaries into delimiters so each block becomes an entry */
assign
    lcw    = replace( lcw, 'procedure ', cc )
    lcw    = replace( lcw, 'end procedure', cc )
    iprocs = num-entries( lcw, cc )
    .

/* any entry over ~31K is a candidate for splitting */
do ic = 1 to iprocs:
    lcproc = entry( ic, lcw, cc ).
    if length( lcproc ) > 31000 then
        message substring( lcproc, 1, 100 ) view-as alert-box.
end.
Intrigued by how the AppBuilder really complains:
started the AppBuilder
created a Smart Window
opened the first procedure section (it was a trigger)
added // some comment
saved the .w
opened the .w with Notepad++ and blew up // some comment to be larger than 32000 bytes
Opened .w with AppBuilder, endless errors.
Quit.
-> Added -debugalert to my shortcut.
On first error started debugger.
Debugger tries to start, but does not (remember the hidden procedures post)
-> Added -zn to my shortcut.
On first error started debugger.
It starts. While I cannot see any source code (I have not extracted the sources), I can see and view all variables and buffers.
Since I had blown up a trigger, the error reported _trg. [screenshots of viewing _trg in the debugger omitted]

Fill-In validation with N format

I have a fill-in with the following code, made using the AppBuilder
DEFINE VARIABLE fichNoBuktiTransfer AS CHARACTER FORMAT "N(18)":U
LABEL "No.Bukti Transfer"
VIEW-AS FILL-IN NATIVE
SIZE 37.2 BY 1 NO-UNDO.
Since the format is N, it blocks the user from typing non-alphanumeric entries. However, it does not prevent the user from copy-pasting such entries into the fill-in. To prevent them I added error checking like this in the ON LEAVE trigger:
IF LENGTH(SELF:SCREEN-VALUE) > 18 THEN DO:
    SELF:SCREEN-VALUE = ''.
    RETURN NO-APPLY.
END.
vch-list = "!,*, ,#,#,$,%,^,&,*,(,),-,+,_,=".
REPEAT vinl-entry = 1 TO NUM-ENTRIES(vch-list):
    IF INDEX(SELF:SCREEN-VALUE, ENTRY(vinl-entry, vch-list)) > 0 THEN DO:
        SELF:SCREEN-VALUE = ''.
        RETURN NO-APPLY.
    END.
END.
However, after the error handling kicked in, when the user inputs any string and triggers ON LEAVE, error 632 occurs.
Is there any way to disable the error message? Or should I approach the error handling in a different way?
EDIT: Forgot to mention, I am running on OpenEdge version 10.2B.
You didn't mention the version, but I'll assume you have a version in which the CLIPBOARD system handle already exists.
I've simulated your program and I believe it shouldn't behave that way. It seems to me the error flag is raised anyway: my guess is that even though those symbols can't be displayed, they are assigned to the screen value somehow.
Conjectures put aside, I've managed to suppress it by adding the following code:
ON CTRL-V OF FILL-IN-1 IN FRAME DEFAULT-FRAME
DO:
    /* block the paste if the clipboard contains any character from vch-list */
    do vinl-entry = 1 to num-entries(vch-list):
        if index(clipboard:value, entry(vinl-entry, vch-list)) > 0 then
            return no-apply.
    end.
END.
Of course this means vch-list can no longer be scoped to your LEAVE trigger, in case it is, because you'll need the value before the leave. So I assigned the special characters list as an INITIAL value of the variable.
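For example, a hypothetical definition using the character list from the question:

DEFINE VARIABLE vch-list AS CHARACTER NO-UNDO
    INITIAL "!,*, ,#,#,$,%,^,&,*,(,),-,+,_,=".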
After doing this, I didn't get the error anymore. Hope it helps.
To track changes in a fill-in, I always start with this code:
ON VALUE-CHANGED OF FILL-IN-1 IN FRAME DEFAULT-FRAME
DO:
    /* proofing part: reject the new value if it contains a forbidden character */
    do vinl-entry = 1 to num-entries(vch-list):
        if index(self:screen-value, entry(vinl-entry, vch-list)) > 0 then
            return no-apply.
    end.
END.
You could add some mouse or developer events via AppBuilder to track changes in a fill-in.

Use input variable in assert or specify the data to assert

I have a unit test for a function that adds data (untransformed) to the database. The data to insert is given to the create function.
Do I use the input data in my asserts or is it better to specify the data that I’m asserting?
For example:
$personRequest = [
'name'=>'John',
'age'=>21,
];
$id = savePerson($personRequest);
$personFromDb = getPersonById($id);
$this->assertEquals($personRequest['name'], $personFromDb['name']);
$this->assertEquals($personRequest['age'], $personFromDb['age']);
Or
$id = savePerson([
'name'=>'John',
'age'=>21,
]);
$personFromDb = getPersonById($id);
$this->assertEquals('John', $personFromDb['name']);
$this->assertEquals(21, $personFromDb['age']);
I think the 1st option is better. Your input data may change in the future and, if you go with the 2nd option, you will have to change the assertion data every time.
The 2nd option is useful when your output is going to be the same irrespective of your input data.
I got an answer from Adam Wathan by e-mail (I took his Test Driven Laravel course and noticed he uses the 'specify' option):
"I think it's just personal preference. I like to be able to visually skim and see 'ok this specific string appears here in the output and here in the input', vs. trying to avoid duplication by storing things in variables. Nothing wrong with either approach in my opinion!"
So I can't choose a correct answer.
