In a unit test, I need to verify that the program skips locked records when processing a table.
I have been unable to set up a locked record because the test can't lock a record against itself, which makes a lot of sense.
Here is a sample of what I'm trying to achieve.
DEF VAR v_isCommitted AS LOGICAL NO-UNDO.
DEF VAR hl AS HANDLE NO-UNDO.
DEF BUFFER bufl FOR tablename.
hl = BUFFER bufl:HANDLE.
LOCKED_RECORDS:
DO TRANSACTION ON ERROR UNDO, LEAVE LOCKED_RECORDS:
    /* SETUP: create a record that is not committed yet */
    CREATE tablename.
    ASSIGN tablename.fields = fieldsvalue.

    /* ACT: code I'm trying to test */
    /* ...some code... */
    v_isCommitted = hl:FIND-BY-ROWID(ROWID(tablename), EXCLUSIVE-LOCK, NO-WAIT)
                    AND AVAILABLE(bufl)
                    AND NOT LOCKED(bufl).
    /* ...some code touching the record if it is committed... */

    /* ASSERT: program left the new record tablename AS IS */
END.
The problem is that the record is available and not locked to the test because the test itself created it.
Is there a way I could have the test lock a record from itself, so that the act part actually skips the record as if it had been created by someone else?
Progress: 11.7.1
A session cannot lock itself, so you will need to start a second session. For example:
/* code to set things up ... */
/* spawn a sub process to try to lock the record */
os-command silent value( substitute( '_progres -b -db &1 -p lockit.p -param "&2" && > logfile 2>&&1', dbname, "key" )).
In lockit.p, use session:parameter to get the key for the record to test (or hard-code it, I suppose).
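For illustration, a keyed lockit.p might look roughly like this (an untested sketch; the customer table, the CustNum field and the integer key are assumptions for the example — adjust to your own table and key):
/* lockit.p -- rough sketch: read the key passed with -param and try to lock that record */
define variable cKey as character no-undo.
cKey = session:parameter.   /* whatever was passed with -param */
find customer
     where customer.CustNum = integer( cKey )
     exclusive-lock no-wait no-error.
if locked( customer ) then
    put unformatted "locked" skip.
else
    put unformatted "not locked" skip.
quit.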
Or, as mentioned in the comments below:
/* locktest.p
*/
define variable lockStatus as character no-undo format "x(20)".
find first customer exclusive-lock.
input through value( "_progres /data/sports120/sports120 -b -p ./lockit.p" ).
repeat:
import unformatted lockStatus.
end.
display lockStatus.
and:
/* lockit.p
*/
find first customer exclusive-lock no-wait no-error.
if locked( customer ) then
    put "locked".
else
    put "not locked".
quit.
For the sake of argument, assume that I have a very simple database:
CREATE TABLE test(idx integer primary key autoincrement, count integer);
This has one row. The database is accessed by a CGI script, which is called by Apache. The script reads the current value of count, increments it, and writes it back. I can run the script as
curl http://localhost/cgi-bin/test
and it tells me what the new value of count is. The script is actually C++; the basic stripped-down code looks like this:
// 'callback' sets 'count' to the current value of count
sqlite3_exec(con, "select count from test where idx=1", callback, &count, 0);
++count;
command << "update test set count=" << count << " where idx=1";
sqlite3_exec(con, command.str().c_str(), 0, 0, 0);
If I write a bash script that runs 20 instances of curl in the background, then I get lots of messages that the database is locked, and the counter is only incremented to 2 or 3, instead of 20. Ok, that's not very surprising, but how do I fix this?
After some experimenting, I've put both sqlite3_exec statements inside an exclusive transaction:
while(true) {
    rc = sqlite3_exec(con, "begin exclusive transaction", 0, 0, 0);
    if(rc != SQLITE_BUSY)
        break;
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
if(rc != SQLITE_OK)
    error();
...the select and update code shown above, followed by:
sqlite3_exec(con, "end transaction", 0, 0, 0);
This appears to be rock-solid, but I can't make much sense of the relevant bits of the SQLite docs, and I'm not convinced. Is there anything else I need to think about? Note that I don't have any rollbacks, or any other sqlite3 calls, apart from sqlite3_open_v2, sqlite3_errmsg, and sqlite3_close, no WAL, and I only test for SQLITE_BUSY. For testing, I run the bash script below, with $1 set to 1000 (i.e. 1000 curl instances all running the CGI code). This completes in 10 or 11 seconds, and every time I run it, it shows the final value of count as 1000, so it appears to be working.
Test script:
#!/bin/bash
sqlite3 /var/www/cgi-bin/test.db <<EOF
update test set count=0 where idx=1;
EOF
for ((c=0; c<$1; c++ ))
do
curl http://localhost/cgi-bin/test > /dev/null 2>&1 &
done
wait
sqlite3 /var/www/cgi-bin/test.db <<EOF
select count from test where idx=1;
EOF
I have a function/stored proc in MariaDB:
CREATE DEFINER=`root`@`localhost` PROCEDURE `test1`(var1 varchar(100))
BEGIN
select * from ttype where kode=var1;
END
I need to get a cursor back from the stored proc. How do I get a cursor in a VFP application when the database is MariaDB/MySQL and the query is a stored proc call?
I tried this in my Visual FoxPro:
Sqlexec(kon,"call test1 ('ABC')","test") --> not running
But when I use a plain select like this:
sqlexec(kon,"select * from ttype where kode='ABC'","test") --it's running well..
"Can you show me how to use aerror() in my case?"
You would use the AError() function whenever any ODBC remote action fails, i.e. any of VFP's SQL*() functions, like SqlStringConnect() for example, or your proposed
sqlexec(kon,"select * from ttype where kode='ABC'","test") --it's running well
&& Actually you cannot know whether "it's running well" unless you are evaluating its return value like this:
Local lnResult, laSqlErrors[1], lcErrorMessage
lnResult = SqlExec(kon,"select * from ttype where kode='ABC'","test")
If m.lnResult = -1 && as documented in the F1 Help
    AERROR(laSqlErrors)
    lcErrorMessage = ;
        TRANSFORM(laSqlErrors[1]) + ", " + ;
        TRANSFORM(laSqlErrors[2])
    && now write a log and/or inform the user
ENDIF
&& to be continued
Just for knowledge, I want to see how WRITE triggers execute for the query below. Is it possible to see them?
FOR EACH Customer EXCLUSIVE-LOCK WHERE NAME = "Go Fishing Ltd":
ASSIGN Customer.Balance = 600.
END.
Add:
-clientlog path/to/log.log -logginglevel 4 -logentrytypes 4GLTrace
to your startup command.
This will create a log of all of the calls that your code makes.
For more information: https://knowledgebase.progress.com/articles/Knowledge/P9893
You can also use the LOG-MANAGER system handle within your code to dynamically control the logging at runtime:
https://docs.progress.com/bundle/openedge-abl-troubleshoot-applications/page/LOG-MANAGER-system-handle-attributes-and-methods.html
but for simple purposes like this it is easier to just add the startup parameters.
Watch the log-manager show the write trigger being executed on the sports2020 database:
def var clog as char no-undo.
def var lclog as longchar no-undo.

assign
    clog                        = guid + '.log'
    log-manager:logfile-name    = clog
    log-manager:log-entry-types = '4gltrace:5,4glmessages'
    .

for each Customer exclusive-lock where name = 'Go Fishing Ltd':
    Customer.Balance = Customer.Balance + 1. // write trigger only fires when record changes
end.

log-manager:close-log().

copy-lob from file clog to lclog.

message string( lclog ).
https://abldojo.services.progress.com/?shareId=62978a833fb02369b25479f0
Relevant snippet from the output:
4GLTRACE Return from Main Block "Customer Customer" [sports2020trgs/wrcust.p]
I am new to Progress 4GL. In my program, I tried to create a form using Progress 4GL. The form has two fields: one is the DB name and the other is the DB description. By default the form should show one DB name and description, and if the user enters a blank (or clears) the DB name field, an alert box should give a message. I have developed the form, but when I run it the program keeps running continuously and the window goes into a "not responding" state. I never get a chance to type into, or blank out, the DB name field. Let me share my code; please help me find out what the issue is and why it keeps running.
define variable cArcDB as character no-undo format "x(20)" INIT "qadb".
define variable cArcDBDesc as character no-undo format "x(25)" INIT "archive database".
define variable cTmp as character NO-UNDO.
form
    cArcDB colon 25
    cArcDBDesc colon 25
    with frame frArchiveDB width 80 side-labels.

MAIN-LOOP:
REPEAT:

    display
        cArcDB
        cArcDBDesc
        with frame frArchiveDB.

    set
        cArcDB
        with frame frArchiveDB editing:

        if frame-field = "cArcDB" then do:
            /* Find next/prev record from ttAppDB */
            cTmp = cArcDB:input-value in frame frArchiveDB.
            display
                cArcDB
                cArcDBDesc
                with frame frArchiveDB.
        end.
    end. /* editing */

    cArcDB = trim(cArcDB).

    if cArcDB = "" then do:
        /* Blank not allowed */
        /* {us/bbi/pxmsg.i &MSGNUM=40 &ERRORLEVEL=3} */
        next-prompt cArcDB with frame frArchiveDB.
        undo MAIN-LOOP, retry MAIN-LOOP.
    end.

END.
Please have a look at the online reference of the "EDITING phrase". To me it looks like you're missing the READKEY at the beginning of the EDITING block, and you also need to "APPLY LASTKEY" at some point. See the sample there:
/* Update Customer fields, monitoring each keystroke during the UPDATE */
UPDATE Customer.Name Customer.Address Customer.City Customer.State SKIP
    Customer.SalesRep HELP "Use the space bar to select a SalesRep"
    WITH 2 COLUMNS EDITING:  /* Read a keystroke */
    READKEY.
    /* If the cursor is in any field except SalesRep, execute the last key
       pressed and go on to the next iteration of this EDITING phrase to check
       the next key */
    IF FRAME-FIELD <> "SalesRep" THEN DO:
        APPLY LASTKEY.
        IF GO-PENDING THEN LEAVE.
        ELSE NEXT.
    END.
    /* When in the SalesRep field, if the last key pressed was the space bar
       then cycle through the sales reps */
    IF LASTKEY = KEYCODE(" ") THEN DO:
        FIND NEXT SalesRep NO-ERROR.
        IF NOT AVAILABLE SalesRep THEN FIND FIRST SalesRep.
        DISPLAY SalesRep.SalesRep @ Customer.SalesRep.
        NEXT.
    END.
    /* If the user presses any one of a set of keys while in the SalesRep field,
       immediately execute that key */
    IF LOOKUP(KEYFUNCTION(LASTKEY),
              "TAB,BACK-TAB,GO,RETURN,END-ERROR") > 0 THEN APPLY LASTKEY.
    ELSE BELL.
END.
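Applied to the code in the question, the SET ... EDITING block might then look roughly like this (an untested sketch based on the documentation sample above; the lookup of the matching description is only hinted at in a comment):
set
    cArcDB
    with frame frArchiveDB editing:
    readkey.                          /* wait for and read the next keystroke */
    if frame-field = "cArcDB" then
        cTmp = cArcDB:input-value in frame frArchiveDB.  /* react to the input, e.g. look up the description */
    apply lastkey.                    /* execute the key that was pressed */
    if go-pending then leave.         /* GO ends the SET statement */
    else next.
end. /* editing */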
I am relatively new to Tcl dictionaries and don't see good documentation on how to initialize an empty dictionary, loop over a log, and save data into it. Finally, I want to print a table that looks like this:
- Table:
HEAD1
Step 1 Start Time End Time
Step 2 Start Time End Time
- Log:
HEAD1
Step1
Start Time : 10am
.
.
.
End Time: 11am
Step2
Start Time : 11am
.
.
End time : 12pm
HEAD2
Step3
Start Time : 12pm
.
.
.
End Time: 1pm
Step4
Start Time : 1pm
.
.
End time : 2pm
You really don't have to initialise an empty dictionary in Tcl - you can simply start using it and it will get populated as you go along. As mentioned already, the dict man page is the best way to start.
Additionally, I would suggest you check the regexp man page as you can use it nicely to parse your text file.
Not having anything better to do atm, I cobbled together a short sample code that should get you started. Use it as a starting tip, adjust it to your particular log layout and add some defensive measures to prevent errors from unexpected input.
# The following line is not strictly necessary as Tcl does not
# require you to first create an empty dictionary.
# You can simply start using 'dict set' commands below and the first
# one will create a dictionary for you.
# However, declaring something as a dict does add to code clarity.
set tableDict [dict create]
# Depending on your log sanity, you may want to declare some defaults
# so as to avoid errors in case the log file misses one of the expected
# lines (e.g. 'HEADx' is missing).
set headNumber {UNKNOWN}
set stepNumber {UNKNOWN}
set start {UNKNOWN}
set stop {UNKNOWN}
# Now read the file line by line and extract the interesting info.
# If the file indeed contains all of the required lines and exactly
# formatted as in your example, this should work.
# If there are discrepancies, adjust regex accordingly.
set log [open log.txt]
while {[gets $log line] != -1} {
    if {[regexp {HEAD([0-9]+)} $line all value]} {
        set headNumber $value
    }
    if {[regexp {Step([0-9]+)} $line all value]} {
        set stepNumber $value
    }
    if {[regexp {Start Time : ([0-9]+(?:am|pm))} $line all value]} {
        set start $value
    }
    # NOTE: I am assuming that your example was typed by hand and all
    # inconsistencies stem from there. Otherwise, you have to adjust
    # the regular expressions as 'End Time' is written with varying
    # capitalization and with inconsistent white spaces around ':'
    if {[regexp {End Time : ([0-9]+(?:am|pm))} $line all value]} {
        set stop $value
        # NOTE: This short example relies heavily on the log file
        # being formatted exactly as described. Therefore, as soon
        # as we find the 'End Time' line, we assume that we already have
        # everything necessary for the next dictionary entry
        dict set tableDict HEAD$headNumber Step$stepNumber StartTime $start
        dict set tableDict HEAD$headNumber Step$stepNumber EndTime $stop
    }
}
close $log
# You can now get your data from the dictionary and output your table
foreach head [dict keys $tableDict] {
    puts $head
    foreach step [dict keys [dict get $tableDict $head]] {
        set start [dict get $tableDict $head $step StartTime]
        set stop [dict get $tableDict $head $step EndTime]
        puts "$step $start $stop"
    }
}