How to retrieve data from the db in Progress 4GL? - openedge

How do I fetch a value from the db in Progress 4GL? Initially there is an input from the user; based on that value, the matching record should be selected and displayed. We tried that, but we can't fetch the exact record.
This is our program:
def var b as int.
def var obj as int.
/*set obj.
prompt-for obj.*/
def var sum as int.
def var a as int no-undo.
def var i as int no-undo.
for each po_mstr break by po_nbr.
    select count (*) from po_mstr.
    assign
        a = 1
        b = 583.
    repeat:
        if first-of (po_nbr) then
            set obj.
        prompt-for obj.
        if (a le obj and obj lt b) then
            disp po_nbr po_vend po_ship po_ord_date with 2 col.
    end.
end.
I can retrieve only a single record; if we enter more than one value, it keeps displaying the same first record.

Let's break this down. You're using a lot of statements without really understanding what they're for.
FOR EACH opens a block and reads one record per iteration, in the selected sort order (if none is selected, it uses the primary index).
SELECT performs a database operation of its own, not necessarily on the fly, but it might be.
REPEAT also opens a block, in which you may or may not be looping over records with additional statements. Having said that, here's how I'd write this:
def var a as int no-undo.
def var b as int.
def var sum as int.
/*
def var obj as int.
update obj.
def var i as int no-undo.
you're not using this */
select count (*) into sum from po_mstr.
/* is this what you wanted? To have the count in sum? I don't see a reason, but suit yourself */
assign a = 1 b = 583. /* or use UPDATE a b. if you want those values to be entered by the user */
/* since a and b do not change, I moved the assignment outside the loop */
for each po_mstr where po_nbr >= a and po_nbr < b:
    disp po_nbr po_vend po_ship po_ord_date with 2 col.
end.
I took some liberties. I removed obj because, as far as I could tell, you were trying to copy po_nbr values into it and then use it to check whether the record was in the range you wanted to see. Since I believe po_nbr stands for PO number, which is probably unique, I'd also guess every iteration will have a different value for it, so there is no need for IF FIRST-OF. That also eliminates the need to copy it into a variable: just compare it directly against the range of POs you want to see. Finally, the display should be fine.
I'm going to go ahead and assume your team hasn't been trained. Training would be very important going forward, because a QAD codebase (any ERP software, in fact) grows really fast, and badly written code, even when harmless, may hurt performance and usability, which could become a problem for the operation as a whole. Stack Overflow may be helpful for specific, self-contained questions, but the kinds of problems you will run into if you aren't prepared will likely not be solvable here.
Hope this helps.

Related

Is there a way of using an IF statement inside a FOR EACH in PROGRESS-4GL?

I'm converting some SQL code into Progress 4GL. The specific code I'm writing right now has a lot of possible variable insertions; for example, there are 3 checkboxes that may or may not be selected, and each selection adds an "AND ind_sit_doc IN" clause, etc.
What I'd like to do is something like this:
FOR EACH doc-fiscal USE-INDEX dctfsc-09
WHERE doc-fiscal.dt-docto >= pd-data-1
AND doc-fiscal.dt-docto <= pd-data-2
AND doc-fiscal.cod-observa <> 4
AND doc-fiscal.tipo-nat <> 3
AND doc-fiscal.cd-situacao <> 06
AND doc-fiscal.cd-situacao <> 22
AND (IF pc-ind-sit-doc-1 = 'on' THEN: doc-fiscal.ind-sit-doc=1) NO-LOCK,
EACH natur-oper USE-INDEX natureza
WHERE doc-fiscal.nat-operacao = natur-oper.nat-operacao NO-LOCK:
The IF part would only be evaluated if the variable was set a certain way.
Is it possible?
Yes, you can do that (more or less as nwahmaet showed).
But it is a very, very bad idea in a non-trivial query. You are very likely going to force a table-scan to occur and you may very well send all of the data to the client for selection. That's going to be really painful. You would be much better off moving the IF THEN ELSE outside of the WHERE clause and implementing two distinct FOR EACH statements.
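For example, a minimal sketch of the two-FOR-EACH approach (using a hypothetical someCondition flag and the sports-demo customer table, same as the query version that follows):
if someCondition = true then do:
    for each customer no-lock where state = "ma":
        display custNum state.
    end.
end.
else do:
    for each customer no-lock where state = "nh":
        display custNum state.
    end.
end.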
If your concern with that is that you would be duplicating the body of the FOR EACH then you could use queries. Something like this:
define query q for customer.

if someCondition = true then
    open query q for each customer no-lock where state = "ma".
else
    open query q for each customer no-lock where state = "nh".

get first q.

do while available customer:
    display custNum state.
    get next q.
end.
This is going to be much more efficient for anything other than a tiny little table.
You can also go fully dynamic and just build the needed WHERE clause as a string, but that involves using handles and is more complicated. If that sounds attractive, look up QUERY-PREPARE in the documentation.
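A minimal sketch of that dynamic approach, sticking with the same customer example (someCondition is again a placeholder):
define variable hq     as handle    no-undo.
define variable cWhere as character no-undo.

cWhere = (if someCondition = true then "state = 'ma'" else "state = 'nh'").

create query hq.
hq:set-buffers(buffer customer:handle).
hq:query-prepare("for each customer no-lock where " + cWhere).
hq:query-open().

do while hq:get-next():
    display customer.custNum customer.state.
end.

hq:query-close().
delete object hq.
Because the static customer buffer is attached to the dynamic query, the records it fetches can be displayed with ordinary static references.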
You can add an IF statement in a FOR EACH. You must have the complete IF ... THEN ... ELSE though.
For example:
FOR EACH customer WHERE (IF discount > 50 THEN state = 'ma' ELSE state = ''):
    DISPL name state discount.
END.
That said, that condition will not be used for index selection and will only be applied on the client (if you're using networked db connections, this is bad).

Skip assigning if duplicates are found

I have a procedure that assigns values and sends them back. I need to implement a change so that it skips the assigning process whenever it finds a duplicate iban code. It would go in this FOR EACH, some kind of IF or something else. Basically, when it finds an iban code that was already used and assigned, it should not assign it a second or third time. I am new to OpenEdge Progress, so it is still hard for me to understand the syntax correctly and write the code by myself. So if anyone could explain how I should implement this, or give any advice or tips, I would be very thankful.
FOR EACH viewpoint WHERE viewpoint.cif = cif.cif AND NOT viewpoint.close NO-LOCK:
    DEFINE VARIABLE cIban AS CHARACTER NO-UNDO.
    FIND FIRST paaa WHERE paaa.cif EQ cif.cif AND paaa.paaa = viewpoint.aaa AND NOT paaa.close NO-LOCK NO-ERROR.
    cIban = viewpoint.aaa.
    IF AVAILABLE paaa THEN DO:
        cIban = paaa.vaaa.
        CREATE tt_account_rights.
        ASSIGN
            tt_account_rights.iban = cIban.
    END.
You have not shown the definition of tt_account_rights but assuming that "iban" is a uniquely indexed field in tt_account_rights you probably want something like:
DEFINE VARIABLE cIban AS CHARACTER NO-UNDO.

FOR EACH viewpoint WHERE viewpoint.cif = cif.cif AND NOT viewpoint.close NO-LOCK:
    FIND FIRST paaa WHERE paaa.cif EQ cif.cif AND paaa.paaa = viewpoint.aaa AND NOT paaa.close NO-LOCK NO-ERROR.
    cIban = viewpoint.aaa.
    IF AVAILABLE paaa THEN DO:
        cIban = paaa.vaaa.
        find tt_account_rights where tt_account_rights.iban = cIban no-error.
        if not available tt_account_rights then
        do:
            CREATE tt_account_rights.
            ASSIGN
                tt_account_rights.iban = cIban.
        end.
    END.
END. /* for each viewpoint */
Some bonus perspective:
1) Try to express elements of the WHERE clause as equality matches whenever possible. This is the most significant contributor to query efficiency. So instead of saying "NOT viewpoint.close", code it as "viewpoint.close = NO".
2) Do NOT automatically throw FIRST after every FIND. You may have been exposed to code where that is the "standard". It is nonetheless bad coding. If the FIND is unique, FIRST adds no value (it does NOT improve performance in that case). If the FIND is not unique and you do as you have done above and assign a value from that record, you are, effectively, making that FIRST record special. That is a violation of 3rd normal form (there is now a fact about the record that is not related to the key, the whole key, and nothing but the key). What if the 2nd record has a different iban? What if different WHERE clauses return different "1st" records?
There are cases where FIRST is appropriate. The point is that it is not ALWAYS correct and it should not be added to every FIND statement without any thought about why you are putting it there and what the impact of that keyword really is.
3) It is clearer to put the NO-LOCK (or EXCLUSIVE-LOCK or SHARE-LOCK) immediately after the table name rather than towards the end of the statement. The syntax works either way but from a readability perspective it is better to have the lock phrase right by the table.
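Applying points 1 and 3 to the FOR EACH above, the loop header would become something like this sketch:
FOR EACH viewpoint NO-LOCK
    WHERE viewpoint.cif = cif.cif
      AND viewpoint.close = NO:
    /* ... rest of the loop unchanged ... */
END.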

sqlite optimal update after an alter table add

I have a process where at some point I have to add a new column of type INTEGER to a table, then populate this new column with an UPDATE. I do this in C.
I can boil my code down to:
CREATE table t (a integer, b integer)
populate t
CREATE INDEX t_ndx on t (a)
ALTER TABLE t add c integer
The C pseudo-code to update column 'c' looks like this:
sqlite3_stmt *u;
sqlite3_prepare_v2(db, "update t set c=? where a=?", -1, &u, 0);
for (i = 0; i < n; i++)
{   c = c_a[i];
    a = a_a[i];
    sqlite3_bind_int64(u, 1, c);
    sqlite3_bind_int64(u, 2, a);  /* second parameter: index 2 */
    sqlite3_step(u);
    sqlite3_reset(u);             /* reset so the statement can run again */
}
The order of the a values is the same for this UPDATE as the one given when t was populated.
I'd like to know whether the sqlite3 engine detects the 'sequential' access and speeds up the "where a=?" lookup (i.e., keeps some kind of cache of the previous cursor position)?
I'd also like to know whether there are 'hidden' features like binding arrays (at least when dealing with INTEGERs) to avoid constructing such a loop, all those bindings, and the bytecode for all those updates; something along the lines of:
sqlite3_stmt *u;
sqlite3_prepare_v2(db, "update t set c=? where a=?", -1, &u, 0);
sqlite3_bind_int64_array(u, 1, c_a, n);  /* hypothetical */
sqlite3_bind_int64_array(u, 2, a_a, n);  /* hypothetical */
sqlite3_step_array(u, n);                /* hypothetical */
Thanx in advance
Cheers
Phi
Your code already is pretty much optimal. Searching for the row by a needs a single index lookup and a single table row lookup; both will be fast because the needed pages are likely to be already cached.
You could speed up the lookup on a by making this column the INTEGER PRIMARY KEY, but this makes sense only if a actually is the primary key.
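As a sketch (collapsing the ALTER TABLE step, and only valid if a is genuinely unique):
CREATE TABLE t (a INTEGER PRIMARY KEY, b INTEGER, c INTEGER);
With that declaration, a is an alias for the rowid, so "where a=?" goes straight to the table row with no separate index lookup.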
In theory, it would be possible to update multiple rows at once:
UPDATE t
SET c = CASE a
WHEN :a1 THEN :c1
WHEN :a2 THEN :c2
WHEN :a3 THEN :c3
END
WHERE a IN (:a1, :a2, :a3);
But for many a values, this is likely to be implemented as a scan over the table, so it would make sense only if you could fit all values into the query, which is not possible for a large table.

SQLite - Get a specific row index for a Sorted/Filtered Query

I'm creating a caching system to take data from an SQLite database table using a sorted/filtered query and display it. The tables I'm pulling from can be potentially very large and, of course, I need to minimize impact on memory by only retaining a maximum number of rows in memory at any given time. This is easily done by using LIMIT and OFFSET to load only the records I need and update the cache as needed. Implementing this is trivial. The problem I'm having is determining where the insertion index is for a new record inserted into a particular query so I can update my UI appropriately. Is there an easy way to do this? So far the ideas I've had are:
Dump the entire cache, re-count the Query results (there's no guarantee the new row will be included), refresh the cache and refresh the entire UI. I hope it's obvious why that's not really desirable.
Use my own algorithm to determine whether the new row is included in the current query, whether it is included in the current cached results, and at what index it should be inserted if it's within the current cached scope. The biggest downfall of this approach is its complexity and the risk that my own sorting/filtering algorithm won't match SQLite's.
Of course, what I want is to be able to ask SQLite: Given 'Query A' what is the index of 'Row B', without loading the entire query results. However, so far I haven't been able to find a way to do this.
I don't think it matters, but this is all occurring on an iOS device, using Objective-C.
More Info
The query and subsequent cache are based on user input. Essentially, the user can re-sort and filter (or search) to alter the results they're seeing. My reluctance to simply recreate the cache on insertions (and edits, actually) comes from wanting to provide a 'smoother' UI experience.
I should point out that I'm leaning toward option "2" at the moment. I played around with creating my own caching/indexing system by loading all the records in a table and performing the sort/filter in memory using my own algorithms. So much of the code needed to determine whether and/or where a particular record is in the cache is already there, so I'm slightly predisposed to use it. The danger lies in having a cache that doesn't match the underlying query. If I include a record in the cache that the query wouldn't return, I'll be in trouble and probably crash.
You don't need record numbers.
Save the values of the ordered field in the first and last records of the LIMITed query result.
Then you can use these to check whether the new record falls into this range.
In other words, assuming that you order by the Name field, and that the original query was this:
SELECT Name, ...
FROM mytab
WHERE some_conditions
ORDER BY Name
LIMIT x OFFSET y
then try to get at the new record with a similar query:
SELECT 1
FROM mytab
WHERE some_conditions
AND PrimaryKey = LastInsertedValue
AND Name BETWEEN CachedMin AND CachedMax
Similarly, to find out before (or after) which record the new record was inserted, start directly after the inserted record and use a limit of one, like this:
SELECT Name
FROM mytab
WHERE some_conditions
AND Name > MyInsertedName
AND Name BETWEEN CachedMin AND CachedMax
ORDER BY Name
LIMIT 1
This doesn't give you a number; you still have to check where the returned Name is in your cache.
Typically you'd expect a cache to be invalidated if there were underlying data changes. I think dropping it and starting over will be your simplest, maintainable solution. I would recommend it unless you have a very good reason.
You could write another query that just returned the row count (example below) to see if your cache should be invalidated. That would save recreating the cache when it did not change.
SELECT name,address FROM people WHERE area_code=970;
SELECT COUNT(rowid) FROM people WHERE area_code=970;
The information you'd need from sqlite to know when your cache was invalidated would require some rather intimate knowledge of how the query and/or index was working. I would say that is fairly high coupling.
Otherwise, you'd want to know where the row was inserted with regard to the sorting. You would probably key each page on the sorted field and delete anything greater than the inserted/deleted value. Any time you change the sorting, you'd drop everything.
Something like the code below would be a start if you were using C++. I realize you aren't doing C++, but hopefully it is evident what I'm trying to do.
#include <set>
#include <string>
#include <vector>

struct Person {
    std::string name;
    std::string addr;
};

struct Page {
    std::string key;
    std::vector<Person> persons;
    struct Less {
        bool operator()(const Page &lhs, const Page &rhs) const {
            return lhs.key.compare(rhs.key) < 0;
        }
    };
};

typedef std::set<Page, Page::Less> pages_t;
pages_t pages;

bool sql_insert(const Person &person);  // the actual database insert

void insert(const Person &person) {
    if (sql_insert(person)) {
        // locate the first cached page whose key is >= the new name...
        Page probe;
        probe.key = person.name;
        pages_t::iterator drop_cache_start = pages.lower_bound(probe);
        // ...and drop that page and everything after it
        pages.erase(drop_cache_start, pages.end());
    }
}
You'd have to do some wrangling to get different datatypes of key to work nicely, but it's possible.
Theoretically you could just leave the pages out of it and only use the objects themselves. The database would no longer "own" the data, though. If you only fill pages from the database, you'll have fewer data-consistency worries.
This may be a bit off topic, but you aren't re-implementing views, are you? A view doesn't cache per se, but it isn't clear whether caching is a requirement of your project.
The solution I came up with is not exactly simple, but it's currently working well. I realized that the index of a record in a query's results is also the count of all the records that precede it. What I needed to do was 'convert' all the ORDER BY clauses in the query into a series of WHERE clauses that would return only the preceding records, and then take a count of those records. It's trickier than it sounds (or maybe not... it sounds tricky). The biggest issue I had was making sure the query was, in fact, sorted in a way I could predict. This meant I needed to have an order column in the order parameters that was based on a column with unique values. So, whenever a user sorts on a column, I append another order parameter on a unique column (I used a "Modified Date Stamp") to break ties.
Creating the WHERE portion of the statement requires more than just tacking on a bunch of ANDs. It's easier to demonstrate. Say you have 3 Order columns: "LastName" ASC, "FirstName" DESC, and "Modified Stamp" ASC (the tie breaker). The WHERE statement would have to look something like this ('?' = record value):
WHERE
"LastName" < ? OR
("LastName" = ? AND "FirstName" > ?) OR
("LastName" = ? AND "FirstName" = ? AND "Modified Stamp" < ?)
Each set of WHERE parameters grouped together by parentheses is a tie breaker. If, in fact, the record values of "LastName" are equal, we must then look at "FirstName", and finally at "Modified Stamp". Obviously, this clause can get really long if you're sorting by many order parameters.
There's still one problem with the above solution. Comparisons against NULL values never evaluate to true, and yet when sorting, SQLite puts NULL values first. Therefore, to deal with NULL values appropriately, you've got to add another layer of complication. First, every equality operation, =, must be replaced by IS. Second, every < operation must be wrapped with an OR ... IS NULL so that NULL values are included on the < side. This turns the above clause into:
WHERE
("LastName" < ? OR "LastName" IS NULL) OR
("LastName" IS ? AND "FirstName" > ?) OR
("LastName" IS ? AND "FirstName" IS ? AND ("Modified Stamp" < ? OR "Modified Stamp" IS NULL))
I then take a count of the rowid using the above WHERE clause.
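Putting the pieces together, a sketch of the final counting query (reusing the mytab and some_conditions placeholders from the earlier answer):
SELECT COUNT(rowid)
FROM mytab
WHERE some_conditions
  AND (("LastName" < ? OR "LastName" IS NULL)
    OR ("LastName" IS ? AND "FirstName" > ?)
    OR ("LastName" IS ? AND "FirstName" IS ?
        AND ("Modified Stamp" < ? OR "Modified Stamp" IS NULL)));
The count is the zero-based index of the record within the sorted, filtered results.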
It turned out easy enough for me to do mostly because I had already constructed a set of objects to represent various aspects of my SQL Statement which could be assembled to generate the statement. I can't even imagine trying to manipulate a SQL statement like this any other way.
So far, I've tested using this on several iOS devices with up to 10,000 records in a table and I've had no noticeable performance issues. Of course, it's designed for single record edits/insertions so I don't really need it to be super fast/efficient.

How can I easily detect the trigger depth in sqlite3?

I have two SQLite3 tables, A and B. When column A.X is updated, I want to modify B.Y, and when B.Y is updated, I want to modify A.X.
I can use two triggers:
CREATE TRIGGER AtoB AFTER UPDATE OF X ON A BEGIN UPDATE B SET Y = ...
and
CREATE TRIGGER BtoA AFTER UPDATE OF Y ON B BEGIN UPDATE A SET X = ...
but it seems that both triggers are called once, no matter which table I modify, i.e. one always invokes the other.
I only want one of them to execute, since the updates are lossy in the direction of A to B. I don't want the loss in the reverse direction B to A, but if both triggers fire, then it makes it lossy in both directions.
A simple solution would be to implement three UDFs "increment_trigger_depth", "decrement_trigger_depth", and "get_trigger_depth", and then use "WHEN trigger_depth == 1" in the update statements. But, is there an easier way to determine trigger depth in SQLite3?
Use a new table to hold the trigger depth.
CREATE TABLE trigger_depth (depth);
INSERT INTO trigger_depth VALUES (0);
Then for increment_trigger_depth use
UPDATE trigger_depth SET depth = depth + 1;
etc.
Use
... WHEN (SELECT depth FROM trigger_depth) <= 0 BEGIN ...
to guard your trigger actions.
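Put together, a sketch of the whole scheme; the id column linking A and B and the direct X-to-Y copy are hypothetical stand-ins for the real schema:
CREATE TABLE trigger_depth (depth INTEGER);
INSERT INTO trigger_depth VALUES (0);

CREATE TRIGGER AtoB AFTER UPDATE OF X ON A
WHEN (SELECT depth FROM trigger_depth) <= 0
BEGIN
  UPDATE trigger_depth SET depth = depth + 1;  -- entering trigger code
  UPDATE B SET Y = NEW.X WHERE id = NEW.id;    -- BtoA's WHEN now fails
  UPDATE trigger_depth SET depth = depth - 1;  -- leaving trigger code
END;

CREATE TRIGGER BtoA AFTER UPDATE OF Y ON B
WHEN (SELECT depth FROM trigger_depth) <= 0
BEGIN
  UPDATE trigger_depth SET depth = depth + 1;
  UPDATE A SET X = NEW.Y WHERE id = NEW.id;
  UPDATE trigger_depth SET depth = depth - 1;
END;
Whichever trigger fires first raises the depth to 1 before touching the other table, so the other trigger's WHEN clause sees a depth above zero and its body is skipped.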
