Complex logic operation is not providing the right results in FileMaker

I am trying to filter records in FileMaker with a script that loops through all records and omits the ones I don't want.
I am using the following script (I am using the Dutch version, so I am not sure about the exact English names of the script steps. My apologies):
Go to record/page [First]
Loop
  If [ db1::field1 = "valueA" or
       db1::field2 = "ValueB" or
       (db1::field3 ≠ "ValueC" and db1::field4 ≠ "ValueC") ]
    Omit record
  End If
  Go to record [Next; Stop after last: On]
End Loop
When I run the script, I do not get the right result. For example, I still get records where:
- field1 is "ValueA"
- field3 is not "ValueD"
Also, when I run the script multiple times, fewer records remain each time, even though the logic does not change! Does anyone know what is going wrong here?

I suspect the problem is the combination of Omit Record and your Go to Record [Next] step. Possibly you are skipping an extra record: omitting a record already pulls the next record into the current position, so following it with Go to Record [Next] skips one record each time, which would also explain why repeated runs keep shrinking the found set.
If there is more to the script than what you posted, include the rest of the loop structure so that we can see the whole process.
P.S. You don't strictly need the parentheses in your If statement; and already binds more tightly than or, so the grouping is the same without them.
EDIT: Looking at your logic a bit more closely, I don't think it will do what you want, assuming I understand your requirement.
Try this:
(db1::field1 = "valueA" or db1::field2 = "ValueB")
and
db1::field3 ≠ "ValueC" and db1::field3 ≠ "ValueD"
EDIT 2:
Go to record/page [First]
Loop
  If [ (db1::field1 = "valueA" or db1::field2 = "ValueB") and
       db1::field3 ≠ "ValueC" and db1::field4 ≠ "ValueC" ]
    Omit record
  Else
    Go to record [Next; Stop after last: On]
  End If
End Loop

Related

Vim - mapping a key to a function which does something else plus the original function of that key

The goal is to have the j key perform a possibly complex task and then move to the next line (the latter action behaving exactly like the original j motion).
My initial attempt was to map the j key this way:
nn j :<C-U>execute "call MyFun(" . v:count . ")"<CR>
(as you can see, I intend to make j's behavior depend on the count that is prepended to it)
and to define the function MyFun appropriately:
fu! MyFun(count)
" do more stuff based on a:count
normal j
endf
which is faulty: hitting j now results in the error E169: Command too recursive. If my deduction is correct, the non-recursiveness guaranteed by nnoremap applies only to the "literal" content of the {rhs} of the mapping, not to whatever is "inside" it; in other words, the function body uses the meaning of j at the moment it is called, causing the infinite recursion.
Therefore I tried the following:
nn , j
nn j :<C-U>execute "call MyFun(" . v:count . ")"<CR>
fu! MyFun(count)
" do more stuff based on a:count
normal ,
endf
However, this means that I waste the , key. I know I can avoid wasting it by doing
nn <Plug>Nobody j
but then I wouldn't know how to use <Plug>Nobody (my understanding is that it can only be used in the {rhs} of another, non-nore mapping).
My initial attempt was to map j key this way
Using execute here is redundant. It's enough to do:
nnoremap j :<C-U>call MyFun(v:count)<CR>
now results in the error E169: Command too recursive
That's because of normal. To suppress remapping you must use the "bang" form: normal! j. Please refer
to the documentation for :normal, whose second paragraph describes exactly your use case:
If the [!] is given, mappings will not be used. Without it, when this
command is called from a non-remappable mapping (:noremap), the
argument can be mapped anyway.
Besides, note that j normally supports a count, so 2j is expected to move two lines down. So you should probably do execute 'normal!' a:count . 'j' instead.
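Putting those pieces together, a sketch of the whole mapping could look like this; the max() guard is my own addition, because v:count is 0 when no count is typed and normal! 0j would first jump to column 0:

nnoremap j :<C-U>call MyFun(v:count)<CR>

function! MyFun(count) abort
  " do more stuff based on a:count ...
  " use the built-in j (no remapping) and forward the count
  execute 'normal!' max([a:count, 1]) . 'j'
endfunction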

PostScript forall on dictionaries

According to the PLRM, it doesn't matter in which order forall processes the entries of a dict:
(p. 597) forall pushes a key and a value on the operand stack and executes proc for each key-value pair in the dictionary
...
(p. 597) The order in which forall enumerates the entries in the dictionary is arbitrary. New entries put in the dictionary during the execution of proc may or may not be included in the enumeration. Existing entries removed from the dictionary by proc will not be encountered later in the enumeration.
Now I was executing some code:
/d 5 dict def
d /abc 123 put
d { } forall
My output (operand stack) is:
--------top-
/abc
123
-----bottom-
The output of ghostscript and PLRM (operand stack) is:
--------top-
123
/abc
-----bottom-
Does it really not matter in what order you process the key-value pairs of the dict?
On the stack, do you first need to push the value and then the key, or do you need to push the key first? (The PLRM only talks about "a key and a value", but doesn't say anything about the order.)
Thanks in advance
It would probably help if you quoted the page number when you quote sections from the PLRM; it's hard to see where you are getting this from.
When executing forall, the order in which it enumerates the dictionary pairs is arbitrary; you have no influence over it. However, forall always pushes the key and then the value. Even though this is only implied in the text you (didn't quite) quote, you can see from the example for the forall operator that this is the case.
When you say 'my output', do you mean you are writing your own PostScript interpreter? If so, your output is incorrect: when pushing a key/value pair, the key is pushed first.
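For example, you can convince yourself in Ghostscript by printing each pair from inside the proc; the exch is only there to bring the key back to the top so it prints before the value:

/d 3 dict def
d /abc 123 put
d /xyz (hello) put

% inside the proc the value is on top (the key was pushed first),
% so swap before printing: key first, then value
d { exch == == } forall

This prints each key immediately followed by its value, one item per line.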

Tail recursion in Erlang

I'm learning Erlang from the very basics and have a problem with a tail-recursive function. I want my function to receive a list and return a new list where each element = element + 1. For example, if I send [1,2,3,4,5] as an argument, it must return [2,3,4,5,6]. The problem is that when I send exactly that argument, it returns [[[[[[]|2]|3]|4]|5]|6].
My code is this:
-module(test).
-export([test/0]).

test() ->
    List = [1,2,3,4,5],
    sum_list_2(List).

sum_list_2(List) ->
    sum_list_2(List, []).

sum_list_2([Head|Tail], Result) ->
    sum_list_2(Tail, [Result|Head +1]);
sum_list_2([], Result) ->
    Result.
However, if I change my function to this:
sum_list_2([Head|Tail], Result) ->
    sum_list_2(Tail, [Head +1|Result]);
sum_list_2([], Result) ->
    Result.
It outputs [6,5,4,3,2], which is OK. Why doesn't the function work the other way around ([Result|Head+1] outputting [2,3,4,5,6])?
PS: I know this particular problem is solved with list comprehensions, but I want to do it with recursion.
For this kind of manipulation you should use a list comprehension:
1> L = [1,2,3,4,5,6].
[1,2,3,4,5,6]
2> [X+1 || X <- L].
[2,3,4,5,6,7]
It is the fastest and most idiomatic way to do it.
A remark on your first version: [Result|Head +1] builds an improper list. The construction is always [Head|Tail], where Tail must itself be a list. You could use Result ++ [Head+1], but this would copy the Result list on each recursive call.
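You can see the improper list from the question being built up by reproducing the first two iterations in the shell (Result is [] and then [[]|2], while Head + 1 is 2 and then 3):

1> [[] | 1 + 1].
[[]|2]
2> [[[]|2] | 2 + 1].
[[[]|2]|3]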
You can also look at the code of lists:map/2, which is not tail recursive, but the compiler's optimizations seem to handle this case well:
inc([H|T]) -> [H+1|inc(T)];
inc([]) -> [].
[edit]
The internal representation of a list is a linked list: each element contains a term and a reference to the tail. So prepending an element at the head does not modify the existing list, but adding something at the end requires mutating the last element (the reference to the empty list is replaced by a reference to the new sublist). As variables are not mutable, this means making a modified copy of the last element, which in turn requires mutating the previous element, and so on. As far as I know, the compiler's optimizations do not decide to mutate variables in place (my deduction from the documentation).
The fact that the function produces the result in reverse order is a natural consequence of adding each newly incremented element to the front of the Result list. This isn't uncommon, and the recommended "fix" is simply to lists:reverse/1 the result before returning it.
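For example, keeping the question's function names, the tail-recursive version with a final reverse could look like this:

sum_list_2(List) ->
    sum_list_2(List, []).

%% accumulate the incremented elements in reverse order...
sum_list_2([Head | Tail], Acc) ->
    sum_list_2(Tail, [Head + 1 | Acc]);
%% ...and reverse once at the end to restore the original order
sum_list_2([], Acc) ->
    lists:reverse(Acc).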
While in this case you could simply use the ++ operator instead of the [H|T] "cons" operator to join your results the other way around, giving you the desired output in the correct order:
sum_list_2([Head|Tail], Result) ->
    sum_list_2(Tail, Result ++ [Head + 1]);
doing so isn't recommended, because the ++ operator always copies its (increasingly large) left-hand operand, making the algorithm run in O(n^2) time instead of the [Head + 1 | Tail] version's O(n) time.

When recursing in Prolog, can one access variables from n levels up?

To clarify what I mean, let's take this recursive example:
statement([]).
statement([A|B]):- A, statement(B).
The head A is checked by my rules, and the tail B is recursed on, so its head becomes the A of level 2. When the recursion is at the second level, how can I access the previous A? Am I thinking about this all wrong? If any clarification is needed, please ask and I will provide it. Thanks in advance.
What I am supposed to be testing for (a type checker):
String s; int i; i = s.length(); // OK
or
String s; int i; s = i.length(); // fails
You have to record the previous statements explicitly, so that at each iteration you have access to the previous steps. It is up to you how you record these statements. One solution would be:
statement(L) :- statement(L,[]).
statement([], _).
statement([A|B], L):- check(A), statement(B,[A|L]).
L records the preceding statements (in reverse order).
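For instance, with a hypothetical check/2 that receives both the current statement and everything seen so far, the same idea looks like this:

statement(L) :- statement(L, []).

statement([], _).
statement([A|B], Seen) :-
    check(A, Seen),            % Seen holds all previous heads, most recent first
    statement(B, [A|Seen]).

% for illustration only: print the current item and what came before it
check(A, Seen) :-
    format("checking ~w, previously seen ~w~n", [A, Seen]).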
Sure: use the Prolog database with assert and retract. This demonstrates it:
% Declare the lasthead fact as dynamic, so facts can change
:- dynamic lasthead/1.

% Set a starting value for the first iteration
lasthead(null).

statement([]).
statement([A|B]) :-
    % Show the previous head
    lasthead(LH),
    writeln(['Last head was', LH]),
    % Retract the last head, i.e. remove it from the database
    retract(lasthead(_)),
    % Store the current head in the database
    assertz(lasthead(A)),
    % Recurse around
    statement(B).
?- statement([a,b,c,d,e]).
[Last head was,null]
[Last head was,a]
[Last head was,b]
[Last head was,c]
[Last head was,d]
The example above uses retract to ensure there is only one lasthead(X) fact, but you could remove the retract, which would leave multiple lasthead(X) facts, one for each list item.
You could then access/process the multiple lasthead(X) facts using e.g. findall(X, lasthead(X), Y), which would give you all the lasthead(X) values you asserted along the way.
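As a sketch, with the retract removed (and a fresh database, so only the initial lasthead(null) fact is present), that could look like:

% variant without the retract, so every head stays in the database
statement2([]).
statement2([A|B]) :-
    assertz(lasthead(A)),
    statement2(B).

?- statement2([a,b,c]), findall(X, lasthead(X), Seen).
Seen = [null, a, b, c].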

How can I reduce the time to insert many records into the database?

This is my code. Thousands of records are inserted into the table, but the execution time is very long (about 15 minutes). What can I do to reduce it?
Thank you
BEGIN
  delete from pre_percapita_accords t where t.dat_capit = DATACTUL;

  FOR REC2 IN FORMUL_ID LOOP
    FOR rec1 IN FND_Bunit LOOP
      FOR REClkp1 IN EMPLY(rec1.cod_busun, rec1.lkp_cod_dput_busun) LOOP
        v_result_param1 := Fnd_Formula_Pkg.SET_PARAM_VALUE_FUN(REC2.FRML_ID,
                                                               'EMPL_ID',
                                                               to_char(REClkp1.num_prsn_emply));
        v_result_param2 := Fnd_Formula_Pkg.SET_PARAM_VALUE_FUN(REC2.FRML_ID,
                                                               'DAT_ACCORD',
                                                               to_char(DATACTUL));
        fnd_formula_set_param_prc(REC2.FRML_ID);
        resultFun := Fnd_Formula_Pkg.GET_PARAM_VALUE_FUN(REC2.FRML_ID, 'NUM_RESULT');
        resultFun := trunc(resultFun, 3);

        if REClkp1.NUM_PRSN_EMPLY is not null and resultFun is not null and
           DATACTUL is not null then
          INSERT INTO pre_percapita_accords
            (FRMLS_FORMUL_STEP_ID,
             dat_capit,
             num_capit,
             emply_num_prsn_emply,
             lkp_status_capit)
          VALUES
            (rec2.formul_step_id,
             DATACTUL,
             resultFun,
             REClkp1.NUM_PRSN_EMPLY,
             '3');
        end if;
      END LOOP;
    END LOOP;
  END LOOP;
END;
You have 3 nested cursor loops, within which you are calling several functions at the lowest level and performing a single-row insert. There are many possibilities here to improve things:
1) Function Fnd_Formula_Pkg.SET_PARAM_VALUE_FUN is called twice but the results are never used (in the code you have shown) - can these calls simply be removed?
2) The second call to Fnd_Formula_Pkg.SET_PARAM_VALUE_FUN doesn't use any data from the EMPLY or FND_Bunit cursors, so it could be moved outside of those two loops so that it is called far less often.
3) You could save the results into arrays and then use bulk inserts of e.g. 1000 rows at a time.
4) Ideally, the whole thing could be re-written without cursors as a single INSERT...SELECT statement. I cannot say for sure that this is possible here though.
Instead of guessing, you might try SQL trace and tkprof for some better metrics of what's happening. My guess is that the function calls are slow.
If that seems too daunting, you can also debug by adding some timing log statements (using timestamps), testing against a dev database of course, and seeing where the time is spent (a bit like old-fashioned printf debugging). At least you will then have a better idea of what is taking most of the time.
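For example, something as simple as this around each suspect section already narrows it down (assuming server output is enabled in your session):

-- before the section you want to time
DBMS_OUTPUT.PUT_LINE('EMPLY loop start: ' || TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF3'));
-- ... the code being measured ...
DBMS_OUTPUT.PUT_LINE('EMPLY loop end:   ' || TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF3'));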
Try to use collections and insert with FORALL:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/forall_statement.htm
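A rough sketch of what that could look like here; the collection type and the exact field assignments are illustrative, not taken from your schema:

DECLARE
  TYPE t_accords IS TABLE OF pre_percapita_accords%ROWTYPE;
  l_accords t_accords := t_accords();
BEGIN
  -- inside the existing loops, instead of the single-row INSERT:
  --   l_accords.EXTEND;
  --   l_accords(l_accords.LAST).frmls_formul_step_id := rec2.formul_step_id;
  --   l_accords(l_accords.LAST).dat_capit            := DATACTUL;
  --   l_accords(l_accords.LAST).num_capit            := resultFun;
  --   l_accords(l_accords.LAST).emply_num_prsn_emply := REClkp1.num_prsn_emply;
  --   l_accords(l_accords.LAST).lkp_status_capit     := '3';

  -- then, once after the loops, insert everything in one statement:
  FORALL i IN 1 .. l_accords.COUNT
    INSERT INTO pre_percapita_accords VALUES l_accords(i);
END;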
