I have a clause like the following:
lock_open :-
    conditional_combination(X),
    equal(X, [8,6,5,3,6,9]), !,
    print(X).
This clause succeeds. But I want to know how many times conditional_combination/1 is called before equal(X,[8,6,5,3,6,9]) becomes true. The program generates permutations following some rules, and I need to know how many permutations have to be generated before a particular value like 865369 is found.
What you actually want is something slightly different: You want to count the number of answers (so far) of a goal.
The following predicate call_nth(Goal_0, Nth) succeeds like call(Goal_0) but has an additional argument which indicates that the answer found is the n-th answer. This definition is highly specific to SWI or YAP. Do not use things like nb_setarg/3 in your general programs; use them only for well-encapsulated cases like this one. Even within those two systems, the precise meaning of these constructs is not well defined for the general case. A definition for SICStus appears further down in this thread (update: use unsigned_64 in newer versions instead of unsigned_32).
call_nth(Goal_0, Nth) :-
   nonvar(Nth),
   !,
   Nth \== 0,
   \+arg(Nth,+ 1,2), % produces all expected errors
   State = count(0,_), % note the extra argument which remains a variable
   Goal_0,
   arg(1, State, C1),
   C2 is C1+1,
   (  Nth == C2
   -> !
   ;  nb_setarg(1, State, C2),
      fail
   ).
call_nth(Goal_0, Nth) :-
   State = count(0,_), % note the extra argument which remains a variable
   Goal_0,
   arg(1, State, C1),
   C2 is C1+1,
   nb_setarg(1, State, C2),
   Nth = C2.
A more robust abstraction is provided by ECLiPSe:
call_nth(Goal_0, Nth) :-
   shelf_create(counter(0), CounterRef),
   call(Goal_0),
   shelf_inc(CounterRef, 1),
   shelf_get(CounterRef, 1, Nth).
?- call_nth(between(1,5,I),Nth).
I = Nth, Nth = 1
; I = Nth, Nth = 2
; I = Nth, Nth = 3
; I = Nth, Nth = 4
; I = Nth, Nth = 5.
So simply wrap it around:
lock_open :-
   call_nth(conditional_combination(X), Nth),
   X = [8,6,5,3,6,9],
   !,
   ....
If you are using SWI-Prolog you can use nb_getval/2 and nb_setval/2 to achieve what you want:
lock_open :-
    nb_setval(ctr, 0),        % Initialize counter
    conditional_combination(X),
    nb_inc(ctr),              % Increment counter
    equal(X,[8,6,5,3,6,9]),
    % Here you can access the counter value with nb_getval(ctr, Value)
    !,
    print(X).
nb_inc(Key) :-
    nb_getval(Key, Old),
    succ(Old, New),
    nb_setval(Key, New).
Other Prologs have other means to do the same; look for global variables in your Prolog implementation. In this snippet I used the key ctr to hold the current goal counter. You can use any key there that is not used elsewhere in your program.
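For completeness, a minimal sketch (SWI-Prolog, using the nb_inc/1 helper above) of how the counter could actually be reported once the right combination is found; the format string is only illustrative:

lock_open :-
    nb_setval(ctr, 0),
    conditional_combination(X),
    nb_inc(ctr),
    equal(X, [8,6,5,3,6,9]),
    !,
    nb_getval(ctr, Count),
    format("~w found after ~w candidates~n", [X, Count]).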
While working on a module "micro", I recently invented pivots. They are inspired by the thread/pipe pattern for passing around data. A pivot is a bounded queue of maximum length one, and pivot_put/2 copies the given term as well. But for performance reasons they are not synchronized and are non-blocking.
In this respect they are very similar to nb_setarg/3, except that they don't destructively update a Prolog term but instead update a Java data structure. As a result they are a little safer than the non-logical term operations. Also, they don't need call_cleanup/2, since they are garbage collected by Java.
In this respect they are less like nb_setarg/3 and more like an explicit allocate and deallocate of structures. So, for example, a solution for SICStus Prolog could be:
call_nth(Goal_0, Nth) :-
   new(unsigned_32, Counter),
   call_cleanup(call_nth1(Goal_0, Counter, Nth),
                dispose(Counter)).

call_nth1(Goal_0, Counter, Nth) :-
   call(Goal_0),
   get_contents(Counter, contents, Count0),
   Count1 is Count0+1,
   put_contents(Counter, contents, Count1),
   Nth = Count1.
With pivots there is not even a 32-bit limitation, and we can directly write:
call_nth(G, C) :-
   pivot_new(P),
   pivot_put(P, 0),
   call(G),
   pivot_take(P, M),
   N is M+1,
   pivot_put(P, N),
   C = N.
Related
I've encountered a problem when trying to iterate through a two-dimensional array and sum up the lengths of all the elements inside, in Prolog.
I've tried iterating through a simple 1D array and the result was just as expected. However, difficulties appeared when I started writing the code for a 2D array. Here's my code:
findsum(L):-
    atom_row(L, Sum),
    write(Sum).

atom_row([Head|Tail], Sum) :-
    atom_lengths(Head, Sum),
    atom_row(Tail, Sum).
atom_row([], 0).

atom_lengths([Head|Tail], Sum):-
    atom_chars(Head, CharList),
    length(CharList, ThisLenght),
    atom_lengths(Tail, Temp),
    Sum is Temp + ThisLenght,
    write(ThisLenght).
atom_lengths([], 0).
For example, the sum for the array [[aaa, bbbb], [ccccc, dddddd]] should be equal to 18. And this is what I get:
?- findsum([[aaa, bbbb], [ccccc, dddddd]]).
436
false.
The output comes from the write(ThisLenght) line after each iteration.
Typically it helps (a lot) to split the problem into simpler sub-problems. We can solve this problem, for example, with the following three steps:
first we concatenate the list of lists into a single one-dimensional list, for example with append/2;
next we map each atom in that list to the length of that atom, with the atom_length/2 predicate; and
finally we sum up these values, for example with sum_list/2.
So the main predicate looks like:
findsum(LL, S) :-
    append(LL, L),
    maplist(atom_length, L, NL),
    sum_list(NL, S).
Since maplist/3 is a predicate defined in library(apply), we don't need to implement any other predicates ourselves.
For example:
?- findsum([[aaa, bbbb], [ccccc, dddddd]], N).
N = 18.
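For comparison, here is a sketch of how the original row/column recursion could be repaired while keeping its structure; the key point is that each clause must combine the sum of the current row with the sum of the remaining rows in a fresh variable instead of reusing the same Sum for both (predicate names mirror the question's code):

% Sketch only: same shape as the code in the question, with the sums combined.
atom_row([], 0).
atom_row([Row|Rows], Sum) :-
    atom_lengths(Row, RowSum),
    atom_row(Rows, RestSum),
    Sum is RowSum + RestSum.

atom_lengths([], 0).
atom_lengths([Atom|Atoms], Sum) :-
    atom_chars(Atom, CharList),
    length(CharList, Len),
    atom_lengths(Atoms, RestSum),
    Sum is Len + RestSum.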
I'm currently working on a side project that deals with a lot of recursive calls. I'm not a computer scientist, so I'm not exactly sure how to optimize my code. I know that recursive functions are not very efficient and I've heard that you can often replace them with tail calls, but I'm not exactly sure how to go about doing this. This function takes in three arrays: appendList, sequence, and used. The other arguments (base, length, index and lastWord) are integers.
function Recursion(appendList, base, length, sequence, used, lastWord, index)
    # Global variables:
    global G_Seq_List
    global G_Seq_Index

    used = ones(UInt8, 1, base^length)
    used[1] = 0

    if index == base^length
        check = zeros(UInt8, base^length, 1)
        for i = 1 : base^length
            index = 1
            for j = 1 : length
                k = mod(i+j-1, base^length)
                index = index + base^(length - j)*sequence[k+1]
            end
            check[index] = check[index] + 1
            if check[index] != 1
                return
            end
        end
        G_Seq_List[G_Seq_Index,:] = sequence[:]
        G_Seq_Index = G_Seq_Index + 1
        return
    end

    # Builds Sequence
    for i = 1 : base^length
        if appendList[i, mod(lastWord - 1, base^(length - 1)) + 1] == 1
            if used[i] == 1
                tempUsed = used
                tempUsed[i] = 0
                tempCounter = index + 1
                tempSequence = sequence
                tempSequence[tempCounter] = mod(i - 1, base)
                Recursion(appendList, base, length, tempSequence, tempUsed, i, tempCounter)
            end
        end
    end
end
Is it a quick fix to turn this recursion into a tail call? If not, what kind of things can I do to optimize this function?
In general, any recursion can be converted to a loop, and a loop will generally have better performance, since it has similar algorithmic performance without the need to allocate new frames and store extra information.
"Tail call optimization" is something the compilers (or runtimes) do, which is to automatically convert the recursion to a loop if the recursive call is a last call in the function (hence the name - "tail call"), typically by reusing the same call frame instead of allocating a new one. Reusing the frame is okay since if all you do with the result of the recursive call is return it, you don't need anything else from the enclosing function invocation, so there's no reason to keep the frame alive in the first place.
So, what you need to check is:
Whether your compiler supports tail-call optimization.
What you have to do in order to allow the compiler to do so - usually the straightforward return f(...) pattern will work, but sometimes the compiler can support more complex code.
Both depend on your specific compiler, so I would look up its documentation; I could not tell which compiler you are using from your question.
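To illustrate the general pattern (language-agnostic, shown here in Prolog, the language used elsewhere in this thread, and with made-up predicate names): len/2 below is not tail recursive because the addition happens after the recursive call returns, while len_acc/3 carries an accumulator so the recursive call is the last goal and the frame can be reused.

% Not tail recursive: the addition is performed after the recursive call returns.
len([], 0).
len([_|T], N) :-
    len(T, N0),
    N is N0 + 1.

% Tail recursive (accumulator style): the recursive call is the last goal.
len_acc(List, N) :-
    len_acc(List, 0, N).

len_acc([], N, N).
len_acc([_|T], Acc, N) :-
    Acc1 is Acc + 1,
    len_acc(T, Acc1, N).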
I have written the following in Prolog (I am using version 7.4.0-rc1), trying to define a predicate insertPermutation/2 which is true if and only if both arguments are lists, one a permutation of the other.
delete(X,[X|T],T). % Base case, element equals head.
delete(X,[A|B],[A|C]) :- delete(X,B,C). % And/or repeat for the tail.
insert(X,Y,Z) :- delete(X,Z,Y). % Inserting is deletion in reverse.
insertPermutation([],[]). % Base case.
insertPermutation([H|T],P) :- insertPermutation(Q,T), insert(H,Q,P). % P permutation of T, H inserted.
I have already been made aware that delete is not a good name for the above helper predicate. We are required to write these predicates, and we cannot use the built-in predicates. This is why I wrote the above code in this way, and I chose the name I did (because I first wrote it to delete an element). It is true if and only if the third argument is a list, equal to the list in the second argument with the first instance of the first argument removed.
The insertPermutation predicate recursively tests if P equals a permutation of the tail of the first list, with the head added in any position in the permutation. This way it works to the base case of both being empty lists.
However, the permutation predicate does not behave the way I want it to. For instance, to the query
?- insertPermutation([1,2,2],[1,2,3]).
Prolog does not return false, but freezes. To the query
?- insertPermutation(X,[a,b,c]).
Prolog responds with
X = [a, b, c] ;
X = [b, a, c] ;
X = [c, a, b] ;
X = [a, c, b] ;
X = [b, c, a] ;
X = [c, b, a] ;
after which it freezes again. I see these problems are related, but not how. Can someone point out what case I am missing?
Edit: Two things: this is homework, and I need to solve this problem using an insert predicate, which I wrote myself.
The answer is to change the last clause from

% P permutation of T, H inserted.
insertPermutation([H|T],P) :-
    insertPermutation(Q,T),
    insert(H,Q,P).

to

% P permutation of T, H inserted.
insertPermutation(P,[H|T]) :-
    insertPermutation(Q,T),
    insert(H,Q,P).
The use cases only needed to check whether the first argument is a permutation of the second, not the other way around. With the second (bound) argument now driving the recursion, the generator no longer enumerates ever-longer lists, so the queries terminate. Anticlimactic, but the answer to my problem.
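A quick check of the two problem queries with the corrected clause (output sketched for SWI-Prolog):

?- insertPermutation([1,2,2],[1,2,3]).
false.

?- insertPermutation(X,[a,b,c]).
X = [a, b, c] ;
...
X = [c, b, a] ;
false.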
Mathematica has a built-in function ArgMax for functions over infinite domains, based on the standard mathematical definition.
The analog for finite domains is a handy utility function.
Given a function and a list (call it the domain of the function), return the element(s) of the list that maximize the function.
Here's an example of finite argmax in action:
Canonicalize NFL team names
And here's my implementation of it (along with argmin for good measure):
(* argmax[f, domain] returns the element of domain for which f of
that element is maximal -- breaks ties in favor of first occurrence. *)
SetAttributes[{argmax, argmin}, HoldFirst];
argmax[f_, dom_List] := Fold[If[f[#1]>=f[#2], #1, #2]&, First[dom], Rest[dom]]
argmin[f_, dom_List] := argmax[-f[#]&, dom]
First, is that the most efficient way to implement argmax?
What if you want the list of all maximal elements instead of just the first one?
Second, how about the related function posmax that, instead of returning the maximal element(s), returns the position(s) of the maximal elements?
@dreeves, you're correct that Ordering is the key to the fastest implementation of ArgMax over a finite domain:
ArgMax[f_, dom_List] := dom[[Ordering[f /@ dom, -1]]]
Part of the problem with your original implementation using Fold is that you end up evaluating f twice as much as necessary, which is inefficient, especially when computing f is slow. Here we only evaluate f once for each member of the domain. When the domain has many duplicated elements, we can further optimize by memoizing the values of f:
ArgMax[f_, dom_List] :=
  Module[{g},
    g[e___] := g[e] = f[e]; (* memoize *)
    dom[[Ordering[g /@ dom, -1]]]
  ]
This was about 30% faster in some basic tests for a list of 100,000 random integers between 0 and 100.
For a posmax function, this somewhat non-elegant approach is the fastest thing I can come up with:
PosMax[f_, dom_List] :=
  Module[{y = f /@ dom},
    Flatten@Position[y, Max[y]]
  ]
Of course, we can apply memoization again:
PosMax[f_, dom_List] :=
  Module[{g, y},
    g[e___] := g[e] = f[e];
    y = g /@ dom;
    Flatten@Position[y, Max[y]]
  ]
To get all the maximal elements, you could now just implement ArgMax in terms of PosMax:
ArgMax[f_, dom_List] := dom[[PosMax[f, dom]]]
For posmax, you can first map the function over the list and then just ask for the position of the maximal element(s). I.e.:
posmax[f_, dom_List] := posmax[f /@ dom]
where posmax[list] is polymorphically defined to just return the position of the maximal element(s).
It turns out there's a built-in function, Ordering that essentially does this.
So we can define the single-argument version of posmax like this:
posmax[dom_List] := Ordering[dom, -1][[1]]
I just tested that against a loop-based version and a recursive version and Ordering is many times faster.
The recursive version is pretty so I'll show it off here, but don't ever try to run it on large inputs!
(* posmax0 is a helper function for posmax that returns a pair with the position
   and value of the max element. n is an accumulator variable, in lisp-speak. *)
posmax0[{h_}, n_:0] := {n+1, h}
posmax0[{h_, t___}, n_:0] := With[{best = posmax0[{t}, n+1]},
  If[h >= best[[2]], {n+1, h}, best]]

posmax[dom_List] := First@posmax0[dom, 0]
posmax[f_, dom_List] := First@posmax0[f /@ dom, 0]
posmax[_, {}] := 0
None of this addresses the question of how to find all the maximal elements (or positions of them).
That doesn't normally come up for me in practice, though I think it would be good to have.
I'm writing a predicate to find all possible successor states for an iteration of A* and put them in a list like [(cost, state), ...], which currently stands at this:
addSuccessors(L, [], _).
addSuccessors(L, [X|T], OrigList) :-
    memb(OrigList, Index, X),
    add((X, Index), L, List2),
    addSuccessors(List2, T, OrigList).
addSuccessors(L, [X|[]], OrigList) :-
    memb(OrigList, Index, X),
    add((X, Index), L, L2),
    addSuccessors(L2, [], OrigList).
add adds something to the end of a list, and memb gets the (Index)-th element of a list. I know they work, and when I look at L2 in the bottom clause I get something like this:
?- addSuccessors(X, [1500, 3670], [0, 0, 0, 1500, 3670]).
X = [] ;
[ (1500, 3), (3670, 4)]
X = [] ;
X = [_G1175] ;
[_G1175, (1500, 3), (3670, 4)]
X = [_G1175] ;
X = [_G1175, _G1181] ;
[_G1175, _G1181, (1500, 3), (3670, 4)]
X = [_G1175, _G1181] ;
...
This is very frustrating, because the [(1500, 3), (3670, 4)] list is what I want X to be after I call it, so it looks to be doing what I want, just not... where I want it.
Please, how can I fix this?
It's been a while since I programmed in Prolog, but I think you need to separate the list that you are building up from the list that you return (i.e., add another parameter). Your rule for [X|[]] should bind the output variable and not recurse.
Think about how L actually gets its initial value. Well, the thing is, it doesn't. What you're trying to do is build up a list from scratch, so you need to start with an empty list, not an unbound variable. One way to solve this is to write a wrapper predicate which allows your current predicate to act as an accumulator predicate.
This might look something like this, where addSuccessors_acc will contain the clauses that you have already defined.
addSuccessors(L, X, Y) :-
    addSuccessors_acc(L, [], X, Y).
The idea is that the second argument to the addSuccessors_acc predicate acts as your accumulator, which is the list that is being built up with each recursive call. Then in the base case of the accumulator predicate (when the list of states to process is empty) you just need to unify the accumulator variable with the first argument, to pass along the final list, e.g.:

addSuccessors_acc(L, L, [], _).
Also, as ergosys points out, your third clause can actually be the base case. Since you are dealing with the last element in the list there is no need to recurse; all that does is delay the base case by one extra call.
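Putting it together, a minimal sketch of the accumulated version (assuming memb/3 and add/3 behave as described in the question, i.e. memb(List, Index, Elem) fetches the Index-th element and add(Elem, List, NewList) appends at the end):

% Result is the finished list; the second argument is the accumulator.
addSuccessors(Result, States, OrigList) :-
    addSuccessors_acc(Result, [], States, OrigList).

addSuccessors_acc(L, L, [], _).            % nothing left: hand back the accumulator
addSuccessors_acc(Result, Acc, [X|T], OrigList) :-
    memb(OrigList, Index, X),
    add((X, Index), Acc, Acc1),
    addSuccessors_acc(Result, Acc1, T, OrigList).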