Specific example of recursion in Prolog

polarbear([],H,[H]).
polarbear([H|T],Y,[H|Z]):- polarbear(T,Y,Z).
This is the Prolog code. When I enter ?- polarbear([1,2], 6, P). I get P = [1,2,6].
The thing is, I just don't understand how it works, and I've been trying to work out how Prolog is doing what it's doing.
I have some experience with Prolog, but I don't understand this, so any guidance on how it does what it does, to help me understand Prolog, would be greatly appreciated.

The second clause states that the first argument is a list with head H and tail T, and the third argument is a list with head H and tail Z. So it forces (by unification) the heads of the two lists to be the same. Recursively, the two lists become identical, except that the list in the third argument has one more element at the end (the element Y), and that extra element is supplied by the first clause. Note that the second clause only works for lists with one or more elements. So, as the base of the recursion, when we reach the empty list, the first clause makes the third list consist of exactly one element: the element Y.
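To see it step by step, here is the call chain for the query from the question, written out by hand (Z and Z1 are just fresh names for the intermediate tails):

?- polarbear([1,2], 6, P).
%  clause 2: P  = [1|Z],   then call polarbear([2], 6, Z)
%  clause 2: Z  = [2|Z1],  then call polarbear([],  6, Z1)
%  clause 1: Z1 = [6]      (base case: third argument is [Y])
%  hence Z = [2,6] and P = [1,2,6]
P = [1, 2, 6].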

Related

rev_append vs (append or @)

If we have two lists l1 and l2 and we want to concatenate them, we can use @ or append, which is O(n1) where n1 is the length of l1. Or we can use rev_append, which according to the docs is:
equivalent to List.rev l1 @ l2, but rev_append is tail-recursive and more efficient.
So is rev_append more efficient than @, or is it more efficient than List.rev + @? And is it better to use it instead of @ and append when we don't care about the order?
OCaml lists are immutable. The second list doesn't need to be changed, but the first list has to be copied so the copy can point to the second list. Hence you're going to have to traverse the first list somehow. Nothing you can do will change the big-O time complexity of the append.
Since you can only add new elements at the beginning of a list, you need to traverse the first list in reverse order if you want the result to preserve the order of the first list.
The most obvious way to do this is to call recursively until you're at the end of the first list, then do the prefixing as you return from each recursive call. However this isn't tail-recursive. I.e., it will consume stack space proportional to the length of the first list. When the first list is long, you can run out of stack space (aka stack overflow).
This is the way that @ works. It takes time and stack space proportional to the length of the first list.
Another idea is to give up on maintaining the order of the first list. If you prefix the first list in reverse order, you can easily make the operation tail recursive. That's the purpose of List.rev_append. It takes constant stack space.
If you want to maintain the original list orders, but also use constant stack space you can reverse the first list (with List.rev), then use List.rev_append.
Plain List.rev_append is faster than @ because it doesn't have to make internal function calls; it can just be a loop. It's also obviously faster than List.rev plus List.rev_append.
In summary, if you don't care about the final order, then List.rev_append is faster than @, yes. Also it won't overflow the stack. It's not going to be a gigantic amount faster, because the time complexity is basically the same.

Why, after pressing the semicolon, is the program back in deep recursion?

I'm trying to understand the semicolon functionality.
I have this code:
del(X,[X|Rest],Rest).
del(X,[Y|Tail],[Y|Rest]) :-
    del(X,Tail,Rest).
permutation([],[]).
permutation(L,[X|P]) :- del(X,L,L1), permutation(L1,P).
It's the simple predicate to show all permutations of given list.
I used the built-in graphical debugger in SWI-Prolog because I wanted to understand how it works, and I do understand the first case, which returns the list given as the argument. Here is the diagram I made for better understanding.
But I don't get it for the other solutions. When I press the semicolon it doesn't start in the place where it ended; instead it starts in some deep recursion where L = [] (like in step 9). I don't get it: didn't the recursion end earlier? It had to come back out of the recursion to return the answer, and after the semicolon it's deep in the recursion again.
Could someone clarify that to me? Thanks in advance.
One analogy that I find useful in demystifying Prolog is that Backtracking is like Nested Loops, and when the innermost loop's variables' values are all found, the looping is suspended, the variables' values are reported, and then the looping is resumed.
As an example, let's write down a simple generate-and-test program to find all pairs of natural numbers above 0 that sum up to a prime number. Let's assume is_prime/1 is already given to us.
We write this in Prolog as
above(0, N), between(1, N, M), Sum is M+N, is_prime(Sum).
We write this in an imperative pseudocode as
for N from 1 step 1:
    for M from 1 step 1 until N:
        Sum := M+N
        if is_prime(Sum):
            report_to_user_and_ask(Sum)
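One note on the Prolog snippet above: above/2 is not a standard built-in. A minimal sketch of such an unbounded generator (an assumption about what is intended, namely enumerating all integers greater than X, one per backtracking step) could be:

% enumerate X+1, X+2, X+3, ... on successive backtracking steps
above(X, N) :- N1 is X + 1, ( N = N1 ; above(N1, N) ).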
Now when report_to_user_and_ask is called, it prints Sum out and asks the user whether to abort or to continue. The loops are not exited; on the contrary, they are just suspended. Thus all the loop variables' values that got us this far (and there may be more tests up the loops chain that sometimes succeed and sometimes fail) are preserved, i.e. the computation state is preserved, and the computation is ready to be resumed from that point, if the user presses ;.
I first saw this in Peter Norvig's AI book's implementation of Prolog in Common Lisp. He used mapping (Common Lisp's mapcan which is concatMap in Haskell or flatMap in many other languages) as a looping construct though, and it took me years to see that nested loops is what it is really all about.
Goals conjunction is expressed as the nesting of the loops; goals disjunction is expressed as the alternatives to loop through.
A further twist is that the nested loops' structure isn't fixed from the outset. It is fluid: the nested loops of a given loop can be created depending on the current state of that loop, i.e. depending on the current alternative being explored there; the loops are written as we go. In (most of) the languages where such dynamic creation of nested loops is impossible, it can be encoded with nested recursion / function invocation inside the loops. (Here's one example, with some pseudocode.)
If we keep all such loops (created for each of the alternatives) in memory even after they are finished with, what we get is the AND-OR tree (mentioned in the other answer) thus being created while the search space is being explored and the solutions are found.
(non-coincidentally this fluidity is also the essence of "monad"; nondeterminism is modeled by the list monad; and the essential operation of the list monad is the flatMap operation which we saw above. With fluid structure of loops it is "Monad"; with fixed structure it is "Applicative Functor"; simple loops with no structure (no nesting at all): simply "Functor" (the concepts used in Haskell and the like). Also helps to demystify those.)
So, the proper slogan could be Backtracking is like Nested Loops, either fixed, known from the outset, or dynamically-created as we go. It's a bit longer though. :)
Here's also a Prolog example, which "as if creates the code to be run first (N nested loops for a given value of N), and then runs it." (There's even a whole dedicated tag for it on SO, too, it turns out, recursive-backtracking.)
And here's one in Scheme ("creates nested loops with the solution being accessible in the innermost loop's body"), and a C++ example ("create n nested loops at run-time, in effect enumerating the binary encoding of 2^n, and print the sums out from the innermost loop").
There is a big difference between recursion in functional/imperative programming languages and Prolog (and it really became clear to me only in the last 2 weeks or so):
In functional/imperative programming, you recurse down a call chain, then come back up, unwinding the stack, then output the result. It's over.
In Prolog, you recurse down an AND-OR tree (really, alternating AND and OR nodes), selecting a predicate to call on an OR node (the "choicepoint"), from left to right, and calling every predicate in turn on an AND node, also from left to right. An acceptable tree has exactly one predicate returning TRUE under each OR node, and all predicates returning TRUE under each AND node. Once an acceptable tree has been constructed, by the very search procedure, we are (i.e. the "search cursor" is) on a rightmost bottommost node.
Success in constructing an acceptable tree also means a solution to the query entered at the Prolog Toplevel (the REPL) has been found: The variable values are output, but the tree is kept (unless there are no choicepoints).
And this is also important: all variables are global in the sense that if a variable X has been passed all the way down the call chain from predicate to predicate to the rightmost bottommost node, and then constrained at the last possible moment by unifying it with 2, for example, X = 2, then the Prolog Toplevel is aware of that without further ado: nothing needs to be passed up the call chain.
If you now press ;, the search doesn't restart at the top of the tree, but at the bottom, i.e. at the current cursor position: the nearest parent OR node is asked for more solutions. This may result in a lot of search until a new acceptable tree has been constructed and we are at a new rightmost bottommost node. The new variable values are output and you may again enter ;.
This process cycles until no acceptable tree can be constructed any longer, upon which false is output.
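To tie this back to the permutation/2 program from the question, here is roughly what the toplevel interaction looks like for a concrete query (using [1,2,3] as an example list; exact formatting varies between Prolog systems). Each ; resumes the suspended search at the current cursor position rather than restarting it from the top:

?- permutation([1,2,3], P).
P = [1, 2, 3] ;
P = [1, 3, 2] ;
P = [2, 1, 3] ;
P = [2, 3, 1] ;
P = [3, 1, 2] ;
P = [3, 2, 1] ;
false.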
Note that having this AND-OR tree as an inspectable and modifiable data structure at runtime allows some magical tricks to be deployed.
There is bound to be a lot of power in debugging tools which record this tree to help the user who gets the dreaded sphinxian false from a Prolog program that is supposed to work. There are now Time Traveling Debuggers for functional and imperative languages, after all...

Understanding Prolog "append" recursive definition [duplicate]

I'm reading Programming in Prolog: Using the ISO Standard, but I'm having problems understanding the recursive definition of append introduced by the book:
append([], List, List).
append([X|List1], List2, [X|Result]) :- append(List1, List2, Result).
For example:
?- append([a, b, c], [3, 2, 1], Result).
Result = [a, b, c, 3, 2, 1]
As far as I understand, the definition says that the resulting list should contain the head of the first list as its head, so initially the resulting list is [ a ]. Then, we recursively run append() on the tails of the first and third arguments, leaving the second one as it is, so the third argument (which is [ a ]) should contain the head of the new first argument as its head, so the resulting list is [ b, a ] (which is backwards, so clearly I'm not following correctly). At some point, the first list is [], and the resulting list is [ c, b, a ], so we hit the base case:
append([], List, List).
So append([], [3, 2, 1], [ c, b, a ])., which makes no sense at all. I also don't follow how the contents of the second list are taken into consideration if no manipulation is performed on it in the whole definition.
[...] the definition says that the resulting list should contain the head of the first list as its head, so initially the resulting list is [ a ].
Like you mentioned, the definition says that the head of the resulting list is a; it doesn't say that the entire list is [a]. Furthermore, this list is not passed on as an argument to the recursive call.
The resulting list is defined as [X|Result], so in this case X is unified with a. We don't know anything about Result yet, but we "pass" it as third argument to the recursive call. So overall this means that the output will be a followed by the output of the recursive call.
The steps for b and c are exactly the same, so you can imagine the stack like this:
R = [a|R1]
R1 = [b|R2]
R2 = [c|R3]
Or, flattened: [a|[b|[c|R3]]]. Notice now the order is indeed correct?
Now the only remaining question is what is R3? Well, the first argument at this point is the empty list, so we reached the base case. This simply says that "if the first list is empty, the result is the second list".
So R3 = [3, 2, 1]. After this the stack unwinds and gives you the appended list as output.
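Written out for the query from the question, the whole chain of unifications looks like this (R1, R2, R3 are the fresh variables created by the successive recursive calls):

?- append([a,b,c], [3,2,1], R).
%  clause 2: R  = [a|R1],  then call append([b,c], [3,2,1], R1)
%  clause 2: R1 = [b|R2],  then call append([c],   [3,2,1], R2)
%  clause 2: R2 = [c|R3],  then call append([],    [3,2,1], R3)
%  clause 1: R3 = [3,2,1]  (base case: the result is the second list)
R = [a, b, c, 3, 2, 1].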
In my view, such an operational reading will lead you away from the true advantage of logic programming, since it will make it extremely tempting to think in terms of "inputs" and "outputs", like in functional programming. Such a procedural or even functional reading is too limited in that it does not do justice to the full generality of the relation.
In addition, as you also already notice, reading this definition operationally is extremely hard. The precise call flow of Prolog is complex, and in general too hard to understand for beginners as well as experts.
In my opinion, a good way to think about your definition is to consider the two clauses, and understand their meaning, leading us to a declarative reading.
First, consider:
append([], List, List).
This simply states what holds, and can be easily seen to be correct: If the first list is empty, the second list is the same as the third list.
Note the wording: We are not even mentioning a resulting list, since all arguments may be specified or not.
Next, consider the second clause:
append([X|List1], List2, [X|Result]) :- append(List1, List2, Result).
Read the :- as what it is, namely ←. So, this says:
If append(List1, List2, Result) holds, then append([X|List1], List2, [X|Result]) also holds.
Again, this can be easily seen to be correct, and allows a reading that is applicable in all directions.
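For example, the very same two clauses can be queried in several modes; a few sample queries, with output shown roughly as SWI-Prolog would print it:

?- append([a, b], [c], Zs).      % both input lists known
Zs = [a, b, c].

?- append(Xs, [c], [a, b, c]).   % which prefix, given the suffix?
Xs = [a, b] ;
false.

?- append(Xs, Ys, [a, b]).       % enumerate every way to split [a,b]
Xs = [], Ys = [a, b] ;
Xs = [a], Ys = [b] ;
Xs = [a, b], Ys = [] ;
false.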
In this light, you may consider whether Result is a good name for the third argument, and further, as @WillNess correctly points out, whether even append/3 is a good name altogether to describe this relation.

How to write a Prolog predicate to split a list into a list of paired elements?

This was a question on a sample exam I did.
Give the definition of a Prolog predicate split_into_pairs that takes as arguments a list and returns as a result a list which consists of paired elements. For example, split_into_pairs([1,2,3,4,5,6],X) would return as a result X=[[1,2],[3,4],[5,6]]. Similarly, split_into_pairs([a,2,3,4,a,a,a,a],X) would return as result X=[[a,2],[3,4],[a,a],[a,a]] while split_into_pairs([1,2,3],X) would return No.
It's not meant to be done using built-in predicates I believe, but it shouldn't need to be too complicated either as it was only worth 8/120 marks.
I'm not sure what it should do for a list of two elements, so I guess that would either be not specified so that it returns no, or split_into_pairs([A,B],[[A,B]]).
My main issue is how to do the recursive call properly, without having extra brackets, so that I don't end up with something like X=[[A,B],[[C,D],[[E,F]]]].
My most recent attempts have been variations of the code below, but obviously this is incorrect.
split_into_pairs([A,B],[A,B])
split_into_pairs([A,B|T], X) :- split_into_pairs(T, XX), X is [A,B|XX]
This is a relatively straightforward recursion:
split_into_pairs([], []).
split_into_pairs([First, Second | Tail], [[First, Second] | Rest]) :-
    split_into_pairs(Tail, Rest).
The first rule says that an empty list is already split into pairs; the second requires that the source list has at least two items, pairs them up, and inserts the result of pairing up the tail list behind them.
Here is a demo on ideone.
Your solution could be fixed as well by adding square brackets in the result, and moving the second part of the rule into the header, like this:
split_into_pairs([A,B],[[A,B]]).
split_into_pairs([A,B|T], [[A,B]|XX]) :- split_into_pairs(T, XX).
Note that this solution does not consider an empty list a list of pairs, so split_into_pairs([], X) would fail.
Your code is almost correct. It has obvious syntax issues, and several substantive issues:
split_into_pairs([A,B], [ [ A,B ] ] ):- !.
split_into_pairs([A,B|T], X) :- split_into_pairs(T, XX),
X = [ [ A,B ] | XX ] .
Now it is correct: = is used instead of is (which is normally used with arithmetic operations), both clauses are properly terminated by dots, and the first one has a cut added into it, to make the predicate deterministic, to produce only one result. The correct structure is produced by enclosing each pair of elements into a list of their own, with brackets.
This is inefficient though, because it describes a recursive process - it constructs the result on the way back from the base case.
The efficient definition works on the way forward from the starting case:
split_into_pairs([A,B],[[A,B]]):- !.
split_into_pairs([A,B|T], X) :- X = [[A,B]|XX], split_into_pairs(T, XX).
This is the essence of the tail recursion modulo cons optimization technique, which turns recursive processes into iterative ones that are able to run in constant stack space. It is very similar to the tail-recursion with accumulator technique.
The cut had to be introduced because the two clauses are not mutually exclusive: a term unifying with [A,B] could also be unifiable with [A,B|T], in the case T=[]. We can get rid of the cut by making the two clauses mutually exclusive:
split_into_pairs([], [] ).
split_into_pairs([A,B|T], [[A,B]|XX]):- split_into_pairs(T, XX).
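With this last version, the example queries from the question behave as required (and, unlike the two-element base case, the empty list is also accepted):

?- split_into_pairs([1,2,3,4,5,6], X).
X = [[1, 2], [3, 4], [5, 6]].

?- split_into_pairs([1,2,3], X).
false.

?- split_into_pairs([], X).
X = [].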

prolog recursion

I am making a predicate that will give me a list of all possible elements. In each iteration it gives me an answer, but after the recursion I am only getting the last answer back. How can I make it give back every single answer?
Thank you.
The problem is that I am trying to find all possible distributions of a list into other lists. The code:
addIn(_,[],Result,Result).
addIn(C,[Element|Rest],[F|R],Result):-
    member( Members , [F|R]),
    sumlist( Members, Sum),
    sumlist([Element],ElementLength),
    Cap is Sum + ElementLength,
    (Cap =< Ca,
    append([Element], Members,New)....
By calling test, I am getting back the whole list of possible answers. Now if I try to do something that will fail, like
bp(3,11,[8,2,4,6,1,8,4],Answer).
it will just enter an endless loop. Moreover, if I change the
bp(NB,C,OL,A):-
    addIn(C,OL,[[],[],[]],A);
    bp(NB,C,_,A).
to an AND instead of an OR, I get this error:
ERROR: is/2: Arguments are not
sufficiently instantiated
I appreciate the help.
Thanks a lot, @hardmath.
It sounds like you are trying to write your own version of findall/3, perhaps limited to a special case of an underlying goal. Doing it generally (constructing a list of all solutions to a given goal) in a user-defined Prolog predicate is not possible without resorting to side-effects with assert/retract.
However a number of useful special cases can be implemented without such "tricks". So it would be helpful to know what predicate defines your "all possible elements". [It may also be helpful to state which Prolog implementation you are using, if only so that responses may include links to documentation for that version.]
One important special case is where the "universe" of potential candidates already exists as a list. In that case we are really asking to find the sublist of "all possible elements" that satisfy a particular goal.
findSublist([], _, []).
findSublist([H|T], Goal, [H|S]) :-
    call(Goal, H),
    !,
    findSublist(T, Goal, S).
findSublist([_|T], Goal, S) :-
    findSublist(T, Goal, S).
Many Prologs will allow you to pass the name of a predicate Goal around as an atom and invoke it through call/2 as above, but if you have a specific goal in mind, you can leave out the middle argument and just hardcode your particular condition into the middle clause of a similar implementation.
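For instance, with a small hypothetical test predicate written just for the demonstration:

even(X) :- 0 is X mod 2.

?- findSublist([1,2,3,4,5,6], even, S).
S = [2, 4, 6].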
Added in response to code posted:
I think I have a glimmer of what you are trying to do. It's hard to grasp because you are not going about it in the right way. Your predicate bp/4 has a single recursive clause, variously attempted using either AND or OR syntax to relate a call to addIn/4 to a call to bp/4 itself.
Apparently you expect wrapping bp/4 around addIn/4 in this way will somehow cause addIn/4 to accumulate or iterate over its solutions. It won't. It might help you to see this if we analyze what happens to the arguments of bp/4.
You are calling the formal arguments bp(NB,C,OL,A) with simple integers bound to NB and C, with a list of integers bound to OL, and with A as an unbound "output" Answer. Note that nothing is ever done with the value NB, as it is not passed to addIn/4 and is passed unchanged to the recursive call to bp/4.
Based on the variable names used by addIn/4 and supporting predicate insert/4, my guess is that NB was intended to mean "number of bins". For one thing you set NB = 3 in your test/0 clause, and later you "hardcode" three empty lists in the third argument in calling addIn/4. Whatever Answer you get from bp/4 comes from what addIn/4 is able to do with its first two arguments passed in, C and OL, from bp/4. As we noted, C is an integer and OL a list of integers (at least in the way test/0 calls bp/4).
So let's try to state just what addIn/4 is supposed to do with those arguments. Superficially addIn/4 seems to be structured for self-recursion in a sensible way. Its first clause is a simple termination condition: when the second argument becomes an empty list, unify the third and fourth arguments, and that gives the "answer" A to its caller.
The second clause for addIn/4 seems to coordinate with that approach. As written it takes the "head" Element off the list in the second argument and tries to find a "bin" in the third argument that Element can be inserted into while keeping the sum of that bin under the "cap" given by C. If everything goes well, eventually all the numbers from OL get assigned to a bin, all the bins have totals under the cap C, and the answer A gets passed back to the caller. The way addIn/4 is written leaves a lot of room for improvement just in basic clarity, but it may be doing what you need it to do.
Which brings us back to the question of how you should collect the answers produced by addIn/4. Perhaps you are happy to print them out one at a time. Perhaps you meant to collect all the solutions produced by addIn/4 into a single list. To finish up the exercise I'll need you to clarify what you really want to do with the Answers from addIn/4.
Let's say you want to print them all out and then stop, with a special case being to print nothing if the arguments being passed in don't allow a solution. Then you'd probably want something of this nature:
newtest :-
    addIn(12, [7, 3, 5, 4, 6, 4, 5, 2], [[],[],[]], Answer),
    format("Answer = ~w\n", [Answer]),
    fail.
newtest.
This is a standard way of getting predicate addIn/4 to try all possible solutions, and then stop with the "fall-through" success of the second clause of newtest/0.
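Alternatively, if the built-in findall/3 is acceptable for your assignment, the same solutions can be collected into a single list in one call. A sketch (the arguments passed to addIn/4 here simply mirror the failure-driven loop above, including the hardcoded three empty bins, so they are assumptions about the intended use):

all_answers(Answers) :-
    findall(A,
            addIn(12, [7, 3, 5, 4, 6, 4, 5, 2], [[],[],[]], A),
            Answers).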
(Added) Suggestions about coding addIn/4:
It will make the code more readable and maintainable if the variable names are clear. I'd suggest using Cap instead of C as the first argument to addIn/4 and BinSum when you take the sum of items assigned to a "bin". Likewise Bin would be better where you used Members. In the third argument to addIn/4 (in the head of the second clause) you don't need an explicit list structure [F|R] since you never refer to either part F or R by itself. So there I'd use Bins.
Some of your predicate calls don't accomplish much that you cannot do more easily. For example, your second call to sumlist/2 involves a list with one item. Thus the sum is just the same as that item, i.e. ElementLength is the same as Element. Here you could just replace both calls to sumlist/2 with one such call:
sumlist([Element|Bin],BinSum)
and then do your test comparing BinSum with Cap. Similarly your call to append/3 just adjoins the single item Element to the front of the list (I'm calling) Bin, so you could just replace what you have called New with [Element|Bin].
You have used an extra pair of parentheses around the last four subgoals (in the second clause for addIn/4). Since AND is implied for all the subgoals of this clause, using the extra pair of parentheses is unnecessary.
The code for insert/4 isn't shown now, but it could be a source of some unintended "backtracking" in special cases. The better approach would be to have the first call (currently to member/2) be your only point of indeterminacy, i.e. when you choose one of the bins, do it by replacing it with a free variable that gets unified with [Element|Bin] at the next to last step.

Resources