Understanding Recursive Rules and Unification in Prolog

I'm a beginning Prolog student following the "LearnPrologNow!" set of tutorials. I'm doing my best to get a grip on the concepts and vocabulary. I've been able to understand everything up until Chapter 3 on Recursive Definitions when presented with this problem:
numeral(0).
numeral(succ(X)) :- numeral(X).
given the query
numeral(X).
Now, I understand that the idea of the program is that Prolog will begin counting numbers in this system in a sequence such as
X=0
X=succ(0)
X=succ(succ(0))
But I do not understand what causes it to "scale back" and ascend each time. I understand the principle of unification in that the program is trying to unify the query of X, but should it just follow the recursive rule once, and then return zero? What allows it to add a succ() around the query? Is that not traversing the recursive rule in the opposite direction?

Please think declaratively:
The rule
numeral(succ(X)) :- numeral(X).
means:
If X is a numeral, then succ(X) is a numeral.
:- is like an arrow used in logical implication (it looks similar to <==).
Seeing that you successfully derived that 0 is a numeral (first answer), it is thus no surprise that succ(0) is another solution.
I recommend you think in terms of such relations, instead of trying to trace the actual control flow.
Notice that succ/1 is not added "around the query", but is a part of the actual answer. The term succ(0) is just a normal Prolog term, with functor succ and argument 0.
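For example, at the toplevel (SWI-Prolog shown; prompts and answer separators may differ), the query enumerates exactly these answers:
?- numeral(X).
X = 0 ;
X = succ(0) ;
X = succ(succ(0)) .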

The answer already given is good; I'll add a bit more:
Prolog uses a denotational (declarative) syntax to define logical relations/"equations" between terms.
A term is an object composed of variables, functors, constants, and so on.
Unification is the process of checking whether two expressions (or two terms) can be made equal with respect to the given relations/equations.
numeral(succ(X)) :- numeral(X)
is such a relation/equation: it says that if the term X is of numeral type (or class), then the successor term succ(X) is of the same type. So Prolog can unify the query with this rule (in other words, solve the equation), replacing X with succ(X); this unification can then be re-applied, and so on, until the domain of X is covered.
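You can watch a single such unification step in isolation with (=)/2, which unifies two terms; here the query term is unified against a renamed copy of the rule head:
?- numeral(X) = numeral(succ(X1)).
X = succ(X1).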

Just to add a proof tree to these answers, which may make things clearer for others:
Base call: numeral(succ(succ(0))).
              :   ^
    rule1     :   :  {X/succ(0)}
              v   :
        numeral(succ(X)).
              :   ^
    rule1     :   :  {X/0}
              v   :
        numeral(X).
              :   ^
    fact1     :   :  {X/0}
              v   :
        Found proof [].
You start with the downward arrows and then move back up to the previous calls, carrying the unifier found in the step below. Please note that Prolog renames the variables at each step (each step uses a fresh variable), which I have omitted in this scheme.


Recursive addition in Prolog

Knowledge Base
add(0,Y,Y).                             % clause 1
add(succ(X),Y,succ(Z)) :- add(X,Y,Z).   % clause 2
Query
add(succ(succ(succ(0))), succ(succ(0)), R)
Trace
Call: (6) add(succ(succ(succ(0))), succ(succ(0)), R)
Call: (7) add(succ(succ(0)), succ(succ(0)), _G648)
Call: (8) add(succ(0), succ(succ(0)), _G650)
Call: (9) add(0, succ(succ(0)), _G652)
Exit: (9) add(0, succ(succ(0)), succ(succ(0)))
Exit: (8) add(succ(0), succ(succ(0)), succ(succ(succ(0))))
Exit: (7) add(succ(succ(0)), succ(succ(0)), succ(succ(succ(succ(0)))))
Exit: (6) add(succ(succ(succ(0))), succ(succ(0)), succ(succ(succ(succ(succ(0))))))
My Question
I see how the recursive call in clause 2 strips the outermost succ()
at each call for argument 1.
I see how it adds an outer succ() to argument 3 at each call.
I see when the 1st argument as a result of these recursive calls
reaches 0. At that point, I see how the 1st clause copies the 2nd
argument to the 3rd argument.
This is where I get confused.
Once the 1st clause is executed, does the 2nd clause automatically
get executed as well, then adding succ() to the first argument?
Also, how does the program terminate, and why doesn't it just keep
adding succ() to the first and 3rd arguments infinitely?
Explanation from LearnPrologNow.com (which I don't understand)
Let’s go step by step through the way Prolog processes this query. The
trace and search tree for the query are given below.
The first argument is not 0, which means that only the second clause
for add/3 can be used. This leads to a recursive call of add/3. The
outermost succ functor is stripped off the first argument of the
original query, and the result becomes the first argument of the
recursive query. The second argument is passed on unchanged to the
recursive query, and the third argument of the recursive query is a
variable, the internal variable _G648 in the trace given below. Note
that _G648 is not instantiated yet. However it shares values with R
(the variable that we used as the third argument in the original
query) because R was instantiated to succ(_G648) when the query was
unified with the head of the second clause. But that means that R is
not a completely uninstantiated variable anymore. It is now a complex
term, that has a (uninstantiated) variable as its argument.
The next two steps are essentially the same. With every step the first
argument becomes one layer of succ smaller; both the trace and the
search tree given below show this nicely. At the same time, a succ
functor is added to R at every step, but always leaving the innermost
variable uninstantiated. After the first recursive call R is
succ(_G648). After the second recursive call, _G648 is instantiated
with succ(_G650), so that R is succ(succ(_G650)). After the third
recursive call, _G650 is instantiated with succ(_G652) and R therefore
becomes succ(succ(succ(_G652))). The search tree shows this
step-by-step instantiation.
At this stage all succ functors have been stripped off the first
argument and we can apply the base clause. The third argument is
equated with the second argument, so the ‘hole’ (the uninstantiated
variable) in the complex term R is finally filled, and we are through.
Let us start by getting the terminology right.
These are the clauses, as you correctly indicate:
add(0, Y, Y).
add(succ(X), Y, succ(Z)) :- add(X, Y, Z).
Let us first read this program declaratively, just to make sure we understand its meaning correctly:
0 plus Y is Y. This makes sense.
If it is true that X plus Y is Z then it is true that the successor of X plus Y is the successor of Z.
This is a good way to read this definition, because it is sufficiently general to cover various modes of use. For example, let us start with the most general query, where all arguments are fresh variables:
?- add(X, Y, Z).
X = 0,
Y = Z ;
X = succ(0),
Z = succ(Y) ;
X = succ(succ(0)),
Z = succ(succ(Y)) .
In this case, there is nothing to "strip", since none of the arguments is instantiated. Yet, Prolog still reports very sensible answers that make clear for which terms the relation holds.
In your case, you are considering a different query (not a "predicate definition"!), namely the query:
?- add(succ(succ(succ(0))), succ(succ(0)), R).
R = succ(succ(succ(succ(succ(0))))).
This is simply a special case of the more general query shown above, and a natural consequence of your program.
We can also go in the other direction and generalize this query. For example, this is a generalization, because we replace one ground argument by a logical variable:
?- add(succ(succ(succ(0))), B, R).
R = succ(succ(succ(B))).
If you follow the explanation you posted, you will make your life very difficult, and arrive at a very limited view of logic programs: Realistically, you will only be able to trace a tiny fragment of modes in which you could use your predicates, and a procedural reading thus falls quite short of what you are actually describing.
If you really insist on a procedural reading, start with a simpler case first. For example, let us consider:
?- add(succ(0), succ(0), R).
To "step through" procedurally, we can proceed as follows:
Does the first clause match? (Note that "matching" is already a limited reading: Prolog actually applies unification, and a procedural reading leads us away from this generality.)
Answer: No, because succ(_) does not unify with 0. So only the second clause applies.
The second clause only holds if its body holds, and in this case if add(0, succ(0), Z) holds. And this holds (by applying the first clause) if Z is succ(0) and R is succ(Z).
Therefore, one answer is R = succ(succ(0)). This answer is reported.
Are there other solutions? These are only reported on backtracking.
Answer: No, there are no other solutions, because no further clause matches.
I leave it as an exercise to apply this painstaking method to the more complex query shown in the book. It is straightforward to do, but it will increasingly lead you away from the most valuable aspects of logic programs, which are found in their generality and elegant declarative expression.
Your question regarding termination is both subtle and insightful. Note that we must distinguish between existential and universal termination in Prolog.
For example, consider again the most general query shown above: It yields answers, but it does not terminate. For an answer to be reported, it is enough that an answer substitution is found that makes the query true. This is the case in your example. Alternatives, if any potentially remain, are tried and reported on backtracking.
You can always try the following to test termination of your query: Simply append false/0, for example:
?- add(X, Y, Z), false.
nontermination
This lets you focus on termination properties without caring about concrete answers.
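For contrast, the same test terminates as soon as the first argument is instantiated:
?- add(succ(0), Y, Z), false.
false.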
Note also that add/3 is a terrible name for a relation: an imperative always implies a direction, but this relation is in fact much more general, and usable even when none of the arguments are instantiated! A good predicate name should reflect this generality.
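For example, a more declarative name could be nat_nat_sum/3 (a naming sketch only; the clauses themselves are unchanged):
nat_nat_sum(0, M, M).
nat_nat_sum(succ(N), M, succ(S)) :- nat_nat_sum(N, M, S).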

Correct way of writing recursive functions in CLP(R) with Prolog

I am very confused in how CLP works in Prolog. Not only do I find it hard to see the benefits (I do see it in specific cases but find it hard to generalise those) but more importantly, I can hardly make up how to correctly write a recursive predicate. Which of the following would be the correct form in a CLP(R) way?
factorial(0, 1).
factorial(N, F) :- {
    N > 0,
    PrevN = N - 1,
    factorial(PrevN, NewF),
    F = N * NewF }.
or
factorial(0, 1).
factorial(N, F) :- {
    N > 0,
    PrevN = N - 1,
    F = N * NewF },
    factorial(PrevN, NewF).
In other words, I am not sure when I should write code outside the constraints. To me, the first case would seem more logical, because PrevN and NewF belong to the constraints. But if that's true, I am curious to see in which cases it is useful to use predicates outside the constraints in a recursive function.
There are several overlapping questions and issues in your post, probably too many to coherently address to your complete satisfaction in a single post.
Therefore, I would like to state a few general principles first, and then—based on that—make a few specific comments about the code you posted.
First, I would like to address what I think is most important in your case:
LP ⊆ CLP
This means simply that CLP can be regarded as a superset of logic programming (LP). Whether it is to be considered a proper superset or if, in fact, it makes even more sense to regard them as denoting the same concept is somewhat debatable. In my personal view, logic programming without constraints is much harder to understand and much less usable than with constraints. Given that even the very first Prolog systems had a constraint like dif/2, and that essential built-in predicates like (=)/2 perfectly fit the notion of "constraint", the boundaries, if they exist at all, seem at least somewhat artificial to me, suggesting that:
LP ≈ CLP
Be that as it may, the key concept when working with CLP (of any kind) is that the constraints are available as predicates, and used in Prolog programs like all other predicates.
Therefore, whether you have the goal factorial(N, F) or { N > 0 } is, at least in principle, the same concept: Both mean that something holds.
Note the syntax: the CLP(ℛ) constraints have the form { C }, which is {}(C) in prefix notation.
Note that the goal factorial(N, F) is not a CLP(ℛ) constraint! Neither is the following:
?- { factorial(N, F) }.
ERROR: Unhandled exception: type_error({factorial(_3958,_3960)},...)
Thus, { factorial(N, F) } is not a CLP(ℛ) constraint either!
Your first example therefore cannot work, for this reason alone. (In addition, you have a syntax error in the clause head: factorial (, so it also does not compile at all.)
When you learn to work with a constraint solver, check out the predicates it provides. For example, CLP(ℛ) provides {}/1 and a few other predicates, and has a dedicated syntax for stating relations that hold over floating point numbers (in this case).
Other constraint solvers provide their own predicates for describing the entities of their respective domains. For example, CLP(FD) provides (#=)/2 and a few other predicates to reason about integers. dif/2 lets you reason about any Prolog term. And so on.
From the programmer's perspective, this is exactly the same as using any other predicate of your Prolog system, whether it is built-in or stems from a library. In principle, it's all the same:
A goal like list_length(Ls, L) can be read as: "The length of the list Ls is L."
A goal like { X = A + B } can be read as: The number X is equal to the sum of A and B. For example, if you are using CLP(Q), it is clear that we are talking about rational numbers in this case.
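For instance, with library(clpr) loaded, such a goal can be tried directly at the toplevel (exact answer formatting varies between systems):
?- { X = 2 + 3 }.
X = 5.0.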
In your second example, the body of the clause is a conjunction of the form (A, B), where A is a CLP(ℛ) constraint, and B is a goal of the form factorial(PrevN, NewF).
The point is: the CLP(ℛ) constraint is also a goal! Check it out:
?- write_canonical({a,b,c}).
{','(a,','(b,c))}
true.
So, you are simply using {}/1 from library(clpr), which is one of the predicates it exports.
You are right that PrevN and NewF belong to the constraints. However, factorial(PrevN, NewF) is not part of the mini-language that CLP(ℛ) implements for reasoning over floating point numbers. Therefore, you cannot pull this goal into the CLP(ℛ)-specific part.
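Putting this together: the arithmetic stays inside {}/1 and the recursive goal stays outside, essentially your second variant. Here is a minimal sketch, assuming SWI-Prolog's library(clpr); the base case is also written as a constraint so that it still applies when the argument arrives as a float such as 0.0:
:- use_module(library(clpr)).

factorial(N, F) :- { N = 0, F = 1 }.         % base case, stated as a constraint
factorial(N, F) :-
    { N > 0, PrevN = N - 1, F = N * NewF },  % CLP(R) constraints
    factorial(PrevN, NewF).                  % ordinary Prolog goal

%?- factorial(3, F).
% F = 6.0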
From a programmer's perspective, a major attraction of CLP is that it blends in completely seamlessly into "normal" logic programming, to the point that it can in fact hardly be distinguished at all from it: The constraints are simply predicates, and written down like all other goals.
Whether you label a library predicate a "constraint" or not hardly makes any difference: All predicates can be regarded as constraints, since they can only constrain answers, never relax them.
Note that both examples you post are recursive! That's perfectly OK. In fact, recursive predicates will likely be the majority of situations in which you use constraints in the future.
However, for the concrete case of factorial, your Prolog system's CLP(FD) constraints are likely a better fit, since they are completely dedicated to reasoning about integers.
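For the integer case, a CLP(FD) version might look like this (a sketch, assuming SWI-Prolog's library(clpfd), whose (#>)/2 and (#=)/2 constraints take the place of the { }-syntax):
:- use_module(library(clpfd)).

factorial(0, 1).
factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,      % integer constraints instead of { ... }
    F #= N * F1,
    factorial(N1, F1).

%?- factorial(5, F).
% F = 120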

Equating nodes in Prolog?

I'm working on a small function which checks to see if a tree is just a reversed version of another tree.
For example,
   1          1
  2 3   =    3 2
 1              1
My code is just versions of the following:
treeRev(leaf(Leaf1), leaf(Leaf2)) :-
    leaf(Leaf1) is leaf(Leaf2).
treeRev(node1(Leaf1, Node1), node1(Leaf2, Node2)) :-
    node1(Leaf1, treeRev(Node1)) is node1(Leaf2, treeRev(Node2)).
treeRev(node2(Leaf1, Node1, Node2), node2(Leaf2, Node3, Node4)) :-
    node2(Leaf1, treeRev(Node1), treeRev(Node2)) is
        node2(Leaf2, treeRev(Node4), treeRev(Node3)).
My reasoning is as follows:
The base case is that the two leaves are equal, which just returns true. If the tree has one node, check that the leaves are equal, and call the function recursively on the node.
If it has two nodes, check that the trees are equal, and then call the recursive function after having flipped the nodes from the second tree.
My issue is, I keep getting the bug
ERROR: is/2: Arithmetic: `leaf/1' is not a function
Thing is, I don't get this error when using other operations on the tree. Any advice on how to get around this? The only limitation imposed is that I can't use =.
I also figured that the most probable cause is that the two sides of the is don't return the same "type", according to searches on Google and Stack Overflow. The way I see it, though, that shouldn't be the case here, since I have almost exactly the same thing on both sides.
Thank you for reading, and any help is greatly appreciated :)
The is/2 predicate is used for arithmetic. It evaluates the arithmetic expression given as its second argument and unifies the result with its first argument. For example:
X is 1+(2*Y)/2, where Y must already be instantiated to a number (otherwise is/2 throws an instantiation error).
In your case you can't use is/2, since you don't want to evaluate any arithmetic expression (that's why the error). What you need is unification: you need to unify a term (e.g. a leaf or node) with another term by using =.
For example:
treeRev(leaf(Leaf1), leaf(Leaf2)) :-
    leaf(Leaf1) = leaf(Leaf2).
treeRev(node1(Leaf1, Node1), node1(Leaf2, Node2)) :-
    node1(Leaf1, treeRev(Node1)) = node1(Leaf2, treeRev(Node2)).
treeRev(node2(Leaf1, Node1, Node2), node2(Leaf2, Node3, Node4)) :-
    node2(Leaf1, treeRev(Node1), treeRev(Node2)) =
        node2(Leaf2, treeRev(Node4), treeRev(Node3)).
By using pattern matching you could simply do:
treeRev(leaf(Leaf), leaf(Leaf)).
treeRev(node1(Leaf, Node1), node1(Leaf, Node2)) :-
    treeRev(Node1, Node2).
treeRev(node2(Leaf, Node1, Node2), node2(Leaf, Node3, Node4)) :-
    treeRev(Node1, Node4),
    treeRev(Node2, Node3).
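With these clauses, a query in the question's representation behaves as intended (toplevel output shown for illustration):
?- treeRev(node2(1, leaf(2), leaf(3)), T).
T = node2(1, leaf(3), leaf(2)).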
"... then call the recursive function ..."
Prolog predicates are not functions. Writing node1(Leaf1,treeRev(Node1)) will not build a node with "the result of calling the treeRev function", as in other programming languages. Instead, Prolog predicates have extra arguments for the "result". You typically call the predicate and bind such "results" to a variable or unify it with a term.
You will need something like this for a binary node (not tested, and not following your teacher's strange and undocumented tree representation):
tree_mirrored(node(LeftTree, RightTree), node(RightMirrored, LeftMirrored)) :-
    tree_mirrored(LeftTree, LeftMirrored),
    tree_mirrored(RightTree, RightMirrored).
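To try this out, a clause for leaves is also needed; assuming a leaf/1 representation (hypothetical, since the original representation is undocumented):
tree_mirrored(leaf(X), leaf(X)).

?- tree_mirrored(node(leaf(a), node(leaf(b), leaf(c))), M).
M = node(node(leaf(c), leaf(b)), leaf(a)).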

Using "find_theorems" in Isabelle

I want to find theorems. I have read the section on find_theorems in the Isabelle/Isar reference manual:
find_theorems criteria
Retrieves facts from the theory or proof context matching all of given search
criteria. The criterion name: p selects all theorems whose fully qualified
name matches pattern p, which may contain "*" wildcards. The criteria intro,
elim, and dest select theorems that match the current goal as introduction,
elimination or destruction rules, respectively. The criterion solves returns
all rules that would directly solve the current goal. The criterion simp: t
selects all rewrite rules whose left-hand side matches the given term. The
criterion term t selects all theorems that contain the pattern t -- as usual,
patterns may contain occurrences of the dummy "_" , schematic variables, and
type constraints.
Criteria can be preceded by "-" to select theorems that do not match. Note
that giving the empty list of criteria yields all currently known facts. An
optional limit for the number of printed facts may be given; the default is 40.
By default, duplicates are removed from the search result. Use with_dups to
display duplicates.
As far as I understand, find_theorems is used in the find window of Isabelle/jEdit. The above does not help me find relevant theorems for the following situation (Lambda is a theory of the Nominal Isabelle extension; the tarball is here):
theory First
imports Lambda
begin
theorem "Lam [x].(Lam [y].(App (Var x)(Var y))) = Lam [y].(Lam [x].(App (Var y)(Var x)))"
When I try the search expression Lam, Isabelle/jEdit says
Inner syntax error: unexpected end of input
Failed to parse term
How can I make it look for all the theorems that contain the constant Lam?
Since Lam, like the ordinary lambda (%), is not a term on its own, you should add the remaining parts to get a proper term, which may contain wildcards. In your example, I would use
find_theorems "Lam [_]. _"
which gives lots of answers.
Typically this happens whenever special syntax was defined for some constant. But there is (almost) always an underlying ("raw") constant. To find out which constant provides the Lam [_]. _ syntax, you can Ctrl-click Lam (inside a proper term) within Isabelle/jEdit. This will jump to the definition of the underlying constant.
For Lam there is the additional complication that the binder syntax uses exactly the same string as the underlying constant, namely Lam, as can be seen at the place of definition:
nominal_datatype lam =
Var "name"
| App "lam" "lam"
| Lam x::"name" l::"lam" binds x in l ("Lam [_]. _" [100, 100] 100)
In such cases you can use the long name of the constant by prefixing it with the theory name, i.e., Lambda.Lam.
Note: The same works for binders like ALL x. P x (with underlying constant All), but not for the built-in %x. x.

How to prove by induction that a program does something?

I have a computer program that reads in an array of chars containing operands and operators written in postfix notation. The program then scans through the array and works out the result by using a stack, as shown:
get next char in array until there are no more
    if char is operand
        push operand onto stack
    if char is operator
        a = pop from stack
        b = pop from stack
        perform operation using a and b as arguments
        push result
result = pop from stack
How do I prove by induction that this program correctly evaluates any postfix expression? (taken from exercise 4.16 Algorithms in Java (Sedgewick 2003))
I'm not sure which expressions you need to prove the algorithm against. But if they look like typical RPN expressions, you'll need to establish something like the following:
1) algorithm works for 2 operands (and one operator)
and
algorithm works for 3 operands (and 2 operators)
==> that would be your base case
2) if algorithm works for n operands (and n-1 operators)
then it would have to work for n+1 operands.
==> that would be the inductive part of the proof
Good luck ;-)
Take heart concerning mathematical proofs, and also their sometimes confusing names. In the case of an inductive proof one is still expected to "figure out" something (some fact or some rule), sometimes by deductive logic, but then these facts and rules put together constitute a broader truth, by induction. That is: because the base case is established as true, and because one has proved that if X is true for an "n" case then X is also true for the "n+1" case, we don't need to try every case (which could be a big number, or even infinite).
Back to the stack-based expression evaluator... One final hint (in addition to Captain Segfault's excellent explanation, you're going to feel over-informed...).
The RPN expressions are such that:
- they have one fewer operator than operands
- they never provide an operator when the stack has fewer than 2 operands
  in it (if they didn't, this would be the equivalent of an unbalanced
  parenthesis situation in a plain expression, i.e. an invalid expression).
Assuming that the expression is valid (and hence doesn't provide too many operators too soon), the order in which the operands/operators are fed into the algorithm does not matter; each token always leaves the system in a stable situation:
- either with one more operand on the stack (balanced by the knowledge that one fewer operand is still to come), or
- with one fewer operand on the stack (balanced by the knowledge that one fewer operator is still to come).
So the order doesn't matter.
You know what induction is? Do you generally see how the algorithm works? (even if you can't prove it yet?)
Your induction hypothesis should say that, after processing the N'th character, the stack is "correct". A "correct" stack for a full RPN expression has just one element (the answer). For a partial RPN expression the stack has several elements.
Your proof is then to think of this algorithm (minus the result = pop from stack line) as a parser that turns partial RPN expressions into stacks, and prove that it turns them into the correct stacks.
It might help to look at your definition of an RPN expression and work backwards from it.
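To make the "parser that turns partial RPN expressions into stacks" reading concrete, here is one possible sketch in Prolog (the language used elsewhere on this page); rpn_eval/3 and the token-list format are invented for illustration, with numbers as operands and atoms like + as operators:
% rpn_eval(+Tokens, -Result): evaluate a postfix token list.
rpn_eval(Tokens, Result) :-
    rpn_eval(Tokens, [], [Result]).    % a correct final stack holds exactly one value

% rpn_eval(+Tokens, +Stack0, -Stack): the stack after each token is "correct".
rpn_eval([], Stack, Stack).
rpn_eval([T|Ts], Stack0, Stack) :-
    (   number(T)                      % operand: push it
    ->  rpn_eval(Ts, [T|Stack0], Stack)
    ;   Stack0 = [A,B|Rest],           % operator: pop two operands
        Expr =.. [T, B, A],            % build e.g. 3 + 4
        V is Expr,                     % evaluate and push the result
        rpn_eval(Ts, [V|Rest], Stack)
    ).
The induction hypothesis then says that rpn_eval/3 maps every partial expression to its correct stack; for example, ?- rpn_eval([3,4,+,2,*], R). yields R = 14.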
