Avoiding infinite recursion while still using only unbound parameter passing

I have the following working program: (It can be tested on this site: http://swish.swi-prolog.org, I've removed the direct link to a saved program, because I noticed that anybody can edit it.)
It searches for a path between two points in an undirected graph. The important part is that the result is returned in the scope of the "main" predicate. (In the Track variable)
edge(a, b).
edge(b, c).
edge(d, b).
edge(d, e).
edge(v, w).
connected(Y, X) :-
(
edge(X, Y);
edge(Y, X)
).
path(X, X, _, []) :-
connected(X, _).
path(X, Y, _, [X, Y]) :-
connected(Y, X).
path(X, Z, Visited, [X|Track]) :-
connected(X, Y),
not(member(X, Visited)),
path(Y, Z, [X|Visited], Track).
main(X, Y) :-
path(X, Y, [], Track),
print(Track),
!.
Results:
?- main(a, e).
[a, b, d, e]
true
?- main(c, c).
[]
true
?- main(b, w).
false
My questions:
The list of visited nodes is passed down to the predicates in two different ways: in the bound Visited variable and in the unbound Track variable. What are the names of these two different forms of parameter passing?
Originally I wanted to use only the unbound form (the Track variable), so that the result stays in the scope of the main predicate. But I had to add the Visited variable too, because the member check didn't work on the Track variable (I don't know why). Is it possible to make it work by passing only Track, in an unbound way, without the Visited variable?
Many thanks!

The short answer: no, you cannot avoid the extra argument without making everything much messier. This is because this particular algorithm for finding a path needs to keep a state; basically, your extra argument is your state.
There might be other ways to keep state, such as a global mutable variable or dynamically changing the Prolog database, but both are more difficult to get right and involve more code.
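For illustration only (this sketch is mine, not part of the original answer), here is roughly what the database-based variant could look like, reusing connected/2 from the question. Note how much extra machinery is needed just to keep the visited set in sync with backtracking:

:- dynamic(visited/1).

% mark/1 asserts a fact on the way forward and retracts it again on
% backtracking, so the database mirrors the current branch of the search.
mark(X) :- assertz(visited(X)).
mark(X) :- retract(visited(X)), fail.

path_db(From, To, Track) :-
    retractall(visited(_)),      % reset any leftover state
    path_db_(From, To, Track).

path_db_(X, X, []).
path_db_(X, Z, [X|Track]) :-
    \+ visited(X),
    mark(X),
    connected(X, Y),
    path_db_(Y, Z, Track).

This is not an exact drop-in replacement for path/4 (for instance, the final node does not appear in Track), but it shows why the plain extra argument is usually the cleaner choice.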
This extra argument is often called an accumulator, because it accumulates something as you go down the proof tree. The simplest example would be traversing a list:
foo([]).
foo([X|Xs]) :-
foo(Xs).
This is fine, unless you need to know what elements you have already seen before getting here:
bar(List) :-
bar_(List, []).
bar_([], _).
bar_([X|Xs], Acc) :-
/* Acc is a list of all elements so far */
bar_(Xs, [X|Acc]).
This is about the same as what you are doing in your code. And if you look at this in particular:
path(X, Z, Visited, /* here */[X|Track]) :-
connected(X, Y),
not(member(X, Visited)),
path(Y, Z, [X|Visited], /* and here */Track).
The last argument of path/4 gains one element for each level you go up the proof tree. And, of course, the third argument is one element longer at each level down (it grows as you go down the proof tree).
For example, you can reverse a list by adding another argument to the silly bar predicate above:
list_reverse(L, R) :-
list_reverse_(L, [], R).
list_reverse_([], R, R).
list_reverse_([X|Xs], R0, R) :-
list_reverse_(Xs, [X|R0], R).
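A quick check at the top level (my own example query; output roughly as SWI-Prolog would print it):

?- list_reverse([a,b,c], R).
R = [c,b,a].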
I am not aware of any special name for the last argument, the one that is free at the beginning and holds the solution at the end. In some cases it could be an output argument, because it is meant to capture the output, after transforming the input somehow. There are many cases where it is better to avoid thinking about arguments as strictly input or output arguments. For example, length/2:
?- length([a,b], N).
N = 2.
?- length(L, 3).
L = [_2092, _2098, _2104].
?- length(L, N).
L = [],
N = 0 ;
L = [_2122],
N = 1 ;
L = [_2122, _2128],
N = 2 . % and so on
Note: there are quite a few minor issues with your code that are not critical, and giving that much advice is not a good idea on Stack Overflow. If you want, you could submit this as a question on Code Review.
I also provided a somewhat simpler solution in another answer. Note the use of term_expansion/2 for making directed edges from undirected edges at compile time. More importantly: you don't need main/2, just call the predicate you want from the top level. When you drop the cut, you will get all possible solutions when one or both of your From and To arguments are free variables.

Related

Recursive addition in Prolog

Knowledge Base
add(0,Y,Y). % clause 1
add(succ(X),Y,succ(Z)) :- add(X,Y,Z). % clause 2
Query
add(succ(succ(succ(0))), succ(succ(0)), R)
Trace
Call: (6) add(succ(succ(succ(0))), succ(succ(0)), R)
Call: (7) add(succ(succ(0)), succ(succ(0)), _G648)
Call: (8) add(succ(0), succ(succ(0)), _G650)
Call: (9) add(0, succ(succ(0)), _G652)
Exit: (9) add(0, succ(succ(0)), succ(succ(0)))
Exit: (8) add(succ(0), succ(succ(0)), succ(succ(succ(0))))
Exit: (7) add(succ(succ(0)), succ(succ(0)), succ(succ(succ(succ(0)))))
Exit: (6) add(succ(succ(succ(0))), succ(succ(0)), succ(succ(succ(succ(succ(0))))))
My Question
I see how the recursive call in clause 2 strips the outermost succ()
at each call for argument 1.
I see how it adds an outer succ() to argument 3 at each call.
I see when the 1st argument as a result of these recursive calls
reaches 0. At that point, I see how the 1st clause copies the 2nd
argument to the 3rd argument.
This is where I get confused.
Once the 1st clause is executed, does the 2nd clause automatically
get executed as well, then adding succ() to the first argument?
Also, how does the program terminate, and why doesn't it just keep
adding succ() to the first and 3rd arguments infinitely?
Explanation from LearnPrologNow.com (which I don't understand)
Let’s go step by step through the way Prolog processes this query. The
trace and search tree for the query are given below.
The first argument is not 0 , which means that only the second clause
for add/3 can be used. This leads to a recursive call of add/3 . The
outermost succ functor is stripped off the first argument of the
original query, and the result becomes the first argument of the
recursive query. The second argument is passed on unchanged to the
recursive query, and the third argument of the recursive query is a
variable, the internal variable _G648 in the trace given below. Note
that _G648 is not instantiated yet. However it shares values with R
(the variable that we used as the third argument in the original
query) because R was instantiated to succ(_G648) when the query was
unified with the head of the second clause. But that means that R is
not a completely uninstantiated variable anymore. It is now a complex
term, that has a (uninstantiated) variable as its argument.
The next two steps are essentially the same. With every step the first
argument becomes one layer of succ smaller; both the trace and the
search tree given below show this nicely. At the same time, a succ
functor is added to R at every step, but always leaving the innermost
variable uninstantiated. After the first recursive call R is
succ(_G648). After the second recursive call, _G648 is instantiated
with succ(_G650), so that R is succ(succ(_G650)). After the third
recursive call, _G650 is instantiated with succ(_G652), and R therefore
becomes succ(succ(succ(_G652))). The search tree shows this step by
step instantiation.
At this stage all succ functors have been stripped off the first
argument and we can apply the base clause. The third argument is
equated with the second argument, so the ‘hole’ (the uninstantiated
variable) in the complex term R is finally filled, and we are through.
Let us start by getting the terminology right.
These are the clauses, as you correctly indicate:
add(0, Y, Y).
add(succ(X), Y, succ(Z)) :- add(X, Y, Z).
Let us first read this program declaratively, just to make sure we understand its meaning correctly:
0 plus Y is Y. This makes sense.
If it is true that X plus Y is Z then it is true that the successor of X plus Y is the successor of Z.
This is a good way to read this definition, because it is sufficiently general to cover various modes of use. For example, let us start with the most general query, where all arguments are fresh variables:
?- add(X, Y, Z).
X = 0,
Y = Z ;
X = succ(0),
Z = succ(Y) ;
X = succ(succ(0)),
Z = succ(succ(Y)) .
In this case, there is nothing to "strip", since none of the arguments is instantiated. Yet, Prolog still reports very sensible answers that make clear for which terms the relation holds.
In your case, you are considering a different query (not a "predicate definition"!), namely the query:
?- add(succ(succ(succ(0))), succ(succ(0)), R).
R = succ(succ(succ(succ(succ(0))))).
This is simply a special case of the more general query shown above, and a natural consequence of your program.
We can also go in the other direction and generalize this query. For example, this is a generalization, because we replace one ground argument by a logical variable:
?- add(succ(succ(succ(0))), B, R).
R = succ(succ(succ(B))).
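We can also leave the first two arguments free and instantiate only the third one. This query is my own addition; the answers follow directly from the two clauses:

?- add(X, Y, succ(succ(0))).
X = 0,
Y = succ(succ(0)) ;
X = succ(0),
Y = succ(0) ;
X = succ(succ(0)),
Y = 0 ;
false.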
If you follow the explanation you posted, you will make your life very difficult, and arrive at a very limited view of logic programs: Realistically, you will only be able to trace a tiny fragment of modes in which you could use your predicates, and a procedural reading thus falls quite short of what you are actually describing.
If you really insist on a procedural reading, start with a simpler case first. For example, let us consider:
?- add(succ(0), succ(0), R).
To "step through" procedurally, we can proceed as follows:
Does the first clause match? (Note that "matching" is already a limited reading: Prolog actually applies unification, and a procedural reading leads us away from this generality.)
Answer: No, because succ(_) does not unify with 0. So only the second clause applies.
The second clause only holds if its body holds, and in this case if add(0, succ(0), Z) holds. And this holds (by applying the first clause) if Z is succ(0) and R is succ(Z).
Therefore, one answer is R = succ(succ(0)). This answer is reported.
Are there other solutions? These are only reported on backtracking.
Answer: No, there are no other solutions, because no further clause matches.
I leave it as an exercise to apply this painstaking method to the more complex query shown in the book. It is straightforward to do, but will increasingly lead you away from the most valuable aspects of logic programs, found in their generality and elegant declarative expression.
Your question regarding termination is both subtle and insightful. Note that we must distinguish between existential and universal termination in Prolog.
For example, consider again the most general query shown above: It yields answers, but it does not terminate. For an answer to be reported, it is enough that an answer substitution is found that makes the query true. This is the case in your example. Alternatives, if any potentially remain, are tried and reported on backtracking.
You can always try the following to test termination of your query: Simply append false/0, for example:
?- add(X, Y, Z), false.
nontermination
This lets you focus on termination properties without caring about concrete answers.
Note also that add/3 is a terrible name for a relation: an imperative always implies a direction, but this relation is in fact much more general and usable even if none of the arguments are instantiated! A good predicate name should reflect this generality.
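Purely as an illustration (the answer above does not prescribe a particular name), a more relational name makes this generality explicit:

% Same clauses as add/3, renamed to describe a relation rather than a command.
nat_nat_sum(0, N, N).
nat_nat_sum(succ(M), N, succ(S)) :-
    nat_nat_sum(M, N, S).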

Recursive search & Accumulators & Counters in Prolog

After a long search on Google I couldn't find a clear answer to this:
In Prolog, doing recursion by itself is easy. My main problem is understanding where to place accumulators and counters. Here is an example:
nXList(N,X,[X|T]):-
N \= 0,
N1 is N-1,
nXList(N1,X,T).
nXList(0,_,[]).
media([X|L], N, Soma):-
media(L, N1, Soma1),
N is N1 + 1,
Soma is Soma1 + X.
media([], 0, 0).
On the first example I used a counter BEFORE the recursion, but in the second example I use it AFTER. The reason I did it that way is basically trial and error, because I really can't understand why it sometimes goes before and sometimes after...
Maybe the central point of your question is in the preamble:
In Prolog, doing recursion by itself is easy
It's not easy, it's mandatory. We don't have loops, because we don't have a way to control them. Variables are assigned once.
So, I think the practical answer is rather simple: if a 'predicate' (like is/2) needs a variable's value, you must ground that variable before calling it.
To me, it helps to consider a Prolog program (a set of clauses) as grammar productions, and clause arguments as attributes, either inherited (values computed before the 'instruction pointer') or synthesized (values computed 'here', to be returned).
update: Most importantly, if the recursive call is not last, the predicate is not tail recursive. So, having anything after the recursive call should be avoided if possible. Notice that both definitions in the answer by user false are tail recursive, and that's precisely due to the fact that the arithmetic conditions there are placed before the recursive call, in both of them. That's so basic, that we have to make an effort to notice it explicitly.
Sometimes we count down, sometimes we count up. I discuss this in another answer at length. It talks of accumulators, befores and afters. :)
There's also this thing called "associativity" of an operation (say, +), where
a+(b+(c+....)) == (a+b)+(c+...)
that lets us regroup and (partially) calculate sooner rather than later. As soon as possible, but not sooner.
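To make the before/after distinction concrete, here is a small sketch of my own (the predicate names are made up) contrasting the two placements on list length:

% Arithmetic AFTER the recursive call: not tail recursive,
% the additions happen on the way back out of the recursion.
len_after([], 0).
len_after([_|Xs], N) :-
    len_after(Xs, N0),
    N is N0 + 1.

% Arithmetic BEFORE the recursive call, with an accumulator:
% tail recursive, the running count is ground at every step.
len_before(Xs, N) :-
    len_before_(Xs, 0, N).

len_before_([], N, N).
len_before_([_|Xs], N0, N) :-
    N1 is N0 + 1,
    len_before_(Xs, N1, N).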
Short answer: you can place such arithmetical relations both before and after the recursive call. At least, if you are using constraints in place of (is)/2. The only difference may be in termination and errors.
So let's see how your predicates can be defined with constraints:
:- use_module(library(clpfd)).
nXList(0,_,[]).
nXList(N,X,[X|T]):-
N #> 0,
N1 #= N-1,
nXList(N1,X,T).
media([], 0, 0).
media([X|L], N, Soma):-
N #> 0,
N #= N1 + 1,
Soma #= Soma1 + X,
media(L, N1, Soma1).
You can now use these definitions in a much more general way, say:
?- nXList(3, X, T).
T = [X, X, X]
; false.
?- media(Xs, 3, S).
Xs = [_A, _B, _C], _D+_A#=S, _C+_B#=_D
; false.
... nXList/3 can be more compactly expressed by:
..., length(T, N), maplist(=(X), T), ...
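Spelled out as a complete predicate (my sketch; length/2 and maplist/2 are assumed to behave as in SWI-Prolog):

nXList(N, X, T) :-
    length(T, N),
    maplist(=(X), T).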

How to create a Prolog predicate that removes 2nd to last element?

I need help creating a predicate that removes the 2nd to last element of a list and returns that list written in Prolog. So far I have
remove([],[]).
remove([X],[X]).
remove([X,Y],[Y]).
That is as far as I've gotten. I need to figure out a way to recursively go through the list until it is only two elements long and then reassemble the list to be returned. Help with explanation if you can.
Your definition so far is perfect! It is a little bit too specialized, so we will have to extend it. But your program is a solid foundation.
You "only" need to extend it.
remove([],[]).
remove([X],[X]).
remove([_,X],[X]).
remove([X,_,Y], [X,Y]).
remove([X,Y,_,Z], [X,Y,Z]).
remove([X,Y,Z,_,Z2], [X,Y,Z,Z2]).
...
OK, you see how to continue. Now, let us identify common cases:
...
remove([X,Y,_,Z], [X,Y,Z]).
% ^^^ ^^^
remove([X,Y,Z,_,Z2], [X,Y,Z,Z2]).
% ^^^^^ ^^^^^
...
So, we have a common list prefix. We could say:
Whenever we have a list and its removed list, we can conclude that by adding one element on both sides, we get a longer list of that kind.
remove([X|Xs], [X|Ys]) :-
remove(Xs,Ys).
Please note that the :- is really an arrow. It means: Provided what is true on the right-hand side, also what is found on the left-hand side will be true.
H-h-hold a minute! Is this really the case? How to test this? (If you test just for positive cases, you will always get a "yes".) We don't have the time to conjure up some test cases, do we? So let us let Prolog do the hard work for us! So, Prolog, fill in the blanks!
remove([],[]).
remove([X],[X]).
remove([_,X],[X]).
remove([X|Xs], [X|Ys]) :-
remove(Xs,Ys).
?- remove(Xs,Ys). % most general goal
Xs = [], Ys = []
; Xs = [A], Ys = [A]
; Xs = [_,A], Ys = [A]
; Xs = [A], Ys = [A] % redundant, but OK
; Xs = [A,B], Ys = [A,B], unexpected % WRONG
; Xs = [A,_,B], Ys = [A,B]
; Xs = [A,B], Ys = [A,B], unexpected % WRONG again!
; Xs = [A,B,C], Ys = [A,B,C], unexpected % WRONG
; Xs = [A,B,_,C], Ys = [A,B,C]
; ... .
It is tempting to reject everything and start again from scratch.
But in Prolog you can do better than that, so let's calm down to estimate the actual damage:
Some answers are incorrect. And some answers are correct.
It could be that our current definition is just a little bit too general.
To better understand the situation, I will look at the unexpected success remove([1,2],[1,2]) in detail. Who is the culprit for it?
Even with the following program slice/fragment, the goal remove([1,2],[1,2]) succeeds.
remove([],[]).
remove([X],[X]) :- false.
remove([_,X],[X]) :- false.
remove([X|Xs], [X|Ys]) :-
remove(Xs,Ys).
While this is a specialization of our program, it reads: remove/2 holds for any two identical lists. That can't be true! To fix the problem we have to do something in the remaining visible part. And we have to specialize it. What is problematic here is that the recursive rule also holds for:
remove([1,2], [1,2]) :-
remove([2], [2]).
remove([2], [2]) :-
remove([], []).
That kind of conclusion must be avoided. We need to restrict the rule to those cases where the list has at least two further elements, by adding another goal, (=)/2.
remove([X|Xs], [X|Ys]) :-
Xs = [_,_|_],
remove(Xs, Ys).
So what was our error? In the informal
Whenever we have a list and its removed list, ...
the term "removed list" was ambiguous. It could mean that we are referring here to the relation remove/2 (which is incorrect, because remove([],[]) holds, but still nothing is removed), or we are referring here to a list with an element removed. Such errors inevitably happen in programming since you want to keep your intuitions afresh by using a less formal language than Prolog itself.
For reference, here again (and for comparison with other definitions) is the final definition:
remove([],[]).
remove([X],[X]).
remove([_,X],[X]).
remove([X|Xs], [X|Ys]) :-
Xs = [_,_|_],
remove(Xs,Ys).
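A couple of example queries against this final definition (queries added by me; top-level output condensed):

?- remove([1,2,3,4], R).
R = [1,2,4].

?- remove([a,b], R).
R = [b].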
There are more efficient ways to do this, but this is the most straightforward way.
I will try to provide another solution which is easier to construct if you only consider the meaning of "second last element", and describe each possible case explicitly:
rem_2nd_last([], []).
rem_2nd_last([First|Rest], R) :-
rem_2nd_last_2(Rest, First, R). % "Lag" the list once
rem_2nd_last_2([], First, [First]).
rem_2nd_last_2([Second|Rest], First, R) :-
rem_2nd_last_3(Rest, Second, First, R). % "Lag" the list twice
rem_2nd_last_3([], Last, _SecondLast, [Last]). % End of list: drop second last
rem_2nd_last_3([This|Rest], Prev, PrevPrev, [PrevPrev|R]) :-
rem_2nd_last_3(Rest, This, Prev, R). % Rest of list
The explanation is hiding in plain view in the definition of the three predicates.
"Lagging" is a way to reach back from the end of the list but keep the predicate always deterministic. You just grab one element and pass the rest of the list as the first argument of a helper predicate. One way, for example, to define last/2, is:
last([H|T], Last) :-
last_1(T, H, Last).
last_1([], Last, Last).
last_1([H|T], _, Last) :-
last_1(T, H, Last).
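For example (my own query; it succeeds deterministically, which is exactly what the lagging buys us):

?- rem_2nd_last([a,b,c,d], R).
R = [a,b,d].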

How to write a Prolog predicate to split a list into a list of paired elements?

This was a question on a sample exam I did.
Give the definition of a Prolog predicate split_into_pairs that takes as arguments a list and returns as a result a list which consists of paired elements. For example, split_into_pairs([1,2,3,4,5,6],X) would return as a result X=[[1,2],[3,4],[5,6]]. Similarly, split_into_pairs([a,2,3,4,a,a,a,a],X) would return as result X=[[a,2],[3,4],[a,a],[a,a]] while split_into_pairs([1,2,3],X) would return No.
It's not meant to be done using built-in predicates I believe, but it shouldn't need to be too complicated either as it was only worth 8/120 marks.
I'm not sure what it should do for a list of two elements, so I guess that would either be not specified so that it returns no, or split_into_pairs([A,B],[[A,B]]).
My main issue is how to do the recursive call properly, without having extra brackets, so that I don't end up with something like X=[[A,B],[[C,D],[[E,F]]]].
My most recent attempts have been variations of the code below, but obviously this is incorrect.
split_into_pairs([A,B],[A,B])
split_into_pairs([A,B|T], X) :- split_into_pairs(T, XX), X is [A,B|XX]
This is a relatively straightforward recursion:
split_into_pairs([], []).
split_into_pairs([First, Second | Tail], [[First, Second] | Rest]) :-
split_into_pairs(Tail, Rest).
The first rule says that an empty list is already split into pairs; the second requires that the source list has at least two items, pairs them up, and inserts the result of pairing up the tail list behind them.
Your solution could be fixed as well by adding square brackets in the result, and moving the second part of the rule into the header, like this:
split_into_pairs([A,B],[[A,B]]).
split_into_pairs([A,B|T], [[A,B]|XX]) :- split_into_pairs(T, XX).
Note that this solution does not consider an empty list a list of pairs, so split_into_pairs([], X) would fail.
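For example (queries added by me; top-level output condensed):

?- split_into_pairs([1,2,3,4,5,6], X).
X = [[1,2],[3,4],[5,6]].

?- split_into_pairs([], X).
false.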
Your code is almost correct. It has obvious syntax issues, and several substantive issues:
split_into_pairs([A,B], [ [ A,B ] ] ):- !.
split_into_pairs([A,B|T], X) :- split_into_pairs(T, XX),
X = [ [ A,B ] | XX ] .
Now it is correct: = is used instead of is (which is normally used with arithmetic operations), both clauses are properly terminated by dots, and the first one has a cut added into it, to make the predicate deterministic, to produce only one result. The correct structure is produced by enclosing each pair of elements into a list of their own, with brackets.
This is inefficient though, because it describes a recursive process - it constructs the result on the way back from the base case.
The efficient definition works on the way forward from the starting case:
split_into_pairs([A,B],[[A,B]]):- !.
split_into_pairs([A,B|T], X) :- X = [[A,B]|XX], split_into_pairs(T, XX).
This is the essence of the tail recursion modulo cons optimization technique, which turns recursive processes into iterative ones - ones that are able to run in constant stack space. It is very similar to the tail-recursion-with-accumulator technique.
The cut had to be introduced because the two clauses are not mutually exclusive: a term unifying with [A,B] could also be unifiable with [A,B|T], in case T=[]. We can get rid of the cut by making the two clauses mutually exclusive:
split_into_pairs([], [] ).
split_into_pairs([A,B|T], [[A,B]|XX]):- split_into_pairs(T, XX).
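With this last version (example queries of my own):

?- split_into_pairs([a,2,3,4,a,a,a,a], X).
X = [[a,2],[3,4],[a,a],[a,a]].

?- split_into_pairs([1,2,3], X).
false.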

GNU Prolog - Recursion issue (easy?)

OK, so I have this:
edu_less(hs,college).
edu_less(college,masters).
edu_less(masters,phd).
I need to write a function to tell if something is less than the other. The predicate is
edu_le.
So if I put edu_le(hs,phd), it should return yes.
I came up with this.
edu_le(A,B) :- A = B.
edu_le(A,B) :- edu_less(A,B).
edu_le(A,B) :- edu_less(A,C), edu_le(C,B).
I really don't want it to go through everything and return multiple answers.
Is it possible to only return yes or no if it finds that it is in fact less than or equal to the 2nd argument?
So basically, if I use the example edu_le(hs,phd) again, then because hs is less than college, and college is less than masters, and masters is less than phd, hs must be less than phd, and it would say yes.
Sorry, really new to Prolog, still trying to get the hang of this.
In the predicate definition
edu_le(A,B) :- A = B.
edu_le(A,B) :- edu_less(A,B).
edu_le(A,B) :- edu_less(A,C), edu_le(C,B).
the second clause is superfluous and causes repeated generation of answers. Use
edu_le(A,B) :- A = B.
edu_le(A,B) :- edu_less(A,C), edu_le(C,B).
This gives you one true answer, then no more answers (false) on backtracking. You can use a cut in the first clause, but then generating won't work anymore.
?- edu_le(hs,X).
X = hs ;
X = college ;
X = masters ;
X = phd ;
false.
becomes incomplete:
?- edu_le(hs,X).
X = hs.
As mat suggested, use once/1 instead. In a good Prolog implementation, this predicate works as if your program had cuts in strategic places, speeding up your program without disturbing its logical semantics.
The most practical way to write predicates like that is to use the cut (!). The cut causes further clauses not to be considered when backtracking. You would write your predicate as follows:
edu_le(A,B) :- A = B, !.
edu_le(A,B) :- edu_less(A,B), !.
edu_le(A,B) :- edu_less(A,C), edu_le(C,B).
The last clause does not need a cut because there are no further clauses to consider anyway. The cut is placed after any tests to determine whether the clause should succeed.
Logic programming purists disapprove of the cut, because it makes the meaning of a predicate depend on the ordering of the clauses, which is unlike logic in mathematics.
!/0 also makes this program incomplete; consider, for example, the most general query with both versions:
?- edu_le(X, Y).
It is often better to use once/1 if you only want a single proof of a particular goal:
?- once(edu_le(hs, phd)).
I would suggest that you NOT follow the path proposed by Juho Östman and instead keep purity - otherwise, why use Prolog in the first place? If you are too lenient about sticking to the logical paradigm, you obtain some unpleasant results. In this case, Juho's predicate is definitely different from yours, and I'll show you why.
First, just drop the useless edu_le(A,B) :- edu_less(A,B). rule, as larsmans suggests. You will obtain a less redundant version of your original predicate:
edu_le1(A, A).
edu_le1(A, B) :- edu_less(A, C), edu_le1(C, B).
It behaves just like edu_le, meaning: given an arbitrary query, it produces exactly the same answers, except for duplicates (edu_le1 has fewer). You may just be happy with it, but it still has some redundant answers that you may not like; e.g., under SWI:
?- edu_le1(hs, hs)
true ;
false.
Now you may say you do not like it because it still has the redundant false, but if you use Juho's predicate instead (without the useless rule):
edu_le2(A, A) :- !.
edu_le2(A, B) :- edu_less(A, C), edu_le2(C, B).
it's true that you eliminate the useless final false:
?- edu_le2(hs, hs)
true.
?-
but you lose more than that: You lose, as mat remarks, the possibility of generating all the solutions when one variable is not instantiated:
?- edu_le1(hs, B) %same, with more copies, for edu_le
B = hs ;
B = college ;
B = masters ;
B = phd ;
false.
?- edu_le2(hs, B)
B = hs. %bad!
?-
In other words, the latter predicate is NOT equivalent to the former: edu_le and edu_le1 have type edu_le(?A, ?B), while edu_le2 has type edu_le2(+A, +B) (see [1] for the meaning). Be sure: edu_le2 is less useful because it does fewer things, and thus can be reused in fewer contexts. This is because the cut in edu_le2 is a red cut, i.e., a cut that changes the meaning of the predicate where it is introduced. You may nevertheless be content with it, given that you understand the difference between the two. It all depends on what you want to do with it.
If you want to get the best of both worlds, you need to introduce into edu_le1 a proper green cut that lowers the redundancy when A and B are completely instantiated to terms. To that end, you must check that A and B are instantiated to the same term before cutting. You cannot do it with =, because = does not check, but unifies. The right operator is ==:
edu_le3(A, B) :- (A == B -> ! ; true), A = B.
edu_le3(A, B) :- edu_less(A, C), edu_le3(C, B).
Note that the additional cut in the first rule is active only when A and B happen to be the same term. Now that the cut is a proper green cut, the predicate works also in the most general cases as your original one:
?- edu_le3(A, A).
true.
?- edu_le3(A, B). %note that A and B are not the same term
A = B ;
A = hs,
B = college ;
A = hs,
B = masters ;
A = hs,
B = phd ;
A = college,
B = masters ;
A = college,
B = phd ;
A = masters,
B = phd ;
false.
?-
with Prolog backtracking through all the solutions.
I don't think there is any way to eliminate the last false without introducing too strong a dependency on edu_less. This is because we must keep open the possibility that there is another edu_less fact to explore, in case you decide later to enrich it with more ground facts. So, in my opinion, this is the best you can have.
[1] SWI Prolog reference manual, section 4.1.

Resources