Searching the level of a node in Prolog

Assume we have a binary search tree, and the rule above(X,Y), meaning that X is directly above Y. I also created the rule root(X), meaning that X has no parent.
Then I was trying to figure out the depth of a node in this tree.
Assume the root node of the tree is "r", so I have the fact level(r,0). In order to implement the rule level(N,D) :- , I was thinking that there should be a recursion here.
Thus, I tried
level(N,D) :- \+ root(N), above(X,N), D is D+1, level(X,D).
So if N is not a root, there is a node X above N, the level D is increased by one, and then we recurse. But when I tested this, it only works for the root case. When I created more facts, such as node "s" being the left child of node "r", and queried level(s,D), it returned "no". I traced the query, and it shows:
1 1 Call: level(s,_16) ?
1 1 Fail: level(s,_16) ?
I am just confused about why it fails when I call level(s,D).

There are some problems with your program:
In Prolog you cannot write something like D is D+1, because a variable can only be assigned one value;
at the moment you call D is D+1, D is not yet instantiated, so it will probably cause an error; and
you never state (at least not in the visible code) that the level/2 of the root is 0.
A solution is to first state that the level of any root is 0:
level(N,0) :-
    root(N).
Now we have to define the inductive case: first we look for a parent using the above/2 predicate. Strictly speaking, a check that N is not a root/1 is not necessary, because the existence of an above(P,N) fact already implies that N has a parent (and thus is not a root). Next we determine the level LP of that parent, and finally we calculate the level of our node by stating that L is LP+1, where L is the level of N and LP the level of P:
level(N,L) :-
    above(P,N),
    level(P,LP),
    L is LP+1.
Or putting it all together:
level(N,0) :-
    root(N).
level(N,L) :-
    above(P,N),
    level(P,LP),
    L is LP+1.
Since you did not provide a sample tree, I have no means to test whether this predicate behaves as you expect it to.
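For a quick sanity check, though, one could assume a small tree; the facts below are made up purely for illustration:

root(r).
above(r, s).    % s is a child of r
above(r, t).    % t is a child of r
above(s, u).    % u is a grandchild of r

% ?- level(r, D).   gives D = 0
% ?- level(s, D).   gives D = 1
% ?- level(u, D).   gives D = 2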
About root/1
Note that by writing root/1, you introduce data duplication: you can simply write:
root(R) :-
    \+ above(_,R).

Related

Equating nodes in prolog?

I'm working on a small function which checks to see if a tree is just a reversed version of another tree.
For example,
  1          1
 2 3   =    3 2
1              1
My code is just versions of the following:
treeRev(leaf(Leaf1), leaf(Leaf2)) :-
    leaf(Leaf1) is leaf(Leaf2).
treeRev(node1(Leaf1, Node1), node1(Leaf2, Node2)) :-
    node1(Leaf1, treeRev(Node1)) is node1(Leaf2, treeRev(Node2)).
treeRev(node2(Leaf1, Node1, Node2), node2(Leaf2, Node3, Node4)) :-
    node2(Leaf1, treeRev(Node1), treeRev(Node2)) is
        node2(Leaf2, treeRev(Node4), treeRev(Node3)).
My reasoning is as follows:
The base case is that the two leaves are equal, which just returns true. If there is one node, check that the leaves are equal and call the function recursively on the node.
If there are two nodes, check that the trees are equal, and then call the recursive function after having flipped the nodes from the second tree.
My issue is that I keep getting the error
ERROR: is/2: Arithmetic: `leaf/1' is not a function
Thing is, I don't get this error when using other operations on the tree. Any advice on how to get around this? The only limitation imposed is that I can't use =.
I also figured, from searches on Google and Stack Overflow, that the most probable cause is that the two sides of the is don't have the same "type". The way I see it, though, that shouldn't be the case here, since I have almost exactly the same thing on both sides.
Thank you for reading, and any help is greatly appreciated :)
The is/2 predicate is used for arithmetic. It evaluates the expression in its second argument and unifies the result with its first argument. For example:
X is 1+(2*Y)/2, where Y must already be instantiated so that it has a value (otherwise the expression cannot be evaluated and an instantiation error is thrown).
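The difference shows up directly at the top level, for instance (a quick sketch):

?- X is 1+2.
X = 3.

?- X = 1+2.
X = 1+2.

Here is/2 evaluates the arithmetic expression, while =/2 only unifies X with the term 1+2 without evaluating anything.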
In your case you can't use is/2, since you don't want to calculate any arithmetic expression (that is where the error comes from). What you need is unification: you need to unify a term (e.g. a leaf or a node) with another term by using =.
For example:
treeRev(leaf(Leaf1), leaf(Leaf2)) :-
    leaf(Leaf1) = leaf(Leaf2).
treeRev(node1(Leaf1, Node1), node1(Leaf2, Node2)) :-
    node1(Leaf1, treeRev(Node1)) = node1(Leaf2, treeRev(Node2)).
treeRev(node2(Leaf1, Node1, Node2), node2(Leaf2, Node3, Node4)) :-
    node2(Leaf1, treeRev(Node1), treeRev(Node2)) =
        node2(Leaf2, treeRev(Node4), treeRev(Node3)).
By using pattern matching you could simply do:
treeRev(leaf(Leaf2), leaf(Leaf2)).
treeRev(node1(Leaf2, Node1), node1(Leaf2, Node2)) :-
    treeRev(Node1, Node2).
treeRev(node2(Leaf2, Node1, Node2), node2(Leaf2, Node3, Node4)) :-
    treeRev(Node1, Node4),
    treeRev(Node2, Node3).
"... then call the recursive function ..."
Prolog predicates are not functions. Writing node1(Leaf1,treeRev(Node1)) will not build a node with "the result of calling the treeRev function", as in other programming languages. Instead, Prolog predicates have extra arguments for the "result". You typically call the predicate and bind such "results" to a variable or unify it with a term.
You will need something like this for a binary node (not tested, and not following your teacher's strange and undocumented tree representation):
tree_mirrored(node(LeftTree, RightTree), node(RightMirrored, LeftMirrored)) :-
    tree_mirrored(LeftTree, LeftMirrored),
    tree_mirrored(RightTree, RightMirrored).
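For completeness, here is a sketch of the whole predicate using the leaf/1, node1/2 and node2/3 terms from the question (also not tested, and the sample query is made up):

tree_mirrored(leaf(L), leaf(L)).
tree_mirrored(node1(L, T), node1(L, TM)) :-
    tree_mirrored(T, TM).
tree_mirrored(node2(L, Left, Right), node2(L, RightM, LeftM)) :-
    tree_mirrored(Left, LeftM),
    tree_mirrored(Right, RightM).

% Example with an assumed tree:
% ?- tree_mirrored(node2(1, node1(2, leaf(1)), leaf(3)), M).
% M = node2(1, leaf(3), node1(2, leaf(1))).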

How could I calculate the number of recursions that a recursive rule does?

I am dealing with a problem: I want to calculate how many recursions a recursive rule in my code performs.
My program examines whether an object is a component of a piece of computer hardware or not (through the component(X,Y) predicate), e.g. component(computer,motherboard) -> true.
It even examines the case where an object is not directly a component but a subcomponent of another component, e.g. subcomponent(computer,ram) -> true (as ram is a component of motherboard and motherboard is a component of computer).
Because my code is over 400 lines, I will present just some facts of the form component(X,Y) and the rule subcomponent(X,Y).
So, some predicates are below:
component(computer,case).
component(computer,power_supply).
component(computer,motherboard).
component(computer,storage_devices).
component(computer,expansion_cards).
component(case,buttons).
component(case,fans).
component(case,ribbon_cables).
component(case,cables).
component(motherboard,cpu).
component(motherboard,chipset).
component(motherboard,ram).
component(motherboard,rom).
component(motherboard,heat_sink).
component(cpu,chip_carrier).
component(cpu,signal_pins).
component(cpu,control_pins).
component(cpu,voltage_pins).
component(cpu,capacitors).
component(cpu,resistors).
and so on....
My rule is:
subcomponent(X,Z):- component(X,Z).
subcomponent(X,Z):- component(X,Y),subcomponent(Y,Z).
Well, in order to calculate the number of components that a given component X has down to a given component Y - that is, the number of recursions that the recursive rule subcomponent(X,Y) performs - I have made some attempts, which failed. I present them below:
i)
number_of_components(X,Y,N,T):- T is N+1, subcomponent(X,Y).
number_of_components(X,Y,N,T):- M is N+1, subcomponent(X,Z), number_of_components(Z,Y,M,T).
In this case I get this error: "ERROR: is/2: Arguments are not sufficiently instantiated".
ii)
number_of_components(X,Y):- bagof(Y,subcomponent(X,Y),L),
    length(L,N),
    write(N).
In this case I get as a result either 1 or 11, and after this number just true, and that's all. No logic at all!
iii)
count_elems_acc([], Total, Total).
count_elems_acc([Head|Tail], Sum, Total) :-
    Count is Sum + 1,
    count_elems_acc(Tail, Count, Total).
number_of_components(X,Y):- bagof(Y,subcomponent(X,Y),L),
    count_elems_acc(L,0,Total),
    write(Total).
In this case I get as results numbers which are not right according to my knowledge base (or I am misreading them, because this way at least seems to have some logic).
So, what am I doing wrong and what should I write instead?
I am looking forward to reading your answers!
One thing you could do is iterative deepening with call_with_depth_limit/3. You call your predicate (in this case, subcomponent/2). You increase the limit until you get a result, and if you get a result, the limit is the deepest recursion level used. You can see the documentation for this.
However, there is something easier you can do. Your database can be represented as an unweighted, directed, acyclic graph. So, stick your whole database in a directed graph, as implemented in library(ugraphs), and find its transitive closure. In the transitive closure, the neighbours of a component are all its subcomponents. Done!
To make the graph:
findall(C-S, component(C, S), Es),
vertices_edges_to_ugraph([], Es, Graph)
To find the transitive closure:
transitive_closure(Graph, Closure)
And to find subcomponents:
neighbours(Component, Closure, Subcomponents)
The Subcomponents will be a list, and you can just get its length with length/2.
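Putting these three steps together, a minimal sketch could look like the following (the predicate name number_of_subcomponents/2 is mine, chosen so it does not clash with the predicates above):

:- use_module(library(ugraphs)).

number_of_subcomponents(Component, N) :-
    findall(C-S, component(C, S), Es),
    vertices_edges_to_ugraph([], Es, Graph),
    transitive_closure(Graph, Closure),
    neighbours(Component, Closure, Subcomponents),
    length(Subcomponents, N).

Note that this counts each distinct subcomponent once, whereas counting the solutions of subcomponent/2 counts one solution per derivation path through the component graph.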
EDIT
Some random thoughts: in your case, your database seems to describe a graph that is by definition both directed and acyclic (the component-subcomponent relationship goes strictly one way, right?). This is what makes it unnecessary to define your own walk through the graph, as for example nicely demonstrated in this great question and answers. So, you don't need to define your own recursive subcomponent predicate, etc.
One great thing about representing the database as a term when working with it, instead of keeping it as a flat table, is that it becomes trivial to write predicates that manipulate it: you get Prolog's backtracking for free. And since the S-representation of a graph that library(ugraphs) uses is well-suited for Prolog, you most probably end up with a more efficient program, too.
The number of calls of a predicate can be a difficult concept. I would say, use the tools that your system makes available.
?- profile(number_of_components(computer,X)).
20
=====================================================================
Total time: 0.00 seconds
=====================================================================
Predicate                        Box Entries =    Calls+Redos   Time
=====================================================================
$new_findall_bag/0                         1 =        1+0       0.0%
$add_findall_bag/1                        20 =       20+0       0.0%
$free_variable_set/3                       1 =        1+0       0.0%
...
so:count_elems_acc/3                       1 =        1+0       0.0%
so:subcomponent/2                         22 =        1+21      0.0%
so:component/2                            74 =       42+32      0.0%
so:number_of_components/2                  2 =        1+1       0.0%
On the other hand, what is of utmost importance is the relation among clause variables. This is the essence of Prolog. So, try to read - let's say, in plain English - your rules.
i) number_of_components(X,Y,N,T): what relation do N and T have to X? I cannot say. So
?- leash(-all),trace.
?- number_of_components(computer,Y,N,T).
Call: (7) so:number_of_components(computer, _G1931, _G1932, _G1933)
Call: (8) _G1933 is _G1932+1
ERROR: is/2: Arguments are not sufficiently instantiated
Exception: (8) _G1933 is _G1932+1 ?
ii) number_of_components(X,Y) would make more sense here if Y were the number of components of X. Then,
number_of_components(X,Y):- bagof(S,subcomponent(X,S),L), length(L,Y).
that yields
?- number_of_components(computer,X).
X = 20.
or better
?- aggregate(count, S^subcomponent(computer,S), N).
N = 20.
Note the usage of S. It is 'existentially quantified' in the goal where it appears; that is, it is allowed to change while proving the goal.
iii) count_elems_acc/3 is - more or less - equivalent to length/2, so the outcome (printed) seems correct, but again, it's the relation between X and Y that your last clause fails to establish. Printing from clauses should be used only when the purpose is to perform side effects... for instance, debugging...

Using a recursive / fixed point / iterative structure in a Neo4j Cypher query

My Neo4j database contains relationships that may have a special property:
(a) -[{sustains:true}]-> (b)
This means that a sustains b: when the last node that sustains b is deleted, b itself should be deleted. I'm trying to write a Cypher statement that deletes a given node PLUS all nodes that now become unsustained as a result. This may set off a chain reaction, and I don't know how to encode this in Cypher. Is Cypher expressive enough?
In any other language, I could come up with a number of ways to implement this. A recursive algorithm for this would be something like:
delete(a) :=
    MATCH (a) -[{sustains:true}]-> (b)
    REMOVE a
    WITH b
    MATCH (aa) -[{sustains:true}]-> (b)
    WHERE count(aa) = 0
    delete(b)
Another way to describe the additional set of nodes to delete would be with a fixed point function:
setOfNodesToDelete(Set) :=
    RETURN Set' ⊆ Set such that for all n ∈ Set'
        there is no (m) -[{sustains:true}]-> (n) with m ∉ Set
We would start with the set of all z such that (a) -[{sustains:true}*1..]-> (z), then delete a, run setOfNodesToDelete on the set until it doesn't change anymore, then delete the nodes specified by the set. This requires an unspecified number of iterations.
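For concreteness, the cascade itself is easy to prototype outside the database; here is a Prolog sketch of the worklist version of this fixed point (the sustains/2 facts and predicate names are made up for illustration, the real data of course lives in Neo4j):

:- use_module(library(lists)).

% sustains(A, B): A sustains B (example data).
sustains(a, b).
sustains(a, c).
sustains(b, c).

% delete_set(+Node, -ToDelete): Node plus every node that is left
% without a surviving sustainer once ToDelete is removed.
delete_set(Node, ToDelete) :-
    cascade([Node], [Node], ToDelete).

cascade([], Deleted, Deleted).
cascade([N|Queue], Deleted0, Deleted) :-
    findall(B,
            ( sustains(N, B),
              \+ member(B, Deleted0),
              \+ ( sustains(M, B), \+ member(M, Deleted0) )   % no sustainer survives
            ),
            NewlyUnsustained),
    append(Deleted0, NewlyUnsustained, Deleted1),
    append(Queue, NewlyUnsustained, Queue1),
    cascade(Queue1, Deleted1, Deleted).

% ?- delete_set(a, D).
% D = [a, b, c].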
Any way to accomplish my goal in Cypher?

What does the lowlink mean in Tarjan's algorithm?

I was reading the description of Tarjan's algorithm for finding the strongly connected components in a directed graph.
But I find it hard to understand this code snippet:
if (w.index is undefined) then
    // Successor w has not yet been visited; recurse on it
    strongconnect(w)
    v.lowlink := min(v.lowlink, w.lowlink)
else if (w is in S) then
    // Successor w is in stack S and hence in the current SCC
    v.lowlink := min(v.lowlink, w.index)
end if
The fourth and the seventh lines are different, and this confuses me.
In my opinion, the seventh line could be written the same way as the fourth line:
v.lowlink := min(v.lowlink, w.index)
I tested this in my program and it works fine, and for me it is easier to understand, because vertex v could reach higher up towards the root, but I couldn't prove it. T_T
I wrote a program that enumerated all graphs of size 4, then ran each version (with either min(v.lowlink, w.index) or min(v.lowlink, w.lowlink) if w is in S) and compared the results. Both were exactly identical in all cases, even though w.lowlink and w.index were often different.
The reason why we can use w.index is this: consider where on the stack S relative to the current node v the other node w is.
If it's earlier on the stack then it has a smaller index than the current node (because it was visited earlier, duh), so the current node is not the "head" of its connected component and that would be reflected in v.lowlink <= w.index < v.index anyway. And it's not like w.lowlink has any particular meaning at this point either; it's in the process of being computed and doesn't necessarily have its final value yet.
Now, if w is later in the stack than v, then the crucial property that the algorithm depends on is that w is then a descendant of v, not some sibling/cousin node still left there from an earlier recursive call. Or, as it is usually stated in a complete proof, strongly connected components never span several unconnected branches of our search tree (forest). Because since it's an SCC, there must be a path from w to v, and since we are enumerating stuff in a depth-first order, we must have visited v using that path from w before we have finished processing w, so w should be earlier in the stack than v!
And if w is a descendant of v then we already got its actual lowlink value the first time we visited it and are not interested in it any more.
On a side note, it's trivial to get rid of the lowlink property on nodes and make strongconnect return it directly. Then we wouldn't be tempted to check it instead of w.index in the second case =)

How does is_integer(X) procedure work?

I was reading about Prolog in the Clocksin and Mellish book and got to this:
is_integer(0).
is_integer(X) :- is_integer(Y), X is Y + 1.
With the query ?- is_integer(X). the zero output is easy, but how does it get 1, 2, 3, 4, ...?
I know it is not easy to explain in writing only, but I will appreciate any attempt.
After the first result X = 0 I hit ; - does the query then become is_integer(0), or is it still is_integer(X)?
I have been searching for a good explanation of this issue for a long time. Thanks in advance.
This strikes to the heart of what makes Prolog so interesting and difficult. You're definitely not stupid, it's just extremely different.
There are two rules here. The existence of alternatives causes choice points to be created. You can think of the choice point as a moment when Prolog saw an alternate way of proceeding with the computation. Prolog always tries rules in the order they appear in the database, which will correspond to the order they appear in the source file. So when you issue the query is_integer(X), the first rule matches and unifies X with 0. This is your first result.
When you press ';' you are telling Prolog to fail, that this answer is not acceptable, which triggers backtracking. The only thing for Prolog to do is try entering the other rule, which begins is_integer(Y). Y is a new variable; it may or may not wind up instantiated to the same value as X, so far you haven't seen any reason why that wouldn't be the case.
This call, is_integer(Y) essentially duplicates the computation that's been attempted so far. It will enter the first rule, is_integer(0) and try that. This rule will succeed, Y will be unified with 0 and then X will be unified with Y+1, and the rule will exit having unified X with 1.
When you press ';' again, Prolog will back up to the nearest choice point. This time, the nearest choice point is the call is_integer(Y) within the second rule for is_integer/1. So the depth of the call stack is greater, but we haven't left the second rule. Indeed, each subsequent result is obtained by backtracking from the first to the second rule at this location, inside the previous level's activation of the second rule. I doubt very seriously that a verbal explanation like the preceding is going to help, so please look at this trashy ASCII art of how the call tree evolves:
1     2        2
     /        / \
    1        1   2
                /
               1
^     ^        ^
|     |        |
0     |        |
      1+0      |
               1+(1+0)
where the numbers indicate which rule is activated and the level indicates the depth of the call stack. The next several steps will evolve like this:
2               2
 \               \
  2               2
   \               \
    2               2
   /               / \
  1               1   2
                     /
                    1
^               ^
|               |
1+(1+(1+0))     |
 = 3            1+(1+(1+(1+0)))
                 = 4
Notice that we always produce a value by increasing the stack depth by 1 and reaching the first rule.
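At the top level, the effect of this repeated backtracking looks like the following (a sketch of the interaction):

?- is_integer(X).
X = 0 ;
X = 1 ;
X = 2 ;
X = 3 ;
...

Each ; forces another round of backtracking into the second rule, one level deeper than before.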
Daniel's answer is very good; I just want to offer another way to look at it.
Take this trivial Prolog definition of natural numbers based on TNT (so 0 is 0, 1 is s(0), 2 is s(s(0)) etc):
n(0). % (1)
n(s(N)) :- n(N). % (2)
The declarative meaning is very clear. (1) says that 0 is a number. (2) says that s(N) is a number if N is a number. When called with a free variable:
?- n(X).
it gives you the expected X = 0 (from (1)), then looks at (2), and goes into a "new" invocation of n/1. In this new invocation, (1) succeeds, the recursive call to n/1 succeeds, and (2) succeeds with X = s(0). Then it looks at (2) of the new invocation, and so on, and so on.
This works by unification in the head of the second clause. Nothing stops you, however, from saying:
n_(0).
n_(S) :- n_(N), S = s(N).
This simply delays the unification of S with s(N) until after n_(N) is evaluated. As nothing happens between evaluating n_(N) and the unification, the result, when called with a free variable, is identical.
Do you see how this is isomorphic to your is_integer/1 predicate?
A word of warning. As pointed out in the comments, this query:
?- n_(0).
as well as the corresponding
?- is_integer(0).
have the annoying property of not terminating (you can call them with any natural number, not only 0). This is because after the first clause has been reached recursively, and the call succeeds, the second clause still gets evaluated. At that point you are "past" the end-of-recursion of the first clause.
n/1 defined above does not suffer from this, as Prolog can recognize by looking at the two clause heads that only one of them can succeed (in other words, the two clauses are mutually exclusive).
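To see the difference at the top level (a sketch, assuming SWI-Prolog or a similar system):

?- n(0).
true.               % terminates: the head n(s(N)) cannot match 0

?- is_integer(0).
true ;              % succeeds, but asking for another solution
                    % backtracks into the second clause and loops

The same non-terminating behaviour applies to n_(0).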
I attempted to put #daniel's great answer into a graphic. I found his answer enlightening and could not have figured out what was going on here without his help. I hope that this image helps someone the way that #daniel's answer helped me!
