I'm dealing with a problem: I want to calculate how many recursions a recursive rule in my code performs.
My program examines whether an object is a component of a piece of computer hardware or not (through the component(X,Y) predicate). E.g. component(computer,motherboard) -> true.
It even examines the case where an object is not a direct component but a subcomponent of another component. E.g. subcomponent(computer,ram) -> true (as ram is a component of motherboard, and motherboard is a component of computer).
Because my code is over 400 lines, I will present just some facts of the form component(X,Y) and the rule subcomponent(X,Y).
So, some predicates are below:
component(computer,case).
component(computer,power_supply).
component(computer,motherboard).
component(computer,storage_devices).
component(computer,expansion_cards).
component(case,buttons).
component(case,fans).
component(case,ribbon_cables).
component(case,cables).
component(motherboard,cpu).
component(motherboard,chipset).
component(motherboard,ram).
component(motherboard,rom).
component(motherboard,heat_sink).
component(cpu,chip_carrier).
component(cpu,signal_pins).
component(cpu,control_pins).
component(cpu,voltage_pins).
component(cpu,capacitors).
component(cpu,resistors).
and so on....
My rule is:
subcomponent(X,Z):- component(X,Z).
subcomponent(X,Z):- component(X,Y),subcomponent(Y,Z).
Well, in order to calculate the number of components between a given component X and a given component Y (that is, the number of recursions that the recursive rule subcomponent(X,Y) performs), I have made some attempts, all of which failed. I present them below:
i)
number_of_components(X,Y,N,T):- T is N+1, subcomponent(X,Y).
number_of_components(X,Y,N,T):- M is N+1, subcomponent(X,Z), number_of_components(Z,Y,M,T).
In this case I get this error: "ERROR: is/2: Arguments are not sufficiently instantiated".
ii)
number_of_components(X,Y):- bagof(Y,subcomponent(X,Y),L),
    length(L,N),
    write(N).
In this case I get as a result either 1 or 11, then true after the number, and that's all. No logic at all!
iii)
count_elems_acc([], Total, Total).
count_elems_acc([Head|Tail], Sum, Total) :-
    Count is Sum + 1,
    count_elems_acc(Tail, Count, Total).

number_of_components(X,Y):- bagof(Y,subcomponent(X,Y),L),
    count_elems_acc(L,0,Total),
    write(Total).
In this case I get results that are not right according to my knowledge base (or I am misreading them, because this approach does seem to have some logic).
So, what am I doing wrong and what should I write instead?
I am looking forward to reading your answers!
One thing you could do is iterative deepening with call_with_depth_limit/3. You call your predicate (in this case, subcomponent/2) with a depth limit and increase the limit until you get a result; on success, the third argument of call_with_depth_limit/3 reports the deepest recursion level used. See the documentation for details.
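A minimal sketch of that idea (assuming SWI-Prolog, where call_with_depth_limit/3 binds Result to the deepest depth used on success, or to depth_limit_exceeded when the limit was hit; the predicate name deepest_depth is mine):

deepest_depth(Goal, Depth) :-
    between(1, inf, Limit),                      % try limits 1, 2, 3, ...
    call_with_depth_limit(Goal, Limit, Result),
    Result \== depth_limit_exceeded,             % proven within this limit
    !,
    Depth = Result.

Note that this loops forever if Goal cannot succeed at any depth.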
However, there is something easier you can do. Your database can be represented as an unweighted, directed, acyclic graph. So, stick your whole database in a directed graph, as implemented in library(ugraphs), and find its transitive closure. In the transitive closure, the neighbours of a component are all its subcomponents. Done!
To make the graph:
findall(C-S, component(C, S), Es),
vertices_edges_to_ugraph([], Es, Graph)
To find the transitive closure:
transitive_closure(Graph, Closure)
And to find subcomponents:
neighbours(Component, Closure, Subcomponents)
The Subcomponents will be a list, and you can just get its length with length/2.
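Putting the three steps together (a sketch assuming SWI-Prolog's library(ugraphs); the predicate name number_of_subcomponents is mine):

:- use_module(library(ugraphs)).

number_of_subcomponents(Component, N) :-
    findall(C-S, component(C, S), Es),
    vertices_edges_to_ugraph([], Es, Graph),
    transitive_closure(Graph, Closure),
    neighbours(Component, Closure, Subcomponents),
    length(Subcomponents, N).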
EDIT
Some random thoughts: in your case, your database seems to describe a graph that is by definition both directed and acyclic (the component-subcomponent relationship goes strictly one way, right?). This is what makes it unnecessary to define your own walk through the graph, as nicely demonstrated, for example, in this great question and its answers. So, you don't need to define your own recursive subcomponent predicate, etc.
One great thing about representing the database as a term when working with it, instead of keeping it as a flat table, is that it becomes trivial to write predicates that manipulate it: you get Prolog's backtracking for free. And since the S-representation of a graph that library(ugraphs) uses is well-suited to Prolog, you will most probably end up with a more efficient program, too.
The number of calls of a predicate can be a difficult concept. I would say, use the tools that your system makes available.
?- profile(number_of_components(computer,X)).
20
=====================================================================
Total time: 0.00 seconds
=====================================================================
Predicate Box Entries = Calls+Redos Time
=====================================================================
$new_findall_bag/0 1 = 1+0 0.0%
$add_findall_bag/1 20 = 20+0 0.0%
$free_variable_set/3 1 = 1+0 0.0%
...
so:count_elems_acc/3 1 = 1+0 0.0%
so:subcomponent/2 22 = 1+21 0.0%
so:component/2 74 = 42+32 0.0%
so:number_of_components/2 2 = 1+1 0.0%
On the other hand, what is of utmost importance is the relation among clause variables. This is the essence of Prolog. So, try to read - let's say, in plain English - your rules.
i) In number_of_components(X,Y,N,T), what relation do N and T have to X? I cannot say. So
?- leash(-all),trace.
?- number_of_components(computer,Y,N,T).
Call: (7) so:number_of_components(computer, _G1931, _G1932, _G1933)
Call: (8) _G1933 is _G1932+1
ERROR: is/2: Arguments are not sufficiently instantiated
Exception: (8) _G1933 is _G1932+1 ?
ii) number_of_components(X,Y) would make much more sense here if Y were the number of components of X. Then,
number_of_components(X,Y):- bagof(S,subcomponent(X,S),L), length(L,Y).
that yields
?- number_of_components(computer,X).
X = 20.
or better
?- aggregate(count, S^subcomponent(computer,S), N).
N = 20.
Note the usage of S. It is 'existentially quantified' in the goal where it appears. That is, allowed to change while proving the goal.
iii) count_elems_acc/3 is - more or less - equivalent to length/2, so the printed outcome seems correct; but again, it's the relation between X and Y that your last clause fails to establish. Printing from clauses should be used only when the purpose is to perform a side effect, for instance debugging.
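A minimal fix along those lines (my sketch): make the last clause establish the relation by returning the count in an argument instead of printing it:

number_of_components(X, N) :-
    bagof(S, subcomponent(X, S), L),
    count_elems_acc(L, 0, N).

With the facts above, ?- number_of_components(computer, N). then answers N = 20, in agreement with the bagof/length version.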
I have come across two completely different answers.
One says that:
Yes, there does exist a context-free grammar for {0^i 1^j | 1 ≤ i ≤ j ≤ 2i}; the following grammar ensures that the number of 0's is at least half, and at most equal to, the number of 1's:
S -> 0S11 | 0S1 | 01
The other:
No, proof by contradiction:
Case 1:
Suppose you push i 0s onto the stack.
Pop off j 1s.
You can’t determine if j<=2i.
Case 2:
Suppose you push 2i 0s onto the stack.
Pop off j 1s.
You can’t determine if j>=i.
Any other value pushed on the stack not equal to i or 2i is a value relative to either of these two values, so the same reasoning applies.
Are either correct? Thanks so much!
Since a grammar exists and you can pretty clearly check it matches the whole language, the language must be context-free. So the proof by contradiction is wrong. But why?
The proof assumes the machine must be deterministic. But you need a nondeterministic pushdown automaton to recognize some context-free languages. Thus, all the second proof shows (if it is correct) is that the language isn't a deterministic context-free language; it doesn't show that it isn't a context-free language.
Indeed, if you let the machine be nondeterministic, then basically you push i 0s, and then for each 0 on the stack you nondeterministically consume one or two 1s from the input. One of the computations will accept exactly when the string is in the language.
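As a sanity check, the grammar runs directly as a Prolog DCG (my own transcription of the three productions, using lists of 0s and 1s):

s --> [0], s, [1,1].
s --> [0], s, [1].
s --> [0], [1].

For instance, ?- phrase(s, [0,0,1,1,1]). succeeds while ?- phrase(s, [0,0,1]). fails, as 1 ≤ i ≤ j ≤ 2i requires.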
I'm working on a problem from Languages and Machines: An Introduction to the Theory of Computer Science (3rd Edition), Chapter 2, Example 6.
I need help finding the answer of:
What is a recursive definition of the set of strings over {a,b} that contain exactly one b and an even number of a's before the b?
When looking for a recursive definition, start by identifying the base cases and then look for the recursive steps - like you're doing induction. What are the smallest strings in this language? Well, any string must have a b. Is b alone a string in the language? Why yes it is, since there are zero a's before it, and zero is an even number.
Rule 1: b is in L.
Now, given any string in the language, how can we get more strings? Well, we can apparently add any number of a's to the end of the string and get another string in the language. In fact, we can get all such strings from b if we simply allow you to add one more a to the end of a string in the language. From x in L, we therefore recover xa, xaa, ..., xa^n, ... = xa*.
Rule 2: if x is in L then xa is in L.
Finally, what can we do to the beginning of strings in our language? The number of a's before the b must be even. So far, rules 1 and 2 only allow us to construct strings that have zero a's before the b. We should be able to get two, four, six, and so on - all the even numbers - of a's. A rule that lets us prepend two a's to any string in our language will let us add ever more a's to the beginning while maintaining the evenness property we require. Starting with x in L, we recover aax, aaaax, ..., (aa)^n x, ... = (aa)*x.
Rule 3: if x is in L, then aax is in L.
Optionally, you may add the sometimes implicitly understood rule that only those things allowed by the aforementioned rules are allowed. Otherwise, technically anything is allowed since we haven't explicitly disallowed anything yet.
Rule 4: Nothing is in L unless by virtue of some combination of the rules 1, 2 and/or 3 above.
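The rules translate almost verbatim into Prolog, which makes the definition easy to test (my own transcription; strings are lists of the atoms a and b):

in_l([b]).                                % Rule 1: b is in L
in_l(Xa) :- append(X, [a], Xa), in_l(X).  % Rule 2: if x is in L, so is xa
in_l([a,a|X]) :- in_l(X).                 % Rule 3: if x is in L, so is aax

For example, ?- in_l([a,a,b,a]). succeeds, while ?- in_l([a,b]). fails, because an odd number of a's precedes the b.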
I am learning Prolog under my Artificial Intelligence Lab, from the source Learn Prolog Now!.
In the 5th Chapter we come to learn about Accumulators. And as an example, these two code snippets are given.
To Find the Length of a List
without accumulators:
len([],0).
len([_|T],N) :- len(T,X), N is X+1.
with accumulators:
accLen([_|T],A,L) :- Anew is A+1, accLen(T,Anew,L).
accLen([],A,A).
I am unable to understand how the two snippets are conceptually different. What exactly is the accumulator doing differently, and what are the benefits?
Accumulators sound like intermediate variables. (Correct me if I am wrong.) I have already used them in my programs up till now, so is it really that big a concept?
When you give something a name, it suddenly becomes more real than it used to be. Discussing something can now be done by simply using the name of the concept. Without getting any more philosophical, no, there is nothing special about accumulators, but they are useful.
In practice, going through a list without an accumulator:
foo([]).
foo([H|T]) :-
foo(T).
The head of the list is left behind, and cannot be accessed by the recursive call. At each level of recursion you only see what is left of the list.
Using an accumulator:
bar([], _Acc).
bar([H|T], Acc) :-
bar(T, [H|Acc]).
At every recursive step, you have the remaining list and all the elements you have gone through. In your accLen/3 example, you only keep the count, not the actual elements, as this is all you need.
Some predicates (like len/2) can be made tail-recursive with accumulators: you don't need to wait for the end of your input (exhausting all elements of the list) to do the actual work; instead, you do it incrementally as you consume the input. Prolog doesn't have to leave values on the stack and can do tail-call optimization for you.
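A hand trace of the two definitions above makes the difference visible:

len([a,b], N)        →  len([b], X1), N is X1+1
                     →  len([], X2), X1 is X2+1, N is X1+1
                     →  X2 = 0, X1 = 1, N = 2      % work done on the way back

accLen([a,b], 0, L)  →  accLen([b], 1, L)
                     →  accLen([], 2, L)
                     →  L = 2                      % work done on the way forward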
Search algorithms that need to know the "path so far" (or any algorithm that needs to have a state) use a more general form of the same technique, by providing an "intermediate result" to the recursive call. A run-length encoder, for example, could be defined as:
rle([], []).
rle([First|Rest], Encoded) :-
    rle_1(Rest, First, 1, Encoded).

rle_1([], Last, N, [Last-N]).
rle_1([H|T], Prev, N, Encoded) :-
    (   dif(H, Prev)
    ->  Encoded = [Prev-N|Rest],
        rle_1(T, H, 1, Rest)
    ;   succ(N, N1),
        rle_1(T, H, N1, Encoded)
    ).
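For example, with the clauses above, the query ?- rle([a,a,b,c,c,c], Xs). yields Xs = [a-2, b-1, c-3].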
Hope that helps.
TL;DR: yes, they are.
Imagine you are to go from a city A on the left to a city B on the right, and you want to know the distance between the two in advance. How are you to achieve this?
A mathematician in such a position employs magic known as structural recursion. He says to himself, what if I send a copy of myself one step closer towards city B, and ask it for its distance to the city? I will then add 1 to its result, after receiving it from my copy, since I have sent it one step closer towards the city, and will know my answer without having moved an inch! And of course, if I am already at the city gates, I won't send any copies of myself anywhere, since I'll know that the distance is 0 - without having moved an inch!
And how do I know that my copy-of-me will succeed? Simply because he will follow the same exact rules, while starting from a point closer to our destination. Whatever value my answer will be, his will be one less, and only a finite number of copies of us will be called into action - because the distance between the cities is finite. So the total operation is certain to complete in a finite amount of time and I will get my answer. Because getting your answer after an infinite time has passed, is not getting it at all - ever.
And now, having found out his answer in advance, our cautious magician mathematician is ready to embark on his safe (now!) journey.
But that of course wasn't magic at all - it was all a dirty trick! He didn't find out the answer in advance out of thin air - he sent out a whole stack of copies to find it for him. The grueling work had to be done after all; he just pretended not to be aware of it. The distance was traveled. Moreover, the distance back had to be traveled too, for each copy to tell its result to its master, the result actually being created on the way back from the destination. All this before our fake magician had ever started walking himself. How's that for a team effort. For him it could seem like a sweet deal. But overall...
So that's how the magician mathematician thinks. But his dual, the brave traveler, just goes on the journey and counts his steps along the way, adding 1 to the current step counter on each step, before the rest of his actual journey. There's no pretense anymore. The journey may be finite, or it may be infinite - he has no way of knowing upfront. But at each point along his route, and hence when/if he arrives at the city B gates too, he will know his distance traveled so far. And he certainly won't have to go back all the way to the beginning of the road to tell himself the result.
And that's the difference between the structural recursion of the first and the tail recursion with accumulator / tail recursion modulo cons / corecursion employed by the second. The knowledge of the first is built on the way back from the goal; that of the second, on the way forward from the starting point, towards the goal. The journey is the destination.
see also:
Technical Report TR19: Unwinding Structured Recursions into Iterations. Daniel P. Friedman and David S. Wise (Dec 1974).
What are the practical implications of all this, you ask? Why, imagine our friend the magician mathematician needs to boil some eggs. He has a pot; a faucet; a hot plate; and some eggs. What is he to do?
Well, it's easy - he'll just put eggs into the pot, add some water from the faucet into it and will put it on the hot plate.
And what if he's already given a pot with eggs and water in it? Why, it's even easier for him - he'll just take the eggs out, pour out the water, and end up with the problem he already knows how to solve! Pure magic, isn't it!
Before we laugh at the poor chap, we mustn't forget the tale of the centipede. Sometimes ignorance is bliss. But when the required knowledge is simple and "one-dimensional" like the distance here, it'd be a crime to pretend to have no memory at all.
Accumulators are intermediate variables, and are an important (read: basic) topic in Prolog, because they allow reversing the information flow of some fundamental algorithms, with important consequences for the efficiency of the program.
Take reversing a list as an example:
nrev([],[]).
nrev([H|T], R) :- nrev(T, S), append(S, [H], R).
rev(L, R) :- rev(L, [], R).
rev([], R, R).
rev([H|T], C, R) :- rev(T, [H|C], R).
nrev/2 (naive reverse) is O(N^2), where N is the list length, because append/3 re-traverses the partial result at every step, while rev/2 is O(N).
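Both compute the same relation - e.g. ?- rev([a,b,c], R). gives R = [c,b,a], just as nrev/2 does - the difference shows up only in running time as the list grows.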
I have a computer program that reads in an array of chars containing operands and operators written in postfix notation. The program then scans through the array and works out the result by using a stack, as shown:
get next char in array until there are no more
    if char is operand
        push operand onto stack
    if char is operator
        a = pop from stack
        b = pop from stack
        perform operation using a and b as arguments
        push result onto stack
result = pop from stack
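For concreteness, the same procedure transcribed as a Prolog predicate (a sketch, not from the book; it assumes the input is a list of tokens that are either numbers or the atoms +, -, * and /):

% eval_postfix(+Tokens, -Result): evaluate a postfix token list.
eval_postfix(Tokens, Result) :-
    eval_postfix(Tokens, [], [Result]).    % a valid expression leaves one value

% eval_postfix(+Tokens, +Stack0, -Stack): fold the tokens over a stack.
eval_postfix([], Stack, Stack).
eval_postfix([T|Ts], Stack, Final) :-
    (   number(T)                          % operand: push it
    ->  eval_postfix(Ts, [T|Stack], Final)
    ;   Stack = [A, B|Rest],               % operator: pop two operands,
        Expr =.. [T, B, A],                % apply it (note the order), and
        V is Expr,
        eval_postfix(Ts, [V|Rest], Final)  % push the result
    ).

For example, ?- eval_postfix([5,1,2,+,4,*,+,3,-], R). yields R = 14.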
How do I prove by induction that this program correctly evaluates any postfix expression? (taken from exercise 4.16 Algorithms in Java (Sedgewick 2003))
I'm not sure which expressions you need to prove the algorithm against. But if they look like typical RPN expressions, you'll need to establish something like the following:
1) algorithm works for 2 operands (and one operator)
and
algorithm works for 3 operands (and 2 operators)
==> that would be your base case
2) if algorithm works for n operands (and n-1 operators)
then it would have to work for n+1 operands.
==> that would be the inductive part of the proof
Good luck ;-)
Take heart concerning mathematical proofs, and also their sometimes confusing names. In the case of an inductive proof one is still expected to "figure out" something (some fact or some rule), sometimes by deductive logic, but then these facts and rules put together constitute a broader truth, by induction; that is: because the base case is established as true, and because one proved that if X is true for case "n" then X is also true for case "n+1", we don't need to try every case (which could be a big number, or even infinite).
Back to the stack-based expression evaluator... One final hint (in addition to Captain Segfault's excellent explanation; you're probably going to feel over-informed...).
The RPN expressions are such that:
- they have one fewer operator than operands
- they never provide an operator when the stack has fewer than 2 operands in it (if they did, this would be the equivalent of an unbalanced-parenthesis situation in a plain expression, i.e. an invalid expression).
Assuming that the expression is valid (and hence doesn't provide too many operators too soon), the order in which the operands/operators are fed into the algorithm does not matter; each one always leaves the system in a stable situation:
- either with one extra operand on the stack (but the knowledge that one more operator will eventually come), or
- with one fewer operand on the stack (but the knowledge that the number of operands still to come is also one fewer).
So the order doesn't matter.
You know what induction is? Do you generally see how the algorithm works? (even if you can't prove it yet?)
Your induction hypothesis should say that, after processing the N'th character, the stack is "correct". A "correct" stack for a full RPN expression has just one element (the answer). For a partial RPN expression the stack has several elements.
Your proof is then to think of this algorithm (minus the result = pop from stack line) as a parser that turns partial RPN expressions into stacks, and prove that it turns them into the correct stacks.
It might help to look at your definition of an RPN expression and work backwards from it.
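In terms of the Prolog sketch given with the question above, that parser is eval_postfix/3 without the final pop: for the partial expression 5 1 2 +, the query ?- eval_postfix([5,1,2,+], [], S). gives S = [3, 5], which is the correct stack (top first) for that prefix.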