I have a couple of linked elements, each with a given cost for traversing the link:
link(a, b, 100).
link(b, c, 223).
link(c, d, 311).
I want to find out whether the traversal is possible and, if it is, return the total cost, so that the query ?- count(a, d, X). returns X = 634.
Here is my attempt at doing it:
sum(A, B, X) :-
    X is A + B.

count(Start, Finish, Cost) :-
    link(Start, Finish, Cost).
count(Start, Finish, Cost) :-
    link(Start, Through, Tempcost),
    count(Through, Finish, Newcost),
    sum(Cost, Tempcost, Newcost).
The problem is that while I have a general idea of how to increment by a fixed number, I have a hard time adding up entirely different numbers and passing them on through the recursion.
My current code raises an "arguments are not sufficiently instantiated" error. I know that this often has something to do with the order of the goals, so I tried rearranging them, but so far I have had no luck.
A classic mistake: here you sum up Cost and Tempcost into Newcost. But Newcost is the cost to get from Through to Finish.
The relation is thus the opposite of the way you defined it. We can say that the Cost to get from Start to Finish is the cost of the hop from Start to Through (the HopCost), plus the RestCost: the cost to get from Through to Finish:
sum(A, B, X) :-
    X is A + B.

count(Start, Finish, Cost) :-
    link(Start, Finish, Cost).
count(Start, Finish, Cost) :-
    link(Start, Through, HopCost),
    count(Through, Finish, RestCost),
    sum(HopCost, RestCost, Cost).
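With this definition, the query from the question yields the expected total:

?- count(a, d, X).
X = 634.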
The time complexity of a recursive algorithm is said to be
Given a recursive algorithm, its time complexity O(T) is typically the product of the number of recursive invocations (denoted R) and the time complexity of the computation performed in each invocation (denoted O(s)):

O(T) = R * O(s)
Looking at a recursive function:
void algo(int n) {
    if (n == 0) return;           // base case, just to avoid a stack overflow
    for (int i = 0; i < n; i++);  // does O(n) work per call
    algo(n / 2);
}
According to the definition above I may say that R is log n and O(s) is n, so the result should be O(n log n), whereas mathematical induction proves that the result is O(n).
Please do not give the induction proof. I am asking why the given definition does not work with my approach.
Great question! This hits at two different ways of accounting for the amount of work that's done in a recursive call chain.
The original strategy that you described for computing the amount of work done in a recursive call - multiply the work done per call by the number of calls - has an implicit assumption buried within it. Namely, this assumes that every recursive call does the same amount of work. If that is indeed the case, then you can determine the total work done as the product of the number of calls and the work per call.
However, this strategy doesn't usually work if the amount of work done per call varies as a function of the arguments to the call. After all, we can't talk about multiplying "the" amount of work done by a call by the number of calls if there isn't a single value representing how much work is done!
A more general strategy for determining how much work is done by a recursive call chain is to add up the amount of work done by each individual recursive call. In the case of the function that you've outlined above, the work done by the first call is n. The second call does n/2 work, because the amount of work it does is linear in its argument. The third call does n/4 work, the fourth n/8 work, etc. This means that the total work done is bounded by
n + n/2 + n/4 + n/8 + n/16 + ...
= n(1 + 1/2 + 1/4 + 1/8 + 1/16 + ...)
≤ 2n,
which is where the tighter O(n) bound comes from.
As a note, the idea of "add up all the work done by all the calls" is completely equivalent to "multiply the amount of work done per call by the number of calls" in the specific case where the amount of work done by each call is the same. Do you see why?
Alternatively, if you're okay getting a conservative upper bound on the amount of work done by a recursive call chain, you can multiply the number of calls by the maximum work done by any one call. That will never underestimate the total, but it won't always give you the right bound. That's what's happening here in the example you've listed - each call does at most n work, and there are O(log n) calls, so the total work is indeed O(n log n). That just doesn't happen to be a tight bound.
A quick note - I don't think it would be appropriate to call the strategy of multiplying the work done per call by the number of calls the "definition" of the amount of work done by a recursive call chain. As mentioned above, that's more of a "strategy for determining the work done" than a formal definition. If anything, I'd argue that the correct formal definition would be "the sum of the amounts of work done by each individual recursive call," since that more accurately accounts for how much total time will be spent.
Hope this helps!
I think you are looking for the master theorem, which is what is used to prove the time complexity of recursive algorithms like this one.
https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
Also, you usually can't determine an algorithm's runtime just by looking at it, especially for recursive ones. That's why your quick analysis differs from the proof by induction.
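Concretely, the code in the question satisfies the recurrence below, and the master theorem resolves it to the tight bound (a sketch; the recurrence is read off the code rather than stated in the original post):

T(n) = T(n/2) + Θ(n),  so a = 1, b = 2, f(n) = Θ(n)
n^(log_b a) = n^0 = 1
f(n) = Ω(n^(0+ε)) and a·f(n/b) = n/2 ≤ (1/2)·n   (regularity condition)
⇒ case 3 applies: T(n) = Θ(n)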
Here is the main part of my code in Prolog:
state(N, Sf) :-
    get_initial_state('test.csv', S),
    state_sequence(N, S, Sf).

state_sequence(N, S, S).
state_sequence(N, S, Sf) :-
    transition_state(S, S, [], Sn),
    N > 0,
    N1 is N - 1,
    state_sequence(N1, Sn, Sf).
transition_state/4 is just a set of rules whose details do not matter here. It is a recursion that keeps producing the next state until N reaches 0.
Then for example I want the 48th result of state. So my query is
state(48,S).
Then I have to keep pressing ; and Prolog shows me each intermediate state until, after the 48th, it finally answers false.
So how can I get the 48th result directly, without being shown the result of every intermediate state?
The fastest way is to generate all solutions using findall/3, then get the solution you want. This is a simple example to show you the idea:
test(1, 0).
test(2, 0).
test(3, 0).
test(4, 0).
test(5, 0).
test(6, 0).

getNthSolution(Sol, N) :-
    findall(S, test(S, 0), L),
    nth1(N, L, Sol).
?- getNthSolution(S, 3).
S = 3
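Applied to the predicates from the question, the same idea could look like this (a sketch; nth_state/2 is a name invented here, and it assumes that state/2 enumerates its solutions in order, from the initial state up to the state after N transitions, with each transition succeeding deterministically):

nth_state(N, S) :-
    findall(St, state(N, St), States),
    last(States, S).    % the last solution is the state after N transitions

?- nth_state(48, S).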
After a long search on Google I couldn't find a clear answer to this:
In Prolog, doing recursion by itself is easy. My main problem is understanding where to place accumulators and counters. Here is an example:
nXList(N, X, [X|T]) :-
    N \= 0,
    N1 is N - 1,
    nXList(N1, X, T).
nXList(0, _, []).
media([X|L], N, Soma) :-
    media(L, N1, Soma1),
    N is N1 + 1,
    Soma is Soma1 + X.
media([], 0, 0).
In the first example I used the counter BEFORE the recursion, but in the second example I use it AFTER. I arrived at that by trial and error, because I really can't understand why it sometimes goes before and sometimes after...
Maybe the central point of your question is in the preamble:
In Prolog doing recursion by itself its easy
It's not easy, it's mandatory. We don't have loops, because we don't have a way to control them. Variables are assigned once.
So, I think the practical answer is rather simple: if a 'predicate' (like is/2) needs a variable's value, you ground that variable before calling it.
To me, it helps to consider a Prolog program (a set of clauses) as grammar productions, and clause arguments as attributes, either inherited (values computed before the 'instruction pointer') or synthesized (values computed 'here', to be returned).
update: Most importantly, if the recursive call is not last, the predicate is not tail recursive. So, having anything after the recursive call should be avoided if possible. Notice that both definitions in the answer by user false are tail recursive, and that's precisely because the arithmetic conditions there are placed before the recursive call, in both of them. That's so basic that we have to make an effort to notice it explicitly.
Sometimes we count down, sometimes we count up. I discuss this in another answer at length. It talks of accumulators, befores and afters. :)
There's also this thing called "associativity" of an operation (say, +), where
a + (b + (c + ...)) == (a + b) + (c + ...)
that lets us regroup and (partially) calculate sooner rather than later. As soon as possible, but not sooner.
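For instance, because + is associative, a running sum can be computed eagerly in an accumulator instead of being deferred until the recursion unwinds (a minimal sketch; sum_list_acc/2 is a name made up here so as not to clash with the built-in sum_list/2):

sum_list_acc(Xs, Sum) :-
    sum_list_acc(Xs, 0, Sum).

sum_list_acc([], Sum, Sum).
sum_list_acc([X|Xs], Acc0, Sum) :-
    Acc1 is Acc0 + X,             % calculate as soon as possible
    sum_list_acc(Xs, Acc1, Sum).

Note that this version is tail recursive, in line with the update above.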
Short answer: you can place such arithmetic relations both before and after the recursive call. At least, if you are using constraints in place of (is)/2. The only difference may be in termination and errors.
So let's see how your predicates can be defined with constraints:
:- use_module(library(clpfd)).

nXList(0, _, []).
nXList(N, X, [X|T]) :-
    N #> 0,
    N1 #= N - 1,
    nXList(N1, X, T).

media([], 0, 0).
media([X|L], N, Soma) :-
    N #> 0,
    N #= N1 + 1,
    Soma #= Soma1 + X,
    media(L, N1, Soma1).
You can now use these definitions in a much more general way, say:
?- nXList(3, X, T).
T = [X, X, X]
; false.
?- media(Xs, 3, S).
Xs = [_A, _B, _C], _D+_A#=S, _C+_B#=_D
; false.
... nXList/3 can be more compactly expressed by:
..., length(T, N), maplist(=(X), T), ...
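Spelled out, that compact version might read (a sketch using the standard length/2 and maplist/2):

nXList(N, X, T) :-
    length(T, N),
    maplist(=(X), T).

?- nXList(3, a, T).
T = [a, a, a].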
I have the following working program. (It can be tested on this site: http://swish.swi-prolog.org. I've removed the direct link to a saved program because I noticed that anybody can edit it.)
It searches for a path between two points in an undirected graph. The important part is that the result is returned in the scope of the main predicate (in the Track variable).
edge(a, b).
edge(b, c).
edge(d, b).
edge(d, e).
edge(v, w).
connected(Y, X) :-
    (   edge(X, Y)
    ;   edge(Y, X)
    ).

path(X, X, _, []) :-
    connected(X, _).
path(X, Y, _, [X, Y]) :-
    connected(Y, X).
path(X, Z, Visited, [X|Track]) :-
    connected(X, Y),
    not(member(X, Visited)),
    path(Y, Z, [X|Visited], Track).

main(X, Y) :-
    path(X, Y, [], Track),
    print(Track),
    !.
Results:
?- main(a, e).
[a, b, d, e]
true
?- main(c, c).
[]
true
?- main(b, w).
false
My questions:
The list of visited nodes is passed down to the predicates in two different ways: in the bound Visited variable and in the unbound Track variable. What are the names of these two forms of parameter passing?
Normally I wanted to use only the unbound form (the Track variable), to have the results in the scope of the main predicate. But I had to add the Visited variable too, because the member check didn't work on the Track variable (I don't know why). Is it possible to make it work by passing only Track in an unbound way (without the Visited variable)?
Many thanks!
The short answer: no, you cannot avoid the extra argument without making everything much messier. This is because this particular algorithm for finding a path needs to keep a state; basically, your extra argument is your state.
There might be other ways to keep a state, like using a global, mutable variable, or dynamically changing the Prolog data base, but both are more difficult to get right and will involve more code.
This extra argument is often called an accumulator, because it accumulates something as you go down the proof tree. The simplest example would be traversing a list:
foo([]).
foo([X|Xs]) :-
    foo(Xs).
This is fine, unless you need to know what elements you have already seen before getting here:
bar(List) :-
    bar_(List, []).

bar_([], _).
bar_([X|Xs], Acc) :-
    /* Acc is a list of all elements so far */
    bar_(Xs, [X|Acc]).
This is about the same as what you are doing in your code. And if you look at this in particular:
path(X, Z, Visited, /* here */ [X|Track]) :-
    connected(X, Y),
    not(member(X, Visited)),
    path(Y, Z, [X|Visited], /* and here */ Track).
At each level of the proof tree, the last argument of path/4 has one more element than it does one level deeper! And, of course, the third argument is one longer at each level (it grows as you go down the proof tree).
For example, you can reverse a list by adding another argument to the silly bar predicate above:
list_reverse(L, R) :-
    list_reverse_(L, [], R).

list_reverse_([], R, R).
list_reverse_([X|Xs], R0, R) :-
    list_reverse_(Xs, [X|R0], R).
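For example:

?- list_reverse([a, b, c], R).
R = [c, b, a].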
I am not aware of any special name for the last argument, the one that is free at the beginning and holds the solution at the end. In some cases it could be an output argument, because it is meant to capture the output, after transforming the input somehow. There are many cases where it is better to avoid thinking about arguments as strictly input or output arguments. For example, length/2:
?- length([a,b], N).
N = 2.
?- length(L, 3).
L = [_2092, _2098, _2104].
?- length(L, N).
L = [],
N = 0 ;
L = [_2122],
N = 1 ;
L = [_2122, _2128],
N = 2 . % and so on
Note: there are quite a few minor issues with your code that are not critical, and giving that much advice at once is not a good idea on Stack Overflow. If you want, you could submit this as a question on Code Review.
Edit: you should definitely study this question.
I also provided a somewhat simpler solution here. Note the use of term_expansion/2 for making directed edges from undirected edges at compile time. More importantly: you don't need main/2; just call the predicate you want from the top level. When you drop the cut, you will get all possible solutions when one or both of your From and To arguments are free variables.
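The term_expansion/2 trick mentioned above can look roughly like this (a hedged sketch, not the linked answer's exact code; connection/2 is a name invented here for the undirected input facts):

% the hook must appear in the file before the facts it expands
term_expansion(connection(A, B), [edge(A, B), edge(B, A)]).

connection(a, b).    % each fact compiles to two directed edge/2 facts
connection(b, c).
connection(d, b).
connection(d, e).
connection(v, w).

With the reversed facts generated at load time, connected/2 reduces to a single edge/2 call.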
I'm trying to create a predicate in Prolog which will hold true when it reaches the lowest numerical value out of a set of values.
For example:
I have something like this at the moment
Base step:
lowest(Object, Value) :- \+ lessThan(Object, Value, NewValue).
Recursive step:
lowest(Object, Value) :-
    lessThan(Object, Value, NewValue),
    lowest(Object, NewValue).
Where Object is some abstract object which can have multiple numerical values attached to it.
lessThan yields values (NewValue) for the Object which are less than the input Value.
And since NewValue will be lower than the input Value, I can assume that with each recursive step Value decreases.
I have abstracted this problem from another one I am trying to solve, but basically what is happening is that I expect only 2 outputs from the whole recursive predicate, yet instead I am getting as many outputs as lessThan(Object, Initial, X) has solutions, plus 2.
I'm not sure if this question is clear; please let me know so I can clarify.
I believe that my base step is correct, since I am assuming that if Value is the lowest value coupled with Object, then there are no other values less than Value.
I am also unsure where to terminate the recursion, which adds to my confusion. My guess is that it should terminate once it reaches a state where there are no lower values for the Object.
This sample should work, renaming value/2 as appropriate for your domain.
value(a, 10).
value(a, 3).
value(a, 100).

lowest(Object, L) :-
    value(Object, First), !,
    lowest(Object, First, L).

lowest(Object, LowestSoFar, Lowest) :-
    value(Object, Try), Try < LowestSoFar, !,
    lowest(Object, Try, Lowest).
lowest(_, Lowest, Lowest).
it yields
?- lowest(a,X).
X = 3.
Note that it re-enumerates the value/2 facts (a 'peek') on every step, so it is not efficient. A possible alternative is to store the lowest value found so far and run a failure-driven loop.
Otherwise, SWI-Prolog (and YAP) has library(aggregate):
?- aggregate(min(V), value(a,V), M).
M = 3.
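A findall/3 based alternative is also compact (a sketch; lowest_value/2 is a name invented here, and min_list/2 is the list-minimum predicate from SWI-Prolog's library(lists)):

lowest_value(Object, Min) :-
    findall(V, value(Object, V), Vs),
    min_list(Vs, Min).    % fails if Vs is empty

?- lowest_value(a, M).
M = 3.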