Advice needed with Prolog cut? - recursion

In this task I have a Prolog database filled with facts such as:
edge(1, 0).
edge(2, 0).
edge(1, 3).
An edge signifies that two points are joined.
I am asked to write a predicate reach(I, J, K), where I is the start point, J is the end point, and K is the number of steps you may use.
K is needed to stop the recursion looping forever. For example, suppose the only edge I've got goes from 1 to 3, and I'm trying to get to 6. I can't get from 1 to 6 in one go, so I'll look for somewhere I can get to, and see if I can get from there to 6. The first place I can get to in one go is 3, so I'll try to get from there to 6.
I have done this like so:
%% Can you get there in one step? (Need two rules because all links are
%% from smaller to greater, but we may need to get from greater to smaller.)
reach1(I, J, _K) :-
    edge(I, J).
reach1(I, J, _K) :-
    edge(J, I).
%% Choose somewhere you can get to in one step: can you get from there
%% to your target?
reach1(I, J, K) :-
    K > 1,
    edge(I, B),
    K1 is K - 1,
    reach1(B, J, K1).
reach1(I, J, K) :-
    K > 1,
    edge(B, I),
    K1 is K - 1,
    reach1(B, J, K1).
This works. However, I am stuck on the second part, in which we are asked not to use K but to use a "cut" instead.
Does anyone know how to do this, or can anyone give me some pointers?

The cut ensures that once a goal has been resolved in one way, it doesn't look for another way.
Example:
reach(I, J, _K) :-
    edge(I, J).
There is no cut here, so if for some reason Prolog backtracks, it will try to reach J from I another way.
You might feel there's no point reaching this node another way if the simple edge works; in that case you can write:
reach(I, J, _K) :-
    edge(I, J),
    !.
which "cuts" any alternative to this goal, but the one Prolog has found.

Related

Prolog recursive accumulator

I am trying to make a knowledge base for college courses. Specifically, right now I am trying to make an accumulator that will take a course and provide a list of all classes that must be taken first, i.e., the course's prereqs, the prereqs to those prereqs, etc., based on this chart.
Here is a sample of the predicates:
prereq(cst250, cst126).
prereq(cst223, cst126).
prereq(cst126, cst116).
prereq(cst105, cst102).
prereq(cst250, cst130).
prereq(cst131, cst130).
prereq(cst130, cst162).
prereq(anth452, wri122).
prereq(hist452, wri122).
And here is my attempt at an accumulator:
prereq_chain(Course, PrereqChain) :-
    % Get the list of prereqs for Course
    findall(Prereq, prereq(Course, Prereq), Prereqs),
    % Recursive call to all prereqs in X
    forall(member(X, Prereqs),
           ( prereq_chain(X, Y),
             % Combine current layer prereqs with deeper
             append(Prereqs, Y, Z) )),
    % Return PrereqChain
    PrereqChain = Z.
The desired output from a query would be:
?- prereq_chain(cst250, PrereqList).
PrereqList = [cst116, cst126, cst162, cst130]
Instead, I get an answer of true, and a warning about Z being a singleton.
I have looked at other posts asking on similar issues, but they all had a single lane of backward traversal, whereas my solution requires multiple lanes.
Thanks in advance for any guidance.
The problem with using forall/2 is that it does not establish bindings. Look at this contrived example:
?- forall(member(X, [1,2,3]), append(['hi'], X, R)).
true.
If a binding were established for X or R by the forall/2, it would appear in the result; instead we just got true because it succeeded. So you need a construct that doesn't just run some computation, but one that actually produces a value. The thing you want in this case is maplist/3, which takes a goal and a list of parameters, builds a larger goal, and gives you back the results. You will be able to see the effect in your console after you put in the solution below.
?- maplist(prereq_chain, [cst126, cst130], X).
X = [[cst116], [cst162]].
So this went and got the list of prerequisites for the two classes in the list, but gave us back a list of lists. This is where append/2 comes in handy, because it essentially flattens a list of lists:
?- append([[cst116], [cst162]], X).
X = [cst116, cst162].
Here's the solution I came up with:
prereq_chain(Class, Prereqs) :-
    findall(Prereq, prereq(Class, Prereq), TopPrereqs),
    maplist(prereq_chain, TopPrereqs, MorePrereqs),
    append([TopPrereqs|MorePrereqs], Prereqs).
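With the prereq/2 facts above, the query from the question then yields the whole flattened chain. (The order follows the clause order of the facts rather than the order shown in the desired output, but the same courses are present.)

?- prereq_chain(cst250, PrereqList).
PrereqList = [cst126, cst130, cst116, cst162].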

What does lowlink mean in Tarjan's algorithm?

I was reading the description of Tarjan's algorithm for finding the strongly connected components in a directed graph.
But I find this code snippet hard to understand:
if (w.index is undefined) then
    // Successor w has not yet been visited; recurse on it
    strongconnect(w)
    v.lowlink := min(v.lowlink, w.lowlink)
else if (w is in S) then
    // Successor w is in stack S and hence in the current SCC
    v.lowlink := min(v.lowlink, w.index)
end if
The fourth and the seventh lines are different, and this confuses me.
In my opinion, the seventh line could be written the same way as the fourth line:
v.lowlink := min(v.lowlink, w.lowlink)
I tested this in my program and it works fine, and for me it is easier to understand, because it describes how high up toward the root vertex v can reach, but I couldn't prove it. T_T
I wrote a program that enumerated all graphs of size 4, then ran each version (with either min(v.lowlink, w.index) or min(v.lowlink, w.lowlink) when w is in S) and compared the results. Both were exactly identical in all cases, even though w.lowlink and w.index were often different.
The reason why we can use w.index is this: consider where on the stack S relative to the current node v the other node w is.
If it's earlier on the stack, then it has a smaller index than the current node (because it was visited earlier, duh), so the current node is not the "head" of its strongly connected component, and that would be reflected in v.lowlink <= w.index < v.index anyway. And it's not like w.lowlink has any particular meaning at this point either; it's in the process of being computed and doesn't necessarily have its final value yet.
Now, if w is later on the stack than v, then the crucial property the algorithm depends on is that w is then a descendant of v, not some sibling/cousin node still left there from an earlier recursive call. Or, as it is usually stated in a complete proof: strongly connected components never span several unconnected branches of our search tree (forest). Why? Since it's an SCC, there must be a path from w to v, and since we enumerate nodes in depth-first order, we would have visited v along that path from w before we finished processing w, and then w would be earlier on the stack than v!
And if w is a descendant of v then we already got its actual lowlink value the first time we visited it and are not interested in it any more.
On a side note, it's trivial to get rid of the lowlink property on nodes and make strongconnect return it directly. Then we wouldn't be tempted to check it instead of w.index in the second case =)
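To illustrate that side note, here is a minimal sketch in Python (the function and variable names are my own, not from any particular source) where strongconnect returns the lowlink value instead of storing it on the node:

def tarjan_sccs(graph):
    # graph: dict mapping each node to an iterable of its successors
    index_of = {}                # node -> discovery index (v.index)
    stack, on_stack = [], set()  # the stack S, plus a set for O(1) membership
    sccs = []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index_of[v] = counter
        counter += 1
        lowlink = index_of[v]    # a plain local, returned rather than stored
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index_of:
                # successor not yet visited: recurse and take its lowlink
                lowlink = min(lowlink, strongconnect(w))
            elif w in on_stack:
                # successor on the stack, hence in the current SCC: take its index
                lowlink = min(lowlink, index_of[w])
        if lowlink == index_of[v]:   # v is the root of an SCC: pop it off
            scc = []
            while True:
                w = stack.pop()
                on_stack.remove(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)
        return lowlink

    for v in list(graph):
        if v not in index_of:
            strongconnect(v)
    return sccs

# e.g. tarjan_sccs({1: [2], 2: [3], 3: [1, 4], 4: []}) -> [[4], [3, 2, 1]]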

Modifying an element of a list in-place in J: can it be done?

I have been playing with an implementation of lookandsay (OEIS A005150) in J. I have made two versions, both very simple, using while.-type control structures. One recurs, the other loops. Because I am compulsive, I started running comparative timings on the versions.
Look-and-say is the sequence 1 11 21 1211 111221; that is, "one 1", "two 1s", etc.
For early elements of the list (up to around 20) the looping version wins, but only by a tiny amount. Timings around element 30 show the recursive version winning, by a large enough amount that the recursive version might be preferred if the stack space were adequate to support it. I looked at why, and I believe it has to do with handling intermediate results. The 30th number in the sequence has 5808 digits. (The 32nd has 9898 digits; the 34th, 16774.)
When you are doing the problem with recursion, you can hold the intermediate results in the recursive call, and the unstacking at the end builds the results so that there is minimal handling of the results.
In the looping version, you need a variable to hold the result, and every loop iteration you need to append two elements to it.
The problem, as I see it, is that I can't find any way in J to modify an extant array without completely reassigning it. So I am saying
try. o =. o,e,(0&{y) catch. o =. e,(0&{y) end.
to put an element into o where o might not have a value when we start. That may be notably slower than
o =. i. 0
...
o =. (,o),e,(0&{y)
The point is that the result gets the wrong shape without the ravels, or so it seems; it is inheriting a shape from i. 0 somehow.
But even operations like } (amend) don't modify a list; they return a list that has a modification made to it, and if you want to save the list you need to assign it. As the size of the assigned list increases (as you walk the number from the beginning to the end making the next number), the assignment seems to take more and more time. This assignment is really the only thing I can see that would make element 32 (9898 digits) take less time in the recursive version, while element 20 (408 digits) takes less time in the loopy version.
The recursive version builds the return with:
e,(0&{y),(,lookandsay e }. y)
The above line is both the return line from the function and the recursion, so the whole return vector gets built at once as the call gets to the end of the string and everything unstacks.
In APL I thought that one could say something on the order of:
a[1+rho a] <- new element
But when I try this in NARS2000 I find that it causes an index error. I don't have access to any other APL; I might be remembering this idiom from APL Plus, and I doubt it worked this way in APL\360 or APL\1130. I might be misremembering it completely.
I can find no way to do that in J. It might be that there is no way to do that, but the next thought is to pre-allocate an array that could hold results, and to change individual entries. I see no way to do that either - that is, J does not seem to support the APL idiom:
a<- iota 5
a[3] <- -1
Is this one of those side effect things that is disallowed because of language purity?
Does the interpreter recognize a=. a,foo or some of its variants as a thing that it should fastpath to a[>:#a]=.foo internally?
This is the recursive version, just for the heck of it. I have tried a bunch of different versions, and I believe that the longer the program, the slower, and generally, the more complex, the slower. Generally, the program can be chained so that if you want the nth number you can do lookandsay^:n ] y. I have tried a number of optimizations, but the problem I have is that I can't tell what environment I am sending my output into. If I could tell that I was sending it to the next iteration of the program, I would send it as an array of digits rather than as a big number.
I also suspect that if I could figure out how to make a tacit version of the code, it would run faster, based on my finding that when I add something to the code that should make it shorter, it runs longer.
lookandsay=: 3 : 0
  if. 0 = # ,y do. return. end.  NB. return on empty argument
  if. 1 ~: ##$ y do.             NB. convert rank 0 argument to list of digits
    y =. (10&#.^:_1) x: y
    f =. 1
    assert. 1 = ##$ y            NB. the converted argument must be rank 1
  else.
    NB. yw =. y
    f =. 0
  end.
  NB. e should be a count of the digits that match the leading digit.
  e =. +/ *./\ y = 0&{ y
  if. f do.
    o =. e,(0&{y),(,lookandsay e }. y)
    assert. e = 0&{ o
    10&#. x: o
    return.
  else.
    e,(0&{y),(,lookandsay e }. y)
    return.
  end.
)
I was interested in the characteristics of the numbers produced. I found that if you start with a 1, the numerals never get higher than 3. If you start with a numeral higher than 3, it will survive as a singleton, and you can also get a 9 into the generated numbers by starting with something like 888888888, which generates a number with one 9 in it and a single 8 at the end. But other than the singletons, no digit gets higher than 3.
Edit:
I did some more measuring. I had originally written the program to accept either a vector or a scalar, the idea being that internally I'd work with a vector. I had thought about passing a vector from one layer of code to the other, and I still might, using a left argument to control the code. When I pass the top level a vector, the code runs enormously faster, so my guess is that most of the CPU time is being eaten by converting very long numbers to and from vectors of digits. The recursive routine always passes down a vector when it recurs, which might be why it is almost as fast as the loop.
That does not change my question.
I have an answer for this which I can't post for three hours. I will post it then; please don't do a ton of research to answer it.
Assignments like
arr =. 'z' 15} arr
are executed in place (see the JWiki article for other supported in-place operations).
The interpreter determines that only a small portion of arr is updated and does not create an entire new list to reassign.
What happens in your case is not that the array is being reassigned, but that it grows many times in small increments, causing repeated memory allocation and reallocation.
If you preallocate (by assigning it some large chunk of data), then you can modify it with } without too much penalty.
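A minimal illustration of that pattern (the buffer size and contents here are just for the example):

buf =. 10000 $ ' '      NB. preallocate a 10000-character buffer
buf =. 'z' 15} buf      NB. amend element 15; this form is recognized as in-place

Each subsequent amend of buf should then reuse the same storage instead of growing the array.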
After I asked this question, to be honest, I lost track of this web site.
Yes, the answer is that the language has no form that means "update in place", but if you use the two forms
x =: x , most anything
or
x =: most anything } x
then the interpreter recognizes those as special and does update in place unless it can't. There are a number of other specials recognized by the interpreter, like:
199(1000&|@^)199
That combined operation is modular exponentiation. It never calculates the whole exponentiation, as
199(1000&|^)199
would - that just ends as _ without the @.
So it is worth reading the article on specials. I will mark someone else's answer up.
The link that sverre provided above (http://www.jsoftware.com/jwiki/Essays/In-Place%20Operations) shows the various operations that support modifying an existing array rather than creating a new one. They include:
myarray=: myarray,'blah'
If you are interested in a tacit version of the lookandsay sequence, see this submission to RosettaCode:
las=: ,#((# , {.);.1~ 1 , 2 ~:/\ ])&.(10x&#.inv)#]^:(1+i.#[)
5 las 1
11 21 1211 111221 312211

Simple Recursion?

I'm new to programming and have had a hard time understanding recursion. There's a problem I've been working on, but I can't figure it out. I really just don't understand how it is solvable.
"Define a procedure plus that takes two non-negative integers and returns their sum. The only procedures (other than recursive calls to plus) that you may use are: zero?, sub1, and add1."
I know that this is a built-in function in Scheme, so I know it's possible to solve; I just don't understand how. Recursion is so tricky!
Any help would be greatly appreciated!
=] Thanks
I'm working in Petite Chez Scheme (with the SWL editor)
Recursion is a very important concept in software development. I don't know (Petite Chez) Scheme, so I will approach this from a general angle.
The concept of a recursive function is to repeat the same task over and over again until you reach some limiting boundary. Taking your first question, you have two numbers and you need to add them together. However, you only have the ability to add 1 to a number or subtract 1 from a number. You also have the literal value zero.
So, consider your numbers as two buckets. They each have 10 stones in them. You want to "add" those two buckets together. You are only permitted to move one stone at a time (i.e., you can't grab a handful or tip one bucket into the other).
Let's say you want to move everything from the left bucket into the right bucket, one stone at a time. What are you going to have to do?
First, you have to take 1 stone from the left bucket, i.e., you are using sub1 to remove one stone from the bucket. You then add that same stone to the right bucket, i.e., you add1 to the right bucket.
Now you could do this in a loop, but you don't know how many stones there will be in any given solution. What you really want to do is say "take one stone from the left bucket, put it in the right bucket, and repeat until there are no stones in the left bucket." This case of there being no stones in the left bucket is called your "base case". It's the point at which you say: OK, I'm done now.
A pseudocode example of this situation would be (using your plus, add1, sub1 and zero):
plus(leftBucket, rightBucket)
{
    if(leftBucket == zero) // check if the left bucket is empty yet
    {
        // the left bucket is empty, we've moved all the stones
        return rightBucket; // the right bucket must be full
    }
    else
    {
        // we still have stones in the left bucket, remove 1,
        // put it in the right bucket, repeat.
        return plus(sub1(leftBucket), add1(rightBucket));
    }
}
If you still need more help, let me know, I can run through other examples but this looks like it's probably a homework problem for you and recursion is incredibly important to understand so I don't want to just give you all the answers.
Recursion is simply a function that calls itself. The most common, easily understood example of recursion is walking a data structure that looks like a tree.
How would you visit each branch of a tree? You would start at the trunk and call visit(branch), passing the trunk of the tree as the first branch. Visit() calls itself for each branch of each branch, and so on.
public void visit(Branch branch)
{
    // do something with this branch here

    // visit the branches of this branch
    foreach(var subbranch in branch.branches)
    {
        visit(subbranch);
    }
}
Recursion is closely related to induction - first you solve (or prove) a base case, and then you assume your solution is correct for some value n, and use that to solve (or prove) it for n + 1.
So the first step here is to look at the first problem. What would be a good base case for adding two numbers together?
Alright, so we have our base case: when one of the numbers is zero.
For simplicity's sake, we'll assume that the second number is zero, just to make things a little easier.
So we know that (+ n 0) is equal to n. So now for our recursive step, we want to take an arbitrary call (+ x y), and turn that into a call which is closer to our ideal (+ n 0). That way we'll have made some progress and will eventually solve our problem.
So how are we going to do this?
(+ x y) is of course equivalent to (+ (add1 x) (sub1 y)) - which takes us closer to our base case of (zero? y).
This gives us our final solution:
(define (+ x y)
  (if (zero? y)
      x
      (+ (add1 x) (sub1 y))))
(you can, of course, swap the order of the arguments and it will still be equivalent).
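To see the recursion at work, trace a small call; each step moves 1 from y over to x until y hits the base case:

(+ 3 4)
(+ 4 3)
(+ 5 2)
(+ 6 1)
(+ 7 0)
7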
A similar mechanism can be used to solve the other two problems.

Big O Log problem solving

I have a question that comes from an algorithms book I'm reading, and I am stumped on how to solve it (it's been a long time since I've done log or exponent math). The problem is as follows:
Suppose we are comparing implementations of insertion sort and merge sort on the same
machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64n log n steps. For which values of n does insertion sort beat merge sort?
Log is base 2. I started out trying to solve for equality, but got stuck around n = 8 log n.
I would like the answer to discuss how to solve this mathematically (brute force with Excel not admissible, sorry ;) ). Any links to a description of log math would be very helpful to my understanding of your answer as well.
Thank you in advance!
http://www.wolframalpha.com/input/?i=solve%288+log%282%2Cn%29%3Dn%2Cn%29
(edited since old link stopped working)
Your best bet is to use Newton's method:
http://en.wikipedia.org/wiki/Newton%27s_method
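A minimal sketch of that idea in Python (the names and the starting guess of 50 are my own; note that f' vanishes near n ≈ 11.5, so start well above the crossover you care about):

import math

def f(n):
    # insertion sort beats merge sort while 8n^2 < 64n lg n, i.e. while f(n) < 0
    return n - 8 * math.log2(n)

def df(n):
    # derivative of f: 1 - 8/(n ln 2)
    return 1 - 8 / (n * math.log(2))

n = 50.0                      # start above the crossover we care about
for _ in range(10):           # Newton converges in a handful of iterations
    n -= f(n) / df(n)

print(n)                      # ~43.559; insertion sort wins for 2 <= n <= 43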
One technique for solving this would be to simply grab a graphing calculator and graph both functions (see the Wolfram link in another answer). Find the intersection that interests you (in case there are multiple intersections, as there are in your example).
In any case, there isn't a simple expression to solve n = 8 log₂ n (as far as I know). It may be simpler to rephrase the question as: "Find a zero of f(n) = n - 8 log₂ n". First, find a region containing the intersection you're interested in, and keep shrinking that region. For instance, suppose you know your target n is greater than 42, but less than 44. f(42) is less than 0, and f(44) is greater than 0. Try f(43). It's less than 0, so try 43.5. It's still less than 0, so try 43.75. It's greater than 0, so try 43.625. It's greater than 0, so keep going down, and so on. This technique is called binary search.
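That hand-executed process is easy to mechanize. Here is a minimal bisection sketch in Python, using the bracket [42, 44] from the paragraph above:

import math

def f(n):
    return n - 8 * math.log2(n)

lo, hi = 42.0, 44.0           # f(lo) < 0 < f(hi): the root is bracketed
while hi - lo > 1e-9:         # halve the bracket until it is tiny
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid              # root is in the upper half
    else:
        hi = mid              # root is in the lower half

print(lo)                     # ~43.5593, so the crossover lies between 43 and 44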
Sorry, that's just a variation of "brute force with excel" :-)
Edit:
For the fun of it, I made a spreadsheet that solves this problem with binary search: binary-search.xls. The binary search logic is in the second data column, and I just auto-extended that.
