I am to write a Lisp program to produce the actual value of a hexadecimal number. I have written a function but seem to be getting a stack overflow (deep recursion) error. I was wondering if anyone could point out my mistake or guide me in the right direction.
I would appreciate it if no code was posted for this question as this is part of a homework assignment. Hence I would only like an explanation or direction where I might be going wrong.
I feel my problem is that my recursion is not terminating but I don't know how to fix it.
Here's my code:
(defun calc (hex)
  (if hex
      (if (> (length hex) 1)
          (+ (first (reverse hex)) (* 16 (calc (reverse hex))))
          hex)))
Thanks in advance.
The "base case" (the case/state where recursion actually stops) is that hex has length of one or less. Tell me, each time you call calc again, is the input to calc ever getting smaller? if not, then it's mathematically impossible for the input to ever reach the base case.
Let's say hex starts with length 9. When you call calc again, you have reversed hex. So now hex is reversed, but it still has a length of 9. I suspect that is why the recursion never stops.
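Not a solution (as requested), just a REPL illustration of the diagnosis above: reverse changes the order of a list, never its length, so the quantity the recursion needs to shrink stays the same size.

(length '(10 2 7))            ; => 3
(length (reverse '(10 2 7)))  ; => 3, same length, just reordered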
Related
The Third Commandment of The Little Schemer states:
When building a list, describe the first typical element, and then cons it onto the natural recursion.
What is the exact definition of "natural recursion"? I ask because I am taking a class on programming language principles taught by Daniel Friedman, and the following code is not considered "naturally recursive":
(define (plus x y)
  (if (zero? y)
      x
      (plus (add1 x) (sub1 y))))
However, the following code is considered "naturally recursive":
(define (plus x y)
  (if (zero? y)
      x
      (add1 (plus x (sub1 y)))))
I prefer the "unnaturally recursive" code because it is tail recursive. However, such code is considered anathema. When I asked why we shouldn't write the function in tail-recursive form, the associate instructor simply replied, "You don't mess with the natural recursion."
What's the advantage of writing the function in the "naturally recursive" form?
"Natural" (or just "Structural") recursion is the best way to start teaching students about recursion. This is because it has the wonderful guarantee that Joshua Taylor points out: it's guaranteed to terminate[*]. Students have a hard enough time wrapping their heads around this kind of program that making this a "rule" can save them a huge amount of head-against-wall-banging.
When you choose to leave the realm of structural recursion, you (the programmer) have taken on an additional responsibility, which is to ensure that your program halts on all inputs; it's one more thing to think about & prove.
In your case, it's a bit more subtle. You have two arguments, and you're making structurally recursive calls on the second one. In fact, with this observation (program is structurally recursive on argument 2), I would argue that your original program is pretty much just as legitimate as the non-tail-calling one, since it inherits the same proof-of-convergence. Ask Dan about this; I'd be interested to hear what he has to say.
[*] To be precise here you have to legislate out all kinds of other goofy stuff like calls to other functions that don't terminate, etc.
The natural recursion has to do with the "natural", recursive definition of the type you are dealing with. Here, you are working with natural numbers; since "obviously" a natural number is either zero or the successor of another natural number, when you want to build a natural number, you naturally output 0 or (add1 z) for some other natural z, which happens to be computed recursively.
The teacher probably wants you to make the link between recursive type definitions and the recursive processing of values of those types. You would not run into this kind of problem if you were processing trees or lists; it arises with numbers because we routinely use natural numbers in "unnatural" ways, and so you might have natural objections to thinking of them in terms of Church numerals.
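To make that link concrete, here is a small sketch of my own (not from the course): a function whose shape mirrors the two-case definition of the naturals, with one branch per "constructor" of the type.

;; a natural is either 0 or (add1 n); the function has one branch per case
(define (double n)
  (if (zero? n)
      0                                    ; the "zero" case
      (add1 (add1 (double (sub1 n))))))    ; the "successor" case, recurring on the smaller natural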
The fact that you already know how to write tail-recursive functions is irrelevant in that context: it is apparently not your teacher's objective to talk about tail-call optimization, at least for now.
The associate instructor was not very helpful at first ("messing with natural recursion" sounds like "don't ask"), but the detailed explanation he or she gave in the snapshot you posted was more appropriate.
(define (plus x y)
  (if (zero? y)
      x
      (add1 (plus x (sub1 y)))))
When y != 0, the function has to remember that, once the result of (plus x (sub1 y)) is known, it still has to apply add1 to it. Hence when y reaches zero, the recursion is at its deepest. Then the backtracking phase begins and the add1s are performed. This process can be observed using trace.
I did the trace for:
(require racket/trace)
(define (add1 x) (+ x 1))   ; shadow the built-ins so the arithmetic is explicit
(define (sub1 x) (- x 1))
(define (plus x y)
  (if (zero? y)
      x
      (add1 (plus x (sub1 y)))))
(trace plus)
(plus 2 3)
Here's the trace:
>(plus 2 3)
> (plus 2 2)
> >(plus 2 1)
> > (plus 2 0)   ; deepest point of the recursion
< < 2            ; backtracking begins, applying add1 to the results
< <3
< 4
<5
5                ; final result
The difference is that the other version has no backtracking phase. It calls itself a number of times, but the process is iterative, because the intermediate results are passed along as arguments. Hence the process does not consume extra space.
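You can watch that, too. Tracing the tail-recursive version under the same setup should produce output roughly like the following (the exact formatting may vary), with every call printed at the same depth and no backtracking lines:

(define (plus x y)
  (if (zero? y)
      x
      (plus (add1 x) (sub1 y))))
(trace plus)
(plus 2 3)

>(plus 2 3)
>(plus 3 2)
>(plus 4 1)
>(plus 5 0)
<5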
Sometimes implementing a recursive procedure is easier or more elegant than writing its iterative equivalent. But for some purposes you cannot (or may not) implement it in a recursive way.
PS: I took a class which covered a bit about garbage collection algorithms. Such algorithms may not be recursive, because when they run there may be no memory left, and hence no room for a recursion stack. I remember an algorithm called Deutsch-Schorr-Waite which was really hard to understand at first. The professor first implemented the recursive version just to convey the concept, and afterwards wrote the iterative version (thus having to manually remember where in memory he came from); believe me, the recursive one was way easier, but it could not be used in practice.
I am stuck with exercise 4.28 of the book A Gentle Introduction to Symbolic Computation (p. 129):
We can usually rewrite an IF as a combination of AND plus OR by following this simple scheme: Replace (IF test true-part false-part) with the equivalent expression (OR (AND test true-part) false-part). But this scheme fails for the expression (IF (ODDP 5) (EVENP 7) 'FOO). Why does it fail? Suggest a more sophisticated way to rewrite IF as a combination of ANDs and ORs that does not fail.
My attempt, (or (and (oddp 5) (or (evenp 7) t)) 'foo), evaluates the true-part and stops there, and it would evaluate the false-part if the test were NIL; but it always returns T when the test is true, rather than the value of the true-part, so it does not reflect the behaviour of IF. Is there a correct solution to the problem using only what one has learned so far in the book, or is the answer to the exercise that there is none at this point?
I am not asking for the solution for the exercise if there is one, just whether it makes sense for me to continue looking for one.
You are quite close. It fails because (EVENP 7) (the consequent) happens to evaluate to the false value NIL, so the whole AND form turns into NIL and OR goes on to evaluate the alternative, yielding FOO where IF would have returned NIL.
And yes, there is a way to fix this using only the forms the book has presented so far, though it might not behave identically once side effects (expressions that print things) enter the picture.
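To see the failure concretely without spoiling the intended rewrite, compare the two expressions at the REPL:

(if (oddp 5) (evenp 7) 'foo)        ; => NIL, the value of (EVENP 7)
(or (and (oddp 5) (evenp 7)) 'foo)  ; => FOO, because the AND collapsed to NIL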
I'm new to Clojure and I think my approach to writing code so far is not in line with the "Way of Clojure". At least, I keep writing functions that lead to StackOverflow errors with large values. I've learned about using recur, which has been a good step forward. But how do I make functions like the one below work for values like 2500000?
(defn fib [i]
  (if (>= 2 i)
    1
    (+ (fib (dec i))
       (fib (- i 2)))))
The function is, to my eyes, the "plain" implementation of a Fibonacci generator. I've seen other implementations that are much more optimized, but less obvious in terms of what they do. I.e. when you read the function definition, you don't go "oh, fibonacci".
Any pointers would be greatly appreciated!
You need to have a mental model of how your function works. Let's say that you execute your function yourself, using scraps of paper for each invocation. On the first scrap you write (fib 2500000), then you see "oh, I need to calculate (fib 2499999) and (fib 2499998) and finally add them", so you note that and start two new scraps. You can't throw away the first, because it still has things to do for you once you finish the other calculations. You can imagine that this calculation will need a lot of scraps.
Another way is not to start at the top, but at the bottom. How would you do this by hand? You would start with the first numbers, 1, 1, 2, 3, 5, 8 …, and then always add the last two, until you have done it i times. You can even throw away all numbers except the last two at each step, so you can re-use most scraps.
(defn fib [i]
  (loop [a 0
         b 1
         n 1]
    (if (>= n i)
      b
      (recur b
             (+ a b)
             (inc n)))))
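Usage, with one caveat of mine that goes beyond the original answer: for an argument as large as 2500000 the intermediate sums overflow Clojure's longs, so you would swap + for the auto-promoting +' in the recur form.

(fib 10)   ; => 55
;; for (fib 2500000), use (recur b (+' a b) (inc n)) so the sums
;; promote to BigInt instead of throwing an overflow error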
This is also a fairly obvious implementation, but of the how to, not of the what. It always seems quite elegant when you can simply write down a definition and it gets automatically transformed into an efficient calculation, but programming is that transformation. If something gets transformed automatically, then this particular problem has already been solved (often in a more general way).
Thinking "how would I do this step by step on paper" often leads to a good implementaion.
A Fibonacci generator implemented in a "plain" way, as in the definition of the sequence, will always blow your stack up. Neither of the two recursive calls to fib is in tail position, so such a definition cannot be optimised.
Unfortunately, if you'd like to write an efficient implementation working for big numbers you'll have to accept the fact that mathematical notation doesn't translate to code as cleanly as we'd like it to.
For instance, a non-recursive implementation can be found in clojure.contrib.lazy-seqs. A whole range of approaches to this problem can be found on the Haskell wiki. They shouldn't be difficult to understand with a knowledge of the basics of functional programming.
;;from "The Pragmatic Programmers Programming Clojure"
(defn fib [] (map first (iterate (fn [[a b]][b (+ a b)])[0N 1N])))
(nth (fib) 2500000)
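A note of mine on why this works at that scale: the 0N and 1N literals seed the pair with BigInts, so the additions auto-promote and never overflow a long, and since iterate and map are lazy, nth walks the sequence iteratively without growing the stack.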
Nested lists in Common Lisp really confused me. Here is the problem:
By using recursion, let (nested-list 'b '(a (b c) d)) return t if the first argument appears in the second argument (which could be a nested list), and nil otherwise.
I tried find, but it only works if the first argument is '(b c). I then turned to lambda expressions: I wanted to flatten the second argument first, and then use eq to compare the arguments.
(defun nested-list (x y)
  (cond
    ((null y) ())
    (t (append (lambda (flatten) (first y))
Then I got stuck. Even though I have read a lot about lambda expressions, they still confuse me. I do not know how to invoke one when I need it; I know about the funcall function, but I just cannot get it. I have only been learning Common Lisp for 5 days, so I hope you can give me a hint. Thanks a lot!
First of all, unless you mistyped if instead of iff, the problem is quite trivial: just always return t and you're done :-)
Seriously speaking, when you need to solve a problem using recursion, the idea often is simply:
if we're in a trivial case, just return the answer;
otherwise the answer is the same answer we'd get by solving a version of the problem that is just a little bit simpler than this one, and we call ourselves to solve that simplified version.
In this specific case, consider:
1. If the second argument is an empty list, the answer is NIL.
2. If the first argument is equal to the first element of the second argument, just return T instead.
3. Otherwise, if the first element of the second argument is itself a list (and therefore a possibly nested list) and the element is contained in that sublist, return true (to check this case, the function calls itself).
4. Otherwise, solve the same problem after first dropping the first element of the second argument, because it has already been checked (this also calls the same function recursively).
So basically 1 and 2 are the trivial cases; 3 and 4 are the cases in which you solve a simpler version of the problem.
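For readers past the homework stage, here is a direct transcription of those four cases into Common Lisp. This is a sketch of one possible answer, not necessarily the one the course expects:

(defun nested-list (x y)
  (cond ((null y) nil)                     ; case 1: empty list
        ((eql x (first y)) t)              ; case 2: x is the first element
        ((and (consp (first y))            ; case 3: the first element is itself a list,
              (nested-list x (first y)))   ;         so search inside it recursively
         t)
        (t (nested-list x (rest y)))))     ; case 4: drop the first element and continue

(nested-list 'b '(a (b c) d))   ; => T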
If I need to provide a constant value to a function which I am mapping to the items of a sequence, is there a better way than what I'm doing at present:
(map my-function my-sequence (cycle [my-constant-value]))
where my-constant-value is a constant in the sense that it is going to be the same for all the mappings over my-sequence, although it may itself be the result of some function further out. I get the feeling that later I'll look back at what I'm asking here and think it's a silly question, because if I structured my code differently it wouldn't be a problem; but well, there it is!
In your case I would use an anonymous function:
(map #(my-function % my-constant-value) my-sequence)
Using a partially applied function is another option, but it doesn't make much sense in this particular scenario:
(map (partial my-function my-constant-value) my-sequence)
You would (maybe?) need to redefine my-function to take the constant value as the first argument, and since you don't need to accept a variable number of arguments, using partial doesn't buy you anything here.
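A tiny illustration of the argument-order point (my own example, not from the answer above): partial fixes the leading arguments, so the fixed value ends up in front.

((partial + 10) 5)   ; => 15, i.e. (+ 10 5)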
I'd tend to use partial or an anonymous function as dbyrne suggests, but another tool to be aware of is repeat, which returns an infinite sequence of whatever value you want:
(map + (range 4) (repeat 10))
=> (10 11 12 13)
Yet another way, which I sometimes find more readable than map, is the for list comprehension macro:
(for [x my-sequence]
  (my-function x my-constant-value))
Yep :) there is a little gem in the "other useful functions" section of the API: constantly.
(map my-function my-sequence (constantly my-constant-value))
The pattern of (map combines-data something-new a-constant) is rather common in idiomatic Clojure. It's also relatively fast with chunked sequences and such.
EDIT: this answer is wrong, but constantly and the rest of the "other useful functions" API are so cool that I would like to leave the reference to them here anyway.
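For completeness (my addition): constantly returns a function, not a sequence, so passing its result to map as a collection fails; it shines where a function is expected. To supply map with a constant extra collection, repeat (shown in an earlier answer) is the tool:

((constantly 42) :any :args)     ; => 42, ignores its arguments
(map (constantly 0) [1 2 3])     ; => (0 0 0), used *as* the mapping function
(map my-function my-sequence (repeat my-constant-value))   ; what the broken line intended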