Is there a TCO pattern with two accumulating variables?

Just for fun (Project Euler #65) I want to implement the formula
n_k = a_k * n_(k-1) + n_(k-2)
in an efficient way. a_k is either 1 or (* 2 (/ k 3)), depending on k.
I started with a recursive solution:
(defun numerator-of-convergence-for-e-rec (k)
  "Returns the Kth numerator of convergence for Euler's number e."
  (cond ((or (minusp k) (zerop k)) 0)
        ((= 1 k) 2)
        ((= 2 k) 3)
        ((zerop (mod k 3))
         (+ (* 2 (/ k 3) (numerator-of-convergence-for-e-rec (1- k)))
            (numerator-of-convergence-for-e-rec (- k 2))))
        (t (+ (numerator-of-convergence-for-e-rec (1- k))
              (numerator-of-convergence-for-e-rec (- k 2))))))
which works for small k but gets pretty slow for k = 100, obviously.
I have no real idea how to transform this function into a version that could be tail-call optimized. I have seen a pattern using two accumulating variables for Fibonacci numbers but have failed to transfer this pattern to my function.
Is there a general guideline for transforming complex recursions into TCO versions, or should I implement an iterative solution directly?

First, note that memoization is probably the simplest way to optimize your code: it does not reverse the flow of operations; you call your function with a given k and it goes back down to zero to compute the previous values, but with a cache. If, however, you want to turn your function from recursive to iterative with TCO, you'll have to compute things from zero up to k and pretend you have a constant-sized stack / memory.
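For illustration, here is a minimal memoized sketch of the recursive version (the -memo name and the hash-table cache are my additions, not part of the question's code); the top-down flow is unchanged, but each k is computed only once:
;; Sketch of the memoization idea: same recursion, but every result
;; is cached in a hash table keyed on K, so each K is computed once.
(let ((cache (make-hash-table)))
  (defun numerator-of-convergence-for-e-memo (k)
    (or (gethash k cache)
        (setf (gethash k cache)
              (cond ((or (minusp k) (zerop k)) 0)
                    ((= 1 k) 2)
                    ((= 2 k) 3)
                    ((zerop (mod k 3))
                     (+ (* 2 (/ k 3) (numerator-of-convergence-for-e-memo (1- k)))
                        (numerator-of-convergence-for-e-memo (- k 2))))
                    (t (+ (numerator-of-convergence-for-e-memo (1- k))
                          (numerator-of-convergence-for-e-memo (- k 2)))))))))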
Step function
First, write a function which computes the current n, given k and the two previous values n1 and n2:
(defun n (k n1 n2)
  (if (plusp k)
      (case k
        (1 2)
        (2 3)
        (t (multiple-value-bind (quotient remainder) (floor k 3)
             (if (zerop remainder)
                 (+ (* 2 quotient n1) n2)
                 (+ n1 n2)))))
      0))
This step should be easy; here, I rewrote your function a little bit but I actually only extracted the part that computes n given the previous n and k.
Modified function with recursive (iterative) calls
Now, you need to call n for each k from 0 up to the maximal value you want computed, named m hereafter. Thus, I am going to add a parameter m, which controls when the recursion stops, and make the recursive calls with the modified arguments. You can see the arguments are shifted: the current n1 is the next n2, etc.
(defun f (m k n1 n2)
  (if (< m k)
      n1
      (if (plusp k)
          (case k
            (1 (f m (1+ k) 2 n1))
            (2 (f m (1+ k) 3 n1))
            (t (multiple-value-bind (quotient remainder) (floor k 3)
                 (if (zerop remainder)
                     (f m (1+ k) (+ (* 2 quotient n1) n2) n1)
                     (f m (1+ k) (+ n1 n2) n1)))))
          (f m (1+ k) 0 n1))))
That's all, except that you don't want to show this interface to your user. The actual function g properly bootstraps the initial call to f:
(defun g (m)
  (f m 0 0 0))
The trace for this function exhibits an arrow ">" shape, which is the case with tail-recursive functions (tracing is likely to inhibit tail-call optimization):
  0: (G 5)
    1: (F 5 0 0 0)
      2: (F 5 1 0 0)
        3: (F 5 2 2 0)
          4: (F 5 3 3 2)
            5: (F 5 4 8 3)
              6: (F 5 5 11 8)
                7: (F 5 6 19 11)
                7: F returned 19
              6: F returned 19
            5: F returned 19
          4: F returned 19
        3: F returned 19
      2: F returned 19
    1: F returned 19
  0: G returned 19
19
Driver function with a loop
The part that can be slightly difficult, or make your code hard to read, is when we inject tail-recursive calls inside the original function n. I think it is better to use a loop instead, because:
- Unlike with the tail-recursive call, you can guarantee that the code will behave as you wish, without worrying whether your implementation will actually optimize tail calls or not.
- The code for the step function n is simpler and only expresses what is happening, instead of detailing how (tail-recursive calls are just an implementation detail here).
With the above function n, you can change g to:
(defun g (m)
  (loop
    for k from 0 to m
    for n2 = 0 then n1
    for n1 = 0 then n
    for n = (n k n1 n2)
    finally (return n)))
Is there a general guideline for transforming complex recursions into TCO versions, or should I implement an iterative solution directly?
Find a step function which advances the computation from the base case to the general case, and put intermediate variables as parameters, in particular results from past calls. This function can call itself (in which case it will be tail-recursive, because you have to compute all the arguments first), or simply be called in a loop. You have to be careful when computing the initial values; you might have more corner cases than with a simple recursive function.
See also: Scheme's named let, the RECUR macro in Common Lisp, and the recur special form in Clojure.
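As a small illustration of that pattern (my own sketch, not part of the answer above): in Common Lisp, labels gives you a local, tail-recursive helper in the spirit of a named let. Reusing the step function n from above, the driver g could be written as follows; whether the self tail call is actually optimized is implementation-dependent.
;; Sketch only: the same driver as F/G above, written with a local
;; helper via LABELS (similar in spirit to Scheme's named let).
(defun g (m)
  (labels ((iter (k n1 n2)
             (if (< m k)
                 n1
                 (iter (1+ k) (n k n1 n2) n1))))
    (iter 0 0 0)))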

Related

Why is the recursive function performing better than the iterative function in elisp?

As a test for one of my classes, our teacher asked us to test a recursive and non-recursive approach to the famous Euclidean Algorithm:
Iterative
(defun gcdi (a b)
  (let ((x a) (y b) r)
    (while (not (zerop y))
      (setq r (mod x y) x y y r))
    x))
Recursive
(defun gcdr (a b)
  (if (zerop b)
      a
    (gcdr b (mod a b))))
And then I ran a test:
(defun test-iterative ()
  (setq start (float-time))
  (loop for x from 1 to 100000
        do (gcdi 14472334024676221 8944394323791464)) ; Fibonacci Numbers close to 2^64 >:)
  (- (float-time) start))

(defun test-recursive ()
  (setq start (float-time))
  (loop for x from 1 to 100000
        do (gcdr 14472334024676221 8944394323791464)) ; Fibonacci Numbers close to 2^64 >:)
  (- (float-time) start))
And then I ran the timers:
(test-recursive)
: 1.359128475189209
(test-iterative)
: 1.7059495449066162
So my question is this: why did the recursive test perform faster than the iterative test? Isn't iteration almost always better than recursion? Is Elisp an exception to this?
The theoretical answer is that the recursive version is actually tail recursive and thus should compile to iteration. However, disassembling the functions reveals the truth:
byte code for gcdi:
args: (a b)
0 varref a
1 varref b
2 constant nil
3 varbind r
4 varbind y
5 varbind x
6 varref y
7:1 constant 0
8 eqlsign
9 goto-if-not-nil 2
12 constant mod
13 varref x
14 varref y
15 call 2
16 varset r
17 varref y
18 varset x
19 varref r
20 dup
21 varset y
22 goto 1
25:2 varref x
26 unbind 3
27 return
vs
byte code for gcdr:
args: (a b)
0 varref b
1 constant 0
2 eqlsign
3 goto-if-nil 1
6 varref a
7 return
8:1 constant gcdr
9 varref b
10 constant mod
11 varref a
12 varref b
13 call 2
14 call 2
15 return
You can see that the gcdr has almost half as many instructions, but contains two call instructions, which means that ELisp does not, apparently, convert the tail recursive call to iteration.
However, function calls in ELisp are relatively cheap and thus the recursive version executes faster.
PS. While the question makes sense, and the answer is actually generally applicable (e.g., the same approach is valid for Python and CLISP, inter alia), one should be aware that choosing the right algorithm (e.g., linearithmic merge-sort instead of quadratic bubble-sort) is much more important than "micro-optimizations" of the implementation.
Hmm... indeed that's weird, since Emacs's implementation of function calls (and hence recursion) is not very efficient.
I just evaluated the code below:
(defun sm-gcdi (a b)
  (let ((x a) (y b) r)
    (while (not (zerop y))
      (setq r (mod x y) x y y r))
    x))

(defun sm-gcdr (a b)
  (if (zerop b)
      a
    (sm-gcdr b (mod a b))))

(defun sm-test-iterative ()
  (let ((start (float-time)))
    (dotimes (_ 100000)
      (sm-gcdi 14472334024676221 8944394323791464))
    (- (float-time) start)))

(defun sm-test-recursive ()
  (let ((start (float-time)))
    (dotimes (_ 100000)
      (sm-gcdr 14472334024676221 8944394323791464))
    (- (float-time) start)))
and then tried M-: (sm-test-recursive) and M-: (sm-test-iterative) and sure enough the iterative version is faster for me. I then did M-: (byte-compile 'sm-gcdi) and M-: (byte-compile 'sm-gcdr) and tried again, and the speed difference was even larger.
So your measurements come as a surprise to me: they don't match my expectations, and don't match my tests either.

Understanding Let/cc and throw in racket

In class we talked about two functions in Racket, i.e. let/cc and throw. The instructor said that let/cc has something to do with calling with the current continuation and that throw just applies a to b, e.g.
(throw a b)
I haven't been able to find much about these online. I did find a question about let/cc on Stack Overflow, but I have not been able to completely understand from it how let/cc works. Could someone explain these two in simple words along with a simple example?
Edit 1: Also, in my practice midterm exam we are given two questions related to it.
For each of the following expressions that contain uses of let/cc, what is the value of each expression?
(let/cc k (throw (throw k 5) 6))
(let/cc k (throw k (((lambda (x) x) k) (* 5 5))))
The answers to these are 5 and 25 respectively. I just want to understand the two concepts so that I can work with questions like these in my midterm exam.
Let's look at let/cc first.
The expression (let/cc k e) will 1) capture the current continuation (represented as a function) then 2) bind the variable k to the captured continuation and finally 3) evaluate the expression e.
A few examples are in order.
If during evaluation of the expression e the captured continuation k is not called, then the value of the let/cc expression is simply the value(s) to which the expression e evaluates.
> (+ 10 (let/cc k 32))
42
If on the other hand k is called with a value v, then the value of the entire let/cc expression becomes v.
> (+ 10 (let/cc k (+ 1 (k 2))))
12
Notice that the part (+ 1 _) around the call (k 2) is skipped.
The value is returned to the continuation of the (let/cc ...) expression immediately.
The most common use of let/cc is to mimic the control construct return known from many statement-based languages. Here is the classical is-it-a-leap-year problem:
(define (divisible-by? y k)
  (zero? (remainder y k)))

(define (leap-year? y)
  (let/cc return
    (when (not (divisible-by? y 4))
      (return #f))
    (when (not (divisible-by? y 100))
      (return #t))
    (when (not (divisible-by? y 400))
      (return #f))
    #t))

(for/list ([y (in-range 1898 1906)])
  (list y (leap-year? y)))
Now what about throw? That depends on which throw we are talking about.
Is it the one from misc1/throw?
Or perhaps the one from Might's article? http://matt.might.net/articles/programming-with-continuations--exceptions-backtracking-search-threads-generators-coroutines/
Or perhaps you are using the definition
(define (throw k v)
  (k v))
If the latter, then you can replace (k v) in my examples with (throw k v).
UPDATE
Note that the continuation bound to k can be used more than once - it can also be used outside the let/cc expression. Consider this example:
(define n 0)

(let ([K (let/cc k k)])
  (when (< n 10)
    (displayln n)
    (set! n (+ n 1))
    (K K)))
Can you figure out what it does without running it?
Here is a "step by step" evaluation of the nested let/cc example.
(let/cc k0
  ((let/cc k1
     (k0 (sub1 (let/cc k2
                 (k1 k2)))))
   1))

(let/cc k0
  (k2 1))

(let/cc k0
  ((let/cc k1
     (k0 (sub1 1)))
   1))

(let/cc k0
  ((let/cc k1
     (k0 0))
   1))
0

Defining a recursive function as iterative?

I have the following recursive function that I need to convert to iterative in Scheme
(define (f n)
  (if (< n 3)
      n
      (+ (f (- n 1))
         (* 2 (f (- n 2)))
         (* 3 (f (- n 3))))))
My issue is that I'm having difficulty converting it to an iterative process (i.e. making it run in linear time), and I can't figure out how to do it.
The function is defined as follows:
f(n) = n if n<3 else f(n-1) + 2f(n-2) + 3f(n-3)
I have tried to calculate it for 5 linearly, like so
1 + 2 + f(3) + f(4) + f(5)
But in order to calculate, say, f(5) I'd need to refer back to f(4), f(3), f(2), and for f(4) I'd have to refer back to f(3), f(2), f(1).
This is a problem from the SICP book.
In the book, the authors give an example of formulating an iterative process for computing the Fibonacci numbers:
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
The point here is to use two parameters, a and b, to remember f(n+1) and f(n) during the computation. The same idea applies here: we need a, b, and c to remember f(n+2), f(n+1) and f(n).
;; an iterative process implementation
(define (f-i n)
  ;; f2 is f(n+2), f1 is f(n+1), f0 is f(n)
  (define (iterative-f f2 f1 f0 count)
    (cond
      ((= count 0) f0)
      (else (iterative-f
             (+ f2 (* f1 2) (* f0 3))
             f2
             f1
             (- count 1)))))
  (iterative-f 2 1 0 n))

Recursive Addition in Lisp

I'm a beginner teaching myself Common Lisp, and during my self-study I have written a function that I believe should perform recursive addition of two arguments. However, the function always fails. Why is that?
(defun sum (n m)
;;;Returns the sum of n and m using recursion
(cond ((eq m 0) n))
(sum (1+ n) (1- m)))
As I understand it, it should continually add 1 to n while decrementing m until m is 0, at which point the recursive addition is complete.
I think you have two simple typos:
- one parenthesis too many, and
- a missing t in your cond clause.
What you probably meant was:
(defun sum (n m)
  (cond
    ((eq m 0) n)              ; here you had one parenthesis too many
    (t (sum (1+ n) (1- m))))) ; here you were missing the `t` symbol
It's a really weird way to do addition, but I'll explain where your mistakes are:
(defun sum (n m)
  ;;;Returns the sum of n and m using recursion
  (cond ((eq m 0) n))    ;; <= this clause's value is ignored; you are not returning N
  (sum (1+ n) (1- m)))   ;; <= this will be called forever
You should write:
(defun sum (n m)
  "Recursively increment N and decrement M until M = 0."
  (if (= m 0)               ;; don't use EQ for numbers, use EQL or =
      n
      (sum (1+ n) (1- m)))) ;; otherwise, recursive call
Let's trace it to see how it works:
CL-USER> (sum 0 10)
  0: (SUM 0 10)
    1: (SUM 1 9)
      2: (SUM 2 8)
        3: (SUM 3 7)
          4: (SUM 4 6)
            5: (SUM 5 5)
              6: (SUM 6 4)
                7: (SUM 7 3)
                  8: (SUM 8 2)
                    9: (SUM 9 1)
                      10: (SUM 10 0)
                      10: SUM returned 10
                    9: SUM returned 10
                  8: SUM returned 10
                7: SUM returned 10
              6: SUM returned 10
            5: SUM returned 10
          4: SUM returned 10
        3: SUM returned 10
      2: SUM returned 10
    1: SUM returned 10
  0: SUM returned 10
10
If you'll take some advice: don't try to do such weird things with recursion. If you want to learn how to use it, try it on naturally recursive cases like factorials, Fibonacci numbers, tree processing and such.
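For instance (my own illustration, not part of the answer), compare a naturally recursive factorial with an accumulator version whose recursive call is in tail position:
;; Illustration only: natural recursion vs. an accumulator version.
(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (1- n)))))          ; not a tail call: * runs after the recursion

(defun factorial-acc (n &optional (acc 1))
  (if (<= n 1)
      acc
      (factorial-acc (1- n) (* n acc))))  ; tail call: nothing left to do afterwards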

How to improve this piece of code?

My solution to exercise 1.11 of SICP is:
(define (f n)
  (if (< n 3)
      n
      (+ (f (- n 1))
         (* 2 (f (- n 2)))
         (* 3 (f (- n 3))))))
As expected, an evaluation such as (f 100) takes a long time. I was wondering if there was a way to improve this code (without foregoing the recursion), and/or take advantage of a multi-core box. I am using 'mit-scheme'.
The exercise tells you to write two functions, one that computes f "by means of a recursive process", and another that computes f "by means of an iterative process". You did the recursive one. Since this function is very similar to the fib function given in the examples of the section you linked to, you should be able to figure this out by looking at the recursive and iterative examples of the fib function:
; Recursive
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

; Iterative
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
In this case you would define an f-iter function which would take a, b, and c arguments as well as a count argument.
Here is the f-iter function. Notice the similarity to fib-iter:
(define (f-iter a b c count)
  (if (= count 0)
      c
      (f-iter (+ a (* 2 b) (* 3 c)) a b (- count 1))))
And through a little trial and error, I found that a, b, and c should be initialized to 2, 1, and 0 respectively, which also follows the pattern of the fib function initializing a and b to 1 and 0. So f looks like this:
(define (f n)
  (f-iter 2 1 0 n))
Note: f-iter is still a recursive function, but because of the way Scheme works it runs as an iterative process, in O(n) time and O(1) space, unlike your code, which is not only a recursive function but also a recursive process. I believe this is what the author of Exercise 1.11 was looking for.
I'm not sure how best to code it in Scheme, but a common technique to improve speed on something like this would be to use memoization. In a nutshell, the idea is to cache the result of f(p) (possibly for every p seen, or possibly the last n values) so that the next time you call f(p), the saved result is returned rather than being recalculated. In general, the cache would be a map from a tuple (representing the input arguments) to the return value.
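The answer gives no code; as a rough sketch of the pattern only (written in Common Lisp rather than the question's MIT Scheme, with a hash table standing in for the cache), it might look like this:
;; Rough sketch of the memoization pattern, not drop-in code for the
;; exercise: cache each computed f(n) in a hash table keyed on n.
(let ((cache (make-hash-table)))
  (defun f-memo (n)
    (if (< n 3)
        n
        (or (gethash n cache)
            (setf (gethash n cache)
                  (+ (f-memo (- n 1))
                     (* 2 (f-memo (- n 2)))
                     (* 3 (f-memo (- n 3)))))))))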
Well, if you ask me, think like a mathematician. I can't read Scheme, but if you're coding a Fibonacci function, instead of defining it recursively, solve the recurrence and define it with a closed form. For the Fibonacci sequence, the closed form can be found here for example. That'll be MUCH faster.
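For reference, the closed form for Fibonacci (Binet's formula) is F(n) = (phi^n - psi^n) / sqrt(5), with phi = (1 + sqrt(5)) / 2 and psi = (1 - sqrt(5)) / 2. The same approach, solving the characteristic equation of the recurrence, applies to f as well, though its cubic has no such tidy roots.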
Edit: oops, didn't see that you said you don't want to forgo the recursion. In that case, your options are much more limited.
See this article for a good tutorial on developing a fast Fibonacci function with functional programming. It uses Common LISP, which is slightly different from Scheme in some aspects, but you should be able to get by with it. Your implementation is equivalent to the bogo-fig function near the top of the file.
To put it another way:
- To get tail recursion, the recursive call has to be the very last thing the procedure does.
- Your recursive calls are embedded within the * and + expressions, so they are not tail calls (since the * and + are evaluated after the recursive call).
- Jeremy Ruten's version of f-iter is tail-recursive rather than iterative (i.e. it looks like a recursive procedure but is as efficient as the iterative equivalent).
However you can make the iteration explicit:
(define (f n)
  (let iter ((a 2) (b 1) (c 0) (count n))
    (if (<= count 0)
        c
        (iter (+ a (* 2 b) (* 3 c)) a b (- count 1)))))
or
(define (f n)
  (do ((a 2 (+ a (* 2 b) (* 3 c)))
       (b 1 a)
       (c 0 b)
       (count n (- count 1)))
      ((<= count 0) c)))
That particular exercise can be solved by using tail recursion - instead of waiting for each recursive call to return (as is the case in the straightforward solution you present), you can accumulate the answer in a parameter, in such a way that the recursion behaves exactly the same as an iteration in terms of the space it consumes. For instance:
(define (f n)
  (define (iter a b c count)
    (if (zero? count)
        c
        (iter (+ a (* 2 b) (* 3 c))
              a
              b
              (- count 1))))
  (if (< n 3)
      n
      (iter 2 1 0 n)))
