I was wondering how to implement a tail-recursive power function in Scheme.
I've got my recursive version defined:
(define power-is-fun
  (lambda (x y)
    (cond [(= y 0) 1]
          [(> y 0) (* (power-is-fun x (- y 1)) x)])))
But I can't quite figure out how the other one is supposed to be.
The answer is similar, you just have to pass the accumulated result as a parameter:
(define power-is-fun
  (lambda (x y acc)
    (cond [(= y 0) acc]
          [(> y 0) (power-is-fun x (- y 1) (* x acc))])))
Call it like this; notice that the initial value for acc is 1 (you can build a helper function for this, so you don't have to remember to pass the 1 every time):
(power-is-fun 2 3 1)
> 8
Some general pointers for transforming a recursive procedure into a tail-recursive one:
Add an extra parameter to the function to hold the result accumulated so far
Pass the initial value for the accumulator the first time you call the procedure; typically this is the same value that you'd have returned at the base case of a "normal" (non-tail-recursive) recursion.
Return the accumulator at the base case of the recursion
At the recursive step, update the accumulated result with a new value and pass it to the recursive call
And most important of all: when the time comes to make the recursive call, make sure it is the last expression evaluated, with no "additional work" to be performed afterwards. For example, in your original code you perform a multiplication after calling power-is-fun, whereas in the tail-recursive version the call to power-is-fun is the last thing that happens before the procedure exits.
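The steps above can be sketched in Python as well (Python doesn't eliminate tail calls, but the shape of the transformation is identical; the function names are mine):

```python
def power_rec(x, y):
    # Plain recursion: the multiplication happens AFTER the recursive
    # call returns, so each frame has pending work on the stack.
    if y == 0:
        return 1
    return power_rec(x, y - 1) * x

def power_acc(x, y, acc=1):
    # Tail-recursive form: the result accumulated so far travels in
    # `acc`, and the recursive call is the very last expression.
    if y == 0:
        return acc
    return power_acc(x, y - 1, x * acc)
```

The default argument `acc=1` plays the role of the helper function mentioned above: callers can simply write `power_acc(2, 3)`.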
By the way, there is a faster way to implement integer exponentiation. Rather than multiplying x over and over, y times (which makes the algorithm O(y)), there is an approach that is O(log y) in time complexity:
(define (integer-expt x y)
  (do ((x x (* x x))
       (y y (quotient y 2))
       (r 1 (if (odd? y) (* r x) r)))
      ((zero? y) r)))
If you dislike do (as many Schemers I know do), here's a version that tail-recurses explicitly (you can also write it with named let, of course):
(define (integer-expt x y)
  (define (inner x y r)
    (if (zero? y)
        r
        (inner (* x x)
               (quotient y 2)
               (if (odd? y) (* r x) r))))
  (inner x y 1))
(Any decent Scheme implementation should macro-expand both versions to exactly the same code, by the way. Also, just like Óscar's solution, I use an accumulator, only here I call it r (for "result").)
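Since tail recursion is just iteration, the same O(log y) square-and-multiply idea can also be written as an explicit loop; here is a minimal Python sketch (the name integer_expt simply mirrors the Scheme version above):

```python
def integer_expt(x, y):
    # Square-and-multiply: walk through the bits of the exponent,
    # squaring the base at each step and folding it into the result
    # whenever the current low bit is set.
    r = 1
    while y > 0:
        if y % 2 == 1:
            r = r * x
        x = x * x
        y = y // 2
    return r
```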
Óscar's list of advice is excellent.
If you'd like a more in-depth treatment of linear iterative (i.e., tail-recursive) versus linear recursive processes, and of tree recursion, don't miss out on the great treatment in SICP:
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2
I'm trying to write a map function in a Haskell-like language. The operation I'm trying to use is a fold_right. So basically writing a map using foldr. However, I get a "parameter mismatch error".
Thank you.
map = lambda X. lambda Y.
        lambda f: X -> Y.
          lambda l: List X.
            l [X] (
              lambda hd: X.
                lambda tl: List Y.
                  (cons [Y] (f hd)) (tl)
            ) (nil [Y]);
The first argument to l should be the type of the result of the fold (at least, that's my educated guess), the "motive". You want the end result to be a List Y, not an X, so you should say that:
map =
lambda X. lambda Y. lambda f: X -> Y. lambda l: List X.
l
[List Y]
(lambda x: X. lambda rec_xs: List Y. cons [Y] (f x) rec_xs)
(nil [Y]);
Maybe you got confused and wrote X because l is a List X? l already knows that it contains Xs; you don't need to point that out again. You just need to point out what you want to get out (which could be X, but isn't in this case).
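If it helps to see the same recipe without the type annotations, here is a Python sketch of map written as a right fold (foldr and the function names are mine, not part of the language in the question):

```python
from functools import reduce

def foldr(f, z, xs):
    # Right fold: computes f(x1, f(x2, ... f(xn, z) ...)).
    return reduce(lambda acc, x: f(x, acc), reversed(xs), z)

def map_via_foldr(f, xs):
    # The "motive" of the fold is the RESULT type (a list of mapped
    # values): each step conses f(head) onto the already-mapped tail.
    return foldr(lambda hd, tl: [f(hd)] + tl, [], xs)
```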
I'm trying to solve the n-rooks problem with tail recursion, since it is faster than standard recursion, but am having trouble figuring out how to make it all work. I've looked up the theory behind this problem and found that the solution is given by something called "telephone numbers," which satisfy the recurrence:
T(n) = T(n-1) + (n-1) * T(n-2)
where T(1) = 1 and T(2) = 2.
I have created a recursive function that evaluates this equation, but it only runs quickly up to about T(40), and I need to calculate it for n > 1000, which by my current estimates will take days of computation.
Tail recursion seems to be my best option, but I was hoping someone here might know how to program this relation using tail recursion, as I don't really understand it.
I'm working in LISP, but would be open to using any language that supports tail recursion
Tail recursion is just a loop expressed as recursion.
When you have a recursive definition that "forks", the naive implementation is most likely exponential in time, going from T(n) to T(n-1) and T(n-2), then T(n-2), T(n-3), T(n-3), T(n-4), doubling the computation at each step.
The trick is reversing the computation so that you build up from T(1) and T(2). This just needs constant time at each step so the overall computation is linear.
Start with
(let ((n 2)
(t-n 2)
(t-n-1 1))
…)
Let's use a do loop for updating:
(do ((n 2 (1+ n))
     (t-n 2 (+ t-n (* n t-n-1)))
     (t-n-1 1 t-n))
    …)
Now you just need to stop when you reach your desired n:
(defun telephone-number (x)
  (do ((n 2 (1+ n))
       (t-n 2 (+ t-n (* n t-n-1)))
       (t-n-1 1 t-n))
      ((= n x) t-n)))
To be complete, check your inputs:
(defun telephone-number (x)
  (check-type x (integer 1))
  (if (< x 3)
      x
      (do ((n 2 (1+ n))
           (t-n 2 (+ t-n (* n t-n-1)))
           (t-n-1 1 t-n))
          ((= n x) t-n))))
Also, do write tests and add documentation explaining what this is for and how to use it. This code is untested so far.
When writing this tail-recursively, you recurse with the new values:
(defun telephone (x)
  (labels ((tel-aux (n t-n t-n-1)
             (if (= n x)
                 t-n
                 (tel-aux (1+ n)
                          (+ t-n (* n t-n-1))
                          t-n))))
    (tel-aux 2 2 1)))
When tail recursion is optimized, this scales like the loop (but the constant factor might differ). Note that Common Lisp does not mandate tail call optimization.
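Since the question was open to any language, the same build-up-from-the-base-cases loop is easy to check in Python, whose arbitrary-precision integers handle n > 1000 without trouble (the function name is mine):

```python
def telephone_number(x):
    # Iterative form of T(n) = T(n-1) + (n-1)*T(n-2), with
    # T(1) = 1 and T(2) = 2; linear in x instead of exponential.
    if x < 3:
        return x
    t_prev, t = 1, 2                   # T(1), T(2)
    for n in range(2, x):
        t_prev, t = t, t + n * t_prev  # T(n+1) = T(n) + n*T(n-1)
    return t
```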
I would like to perform some symbolic calculations in Lisp.
I found a useful derivative function, and I would like to know how to write a simple recursive function to add/subtract/etc. polynomials.
Input (e.g.): (addpolynomial '(+ (^ (* 2 x) 5) 3) '(+ (^ (* 3 x) 5) (^ (* 3 x) 2)))
Output: (+ (^ (* 5 x) 5) (^ (* 3 x) 2) 3)
Do you know how to do this?
Or maybe you know other symbolic calculation examples?
When I've dealt with polynomials in Lisp in the past, I've used arrays of numbers (letting the variable be assumed, which means I couldn't trivially have things like "x*x + y", but since I didn't need that...).
That allows you to represent "2x^5 + 3" as #(3 0 0 0 0 2), finding the coefficient of x^n with (aref poly n), and other handy operations.
This also allows you to define addition as simply (map 'vector #'+ ...) (multiplication requires a bit more work).
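As an illustration of that representation, here is a small Python sketch using plain lists as the coefficient vectors (lowest degree first, as in the #(3 0 0 0 0 2) example; the function names are mine):

```python
def poly_add(p, q):
    # Pad the shorter coefficient vector with zeros, then add pointwise;
    # e.g. 2x^5 + 3 is [3, 0, 0, 0, 0, 2].
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # The "bit more work": the coefficient of x^k in p*q is the sum
    # of p[i]*q[j] over all i + j == k (a convolution).
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r
```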
The problem is to find the nth power x^n of a number x, where n is a positive integer. What is the difference between the two pieces of code below? They both produce the same result.
This is the code for the first one:
(define (power x n)
  (define (square n) (* n n))
  (cond ((= n 1) x)
        ((even? n) (square (power x (/ n 2))))
        (else (* (power x (- n 1)) x))))
This is the second one:
(define (power x n)
  (if (= n 1)
      x
      (* x (power x (- n 1)))))
The difference is in the time it takes for the two algorithms to run.
The second one is the simpler, but also less efficient: it requires O(n) multiplications to calculate x^n.
The first one is called the square-and-multiply algorithm. Essentially, it uses the binary representation of the exponent, and uses the identities
x^(ab) = ((x^a)^b)
x^(a+b) = (x^a)(x^b)
to calculate the result. It needs only O(log n) multiplications to calculate the result.
Wikipedia has some detailed analysis of this.
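To make the difference concrete, here is a Python sketch of both algorithms instrumented to count multiplications (a rough illustration; the names are mine):

```python
def power_fast(x, n):
    # Square-and-multiply; returns (result, multiplications used).
    if n == 1:
        return x, 0
    if n % 2 == 0:
        r, c = power_fast(x, n // 2)
        return r * r, c + 1
    r, c = power_fast(x, n - 1)
    return r * x, c + 1

def power_slow(x, n):
    # Repeated multiplication; always uses n - 1 multiplications.
    if n == 1:
        return x, 0
    r, c = power_slow(x, n - 1)
    return x * r, c + 1
```

For n = 16, the fast version uses 4 multiplications (four squarings), while the slow one uses 15.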
In lambda calculus, (λx. λy. λs. λz. x s (y s z)) is used for addition of two Church numerals. How can we explain this? Is there any good resource on the lambda calculus for functional programming? Your help is much appreciated.
Actually, λf1. λf2. λs. λz. (f1 s (f2 s z)) computes addition because it is, in effect, substituting (f2 s z), the number represented by f2, for the "zero" inside (f1 s z).
Example: let's take two for f2; in expanded form that is s (s z). f1 is one: s z. Replace that last z by f2's expansion and you get s (s (s z)), the expanded form for three.
This would be easier with a blackboard and hand-waving, sorry.
In lambda calculus, you code a datatype in terms of the operations it induces. For instance, a boolean is just a choice function that takes as input two values a and b and returns either a or b:
true  = \a,b. a
false = \a,b. b
What is the use of a natural number? Its main computational purpose is to provide a bound to iteration. So, we code a natural number n as an operator that takes as input a function f and a value x, and iterates the application of f over x n times:

n = \f,x. f (f (... (f x) ...))

with n occurrences of f.
Now, if you want to iterate the function f n + m times starting from x, you must first iterate n times, that is (n f x), and then iterate m additional times, starting from that result. That is:

m f (n f x)
Similarly, if you want to iterate n*m times, you iterate m times the operation of iterating f n times (like two nested loops). That is:

m (n f) x
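These encodings are easy to check concretely; here is a Python sketch where a Church numeral is a function taking f and x, and to_int decodes a numeral by counting applications:

```python
# Church numerals: n takes a function f and a value x and applies
# f to x exactly n times.
zero  = lambda f: lambda x: x
two   = lambda f: lambda x: f(f(x))
three = lambda f: lambda x: f(f(f(x)))

# Addition: iterate f n more times on top of m's iterations.
plus = lambda n: lambda m: lambda f: lambda x: n(f)(m(f)(x))

# Multiplication: iterate "apply f n times" m times (nested loops).
times = lambda n: lambda m: lambda f: lambda x: m(n(f))(x)

def to_int(n):
    # Decode by applying the ordinary successor to 0, n times.
    return n(lambda k: k + 1)(0)
```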
This encoding of datatypes is more formally explained in terms of constructors and their corresponding eliminators (the so-called Böhm-Berarducci encoding).