I have an assignment to make a tail-recursive function that takes 3 integers (possibly very large), p, q and r, and calculates the remainder of the division p^q / r, i.e. (p^q) mod r. I figured out how to make a function that achieves the goal, but it is not tail-recursive.
(define (mod-exp p q r)
  (if (= 0 p)
      0
      (if (= 0 q)
          1
          (if (= 0 (remainder q 2))
              (remainder (* (mod-exp p (quotient q 2) r)
                            (mod-exp p (quotient q 2) r))
                         r)
              (remainder (* (remainder p r)
                            (remainder (mod-exp p (- q 1) r) r))
                         r)))))
I'm having a hard time wrapping my head around making this tail-recursive; I don't see how I can "accumulate" the remainder.
I'm pretty much restricted to using the basic math operators, plus quotient and remainder, for this task.
I see that you're implementing binary exponentiation, with the extra feature that it's reduced mod r.
What you may want to do is take a normal (tail-recursive) binary exponentiation algorithm and simply change the 2-ary functions + and * to your own user-defined 3-ary functions +/mod and */mod, which also take r and reduce the result mod r before returning it.
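For instance, those helpers could be as simple as this (the names +/mod and */mod come from the suggestion above; the bodies are a sketch of my own):

;; Apply the ordinary operator, then reduce the result mod r before returning it.
(define (+/mod a b r) (remainder (+ a b) r))
(define (*/mod a b r) (remainder (* a b) r))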
Now how do you do binary exponentiation in a tail-recursive way? You need the main function to call into a helper function that takes an extra accumulator parameter, with initial value 1. This is similar to writing a tail-recursive REVERSE using a helper function REVAPPEND, if you're familiar with that.
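For example, a minimal sketch of that shape, using the */mod helper above (the internal name loop, the argument order, and the invariant comment are my own choices, not part of the assignment):

(define (mod-exp p q r)
  ;; Invariant: (base^q * acc) mod r is the final answer.
  ;; Both calls to loop are in tail position, so the stack stays constant.
  (define (loop base q acc)
    (cond ((= q 0) acc)
          ((= 0 (remainder q 2))
           (loop (*/mod base base r) (quotient q 2) acc))
          (else
           (loop base (- q 1) (*/mod acc base r)))))
  (loop (remainder p r) q 1))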
Hope that helps and feel free to ask if you need more information.
Structure and Interpretation of Computer Programs defines Fermat's Little Theorem thus:
If n is a prime number and a is any positive integer less than n, then a raised to the nth power is congruent to a modulo n. (Two numbers are said to be congruent modulo n if they both have the same remainder when divided by n. The remainder of a when divided by n is also referred to as the remainder of a modulo n, or simply a modulo n.)
Based on that description, I write this code:
(define (fermat-test a n)
  (congruent? (expt a n) a n))

(define (congruent? x y n)
  (= (modulo x n)
     (modulo y n)))
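For example, these calls behave as I would expect (illustrative values of my own):

(fermat-test 3 5) ; => #t, since 3^5 = 243 and 243 mod 5 = 3 = 3 mod 5
(fermat-test 2 9) ; => #f, since 2^9 = 512, 512 mod 9 = 8, but 2 mod 9 = 2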
Later, SICP says this:
This leads to the following algorithm for testing primality: Given a number n, pick a random number a < n and compute the remainder of a^n modulo n. If the result is not equal to a, then n is certainly not a prime.
and gives this code:
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random (- n 1)))))
(define (expmod base exp m)
  ;; square is defined earlier in SICP as (define (square x) (* x x))
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
where expmod is "a procedure that computes the exponential of a number modulo another number".
I don't understand the correspondence between this code and the first definition of Fermat's theorem. I understand "a raised to the nth power is congruent to a modulo n" as: a^n modulo n = a modulo n. But SICP's code seems to imply a^n modulo n = a. The condition in the fermat-test procedure does not include a modulo n. Of course, their implementation works, so I must be misunderstanding.
Please note, this is not a confusion about recursive versus iterative processes.
The condition in the test has a rather than a modulo n because if a < n, then a modulo n is just a, so the extra modulo n is superfluous.
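For instance, with a = 7 and n = 13 the two forms of the test agree (an illustrative check, reusing the expmod quoted from the book above):

(= (expmod 7 13 13) 7)              ; SICP's condition
(= (modulo (expmod 7 13 13) 13)
   (modulo 7 13))                   ; the fully spelled-out congruence; both are #t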
What they are really testing is if n is a Fermat pseudoprime. It doesn't work as a full-fledged test for primality (and SICP doesn't claim that it does) but the ideas involved in the test ultimately lead to the fully practical Miller-Rabin test.
(expt a n) will compute a very large number
(expmod a n n) will compute a number in the range 0 to n - 1
the two values will be congruent (modulo n).
The reason for using expmod is that fermat-test can be made much faster by using it. The fermat-test function will give the same result either way.
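To see the speed point concretely (illustrative numbers of my own):

(remainder (expt 7 1000) 13) ; => 9, but only after building the 846-digit number 7^1000
(expmod 7 1000 13)           ; => 9 as well, with every intermediate value staying below 13^2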
I'm working through the book SICP, and for exercise 1.15:
Exercise 1.15. The sine of an angle (specified in radians) can be computed by making use of the approximation sin x ≈ x if x is sufficiently small, and the trigonometric identity

sin x = 3 sin(x/3) - 4 (sin(x/3))^3

to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered "sufficiently small" if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following procedures:
(define (cube x) (* x x x))

(define (p x) (- (* 3 x) (* 4 (cube x))))

(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
a. How many times is the procedure p applied when (sine 12.15) is evaluated?
b. What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?
The answer I get by myself for the "order of growth in number of steps" is log3 a (log base 3 of a). But I found something saying that the constants in the expression can be ignored, so it's the same as log a, which looks simpler.
I understand that 2n can be simplified to n, and 2n^2 + 1 can be simplified to n^2, but I'm not sure whether this applies to log3 a too.
Yes, you can (since we're just interested in the order of the number of steps, not the exact number of steps)
Consider the formula for changing the base of a logarithm:
logb(x) = loga(x) / loga(b)
In other words you can rewrite log3(a) as:
log3(a) = log10(a) / log10(3)
Since log10(3) is just a constant, then for the order of growth we are only interested in the log10(a) term.
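A quick numeric check of that in Scheme (log3 is my own helper name; log here is the natural logarithm):

(define (log3 x) (/ (log x) (log 3)))

(log3 12.15)                  ; ≈ 2.27
(* (/ 1 (log 3)) (log 12.15)) ; the same value: log3 a is just (log a) times the constant 1/(log 3) ≈ 0.91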
(define (myminus x y)
  (cond ((zero? y) x)
        (else (sub1 (myminus x (sub1 y))))))

(define (myminus_v2 x y)
  (cond ((zero? y) x)
        (else (myminus_v2 (sub1 x) (sub1 y)))))
Please comment on the differences between these functions in terms of how much memory is required on the stack for each recursive call. Also, which version might you expect to be faster, and why?
Thanks!
They should both have a number of steps proportional to y.
The second one makes a tail call, meaning the interpreter can perform tail-call elimination, so it takes up constant space on the stack, whereas in the first the size of the stack is proportional to y.
myminus builds up y continuations, each waiting to apply sub1 to what the recursive call evaluates to. This means you can exhaust Racket's memory limit and make the program fail; in my trials, even a y as small as 10 million will not succeed with the standard 128MB limit in DrRacket.
myminus_v2 is tail-recursive, and since Racket honors Scheme's requirement that tail calls be optimized into a goto rather than growing the stack, y can be of any size; only your available memory and processing power limit it.
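A rough way to see the difference for yourself in DrRacket (the exact cutoff depends on your memory limit, so treat the numbers as illustrative):

(myminus_v2 10000000 10000000) ; => 0, runs in constant stack space

;; (myminus 10000000 10000000) ; builds roughly 10 million pending sub1
;;                             ; applications and can blow past the default
;;                             ; 128MB memory limit before finishing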
Your procedures are fine examples of Peano arithmetic.
By primitive expressions, I mean + - * / sqrt, unless there are others that I am missing. I'm wondering how to write a Scheme expression that finds the 6th root using only these functions.
I know that I can find the cube root of the square root, but cube root doesn't appear to be a primitive expression.
Consider expt, passing in a fractional power as its second argument.
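For example (whether the result comes back exact or as a nearby inexact number depends on your Scheme implementation):

(expt 64 1/6)           ; the sixth root of 64, i.e. 2 (possibly an inexact value very close to 2)
(expt 64.0 (/ 1.0 6.0)) ; the same thing computed entirely in floating point, ≈ 2.0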
But let's say we didn't know about expt. Could we still compute it?
One way to do it is by applying something like Newton's method. For example, let's say we wanted to compute n^(1/4). Of course, we already know we can just take the sqrt twice to do this, but let's see how Newton's method may apply to this problem.
Given n, we'd like to discover roots x of the function:
f(x) = x^4 - n
Concretely, if we wanted to look for 16^(1/4), then we'd look for a root for the function:
f(x) = x^4 - 16
We already know if we plug in x=2 in there, we'll discover that 2 is a root of this function. But say that we didn't know that. How would we discover the x values that make this function zero?
Newton's method says that if we have a guess at x, call it x_0, we can improve that guess by doing the following process:
x_1 = x_0 - f(x_0) / f'(x_0)
where f'(x) is notation for the derivative of f(x). For the case above, the derivative of f(x) is 4x^3.
And we can get better guesses x_2, x_3, ... by repeating the computation:
x_2 = x_1 - f(x_1) / f'(x_1)
x_3 = x_2 - f(x_2) / f'(x_2)
...
until we get tired.
Let's write this all in code now:
(define (f x)
  (- (* x x x x) 16))

(define (f-prime x)
  (* 4 x x x))

(define (improve guess)
  (- guess (/ (f guess)
              (f-prime guess))))

(define approx-quad-root-of-16
  (improve (improve (improve (improve (improve 1.0))))))
The code above just expresses f(x), f'(x), and the idea of improving an initial guess five times. Let's see what the value of approx-quad-root-of-16 is:
> approx-quad-root-of-16
2.0457437305170534
Hey, cool. It's actually doing something, and it's close to 2. Not bad for starting with such a bad first guess of 1.0.
Of course, it's a little silly to hardcode 16 in there. Let's generalize, and turn it into a function that takes an arbitrary n instead, so we can compute the quad root of anything:
(define (approx-quad-root-of-n n)
  (define (f x)
    (- (* x x x x) n))
  (define (f-prime x)
    (* 4 x x x))
  (define (improve guess)
    (- guess (/ (f guess)
                (f-prime guess))))
  (improve (improve (improve (improve (improve 1.0))))))
Does this do something effective? Let's see:
> (approx-quad-root-of-n 10)
1.7800226459895
> (expt (approx-quad-root-of-n 10) 4)
10.039269440807693
Cool: it is doing something useful. But note that it's not that precise yet. To get better precision, we should keep calling improve, and not just four or five times. Think loops or recursion: repeat the improvement till the solution is "close enough".
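Here is one way that might look (the names quad-root, close-enough?, and iterate, and the tolerance value, are my own choices rather than part of the answer above):

;; Fourth root by repeated Newton improvement; assumes n >= 0.
(define (quad-root n)
  (define (f x) (- (* x x x x) n))
  (define (f-prime x) (* 4 x x x))
  (define (improve guess)
    (- guess (/ (f guess) (f-prime guess))))
  ;; Stop once an improvement step barely changes the guess any more.
  (define (close-enough? old new)
    (< (abs (- old new)) (+ 1e-12 (* 1e-12 (abs new)))))
  (define (iterate guess)
    (let ((next (improve guess)))
      (if (close-enough? guess next)
          next
          (iterate next))))
  (iterate 1.0))

;; (quad-root 16.0) ; => very close to 2.0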
This is a sketch of how to solve these kinds of problems. For a little more detail, look at the section on computing square roots in Structure and Interpretation of Computer Programs.
You may want to try a numeric approach, which may be inefficient for larger numbers, but it works.
Also, if you count pow as a primitive (since you also count sqrt), you could do this:
pow(yournum, 1.0/6.0);
I am currently reading Simon Thompson's The Craft of Functional Programming and when describing recursion, he also mentions a form of recursion called Primitive Recursion.
Can you please explain how this type of recursion is different from "normal" recursive functions?
Here's an example of a primitive recursive function (in Haskell):
power2 n
  | n == 0 = 1
  | n > 0  = 2 * power2 (n - 1)
A simplified answer is that primitive recursive functions are those which are defined in terms of other primitive recursive functions, and recursion on the structure of natural numbers. Natural numbers are conceptually like this:
data Nat
  = Zero
  | Succ Nat  -- Succ is short for 'successor of', i.e. n+1
This means you can recurse on them like this:
f Zero = ...
f (Succ n) = ...
We can write your example as:
power2 Zero = Succ Zero -- (Succ 0) == 1
power2 (Succ n) = 2 * power2 n -- this is allowed because (*) is primitive recursive as well
Composition of primitive recursive functions is also primitive recursive.
Another example is Fibonacci numbers:
fib Zero = Zero
fib (Succ Zero) = (Succ Zero)
fib (Succ n@(Succ n')) = fib n + fib n' -- addition is primitive recursive
Primitive recursive functions are a (mathematician's) natural response to the halting problem, by stripping away the power to do arbitrary unbounded self recursion.
Consider an "evil" function
f n
  | n is an odd perfect number = true
  | otherwise                  = f (n + 2)
Does f terminate? You can't know without solving the open problem of whether there are odd perfect numbers. It's the ability to create functions like these that makes the halting problem hard.
Primitive recursion as a construct doesn't let you do that; the point is to ban the "f (n + 2)" thing while still remaining as flexible as possible -- you can't primitive-recursively define f(n) in terms of f(n+1).
Note that just because a function is not primitive recursive does not mean it doesn't terminate; Ackermann's function is the canonical example.
Primitive recursive functions are exactly the recursive functions that can be implemented using only bounded do/for loops, i.e. loops whose number of iterations is fixed before the loop starts, with no unbounded looping or general recursion.
http://en.wikipedia.org/wiki/Primitive_recursive_function
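To make the bounded-loop idea concrete in Scheme (my own sketch, reusing the power2 example from above): a primitive recursive definition corresponds to a loop whose number of iterations is fixed before the loop starts.

;; power2 written with a do loop that runs exactly n times -- the bound n is
;; known up front, which is the "bounded loop" character of primitive recursion.
(define (power2-loop n)
  (do ((i 0 (+ i 1))
       (acc 1 (* 2 acc)))
      ((= i n) acc)))

;; (power2-loop 10) ; => 1024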