No idea how to solve SICP exercise 1.11 - recursion

Exercise 1.11:
A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3) if n > 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by means of an iterative process.
Implementing it recursively is simple enough. But I couldn't figure out how to do it iteratively. I tried comparing with the Fibonacci example given, but I didn't know how to use it as an analogy. So I gave up (shame on me) and Googled for an explanation, and I found this:
(define (f n)
  (if (< n 3)
      n
      (f-iter 2 1 0 n)))

(define (f-iter a b c count)
  (if (< count 3)
      a
      (f-iter (+ a (* 2 b) (* 3 c))
              a
              b
              (- count 1))))
After reading it, I understand the code and how it works. But what I don't understand is the process needed to get from the recursive definition of the function to this. I don't get how the code could have formed in someone's head.
Could you explain the thought process needed to arrive at the solution?

You need to capture the state in some accumulators and update the state at each iteration.
If you have experience in an imperative language, imagine writing a while loop and tracking information in variables during each iteration of the loop. What variables would you need? How would you update them? That's exactly what you have to do to make an iterative (tail-recursive) set of calls in Scheme.
In other words, it might help to start thinking of this as a while loop instead of a recursive definition. Eventually you'll be fluent enough with recursive -> iterative transformations that you won't need the extra help to get started.
For this particular example, you have to look closely at the three function calls, because it's not immediately clear how to represent them. However, here's the likely thought process: (in Python pseudo-code to emphasise the imperativeness)
Each recursive step keeps track of three things:
f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)
So I need three pieces of state to track the current, the last and the penultimate values of f. (that is, f(n-1), f(n-2) and f(n-3).) Call them a, b, c. I have to update these pieces inside each loop:
for _ in range(2, n):
    a, b, c = NEWVALUE, a, b
return a
So what's NEWVALUE? Well, now that we have representations of f(n-1), f(n-2) and f(n-3), it's just the recursive equation:
for _ in range(2, n):
    a, b, c = a + 2 * b + 3 * c, a, b
return a
Now all that's left is to figure out the initial values of a, b and c. But that's easy, since we know that f(n) = n if n < 3.
def f(n):
    if n < 3:
        return n
    a = 2  # f(n-1) where n = 3, i.e. f(2)
    b = 1  # f(n-2), i.e. f(1)
    c = 0  # f(n-3), i.e. f(0)
    # now start off counting at 3
    for _ in range(3, n + 1):
        a, b, c = a + 2 * b + 3 * c, a, b
    return a
That's still a little different from the Scheme iterative version, but I hope you can see the thought process now.

I think you are asking how one might discover the algorithm naturally, outside of a 'design pattern'.
It was helpful for me to look at the expansion of f(n) at each value of n:
f(0) = 0 |
f(1) = 1 | all known values
f(2) = 2 |
f(3) = f(2) + 2f(1) + 3f(0)
f(4) = f(3) + 2f(2) + 3f(1)
f(5) = f(4) + 2f(3) + 3f(2)
f(6) = f(5) + 2f(4) + 3f(3)
Looking closer at f(3), we see that we can calculate it immediately from the known values.
What do we need to calculate f(4)?
We need to at least calculate f(3) + [the rest]. But as we calculate f(3), we calculate f(2) and f(1) as well, which we happen to need for calculating [the rest] of f(4).
f(3) = f(2) + 2f(1) + 3f(0)
↘ ↘
f(4) = f(3) + 2f(2) + 3f(1)
So, for any number n, I can start by calculating f(3), and reuse the values I use to calculate f(3) to calculate f(4)...and the pattern continues...
f(3) = f(2) + 2f(1) + 3f(0)
↘ ↘
f(4) = f(3) + 2f(2) + 3f(1)
↘ ↘
f(5) = f(4) + 2f(3) + 3f(2)
Since we will reuse them, let's give them names a, b, c, subscripted with the step we are on, and walk through a calculation of f(5):
Step 1: f(3) = f(2) + 2f(1) + 3f(0), or f(3) = a1 + 2b1 + 3c1
where
a1 = f(2) = 2,
b1 = f(1) = 1,
c1 = f(0) = 0
since f(n) = n for n < 3.
Thus:
f(3) = a1 + 2b1 + 3c1 = 4
Step 2: f(4) = f(3) + 2a1 + 3b1
So:
a2 = f(3) = 4 (calculated above in step 1),
b2 = a1 = f(2) = 2,
c2 = b1 = f(1) = 1
Thus:
f(4) = 4 + 2*2 + 3*1 = 11
Step 3: f(5) = f(4) + 2a2 + 3b2
So:
a3 = f(4) = 11 (calculated above in step 2),
b3 = a2 = f(3) = 4,
c3 = b2 = f(2) = 2
Thus:
f(5) = 11 + 2*4 + 3*2 = 25
Throughout the above calculation we capture the state of the previous step and pass it to the next step, in particular:
a[step] = result of step - 1
b[step] = a[step - 1]
c[step] = b[step - 1]
Once I saw this, then coming up with the iterative version was straightforward.
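If it helps to see that mechanically, here is the same walk-through as a small Python sketch that prints each step (the name f_trace is just for illustration):

def f_trace(n):
    a, b, c = 2, 1, 0                     # f(2), f(1), f(0): the known values
    for i in range(3, n + 1):
        # compute f(i) from the three previous values, then shift them along
        a, b, c = a + 2 * b + 3 * c, a, b
        print(f"f({i}) = {a}")
    return a

f_trace(5) prints f(3) = 4, f(4) = 11, f(5) = 25, matching Steps 1-3 above.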

Since the post you linked to describes a lot about the solution, I'll try to only give complementary information.
You're trying to define a tail-recursive function in Scheme here, given a (non-tail) recursive definition.
The base case of the recursion (f(n) = n if n < 3) is handled by f before f-iter is ever called; f-iter's own (< count 3) test only decides when the iteration stops. Note that the check in f is not redundant: if the first function were simply

(define (f n)
  (f-iter 2 1 0 n))

then n = 0 and n = 1 would both return 2 instead of n.
The general form would be:
(define (f-iter ... n)
  (if (base-case? n)
      base-result
      (f-iter ...)))
Note I didn't fill in parameters for f-iter yet, because you first need to understand what state needs to be passed from one iteration to another.
Now, let's look at the dependencies of the recursive form of f(n). It references f(n - 1), f(n - 2), and f(n - 3), so we need to keep around these values. And of course we need the value of n itself, so we can stop iterating over it.
So that's how you come up with the tail-recursive call: we compute f(n) to use as f(n - 1), rotate f(n - 1) to f(n - 2) and f(n - 2) to f(n - 3), and decrement count.
If this still doesn't help, please try to ask a more specific question — it's really hard to answer when you write "I don't understand" given a relatively thorough explanation already.

I'm going to come at this from a slightly different angle than the other answers here, focusing on how coding style can make the thought process behind an algorithm like this easier to comprehend.
The trouble with Bill's approach, quoted in your question, is that it's not immediately clear what meaning is conveyed by the state variables, a, b, and c. Their names convey no information, and Bill's post does not describe any invariant or other rule that they obey. I find it easier both to formulate and to understand iterative algorithms if the state variables obey some documented rules describing their relationships to each other.
With this in mind, consider this alternative formulation of the exact same algorithm, which differs from Bill's only in having more meaningful variable names for a, b and c and an incrementing counter variable instead of a decrementing one:
(define (f n)
  (if (< n 3)
      n
      (f-iter n 2 0 1 2)))

(define (f-iter n
                i
                f-of-i-minus-2
                f-of-i-minus-1
                f-of-i)
  (if (= i n)
      f-of-i
      (f-iter n
              (+ i 1)
              f-of-i-minus-1
              f-of-i
              (+ f-of-i
                 (* 2 f-of-i-minus-1)
                 (* 3 f-of-i-minus-2)))))
Suddenly the correctness of the algorithm - and the thought process behind its creation - is simple to see and describe. To calculate f(n):
We have a counter variable i that starts at 2 and climbs to n, incrementing by 1 on each call to f-iter.
At each step along the way, we keep track of f(i), f(i-1) and f(i-2), which is sufficient to allow us to calculate f(i+1).
Once i=n, we are done.

What did help me was running the process manually with a pencil, using the hint the author gave for the Fibonacci example:
a <- a + b
b <- a
Translating this to the new problem gives the way you push state forward in the process:
a <- a + (b * 2) + (c * 3)
b <- a
c <- b
So you need a function that accepts three variables a, b, c, and it needs to call itself using the process above.
(define (f-iter a b c)
  (f-iter (+ a (* b 2) (* c 3)) a b))
If you run and print each variable for each iteration starting with (f-iter 1 0 0), you'll get something like this (it will run forever of course):
a b c
=========
1 0 0
1 1 0
3 1 1
8 3 1
17 8 3
42 17 8
100 42 17
235 100 42
...
Can you see the answer? You get it by summing columns b and c for each iteration. I must admit I found it by doing some trial and error. Only thing left is having a counter to know when to stop; here is the whole thing:
(define (f n)
  (f-iter 1 0 0 n))

(define (f-iter a b c count)
  (if (= count 0)
      (+ b c)
      (f-iter (+ a (* b 2) (* c 3)) a b (- count 1))))

A function f is defined by the rule that f(n) = n, if n<3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3), if n > 3. Write a procedure that computes f by means of a recursive process.
It is already written:
f(n) = n, (* if *) n < 3
= f(n - 1) + 2f(n - 2) + 3f(n - 3), (* if *) n > 3
Believe it or not, there was once such a language. To write this down in another language is just a matter of syntax. And by the way, the definition as you (mis)quote it has a bug, which is now very apparent and clear.
Write a procedure that computes f by means of an iterative process.
Iteration means going forward (there's your explanation!) as opposed to the recursion's going backwards at first, to the very lowest level, and then going forward calculating the result on the way back up:
f(0) = 0
f(1) = 1
f(2) = 2
f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)
= a + 2b + 3c
f(n+1) = f(n) + 2f(n - 1) + 3f(n - 2)
= a' + 2b' + 3c' where
a' = f(n) = a+2b+3c,
b' = f(n-1) = a,
c' = f(n-2) = b
......
This thus describes the problem's state transitions as
(n, a, b, c) -> (n+1, a+2*b+3*c, a, b)
We could code it as
g (n, a, b, c) = g (n+1, a+2*b+3*c, a, b)
but of course it wouldn't ever stop. So we must instead have
f n = g (2, 2, 1, 0)
where
g (k, a, b, c) = g (k+1, a+2*b+3*c, a, b), (* if *) k < n
g (k, a, b, c) = a, otherwise
and this is already exactly like the code you asked about, up to syntax.
Counting up to n is more natural here, following our paradigm of "going forward", but counting down to 0 as the code you quote does is of course entirely equivalent.
The corner cases and possible off-by-one errors are left out as uninteresting technicalities (an exercise for the reader).
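For comparison, here is a small Python sketch of exactly those state transitions, counting up from k = 2 (with the n < 3 corner case included this time):

def f(n):
    if n < 3:
        return n
    k, a, b, c = 2, 2, 1, 0              # (k, f(k), f(k-1), f(k-2))
    while k < n:
        # (k, a, b, c) -> (k+1, a + 2b + 3c, a, b)
        k, a, b, c = k + 1, a + 2 * b + 3 * c, a, b
    return a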

Related

several questions about this sml recursion function

When f(x-1) is called, is it calling f(x) = x+10 or f(x) = if ...
Is this a tail recursion?
How should I rewrite it using static / dynamic allocation?
let fun f(x) = x + 10
in
    let fun f(x) = if x < 1 then 0 else f(x-1)
    in f(3)
    end
end
Before addressing your questions, here are some observations about your code:
There are two functions f, one inside the other. They're different from one another.
To lessen this confusion you can rename the inner function to g:
let fun f(x) = x + 10
in
    let fun g(x) = if x < 1 then 0 else g(x-1)
    in g(3)
    end
end
This clears up which function calls which, by the following rules: the outer f is in scope inside the outer in-end, but is immediately shadowed by the inner f. Any reference to f on the right-hand side of the inner fun f(x) = if ... refers to the inner f itself, because fun enables self-recursion. And any reference to f within the inner in-end is likewise shadowed, so it also refers to the inner f.
In the following tangential example, the right-hand side of the inner declaration of f does not shadow the outer f, because it uses val rather than fun:
let fun f(x) = if (x mod 2 = 0) then x - 10 else x + 10
in
    let val f = fn x => f(x + 2) * 2
    in f(3)
    end
end
If the inner f is renamed to g in this second piece of code, it'd look like:
let fun f(x) = if (x mod 2 = 0) then x - 10 else x + 10
in
    let val g = fn x => f(x + 2) * 2
    in g(3)
    end
end
The important bit is that the f(x + 2) part was not rewritten into g(x + 2): with val, references to f mean the outer f, not the f being defined, because a val is not a self-recursive definition. Any reference to f within that definition therefore has to rely on an f being available in the outer scope.
But the g(3) bit is rewritten, because between in and end the inner f (now g) is the one that shadows. So whether it's a fun or a val does not matter with respect to the shadowing inside let-in-end.
(There are some more details wrt. val rec and the exact scope of a let val f = ... that I haven't elaborated on.)
As for your questions,
You should be able to answer this now. A nice way to provide the answer is to 1) rename the inner function for clarity, and 2) evaluate the code by hand using substitution (one rewrite per line, with ~> denoting a rewrite; I don't mean an SML operator here).
Here's an example of how it'd look with my second example (not your code):
g(3)
~> (fn x => f(x + 2) * 2)(3)
~> f(3 + 2) * 2
~> f(5) * 2
~> (if (5 mod 2 = 0) then 5 - 10 else 5 + 10) * 2
~> (if (1 = 0) then 5 - 10 else 5 + 10) * 2
~> (5 + 10) * 2
~> 15 * 2
~> 30
Your evaluation by hand would look different and possibly conclude differently.
What is tail recursion? Provide a definition and ask if your code satisfies that definition.
I'm not sure what you mean by rewriting it using static / dynamic allocation. You'll have to elaborate.

last digit of a^b^c

I've got stuck on this problem:
"Given three natural numbers a, b and c (such that 1 <= a, b, c <= 10^9), you are supposed to find the last digit of the number a^b^c."
What I first thought of was the O(log n) algorithm for raising a to the power n.
int acc = 1; // accumulator
while (n > 0) {
    if (n % 2 == 1)
        acc *= a;
    a = a * a;
    n /= 2;
}
Obviously, some basic math might help, like the "last digit" stuff:
Last_digit(2^n) = Last_digit(2^(n%4))
Where n%4 is the remainder of the division n/4
In a nutshell, I've tried to combine these, but I couldn't find the right way.
Some help would really be appreciated.
The problem is that b^c may be very large. So you want to reduce it before using the standard modular exponentiation.
You can remark that a^(b^c) MOD 10 can have a maximum of 10 different values.
Because of the pigeonhole principle, there will be a number p such that for some r:
a^r MOD 10 = a^(p+r) MOD 10
p <= 10
r <= 10
This implies that for any q:
a^r MOD 10 = a^r*a^p MOD 10
= (a^r*a^p)*a^p MOD 10
= ...
= a^(r+q*p) MOD 10
For any n = s+r+q*p, with s < p you have:
a^n MOD 10 = a^s*a^(r+q*p) MOD 10
= a^s*a^r MOD 10
= a^((n-r) MOD p)*a^r MOD 10
You can just replace n = b^c in the previous equation.
You will then only need to compute (b^c - r) MOD p where p <= 10, which is easily done, and then compute a^((b^c - r) MOD p) * a^r MOD 10.
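Here is one way that idea can look in Python. It is a sketch rather than a faithful transcription of the p and r bookkeeping above: it leans on the stronger fact that the last digits of a^1, a^2, a^3, ... repeat from the very first term (cycle length at most 4), so only the cycle length is needed:

def last_digit(a, b, c):
    # collect the cycle of last digits of a^1, a^2, a^3, ...
    digits = [a % 10]
    while (digits[-1] * a) % 10 != digits[0]:
        digits.append((digits[-1] * a) % 10)
    p = len(digits)
    # reduce the huge exponent b^c modulo the cycle length without computing it;
    # since b^c >= 1, index the cycle with (b^c - 1) mod p
    return digits[(pow(b, c, p) - 1) % p]

For example, last_digit(2, 3, 3) computes the last digit of 2^27 and returns 8.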
Like I mentioned in my comments, this really doesn't have much to do with smart algorithms. The problem can be reduced completely using some elementary number theory. This will yield an O(1) algorithm.
The Chinese remainder theorem says that if we know some number x modulo 2 and modulo 5, we know it modulo 10. So finding a^b^c modulo 10 can be reduced to finding a^b^c modulo 2 and a^b^c modulo 5. Fermat's little theorem says that for any prime p, if p does not divide a, then a^(p-1) = 1 (mod p), so a^n = a^(n mod (p-1)) (mod p). If p does divide a, then obviously a^n = 0 (mod p) for any n > 0. Note that x^n = x (mod 2) for any n>0, so a^b^c = a (mod 2).
What remains is to find a^b^c mod 5, which reduces to finding b^c mod 4. Unfortunately, we can use neither the Chinese remainder theorem, nor Fermat's little theorem here. However, mod 4 there are only 4 possibilities for b, so we can check them separately. If we start with b = 0 (mod 4) or b = 1 (mod 4), then of course b^c = b (mod 4). If we have b = 2 (mod 4) then it is easily seen that b^c = 2 (mod 4) if c = 1, and b^c = 0 (mod 4) if c > 1. If b = 3 (mod 4) then b^c = 3 if c is even, and b^c = 1 if c is odd. This gives us b^c (mod 4) for any b and c, which then gives us a^b^c (mod 5), all in constant time.
Finally with a^b^c = a (mod 2) we can use the Chinese remainder theorem to find a^b^c (mod 10). This requires a mapping between (x (mod 2), y (mod 5)) and z (mod 10). The Chinese remainder theorem only tells us that this mapping is bijective, it doesn't tell us how to find it. However, there are only 10 options, so this is easily done on a piece of paper or using a little program. Once we find this mapping we simply store it in an array, and we can do the entire calculation in O(1).
By the way, this would be the implementation of my algorithm in python:
# this table only needs to be calculated once
# can also be hard-coded
mod2mod5_to_mod10 = [[0 for i in range(5)] for j in range(2)]
for i in range(10):
    mod2mod5_to_mod10[i % 2][i % 5] = i

[a, b, c] = [int(input()) for i in range(3)]

if a % 5 == 0:
    abcmod5 = 0
else:
    bmod4 = b % 4
    if bmod4 == 0 or bmod4 == 1:
        bcmod4 = bmod4
    elif bmod4 == 2:
        if c == 1:
            bcmod4 = 2
        else:
            bcmod4 = 0
    else:
        if c % 2 == 0:
            bcmod4 = 1
        else:
            bcmod4 = 3
    abcmod5 = ((a % 5)**bcmod4) % 5
abcmod2 = a % 2
abcmod10 = mod2mod5_to_mod10[abcmod2][abcmod5]
print(abcmod10)

equivalent expressions

I'm trying to figure out equivalent expressions for the following equations using bitwise, addition, and/or subtraction operators. I know there's supposed to be an answer (which furthermore generalizes to work for any modulus 2^a-1, where a is a power of 2), but for some reason I can't seem to figure out what the relation is.
Initial expressions:
x = n % (2^32-1);
c = (int)n / (2^32-1); // ints are 32-bit, but x, c, and n may have a greater number of bits
My procedure for the first expression was to take the modulo of 2^32, then try to make up the difference between the two modulo results. I'm having trouble with this second part.
x = n & 0xFFFFFFFF + difference // how do I calculate difference?
I know that the difference n%(2^32) - n%(2^32-1) is periodic (with a period of 2^32*(2^32-1)), and there's a "spike up" starting at multiples of 2^32-1 and ending at 2^32. After each 2^32 multiple, the difference plot decreases by 1 (hopefully my descriptions make sense).
Similarly, the second expression could be calculated in a similar fashion:
c = n >> 32 + makeup // how do I calculate makeup?
I think makeup steadily increases by 1 at multiples of 2^32-1 (and decreases by 1 at multiples of 2^32), though I'm having trouble expressing this idea in terms of the available operators.
You can use these identities:
n mod (x - 1) = (((n div x) mod (x - 1)) + ((n mod x) mod (x - 1))) mod (x - 1)
n div (x - 1) = (n div x) + (((n div x) + (n mod x)) div (x - 1))
The first comes from (ab + c) mod d = ((a mod d)(b mod d) + (c mod d)) mod d.
The second comes from expanding n = ax + b = a(x-1) + (a + b) and dividing by x-1.
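Both identities are easy to sanity-check numerically; here is a quick randomized Python check with x = 2^32:

import random

x = 1 << 32
for _ in range(10000):
    n = random.randrange(1 << 80)
    # n mod (x - 1) via the first identity
    assert n % (x - 1) == (((n // x) % (x - 1)) + ((n % x) % (x - 1))) % (x - 1)
    # n div (x - 1) via the second identity
    assert n // (x - 1) == (n // x) + (((n // x) + (n % x)) // (x - 1))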
I think I've figured out the answer to my question:
Compute c first, then use the result to compute x. This assumes that the comparison returns 1 for true, 0 for false. Also, the shifts are all logical shifts.
c = (n>>32) + ((n & 0xFFFFFFFF) >= (0xFFFFFFFF - (n>>32)))
x = (0xFFFFFFFE - (n & 0xFFFFFFFF) - ((c - (n>>32))<<32)-c) & 0xFFFFFFFF
edit: changed x (only need to keep lower 32 bits, rest is "junk")

How to calculate the explicit form of a recursive function?

I have this recursive function:
f(n) = 2 * f(n-1) + 3 * f(n-2) + 4
f(1) = 2
f(2) = 8
I know from experience that the explicit form of it would be:
f(n) = 3 ^ n - 1 // pow(3, n) - 1
I want to know if there's any way to prove that. I googled a bit, yet didn't find anything simple to understand. I already know that generating functions can probably solve it, but they're too complex and I'd rather not get into them. I'm looking for a simpler way.
P.S.
If it helps I remember something like this solved it:
f(n) = 2 * f(n-1) + 3 * f(n-2) + 4
// consider f(n) = x ^ n
x ^ n = 2 * x ^ (n-1) + 3 * x ^ (n-2) + 4
And then you somehow computed an x that led to the explicit form of the recursive formula, but I can't quite remember how.
f(n) = 2 * f(n-1) + 3 * f(n-2) + 4
f(n+1) = 2 * f(n) + 3 * f(n-1) + 4
f(n+1)-f(n) = 2 * f(n) - 2 * f(n-1) + 3 * f(n-1) - 3 * f(n-2)
f(n+1) = 3 * f(n) + f(n-1) - 3 * f(n-2)
Now the 4 is gone.
As you said the next step is letting f(n) = x ^ n
x^(n+1) = 3 * x^n + x^(n-1) - 3 * x^(n-2)
divide by x^(n-2)
x^3 = 3 * x^2 + x - 3
x^3 - 3 * x^2 - x + 3 = 0
factorise to find x
(x-3)(x-1)(x+1) = 0
x = -1 or 1 or 3
f(n) = A * (-1)^n + B * 1^n + C * 3^n
f(n) = A * (-1)^n + B + C * 3^n
Now find A,B and C using the values you have
f(1) = 2; f(2) = 8; f(3) = 26
f(1) = 2 = -A + B + 3C
f(2) = 8 = A + B + 9C
f(3) = 26 = -A + B + 27C
solving for A,B and C:
f(3)-f(1) = 24 = 24C => C = 1
f(2)-f(1) = 6 = 2A + 6 => A = 0
2 = B + 3 => B = -1
Finally
f(n) = 3^n - 1
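It is also easy to spot-check that closed form numerically, for instance in Python:

from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    if n == 1:
        return 2
    if n == 2:
        return 8
    return 2 * f(n - 1) + 3 * f(n - 2) + 4

# the recurrence agrees with the claimed closed form 3^n - 1
for n in range(1, 20):
    assert f(n) == 3 ** n - 1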
Ok, I know you didn't want generating functions (GF from now on) and all the complicated stuff, but my problem turned out to be nonlinear and simple linear methods didn't seem to work. So after a full day of searching, I found the answer and hopefully these findings will be of help to others.
My problem: a[n+1]= a[n]/(1+a[n]) (i.e. not linear (nor polynomial), but also not completely nonlinear - it is a rational difference equation)
1. if your recurrence is linear (or polynomial), wikihow has step-by-step instructions (with and without GF)
2. if you want to read something about GF, go to this wiki, but I didn't get it till I started doing examples (see next)
3. GF usage example on Fibonacci
4. if the previous example didn't make sense, download the GF book and read the simplest GF example (section 1.1, i.e. a[n+1] = 2 a[n] + 1, then 1.2, then 1.3 - Fibonacci)
5. (while I'm on the book topic) templatetypedef mentioned Concrete Mathematics, download here, but I don't know much about it except that it has chapters on recurrences, sums, and GFs (among others) and a table of simple GFs on page 335
6. as I dove deeper into nonlinear stuff, I saw this page; using it I failed at the z-transform approach and didn't try linear algebra, but its link to rational difference equations was the best (see the next step)
7. so, as per this page, rational functions are nice because you can transform them into polynomials and use the linear methods of steps 1, 3 and 4 above, which I wrote out by hand and probably made some mistake, because (see 8)
8. Mathematica (or even the free WolframAlpha) has a recurrence solver, which with RSolve[{a[n + 1] == a[n]/(1 + a[n]), a[1] == A}, a[n], n] got me a simple {{a[n] -> A/(1 - A + A n)}}. So I guess I'll go back and look for the mistake in my hand calculations (they are good for understanding how the whole conversion process works).
Anyways, hope this helps.
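The RSolve result can also be double-checked without Mathematica; exact rational arithmetic in Python confirms it for a sample starting value A:

from fractions import Fraction

A = Fraction(3, 7)                      # arbitrary starting value a[1] = A
a = A
for n in range(1, 50):
    assert a == A / (1 - A + A * n)     # the closed form from RSolve
    a = a / (1 + a)                     # the recurrence a[n+1] = a[n] / (1 + a[n])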
In general, there is no algorithm for converting a recursive definition into a closed form; the problem is undecidable. As an example, consider this recursive function definition, which is based on the Collatz sequence:
f(1) = 0
f(2n) = 1 + f(n)
f(2n + 1) = 1 + f(6n + 4)
It's not known whether this is even a well-defined function. Were an algorithm to exist that could convert it into a closed form, we could decide whether or not it was well-defined.
However, for many common cases, it is possible to convert a recursive definition into a closed form. The excellent textbook Concrete Mathematics devotes many of its pages to showing how to do this. One common technique that works quite well when you already have a guess at the answer is induction. As an example for your case, suppose that you believe that your recursive definition does indeed give 3^n - 1. To prove this, try proving that it holds for the base cases, then show that this knowledge lets you generalize the solution upward. You didn't put a base case in your post, but I'm assuming that
f(0) = 0
f(1) = 2
Given this, let's see whether your hunch is correct. For the specific inputs of 0 and 1, you can verify by inspection that the function does compute 3^n - 1. For the inductive step, let's assume that f(n') = 3^n' - 1 for all n' < n. Then we have that
f(n) = 2f(n - 1) + 3f(n - 2) + 4
= 2 * (3^{n-1} - 1) + 3 * (3^{n-2} - 1) + 4
= 2 * 3^{n-1} - 2 + 3^{n-1} - 3 + 4
= 3 * 3^{n-1} - 5 + 4
= 3^n - 1
So we have just proven that this recursive function does indeed produce 3^n - 1.

Can someone explain Mathematical Induction (to prove a recursive method)

Can someone explain mathematical induction to prove a recursive method? I am a freshmen computer science student and I have not yet taken Calculus (I have had up through Trig). I kind of understand it but I have trouble when asked to write out an induction proof for a recursive method.
Here is an explanation by example:
Let's say you have the following formula that you want to prove:
sum(i | i <- [1, n]) = n * (n + 1) / 2
This formula provides a closed form for the sum of all integers between 1 and n.
We will start by proving the formula for the simple base case of n = 1. In this case, both sides of the formula reduce to 1. This in turn means that the formula holds for n = 1.
Next, we will prove that if the formula holds for a value n, then it holds for the next value of n (or n + 1). In other words, if the following is true:
sum(i | i <- [1, n]) = n * (n + 1) / 2
Then the following is also true:
sum(i | i <- [1, n + 1]) = (n + 1) * (n + 2) / 2
To do so, let's start with the first side of the last formula:
s1 = sum(i | i <- [1, n + 1]) = sum(i | i <- [1, n]) + (n + 1)
That is, the sum of all integers between 1 and n + 1 is equal to the sum of integers between 1 and n, plus the last term n + 1.
Since we are basing this proof on the condition that the formula holds for n, we can write:
s1 = n * (n + 1) / 2 + (n + 1) = (n + 1) * (n + 2) / 2 = s2
As you can see, we have arrived at the second side of the formula we are trying to prove, which means that the formula does indeed hold.
This finishes the inductive proof, but what does it actually mean?
The formula is correct for n = 1.
If the formula is correct for n, then it is correct for n + 1.
From 1 and 2, we can say: since the formula is correct for n = 1, it is correct for 1 + 1 = 2.
We can repeat this process again: the case of n = 2 is correct, so the case of n = 3 is correct. This reasoning can go on ad infinitum; the formula is correct for all integer values of n >= 1.
induction != Calc!!!
I can get N guys drunk with 10*N beers.
Base Case: 1 guy
I can get one guy drunk with 10 beers
Inductive step, given p(n) prove p(n + 1)
I can get i guys drunk with 10 * i beers, if I add another guy, I can get him drunk with 10 more beers. Therefore, I can get i + 1 guys drunk with 10 * (i + 1) beers.
p(1) -> p(i + 1) -> p(i + 2) ... p(inf)
Discrete Math is easy!
First, you need a base case. Then you need an inductive step that holds for some step n. In your inductive step, you will need an inductive hypothesis: the assumption that the statement holds for step n. Finally, use that assumption to prove it for step n + 1.
