So I've encountered a case where I have two recursive calls rather than one. I know how to solve for one recursive call, but in this case I'm not sure whether I'm right or wrong.
I have the following problem:
T(n) = T(2n/5) + T(3n/5) + n
And I need to find the worst-case complexity for this.
(FYI - It's some kind of augmented merge sort)
My feeling was to use the first case of the Master Theorem, but I feel something is wrong with my idea. Any explanation of how to solve problems like this would be appreciated :)
The recursion tree for the given recurrence will look like this:
Size Cost
n n
/ \
2n/5 3n/5 n
/ \ / \
4n/25 6n/25 6n/25 9n/25 n
and so on, until the size of the input becomes 1.
The longest simple path from the root to a leaf is n -> (3/5)n -> (3/5)^2 n -> ... down to 1.
Therefore, if we let the height of the tree be k, then
((3/5)^k) * n = 1, meaning k = log base 5/3 of n
In the worst case we expect every level to contribute a cost of n, and hence
Total cost = n * (log base 5/3 of n)
However, we must keep one thing in mind: our tree is not complete, and therefore
some levels near the bottom are only partially full.
But in asymptotic analysis we ignore such intricate details.
Hence the worst-case cost = n * (log base 5/3 of n),
which is O( n * log n )
Now, let us verify this using the substitution method:
T(n) = O( n * log n ) iff T(n) <= d*n*log(n) for some d > 0
Assuming this to be true:
T(n) = T(2n/5) + T(3n/5) + n
<= d(2n/5)log(2n/5) + d(3n/5)log(3n/5) + n
= d(2n/5)(log n - log 5/2) + d(3n/5)(log n - log 5/3) + n
= dnlog n - d(2n/5)log 5/2 - d(3n/5)log 5/3 + n
= dnlog n - dn( 2/5(log 5/2) + 3/5(log 5/3) ) + n
<= dnlog n
as long as d >= 1/( 2/5(log 5/2) + 3/5(log 5/3) )
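As an extra sanity check, here is a quick numerical experiment (my own sketch, not part of the proof; it assumes floor division for the subproblem sizes and a base case T(n) = 1 for n <= 1, neither of which affects the asymptotics):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(2n/5) + T(3n/5) + n, with an assumed base case T(n) = 1 for n <= 1
    if n <= 1:
        return 1
    return T(2 * n // 5) + T(3 * n // 5) + n

# The ratio T(n) / (n * log_{5/3} n) should stay bounded as n grows
for n in [10**3, 10**4, 10**5, 10**6]:
    bound = n * math.log(n, 5 / 3)
    print(n, T(n), round(T(n) / bound, 3))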
I can't seem to find any kind of answer to this, but if I have an expression like the square root of (x^2 - 4n), where 4n is a constant, how can I choose x so the expression yields a whole number?
I know setting x to n+1 works, but I'm looking for an algorithm that would generate all solutions.
So, the problem is to find all pairs of integers (x, m) such that:
sqrt(x^2 - 4n) = m
We have:
x^2 - 4n = m^2
or
x^2 - m^2 = 4n
so
(x + m)(x - m) = 4n
Now, 2 divides 4n and so it must divide (x+m) or (x-m). But since (x+m) + (x-m) = 2x is even, the two factors have the same parity, so if 2 divides one of them it divides the other too. Thus a := (x+m)/2 and b := (x-m)/2 are both integers. Therefore
a*b = n
So, it is just a matter of factoring n as a*b in all possible ways and recovering x and m from the equations above:
x = a + b.
m = a - b.
Your solution x = n+1 corresponds to the trivial factorization n = n*1 where a=n and b=1.
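For example, take n = 6, so 4n = 24: the factorization 6 = 6*1 gives x = 7, m = 5 (indeed 7^2 - 24 = 25 = 5^2), and 6 = 3*2 gives x = 5, m = 1 (indeed 5^2 - 24 = 1).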
UPDATE
Here is an algorithm that prints all pairs (x, m):
Step 1. [Initialize] a := n.
Step 2. [Check] if n % a = 0 then
            b := n / a.
            print(a + b), print(a - b)
Step 3. [Decrement] a := a - 1.
Step 4. [End?] if a * a >= n go to Step 2.
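In case it helps, here is a direct Python transcription of those steps (my code; the loop continues while a * a >= n so that a perfect-square n, where m = 0, is included):

def print_solutions(n):
    # Print all pairs (x, m) with x^2 - 4n = m^2,
    # one per factorization n = a*b with a >= b
    a = n
    while a * a >= n:
        if n % a == 0:
            b = n // a
            print(a + b, a - b)   # x = a + b, m = a - b
        a -= 1

print_solutions(6)   # prints: 7 5, then 5 1

Since only the divisors of n matter, you could equally iterate b from 1 up to sqrt(n) and set a = n // b, which takes O(sqrt(n)) steps instead of O(n).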
In our Data Structures class we are learning how to solve recurrence relations in 1 variable. Unfortunately some things seem to come "out of the blue".
For example, some exercises already tell you how to substitute the variable n:
Compute T(n) for n = 2^k
T(n) = a for n <= 2
T(n) = 8T(n/2) + bn^2 (a and b are > 0)
But some exercises just give you the T(n) without providing a replacement for the variable n:
T(n) = 1 for n <= 1
T(n) = 2T(n/4) + sqrt(n)
I used the iterative method and arrived at the right answer: sqrt(n) + (1/2) * sqrt(n) * log(n).
But when the professor explained it, she started by saying: "Let n = 4^k", which is what I mean by "out of the blue". Using that fact, the answer is simpler to obtain.
But how is the student supposed to come up with that?
This is another example:
T(n) = 1 for n <= 1
T(n) = 2T( (n-1)/2 ) + n
Here I started again with the iterative method, but I can't reach a definitive answer; it looks more complex that way.
After 3 iterative steps I arrived at this:
T(n) = 4T( (n-2)/4 ) + 2n - 1
T(n) = 8T( (n-3)/8 ) + 3n - 3
T(n) = 16T( (n-4)/16 ) + 4n - 6
I am inclined to say T(n) = 2^i * T( (n-i)/2^i ) + i*n - ?. This last part I can't figure out; maybe I made a mistake.
However in the answer she provides she starts again with another substitution: Let n = (2^k) -1. I don’t see where this comes from - why would I do this? What is the logic behind that?
In all of these cases, these substitutions are reasonable because they rewrite the recurrence as one of the form S(k) = aS(k - 1) + f(k). These recurrences are often easier to solve than other recurrences because they define S(k) purely in terms of S(k - 1).
Let's do some examples to see how this works. Consider this recurrence:
T(n) = 1 (if n ≤ 1)
T(n) = 2T(n/4) + sqrt(n) (otherwise)
Here, the size of the problem shrinks by a factor of four on each iteration. Therefore, if the input is a perfect power of four, then the input will shrink from size 4^k to 4^(k-1), from 4^(k-1) to 4^(k-2), etc. until the recursion bottoms out. If we make this substitution and let S(k) = T(4^k), then we get that
S(0) = 1
S(k) = 2S(k - 1) + 2^k
This is now a recurrence relation where S(k) is defined in terms of S(k - 1), which can make the recurrence easier to solve.
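For what it's worth, this one can now be solved outright (assuming the base case S(0) = 1 from above). Dividing both sides by 2^k gives
S(k)/2^k = S(k-1)/2^(k-1) + 1
so S(k)/2^k = S(0) + k, i.e. S(k) = (k + 1) * 2^k. Substituting back k = log_4 n (so that 2^k = sqrt(n)) gives
T(n) = (log_4 n + 1) * sqrt(n) = sqrt(n) + (1/2) * sqrt(n) * log n
which matches the answer you got with the iterative method.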
Let's look at your original recurrence:
T(n) = a (for n ≤ 2)
T(n) = 8T(n/2) + bn^2
Notice that the recursive step divides n by two. If n is a perfect power of two, then the recursive step considers the power of two that comes right before n. Letting S(k) = T(2^k) gives
S(k) = a (for k ≤ 1)
S(k) = 8S(k - 1) + b * 2^(2k)
Notice how S(k) is defined in terms of S(k - 1), which is a much easier recurrence to solve. The choice of powers of two was "natural" here because it made the recursive step talk purely about the previous value of S and not some arbitrarily smaller value of S.
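And again the payoff is that the new recurrence can be solved directly. Writing the driving term as b * 4^k, the homogeneous solution is C * 8^k and a particular solution is -b * 4^k (check: 8 * (-b * 4^(k-1)) + b * 4^k = -2b * 4^k + b * 4^k = -b * 4^k). Fitting S(1) = a gives C = (a + 4b)/8, so
S(k) = ((a + 4b)/8) * 8^k - b * 4^k
and since n = 2^k means 8^k = n^3 and 4^k = n^2,
T(n) = ((a + 4b)/8) * n^3 - b * n^2 = Θ(n^3).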
Now, look at the last recurrence:
T(n) = 1 (n ≤ 1)
T(n) = 2T( (n-1)/2 ) + n
We'd like to make some substitution n = f(k) such that T(f(k)) is defined in terms of T(f(k - 1)). The question is how to do that.
With some trial and error, we get that setting f(k) = 2^k - 1 fits the bill, since
(f(k) - 1) / 2 = ((2^k - 1) - 1) / 2 = (2^k - 2) / 2 = 2^(k-1) - 1 = f(k - 1)
Therefore, letting n = 2^k - 1 and setting S(k) = T(2^k - 1), we get
S(k) = 1 (if k ≤ 1)
S(k) = 2S(k - 1) + 2^k - 1
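If you want to see the substitution in action, here is a small sketch (mine) checking that S and T agree:

def T(n):
    # T(n) = 2T((n - 1)/2) + n, with T(n) = 1 for n <= 1
    return 1 if n <= 1 else 2 * T((n - 1) // 2) + n

def S(k):
    # S(k) = 2S(k - 1) + 2^k - 1, with S(k) = 1 for k <= 1
    return 1 if k <= 1 else 2 * S(k - 1) + 2**k - 1

# For n = 2^k - 1 the division (n - 1)/2 is exact, and S(k) = T(2^k - 1)
for k in range(1, 12):
    assert S(k) == T(2**k - 1)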
Hope this helps!
I'm trying to figure out equivalent expressions for the following equations using bitwise, addition, and/or subtraction operators. I know there's supposed to be an answer (which furthermore generalizes to work for any modulus 2^a - 1, where a is a power of 2), but for some reason I can't seem to figure out what the relation is.
Initial expressions:
x = n % (2^32-1);
c = (int)n / (2^32-1); // ints are 32-bit, but x, c, and n may have a greater number of bits
My procedure for the first expression was to take n modulo 2^32, then try to make up the difference between the two moduli. I'm having trouble with this second part.
x = (n & 0xFFFFFFFF) + difference // how do I calculate difference?
I know that the difference n%(2^32) - n%(2^32-1) is periodic (with a period of 2^32*(2^32-1)), and there's a "spike up" starting at multiples of 2^32-1 and ending at 2^32. After each multiple of 2^32, the difference plot decreases by 1 (hopefully my descriptions make sense).
The second expression could be calculated in a similar fashion:
c = (n >> 32) + makeup // how do I calculate makeup?
I think makeup steadily increases by 1 at multiples of 2^32-1 (and decreases by 1 at multiples of 2^32), though I'm having trouble expressing this idea in terms of the available operators.
You can use these identities:
n mod (x - 1) = (((n div x) mod (x - 1)) + ((n mod x) mod (x - 1))) mod (x - 1)
n div (x - 1) = (n div x) + (((n div x) + (n mod x)) div (x - 1))
The first comes from (ab + c) mod d = ((a mod d)(b mod d) + (c mod d)) mod d, with a = n div x, b = x, and c = n mod x.
The second comes from expanding n = ax + b = a(x - 1) + (a + b) and dividing by x - 1.
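Both identities are easy to spot-check with Python's arbitrary-precision integers and x = 2**32 (my snippet, not part of the original answer):

import random

x = 2**32
for _ in range(10000):
    n = random.randrange(1 << 70)   # n may be wider than 32 bits
    q, r = n // x, n % x            # n div x and n mod x
    assert n % (x - 1) == (q % (x - 1) + r % (x - 1)) % (x - 1)
    assert n // (x - 1) == q + (q + r) // (x - 1)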
I think I've figured out the answer to my question:
Compute c first, then use the results to compute x. Assumes that the comparison returns 1 for true, 0 for false. Also, the shifts are all logical shifts.
c = (n>>32) + ((n & 0xFFFFFFFF) >= (0xFFFFFFFF - (n>>32)))
x = (0xFFFFFFFE - (n & 0xFFFFFFFF) - ((c - (n>>32))<<32)-c) & 0xFFFFFFFF
edit: changed x (only need to keep lower 32 bits, rest is "junk")
Exercise 1.11:
A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3) if n > 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by means of an iterative process.
Implementing it recursively is simple enough. But I couldn't figure out how to do it iteratively. I tried comparing with the Fibonacci example given, but I didn't know how to use it as an analogy. So I gave up (shame on me) and Googled for an explanation, and I found this:
(define (f n)
(if (< n 3)
n
(f-iter 2 1 0 n)))
(define (f-iter a b c count)
(if (< count 3)
a
(f-iter (+ a (* 2 b) (* 3 c))
a
b
(- count 1))))
After reading it, I understand the code and how it works. But what I don't understand is the process needed to get from the recursive definition of the function to this. I don't get how the code could have formed in someone's head.
Could you explain the thought process needed to arrive at the solution?
You need to capture the state in some accumulators and update the state at each iteration.
If you have experience in an imperative language, imagine writing a while loop and tracking information in variables during each iteration of the loop. What variables would you need? How would you update them? That's exactly what you have to do to make an iterative (tail-recursive) set of calls in Scheme.
In other words, it might help to start thinking of this as a while loop instead of a recursive definition. Eventually you'll be fluent enough with recursive -> iterative transformations that you won't need the extra help to get started.
For this particular example, you have to look closely at the three function calls, because it's not immediately clear how to represent them. However, here's the likely thought process: (in Python pseudo-code to emphasise the imperativeness)
Each recursive step keeps track of three things:
f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)
So I need three pieces of state to track the current, the last and the penultimate values of f (that is, f(n-1), f(n-2) and f(n-3)). Call them a, b, c. I have to update these pieces inside each loop:
for _ in range(2, n):
    a, b, c = NEWVALUE, a, b
return a
Note the simultaneous assignment: b and c must receive the old values of a and b, so the updates cannot be performed one at a time in this order.
So what's NEWVALUE? Well, now that we have representations of f(n-1), f(n-2) and f(n-3), it's just the recursive equation:
for _ in range(2, n):
    a, b, c = a + 2 * b + 3 * c, a, b
return a
Now all that's left is to figure out the initial values of a, b and c. But that's easy, since we know that f(n) = n if n < 3.
def f(n):
    if n < 3:
        return n
    a = 2  # f(n-1) where n = 3
    b = 1  # f(n-2)
    c = 0  # f(n-3)
    # now start off counting at 3
    for _ in range(3, n + 1):
        a, b, c = a + 2 * b + 3 * c, a, b
    return a
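A quick check against the recursive definition:

print(f(3), f(4), f(5))  # 4 11 25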
That's still a little different from the Scheme iterative version, but I hope you can see the thought process now.
I think you are asking how one might discover the algorithm naturally, outside of a 'design pattern'.
It was helpful for me to look at the expansion of f(n) at each value of n:
f(0) = 0 |
f(1) = 1 | all known values
f(2) = 2 |
f(3) = f(2) + 2f(1) + 3f(0)
f(4) = f(3) + 2f(2) + 3f(1)
f(5) = f(4) + 2f(3) + 3f(2)
f(6) = f(5) + 2f(4) + 3f(3)
Looking closer at f(3), we see that we can calculate it immediately from the known values.
What do we need to calculate f(4)?
We need to at least calculate f(3) + [the rest]. But in calculating f(3) we already have f(2) and f(1) in hand, which we happen to need for calculating [the rest] of f(4).
f(3) = f(2) + 2f(1) + 3f(0)
↘ ↘
f(4) = f(3) + 2f(2) + 3f(1)
So, for any number n, I can start by calculating f(3), and reuse the values I use to calculate f(3) to calculate f(4)...and the pattern continues...
f(3) = f(2) + 2f(1) + 3f(0)
↘ ↘
f(4) = f(3) + 2f(2) + 3f(1)
↘ ↘
f(5) = f(4) + 2f(3) + 3f(2)
Since we will reuse them, let's give them names a, b, c, subscripted with the step we are on, and walk through a calculation of f(5):
Step 1: f(3) = f(2) + 2f(1) + 3f(0), or f(3) = a1 + 2b1 + 3c1
where
a1 = f(2) = 2,
b1 = f(1) = 1,
c1 = f(0) = 0,
since f(n) = n for n < 3.
Thus:
f(3) = a1 + 2b1 + 3c1 = 4
Step 2: f(4) = f(3) + 2a1 + 3b1
So:
a2 = f(3) = 4 (calculated above in step 1),
b2 = a1 = f(2) = 2,
c2 = b1 = f(1) = 1
Thus:
f(4) = 4 + 2*2 + 3*1 = 11
Step 3: f(5) = f(4) + 2a2 + 3b2
So:
a3 = f(4) = 11 (calculated above in step 2),
b3 = a2 = f(3) = 4,
c3 = b2 = f(2) = 2
Thus:
f(5) = 11 + 2*4 + 3*2 = 25
Throughout the above calculation we capture the state of the previous step and pass it to the next one,
in particular:
a_step = the result computed in step-1
b_step = a_(step-1)
c_step = b_(step-1)
Once I saw this, then coming up with the iterative version was straightforward.
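In code, that state-passing looks like this (my Python rendering of the three steps above):

a, b, c = 2, 1, 0   # a1 = f(2), b1 = f(1), c1 = f(0)
for step in (1, 2, 3):
    a, b, c = a + 2 * b + 3 * c, a, b   # new a is this step's result; b, c shift down
    print(f"step {step}: f({step + 2}) = {a}")
# step 1: f(3) = 4
# step 2: f(4) = 11
# step 3: f(5) = 25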
Since the post you linked to describes a lot about the solution, I'll try to only give complementary information.
You're trying to define a tail-recursive function in Scheme here, given a (non-tail) recursive definition.
The base case of the recursion (f(n) = n if n < 3) is handled by both functions. You might be tempted to simplify the first function to:
(define (f n)
  (f-iter 2 1 0 n))
but note that this would be wrong for n < 2: f-iter would immediately return a = 2, whereas f(1) = 1 and f(0) = 0. So the check in f really is needed.
The general form would be:
(define (f-iter ... n)
(if (base-case? n)
base-result
(f-iter ...)))
Note I didn't fill in parameters for f-iter yet, because you first need to understand what state needs to be passed from one iteration to another.
Now, let's look at the dependencies of the recursive form of f(n). It references f(n - 1), f(n - 2), and f(n - 3), so we need to keep around these values. And of course we need the value of n itself, so we can stop iterating over it.
So that's how you come up with the tail-recursive call: we compute f(n) to use as f(n - 1), rotate f(n - 1) to f(n - 2) and f(n - 2) to f(n - 3), and decrement count.
If this still doesn't help, please try to ask a more specific question — it's really hard to answer when you write "I don't understand" given a relatively thorough explanation already.
I'm going to come at this in a slightly different approach to the other answers here, focused on how coding style can make the thought process behind an algorithm like this easier to comprehend.
The trouble with Bill's approach, quoted in your question, is that it's not immediately clear what meaning is conveyed by the state variables, a, b, and c. Their names convey no information, and Bill's post does not describe any invariant or other rule that they obey. I find it easier both to formulate and to understand iterative algorithms if the state variables obey some documented rules describing their relationships to each other.
With this in mind, consider this alternative formulation of the exact same algorithm, which differs from Bill's only in having more meaningful variable names for a, b and c and an incrementing counter variable instead of a decrementing one:
(define (f n)
(if (< n 3)
n
(f-iter n 2 0 1 2)))
(define (f-iter n
i
f-of-i-minus-2
f-of-i-minus-1
f-of-i)
(if (= i n)
f-of-i
(f-iter n
(+ i 1)
f-of-i-minus-1
f-of-i
(+ f-of-i
(* 2 f-of-i-minus-1)
(* 3 f-of-i-minus-2)))))
Suddenly the correctness of the algorithm - and the thought process behind its creation - is simple to see and describe. To calculate f(n):
We have a counter variable i that starts at 2 and climbs to n, incrementing by 1 on each call to f-iter.
At each step along the way, we keep track of f(i), f(i-1) and f(i-2), which is sufficient to allow us to calculate f(i+1).
Once i=n, we are done.
What did help me was running the process manually using a pencil and using the hint the author gave for the Fibonacci example:
a <- a + b
b <- a
Translating this to the new problem, this is how you push state forward in the process:
a <- a + (b * 2) + (c * 3)
b <- a
c <- b
So you need a function with an interface that accepts 3 variables: a, b, c. And it needs to call itself using the process above:
(define (f-iter a b c)
(f-iter (+ a (* b 2) (* c 3)) a b))
If you run and print each variable for each iteration starting with (f-iter 1 0 0), you'll get something like this (it will run forever of course):
a b c
=========
1 0 0
1 1 0
3 1 1
8 3 1
17 8 3
42 17 8
100 42 17
235 100 42
...
Can you see the answer? You get it by summing columns b and c for each iteration. I must admit I found it by doing some trial and error. The only thing left is having a counter to know when to stop; here is the whole thing:
(define (f n)
(f-iter 1 0 0 n))
(define (f-iter a b c count)
(if (= count 0)
(+ b c)
(f-iter (+ a (* b 2) (* c 3)) a b (- count 1))))
A function f is defined by the rule that f(n) = n, if n<3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3), if n > 3. Write a procedure that computes f by means of a recursive process.
It is already written:
f(n) = n, (* if *) n < 3
= f(n - 1) + 2f(n - 2) + 3f(n - 3), (* if *) n > 3
Believe it or not, there was once such a language. To write this down in another language is just a matter of syntax. And by the way, the definition as you (mis)quote it has a bug, which is now very apparent and clear.
Write a procedure that computes f by means of an iterative process.
Iteration means going forward (there's your explanation!) as opposed to the recursion's going backwards at first, to the very lowest level, and then going forward calculating the result on the way back up:
f(0) = 0
f(1) = 1
f(2) = 2
f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)
= a + 2b + 3c
f(n+1) = f(n) + 2f(n - 1) + 3f(n - 2)
= a' + 2b' + 3c' where
a' = f(n) = a+2b+3c,
b' = f(n-1) = a,
c' = f(n-2) = b
......
This thus describes the problem's state transitions as
(n, a, b, c) -> (n+1, a+2*b+3*c, a, b)
We could code it as
g (n, a, b, c) = g (n+1, a+2*b+3*c, a, b)
but of course it wouldn't ever stop. So we must instead have
f n = g (2, 2, 1, 0)
where
g (k, a, b, c) = g (k+1, a+2*b+3*c, a, b), (* if *) k < n
g (k, a, b, c) = a, otherwise
and this is already exactly like the code you asked about, up to syntax.
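For completeness, here is the same forward process in Python (my transcription; like the pseudo-code, it glosses over the n < 2 corner cases):

def f(n):
    k, a, b, c = 2, 2, 1, 0    # start at f(2) = 2, f(1) = 1, f(0) = 0
    while k < n:               # (k, a, b, c) -> (k+1, a+2b+3c, a, b)
        k, a, b, c = k + 1, a + 2 * b + 3 * c, a, b
    return a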
Counting up to n is more natural here, following our paradigm of "going forward", but counting down to 0 as the code you quote does is of course entirely equivalent.
The corner cases and possible off-by-one errors are left as an exercise, being non-interesting technicalities.