How to prove 3n + 2 log n = O(n)

How would I be able to prove 3n + 2 log n = O(n) using the definition of big-O?
The constant C is supposedly 6, and k is 1, but I have no idea how those were found. Any help will be greatly appreciated.

To formally prove this result, you need to find a choice of n0 and c such that
For any n ≥ n0: 3n + 2log n ≤ cn
To start this off, note that if you have any n ≥ 1, then log n < n. Consequently, if you consider any n ≥ 1, you have that
3n + 2log n < 3n + 2n = 5n
Consequently, if you pick n0 = 1 and c = 5, you have that
For any n ≥ n0: 3n + 2log n < 3n + 2n = 5n ≤ cn
And therefore 3n + 2 log n = O(n).
More generally, when given problems like these, try identifying the dominant term (here, the n term) and finding some choice of n0 such that the non-dominant terms are overwhelmed by the dominant term. Once you've done this, all that's left to do is choose the right constant c.
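If you want to sanity-check the constants numerically, here is a minimal Python sketch (assuming log means log base 2; the argument works the same for any base):

    import math

    # Check 3n + 2*log2(n) <= 5n for sampled n >= n0 = 1.
    for n in range(1, 10**6, 997):
        assert 3*n + 2*math.log2(n) <= 5*n
    print("3n + 2 log n <= 5n holds for all sampled n >= 1")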
Hope this helps!

Wild guess (your question is quite unclear): the task is to show that
O(3n + 2log n) = O(n)
Here's how it comes out: n ↦ n grows faster than n ↦ log n, and since the bound is asymptotic, only the fastest-growing term matters, which here is n.

You can prove the following, if I remember correctly:
if f1(n) = O(g1(n)) and f2(n) = O(g2(n)), then f1(n) + f2(n) = O(max{g1(n), g2(n)}).
From there it's pretty straightforward.
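As a quick numeric illustration of that dominance (a sketch, assuming log base 2):

    import math

    # The ratio (3n + 2*log2(n)) / n tends to 3: the n term dominates.
    for n in [10, 10**3, 10**6, 10**9]:
        print(n, (3*n + 2*math.log2(n)) / n)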

Related

Runtime Complexity | Recursive calculation using Master's Theorem

So I've encountered a case where I have two recursive calls rather than one. I know how to solve for one recursive call, but in this case I'm not sure whether I'm right or wrong.
I have the following problem:
T(n) = T(2n/5) + T(3n/5) + n
And I need to find the worst-case complexity for this.
(FYI - It's some kind of augmented merge sort)
My feeling was to use the first case of the theorem, but I sense something is wrong with my idea. Any explanation of how to solve problems like this would be appreciated :)
The recursion tree for the given recursion will look like this:
Size                                 Cost
             n                        n
           /   \
       2n/5     3n/5                  n
       /   \    /   \
  4n/25 6n/25 6n/25 9n/25             n
and so on, until the input size reaches 1.
The longest simple path from the root to a leaf is n → (3/5)n → (3/5)^2 n → ... → 1.
So if we let the height of the tree be k, then
((3/5)^k) * n = 1, which gives k = log_{5/3}(n)
In the worst case we expect every level to contribute a cost of n, hence
Total Cost = n * log_{5/3}(n)
However, we must keep one thing in mind: the tree is not complete, so some levels near the bottom are only partially filled.
In asymptotic analysis we ignore such details.
Hence the worst-case cost = n * log_{5/3}(n),
which is O(n * log n).
Now, let us verify this using the substitution method:
Claim: T(n) ≤ d*n*log(n) for some constant d > 0, which would mean T(n) = O(n log n).
Assuming this holds for the subproblems:
T(n) = T(2n/5) + T(3n/5) + n
     ≤ d(2n/5)log(2n/5) + d(3n/5)log(3n/5) + n
     = d(2n/5)(log n - log(5/2)) + d(3n/5)(log n - log(5/3)) + n
     = d*n*log n - d*n*(2/5 * log(5/2) + 3/5 * log(5/3)) + n
     ≤ d*n*log n
as long as d ≥ 1/(2/5 * log(5/2) + 3/5 * log(5/3))
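If you'd like to sanity-check this numerically, here's a minimal Python sketch; the base case T(1) = 1 and the floors on the subproblem sizes are assumptions, since the question leaves them unspecified:

    from functools import lru_cache
    import math

    @lru_cache(maxsize=None)
    def T(n):
        # Assumed base case; the question doesn't specify one.
        if n <= 1:
            return 1
        # Floors added so the recurrence is defined on integers.
        return T(2 * n // 5) + T(3 * n // 5) + n

    # T(n) / (n * log2(n)) should settle near
    # 1 / (2/5 * log2(5/2) + 3/5 * log2(5/3)) ~= 1.03, the d from the proof.
    for n in [10**3, 10**4, 10**5, 10**6]:
        print(n, round(T(n) / (n * math.log2(n)), 3))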

How can I find the running time of a recurrence relation?

The running time for this recurrence relation is O(n log n). Since I am new to algorithms, how would I show that mathematically?
T(n) = 2⋅T(n/2) + O(n)
T(n) = 2 ( 2⋅T(n/4) + O(n) ) + O(n) // since T(n/2) = 2⋅T(n/4) + O(n)
So far I can see that if I suppose n to be a power of 2, like n = 2^m, then maybe I can show it, but I am not getting the clear picture. Can anyone help me?
If you use the master theorem, you get the result you expected.
If you want to prove this "by hand", you can do so easily by supposing n = 2^m is a power of 2 (as you already said). This leads you to
T(n) = 2⋅T(n/2) + O(n)
     = 2⋅(2⋅T(n/4) + O(n/2)) + O(n)
     = 4⋅T(n/4) + 2⋅O(n/2) + O(n)
     = 4⋅(2⋅T(n/8) + O(n/4)) + 2⋅O(n/2) + O(n)
     = ...
     = Σ_{k=1..m} 2^k⋅O(n/2^k)
     = Σ_{k=1..m} O(n)
     = m⋅O(n)
Since m = log₂(n), you can write this as O(n log n).
At the end it doesn't matter if n is a power of 2 or not.
To see this, you can think of it as follows: you have an input of size n (which is not a power of 2) and you add more elements to the input until it contains n' = 2^m elements, with m ∈ ℕ and log(n) ≤ m ≤ log(n) + 1, i.e. n' is the smallest power of 2 that is greater than n. Obviously T(n) ≤ T(n') holds, and we know T(n') is in
O(n'⋅log(n')) = O(c⋅n⋅log(c⋅n)) = O(n⋅log(n) + n⋅log(c)) = O(n⋅log(n))
where c is a constant between 1 and 2.
You can do the same with the greatest power of 2 that is smaller than n. This leads you to T(n) ≥ T(n''), and we know T(n'') is in
O(n''⋅log(n'')) = O(c⋅n⋅log(c⋅n)) = O(n⋅log(n))
where c is a constant between 1/2 and 1.
In total you get that the complexity of T(n) is bounded by the complexities of T(n'') and T(n'), which are both O(n⋅log(n)), and so T(n) is also in O(n⋅log(n)), even if n is not a power of 2.
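Here's a small Python sketch of this, under the assumption that the O(n) term is exactly n and T(1) = 1:

    from functools import lru_cache
    import math

    @lru_cache(maxsize=None)
    def T(n):
        # Assumptions: the O(n) term is exactly n, and T(1) = 1.
        if n <= 1:
            return 1
        return 2 * T(n // 2) + n

    # T(n) / (n * log2(n)) should approach 1, for powers of 2 and otherwise.
    for n in [2**10, 2**20, 3**13, 10**6 + 7]:
        print(n, round(T(n) / (n * math.log2(n)), 3))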

O notation: 2^log(O(n^2)) = 2^O(log(n^2))?

I tried to solve this with logarithm rules:
O(n^2) = 2^O(log(n^2))
c*n^2 = 2^log(n^2c)
I'm not sure whether that is right.
No, no, no. You can't just take logarithms.
2^log(O(n^2)) = 2^log(c * n^2) = c * n^2
2^O(log n^2) = 2^(c * log n^2) = (2^(log n^2))^c = (n^2)^c
The first is just O(n^2). The second is n raised to some unknown but bounded power.
I think this depends on what the equals sign means here. If the equals sign means
"Any function that is 2^(log O(n^2)) is also 2^(O(log n^2))"
then the claim is true. Let f(n) be some function that's O(n^2). This means that there's a c and n0 such that for any n ≥ n0, we know that f(n) ≤ cn^2. Therefore, for any n ≥ n0, we know that
2^(log f(n)) ≤ 2^(log(cn^2)) = 2^(log c + log n^2)
The function log c + log n^2 is itself O(log n^2), so we see that
2^(log f(n)) ≤ 2^(log c + log n^2) = 2^(O(log n^2))
On the other hand, if the equals sign means
"The class of functions 2^(log O(n^2)) is the same class of functions as 2^(O(log n^2))"
then the claim is false. For example, the function n^4 is in the second class because it can be written as 2^(2 log n^2), but it's not in the first class.
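You can check both identities on concrete numbers (a sketch; the equalities hold up to floating-point rounding):

    import math

    n, c = 100.0, 3.0
    f = c * n**2                      # some f(n) that is O(n^2)

    # 2^(log f(n)) just gives back f(n):
    print(2 ** math.log2(f), f)

    # 2^(c * log(n^2)) = (n^2)^c = n^(2c), here n^6, which is not O(n^2):
    print(2 ** (c * math.log2(n**2)), n ** (2 * c))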
Hope this helps!

How do you pick variable substitutions in recurrence relations?

In our Data Structures class we are learning how to solve recurrence relations in 1 variable. Unfortunately some things seem to come "out of the blue".
For example, some exercises already tell you how to substitute the variable n:
Compute T(n) for n = 2^k
T(n) = a for n ≤ 2
T(n) = 8T(n/2) + bn^2 (a and b are > 0)
But some exercises just give you T(n) without suggesting a substitution for the variable n:
T(n) = 1 for n ≤ 1
T(n) = 2T(n/4) + sqrt(n)
I used the iterative method and arrived at the right answer: sqrt(n) + (1/2) * sqrt(n) * log(n).
But when the professor explained it, she started by saying: "Let n = 4^k", which is what I mean by "out of the blue". Using that fact, the answer is simpler to obtain.
But how is the student supposed to come up with that?
This is another example:
T(n) = 1 for n ≤ 1
T(n) = 2T( (n-1)/2 ) + n
Here I started again with the iterative method, but I can't reach a definitive answer; it looks more complex that way.
After 3 iterative steps I arrived to this:
T(n) = 4T( (n-2)/4 ) + 2n - 1
T(n) = 8T( (n-3)/8 ) + 3n - 3
T(n) = 16T( (n-4)/16 ) + 4n - 6
I am inclined to say T(i) = 2^i * T( (n-i)/2^i ) + i*n - ? This last part I can't figure out, maybe I made a mistake.
However, in the answer she provides, she starts again with another substitution: let n = (2^k) - 1. I don't see where this comes from. Why would I do this? What is the logic behind it?
In all of these cases, these substitutions are reasonable because they rewrite the recurrence as one of the form S(k) = aS(k - 1) + f(k). These recurrences are often easier to solve than other recurrences because they define S(k) purely in terms of S(k - 1).
Let's do some examples to see how this works. Consider this recurrence:
T(n) = 1 (if n ≤ 1)
T(n) = 2T(n/4) + sqrt(n) (otherwise)
Here, the size of the problem shrinks by a factor of four at each level. Therefore, if the input is a perfect power of four, then it will shrink from size 4^k to 4^(k-1), from 4^(k-1) to 4^(k-2), etc., until the recursion bottoms out. If we make this substitution and let S(k) = T(4^k), then we get that
S(0) = 1
S(k) = 2S(k - 1) + 2^k
This is now a recurrence relation where S(k) is defined in terms of S(k - 1), which can make the recurrence easier to solve.
Let's look at your original recurrence:
T(n) = a (for n ≤ 2)
T(n) = 8T(n/2) + bn^2
Notice that the recursive step divides n by two. If n is a perfect power of two, then the recursive step considers the power of two that comes right before n. Letting S(k) = T(2^k) gives
S(k) = a (for k ≤ 1)
S(k) = 8S(k - 1) + b⋅2^(2k)
Notice how S(k) is defined in terms of S(k - 1), which is a much easier recurrence to solve. The choice of powers of two was "natural" here because it made the recursive step talk purely about the previous value of S and not some arbitrarily smaller value of S.
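A quick way to convince yourself the substitution is faithful is to evaluate both recurrences directly. Here's a small Python sketch with hypothetical constants a = b = 1:

    # a and b are hypothetical constants, chosen just for this check.
    a, b = 1, 1

    def T(n):
        return a if n <= 2 else 8 * T(n // 2) + b * n**2

    def S(k):
        return a if k <= 1 else 8 * S(k - 1) + b * 2**(2 * k)

    # S(k) agrees with T(2^k), confirming the substitution.
    for k in range(2, 12):
        assert S(k) == T(2**k)
    print("S(k) == T(2^k) for k = 2..11")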
Now, look at the last recurrence:
T(n) = 1 (n ≤ 1)
T(n) = 2T( (n-1)/2 ) + n
We'd like to choose a substitution n = f(k) such that (f(k) - 1) / 2 = f(k - 1); then T(f(k)) is defined in terms of T(f(k - 1)). The question is how to find such an f.
With some trial and error, we get that setting f(k) = 2^k - 1 fits the bill, since
(f(k) - 1) / 2 = ((2^k - 1) - 1) / 2 = (2^k - 2) / 2 = 2^(k-1) - 1 = f(k - 1)
Therefore, letting n = 2^k - 1 and setting S(k) = T(2^k - 1), we get
S(k) = 1 (if k ≤ 1)
S(k) = 2S(k - 1) + 2^k - 1
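Again, you can check that the substitution is faithful by evaluating both recurrences (a small Python sketch):

    def T(n):
        return 1 if n <= 1 else 2 * T((n - 1) // 2) + n

    def S(k):
        return 1 if k <= 1 else 2 * S(k - 1) + 2**k - 1

    # S(k) agrees with T(2^k - 1), confirming the substitution.
    for k in range(1, 15):
        assert S(k) == T(2**k - 1)
    print("S(k) == T(2^k - 1) for k = 1..14")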
Hope this helps!

Can someone explain Mathematical Induction (to prove a recursive method)

Can someone explain mathematical induction to prove a recursive method? I am a freshman computer science student and I have not yet taken Calculus (I have had up through Trig). I kind of understand it, but I have trouble when asked to write out an induction proof for a recursive method.
Here is an explanation by example:
Let's say you have the following formula that you want to prove:
sum(i | i <- [1, n]) = n * (n + 1) / 2
This formula provides a closed form for the sum of all integers between 1 and n.
We will start by proving the formula for the simple base case of n = 1. In this case, both sides of the formula reduce to 1. This in turn means that the formula holds for n = 1.
Next, we will prove that if the formula holds for a value n, then it holds for the next value of n (or n + 1). In other words, if the following is true:
sum(i | i <- [1, n]) = n * (n + 1) / 2
Then the following is also true:
sum(i | i <- [1, n + 1]) = (n + 1) * (n + 2) / 2
To do so, let's start with the first side of the last formula:
s1 = sum(i | i <- [1, n + 1]) = sum(i | i <- [1, n]) + (n + 1)
That is, the sum of all integers between 1 and n + 1 is equal to the sum of integers between 1 and n, plus the last term n + 1.
Since we are basing this proof on the condition that the formula holds for n, we can write:
s1 = n * (n + 1) / 2 + (n + 1) = (n + 1) * (n + 2) / 2 = s2
As you can see, we have arrived at the second side of the formula we are trying to prove, which means that the formula does indeed hold.
This finishes the inductive proof, but what does it actually mean?
1. The formula is correct for n = 1 (the base case we proved above).
2. If the formula is correct for n, then it is correct for n + 1.
From 1 and 2, we can say: since the formula is correct for n = 1, it is also correct for 1 + 1 = 2.
We can repeat this process again: the case of n = 2 is correct, so the case of n = 3 is correct. This reasoning can go on ad infinitum; the formula is correct for all integer values of n ≥ 1.
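As a quick sanity check (not a proof; the induction above is what proves it for all n), you can test the formula numerically:

    # Spot-check the closed form: sum of 1..n equals n*(n+1)/2.
    for n in range(1, 1000):
        assert sum(range(1, n + 1)) == n * (n + 1) // 2
    print("formula verified for n = 1..999")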
induction != Calc!!!
I can get N guys drunk with 10*N beers.
Base Case: 1 guy
I can get one guy drunk with 10 beers
Inductive step: given p(i), prove p(i + 1).
I can get i guys drunk with 10 * i beers; if I add another guy, I can get him drunk with 10 more beers. Therefore, I can get i + 1 guys drunk with 10 * (i + 1) beers.
p(1) -> p(2) -> p(3) -> ... -> p(n)
Discrete Math is easy!
First, you need a base case. Then you need an inductive step that holds for an arbitrary step n. In your inductive step, you will need an inductive hypothesis: the assumption that the claim holds at step n. Finally, use that assumption to prove it for step n + 1.
