Big-O of a function taking on a negative value? [closed]

Hello, I was wondering what the Big-O would be for this function: f(n) = 7n – 3nlogn + 100000. I checked other similar questions. Some say that since the nlogn term has a coefficient of -3, we can ignore it, and as such the result is O(n). I checked with my professor and he said no, we don't ignore the negative term: we still choose the largest of them all, and as such the Big-O would be O(nlogn). Not ignoring the negative term, I end up with this. Am I right?
7n – 3nlogn + 100000 ≤ (7 + 3 + 100000) nlogn where c = 100010 & n ≥ n0
7n – 3nlogn + 100000 ≤ 100010 nlogn, with n0 = 2
O(nlogn) – Linear logarithmic or linearithmic
or is it more like
7n - 3nlogn + 100000 ≤ 100010 n where c= 100010 & n≥n0 & n0 = 1
Thanks a lot!

If you look at the formal definition of big-O notation, you'll notice that f(x) = O(g(x)) iff there are constants c and x0 such that
∀x > x0. |f(x)| ≤ c|g(x)|
Notice that there are absolute value bars here. Consequently, while it's true that
7n - 3nlogn + 100000 ≤ 100010 n
under the circumstances you mentioned, you would really need to show that
|7n - 3nlogn + 100000| ≤ 100010 |n|
which is not in general true. Using the fact that you need to have absolute value bars, you can prove that this function is Θ(n log n) by repeating your analysis, but taking care to watch for the sign flip. One way to do this is to use the triangle inequality:
|7n - 3n log n + 100000|
≤ |7n| + |-3n log n| + |100000|
= 7|n| + 3|n log n| + |100000|
≤ 7n + 3n log n + 100000
< 7n log n + 3n log n + 100000 n log n (when n > 10, say)
= 100010 n log n
So the function is O(n log n). You can repeat this analysis to get a matching lower bound as well.
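If you want to convince yourself numerically before writing the proof, here is a quick Python sanity check of the final bound (a sketch only; I'm assuming base-2 logarithms, which only affects the constants):

import math

# Check |7n - 3n lg n + 100000| <= 100010 * n * lg n on a sample of n > 10.
# This is numerical evidence for the bound above, not a proof.
def f(n):
    return 7*n - 3*n*math.log2(n) + 100000

for n in range(11, 1_000_000, 997):
    assert abs(f(n)) <= 100010 * n * math.log2(n), n
print("bound holds for all sampled n > 10")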
Hope this helps!

Related

Getting the closed form of a recurrence equation and comparing which is faster

Get the closed form of these equations if possible. Then determine which would be faster.
f(n) = 0.25·f(n/3) + f(n/10) + logn, f(1) = 1
g(n) = n + log(n-1)^2 + 1
For these equations, do I have to expand the recursions and try to discover patterns within? I really don't know how to calculate a closed form intuitively.
Short answer: g(n) > f(n) for large n.
Long answer: g is not even recursive, so you can see immediately that g(n) = Θ(n).
For f, note that both recursive arguments are at most n/3 and the coefficients sum to 0.25 + 1 = 1.25, so (assuming f is nondecreasing) you can bound f(n) ≤ 1.25·f(n/3) + logn, which, by case 1 of the master theorem, is O(n^(log_3 1.25)) ≈ O(n^0.21), far smaller than Θ(n).
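If you just want intuition, you can also evaluate both functions numerically. A minimal sketch, under my own reading of the problem (f(n) = 1 for n ≤ 1, arguments truncated to integers, base-2 logs, and log(n-1)^2 parsed as (log(n-1))^2):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # f(n) = 0.25 f(n/3) + f(n/10) + log n, with f(n) = 1 for n <= 1
    if n <= 1:
        return 1.0
    return 0.25 * f(n // 3) + f(n // 10) + math.log2(n)

def g(n):
    # g(n) = n + log(n-1)^2 + 1, read as (log(n-1))^2
    return n + math.log2(n - 1) ** 2 + 1

for n in (10, 100, 10_000, 1_000_000):
    print(n, round(f(n), 2), round(g(n), 2))  # g(n) quickly dwarfs f(n)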

How to solve the 50 noodles/shoelaces puzzle? [closed]

There are 50 noodles in a bowl. You can tie together two ends of either one noodle or two different noodles, forming a knot.
Q: What is the expected number of loops we end up with in the bowl?
∑(1/i) for i from 1 to 50.
When you have n noodles, let's take a look at noodle number n. It can either be tied to itself with probability 1/n, or to some other noodle with probability (n-1)/n. When it is tied to itself, a loop is formed and we need the expected value for the remaining n-1 noodles. When it is tied to some other noodle, it is the same as if that noodle had been taken away, so the answer is again the expected value for the remaining n-1 noodles.
f(n) = 1/n * (f(n-1) + 1) + (n-1)/n * f(n-1);
f(n) = 1/n * f(n-1) + 1/n + (n-1)/n * f(n-1);
f(n) = f(n-1) + 1/n
f(n) = 1 + 1/2 + ... + 1/n
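A quick Monte Carlo check of this recurrence, simulating the process exactly as modeled in this answer (at each step the current noodle closes into a loop with probability 1/k, where k is the number of noodles left); the simulated mean should land near H_50 ≈ 4.5:

import random

def simulate(n=50):
    loops, k = 0, n
    while k > 0:
        if random.random() < 1.0 / k:
            loops += 1          # noodle tied to itself: one loop closes
        k -= 1                  # either way, one fewer noodle remains
    return loops

trials = 100_000
avg = sum(simulate() for _ in range(trials)) / trials
print(avg, sum(1.0 / i for i in range(1, 51)))  # both ~4.499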

homework: Proving n <= 2^(n/4)? [closed]

So I have an assignment question where I have to prove:
n^4 is in O(2^n)
Just by looking at the graphs of the functions, I know that with c = 1 and n0 = 16 this is true.
While trying to prove it on paper, I managed to reduce the inequality down to n <= 2^(n/4); however, I cannot figure out how to simplify this further or adequately prove from here that with n0 = 16 the big-O assertion holds.
Any help?
The title is incorrect, and the error is important.
You are not trying to prove that n ≤ 2^(n/4); you are trying to prove that n ∊ O(2^(n/4)), which is a strictly weaker claim. It is impossible to prove that n ≤ 2^(n/4), because at n = 2 the inequality is false.
By taking the logarithm of both sides, we can reduce the problem to that of showing that log n ∊ O(n), which is easy to show because d/dn log n ≤ 1 for n ≥ 1.
It is easy to prove that the inequality holds for n >= 16 using induction, no calculus required:
First, for n = 16 you have 16^4 = 2^16.
If the inequality holds for n = k, then for n = k+1 you have (k+1)^4 = (####)·k^4 < 2k^4 ≤ 2·2^k = 2^(k+1).
QED.
Since this is homework, I'll leave the crucial step, finding what goes in place of ####, to the reader.
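If you want numeric reassurance before writing the induction (evidence, not a proof), a couple of lines suffice; note that the base case is exactly tight:

# Check n^4 <= 2^n on a range of n >= 16; equality holds at n = 16.
for n in range(16, 200):
    assert n**4 <= 2**n, n
print("holds for n = 16..199; equality at 16:", 16**4 == 2**16)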

Proving big O of statement [closed]

I am having a hard time proving that n^k is O(2^n) for all k. I tried taking log2 of both sides and got k·lg n = n, but this is wrong. I am not sure how else I can prove this.
To show that n^k is O(2^n), note that
n^k = (2^(lg n))^k = 2^(k lg n)
So now you want to find an n0 and c such that for all n ≥ n0,
2^(k lg n) ≤ c·2^n
Now, let's let c = 1 and then consider what happens when n = 2^m for some m. If we do this, we get
2^(k lg n) ≤ c·2^n = 2^n
2^(k lg 2^m) ≤ 2^(2^m)
2^(km) ≤ 2^(2^m)
And, since 2^x is a monotonically increasing function of x, this is equivalent to
km ≤ 2^m
Now, let's finish things off. Suppose that we let m = max{k, 4}, so k ≤ m. Then we have that
km ≤ m^2
We also have that
m^2 ≤ 2^m
since m^2 ≤ 2^m holds for any m ≥ 4, and our choice of m = max{k, 4} guarantees m ≥ 4. Combining these, we get that
km ≤ 2^m
which is equivalent to what we wanted to show above. Consequently, if we pick any n ≥ 2^m = 2^max{4, k}, it will be true that n^k ≤ 2^n. Thus, by the formal definition of big-O notation, we get that n^k = O(2^n).
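As a quick spot-check of that threshold (a sketch, not part of the proof): for each k, n0 = 2^max{4, k} should indeed satisfy n^k ≤ 2^n from n0 onward:

for k in range(1, 8):
    n0 = 2 ** max(4, k)
    for n in range(n0, n0 + 50):   # sample a window past the threshold
        assert n**k <= 2**n, (k, n)
print("threshold n0 = 2^max(4, k) works for k = 1..7")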
I think this math is right; please let me know if I'm wrong!
Hope this helps!
I can't comment yet, so I will make this an answer.
Instead of reducing the equation like you have been trying to do, you should try to find an n0 and a M that satisfy the formal definition of big O notation found here: http://en.wikipedia.org/wiki/Big_O_notation#Formal_definition
Something along the lines of n0 = M = k might work (I haven't written it out, so maybe that doesn't work; that's just to give you an idea).

Recursion and Big O

I've been working through a recent Computer Science homework involving recursion and big-O notation. I believe I understand this pretty well (certainly not perfectly, though!), but there is one question in particular that is giving me the most trouble. The odd thing is that, at a glance, it looks to be the most simple one on the homework.
Provide the best rate of growth using the big-Oh notation for the solution to the following recurrence?
T(1) = 2
T(n) = 2T(n - 1) + 1 for n>1
And the choices are:
O(n log n)
O(n^2)
O(2^n)
O(n^n)
I understand that big O works as an upper bound, describing the largest amount of calculation, or the highest running time, that a program or process will take. I feel like this particular recursion should be O(n), since, at most, the recursion only occurs once for each value of n. Since O(n) isn't available, the answer must be either the closest bound above it, O(nlogn), or one of the other three, worse, options.
So, my question is: Why isn't this O(n)?
There are a couple of different ways to solve recurrences: substitution, recursion tree, and the master theorem. The master theorem won't work in this case, because the recurrence doesn't fit the master theorem's form.
You could use the other two methods, but the easiest way for this problem is to solve it iteratively.
T(n) = 2T(n-1) + 1
T(n) = 4T(n-2) + 2 + 1
T(n) = 8T(n-3) + 4 + 2 + 1
T(n) = ...
See the pattern?
T(n) = 2^(n-1)·T(1) + 2^(n-2) + 2^(n-3) + ... + 1
T(n) = 2^(n-1)·2 + 2^(n-2) + 2^(n-3) + ... + 1
T(n) = 2^n + 2^(n-2) + 2^(n-3) + ... + 1
Therefore, the tightest bound is Θ(2^n).
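You can confirm the expansion against the closed form it implies (2^n + 2^(n-2) + ... + 1 = 3·2^(n-1) - 1, which also appears in another answer below) with a few lines of Python:

def T(n):
    # the recurrence as given: T(1) = 2, T(n) = 2 T(n-1) + 1
    return 2 if n == 1 else 2 * T(n - 1) + 1

for n in range(1, 20):
    assert T(n) == 3 * 2**(n - 1) - 1   # matches the unrolled sum
print("closed form matches; T(n) grows as Theta(2^n)")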
I think you have misunderstood the question a bit. It does not ask you how long it would take to solve the recurrence. It is asking what the big-O (the asymptotic bound) of the solution itself is.
What you have to do is to come up with a closed form solution, i.e. the non-recursive formula for T(n), and then determine what the big-O of that expression is.
The question is asking for the big-Oh notation for the solution to the recurrence, not the cost of calculating the recurrence.
Put another way: the recurrence produces:
1 -> 2
2 -> 5
3 -> 11
4 -> 23
5 -> 47
What big-Oh notation best describes the sequence 2, 5, 11, 23, 47, ...?
The correct way to solve that is to solve the recurrence equations.
I think this will be exponential. Each increment to n makes the value roughly twice as large.
T(2) = 2 * T(1) + 1 = 5
T(3) = 2 * T(2) + 1 = 11
...
T(x) would be the running time of the following program (for example):
def fn(x):
    if x == 1:
        return  # a constant-time base case
    # do the calculation for x - 1 twice
    fn(x - 1)
    fn(x - 1)
"I think this will be exponential. Each increment to n brings twice as much calculation."
No, it doesn't. Quite on the contrary:
Consider that for n iterations, we get running time R. Then for n + 1 iterations we'll get exactly R + 1.
Thus, the growth rate is constant and the overall runtime is indeed O(n).
However, I think Dima's assumption about the question is right although his solution is overly complicated:
"What you have to do is to come up with a closed form solution, i.e. the non-recursive formula for T(n), and then determine what the big-O of that expression is."
It's sufficient to examine the relative size of T(n) and T(n + 1) and determine the relative growth rate. The amount obviously doubles, which directly gives the asymptotic growth.
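For what it's worth, here is the bottom-up evaluation this answer has in mind: each step from T(n) to T(n+1) is one multiply and one add, so computing T(n) takes O(n) work even though the value itself grows like 2^n (a sketch):

def T_iter(n):
    t = 2                    # T(1)
    for _ in range(2, n + 1):
        t = 2 * t + 1        # constant work per step: O(n) steps total
    return t

print([T_iter(n) for n in range(1, 6)])  # [2, 5, 11, 23, 47]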
First off, all four answers are worse than O(n)... O(n*log n) is more complex than plain old O(n). What's bigger: 8 or 8 * 3, 16 or 16 * 4, etc...
On to the actual question. The general solution can obviously be computed in constant time if you're not doing recursion (T(n) = 2^(n-1) + 2^n - 1), so that's not what they're asking.
And as you can see, if we write the recursive code:
int T( int N )
{
    if (N == 1) return 2;
    return 2*T(N-1) + 1;
}
It's obviously O(n).
So, it appears to be a badly worded question, and they are probably asking you the growth of the function itself, not the complexity of the code. That's 2^n. Now go do the rest of your homework... and study up on O(n * log n)
Computing a closed form solution to the recursion is easy.
By inspection, you guess that the solution is
T(n) = 3*2^(n-1) - 1
Then you prove by induction that this is indeed a solution. Base case:
T(1) = 3*2^0 - 1 = 3 - 1 = 2. OK.
Induction:
Suppose T(n) = 3*2^(n-1) - 1. Then
T(n+1) = 2*T(n) + 1 = 3*2^n - 2 + 1 = 3*2^((n+1)-1) - 1. OK.
where the first equality stems from the recurrence definition,
and the second from the inductive hypothesis. QED.
3*2^(n-1) - 1 is clearly Theta(2^n), hence the right answer is the third.
To the folks that answered O(n): I couldn't agree more with Dima. The problem does not ask for the tightest upper bound on the computational complexity of an algorithm to compute T(n) (which would now be O(1), since its closed form has been provided). The problem asks for the tightest upper bound on T(n) itself, and that is the exponential one.
