How can I prove a difficult big O notation? [closed]

I can easily understand how to prove a simple big O statement like n^5 + 3n^3 ∈ O(n^5).
But how can I prove something more complex, like 3^n or 2^n ∉ O(n^k)?

Use a proof by contradiction.
Let's prove that 2^n ∉ O(n^2). We assume the opposite and deduce a contradiction from a consequence of that assumption.
So, the assumption: there exist M and n0 such that 2^n < M n^2 for all n >= n0.
Let x be a number such that x > 5, x > n0, and 2^x > 4M. Do you agree that such a number must exist?
Finish off the proof by deducing a contradiction from the inequality 2^(2x) < M (2x)^2 = 4 M x^2, which holds by the assumption (take n = 2x).
Now do the analogous proof for k = 3. Then do it for k = 4. Then generalize your result for all k.
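If it helps to build intuition for why no constant M can work, here is a small Python sketch (purely illustrative, not part of the proof; the helper name crossover is just made up for this example) that finds, for a few choices of M and k, the first n at which 2^n overtakes M*n^k:

# Illustrative only: for any fixed M and k, 2**n eventually exceeds M * n**k,
# which is why the constants demanded by the big-O definition cannot exist.
def crossover(M, k, limit=10_000):
    """Return the smallest n <= limit with 2**n > M * n**k, or None."""
    for n in range(1, limit + 1):
        if 2 ** n > M * n ** k:
            return n
    return None

for M in (1, 100, 10**6):
    for k in (2, 3, 5):
        print(f"M={M}, k={k}: 2^n first exceeds M*n^k at n = {crossover(M, k)}")

No matter how large you make M, such a crossover point shows up; the contradiction argument above is what proves this for every M and n0.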

Related

What is "h" in numerical differentiation? [closed]

I would like to know what h from the numerical differentiation formulas is and how I can calculate it when I have a function.
I am speaking about these formulas:
f'(x0) ≈ (f(x0 + h) - f(x0)) / h
f'(x0) ≈ (f(x0) - f(x0 - h)) / h
f'(x0) ≈ (f(x0 + h) - f(x0 - h)) / (2h)
I would really appreciate any kind of help!
In such formulae h is usually a "very small number", similar to epsilon in Calculus.
For example, the derivative of f at a is defined as:
f'(a) = lim_{h → 0} (f(a + h) - f(a)) / h
Note how h is defined as approaching 0.
When programming, e.g. doing numerical gradient computation, it usually works to set h to something very small - many programming environments have an "epsilon" quantity; lacking that, you can just use a very small floating-point number.
Using the usual 8-byte floats, sensible values for h are about 1e-8 for the first two formulas and about 1e-5 for the third, central difference quotient. This holds for moderately sized x; for larger x you would have to take the scale of x into account in some way.
In general, for a k-th order difference quotient with error order p, the balance between floating-point noise and the error of the difference formula is reached for h around pow(2e-16, 1.0/(p+k)).
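For a concrete feel (this snippet is just an illustration; the test function sin and the chosen x0 are arbitrary), here are the three difference quotients in Python with the step sizes suggested above, plus one step size that is too small:

import math

def forward(f, x0, h):
    # first formula: forward difference
    return (f(x0 + h) - f(x0)) / h

def backward(f, x0, h):
    # second formula: backward difference
    return (f(x0) - f(x0 - h)) / h

def central(f, x0, h):
    # third formula: central difference
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

f, x0, exact = math.sin, 1.0, math.cos(1.0)

print("forward,  h=1e-8 :", abs(forward(f, x0, 1e-8) - exact))
print("backward, h=1e-8 :", abs(backward(f, x0, 1e-8) - exact))
print("central,  h=1e-5 :", abs(central(f, x0, 1e-5) - exact))
# Making h much smaller does not help: floating-point cancellation takes over.
print("central,  h=1e-12:", abs(central(f, x0, 1e-12) - exact))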

Show that the summation ∑_{i=1}^{n} log i is O(n log n) [closed]

One way I thought this might work: we can say that ∑_{i=1}^{n} log i ≤ ∑_{i=1}^{n} log n and then try to argue that it's O(n log n), but where do I go from here? Any suggestions?
If you just need to show that the sum is O(n log n), you can show that
Σ_{i=1}^{n} log i ≤ Σ_{i=1}^{n} log n = n log n
Therefore, your function is O(n log n). If you want to be even more formal, you can use the constants c = 1 and n0 = 1.
The more interesting question is to show that the sum is Θ(n log n) by proving an Ω(n log n) lower bound. To do this, note that the sum is at least the sum of its last n/2 terms, and each of those terms is at least log(n/2). This gives a lower bound of (n/2) log(n/2) = (n/2)(log n - log 2), which is Ω(n log n). Therefore, your summation is O(n log n) and Ω(n log n), so it's Θ(n log n).
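If you want to see the two bounds side by side, here is a quick numerical check in Python (illustrative only; the sample values of n are arbitrary):

import math

# Compare sum_{i=1}^{n} log i against the upper bound n*log n
# and the lower bound (n/2)*log(n/2) used in the Theta argument above.
for n in (10, 100, 1000, 10000):
    s = sum(math.log(i) for i in range(1, n + 1))
    upper = n * math.log(n)
    lower = (n / 2) * math.log(n / 2)
    print(f"n={n}: lower={lower:.1f}  sum={s:.1f}  upper={upper:.1f}")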
Hope this helps!

homework: Proving n <= 2^(n/4)? [closed]

So I have an assignment question where I have to prove:
n^4 is in O(2^n)
Just by looking at the graphs of the functions I know that with c = 1 and n0 = 16 this is true.
While trying to prove it on paper I managed to reduce the inequality down to n <= 2^(n/4); however, I cannot figure out how to simplify this further or adequately prove from here that the big-O assertion holds with n0 = 16.
Any help?
The title is incorrect, and the error is important.
You are not trying to prove that n ≤ 2^(n/4); you are trying to prove that n ∊ O(2^(n/4)), which is a strictly weaker claim. It is impossible to prove that n ≤ 2^(n/4), because at n = 2 the inequality is false.
By taking the logarithm of both sides, we can reduce the problem to that of showing that log n ∊ O(n), which is easy to show because d/dn log n ≤ 1 for n ≥ 1.
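To see the reduced claim numerically (just an illustration, not the proof): after taking log base 2 of both sides, n ≤ 2^(n/4) becomes lg n ≤ n/4, and the ratio lg n / (n/4) reaches 1 exactly at n = 16 and shrinks from there on:

import math

# After taking log base 2 of both sides, n <= 2**(n/4) becomes log2(n) <= n/4.
# The ratio equals 1 at n = 16 and decreases for larger n.
for n in (4, 8, 16, 32, 64, 128):
    print(f"n={n}: log2(n) / (n/4) = {math.log2(n) / (n / 4):.3f}")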
It is easy to prove that the inequality holds for n >= 16 using induction; no calculus is required.
First, for n = 16 you have 16^4 = 2^16.
If the inequality holds for n = k, then for n = k + 1 you have (k+1)^4 = (####)·k^4 < 2k^4 ≤ 2·2^k = 2^(k+1).
QED.
Since this is homework, I'll leave the crucial step, finding what goes in place of ####, to the reader.
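If you want to sanity-check the result without spoiling the missing step, you can verify the base case and the final inequality directly in Python (a check, not a proof; the upper limit 1000 is arbitrary):

# Verify the base case 16**4 == 2**16 and that n**4 <= 2**n
# holds for every n from 16 up to a finite bound.
assert 16 ** 4 == 2 ** 16
assert all(n ** 4 <= 2 ** n for n in range(16, 1001))
print("n**4 <= 2**n holds for all n in [16, 1000]")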

Algorithm for Solving a Linear Combination? [closed]

I have run into the following problem that I need to solve in a project that I'm working on:
Given some number of vectors v_i (in the math sense), and a target vector H, compute a linear combination of the vectors v_i that most closely matches the target vector H, with the constraint that the coefficients must be in [0, 1].
I do not know much about what kind of algorithms / math should be used to approach such a problem. Any prods in the right general direction would be much appreciated!
It's a constrained least-squares problem. Basically you want to solve the optimization problem:
argmin_x ||Ax - H||
s.t. 0 <= x_j <= 1 for all j
where x = (x_1, ..., x_j, ..., x_n) consists of the coefficients you are seeking, and each column of A corresponds to one of the vectors v_i.
Assuming that you want to solve it in the least-squares sense, you have a quadratic programming problem. For example, say that your set of vectors is
x1 = [1 2 3]'    x2 = [3 2 1]'
and your target vector is
H = [1 -1 1]'
Then you can create the matrix whose columns are your vectors:
A = [1 3;
2 2;
3 1]
and the thing you are trying to minimize is
norm(A*x - H)^2 = (A*x - H)' * (A*x - H) = x' * (A'*A) * x - (2*H'*A) * x + const
Since quadprog minimizes (1/2)*x'*B*x + C'*x, define
B = 2 * A' * A
C = -2 * A' * H
and you have a problem that can be solved optimally by Matlab's quadprog function
quadprog(B, C, [], [], [], [], [0; 0], [1; 1])
ans =
    0.0833
    0.0833
so the optimal solution in this case is
1/12 * x1 + 1/12 * x2 = [1/3, 1/3, 1/3]'
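If you are not in Matlab, the same bounded least-squares problem can be solved in Python; here is a sketch using scipy.optimize.lsq_linear (SciPy's bounded linear least-squares solver), with the same A and H as above:

import numpy as np
from scipy.optimize import lsq_linear

# Columns of A are the vectors v_i; H is the target vector.
A = np.array([[1.0, 3.0],
              [2.0, 2.0],
              [3.0, 1.0]])
H = np.array([1.0, -1.0, 1.0])

# Bounded least squares: minimize ||A @ x - H|| subject to 0 <= x_j <= 1.
res = lsq_linear(A, H, bounds=(0.0, 1.0))
print(res.x)       # the coefficients of the linear combination
print(A @ res.x)   # the closest achievable vector

Here the box constraints turn out to be inactive, so the result coincides with the ordinary least-squares solution [1/12, 1/12] from the quadprog example.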
This is a combinatorial optimization problem. Problems of this kind are NP-hard. But I guess for the binary case there should be polynomial algorithms that can solve it, or there may be some relaxation that gives an approximate solution. Some googling on "integer programming" may help.

Proving big O of statement [closed]

I am having a hard time proving that n^k is O(2^n) for all k. I tried taking log2 of both sides and got k * lg n = n, but this is wrong. I am not sure how else I can prove this.
To show that n^k is O(2^n), note that
n^k = (2^(lg n))^k = 2^(k lg n)
So now you want to find an n0 and c such that for all n ≥ n0,
2^(k lg n) ≤ c · 2^n
Now, let's let c = 1 and then consider what happens when n = 2^m for some m. If we do this, we get
2^(k lg n) ≤ c · 2^n = 2^n
2^(k lg 2^m) ≤ 2^(2^m)
2^(km) ≤ 2^(2^m)
And, since 2^x is a monotonically increasing function, this is equivalent to
km ≤ 2^m
Now, let's finish things off. Suppose that we let m = max{k, 4}, so k ≤ m. Thus we have that
km ≤ m^2
We also have that
m^2 ≤ 2^m
since m^2 ≤ 2^m for any m ≥ 4, and our choice m = max{k, 4} ensures that m ≥ 4. Combining these, we get that
km ≤ 2^m
which is equivalent to what we wanted to show above. Consequently, if we pick any n ≥ 2^m = 2^max{4, k}, it will be true that n^k ≤ 2^n. Thus, by the formal definition of big-O notation, we get that n^k = O(2^n).
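As a quick numerical sanity check of the threshold derived above (a check, not part of the proof; the sampled values of k and n are arbitrary):

# Check n**k <= 2**n at the threshold n0 = 2**max(4, k) and a bit beyond,
# for a few sample values of k.
for k in (1, 2, 5, 10):
    n0 = 2 ** max(4, k)
    for n in (n0, n0 + 1, 2 * n0):
        assert n ** k <= 2 ** n, (k, n)
print("n**k <= 2**n verified at and beyond n0 = 2**max(4, k) for the sampled k")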
I think this math is right; please let me know if I'm wrong!
Hope this helps!
I can't comment yet, so I will make this an answer.
Instead of reducing the equation like you have been trying to do, you should try to find an n0 and a M that satisfy the formal definition of big O notation found here: http://en.wikipedia.org/wiki/Big_O_notation#Formal_definition
Something along the lines of n0 = M = k might work (I haven't written it out, so maybe that doesn't work; that's just to give you an idea).
