Is it true that f(n) = Θ(f(n))? - reflexivity

Can you prove using reflexivity that f(n) equals big Theta of f(n)? It seems straightforward when thinking about it, because f(n) is bounded above and below by itself. But how would I write this down? And does this apply to big Omega and big O as well?

I believe what you are intending to ask (w.r.t. #emory's answer) is something along the lines of:
"For some function f(n), is it true that f ∈ ϴ(f(n))?"
If you start from the formal definition of Big-ϴ notation, it is quite apparent that this holds.
f ∈ ϴ(g(n))
⇨ For some positive constants c1, c2, and n0, the following holds:
c1 · |g(n)| ≤ |f(n)| ≤ c2 · |g(n)|, for all n ≥ n0 (+)
Let f(n) be some arbitrary real-valued function. Set g(n) = f(n) and choose, e.g., c1=0.5, c2=2, and n0 = 1. Then, naturally, (+) holds:
0.5 · |f(n)| ≤ |f(n)| ≤ 2 · |f(n)|, for all n ≥ 1
Hence, f ∈ ϴ(f(n)) holds.
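As a quick sanity check, the chosen constants can also be verified numerically — a minimal sketch, using f(n) = n log n purely as an assumed example function (any real-valued f would do):

```python
import math

def f(n):
    # An arbitrary example function; the argument works for any real-valued f.
    return n * math.log(n)

c1, c2, n0 = 0.5, 2, 1

# Verify c1*|f(n)| <= |f(n)| <= c2*|f(n)| for a sample of n >= n0.
ok = all(c1 * abs(f(n)) <= abs(f(n)) <= c2 * abs(f(n))
         for n in range(n0, 1000))
print(ok)  # True: these constants witness f in Theta(f(n))
```

Of course, the inequality holds trivially for every n, which is exactly the point: any c1 ≤ 1 ≤ c2 works.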

No, we cannot, because it is not true as written: ϴ(f(n)) is a set. f(n) is a member of that set, and f(n) + 1 is also a member of that set, so the "=" here is really an abuse of notation for "∈".


How to use the "THE" syntax in Isabelle/HOL?

I am trying to learn how to use the THE syntax in Isabelle/HOL (2020). In the tutorial main.pdf, there is:
The basic logic: x = y, True, False, ¬ P, P ∧ Q, P ∨ Q, P −→ Q, ∀ x. P,
∃ x. P, ∃!x. P, THE x. P.
I can understand what the others mean, but not the last one, "THE x. P". My best guess is "the (perhaps unique) x that satisfies property P". So I tried to state a toy lemma as follows:
lemma "0 = THE x::nat. (x ≥ 0 ∧ x ≤ 0)"
, which is meant to say that the x that is both ≥ 0 and ≤ 0 is 0.
But I get an error in Isabelle/jEdit with a highlight on the "THE" word.
I tried to search with the keywords Isabelle and "THE", but obviously the word "THE" is ignored by search engines. Hence the question here.
Can someone help explain the meaning and use of the "THE" syntax, hopefully with the example here?
You need more parentheses.
lemma "0 = (THE x::nat. (x ≥ 0 ∧ x ≤ 0))"
(*the proof*)
using theI[of ‹λx::nat. (x ≥ 0 ∧ x ≤ 0)› 0]
by auto
SOME (resp. THE) is (a variant of) Hilbert's epsilon operator: it returns an element (the element) satisfying a given property. If no such element exists (or, for THE, if it is not unique), an underspecified element is returned.
SOME and THE are not executable. They are rarely useful for beginners.
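For intuition only, THE behaves like a definite-description operator. A rough Python analogue might look like the sketch below — note this is not Isabelle's semantics: in HOL the result of a failed description is an underspecified value, whereas this sketch raises an error instead.

```python
def the(pred, domain):
    """Return the unique element of `domain` satisfying `pred`.

    Unlike Isabelle's THE, which yields an unspecified value when the
    description fails, this illustrative sketch raises an error.
    """
    matches = [x for x in domain if pred(x)]
    if len(matches) != 1:
        raise ValueError("description is not uniquely satisfied")
    return matches[0]

# Mirrors the toy lemma: the nat x with x >= 0 and x <= 0 is 0.
print(the(lambda x: x >= 0 and x <= 0, range(10)))  # 0
```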

Functions f not in O(g) and g not in O(f)

The question is:
Show or disprove that for two functions f,g, if f is not in O(g) then g is in O(f).
My counterexample:
Let f(n) = n^2 if n is even, and f(n) = n^4 if n is odd.
Let g(n) be g(n) = n^3
This is an example for f not in O(g) and g not in O(f).
Is my example wrong? If so, why?
Do you have any other examples?
Your counterexample works. A proof might look like this:
Suppose f were O(g). Then there is a positive constant c and an n0 such that for n >= n0, f(n) <= c * g(n). Let n' be an odd integer greater than or equal to n0. Then we have n'^4 <= c * n'^3. Dividing both sides by n'^3 gives n' <= c. However, this cannot hold for all odd n' >= n0, since odd integers grow without bound while c is fixed — a contradiction.
The proof the other way around (that g is not in O(f)) is similar, except you take n' even and divide both sides by n'^2.
I think the kind of counterexample you identified is a good one for this; a function that oscillates by an asymptotically increasing amount and a function that goes somewhere in the middle.
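The oscillation can also be seen numerically — a quick sketch using the f and g from the question, showing that each ratio is unbounded along its own subsequence:

```python
def f(n):
    # Oscillates between n^2 (even n) and n^4 (odd n).
    return n**2 if n % 2 == 0 else n**4

def g(n):
    return n**3

# f(n)/g(n) = n along odd n (so f is not O(g)),
# g(n)/f(n) = n along even n (so g is not O(f)).
odd_ratios = [f(n) / g(n) for n in range(1, 100, 2)]
even_ratios = [g(n) / f(n) for n in range(2, 100, 2)]
print(max(odd_ratios), max(even_ratios))  # 99.0 98.0
```

Both maxima keep growing as the range is extended, which is exactly why no constant c can cap either ratio.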

Checking complexity (Big O notation)

In a job interview I got a question:
is it true that 40^n is O(2^n)? I said yes, because only the exponent counts and the constant doesn't matter. Then I was asked whether (40n)^2 is O(n^2); here I feel like no, it's not, because the differences for successive n will be huge, but I can't formally prove it. What is the answer to both of these that won't leave any doubts?
is it true that 40^n is O(2^n)? I said yes because only the exponent counts and the constant doesn't matter.
That's a big shortcut, and it doesn't work here. For 40^n to be in O(2^n), there would have to be a pair of constants c and n0 such that 40^n <= c * 2^n for all n >= n0. But there isn't: if you try to solve for c, it turns out c would have to be at least (40/2)^n = 20^n, which is not a constant. The base of an exponential cannot be ignored like that.
Then I was asked whether (40n)^2 is O(n^2)
If you work out the square, you get 1600 n^2. Now there is a solution in which c and n0 are constants, for example c = 1600, n0 = 1. So yes, (40n)^2 is an element of O(n^2).
Use the definition of Big O:
f(x) ∈ O(g(x)) if and only if |f(x)| <= c · g(x) for all x >= x0, for some constants c, x0
1) 40^n ∉ O(2^n): there is no constant c and choice of x0 such that 40^x <= c · 2^x for all x >= x0
2) (40x)^2 ∈ O(x^2): choose c = 1600 and x0 arbitrary: 1600 x^2 <= 1600 x^2 for all x >= x0
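Both claims can be illustrated numerically — a sketch: the would-be "constant" in the first case is 20^n, which grows without bound, while the ratio in the second case is fixed at 1600 for every n:

```python
# The would-be constant c = 40^n / 2^n = 20^n grows without bound,
# so no fixed c can satisfy 40^n <= c * 2^n for all large n.
for n in (1, 5, 10):
    print(n, 40**n // 2**n)  # 20^n: 20, 3200000, 10240000000000

# By contrast, (40n)^2 / n^2 equals 1600 for every n >= 1,
# so c = 1600, n0 = 1 witnesses (40n)^2 in O(n^2).
ratios = {(40 * n)**2 // n**2 for n in range(1, 100)}
print(ratios)  # {1600}
```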

Finding the upper bound of a mathematical function (function analysis)

I am trying to understand Big-O notation through a book I have, and it covers Big-O using functions, although I am a bit confused. The book writes O(g(n)), where g(n) is an upper bound of f(n). So I understand that means g(n) gives the maximum rate of growth for f(n) at larger values of n,
and that there exists an n_0 beyond which cg(n) (where c is some constant) bounds f(n).
But what I am confused about is these examples of finding Big O for mathematical functions.
The book says: find the upper bound for f(n) = n^4 + 100n^2 + 50.
They then state that n^4 + 100n^2 + 50 <= 2n^4 (I'm unsure where the 2n^4 comes from),
and then they somehow find n_0 = 11 and c = 2. I understand why the big O is O(n^4), but I am confused about the rest.
This is all discouraging, as I don't understand it, but I feel this is an important topic that I must understand.
If any one is curious the book is Data Structures and Algorithms Made Easy by Narasimha Karumanchi
Not sure if this post belongs here or in the math board.
Preparations
First, let's state, loosely, the definition of f being in O(g(n)) (note: O(g(n)) is a set of functions, so to be precise, we say that f is in O(...), rather than f(n) being in O(...)).
If a function f(n) is in O(g(n)), then c · g(n) is an upper bound on
f(n), for some constant c such that f(n) is always ≤ c · g(n),
for large enough n (i.e. , n ≥ n0 for some constant n0).
Hence, to show that f is in O(g(n)), we need to find a pair of constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
but this pair is not unique. I.e., the problem of finding constants (c, n0) such that (+) holds is degenerate: if any such pair of constants exists, there exist infinitely many different such pairs.
Showing that f ∈ O(n^4)
Now, let's proceed and look at the example that confused you:
Find an upper asymptotic bound for the function
f(n) = n^4 + 100n^2 + 50 (*)
One straightforward approach is to bound the lower-order terms in (*) by the highest-order term.
Hence, we see if we can find a lower bound on n such that the following holds
100n^2 + 50 ≤ n^4, for all n ≥ ???, (i)
We can easily find when equality holds in (i) by solving n^4 - 100n^2 - 50 = 0. Substituting m = n^2 (m > 0):
m^2 - 100m - 50 = 0
(m - 50)^2 - 50^2 - 50 = 0
(m - 50)^2 = 2550
m = 50 + sqrt(2550) ≈ 100.5 (the only positive root)
⇒ n = sqrt(m) ≈ 10.025
Hence, (i) holds for n ≳ 10.025, but we'd much rather present this bound on n with a neat integer value, hence rounding up to 11:
100n^2 + 50 ≤ n^4, for all n ≥ 11, (ii)
From (ii) it's apparent that the following holds
f(n) = n^4 + 100n^2 + 50 ≤ n^4 + n^4 = 2 · n^4, for all n ≥ 11, (iii)
And this relation is exactly (+) with c = 2, n0 = 11, and g(n) = n^4; hence we've shown that f ∈ O(n^4). Note again, however, that the choice of constants c and n0 is one of convenience and is not unique. Since we've shown that (+) holds for one pair of constants (c, n0), it holds for infinitely many different such pairs (e.g., it naturally holds for c = 10 and n0 = 20, and so on).
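The constants can be checked numerically — a sketch confirming that c = 2, n0 = 11 works, and that n0 = 10 would be just too small:

```python
def f(n):
    return n**4 + 100 * n**2 + 50

# (iii): f(n) <= 2 * n^4 for all n >= 11, checked over a sample range.
print(all(f(n) <= 2 * n**4 for n in range(11, 10_000)))  # True

# n = 10 narrowly fails (20050 > 20000), which is why n0 is rounded up to 11.
print(f(10), 2 * 10**4)  # 20050 20000
```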

Why is an + b = O(n^2)?

I need to prove that an + b = O(n^2) using the formal definition of big-O notation. I have searched several textbooks I own on discrete mathematics, as well as several online sources, for any examples or theorems related to this proof, with no good results. I am not looking for a direct solution, but perhaps the right methods or paradigms to solve the proof.
Can anyone point me in the right direction?
Here's a hint: n^r ≤ n^s for all n ≥ 1 if r ≤ s. Therefore, assuming a, b ≥ 0:
an + b ≤ an^2 + bn^2 = (a + b)n^2 if n ≥ 1
From this, can you see what choice of n0 and c you would pick to show that an + b ≤ cn^2 whenever n ≥ n0? Can you see how you could also use this to prove that an + b = O(n)?
Hope this helps!
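For a concrete check of the hint, take assumed positive coefficients a = 3 and b = 5 (arbitrary choices for illustration); the constants c = a + b and n0 = 1 then work:

```python
a, b = 3, 5  # arbitrary positive coefficients, chosen for illustration
c, n0 = a + b, 1

# The hint's bound: an + b <= (a + b) * n^2 for all n >= 1.
holds = all(a * n + b <= c * n**2 for n in range(n0, 10_000))
print(holds)  # True
```

Note the bound is tight at n = 1 (3·1 + 5 = 8 = 8·1^2), which is why n0 = 1 is exactly the right starting point here.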
