Asymptotic bounds and Big O notation

Suppose we have two monotonically increasing functions f, g such that f(n) = Ω(n) and f(g(n)) = O(n). Can I conclude that g(n) = O(n)?
I think this is a false claim, and I've been trying to construct a counterexample to show it, but after many attempts I'm starting to think otherwise.
Can you please provide a counterexample if the claim is false, or a proof if it's correct?

I believe this claim is true. Here's a proof.
Suppose that f(n) = Ω(n). That means that there are constants c, n0 such that
f(n) ≥ cn for any n ≥ n0. (1)
Similarly, since f(g(n)) = O(n), we know that there are constants d, n1 such that
f(g(n)) ≤ dn for any n ≥ n1. (2)
Now, there are two options. The first is that g(n) = O(1), in which case we're done because g(n) is then O(n). The second case is that g(n) ≠ O(1), in which case g grows without bound. That means that there is an n2 such that g(n2) ≥ n0 (g grows without bound, so it eventually overtakes n0) and n2 ≥ n1 (just pick a big n2).
Now, pick any n ≥ n2. Since n ≥ n2, we have that g(n) ≥ g(n2) ≥ n0 because g is monotone increasing, and therefore by (1) we see that
f(g(n)) ≥ cg(n).
Since n ≥ n2 ≥ n1, we can combine this inequality with equation (2) to see that
dn ≥ f(g(n)) ≥ cg(n).
so, in particular, we have that
g(n) ≤ (d / c)n
for all n ≥ n2, so g(n) = O(n).
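As a quick sanity check (separate from the proof itself), here is a small Python sketch with example functions of my own choosing, f(n) = 2n and g(n) = ⌈n/3⌉, verifying the derived bound g(n) ≤ (d/c)·n:

```python
import math

def f(n): return 2 * n             # f(n) = Omega(n): f(n) >= c*n with c = 2
def g(n): return math.ceil(n / 3)  # monotone increasing

c, d = 2, 1  # f(g(n)) = 2*ceil(n/3) <= 1*n once n >= 6, so (2) holds with d = 1

for n in range(6, 1_000_000, 97):
    assert f(n) >= c * n           # hypothesis (1)
    assert f(g(n)) <= d * n        # hypothesis (2)
    assert g(n) <= (d / c) * n     # conclusion: g(n) <= (d/c)*n
print("g(n) <= (d/c)*n held on every sampled n")
```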

Related

Big O as an exponent on both sides?

Let's say we have an expression like
2^f(n) = O(2^g(n))
I don't understand this expression. I know what f(n) = O(n) means: the left side is asymptotically bounded above by n.
O being the Big O notation.
Basically it means 2^g(n) is an asymptotic upper bound for 2^f(n).
Now one can think this is the same as f(n) ∈ O(g(n)), but this is only correct in one direction.
2^f(n) ∈ O(2^g(n)) ⇒ f(n) ∈ O(g(n))
But the other way around is not correct.
E.g.:
f(n) = 2n, g(n) = n, so
2n ∈ O(n) holds, but 2^(2n) = 4^n ∉ O(2^n).
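You can also see the failure numerically: the ratio 4^n / 2^n is itself 2^n, which rules out any constant c with 4^n ≤ c·2^n. A tiny Python check:

```python
# The ratio 4**n / 2**n equals 2**n, which grows without bound,
# so no constant c can make 4**n <= c * 2**n hold for all large n.
for n in [1, 10, 20, 40]:
    print(n, 4**n // 2**n)   # prints 2, 1024, 1048576, 1099511627776
```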

Big O notation O(n²)

I want to know why this is O(n^2) for 1+2+3+...+n.
For example, 1+2+3+4 = 4·(4+1)/2 = 10, but 4^2 = 16, so how come it's O(n^2)?
In Big-O notation you forget about constant factors.
In your example, S(n) = 1+2+...+n = n·(n+1)/2 is in O(n^2) since you can find a constant c with
S(n) < c · n^2 for all n > n0
(just choose c = 1).
Notice: Big-O notation is an upper bound, i.e. S(n) grows no faster than n^2.
Notice also that S(n) obviously grows no faster than n^3, so it is also in O(n^3).
An additional note:
You can also prove the other direction, that n^2 is in O(S(n)):
n^2 < c·S(n) = c·n·(n+1)/2 holds for any c ≥ 2 and all n ≥ 1
so n^2 is in O(S(n)). This means both functions grow asymptotically at the same rate, which you can write as S(n) ∈ Θ(n^2).
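A quick numeric illustration (my own sketch) of that Θ(n^2) claim: the ratio S(n)/n^2 settles toward the constant 1/2.

```python
def S(n):
    return n * (n + 1) // 2            # closed form for 1 + 2 + ... + n

assert S(100) == sum(range(1, 101))    # closed form matches the direct sum

for n in [10, 1_000, 1_000_000]:
    print(n, S(n) / n**2)              # 0.55, 0.5005, 0.5000005 -> tends to 1/2
```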
When computing Big O, n·(n+1)/2 is treated the same as n^2, since n^2 is the dominant term.
1+2+3+...+n sums to n·(n+1)/2,
which is (n^2+n)/2, and therefore O(n^2).
Big O looks at the dominant term of the expression because, logically, it is the one that accounts for most of the running time (think of it in a computer program).

Recurrence relations and asymptotic complexity

I am trying to understand the asymptotic relationship between f(n) = n^cos n and g(n) = n. I am told that these functions are not related by Big O, little o, Big Omega, little omega, or Theta. Something about the oscillations of cos n? Can I get a little more understanding of this behavior?
When I use L'Hôpital's rule on my calculator, I get undefined.
The function n^cos n is O(n). Since -1 ≤ cos n ≤ 1, the function n^cos n is always bounded between n^(-1) and n^1, so in particular it's always upper-bounded by O(n). However, it's not Ω(n), because for any number n0 and any constant c > 0, you can find an n > n0 where n^cos n < cn. One way to do this is to look for choices of n where cos n is negative; the value n^(-ε), for any ε > 0, will eventually be smaller than cn for any c.
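To see the oscillation concretely, here's a short Python sketch that samples n near odd and even multiples of π, where cos n is close to -1 and +1 respectively:

```python
import math

# n**cos(n) swings between roughly 1/n (cos n near -1) and n (cos n near +1).
for n in [3, 22, 44, 355, 710]:   # 3 ~ pi, 22 ~ 7*pi, 44 ~ 14*pi, 355 ~ 113*pi, ...
    print(n, round(math.cos(n), 5), n ** math.cos(n))
```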
Hope this helps!

Asymptotic complexity constant, why the constant?

Big O notation says that f(n) is an element of O(g(n)) if f(n) ≤ c·g(n) for some constant c.
I have always wondered, and never really understood, why we need this arbitrary constant multiplying the bounding function g(n) to get our bounds.
Also how does one decide what number this constant should be?
The constant itself doesn't characterize the limiting behavior of f(n) compared to g(n).
It is used in the mathematical definition, which requires the existence of a constant M such that |f(x)| ≤ M·|g(x)| for all x beyond some point.
If such a constant exists, then you can state that f(x) is O(g(x)). This is the usual notation when analyzing algorithms: you just don't care about the constant, only about the complexity of the operations themselves. The constant makes the inequality correct by ensuring that M·|g(x)| is an upper bound of f(x).
How to find that constant depends on f(x) and g(x), and it is exactly the mathematical point that must be proved to show that f(x) is O(g(x)), so there's no general rule. Look at this example.
Consider function
f(n) = 4 * n
Doesn't it make sense to call this function O(n), since it grows "as fast" as g(n) = n?
But without the constant in the definition of O, you can't find an n0 such that for all n > n0, f(n) <= n. That's why you need the constant; indeed, from the condition
4 * n <= c * n for all n > n0
you can get n0 == 0, c == 4.
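A minimal Python check of exactly that condition, using the c = 4, n0 = 0 from above:

```python
def f(n):
    return 4 * n

c, n0 = 4, 0
# With the constant, f(n) <= c*n holds for every n > n0 ...
assert all(f(n) <= c * n for n in range(n0 + 1, 10_000))
# ... but without it (c = 1), f(n) <= n already fails at n = 1.
assert not f(1) <= 1
```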

finding a/b mod c

I know this may seem like a math question, but I just saw this in a contest and I really want to know how to solve it.
We have
a (mod c)
and
b (mod c)
and we're looking for the value of the quotient
(a/b) (mod c)
Any ideas?
In the ring of integers modulo C, these expressions are equivalent:
A / B (mod C)
A * (1/B) (mod C)
A * B^(-1) (mod C).
Thus you need to find B^(-1), the multiplicative inverse of B modulo C. You can find it using e.g. the extended Euclidean algorithm.
Note that not every number has a multiplicative inverse for the given modulus.
Specifically, B^(-1) exists if and only if gcd(B, C) = 1 (i.e. B and C are coprime).
See also
Wikipedia/Modular multiplicative inverse
Wikipedia/Extended Euclidean algorithm
Modular multiplicative inverse: Example
Suppose we want to find the multiplicative inverse of 3 modulo 11.
That is, we want to find
x = 3^(-1) (mod 11)
x = 1/3 (mod 11)
3x = 1 (mod 11)
Using the extended Euclidean algorithm, you will find that:
x = 4 (mod 11)
Thus, the modular multiplicative inverse of 3 modulo 11 is 4. In other words:
A / 3 == A * 4 (mod 11)
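If you just want to verify such an inverse numerically, Python's built-in pow accepts a negative exponent together with a modulus (Python 3.8+):

```python
assert pow(3, -1, 11) == 4    # modular inverse of 3 mod 11 (Python 3.8+)
assert (3 * 4) % 11 == 1      # check: 3 * 4 = 12 = 1 (mod 11)
```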
Naive algorithm: brute force search
One way to solve this:
3x = 1 (mod 11)
Is to simply try every x in 0..C-1 (here 0..10) and see if the equation holds true. For a small modulus this algorithm may be acceptable, but the extended Euclidean algorithm is much better asymptotically.
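Here is a sketch of both approaches in Python; the extended Euclidean version follows the standard iterative formulation (variable names are my own):

```python
def inverse_brute_force(b, c):
    """Try every residue 0..c-1; O(c) time."""
    for x in range(c):
        if (b * x) % c == 1:
            return x
    return None                       # no inverse: gcd(b, c) != 1

def inverse_ext_euclid(b, c):
    """Extended Euclidean algorithm; O(log c) time."""
    old_r, r = b, c                   # invariant: old_s*b = old_r (mod c)
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        return None                   # gcd(b, c) != 1, no inverse
    return old_s % c

assert inverse_brute_force(3, 11) == inverse_ext_euclid(3, 11) == 4
```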
There are potentially many answers. When all you have is k = B mod C, then B could be any k + C·N for integer N.
This means B could potentially be very large. So large, in fact, as to make A/B approach zero.
However, that's just one way to respond.
I think it can be written as (but I'm not sure):
(a/b) % c = (a % (b*c)) / b
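For what it's worth, that identity does check out in the case where b divides a exactly; a randomized spot-check of my own (not a proof):

```python
import random

# Spot-check (a/b) % c == (a % (b*c)) / b in the case where b divides a.
for _ in range(10_000):
    b = random.randint(1, 100)
    c = random.randint(1, 100)
    a = b * random.randint(0, 10_000)          # force b | a
    assert (a // b) % c == (a % (b * c)) // b
print("identity held on every sample (with b | a)")
```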
