I want to know why this is O(n²) for 1+2+3+...+n.
For example, 1+2+3+4 = 4·(4+1)/2 = 10, but 4² = 16, so how come it's O(n²)?
In Big-O notation you can ignore constant factors.
In your example, S(n) = 1 + 2 + ... + n = n·(n+1)/2 is in O(n²), since you can find a constant c with
S(n) < c · n² for all n > n₀
(just choose c = 1).
Note: Big-O notation gives an upper bound, i.e. S(n) grows no faster than n².
Note also that S(n) obviously grows no faster than n³ either, so it is also in O(n³).
Additionally:
You can also prove the other direction, namely that n² is in O(S(n)):
n² < c·S(n) = c·n·(n+1)/2 holds for any c ≥ 2 and all n ≥ 1.
So n² is in O(S(n)). This means both functions grow asymptotically at the same rate, which you can write as: S(n) is in Θ(n²).
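To see both bounds in action, here is a minimal Python check of the two inequalities above (an illustration, not a proof):

```python
def S(n):
    # Closed form of 1 + 2 + ... + n
    return n * (n + 1) // 2

for n in range(1, 1000):
    assert S(n) <= 1 * n**2   # upper bound with c = 1:  S(n) is in O(n^2)
    assert n**2 <= 2 * S(n)   # lower bound with c = 2:  n^2 is in O(S(n))
print("both bounds hold for n = 1..999")
```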
When computing Big-O, n·(n+1)/2 is the same as n², since n² is its highest-order term.
1 + 2 + 3 + ... + n sums to n·(n+1)/2,
which means it is (n² + n)/2 and therefore O(n²).
Complexity (Big-O) looks at the largest term in the expression because, logically, it is the one that takes the most time (think of it in terms of a computer program).
a^b mod c
I know how to solve this when c is prime; what is the approach when c is not prime?
Any mathematical approach is fine.
Assuming the factorization of c is known, you can compute Euler's totient phi(c), and then, provided a and c have no common divisors,
a^b == a^(b mod phi(c)) mod c
Thus if b is, as in your original edit, some very, very large Fibonacci number, you compute that number with the algorithm of your choice while carrying out all arithmetic operations mod phi(c). This at once ensures that the exponent stays inside the same number format as used for c and that the complexity of the exponentiation is reduced.
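As a sketch of how this might look in code, here is a small Python example; the helpers phi, fib_mod, and pow_fib_mod are hypothetical names of mine, and the whole thing assumes gcd(a, c) = 1 as required above:

```python
from math import gcd

def phi(c):
    # Euler's totient via trial-division factorization (fine for small c).
    result, m, p = c, c, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def fib_mod(k, m):
    # k-th Fibonacci number modulo m, computed iteratively.
    a, b = 0, 1
    for _ in range(k):
        a, b = b, (a + b) % m
    return a

def pow_fib_mod(a, k, c):
    # a^Fib(k) mod c, reducing the huge exponent mod phi(c).
    # Valid only when gcd(a, c) == 1 (Euler's theorem).
    assert gcd(a, c) == 1
    return pow(a, fib_mod(k, phi(c)), c)

# Example with a composite modulus: 7^Fib(1000) mod 143, where 143 = 11 * 13.
print(pow_fib_mod(7, 1000, 143))
```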
I am trying to understand the asymptotic relationship between f(n) = n^(cos n) and g(n) = n. I am told that this pair is related by none of Big O, little o, Big Omega, little omega, or Theta; something about the oscillations of cos n? Can I get a little more insight into this behavior?
When I use L'Hôpital's rule on my calculator, I get undefined.
The function n^(cos n) is O(n). Since -1 ≤ cos n ≤ 1, the function n^(cos n) is always bounded between n^(-1) and n^1, so in particular it is always upper-bounded by O(n). However, it is not Ω(n), because for any number n₀ and any constant c, you can find an n > n₀ where n^(cos n) < cn. One way to do this is to look at choices of n where cos n ≤ -ε for some fixed ε > 0 (these occur infinitely often): for such n, n^(cos n) ≤ n^(-ε), which eventually falls below cn for any c.
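To make the lower-bound failure concrete, here is a quick Python sketch that finds integers n where cos n is close to -1; at those points n^(cos n) is roughly 1/n, while c·n keeps growing:

```python
import math

# Integers n where cos(n) < -0.9: there n^cos(n) is close to 1/n,
# which falls below c*n for any fixed c > 0 once n is large enough.
for n in range(1, 200):
    if math.cos(n) < -0.9:
        print(n, n ** math.cos(n))
```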
Hope this helps!
Big-O notation says that g(n) is in O(f(n)) if g(n) ≤ c·f(n) for some constant c.
I have always wondered, and never really understood, why we need this arbitrary constant multiplying the bounding function f(n) to get our bounds.
Also, how does one decide what number this constant should be?
The constant itself doesn't characterize the limiting behavior of f(n) compared to g(n).
It is used in the mathematical definition, which requires the existence of a constant M such that
|f(x)| <= M * |g(x)| for all x > x₀
If such a constant exists, then you can state that f(x) is O(g(x)). This is the usual notation when analyzing algorithms: you just don't care about the constant, only about the complexity of the operations themselves. The constant makes the inequality correct by ensuring that M * |g(x)| is an upper bound of f(x).
How to find that constant depends on f(x) and g(x), and it is exactly the mathematical point that must be proved to show that f(x) is O(g(x)), so there is no general rule. Look at this example.
Consider the function
f(n) = 4 * n
Doesn't it make sense to call this function O(n), since it grows "as fast" as g(n) = n?
But without the constant in the definition of O, you can't find an n0 such that for all n > n0, f(n) <= n. That's why you need the constant, and indeed from the condition
4 * n <= c * n for all n > n0
you can get n0 == 0, c == 4.
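A tiny Python check of this choice of constants (c = 4, n0 = 0), just to illustrate the definition:

```python
def f(n): return 4 * n
def g(n): return n

c, n0 = 4, 0
# The definition only requires the bound to hold for n > n0.
assert all(f(n) <= c * g(n) for n in range(n0 + 1, 10_000))
print("f(n) <= 4 * g(n) for all tested n, so f is O(n)")
```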
I know this may seem like a math question, but I just saw this in a contest and I really want to know how to solve it.
We have
a (mod c)
and
b (mod c)
and we're looking for the value of the quotient
(a/b) (mod c)
Any ideas?
In the ring of integers modulo C, these expressions are equivalent:
A / B (mod C)
A * (1/B) (mod C)
A * B⁻¹ (mod C)
Thus you need to find B⁻¹, the multiplicative inverse of B modulo C. You can find it using e.g. the extended Euclidean algorithm.
Note that not every number has a multiplicative inverse for the given modulus.
Specifically, B⁻¹ exists if and only if gcd(B, C) = 1 (i.e. B and C are coprime).
See also
Wikipedia/Modular multiplicative inverse
Wikipedia/Extended Euclidean algorithm
Modular multiplicative inverse: Example
Suppose we want to find the multiplicative inverse of 3 modulo 11.
That is, we want to find
x = 3⁻¹ (mod 11)
x = 1/3 (mod 11)
3x = 1 (mod 11)
Using the extended Euclidean algorithm, you will find that:
x = 4 (mod 11)
Thus, the modular multiplicative inverse of 3 modulo 11 is 4. In other words:
A / 3 == A * 4 (mod 11)
Naive algorithm: brute force search
One way to solve this:
3x = 1 (mod 11)
is to simply try x for all of the values 0..10 and see if the equation holds true. For a small modulus this algorithm may be acceptable, but the extended Euclidean algorithm is much better asymptotically.
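Here is a small Python sketch of both approaches (the helper names extended_gcd and modinv are mine), reproducing the 3⁻¹ mod 11 example above:

```python
def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(b, c):
    # Multiplicative inverse of b modulo c; exists iff gcd(b, c) == 1.
    g, x, _ = extended_gcd(b % c, c)
    if g != 1:
        raise ValueError("b has no inverse modulo c")
    return x % c

print(modinv(3, 11))  # -> 4, matching the worked example

# Brute-force search for comparison (fine for small moduli):
print(next(x for x in range(11) if (3 * x) % 11 == 1))  # -> 4

# Modular division then becomes: A / B (mod C) == (A * modinv(B, C)) % C
```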
There are potentially many answers. When all you have is k = B mod C, then B could be any k + C·N for integer N.
This means B could potentially be very large; so large, in fact, as to make A/B approach zero.
However, that's just one way to read the question.
I think it can be written as (but I'm not sure):
(a/b) % c = (a % (b*c)) / b
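For what it's worth, a quick numeric check (sketch below) suggests the identity does hold whenever b divides a exactly, with integer division throughout:

```python
import random

# Test (a/b) % c == (a % (b*c)) / b on random cases where b divides a.
for _ in range(10_000):
    b = random.randint(1, 50)
    c = random.randint(1, 50)
    a = b * random.randint(0, 1000)  # guarantee that b divides a
    assert (a // b) % c == (a % (b * c)) // b
print("identity holds in all tested cases where b divides a")
```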
For binary-search-tree-like data structures, I see the Big-O notation typically written as O(log n). With a lowercase 'l' in log, does this imply log base e, i.e. the natural logarithm? Sorry for the simple question, but I've always had trouble distinguishing between the different implied logarithms.
Once expressed in big-O() notation, both are correct. However, during the derivation of the O() expression, in the case of binary search, only log₂ is correct. I assume this distinction was the intuitive inspiration for your question in the first place.
Also, as a matter of opinion, writing O(log₂ N) is better for your example, because it better communicates the derivation of the algorithm's run-time.
In big-O() notation, constant factors are removed, and converting from one logarithm base to another amounts to multiplying by a constant factor.
So O(log N) is equivalent to O(log₂ N).
However, if you can easily typeset log₂ N in your answer, doing so is more pedagogical. In the case of binary-tree searching, you are correct that log₂ N is introduced during the derivation of the big-O() runtime.
Before expressing the result in big-O() notation, the difference is very important. When deriving the expression to be communicated via big-O notation, it would be incorrect for this example to use any logarithm other than log₂ N. As soon as that expression is used to communicate a worst-case runtime via big-O() notation, it no longer matters which logarithm is used.
Big-O notation is not affected by the base of the logarithm: because all logarithms in different bases are related by a constant factor, O(ln n) is equivalent to O(log n).
Both are correct. Think about this:
log₂(n) = log(n)/log(2) = O(log(n))
log₁₀(n) = log(n)/log(10) = O(log(n))
logₑ(n) = log(n)/log(e) = O(log(n))
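The same change-of-base identity is easy to verify numerically in Python:

```python
import math

n = 1_000_000
print(math.log2(n))                   # log base 2 of n
print(math.log10(n) / math.log10(2))  # the same value via base 10
print(math.log(n) / math.log(2))      # the same value via base e
```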
It doesn't really matter what base it is, since big-O notation usually keeps only the asymptotically dominant term in n, so constant coefficients drop away. A different logarithm base is equivalent to a constant coefficient, so it is superfluous.
That said, I would probably assume log base 2.
Yes, when talking about big-O notation the base does not matter. However, computationally, when faced with a real search problem, it does matter.
When developing an intuition about tree structures, it is helpful to understand that a binary search tree can be searched in O(log n) time, because that is the height of the tree: a balanced binary tree with n nodes has depth O(log₂ n). If each node has three children, the tree can still be searched in O(log n) time, but with a base-3 logarithm. Computationally, the number of children each node has can have a big impact on performance.
Enjoy!
Paul
First you must understand what it means for a function f(n) to be O(g(n)).
The formal definition is: a function f(n) is said to be O(g(n)) iff |f(n)| <= C * |g(n)| whenever n > k, where C and k are constants.
So let f(n) = log_a(n), where a > 1, and g(n) = log_b(n), where b > 1.
NOTE: this means a and b can be any values greater than 1, for example a = 100 and b = 3.
Now we get the following: log_a(n) is said to be O(log_b(n)) iff |log_a(n)| <= C * |log_b(n)| whenever n > k.
Choose k = 0 and C = log_a(b).
Now our inequality looks like the following: |log_a(n)| <= log_a(b) * |log_b(n)| whenever n > 0.
Notice that we can manipulate the right-hand side:
log_a(b) * |log_b(n)| = |log_b(n)| * log_a(b) = |log_a(b^(log_b(n)))| = |log_a(n)|
Now our inequality looks like the following: |log_a(n)| <= |log_a(n)| whenever n > 0.
This is always true no matter what the values of n, a, and b are, apart from the restrictions a, b > 1 and n > 0.
So log_a(n) is O(log_b(n)), and since a and b don't matter, we can simply omit them.
You can see a YouTube video on it here: https://www.youtube.com/watch?v=MY-VCrQCaVw
You can read an article on it here: https://medium.com/#randerson112358/omitting-bases-in-logs-in-big-o-a619a46740ca
Technically the base doesn't matter, but you can generally think of it as base-2.