How can I express this in Big O notation?

How can I express 2^(3^(log n − 1)) in Big O notation?
Is it 2^n or 2^(log n)?

This is one case where the base of the logarithm matters. So let's say the base of your logarithm is a. You can change it to base 3 by
log_a n = log₃ n / log₃ a
Now you can simplify the exponent:
3^(log_a n − 1) = 3^(log₃ n / log₃ a) / 3 = n^(1/log₃ a) / 3
So in total you get
2^(n^(1/log₃ a) / 3) = (2^(n^(1/log₃ a)))^(1/3) ∈ O(2^(n^(1/log₃ a)))
If a = 3, the complexity would be O(2^n). If a = 2, the complexity would be O(2^(n^c)), with
c = 1/log₃ 2 = log₂ 3 ≈ 1.5850.
Notice: 2^(n^c) = 2^(nᶜ) ≠ (2^n)^c = 2^(c⋅n). So you cannot simplify the complexity any further.
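As a sanity check, here is a small numeric sketch (my own, not part of the answer) confirming the identity 3^(log_a n − 1) = n^(1/log₃ a) / 3 used above, for the sample base a = 2:

```python
import math

# Numeric check of the identity 3^(log_a(n) - 1) = n^(1/log_3(a)) / 3.
# The base a = 2 and the sample values of n are arbitrary choices.
a = 2.0
for n in [8.0, 64.0, 1024.0]:
    lhs = 3 ** (math.log(n, a) - 1)
    rhs = n ** (1 / math.log(a, 3)) / 3
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```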

Related

Is f(n) = 1000n + 4500lgn + 54n O(n)?

I was given the following question in my test:
Is f(n) = 1000n + 4500lgn + 54n O(n)?
I answered this question by applying the following definition:
Definition of O(n): f(n) is O(n) if there exist positive constants c and k such that 0 ≤ f(n) ≤ c·n for all n ≥ k. If we can show that such constants c and k exist, then the function is O(n); if no such constants exist, then the function grows faster than O(n).
Solution:
0 ≤ 1000n + 4500lgn + 54n ≤ cn
0 ≤ 4000 + 9000 + 216 ≤ 4c when k=4
0 ≤ 3304 ≤ c
0 ≤ 8000 + 13500 + 432 ≤ 8n when n=8>k
0 ≤ 21932 ≤ 8n
0 ≤ 2741.5 ≤ n (last time c=3304 but now it is 2741.5....as n increases, c is not constant!)
Conclusion:
This function is not O(n) - we can't find constant values c and k because they simply don't exist.
Is my solution correct?
0 ≤ 2741.5 ≤ n (last time c=3304 but now it is 2741.5....as n increases, c is not constant!)
The flaw in your solution is that if you stick with the original value of c, the constraint is still satisfied. It is not the actual value of the constants that matters, simply that there exists a pair of constants c and k for which the inequality is satisfied for all n > k.
I don't know what level of rigor is required (by your teachers) in an answer to that question. However, a rigorous solution would require a mathematical proof (from first principles or established theorems) that either c and k do exist¹, or that they cannot exist.
¹ A pair of c and k that you can prove satisfies the constraint for all n > k would be a sufficient proof.
log₂ n < n for all n ≥ 1, so 1000n + 4500·log₂ n + 54n ≤ 1000n + 4500n + 54n.
Just add up the coefficients. For k = 1 and c = 1000 + 4500 + 54 = 5554, f(n) ≤ c*n for all n ≥ k. Therefore f is O(n).
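A quick numeric spot-check (my own addition, not part of the answer) confirms that this choice of constants works over a range of sample values:

```python
import math

# With c = 5554 and k = 1, verify f(n) <= c*n on sample inputs.
def f(n):
    return 1000 * n + 4500 * math.log2(n) + 54 * n

c, k = 5554, 1
assert all(f(n) <= c * n for n in range(k, 100_000))
```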

How can i find a running time of a recurrence relation?

The running time for this recurrence relation is O(n log n). Since I am new to algorithms, how would I show that mathematically?
T(n) = 2⋅T(n/2) + O(n)
T(n) = 2 ( 2⋅T(n/4) + O(n) ) + O(n) // since T(n/2) = 2⋅T(n/4) + O(n)
So far I can see that if I suppose n to be a power of 2, like n = 2^m, then maybe I can show it, but I am not getting the clear picture. Can anyone help me?
If you use the master theorem, you get the result you expected.
If you want to prove this "by hand", you can see it easily by supposing n = 2^m is a power of 2 (as you already said). This leads you to
T(n) = 2⋅T(n/2) + O(n)
= 2⋅(2⋅T(n/4) + O(n/2)) + O(n)
= 4⋅T(n/4) + 2⋅O(n/2) + O(n)
= 4⋅(2⋅T(n/8) + O(n/4)) + 2⋅O(n/2) + O(n)
= 8⋅T(n/8) + 4⋅O(n/4) + 2⋅O(n/2) + O(n)
= ...
= Σ_{k=1,...,m} 2^k ⋅ O(n/2^k)
= Σ_{k=1,...,m} O(n)
= m ⋅ O(n)
Since m = log₂(n), you can write this as O(n log n).
At the end it doesn't matter if n is a power of 2 or not.
To see this, you can think about it as follows: you have an input of size n (which is not a power of 2) and you add more elements to the input until it contains n' = 2^m elements, with m ∈ ℕ and log(n) ≤ m ≤ log(n) + 1, i.e. n' is the smallest power of 2 that is greater than n. Obviously T(n) ≤ T(n') holds and we know T(n') is in
O(n'⋅log(n')) = O(c⋅n⋅log(c⋅n)) = O(n⋅log(n) + n⋅log(c)) = O(n⋅log(n))
where c is a constant between 1 and 2.
You can do the same with the greatest power of 2 that is smaller than n. This leads you to T(n) ≥ T(n''), and we know T(n'') is in
O(n''⋅log(n'')) = O(c⋅n⋅log(c⋅n)) = O(n⋅log(n))
where c is a constant between 1/2 and 1.
In total you get that the complexity of T(n) is bounded by the complexities of T(n'') and T(n'), which are both O(n⋅log(n)), and so T(n) is also in O(n⋅log(n)), even if n is not a power of 2.
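To make the argument concrete, here is a small numeric sketch (my own, not from the answer): instantiate the recurrence as T(n) = 2·T(n/2) + n with T(1) = 1 and check that T(n)/(n·log₂ n) stays bounded, consistent with T(n) ∈ O(n log n):

```python
from functools import lru_cache

# Concrete instance of T(n) = 2*T(n/2) + O(n), taking the O(n) term to be
# exactly n and T(1) = 1. These constants are arbitrary illustrative choices.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^m, T(n) works out to 2^m * (m + 1), so T(n)/(n*log2(n)) is bounded.
ratios = [T(2 ** m) / (2 ** m * m) for m in range(1, 20)]
assert max(ratios) <= 2
```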

How to prove 3n + 2log n = O(n)

How would I be able to prove 3n + 2log n = O(n) using the definition of big-O?
The c is supposedly 6, and the k is 1, but I have no idea how that is found. Any help will be greatly appreciated.
To formally prove this result, you need to find a choice of n0 and c such that
For any n ≥ n0: 3n + 2log n ≤ cn
To start this off, note that if you have any n ≥ 1, then log n < n. Consequently, if you consider any n ≥ 1, you have that
3n + 2log n < 3n + 2n = 5n
Consequently, if you pick n0 = 1 and c = 5, you have that
For any n ≥ n0: 3n + 2log n < 3n + 2n = 5n ≤ cn
And therefore 3n + 2 log n = O(n).
More generally, when given problems like these, try identifying the dominant term (here, the n term) and trying to find some choice of n0 such that the non-dominant terms are overwhelmed by the dominant term. Once you've done this, all that's left to do is choose the right constant c.
Hope this helps!
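A numeric spot-check (my own addition) of the choice n₀ = 1 and c = 5 from this answer:

```python
import math

# Since log(n) < n for n >= 1, the bound 3n + 2*log(n) <= 5n should hold
# for every n >= 1; check it on a range of sample values.
assert all(3 * n + 2 * math.log(n) <= 5 * n for n in range(1, 10_000))
```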
Wild guess (your question is quite unclear): the task is to show that
O(3n + 2log n) = O(n)
Here is how it comes out: n grows faster than log n, and since the complexity is asymptotic, only the fastest-growing term matters, which is n in this case.
You can prove the following if I remember correctly:
if f1(n)=O(g1(n)), f2(n)=O(g2(n)) then f1(n)+f2(n)=O(max{g1(n),g2(n)}).
From there it's pretty straightforward.

How to implement c=m^e mod n for enormous numbers?

I'm trying to figure out how to implement RSA crypto from scratch (just for the intellectual exercise), and I'm stuck on this point:
For encryption, c = m^e mod n
Now, e is normally 65537. m and n are 1024-bit integers (e.g. 128-byte arrays). This is obviously too big for standard methods. How would you implement this?
I've been reading a bit about exponentiation here but it just isn't clicking for me:
Wikipedia-Exponentiation by squaring
This Chapter (see section 14.85)
Thanks.
edit: Also found this: is this more what I should be looking at? Wikipedia - Modular Exponentiation
Exponentiation by squaring:
Let's take an example. You want to find 17^23. Note that 23 is 10111 in binary. Let's try to build it up from left to right.
// a exponent in binary
a = 17 //17^1 1
a = a * a //17^2 10
a = a * a //17^4 100
a = a * 17 //17^5 101
a = a * a //17^10 1010
a = a * 17 //17^11 1011
a = a * a //17^22 10110
a = a * 17 //17^23 10111
When you square, you double the exponent (shift left by 1 bit). When you multiply by m, you add 1 to the exponent.
If you want to reduce modulo n, you can do it after each multiplication (rather than leaving it to the end, which would make the numbers get very large).
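The left-to-right walk above can be sketched in Python (the function name is my own; Python's arbitrary-precision integers stand in for a bignum library):

```python
# Left-to-right square-and-multiply with a reduction mod n after every
# multiplication, so intermediate values never exceed (n-1)^2.
def modexp_left_to_right(m, e, n):
    result = 1
    for bit in bin(e)[2:]:               # exponent bits, most significant first
        result = (result * result) % n   # square: shift the exponent left
        if bit == '1':
            result = (result * m) % n    # multiply: add 1 to the exponent
    return result

# Reproduce the 17^23 walk from the answer, reduced mod an arbitrary modulus.
assert modexp_left_to_right(17, 23, 1_000_003) == pow(17, 23, 1_000_003)
```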
65537 is 10000000000000001 in binary which makes all of this pretty easy. It's basically
a = m
repeat 16 times:
    a = a * a
    a = a mod n
a = a * m
a = a mod n
where of course a, n and m are "big integers". a needs to be at least 2048 bits as it can get as large as (n−1)².
For an efficient algorithm you need to combine the exponentiation by squaring with repeated application of mod after each step.
For odd e this holds:
m^e mod n = (m ⋅ m^(e−1)) mod n
For even e:
m^e mod n = (m^(e/2) mod n)² mod n
With m^1 = m as a base case, this defines a recursive way to do efficient modular exponentiation.
But even with an algorithm like this, because m and n will be very large, you will still need to use a type/library that can handle integers of such sizes.
result = 1
while e > 0:
    if (e & 1) != 0:
        result = result * m
        result = result mod n
    m = m * m
    m = m mod n
    e = e >> 1
return result
This checks bits in the exponent starting with the least significant bit. Each time we move up a bit it corresponds to doubling the power of m - hence we shift e and square m. The result only gets the power of m multiplied in if the exponent has a 1 bit in that position. All multiplications need to be reduced mod n.
As an example, consider m^13. 13 = 1101 in binary, so this is the same as m^8 * m^4 * m. Notice the powers 8, 4, (not 2), 1, which match the bits 1101. And then recall that m^8 = (m^4)^2 and m^4 = (m^2)^2.
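The pseudocode above, written out in Python (my own rendering; Python's built-in integers are already arbitrary precision, so no separate bignum type is needed):

```python
# Right-to-left square-and-multiply: scan the exponent from its least
# significant bit, squaring m at each step and reducing everything mod n.
def modexp(m, e, n):
    result = 1
    while e > 0:
        if e & 1:                  # this bit of the exponent is set
            result = (result * m) % n
        m = (m * m) % n            # move to the next power of m
        e >>= 1
    return result

# The m^13 example from the text, with arbitrary sample values.
assert modexp(7, 13, 101) == pow(7, 13, 101)
```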
If g(x) = x mod 2^k is faster to calculate for your bignum library than f(x) = x mod N for N not divisible by 2, then consider using Montgomery multiplication. When used with modular exponentiation, it avoids having to calculate modulo N at each step, you just need to do the "Montgomeryization" / "un-Montgomeryization" at the beginning and end.

Using the masters method

On my midterm I had the problem:
T(n) = 8T(n/2) + n^3
and I am supposed to find its big theta notation using either the masters or alternative method. So what I did was
a = 8, b = 2, k = 3
log_2(8) = 3 = k
therefore, T(n) is Theta(n^3). I got 1/3 points so I must be wrong. What did I do wrong?
T(n) = aT(n/b) + f(n)
You applied the version when f(n) = O(n^(log_b(a) - e)) for some e > 0.
This is important, you need this to be true for some e > 0.
For f(n) = n^3, b = 2 and a = 8,
n^3 = O(n^(3-e)) is not true for any e > 0.
So you picked the wrong version of the Master theorem.
You need to apply a different version of Master theorem:
if f(n) = Theta ((log n)^k * n^log_b(a)) for some k >= 0,
then
T(n) = Theta((log n)^(k+1) * n^log_b(a))
In your problem, you can apply this case, and that gives T(n) = Theta(n^3 log n).
An alternative way to solve your problem would be:
T(n) = 8 T(n/2) + n^3.
Let g(n) = T(n)/n^3.
Then
n^3 *g(n) = 8 * (n/2)^3 * g(n/2)+ n^3
i.e g(n) = g(n/2) + 1.
This implies g(n) = Theta(log n) and so T(n) = Theta(n^3 log n).
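A numeric sketch (my own, not part of the answer) supports this: instantiate the recurrence as T(n) = 8·T(n/2) + n³ with T(1) = 1 and watch T(n)/(n³·log₂ n) approach a constant, consistent with Theta(n^3 log n):

```python
from functools import lru_cache

# Concrete instance of T(n) = 8*T(n/2) + n^3 with T(1) = 1 (my choice of
# base case). For n = 2^m this works out to T(n) = 8^m * (m + 1).
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 8 * T(n // 2) + n ** 3

ratios = [T(2 ** m) / ((2 ** m) ** 3 * m) for m in range(1, 16)]
assert abs(ratios[-1] - 1.0) < 0.1   # ratio tends to 1 for this instance
```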
