O notation: 2^log(O(n^2)) = 2^O(log(n^2))?

I tried to solve this with logarithm rules:

O(n^2) = 2^(O(log(n^2)))
c*n^2 = 2^(log(n^(2c)))

I'm not sure whether that is true?

No, no, no. You can't just take logarithms.

2^(log(O(n^2))) = 2^(log(c * n^2)) = c * n^2
2^(O(log n^2)) = 2^(c * log n^2) = (2^(log n^2))^c = (n^2)^c

The first is just O(n^2). The second one is n raised to some unknown but bounded power.
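A quick numeric illustration of the gap (my own sketch, assuming base-2 logarithms and a hypothetical constant c = 3): the first expression collapses back to c*n^2, while the second blows up to n^(2c).

    # My own sketch, assuming base-2 logs and a made-up constant c = 3:
    # 2^(log2(c*n^2)) collapses back to c*n^2, but (n^2)^c = n^(2c) does not.
    import math

    c, n = 3, 1000
    print(2 ** math.log2(c * n * n))  # ~3e6, i.e. c*n^2 -- still O(n^2)
    print((n * n) ** c)               # 1e18, i.e. n^6  -- far beyond O(n^2)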

I think this depends on what the equals sign means here. If the equals sign means
"Any function that is of 2log O(n2) is also 2O(log n2)"
Then the claim is true. Let f(n) be some function that's O(n2). This means that there's a c and n0 such that for any n ≥ n0, we know that f(n) ≤ cn2. Therefore, for any n ≥ n0, we know that
2log f(n) ≤ 2log (cn2) = 2(log c + log n2)
The function log c + log n2 is itself O(log n2), so we see that
2log f(n) ≤ 2(log c + log n2) = 2O(log n2)
On the other hand, if the equals sign means
"The class of functions that is 2log O(n2) is the same class of functions that is 2O(log n2)"
then the claim is false. For example, the function n4 is in the second class because it can be written as 22 log n2, but it's not in the first class.
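The counterexample is easy to check numerically (a sketch of mine, again assuming base-2 logs): n^4 has exactly the form 2^(2 log n^2), yet the ratio n^4 / n^2 is unbounded, so n^4 is not O(n^2).

    # My own check, assuming base-2 logs: n^4 == 2^(2*log2(n^2)), so it is of
    # the form 2^O(log n^2); but n^4/n^2 = n^2 is unbounded, so n^4 != O(n^2).
    import math

    for n in (10, 100, 1000):
        assert math.isclose(2 ** (2 * math.log2(n ** 2)), n ** 4)
        print(n ** 4 / n ** 2)  # 100.0, 10000.0, 1000000.0 -- unbounded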
Hope this helps!

Related

Implementation of Pohlig-Hellman unable to solve for large exponents

I'm trying to create an implementation of the Pohlig-Hellman algorithm in order to create a utility to craft / exploit backdoors in implementations of the Diffie-Hellman protocol. This project is inspired by this 2016 white-paper by NCC Group.
I currently have an implementation, here, that works for relatively small exponents – i.e., given a linear congruence, g^x = h (mod n), for some specially-crafted modulus, n = pq, where p and q are prime, my implementation can solve for values of x smaller than min{p, q}.
However, if x is larger than the smallest prime factor of n, then my implementation gives an incorrect solution. I suspect that the issue may not be with my implementation of Pohlig-Hellman itself, but with the arguments I am passing to it. All the code can be found at the link provided above, but I'll copy the relevant snippets here:
#
# Implementation of Pohlig-Hellman algorithm
#
# The `crt` function implements the Chinese Remainder Theorem, and the `pollard` function implements
# Pollard's Rho algorithm for discrete logarithms (see /dph/crt.py and /dph/pollard.py).
#
def pohlig(G, H, P, factors):
    g = [pow(G, divexact(P - 1, f), P) for f in factors]
    h = [pow(H, divexact(P - 1, f), P) for f in factors]
    if Config.verbose:
        x = []
        total = len(factors)
        for i, (gi, hi) in enumerate(zip(g, h), start=1):
            print('Solving discrete logarithm {}/{}...'.format(str(i).rjust(len(str(total))), total))
            result = pollard(gi, hi, P)
            x.append(result)
            print(f'x = 0x{result.digits(16)}')
    else:
        x = [pollard(gi, hi, P) for gi, hi in zip(g, h)]
    return crt(x, factors)
Above is my implementation of Pohlig-Hellman, and below is where I call it to exploit a backdoor in some implementation of the Diffie-Hellman protocol.
def _exp(args):
    g = args.g
    h = args.h
    p_factors = list(map(mpz, args.p_factors.split(',')))
    try:
        p_factors.remove(2)
    except ValueError:
        pass
    q_factors = list(map(mpz, args.q_factors.split(',')))
    try:
        q_factors.remove(2)
    except ValueError:
        pass
    p = 2 * _product(*p_factors) + 1
    q = 2 * _product(*q_factors) + 1
    if Config.verbose:
        print(f'p = 0x{p.digits(16)}')
        print(f'q = 0x{q.digits(16)}')
        print()
        print(f'Compute the discrete logarithm modulo `p`')
        print(f'-----------------------------------------')
    px = pohlig(g % p, h % p, p, p_factors)
    if Config.verbose:
        print()
        print(f'Compute the discrete logarithm modulo `q`')
        print(f'-----------------------------------------')
    qx = pohlig(g % q, h % q, q, q_factors)
    if Config.verbose:
        print()
    x = crt([px, qx], [p, q])
    print(f'x = 0x{x.digits(16)}')
Here is a summary of what I am doing:

1. Choose a prime p = 2 * prod{p_i} + 1, where { p_i } denotes a set of primes.
2. Choose a prime q = 2 * prod{q_j} + 1, where { q_j } denotes a set of primes.
3. Inject n = pq as the backdoor modulus in some implementation of Diffie-Hellman.
4. Wait for a victim (e.g. Alice computes A = g^a (mod n), and Bob computes B = g^b (mod n)).
5. Solve for Alice's or Bob's secret exponent, a or b, and compute their shared secret key, K = A^b = B^a (mod n).

Step #5 is done by performing the Pohlig-Hellman algorithm twice, to solve for x (mod p) and x (mod q), and then the Chinese Remainder Theorem is used to solve for x (mod n).
EDIT
The x that I am referring to in the description of step #5 is either Alice's secret exponent, a, or Bob's secret exponent, b, depending on which we choose to solve for, since only one is needed to compute the shared secret key, K.
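To make the step-5 pipeline concrete, here is a minimal, self-contained sketch of mine on tiny made-up parameters (not the questioner's code: brute-force discrete logs stand in for Pollard's rho, and plain Python ints for mpz). One detail it highlights: the secret exponent is only determined modulo the order of g, and since p - 1 and q - 1 are both even they are never coprime, so the final CRT step has to combine congruences with non-coprime moduli.

    # Toy sanity check of step #5 (my own sketch, not the code above).
    from math import gcd

    def dlog(g, h, p):
        # Brute force: smallest x with g^x = h (mod p).
        acc = 1
        for x in range(p):
            if acc == h % p:
                return x
            acc = acc * g % p
        raise ValueError('no solution')

    def crt_pair(r1, m1, r2, m2):
        # CRT for two congruences; m1 and m2 need not be coprime
        # (p-1 and q-1 are both even, so they never are).
        d = gcd(m1, m2)
        assert (r1 - r2) % d == 0, 'inconsistent congruences'
        t = ((r2 - r1) // d * pow(m1 // d, -1, m2 // d)) % (m2 // d)
        return (r1 + m1 * t) % (m1 // d * m2)

    # Tiny made-up parameters: p = 2*3 + 1, q = 2*5 + 1, and g = 17 is a
    # primitive root mod both, so x is determined mod lcm(p-1, q-1) = 30.
    p, q, g = 7, 11, 17
    n = p * q
    a = 23                       # the victim's secret exponent
    A = pow(g, a, n)             # what the attacker observes

    ap = dlog(g % p, A % p, p)   # a mod (p-1), by Fermat's little theorem
    aq = dlog(g % q, A % q, q)   # a mod (q-1)
    a_rec = crt_pair(ap, p - 1, aq, q - 1)
    assert a_rec == a and pow(g, a_rec, n) == A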

Is O(n^(1/log n)) actually constant?

I came across this time complexity function and according to me, it is actually constant. Please correct me if I am wrong.
n^(1/logn) => (2^m)^(1/log(2^m)) => (2^m)^(1/m) => 2
Since any n can be written as a power of 2, I can do the above simplification and prove that it is constant, right?
Assuming log is the natural log, this works out to e, not 2, but either way it's a constant.
First, let:
k = n^(1 / log n)
Then take the log of both sides:
log k = (1 / log n) * log n
So:
log k = 1
Now exponentiate both sides with base e to get:
e^(log k) = e^(1)
And thus:
k = e.
Here's an alternative proof:
1 / (log n) = (log e) / (log n) = log_n(e), by the change-of-base identity.
Then n^(log_n(e)) = e, by the definition of the logarithm as the inverse of exponentiation.
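A quick numeric spot-check of mine (natural log assumed): n^(1/ln n) evaluates to e for any n > 1.

    # My own spot-check, assuming the natural log: n^(1/ln n) = e for all n > 1.
    import math

    for n in (10, 1000, 10**9):
        print(n ** (1 / math.log(n)))  # each prints ~2.718281828..., i.e. e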

Runtime Complexity | Recursive calculation using Master's Theorem

So I've encountered a case where I have 2 recursive calls - rather than one. I do know how to solve for one recursive call, but in this case I'm not sure whether I'm right or wrong.
I have the following problem:
T(n) = T(2n/5) + T(3n/5) + n
And I need to find the worst-case complexity for this.
(FYI - It's some kind of augmented merge sort)
My instinct was to use the first case of the theorem, but I feel something is wrong with my idea. Any explanation of how to solve problems like this would be appreciated :)
The recursion tree for the given recurrence looks like this:

Size                                Cost
                 n                   n
              /     \
          2n/5       3n/5            n
         /    \     /    \
     4n/25  6n/25 6n/25  9n/25       n

and so on, until the size of the input becomes 1.
The longest simple path from the root to a leaf is n -> (3/5)n -> (3/5)^2 n -> ... -> 1.
So if the height of the tree is k, then ((3/5)^k) * n = 1, meaning k = log_{5/3}(n).
In the worst case we expect every level to contribute a cost of n, hence
Total Cost = n * log_{5/3}(n)
However, we must keep in mind that the tree is not complete, so some levels near the bottom are only partially filled. In asymptotic analysis we ignore such details.
Hence the worst-case cost = n * log_{5/3}(n), which is O(n * log n).
Now, let us verify this using the substitution method:
T(n) = O(n * log n) if T(n) <= d*n*log(n) for some d > 0.
Assuming this to be true:
T(n) = T(2n/5) + T(3n/5) + n
    <= d*(2n/5)*log(2n/5) + d*(3n/5)*log(3n/5) + n
     = d*(2n/5)*(log n - log(5/2)) + d*(3n/5)*(log n - log(5/3)) + n
     = d*n*log n - d*n*((2/5)*log(5/2) + (3/5)*log(5/3)) + n
    <= d*n*log n
as long as d >= 1 / ((2/5)*log(5/2) + (3/5)*log(5/3)).
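As a numeric sanity check (my own sketch, assuming T(1) = 1 and floor division for the subproblem sizes), the ratio T(n)/(n log n) should level off near a constant:

    # My own sketch: memoized evaluation of T(n) = T(2n/5) + T(3n/5) + n with
    # T(1) = 1 and floored subproblem sizes. If T(n) = Theta(n log n), the
    # printed ratios should level off near a constant (~1.5 with natural logs).
    from functools import lru_cache
    import math

    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return 1
        return T(2 * n // 5) + T(3 * n // 5) + n

    for n in (10**3, 10**4, 10**5, 10**6):
        print(n, T(n) / (n * math.log(n)))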

T(n) = T(n-1) + O(log n): is T(n) = O(n^2) or T(n) = O(n log n)?

I have this recurrence relation: T(n) = T(n-1) + O(log n).
What is the solution: T(n) = O(n^2) or T(n) = O(n log n)?
What I did is: I assumed that T(n) <= O(n^2)...
And that brings me to O(n^2). Am I right, or did I make a mistake?
(I heard from someone that he got O(n log n), and I'm curious which of us is right...)
Thank you!
If T(n) <= T(n-1) + c*log n for some c, then
=> T(n) <= T(n-2) + c*log(n-1) + c*log n
=> T(n) <= T(n-3) + c*log(n-2) + c*log(n-1) + c*log n
Thus: T(n) <= T(0) + sum_{i=1..n} c*log i = T(0) + c*log(n!) = O(n log n), since log(n!) <= n*log n.
But O(n^2) is also correct, just less specific, since T(n) = O(n^2) means
there are some a, m such that T(n) <= a*n^2 for all n >= m.
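A quick numeric check of mine (taking c = 1 and T(1) = 0): the partial sums of log i track n log n, not n^2.

    # My own check with c = 1 and T(1) = 0: T(n) = sum of log i for i = 2..n.
    import math

    n_max = 10**6
    T = sum(math.log(i) for i in range(2, n_max + 1))  # = log(n_max!)
    print(T / (n_max * math.log(n_max)))               # ~0.93 -- n log n scale
    print(T / n_max**2)                                # ~1.3e-5 -- far below n^2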

How to prove 3n + 2log n = O(n)

How would I be able to prove 3n + 2log n = O(n) using the definition of big-O?
The C is supposedly 6, and the k is 1, but I have no idea how that was found. Any help would be greatly appreciated.
To formally prove this result, you need to find a choice of n0 and c such that
For any n ≥ n0: 3n + 2log n ≤ cn
To start this off, note that if you have any n ≥ 1, then log n < n. Consequently, if you consider any n ≥ 1, you have that
3n + 2log n < 3n + 2n = 5n
Consequently, if you pick n0 = 1 and c = 5, you have that
For any n ≥ n0: 3n + 2log n < 3n + 2n = 5n ≤ cn
And therefore 3n + 2 log n = O(n).
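These constants are easy to spot-check numerically (a sketch of mine, assuming a base-2 log):

    # My own spot-check, assuming log base 2: 3n + 2*log2(n) <= 5n for all n >= 1.
    import math

    assert all(3*n + 2*math.log2(n) <= 5*n for n in range(1, 10**6))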
More generally, when given problems like these, try identifying the dominant term (here, the n term) and trying to find some choice of n0 such that the non-dominant terms are overwhelmed by the dominant term. Once you've done this, all that's left to do is choose the right constant c.
Hope this helps!
Wild guess (your question is quite unclear): the task is to show that
O(3n + 2log n) = O(n)
Now that's how it comes out: n -> n grows faster than n -> log n, and since the complexity is asymptotic, only the fastest-growing term matters, which is n in this case.
You can prove the following, if I remember correctly:
If f1(n) = O(g1(n)) and f2(n) = O(g2(n)), then f1(n) + f2(n) = O(max{g1(n), g2(n)}).
From there it's pretty straightforward: here, f1(n) = 3n = O(n), f2(n) = 2 log n = O(log n), and max{n, log n} = n.
