Is O(n^(1/log n)) actually constant?

I came across this time-complexity function, and as far as I can tell it is actually constant. Please correct me if I am wrong.
n^(1/log n) = (2^m)^(1/log(2^m)) = (2^m)^(1/m) = 2
Since any n can be written as a power of 2, I can do the above simplification and prove that it is constant, right?

If log is the natural log, this works out to e rather than 2 (your simplification implicitly takes log base 2), but either way it's a constant.
First, let:
k = n^(1 / log n)
Then take the log of both sides:
log k = (1 / log n) * log n
So:
log k = 1
Now exponentiate both sides, i.e. raise e to each side, to get:
e^(log k) = e^(1)
And thus:
k = e.

Here's an alternative proof:
1 / (log n) = (log e) / (log n) = log_n(e), by the change-of-base identity.
Then n^(log_n(e)) = e, by the definition of the logarithm as the inverse of exponentiation.
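As a quick numerical sanity check (my addition, using only Python's standard library), n^(1/log n) evaluates to e with the natural log and to 2 with the base-2 log:

from math import log, log2

for n in (10, 1000, 10**9):
    # n^(1/ln n) = e and n^(1/log2 n) = 2 for every n > 1
    print(n ** (1 / log(n)), n ** (1 / log2(n)))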

Related

Implementation of Pohlig-Hellman unable to solve for large exponents

I'm trying to create an implementation of the Pohlig-Hellman algorithm in order to create a utility to craft / exploit backdoors in implementations of the Diffie-Hellman protocol. This project is inspired by this 2016 white-paper by NCC Group.
I currently have an implementation, here, that works for relatively small exponents. That is, given a congruence g^x = h (mod n) for some specially-crafted modulus n = pq, where p and q are prime, my implementation can solve for values of x smaller than min{ p, q }.
However, if x is larger than the smallest prime factor of n, then my implementation gives an incorrect solution. I suspect the issue may not be with my implementation of Pohlig-Hellman itself, but with the arguments I am passing to it. All the code can be found at the link provided above, but I'll copy the relevant code snippets here:
#
# Implementation of Pohlig-Hellman algorithm
#
# The `crt` function implements the Chinese Remainder Theorem, and the `pollard` function implements
# Pollard's Rho algorithm for discrete logarithms (see /dph/crt.py and /dph/pollard.py).
#
def pohlig(G, H, P, factors):
    # Reduce the problem into the prime-order subgroups of (Z/PZ)*.
    g = [pow(G, divexact(P - 1, f), P) for f in factors]
    h = [pow(H, divexact(P - 1, f), P) for f in factors]
    if Config.verbose:
        x = []
        total = len(factors)
        for i, (gi, hi) in enumerate(zip(g, h), start=1):
            print('Solving discrete logarithm {}/{}...'.format(str(i).rjust(len(str(total))), total))
            result = pollard(gi, hi, P)
            x.append(result)
            print(f'x = 0x{result.digits(16)}')
    else:
        x = [pollard(gi, hi, P) for gi, hi in zip(g, h)]
    # Combine the per-factor logarithms with the Chinese Remainder Theorem.
    return crt(x, factors)
Above is my implementation of Pohlig-Hellman, and below is where I call it to exploit a backdoor in some implementation of the Diffie-Hellman protocol.
def _exp(args):
    g = args.g
    h = args.h
    p_factors = list(map(mpz, args.p_factors.split(',')))
    try:
        p_factors.remove(2)
    except ValueError:
        pass
    q_factors = list(map(mpz, args.q_factors.split(',')))
    try:
        q_factors.remove(2)
    except ValueError:
        pass
    p = 2 * _product(*p_factors) + 1
    q = 2 * _product(*q_factors) + 1
    if Config.verbose:
        print(f'p = 0x{p.digits(16)}')
        print(f'q = 0x{q.digits(16)}')
        print()
        print(f'Compute the discrete logarithm modulo `p`')
        print(f'-----------------------------------------')
    px = pohlig(g % p, h % p, p, p_factors)
    if Config.verbose:
        print()
        print(f'Compute the discrete logarithm modulo `q`')
        print(f'-----------------------------------------')
    qx = pohlig(g % q, h % q, q, q_factors)
    if Config.verbose:
        print()
    x = crt([px, qx], [p, q])
    print(f'x = 0x{x.digits(16)}')
Here is a summary of what I am doing:
1. Choose a prime p = 2 * prod{ p_i } + 1, where { p_i } denotes a set of primes.
2. Choose a prime q = 2 * prod{ q_j } + 1, where { q_j } denotes a set of primes.
3. Inject n = pq as the backdoor modulus into some implementation of Diffie-Hellman.
4. Wait for a victim (e.g. Alice computes A = g^a (mod n), and Bob computes B = g^b (mod n)).
5. Solve for Alice's or Bob's secret exponent, a or b, and compute their shared secret key, K = A^b = B^a (mod n).
Step #5 is done by performing the Pohlig-Hellman algorithm twice, to solve for x (mod p) and x (mod q), and then using the Chinese Remainder Theorem to solve for x (mod n).
EDIT
The x that I am referring to in step #5 is either Alice's secret exponent a or Bob's secret exponent b, depending on which we choose to solve for, since only one is needed to compute the shared secret key K.
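For illustration, here is a minimal, self-contained sketch of the same pipeline on toy parameters. This is not the project's code: a brute-force search stands in for Pollard's rho, the helper names (dlog, crt, pohlig) are hypothetical stand-ins for the project's functions, and pow(a, -1, m) requires Python 3.8+.

from math import prod

def dlog(g, h, p, order):
    # Brute-force discrete log in the subgroup of the given (small) order.
    for x in range(order):
        if pow(g, x, p) == h:
            return x
    raise ValueError('no solution')

def crt(residues, moduli):
    # Chinese Remainder Theorem for pairwise-coprime moduli.
    n = prod(moduli)
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

def pohlig(g, h, p, factors):
    # Solve g^x = h (mod p), given the (distinct) prime factors of p - 1.
    return crt([dlog(pow(g, (p - 1) // f, p),
                     pow(h, (p - 1) // f, p), p, f) for f in factors],
               factors)

p = 2 * 3 * 5 * 7 + 1              # 211 is prime; p - 1 = 2 * 3 * 5 * 7
g, x = 3, 101                      # 3 generates the full group mod 211
h = pow(g, x, p)
assert pohlig(g, h, p, [2, 3, 5, 7]) == x % (p - 1)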

Runtime Complexity | Recursive calculation using Master's Theorem

So I've encountered a case where I have two recursive calls rather than one. I know how to solve for one recursive call, but in this case I'm not sure whether I'm right or wrong.
I have the following problem:
T(n) = T(2n/5) + T(3n/5) + n
And I need to find the worst-case complexity for this.
(FYI - It's some kind of augmented merge sort)
My instinct was to use the first case of the theorem, but I feel something is wrong with that idea. Any explanation of how to solve problems like this would be appreciated :)
The recursion tree for the given recursion will look like this:
Size Cost
n n
/ \
2n/5 3n/5 n
/ \ / \
4n/25 6n/25 6n/25 9n/25 n
and so on, until the size of the input becomes 1.
The longest simple path from the root to a leaf is n -> (3/5)n -> (3/5)^2 n -> ... -> 1.
So if we let the height of the tree be k, then:
((3/5)^k) * n = 1, meaning k = log_{5/3}(n)
In the worst case, every level contributes a cost of n, and hence:
Total cost = n * log_{5/3}(n)
We must keep in mind that the tree is not complete, so some levels near the bottom are only partially filled.
But in asymptotic analysis we ignore such intricate details.
Hence the worst-case cost = n * log_{5/3}(n),
which is O(n * log n).
Now, let us verify this using the substitution method:
T(n) = O(n * log n) if T(n) <= d*n*log(n) for some d > 0 and all sufficiently large n.
Assuming this holds for the subproblems:
T(n) = T(2n/5) + T(3n/5) + n
     <= d*(2n/5)*log(2n/5) + d*(3n/5)*log(3n/5) + n
     = d*(2n/5)*(log n - log(5/2)) + d*(3n/5)*(log n - log(5/3)) + n
     = d*n*log n - d*n*((2/5)*log(5/2) + (3/5)*log(5/3)) + n
     <= d*n*log n
as long as d >= 1 / ((2/5)*log(5/2) + (3/5)*log(5/3)).
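As a quick sanity check of the O(n log n) bound (my addition, with floor division standing in for the exact fractions), the ratio T(n) / (n log n) settles toward a constant:

from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(2n/5) + T(3n/5) + n, with T(1) = 1 as the base case.
    if n <= 1:
        return 1
    return T(2 * n // 5) + T(3 * n // 5) + n

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, T(n) / (n * log(n)))   # the ratio approaches a constant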

O notation: 2^log(O(n^2)) = 2^O(log(n^2))?

I tried to solve this with logarithm rules:
O(n^2) = 2^O(log(n^2))
c*n^2 = 2^(log(n^2 * c))
I'm not sure whether that is true?
No, no, no. You can't just take logarithms like that. Taking log base 2 throughout:
2^(log O(n^2)) = 2^(log(c * n^2)) = c * n^2
2^(O(log n^2)) = 2^(c * log n^2) = (2^(log n^2))^c = (n^2)^c
The first is just O(n^2). The second is n^2 raised to some unknown but bounded power.
I think this depends on what the equals sign means here. If the equals sign means
"Any function that is 2^(log O(n^2)) is also 2^(O(log n^2))"
then the claim is true. Let f(n) be some function that's O(n^2). This means that there are constants c and n_0 such that for any n >= n_0, we know that f(n) <= c*n^2. Therefore, for any n >= n_0, we know that
2^(log f(n)) <= 2^(log(c*n^2)) = 2^(log c + log n^2)
The function log c + log n^2 is itself O(log n^2), so we see that
2^(log f(n)) <= 2^(log c + log n^2) = 2^(O(log n^2))
On the other hand, if the equals sign means
"The class of functions 2^(log O(n^2)) is the same as the class of functions 2^(O(log n^2))"
then the claim is false. For example, the function n^4 is in the second class, because it can be written as 2^(2 log n^2), but it's not in the first class.
Hope this helps!
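A small numeric illustration of the asymmetry (my addition, taking logs base 2): n^4 can be written as 2^(2 log n^2), so it sits in the second class, yet n^4 / n^2 grows without bound, so n^4 is not O(n^2):

from math import log2

for n in (10, 100, 1000):
    # n^4 equals 2^(2 * log2(n^2)), but n^4 / n^2 = n^2 is unbounded.
    print(n, n ** 4, round(2 ** (2 * log2(n ** 2))), n ** 4 / n ** 2)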

Analyze Big O relation between weird math expressions (log N)^(log N) and N/(log N)

(log N)^(log N) and N/(log N):
what is the big-O relation between these two, and how does one prove it?
One initial observation you can make is that if you take the log of both of these expressions, you get the following:
log((log n)^(log n)) = (log n) * (log log n)
log(n / log n) = log n - log log n
Notice that the first of these grows faster than the second, so we'd expect to get that n / log n = O((log n)^(log n)).
To prove this, we can take the limit of the ratio of these expressions as n tends toward infinity. If we get 0, then we're done. I'll leave this as a proverbial exercise to the reader. :-)
Hope this helps!
Substitute x = log n. If the logarithm is in base a, you have n = a^x. Now,
(log n)^(log n) = x^x
n / log n = a^x / x
When x > a and a > 1, you have x^x > a^x.
On the other hand, when x > 1, a^x > a^x / x.
Combining these two, you get x^x > a^x / x. If you now substitute back:
(log n)^(log n) > n / log n when log n > a, i.e. when n > a^a
This proves that n / log n is in O((log n)^(log n)).
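Both arguments can be sanity-checked numerically (my addition, using natural logs): the ratio (log n)^(log n) / (n / log n) grows without bound:

from math import log

for n in (10**2, 10**4, 10**8, 10**16):
    x = log(n)                       # substitute x = log n
    print(n, x ** x / (n / x))       # the ratio blows up as n grows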

Running Time of a d-ary heap?

How is the running time of a d-ary heap simplified from O(log_d(n)) to O((log n) / (log d))?
A correct simplification would be:
log_d(n) = log(d) * log(n)
How is the division simplification derived?
This uses the common identity to convert between logarithmic bases:
log_x(z) = log_m(z) / log_m(x)
By multiplying both sides by log_m(x), you get:
log_m(z) = log_x(z) * log_m(x)
which is equivalent to the answer in the question you cite.
More information is available here.
Suppose x = log_d(n). Equivalently, we have n = d^x. Then:
log_2(n) = log_2(d^x) = x * log_2(d)
Dividing through by log_2(d) yields:
log_2(n) / log_2(d) = x = log_d(n)
Of course, assuming d is fixed, log_2(d) is just a constant, and so:
O(log_d(n)) = O((1 / log_2(d)) * log_2(n)) = O(log_2(n))
That is, as far as big-O notation is concerned, you can change out any logarithm base (larger than 1) for any other such base, and so it's customary to just drop the base and write O(log n).
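A one-liner check of the identity (my addition): log_d(n), log(n)/log(d), and log2(n)/log2(d) all agree, so switching bases only changes the constant factor:

from math import log, log2

n, d = 10**6, 4
# All three expressions compute log base d of n.
print(log(n, d), log(n) / log(d), log2(n) / log2(d))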
