So I know that to encrypt a message in RSA we compute cipher = m^e mod n, where m is the plaintext encoded as an integer in the range {0, ..., n−1} and n is the modulus.
Let's say that n is 8192 bits long, e = 65537, and m (as an integer) = n − 4.
So the question is: wouldn't a number like (2^8192 − 4)^65537 be impossible to calculate?
Not impossible at all - the exponentiation is performed modulo n, which means that the result will always be less than n. This not only limits the output size, but makes the calculation easier as intermediate stages can be reduced modulo n to keep the numbers involved "small". The Wikipedia page on modular exponentiation provides more detail on how the calculation can be performed.
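For illustration, here is a minimal sketch of square-and-multiply modular exponentiation in Python. Every intermediate product is reduced modulo n, so nothing ever grows beyond n^2; in practice you would just call Python's built-in pow(m, e, n), which does the same thing.

def mod_pow(base, exp, mod):
    # Square-and-multiply: walk the exponent bit by bit, reducing
    # modulo `mod` after every multiplication so intermediate values
    # stay below mod**2.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # this bit of the exponent is set
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result

# Sanity check against the built-in three-argument pow:
assert mod_pow(89, 65537, 3127) == pow(89, 65537, 3127)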
Totient(N) is the product (P−1)(Q−1); P−1 and Q−1 will not be prime (after taking 1 from them), so multiple factors can be obtained. Is that true? And can we find P and Q if we have the totient of N?
Since the only even prime is 2, all other primes are odd. Therefore $p-1$ is an even number that has at least 2 as a divisor.
For the second part of your question, you play with the equations:
φ(n)=(p−1)(q−1)=pq−p−q+1=(n+1)−(p+q)
(n+1)−φ(n)=p+q
(n+1)−φ(n)−p=q
and substitute q = (n+1)−φ(n)−p into n = pq to obtain this quadratic equation:
p^2 − (n+1−φ(n))p + n = 0
For more details and an example, see: Why is it important that φ(n) is kept a secret, in RSA?
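To make the algebra concrete, here is a small Python sketch that recovers p and q from n and φ(n) by solving that quadratic; the toy primes p = 61, q = 53 below are just an illustration.

import math

def recover_pq(n, phi):
    # p and q are the roots of p^2 - (n + 1 - phi)p + n = 0.
    s = n + 1 - phi                  # s = p + q
    d = math.isqrt(s * s - 4 * n)    # discriminant: (p+q)^2 - 4pq = (p-q)^2
    p, q = (s + d) // 2, (s - d) // 2
    assert p * q == n
    return p, q

n, phi = 61 * 53, 60 * 52
print(recover_pq(n, phi))  # (61, 53)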
How can I perform
x mod y (e.g. 89^3 mod 3127)
on this calculator?
I have a cryptography exam tomorrow and I can't figure out how to do the mod part on the calculator that I have.
This is the encrypting part of the RSA algorithm.
Any ideas?
I doubt your calculator has a modulus function. Here's a decent algorithm that works:
Compute 89^3 = 704 969. Write this down or store the result somewhere.
Now reduce modulo n. To do this, compute result / modulus and ignore the decimal, e.g. 704 969 / 3127 ≈ 225.
Multiply that number by the modulus and subtract it from the original result, e.g. 704 969 - 225*3127 = 1394.
If the original exponentiation is so large that it overflows your calculator, you can compute a smaller exponent and do the above reduction modulo n multiple times. For example, if you're asked to compute 89^10, you can instead compute 89^5, reduce that modulo n, square the result to get 89^10 (mod n), and reduce the squared value modulo n as well.
A key point is that at pretty much any point in the computation process, you can reduce the value modulo n and still arrive at the same figure. Your professor may throw a curveball at you like this - or they may not. Still, better to be prepared.
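The same manual procedure, written out as Python for clarity (the helper name is just for illustration):

def reduce_mod(value, modulus):
    # Steps from the answer: divide, ignore the decimals, multiply
    # back by the modulus, and subtract from the original result.
    quotient = value // modulus           # e.g. 704969 // 3127 == 225
    return value - quotient * modulus     # e.g. 704969 - 225*3127 == 1394

print(reduce_mod(89**3, 3127))            # 1394

# The splitting trick for larger exponents: 89^10 = (89^5)^2
partial = reduce_mod(89**5, 3127)
print(reduce_mod(partial**2, 3127))       # agrees with pow(89, 10, 3127)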
An algorithm runs in polynomial time if its runtime is O(n^k) for some constant k. However, I've also seen polynomial time defined as time n^O(1).
I have some questions about this:
Why is n^O(1) polynomial time? What happened to k?
If n^O(1) is polynomial time, then 3n^2 should be n^O(1). But where did the 3 go? How does that work?
Thanks!
When you have an expression like "the runtime is O(n)" or "the runtime is O(n^2)," the O(n) and O(n^2) terms aren't actual functions. Instead, they're placeholders for some other function with some property. For example, take this statement:
The runtime of the algorithm is O(n)
This statement really means
There is some function f(n) where the runtime of the algorithm is f(n) and f(n) = O(n)
For example, if a function's actual runtime is 137n + 42, the statement "the runtime of the algorithm is O(n)" is true because there is some function (namely, f(n) = 137n + 42) where the runtime of the algorithm is f(n) and f(n) = O(n).
Given this, let's think about what the statement "the runtime of the algorithm is n^O(1)" means. This statement is equivalent to
There is some function f(n) where the runtime of the algorithm is n^f(n) and f(n) = O(1)
Now that we've gotten the terminology clearer, what exactly does this mean? Intuitively, a function is O(1) if it's eventually bounded from above by some constant. Therefore, any function f(n) that's O(1) must satisfy f(n) ≤ k once n gets sufficiently large. Therefore, at least intuitively, n^O(1) means "n raised to some power that's at most k," which sounds like the definition of a polynomial function.
Of course, there's that pesky issue of constant factors. The function 137n^3 is definitely O(n^3), but it has a huge constant factor in front. On the other hand, if we have a function of the form n^O(1), there isn't a constant factor in front of the n^3. How do we handle this?
This is where we can get cute with the math. In the case of 137n^3, note that when n > 1, we have
137n^3 = n^(log_n 137) · n^3 = n^(3 + log_n 137)
Notice that this is n raised to the power 3 + log_n 137. Although it might look like the function log_n 137 grows as n grows larger, it actually has the opposite behavior: it decreases as n grows. The reason for this is that we can use the change-of-base formula to rewrite log_n 137 as
log_n 137 = log 137 / log n
which clearly decreases in the long run as log n increases. Therefore, the exponent 3 + log_n 137 ends up being bounded from above by some constant, so it's O(1).
Using this technique, it's possible to convert O(n^k) to n^O(1) by choosing the exponent of n to be k plus the log base n of the constant factor in front of the n^k term that comes up in the big-O notation. Similarly, we can convert back from n^O(1) to O(n^k) by choosing k to be any constant that upper-bounds the function hidden by the O(1) term in the exponent of n.
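A quick numeric check of this trick (a throwaway sketch; the sample values are arbitrary):

import math

# The exponent 3 + log_n(137) is decreasing in n and bounded above,
# so 137*n^3 really is of the form n^O(1).
for n in [10, 100, 1000, 10**6]:
    exponent = 3 + math.log(137, n)
    print(n, round(exponent, 3), n**exponent / (137 * n**3))

# The exponent falls from about 5.137 at n = 10 toward 3 as n grows,
# and the final ratio stays at 1.0 up to floating-point error.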
Hope this helps!
When implementing a hash table using a good hash function (one where the probability of any two elements colliding is 1/m, where m is the number of buckets), it is well-known that the average-case running time for looking up an element is Θ(1 + α), where α is the load factor. The worst-case running time is O(n), though, if all the elements end up in the same bucket.
I was recently doing some reading on hash tables and found this article which claims (on page 3) that if α = 1, the expected worst-case complexity is Θ(log n / log log n). By "expected worst-case complexity," I mean, on expectation, the maximum amount of work you'll have to do if the elements are distributed by a uniform hash function. This is different from the actual worst-case, since the worst-case behavior (all elements in the same bucket) is extremely unlikely to actually occur.
My question is the following - the author seems to suggest that varying the value of α can change the expected worst-case complexity of a lookup. Does anyone know of a formula, table, or article somewhere that discusses how changing α changes the expected worst-case runtime?
For fixed α, the expected worst time is always Θ(log n / log log n). However, if you make α a function of n, then the expected worst time can change. For instance, if α = Θ(n) then the expected worst time is Θ(n) (that's the case where you have a fixed number of hash buckets).
In general, the distribution of items into buckets is approximately a Poisson distribution; the odds of a random bucket having i items are α^i e^(−α) / i!. The worst case is just the m'th worst out of m close-to-independent observations. (Not entirely independent, but fairly close to it.) The m'th worst out of m observations tends to be something whose odds of happening are about 1/m. (More precisely, the distribution is given by a Beta distribution, but for our analysis 1/m is good enough.)
As you head into the tail of the Poisson distribution the growth of the i! term dominates everything else, so the cumulative probability of everything above a given i is smaller than the probability of selecting i itself. So to a good approximation you can figure out the expected value by solving for:
α^i e^(−α) / i! = 1/m = 1/(n/α) = α/n
Take logs of both sides, using Stirling's approximation log(i!) = i log(i) − i + O(log(i)), and we get:
i log(α) − α − (i log(i) − i + O(log(i))) = log(α) − log(n)
log(n) - α = i log(i) - i - i log(α) + O(log(i))
If we hold α constant then this is:
log(n) = i log(i) + O(i)
Can this work if i has the form k log(n) / log(log(n)) with k = Θ(1)? Let's try it:
log(n) = (k log(n) / log(log(n))) (log(k) + log(log(n)) - log(log(log(n)))) + O(log(log(n)))
= k (log(n) + o(log(n)))
And then we get the sharper estimate that, for any fixed load factor α, the expected worst time is (1 + o(1)) log(n) / log(log(n)).
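A quick simulation is consistent with this estimate (a throwaway sketch, not from the analysis above; the trial sizes are arbitrary):

import math, random
from collections import Counter

def max_load(n):
    # Throw n items into n buckets (α = 1) with a uniform hash.
    counts = Counter(random.randrange(n) for _ in range(n))
    return max(counts.values())

for n in [10**3, 10**4, 10**5, 10**6]:
    predicted = math.log(n) / math.log(math.log(n))
    print(n, max_load(n), round(predicted, 2))

# The observed maximum bucket size stays within a small constant
# factor of log n / log log n rather than growing like n.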
After some searching, I came across this research paper that gives a complete analysis of the expected worst-case behavior of a whole bunch of different types of hash tables, including chained hash tables. The author gives as an answer that the expected length is approximately Γ^(−1)(m), where m is the number of buckets and Γ is the Gamma function. Assuming that α is a constant, this is approximately ln m / ln ln m.
Hope this helps!
I have a homework problem for my algorithms class asking me to calculate the maximum size of a problem that can be solved in a given number of operations using an O(n log n) algorithm (i.e., n log n = c). I was able to get an answer by approximating, but is there a clean way to get an exact answer?
There is no closed-form formula for this equation. Basically, you can transform the equation:
n log n = c
log(n^n) = c
n^n = exp(c)
Then, this equation has a solution of the form:
n = exp(W(c))
where W is the Lambert W function (see especially "Example 2"). It has been proved that W cannot be expressed in terms of elementary functions.
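If a numeric answer is all you need, the W-based form can be evaluated directly; here is a sketch using SciPy (this assumes the natural log in n log n = c, matching the derivation above):

import math
from scipy.special import lambertw

def solve_nlogn(c):
    # n log n = c  =>  n = exp(W(c)). lambertw returns a complex
    # number, so take the real part for real c >= 0.
    return math.exp(lambertw(c).real)

n = solve_nlogn(1000.0)
print(n, n * math.log(n))  # the second value comes back as ~1000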
Alternatively, f(n) = n*log(n) is a monotonic function, so you can simply use bisection (here in Python, using log base 2):
import math

def nlogn(c):
    # Bisection: find n such that n * log2(n) == c.
    lower = 0.0
    upper = 10e10  # assumes the answer is below 1e11
    while True:
        middle = (lower + upper) / 2
        if lower == middle or middle == upper:
            # The interval can no longer shrink in floating point.
            return middle
        if middle * math.log(middle, 2) > c:
            upper = middle
        else:
            lower = middle
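For example, nlogn(1000) returns roughly 140.2, and indeed 140.2 * log2(140.2) ≈ 1000.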
The O notation only gives you the biggest term in the equation, i.e. the performance of your O(n log n) algorithm could actually be better represented by c = (n log n) + n + 53.
This means that without knowing the exact nature of the performance of your algorithm, you wouldn't be able to calculate the exact number of operations required to process a given amount of data.
But it is possible to calculate that the maximum number of operations required to process a data set of size n is no more than a certain number, or conversely that the biggest problem set that can be solved, using that algorithm and that number of operations, is smaller than a certain number.
The O notation is useful for comparing two algorithms, e.g. an O(n^2) algorithm is asymptotically faster than an O(n^3) algorithm.
See Wikipedia for more info.