Recurrence relations and asymptotic complexity - math

I am trying to understand the asymptotic relationship between f(n) = n^(cos n) and g(n) = n. I am told that this pair has no relationship in terms of Big O, little o, Big Omega, little omega, or Theta - something about the oscillations of cos n? Can I get a little more insight into this behavior?
When I use L'Hôpital's rule on my calculator, I get undefined.

The function n^(cos n) is O(n). Since -1 ≤ cos n ≤ 1, the function n^(cos n) is always bounded between n^(-1) and n^1, so in particular it's always upper-bounded by O(n). However, it's not Ω(n), because for any number n0 and any constant c, you can find an n > n0 where n^(cos n) < cn. One way to do this is to look for choices of n where cos n is negative; the value of n^(-ε) for any ε > 0 will eventually be smaller than cn for any c.
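A quick numerical illustration (a sketch, not a proof) in Python: sampling n^(cos n) at integers near even and odd multiples of π shows the value swinging between roughly n and roughly 1/n, which is exactly why no Ω(n) bound can hold.

```python
import math

# Sample n^(cos n) at integers near even multiples of pi (cos n close to +1)
# and near odd multiples of pi (cos n close to -1) to see the oscillation.
for k in range(1, 6):
    n_hi = round(2 * k * math.pi)        # cos(n) near +1: n^(cos n) near n
    n_lo = round((2 * k - 1) * math.pi)  # cos(n) near -1: n^(cos n) near 1/n
    print(f"n = {n_hi:2d}: n^cos(n) = {n_hi ** math.cos(n_hi):8.3f}")
    print(f"n = {n_lo:2d}: n^cos(n) = {n_lo ** math.cos(n_lo):8.3f}")
```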
Hope this helps!

Related

Asymptotic bounds and Big O notation

Is it right to say the following: suppose we have two monotonically increasing functions f, g such that f(n) = Ω(n) and f(g(n)) = O(n); then I can conclude that g(n) = O(n)?
I think that this is a false claim, and I've been trying to construct a counterexample to show that it is false, but after many attempts I'm starting to think otherwise.
Can you please provide an explanation or a counterexample if this is a false claim, or a way to prove it if it's a correct one?
I believe this claim is true. Here's a proof.
Suppose that f(n) = Ω(n). That means that there are constants c, n0 such that
f(n) ≥ cn for any n ≥ n0. (1)
Similarly, since f(g(n)) = O(n), we know that there are constants d, n1 such that
f(g(n)) ≤ dn for any n ≥ n1. (2)
Now, there are two options. The first is that g(n) = O(1), in which case we're done because g(n) is then O(n). The second case is that g(n) ≠ O(1), in which case g grows without bound. That means that there is an n2 such that g(n2) ≥ n0 (g grows without bound, so it eventually overtakes n0) and n2 ≥ n1 (just pick a big n2).
Now, pick any n ≥ n2. Since n ≥ n2, we have that g(n) ≥ g(n2) ≥ n0 because g is monotone increasing, and therefore by (1) we see that
f(g(n)) ≥ cg(n).
Since n ≥ n2 ≥ n1, we can combine this inequality with equation (2) to see that
dn ≥ f(g(n)) ≥ cg(n).
so, in particular, we have that
g(n) ≤ (d / c)n
for all n ≥ n2, so g(n) = O(n).
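As a quick sanity check of the final inequality (not part of the proof), here is a small Python sketch with concrete, hypothetical monotone functions f(n) = 3n and g(n) = n/2, for which the constants are c = 3 and d = 1.5:

```python
# f(n) = 3n is Omega(n) with c = 3; f(g(n)) = 1.5n is O(n) with d = 1.5.
# The proof's conclusion g(n) <= (d/c) * n should then hold everywhere.
def f(n): return 3 * n
def g(n): return n / 2

c, d = 3, 1.5
for n in [10, 100, 1000, 10**6]:
    assert f(n) >= c * n          # hypothesis (1): f(n) = Omega(n)
    assert f(g(n)) <= d * n       # hypothesis (2): f(g(n)) = O(n)
    assert g(n) <= (d / c) * n    # conclusion:     g(n) = O(n)
print("g(n) <= (d/c) * n holds on all samples")
```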

Big Theta runtime analysis

I don't really understand the two questions below about T(n). I understand what Theta means, but I'm not sure about the answers to the questions. Can someone explain?
I thought the first one was false, i.e. that T(n) = T(2n/3) + 1 is not Theta(log n), because
the constant 1 that is added doesn't make a difference,
and log corresponds to repeated halving, but 2n/3 is not halving.
I thought the second one was true, i.e. that T(n) = T(n/2) + n is Theta(n log n), because
the linear "n" in the Theta represents the "+ n" in T(n/2) + n,
and the "n/2" represents the "log n" in the Theta...
The first is Θ(log n).
Intuitively, when you multiply n by a constant factor, T(n) increases by a constant amount.
Example: T(n) = log(n)/log(3/2), which satisfies T(2n/3) + 1 = (log n - log(3/2))/log(3/2) + 1 = T(n).
The second is Θ(n).
Intuitively, when you multiply n by a constant factor, T(n) increases by an amount proportional to n.
Example: T(n) = 2n, which satisfies T(n/2) + n = n + n = 2n = T(n).
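To see both claims numerically, here is a small Python sketch that iterates the two recurrences (assuming the base case T(n) = 0 for n ≤ 1, which only shifts constants) and compares them against the closed forms above:

```python
import math

def T1(n):  # T(n) = T(2n/3) + 1
    return 0 if n <= 1 else T1(2 * n / 3) + 1

def T2(n):  # T(n) = T(n/2) + n
    return 0 if n <= 1 else T2(n / 2) + n

for n in [10, 1000, 10**6]:
    print(n, T1(n) / math.log(n), T2(n) / n)
# The first ratio settles near 1/log(3/2) ~ 2.47 and the second near 2,
# consistent with Theta(log n) and Theta(n).
```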

Big O notation O(n²)

I want to know why 1+2+3+...+n is O(n²).
For example, 1+2+3+4 = 4·(4+1)/2 = 10, but 4² = 16, so how come it's O(n²)?
In Big-O notation you forget about constant factors.
In your example S(n) = 1+2+...+n = n·(n+1)/2 is in O(n²), since you can find a constant number c with
S(n) < c·n² for all n > n0
(just choose c = 1).
Notice: Big-O notation is an upper bound, i.e. S(n) grows no faster than n².
Notice also that S(n) obviously grows no faster than n³, so it is also in O(n³).
One thing in addition: you can also prove the other way around, that n² is in O(S(n)):
n² < c·S(n) = c·n·(n+1)/2 holds for any c ≥ 2 and all n,
so n² is in O(S(n)). This means both functions grow asymptotically at the same rate; you can write this as: S(n) is in Θ(n²).
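A quick Python check of both directions (an illustration, not a proof):

```python
# S(n) <= 1 * n^2 (upper bound, c = 1) and n^2 <= 2 * S(n) (lower bound, c = 2),
# which together say S(n) = Theta(n^2).
def S(n):
    return n * (n + 1) // 2

for n in range(2, 1000):
    assert S(n) <= n * n
    assert n * n <= 2 * S(n)
print("both bounds hold for n = 2 .. 999")
```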
When computing Big-O, n·(n+1)/2 is the same as n², since n² is the term with the highest growth.
The sum 1+2+3+...+n is n·(n+1)/2,
which means it is (n² + n)/2 and therefore O(n²).
Big-O looks at the dominant term of the expression because, logically, it is the one that takes the most time (think of it in a computer program).

Asymptotic complexity constant, why the constant?

Big-O notation says that f(n) is in O(g(n)) if f(n) is bounded above by c·g(n) for some constant c.
I have always wondered, and never really understood, why we need this arbitrary constant to multiply with the bounding function g(n) to get our bounds.
Also, how does one decide what number this constant should be?
The constant itself doesn't characterize the limiting behavior of f(n) compared to g(n).
It is used in the mathematical definition, which requires the existence of a constant M such that
|f(x)| ≤ M·|g(x)| for all sufficiently large x.
If such a constant exists, then you can state that f(x) is O(g(x)). This is the usual notation when analyzing algorithms: you just don't care about what the constant is, only about the complexity of the operations themselves. The constant makes the inequality correct by ensuring that M·|g(x)| is an upper bound of f(x).
How to find that constant depends on f(x) and g(x), and proving that it exists is exactly the mathematical point that must be established to show that f(x) is O(g(x)), so there is no general rule. Look at this example.
Consider the function
f(n) = 4 * n.
Doesn't it make sense to call this function O(n), since it grows "as fast" as g(n) = n?
But without the constant in the definition of O, you can't find an n0 such that f(n) <= n for all n > n0. That's why you need the constant, and indeed from the condition
4 * n <= c * n for all n > n0
you can get n0 == 0, c == 4.
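The same point as a tiny Python sketch: with c = 4 the defining inequality holds for every n, while without the constant (c = 1) it never does:

```python
def f(n):
    return 4 * n

# With c = 4 the bound f(n) <= c * n holds everywhere, so f is O(n) ...
assert all(f(n) <= 4 * n for n in range(1, 10**5))
# ... but f(n) <= n (no constant, i.e. c = 1) fails for every n >= 1.
assert not any(f(n) <= n for n in range(1, 10**5))
print("c = 4 works everywhere; c = 1 never does")
```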

Pohlig–Hellman algorithm for computing discrete logarithms

I'm working on coding the Pohlig-Hellman algorithm, but I am having problems understanding the steps of the algorithm based on its definition.
Going by the Wiki of the algorithm:
I know the first part 1) is to calculate the prime factorization of p-1 - which is fine.
However, I am not sure what I need to do in step 2), where you calculate the coefficients:
Let x2 = c0 + c1·2.
125^(180/2) = 125^90 ≡ 1 (mod 181), so c0 = 0.
125^(180/4) = 125^45 ≡ 1 (mod 181), so c1 = 0.
Thus, x2 = 0 + 0·2 = 0.
and 3) is to put the coefficients together and solve with the Chinese remainder theorem.
Can someone help by explaining this in plain English or pseudocode? I want to code the solution myself, obviously, but I cannot make any more progress unless I understand the algorithm.
Note: I have done a lot of searching for this, and I read S. Pohlig and M. Hellman (1978), "An Improved Algorithm for Computing Logarithms over GF(p) and its Cryptographic Significance", but it's still not really making sense to me.
Thanks in advance
Update:
How come q (125) stays constant in this example, whereas in the other example it appears he is calculating a new q each time?
To be more specific, I don't understand how the following is computed:
Now divide 7531 by a^c0 to get
7531 · a^(-2) = 6735 mod p.
Let's start with the main idea behind Pohlig-Hellman. Assume that we are given y, g and p and that we want to find x, such that
y == g^x (mod p).
(I'm using == to denote an equivalence relation.) To simplify things, I'm also assuming that the order of g is p-1, i.e. the smallest positive k with 1 == g^k (mod p) is k = p-1.
An inefficient method to find x would be to simply try all values in the range 1 .. p-1.
Somewhat better is the "baby-step giant-step" method, which requires O(p^0.5) arithmetic operations. Both methods are quite slow for large p. Pohlig-Hellman is a significant improvement when p-1 has many factors. I.e. assume that
p-1 = n·r
Then what Pohlig and Hellman propose is to solve the equation
y^n == (g^n)^z (mod p).
If we take logarithms to the base g on both sides, this is the same as
n·log_g(y) == log_g(y^n) == n·z (mod p-1).
n can be divided out; since p-1 = n·r, this also divides the modulus, giving
log_g(y) == z (mod r).
Hence x == z (mod r).
This is an improvement, since we only have to search the range 0 .. r-1 for a solution z. And again "baby-step giant-step" can be used to speed up the search for z. Obviously, doing this once is not a complete solution yet: one has to repeat the algorithm above for every prime factor r of p-1 and then use the Chinese remainder theorem to find x from the partial solutions. This works nicely if p-1 is square-free.
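Since "baby-step giant-step" keeps coming up, here is a minimal sketch of it in Python (an illustration only; it needs Python 3.8+ for pow with a negative exponent, and it searches 0 .. r-1 for z with g^z == y (mod p) using O(r^0.5) time and memory):

```python
import math

def bsgs(y, g, p, r):
    """Return z in 0 .. r-1 with g^z == y (mod p), or None if there is none."""
    m = math.isqrt(r) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # baby steps: g^j for j < m
    giant = pow(g, -m, p)                       # g^(-m) via modular inverse
    gamma = y
    for i in range(m):                          # giant steps: y * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]          # z = i*m + j
        gamma = gamma * giant % p
    return None
```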
If p-1 is divisible by a prime power, then a similar idea can be used. For example, let's assume that p-1 = m·q^k.
In the first step, we compute z such that x == z (mod q), as shown above. Next we want to extend this to a solution x == z' (mod q^2). E.g. if p-1 = m·q^2, then this means that we have to find z' such that
y^m == (g^m)^z' (mod p).
Since we already know that z' == z (mod q), z' must be in the set {z, z+q, z+2q, ..., z+(q-1)q}. Again, we could either do an exhaustive search for z' or improve the search with "baby-step giant-step". This step is repeated for every exponent of q, that is, from knowing x mod q^i we iteratively derive x mod q^(i+1).
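Putting the whole answer together, here is a hedged end-to-end Python sketch (my simplifications: a brute-force digit search where baby-step giant-step could be used, and the assumptions that g has order exactly p-1 and that the full factorization p-1 = prod(q^e) is supplied):

```python
from math import prod

def dlog_prime_power(y, g, p, q, e):
    """Find x mod q^e with g^x == y (mod p), one base-q digit at a time."""
    n = p - 1                            # assumed order of g
    gamma = pow(g, n // q, p)            # element of order q
    x = 0
    for i in range(e):
        # Strip the digits found so far, project into the order-q subgroup.
        h = pow(y * pow(g, -x, p) % p, n // q**(i + 1), p)
        # Brute-force the next digit d with gamma^d == h (bsgs would also work).
        d = next(d for d in range(q) if pow(gamma, d, p) == h)
        x += d * q**i
    return x

def pohlig_hellman(y, g, p, factors):
    """factors = [(q, e), ...] with p-1 = prod(q^e); returns x with g^x == y."""
    residues = [dlog_prime_power(y, g, p, q, e) for q, e in factors]
    moduli = [q**e for q, e in factors]
    n = prod(moduli)                     # equals p - 1 by assumption
    # Chinese remainder theorem over the pairwise-coprime moduli.
    return sum(r * (n // m) * pow(n // m, -1, m)
               for r, m in zip(residues, moduli)) % n

# Demo with the question's modulus: p = 181, p-1 = 180 = 2^2 * 3^2 * 5,
# and g = 2, which is a primitive root mod 181.
p, g, x_true = 181, 2, 125
y = pow(g, x_true, p)
print(pohlig_hellman(y, g, p, [(2, 2), (3, 2), (5, 1)]) == x_true)  # True
```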
I'm coding it up myself right now (in Java). I'm using Pollard's rho to find the small prime factors of p-1, then using Pohlig-Hellman to recover a DSA private key x from y = g^x. I am having the same problem.
UPDATE: "To be more specific I don't understand how the following is computed: Now divide 7531 by a^c0 to get 7531 · a^(-2) = 6735 mod p."
If you find the modular inverse of a^c0, it will make sense.
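"Dividing" mod p just means multiplying by a modular inverse. Assuming the example's values are p = 8101 and a = 6 (inferred from the quoted numbers, which they do reproduce), the step looks like this in Python:

```python
p, a = 8101, 6              # assumed values, inferred from the quoted numbers
inv_a2 = pow(a * a, -1, p)  # modular inverse of a^c0 = a^2 (Python 3.8+)
print(7531 * inv_a2 % p)    # prints 6735, matching 7531 * a^(-2) = 6735 mod p
```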
Regards
