Big-O complexity of n/1 + n/2 + n/3 + ... + n/n

Is the big-O complexity of n/1 + n/2 + n/3 + ... + n/n O(n log n) or O(n)? I want to know this for calculating all divisors of all numbers from 1 to n. My approach would be to go over all the numbers and mark their multiples; this would take the above-mentioned time.

You have n multiplied by the sum of the harmonic series 1/1 + 1/2 + ... + 1/n, which has logarithmic growth (it is ln n + O(1)).
So O(n log n).
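To make the bound concrete, here is a minimal sketch in Python of the marking approach described in the question (the function name and list-of-lists representation are just illustrative choices):

def divisors_up_to(n):
    # For each d, mark d as a divisor of all its multiples. The inner
    # loop runs n/d times, so the total work is
    # n/1 + n/2 + ... + n/n, i.e. O(n log n).
    divisors = [[] for _ in range(n + 1)]
    for d in range(1, n + 1):
        for multiple in range(d, n + 1, d):
            divisors[multiple].append(d)
    return divisors

# divisors_up_to(10)[6] == [1, 2, 3, 6]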

Related

Big Theta runtime analysis

I don't really understand the two claims below about T(n). I understand what Theta means, but I'm not sure about the answer for either claim. Can someone explain?
I thought that the first one, T(n) = T(2n/3) + 1 = Theta(log n), was false because
the constant 1 that is added doesn't make a difference,
and a log corresponds to continually halving, but going from n to 2n/3 is not halving.
I thought that the second one, T(n) = T(n/2) + n = Theta(n * log n), was true because
the linear "n *" in the Theta represents the "+ n" in T(n/2) + n,
and the "n/2" represents the log n in the Theta...
The first is Θ(log n).
Intuitively, when you multiply n by a constant factor, T(n) increases by a constant amount; that is exactly how a logarithm grows.
Example: T(n) = log(n)/log(3/2), which satisfies the recurrence since T(2n/3) + 1 = log(n)/log(3/2).
The second is Θ(n).
Intuitively, when you multiply n by a constant factor, T(n) increases by an amount proportional to n.
Example: T(n) = 2n, which satisfies the recurrence since n + 2(n/2) = 2n.
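As a quick sanity check (a sketch, not part of the original answer), both recurrences can be expanded numerically and compared against the closed forms given above:

import math

def t1(n):
    # T(n) = T(2n/3) + 1  ->  Theta(log n)
    return 1 if n <= 1 else t1(2 * n // 3) + 1

def t2(n):
    # T(n) = T(n/2) + n  ->  Theta(n), since n + n/2 + n/4 + ... < 2n
    return 1 if n <= 1 else t2(n // 2) + n

for n in (10**3, 10**6):
    print(n, t1(n), math.log(n) / math.log(3 / 2), t2(n), 2 * n)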

Calculating average case complexity of Quicksort

I'm trying to calculate the big-O of the worst/best/average case of quicksort using recurrence relations. My understanding is that the efficiency of the implementation depends on how good the partition function is.
Worst Case: pivot always leaves one side empty
T(N) = N + T(N-1) + T(1)
T(N) = N + T(N-1)
T(N) ~ N^2/2 => O(N^2)
Best Case: pivot divides elements equally
T(N) = N + T(N/2) + T(N/2)
T(N) = N + 2T(N/2) [Master Theorem]
T(N) ~ N log(N) => O(N log N)
Average Case: This is where I'm confused about how to represent the recurrence relation, or how to approach it in general.
I know the average-case big-O for quicksort is O(n log n); I'm just unsure how to derive it.
When you pick the pivot, the worst you can do is a 0 | n split and the best you can do is an n/2 | n/2 split. Assuming uniform randomness, the average case will find you getting a split more like n/4 | 3n/4. Plug that in, i.e. T(N) = N + T(N/4) + T(3N/4), and you still get O(N log N) once constants are eliminated.
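A quick empirical check of that claim (a sketch, assuming distinct elements and a uniformly random pivot): count the comparisons made by randomized quicksort and divide by N log2 N; the ratio settles near the known constant 2 ln 2 ≈ 1.39.

import math, random

def comparisons(a):
    # Partitioning an array of length m costs m - 1 comparisons;
    # then recurse on both sides of the randomly chosen pivot.
    if len(a) <= 1:
        return 0
    pivot = random.choice(a)
    left = [x for x in a if x < pivot]
    right = [x for x in a if x > pivot]
    return len(a) - 1 + comparisons(left) + comparisons(right)

random.seed(1)
for n in (1000, 10000, 100000):
    a = random.sample(range(10 * n), n)
    print(n, comparisons(a) / (n * math.log2(n)))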

Extension of Fast Doubling Method

Nth term of the sequence in which
F(N) = F(N-1) + F(N-2) + F(N-1)×F(N-2),
modulo some big number, say 10^9+7. F(0) = a and F(1) = b are also given.
I am trying the fast doubling method, but I am not able to derive the matrix. How can F(N) be computed efficiently, apart from the obvious O(N) algorithm?
Note that the recurrence is equivalent to 1+F[n] = (1+F[n-1]) * (1+F[n-2]). So consider G[n] = log(1+F[n]) to find
G[n] = G[n-1] + G[n-2]
This is the Fibonacci recursion that has the general solution
G[n] = Fib[n-1]*G[0] + Fib[n]*G[1]
which translates to
1+F[n] = (1+F[0])^Fib[n-1] * (1+F[1])^Fib[n]
where Fib is the Fibonacci sequence that has values 1,0,1 for indices n=-1,0,1.
Now apply the usual techniques for the Fibonacci sequence.
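Putting the pieces together, here is a sketch in Python. It assumes, as in the question, that the modulus 10^9+7 is prime, and additionally that 1+a and 1+b are not divisible by it, so Fermat's little theorem lets the Fibonacci exponents be reduced mod 10^9+6:

MOD = 10**9 + 7

def fib_pair(n, m):
    # Fast doubling: returns (Fib(n) mod m, Fib(n+1) mod m), using
    # Fib(2k)   = Fib(k) * (2*Fib(k+1) - Fib(k)) and
    # Fib(2k+1) = Fib(k)^2 + Fib(k+1)^2.
    if n == 0:
        return 0, 1
    a, b = fib_pair(n >> 1, m)
    c = a * ((2 * b - a) % m) % m
    d = (a * a + b * b) % m
    return (d, (c + d) % m) if n & 1 else (c, d)

def f(n, a, b):
    # 1 + F(n) = (1+a)^Fib(n-1) * (1+b)^Fib(n)  (mod MOD)
    if n == 0:
        return a % MOD
    e1, e2 = fib_pair(n - 1, MOD - 1)   # exponents reduced mod MOD-1 (Fermat)
    return (pow(1 + a, e1, MOD) * pow(1 + b, e2, MOD) - 1) % MOD

# Check against the recurrence directly: f(2, a, b) == (a + b + a*b) % MOD.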

Running time of a recursive power function

I am trying to perform asymptotic analysis on the following recursive function, which is an efficient way to raise a number to a power. I am having trouble determining the recurrence equation because there are different equations for when the power is odd and when the power is even, and I am unsure how to handle this situation. I understand that the running time is Theta(log n), so any advice on how to proceed to this result would be appreciated.
Recursive-Power(x, n):
    if n == 1
        return x
    if n is even
        y = Recursive-Power(x, n/2)
        return y*y
    else
        y = Recursive-Power(x, (n-1)/2)
        return y*y*x
In either case, n even or odd, the following recurrence holds:
T(n) = T(floor(n/2)) + Θ(1)
where floor(x) is the biggest integer not greater than x; note that (n-1)/2 = floor(n/2) for odd n.
Since the floor has no influence on the asymptotic result, the recurrence is informally written as:
T(n) = T(n/2) + Θ(1)
You have guessed the asymptotic bound correctly. The result can be proved using the substitution method or the Master theorem. It is left as an exercise for you.
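To see the Θ(log n) bound concretely, here is a direct transcription of the pseudocode in Python (a sketch; assumes n >= 1) with a counter for the number of recursive calls, which comes out to floor(log2 n) + 1:

import math

calls = 0

def recursive_power(x, n):
    global calls
    calls += 1
    if n == 1:
        return x
    y = recursive_power(x, n // 2)   # n//2 equals (n-1)//2 for odd n
    return y * y if n % 2 == 0 else y * y * x

for n in (15, 16, 10**6):
    calls = 0
    recursive_power(2, n)
    print(n, calls, math.floor(math.log2(n)) + 1)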

How could one implement multiplication in finite fields?

If F := GF(p^n) is the finite field with p^n elements, where p is a prime number and n a natural number, is there any efficient algorithm to work out the product of two elements in F?
Here are my thoughts so far:
I know that the standard construction of F is to take an irreducible polynomial f of degree n over GF(p) and then view the elements of F as polynomials in the quotient GF(p)[X]/(f). I have a feeling that this is probably already the right approach, since polynomial multiplication and addition should be easy to implement, but I somehow fail to see how it can actually be done. For example, how would one choose an appropriate f, and how can I get the equivalence class of an arbitrary polynomial?
First pick an irreducible polynomial of degree n over GF(p). Just generate random ones; a random polynomial is irreducible with probability ~1/n.
To test your random polynomials, you'll need some code to factor polynomials over GF(p); see the Wikipedia page for some algorithms.
Then your elements of GF(p^n) are just polynomials of degree less than n over GF(p). Just do normal polynomial arithmetic, and make sure to compute the remainder modulo your irreducible polynomial.
It's pretty easy to code up simple versions of this scheme. You can get arbitrarily complicated in how you implement, say, the modulo operation. See modular exponentiation, Montgomery multiplication, and multiplication using the FFT.
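For small p and n, the generate-and-test step can even be done by brute force instead of the factoring algorithms mentioned above; a sketch in Python (polynomials as coefficient lists, lowest degree first; all names are illustrative):

import random

def poly_mod(a, b, p):
    # Remainder of a(X) divided by monic b(X), coefficients in GF(p).
    a = a[:]
    while len(a) >= len(b):
        if a[-1] == 0:
            a.pop()
            continue
        shift, coef = len(a) - len(b), a[-1]
        for i, c in enumerate(b):
            a[shift + i] = (a[shift + i] - coef * c) % p
        a.pop()
    return a

def is_irreducible(f, p):
    # Monic f of degree n is irreducible over GF(p) iff no monic
    # polynomial of degree 1..n//2 divides it (fine for small p, n).
    n = len(f) - 1
    for d in range(1, n // 2 + 1):
        for k in range(p ** d):
            g = [(k // p ** i) % p for i in range(d)] + [1]  # monic, degree d
            if not any(poly_mod(f, g, p)):
                return False
    return True

def random_irreducible(p, n):
    # A random monic polynomial of degree n is irreducible with
    # probability ~1/n, so a few dozen trials usually suffice.
    while True:
        f = [random.randrange(p) for _ in range(n)] + [1]
        if is_irreducible(f, p):
            return f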
Whether there is an efficient algorithm to multiply elements in GF(p^n) depends on how you are representing the elements of GF(p^n).
As you say, one way is indeed to work in GF(p)[X]/(f). Addition and multiplication are relatively straightforward here. However, determining a suitable irreducible polynomial f is not easy - as far as I know there isn't an efficient algorithm for calculating a suitable f.
Another way is to use what are called Zech's logarithms. Magma uses pre-computed tables of them for working with small finite fields. It is possible that GAP does too, although its documentation is less clear.
Computing with mathematical structures is often quite tricky. You're certainly not missing anything obvious here.
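To illustrate Zech's logarithms in miniature, here is a sketch over the prime field GF(7) with generator 3, chosen only to show the table structure; real libraries build the same tables for GF(p^n) from a polynomial representation. Nonzero elements are stored as exponents of the generator g, so multiplication is addition of exponents, and addition uses the table Z defined by g^Z(k) = 1 + g^k.

p, g = 7, 3
exp_table = [pow(g, k, p) for k in range(p - 1)]        # k -> g^k
log_table = {v: k for k, v in enumerate(exp_table)}     # g^k -> k
# Zech table: zech[k] = Z(k), or None when 1 + g^k == 0
zech = {k: log_table.get((1 + exp_table[k]) % p) for k in range(p - 1)}

def mul(i, j):
    # (g^i) * (g^j) = g^(i+j)
    return (i + j) % (p - 1)

def add(i, j):
    # g^i + g^j = g^i * (1 + g^(j-i)) = g^(i + Z(j-i)); None means 0
    k = (j - i) % (p - 1)
    return None if zech[k] is None else (i + zech[k]) % (p - 1)

# Example: 3 + 2 = 5 in GF(7): add(log_table[3], log_table[2]) == log_table[5]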
It depends on your needs and on your field.
When you multiply, you want to pick a generator of F^× (the multiplicative group of F). When you add, you want to use the fact that F is a vector space over some smaller field F_(p^m). In practice, a mixed approach is often used. E.g. if you are working over F_256, take a generator X of F_256^×, and let G be its minimal polynomial over F_16. You now have
(sum_(i<16) a_i X^i) * (sum_(j<16) b_j X^j) = sum_k ( sum_(i+j=k) a_i b_j ) X^k
All you have to do to make multiplication efficient is store a multiplication table of F_16, and (using G) express X^m in terms of lower powers of X and elements of F_16.
Finally, in the rare case where p^n = 2^(2^n), you get Conway's field of nimbers (look in Conway's "Winning Ways", or in Knuth's Volume 4A, Section 7.1.3), for which there are very efficient algorithms.
Galois Field Arithmetic Library (C++, characteristic 2 only; doesn't look like it supports other primes)
LinBox (C++)
MPFQ (C++)
I have no personal experience w/ these, however (have made my own primitive C++ classes for Galois fields of degree 31 or less, nothing too exotic or worth copying). Like one of the commenters mentioned, you might check mathoverflow.net -- just ask nicely and make sure you've done your homework first. Someone there ought to know what kinds of mathematical software is suitable for manipulation of finite fields, and it's close enough to mathoverflow's area of interest that a well-stated question should not get closed down.
Assume the question is about an algorithm performing multiplication in finite fields once a monic irreducible polynomial f(X) has been identified (otherwise, consider Rabin's test for irreducibility).
You have two polynomials of degree at most n-1,
A(X) = a_0 + a_1*X + a_2*X^2 + ... + a_(n-1)*X^(n-1) and
B(X) = b_0 + b_1*X + b_2*X^2 + ... + b_(n-1)*X^(n-1)
where the coefficients a_k, b_k are taken from the representatives {0, 1, ..., p-1} of Z/pZ.
The product is defined as
C(X) = A(X)*B(X) % f(X),
where the modulo operator "%" denotes the remainder of the polynomial division A(X)*B(X) / f(X).
The following is an approach with complexity O(n^2).
1.) By the distributive law, the product can be decomposed as
B(X) * X^(n-1) * a_(n-1)
+ B(X) * X^(n-2) * a_(n-2)
+ ...
+ B(X) * a_0
=
(...(a_(n-1) * B(X) * X
+ a_(n-2) * B(X)) * X
+ a_(n-3) * B(X)) * X
...
+ a_1 * B(X)) * X
+ a_0 * B(X)
2.) Because the %-operator is a ring homomorphism from Z/pZ[X] onto GF(p^n), it can be applied in each step of the iteration above:
A(X)*B(X) % f(X) =
(...(a_(n-1) * B(X) * X % f(X)
+ a_(n-2) * B(X)) * X % f(X)
+ a_(n-3) * B(X)) * X % f(X)
...
+ a_1 * B(X)) * X % f(X)
+ a_0 * B(X)
3.) After each multiplication by X, i.e. a shift in the coefficient space, you have a polynomial T_k(X) of degree n with leading term t_kn * X^n. Reduction modulo f(X) is done by
T_k(X) % f(X) = T_k(X) - t_kn*f(X),
which is a polynomial of degree at most n-1.
Finally, with the reduction polynomial
r(X) := f(X) - X^n and
T_k(X) =: t_kn * X^n + U_(n-1)(X)
one gets
T_k(X) % f(X) = t_kn * X^n + U_(n-1)(X) - t_kn*( r(X) + X^n )
= U_(n-1)(X) - t_kn*r(X)
i.e. all steps can be done with polynomials of maximum degree n-1.
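A sketch of this O(n^2) scheme in Python (coefficient lists, lowest degree first; f monic of degree n; A and B of length n):

def gf_mul(A, B, f, p):
    n = len(f) - 1
    r = f[:n]                       # r(X) = f(X) - X^n
    C = [0] * n
    for a in reversed(A):           # Horner: C = (C*X % f) + a*B
        t = C[-1]                   # coefficient that shifts up to X^n
        C = [0] + C[:-1]            # multiply by X (low part)
        C = [(c - t * rc) % p for c, rc in zip(C, r)]   # subtract t*r(X)
        C = [(c + a * b) % p for c, b in zip(C, B)]     # add a*B(X)
    return C

# Example in GF(2^3) with f(X) = X^3 + X + 1:
# gf_mul([0, 1, 0], [0, 0, 1], [1, 1, 0, 1], 2) == [1, 1, 0]
# i.e. X * X^2 = X^3 = X + 1 (mod f).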
