The purpose of the following code is to convert a polynomial from coefficient representation into value representation by dividing it into its odd and even powers and then recursing on the smaller polynomials.
function FFT(A, w)
Input: Coefficient representation of a polynomial A(x) of degree ≤ n-1, where n
is a power of 2; w, an nth root of unity.
Output: Value representation A(w^0),...,A(w^(n-1))
if w = 1: return A(1)
express A(x) in the form A_e(x^2) + xA_o(x^2) /* where A_e holds the even-power
coefficients and A_o the odd ones */
call FFT(A_e, w^2) to evaluate A_e at even powers of w
call FFT(A_o,w^2) to evaluate A_o at even powers of w
for j = 0 to n-1:
compute A(w^j) = A_e(w^(2j))+w^j(A_o(w^(2j)))
return A(w^0),...,A(w^(n-1))
What is the for loop being used for?
Why is the pseudocode only adding the smaller polynomials? Doesn't it need to subtract them too (to calculate A(-x))? Isn't that what the algorithm is completely based on: adding and subtracting the smaller polynomials to cut the number of evaluation points in half?*
Why are powers of "w" being evaluated as opposed to "x"?
I am not too sure if this belongs here, since the question is quite mathematical. If you feel this question is off-topic, I would appreciate it if you moved it to a site where you feel it would be more appropriate, rather than just closing it.
*Pseudocode is from Algorithms by S. Dasgupta, page 71.
The loop is the combine step of the recursion: for each j it assembles A(w^j) from the values A_e(w^(2j)) and A_o(w^(2j)) that the two recursive calls have already computed.
There is no explicit subtraction because it is hidden in the powers of w: the term w^j A_o(w^(2j)) that is added for j < n/2 is effectively subtracted again at j + n/2, since w^(j + n/2) = -w^j. The positive/negative pairing you describe is still there, just folded into the single formula.
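For concreteness, here is a minimal Python sketch of the same recursion (the function name, the complex-roots-of-unity setup, and the comments are mine, not from the book):

import cmath

def fft(coeffs, w):
    """Evaluate the polynomial with the given coefficients at w^0, ..., w^(n-1).

    coeffs[i] is the coefficient of x^i; len(coeffs) must be a power of 2,
    and w a primitive nth root of unity for n = len(coeffs).
    """
    n = len(coeffs)
    if n == 1:
        return coeffs[:]                 # base case: A is a constant, A(1) = coeffs[0]
    evens = fft(coeffs[0::2], w * w)     # A_e evaluated at the (n/2)th roots of unity
    odds = fft(coeffs[1::2], w * w)      # A_o evaluated at the (n/2)th roots of unity
    values = [0] * n
    for j in range(n):                   # the combine loop from the pseudocode
        # A(w^j) = A_e(w^(2j)) + w^j * A_o(w^(2j)); since w^(j + n/2) = -w^j,
        # the second half of the loop is where the implicit subtraction happens.
        values[j] = evens[j % (n // 2)] + w ** j * odds[j % (n // 2)]
    return values

# Evaluate 1 + 2x + 3x^2 + 4x^3 at the 4th roots of unity 1, i, -1, -i
print(fft([1, 2, 3, 4], cmath.exp(2j * cmath.pi / 4)))

The printed values should be approximately [10, -2-2j, -2, -2+2j], matching a direct evaluation of A at 1, i, -1, -i.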
I was going through a question which asks to calculate gcd(a-b, a^n+b^n) % (10^9+7), where a, b, n can be as large as 10^12.
I am able to solve this when a, b, and n are very small numbers. Fermat's theorem also didn't seem to work, and I reached the conclusion that if a and b are coprime then this will always give me a gcd of 2, but for the other cases I am not able to work it out.
I just need a little hint about what I am doing wrong when computing the gcd for large numbers. I also tried computing x^y by taking the modulo at each step, but that didn't work either.
I just need a direction and I will make my own way.
Thanks in advance.
You are correct that a^n + b^n is too large to compute and that working mod 10^9 + 7 at each step doesn't provide a way to compute the answer. But you can still use modular exponentiation by squaring with a different modulus, namely a - b.
Key observations:
1) gcd(a-b,a^n + b^n) = gcd(d,a^n + b^n) where d = abs(a-b)
2) gcd(d,a^n + b^n) = gcd(d,r) where r = (a^n + b^n) % d
3) r can be feasibly computed with modular exponentiation by squaring
The point of 1) is that different programming languages have different conventions for handling negative numbers in the mod operator. Taking the absolute value avoids such complications, though mathematically it doesn't make a difference. The key idea is that it is perfectly feasible to do the first step of the Euclidean algorithm for computing gcds. All you need is the remainder upon division of the larger by the smaller of the two numbers. After the first step is done, all of the numbers are in the feasible range.
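Putting the three observations together, a minimal Python sketch might look like this (the handling of the a = b corner case is my own guess; the final % (10^9 + 7) is just the reduction the problem statement asks for):

from math import gcd

M = 10**9 + 7

def solve(a, b, n):
    d = abs(a - b)                            # observation 1
    if d == 0:
        # a == b: gcd(0, a^n + b^n) is a^n + b^n itself
        return (2 * pow(a, n, M)) % M
    r = (pow(a, n, d) + pow(b, n, d)) % d     # observations 2 and 3
    return gcd(d, r) % M

print(solve(12, 10, 3))   # gcd(2, 12^3 + 10^3) = gcd(2, 2728) = 2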
An algorithm runs in polynomial time if its runtime is O(n^k) for some k. However, I've also seen polynomial time defined as time n^O(1).
I have some questions about this:
Why is n^O(1) polynomial time? What happened to k?
If n^O(1) is polynomial time, then 3n^2 should be n^O(1). But where did the 3 go? How does that work?
Thanks!
When you have an expression like "the runtime is O(n)" or "the runtime is O(n^2)," the O(n) and O(n^2) terms aren't actual functions. Instead, they're placeholders for some other function with some property. For example, take this statement:
The runtime of the algorithm is O(n)
This statement really means
There is some function f(n) where the runtime of the algorithm is f(n) and f(n) = O(n)
For example, if a function's actual runtime is 137n + 42, the statement "the runtime of the algorithm is O(n)" is true because there is some function (namely, f(n) = 137n + 42) where the runtime of the algorithm is f(n) and f(n) = O(n).
Given this, let's think about what the statement "the runtime of the algorithm is n^O(1)" means. This statement is equivalent to
There is some function f(n) where the runtime of the algorithm is n^f(n) and f(n) = O(1)
Now that we've gotten the terminology clearer, what exactly does this mean? Intuitively, a function is O(1) if it's eventually bounded from above by some constant. Therefore, any function f(n) that's O(1) must satisfy f(n) ≤ k once n gets sufficiently large. Therefore, at least intuitively, n^O(1) means "n raised to some power that's at most k," which sounds like the definition of a polynomial function.
Of course, there's that pesky issue of constant factors. The function 137n^3 is definitely O(n^3), but it has a huge constant factor in front. On the other hand, if we have a function of the form n^O(1), there isn't a constant factor written in front of the n^3. How do we handle this?
This is where we can get cute with the math. In the case of 137n^3, note that when n > 1, we have
137n^3 = n^(log_n 137) · n^3 = n^(3 + log_n 137)
Notice that this is n raised to the power 3 + log_n 137. Although it might look like the function log_n 137 grows as n grows larger, it actually has the opposite behavior: it decreases as n grows. The reason for this is that we can use the change of base formula to rewrite log_n 137 as
log_n 137 = log 137 / log n
which clearly decreases in the long run, because log n increases as n grows. Therefore, the expression 3 + log_n 137 ends up being bounded from above by some constant, so it's O(1).
Using this technique, it's possible to convert O(n^k) to n^O(1) by choosing the exponent of n to be k plus the log base n of the constant factor in front of the n^k term that comes up in the big-O notation. Similarly, we can convert back from n^O(1) to O(n^k) by choosing k to be any constant that upper-bounds the function hidden by the O(1) term in the exponent of n.
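As a purely numerical illustration (not part of the argument), a tiny Python loop shows the exponent 3 + log_n 137 settling down toward 3 while n^(3 + log_n 137) keeps matching 137n^3:

import math

for n in (10, 100, 10**4, 10**8):
    exponent = 3 + math.log(137, n)             # the exponent in n^(3 + log_n 137)
    ratio = n ** exponent / (137 * n ** 3)      # should stay ~1.0 up to rounding
    print(n, round(exponent, 4), round(ratio, 6))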
Hope this helps!
I have to write code to evaluate the value of the following sum:
( pow(1,k) + pow(2,k) + ... + pow(n,k) ) % MOD
for given values of n, k, and MOD.
I have tried searching on the internet. I found an equation, but it contains zeta functions and seems difficult to implement. I would like a simpler approach. Note that the value of n is large, so we cannot simply use brute force and still pass the time limit.
Newton's identities might be of help. Calculate the coefficients of the polynomial with 1..n as roots. That's pretty trivial. Then use the identities.
It's just the first thing that comes to mind when I see sums of powers.
I think it is nicely compatible with modular arithmetic - there are only multiplications and additions.
I must admit that Newton's identities are only a rearrangement of the terms, so there is not much speed gain here.
JUST USE PYTHON
k = int(input("Enter value for K: "))
n = int(input("Enter value for N: "))
mod = int(input("Enter value for MOD: "))
total = 0
for i in range(1, n + 1):
    total = (total + pow(i, k, mod)) % mod  # three-argument pow keeps the numbers small
print(total)
Maybe this code will help.
I agree that math.stackexchange.com is a better bet.
But here are random facts that, depending on parameters, may make the problem more manageable.
First, factor MOD, solve for each prime power factor, then use the Chinese Remainder Theorem to find the answer for MOD. Thus without loss of generality, you may assume that MOD is a prime power.
Next, note that 1^k + ... + MOD^k is divisible by MOD in most cases (for an odd prime MOD this holds whenever MOD - 1 does not divide k). When it is, you can replace n by n mod MOD.
Next, if MOD = p^i and j is not divisible by p, then j^((p-1) * p^(i-1)) is 1 mod MOD, so we can reduce the size of k.
Of course if k and n are already smaller than MOD and MOD is prime, this will not help you at all. (Which, depending on how this problem arises, may well be the case.)
(If k is small enough, there are explicit formulas that you can produce for the sum. But it seems that for you k can be large enough to make that approach intractable.)
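For the exponent-reduction observation above, here is a tiny Python sanity check (the prime power 7^2 and the base 3 are arbitrary choices for illustration):

p, i = 7, 2
MOD = p ** i                        # a prime power modulus
phi = (p - 1) * p ** (i - 1)        # order of the multiplicative group mod p^i
j, k = 3, 10 ** 12                  # j not divisible by p, huge exponent k

assert pow(j, k, MOD) == pow(j, k % phi, MOD)
print("exponent reduced from", k, "to", k % phi)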
I need to find the acceleration of an object. The formula for that given in the text is a = d^2(L)/d(T)^2, where L = length and T = time.
I calculated this in MATLAB by using this equation:
a = (1/(T3-T1))*(((L3-L2)/(T3-T2))-((L2-L1)/(T2-T1)))
or
a = (v2-v1)/(T2-T1)
but I'm not getting the right answers. Can anybody tell me how to find a by any other method in MATLAB?
This has nothing to do with matlab, you are just trying to numerically differentiate a function twice. Depending on the behaviour of the higher (3rd, 4th) derivatives of the function this will or will not yield reasonable results. You will also have to expect an error of order |T3 - T1|^2 with a formula like the one you are using, assuming L is four times differentiable. Instead of using intervals of different size you may try to use symmetric approximations like
v(x) = (L(x+h) - L(x-h)) / (2h)
a(x) = (L(x-h) - 2 L(x) + L(x+h)) / h^2
From what I recall from my numerical math lectures this is better suited for numerical calculation of higher order derivatives. You will still get an error of order
C |h|^2, with C = O( ||d^4 L / dt^4 || )
with ||.|| denoting the supremum norm of a function (that is, the fourth derivative of L needs to be bounded). In case that's true you can use that formula to calculate how small h has to be chosen in order to produce a result you are willing to accept. Note, though, that this is just the theoretical error which is a consequence of an analysis of the Taylor approximation of L, see [1] or [2] -- this is where I got it from a moment ago -- or any other introductory book on numerical mathematics. You may get additional errors depending on the quality of the evaluation of L; also, if |L(x-h) - L(x)| is very small, numerical subtraction may be ill conditioned.
[1] Knabner, Angermann; Numerik partieller Differentialgleichungen; Springer
[2] http://math.fullerton.edu/mathews/n2003/numericaldiffmod.html
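A small Python sketch of those two symmetric formulas (the function L here is just a stand-in; in practice you would plug in your measured length samples or an interpolant of them):

def velocity(L, x, h):
    """Central-difference approximation of dL/dt at x."""
    return (L(x + h) - L(x - h)) / (2 * h)

def acceleration(L, x, h):
    """Central-difference approximation of d^2 L / dt^2 at x."""
    return (L(x - h) - 2 * L(x) + L(x + h)) / h ** 2

# Quick check with L(t) = t^3: at t = 2 the exact values are 3t^2 = 12 and 6t = 12
L = lambda t: t ** 3
print(velocity(L, 2.0, 1e-4))       # ~ 12.0
print(acceleration(L, 2.0, 1e-4))   # ~ 12.0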
I have polynomials of nontrivial degree (4+) and need to robustly and efficiently determine whether or not they have a root in the interval [0,T]. The precise location or number of roots doesn't concern me; I just need to know if there is at least one.
Right now I'm using interval arithmetic as a quick check to see if I can prove that no roots can exist. If I can't, I'm using Jenkins-Traub to solve for all of the polynomial roots. This is obviously inefficient since it's checking for all real roots and finding their exact positions, information I don't end up needing.
Is there a standard algorithm I should be using? If not, are there any other efficient checks I could do before doing a full Jenkins-Traub solve for all roots?
For example, one optimization I could do is to check if my polynomial f(t) has the same sign at 0 and T. If not, there is obviously a root in the interval. If so, I can solve for the roots of f'(t) and evaluate f at all roots of f' in the interval [0,T]. f(t) has no root in that interval if and only if all of these evaluations have the same sign as f(0) and f(T). This reduces the degree of the polynomial I have to root-find by one. Not a huge optimization, but perhaps better than nothing.
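In code, the check I have in mind looks roughly like this (a sketch using numpy; f.deriv().roots is just standing in for whatever root-finder I would actually use on f'):

import numpy as np

def has_root(coeffs, t0, t1):
    """True iff the polynomial (coefficients highest-degree first) has a root in [t0, t1]."""
    f = np.poly1d(coeffs)
    if f(t0) == 0 or f(t1) == 0 or f(t0) * f(t1) < 0:
        return True                              # sign change: a root must exist
    # Same sign at both endpoints: check the sign of f at the critical points inside
    critical = [r.real for r in f.deriv().roots
                if abs(r.imag) < 1e-9 and t0 < r.real < t1]
    return any(f(c) * f(t0) <= 0 for c in critical)

# (x - 1)(x - 2) has roots in [0, 3] but not in [2.5, 3]
print(has_root([1, -3, 2], 0.0, 3.0))   # True
print(has_root([1, -3, 2], 2.5, 3.0))   # False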
Sturm's theorem lets you calculate the number of real roots in the range (a, b). Given the number of roots, you know if there is at least one. From the bottom half of page 4 of this paper:
Let f(x) be a real polynomial. Denote it by f_0(x) and its derivative f′(x) by f_1(x). Proceed as in Euclid's algorithm to find
f_0(x) = q_1(x) · f_1(x) − f_2(x),
f_1(x) = q_2(x) · f_2(x) − f_3(x),
...
f_{k−2}(x) = q_{k−1}(x) · f_{k−1}(x) − f_k,
where f_k is a constant, and for 1 ≤ i ≤ k, f_i(x) is of degree lower than that of f_{i−1}(x). The signs of the remainders are negated from those in the Euclid algorithm.
Note that the last non-vanishing remainder f_k (or f_{k−1} when f_k = 0) is a greatest common divisor of f(x) and f′(x). The sequence f_0, f_1, ..., f_k (or f_{k−1} when f_k = 0) is called a Sturm sequence for the polynomial f.
Theorem 1 (Sturm's Theorem) The number of distinct real zeros of a polynomial f(x) with real coefficients in (a, b) is equal to the excess of the number of changes of sign in the sequence f_0(a), ..., f_{k−1}(a), f_k over the number of changes of sign in the sequence f_0(b), ..., f_{k−1}(b), f_k.
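Here is a rough Python/numpy sketch of the construction (the tolerances and the handling of a vanishing remainder are ad hoc choices of mine; for production use you would want exact rational arithmetic or careful scaling):

import numpy as np

def sturm_sequence(coeffs, tol=1e-12):
    """Sturm sequence f_0, f_1, ..., f_k (coefficient arrays, highest degree first)."""
    f = np.array(coeffs, dtype=float)
    seq = [f, np.polyder(f)]
    while len(seq[-1]) > 1:                      # stop once we reach a constant
        _, rem = np.polydiv(seq[-2], seq[-1])
        rem = -rem                               # negate the remainder, as in the theorem
        if len(rem) == 1 and abs(rem[0]) < tol:
            break                                # zero remainder: f has repeated roots
        seq.append(rem)
    return seq

def sign_changes(values, tol=1e-12):
    signs = [v for v in values if abs(v) > tol]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def distinct_roots_in(coeffs, a, b):
    """Number of distinct real roots in (a, b), assuming f(a) and f(b) are nonzero."""
    seq = sturm_sequence(coeffs)
    return (sign_changes([np.polyval(f, a) for f in seq])
            - sign_changes([np.polyval(f, b) for f in seq]))

# (x - 1)(x - 2)(x + 3) = x^3 - 7x + 6 has exactly two roots in (0, 2.5)
print(distinct_roots_in([1.0, 0.0, -7.0, 6.0], 0.0, 2.5))   # -> 2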
You could certainly do binary search on your interval arithmetic. Start with [0,T] and substitute it into your polynomial. If the result interval does not contain 0, you're done. If it does, divide the interval in 2 and recurse on each half. This scheme will find the approximate location of each root pretty quickly.
If you eventually get 4 separate intervals with a root, you know you are done. Otherwise, I think you need to get to intervals [x,y] where f'([x,y]) does not contain zero, meaning that the function is monotonically increasing or decreasing and hence contains at most one zero. Double roots might present a problem, I'd have to think more about that.
Edit: if you suspect a multiple root, find roots of f' using the same procedure.
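A bare-bones Python sketch of that bisection idea, using a crude interval bound that is valid when the interval lies in [0, ∞) (which is the case for [0,T]); a real implementation would use a proper interval-arithmetic library:

def interval_eval(coeffs, lo, hi):
    """Crude enclosure of p(x) = sum c_i * x^i for x in [lo, hi], assuming lo >= 0.

    coeffs are lowest-degree first. Since x^i is monotone on [lo, hi] when lo >= 0,
    each term's range is easy to bound; the sum of the bounds encloses p([lo, hi]).
    """
    p_lo = p_hi = 0.0
    for i, c in enumerate(coeffs):
        t_lo, t_hi = c * lo ** i, c * hi ** i
        if c < 0:
            t_lo, t_hi = t_hi, t_lo
        p_lo, p_hi = p_lo + t_lo, p_hi + t_hi
    return p_lo, p_hi

def may_have_root(coeffs, lo, hi, tol=1e-9):
    """False means 'provably no root in [lo, hi]'; True means 'possibly a root'."""
    p_lo, p_hi = interval_eval(coeffs, lo, hi)
    if p_lo > 0 or p_hi < 0:
        return False                     # 0 is outside the enclosure: no root here
    if hi - lo < tol:
        return True                      # tiny interval and still suspicious
    mid = (lo + hi) / 2
    return may_have_root(coeffs, lo, mid, tol) or may_have_root(coeffs, mid, hi, tol)

# p(x) = (x - 1)(x - 2) = 2 - 3x + x^2, coefficients lowest-degree first
print(may_have_root([2.0, -3.0, 1.0], 0.0, 3.0))   # True  (roots at 1 and 2)
print(may_have_root([2.0, -3.0, 1.0], 2.5, 3.0))   # False (no root there)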
Use Descartes' rule of signs to glean some information. Just count the number of sign changes in the coefficients. This gives you an upper bound on the number of positive real roots. Consider the polynomial P.
P = 131.1 - 73.1*x + 52.425*x^2 - 62.875*x^3 - 69.225*x^4 + 11.225*x^5 + 9.45*x^6 + x^7
In fact, I've constructed P to have a simple list of roots. They are...
{-6, -4.75, -2, 1, 2.3, -i, +i}
Can we determine if there is a root in the interval [0,3]? Note that there is no sign change in the value of P at the endpoints.
P(0) = 131.1
P(3) = 4882.5
How many sign changes are there in the coefficients of P? There are 4 sign changes, so there may be as many as 4 positive roots.
But, now substitute x+3 for x into P. Thus
Q(x) = P(x+3) = ...
4882.5 + 14494.75*x + 15363.9*x^2 + 8054.675*x^3 + 2319.9*x^4 + 370.325*x^5 + 30.45*x^6 + x^7
See that Q(x) has NO sign changes in the coefficients. All of the coefficients are positive values. Therefore there can be no roots larger than 3.
So there MAY be 0, 2, or 4 roots in the interval [0,3].
At least this tells you whether to bother looking at all. Of course, if the function has opposite signs on each end of the interval, we know there are an odd number of roots in that interval.
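Both the sign counting and the x → x+3 substitution are easy to script; here is a small numpy sketch (calling a np.poly1d with another poly1d performs the composition):

import numpy as np

def sign_changes(coeffs):
    """Number of sign changes in a coefficient sequence (zeros are skipped)."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# P from above, coefficients highest-degree first
P = np.poly1d([1, 9.45, 11.225, -69.225, -62.875, 52.425, -73.1, 131.1])
print(sign_changes(P.coeffs))            # 4: at most 4 positive real roots

# Q(x) = P(x + 3); passing a poly1d into P composes the two polynomials
Q = P(np.poly1d([1, 3]))
print(sign_changes(Q.coeffs))            # 0: no roots greater than 3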
It's not that efficient, but is quite reliable. You can construct the polynomial's Companion Matrix (A sparse matrix whose eigenvalues are the polynomial's roots).
There are efficient eigenvalue algorithms that can find eigenvalues in a given interval. One of them is inverse iteration (it finds the eigenvalues closest to some input value; just give the midpoint of the interval as that value).
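A minimal numpy sketch of the companion-matrix construction (this uses a plain dense eigenvalue solve rather than inverse iteration, just to show the idea; np.roots does essentially the same thing internally):

import numpy as np

def companion(coeffs):
    """Companion matrix of a polynomial given highest-degree-first coefficients."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                          # normalize to a monic polynomial
    n = len(c) - 1
    A = np.zeros((n, n))
    A[1:, :-1] = np.eye(n - 1)            # ones on the sub-diagonal
    A[:, -1] = -c[:0:-1]                  # last column: negated coefficients, constant term first
    return A

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
eig = np.linalg.eigvals(companion([1, -6, 11, -6]))
real_roots = [z.real for z in eig if abs(z.imag) < 1e-9]
print(sorted(real_roots))                              # ~ [1.0, 2.0, 3.0]
print(any(0 <= r <= 2.5 for r in real_roots))          # True: a root lies in [0, 2.5]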
If the value f(0)*f(T) <= 0 then you are guaranteed to have a root. Otherwise you can start splitting the domain into two parts (bisection) and check the values at the ends until you are confident there is no root in that segment.
If f(0)*f(T) > 0 you have either zero, two, four, ... roots (counted with multiplicity); the limit is the polynomial's degree. If f(0)*f(T) < 0 you have one, three, five, ... roots.