If F := GF(p^n) is the finite field with p^n elements, where p is a prime number and n a natural number, is there any efficient algorithm to work out the product of two elements in F?
Here are my thoughts so far:
I know that the standard construction of F is to take an irreducible polynomial f of degree n in GF(p) and then view elements of F as polynomials in the quotient GF(p)[X]/(f), and I have a feeling that this is probably already the right approach since polynomial multiplication and addition should be easy to implement, but I somehow fail to see how this can be actually done. For example, how would one choose an appropriate f, and how can I get the equivalence class of an arbitrary polynomial?
First pick an irreducible polynomial of degree n over GF(p). Just generate random ones; a random monic polynomial of degree n is irreducible with probability ~1/n.
To test your random polynomials, you'll need some code to factor polynomials over GF(p); see the Wikipedia page for some algorithms.
Then your elements of GF(p^n) are just polynomials of degree at most n-1 over GF(p). Just do normal polynomial arithmetic and make sure to compute the remainder modulo your irreducible polynomial.
It's pretty easy to code up simple versions of this scheme. You can get arbitrarily complicated in how you implement, say, the modulo operation. See modular exponentiation, Montgomery multiplication, and multiplication using FFT.
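To make the mechanics concrete, here is a rough Python sketch of this scheme (a schoolbook product followed by reduction modulo f; nothing here is optimized, and finding the irreducible f is assumed to have been done already, as described above). Elements of GF(p^n) are lists of n coefficients over GF(p), lowest degree first, and f is given as n+1 coefficients of a monic polynomial:

def poly_mulmod(a, b, f, p):
    """(a * b) mod f over GF(p); a, b have n coefficients, f has n+1 (monic)."""
    n = len(f) - 1
    prod = [0] * (len(a) + len(b) - 1)          # schoolbook product, degree <= 2n-2
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in range(len(prod) - 1, n - 1, -1):   # cancel leading terms with shifted f
        c = prod[k]
        if c:
            for j in range(n + 1):
                prod[k - n + j] = (prod[k - n + j] - c * f[j]) % p
    return prod[:n]

# Example in GF(2^3) with f = X^3 + X + 1: (X + 1)*(X^2 + 1) = X^2
print(poly_mulmod([1, 1, 0], [1, 0, 1], [1, 1, 0, 1], 2))   # [0, 0, 1]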
Whether there is an efficient algorithm to multiply elements in GF(p^n) depends on how you are representing the elements of GF(p^n).
As you say, one way is indeed to work in GF(p)[X]/(f). Addition and multiplication are relatively straightforward here. However, determining a suitable irreducible polynomial f is not easy - as far as I know there isn't an efficient algorithm for calculating a suitable f.
Another way is to use what are called Zech's logarithms. Magma uses
pre-computed tables of them for working with small finite fields. It is possible that GAP does
too, although its documentation is less clear.
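To illustrate the table-based idea on a toy example (this is my own sketch, not Magma's or GAP's actual code): in GF(8) with X^3 = X + 1, every nonzero element is a power of X, so nonzero elements can be stored as exponents; multiplication is then addition of exponents mod 7, and addition uses a precomputed Zech-logarithm table.

q = 8
power = [1]                        # powers of X as 3-bit masks (bit i = coeff of X^i)
for _ in range(q - 2):
    v = power[-1] << 1             # multiply by X
    if v & 0b1000:
        v ^= 0b1011                # reduce using X^3 = X + 1
    power.append(v)
log = {v: e for e, v in enumerate(power)}

# zech[n] is the exponent z with 1 + X^n = X^z (n = 0 excluded: 1 + 1 = 0 in char 2)
zech = {n: log[1 ^ power[n]] for n in range(1, q - 1)}

def mul_exp(a, b):                 # multiply two nonzero elements given as exponents
    return (a + b) % (q - 1)

def add_exp(a, b):                 # add two nonzero elements; None means the sum is 0
    if a == b:
        return None
    lo, hi = min(a, b), max(a, b)  # X^a + X^b = X^lo * (1 + X^(hi-lo))
    return (lo + zech[hi - lo]) % (q - 1)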
Computing with mathematical structures is often quite tricky. You're certainly not missing anything obvious here.
It depends on your needs and on your field.
When you multiply, you want to pick a generator of F^x. When you are adding, you use the fact that F is a vector space over some smaller F_{p^m}. In practice what you do a lot of the time is some mixed approach. E.g. if you are working over F_256, take a generator X of F_256^x, and let G be its minimal polynomial over F_16. You now have
(sum_{i<16} a_i X^i)(sum_{j<16} b_j X^j) = sum_k (sum_{i+j=k} a_i b_j) X^k
All you have to do to make multiplication efficient is store a multiplication table of F_16, and (using G) rewrite X^m in terms of lower powers of X and elements of F_16.
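For brevity, here is the same "rewrite the high power of X using its minimal polynomial" idea on a smaller example of my own choosing (GF(49) over GF(7) rather than F_256 over F_16): take G(X) = X^2 - 3, which is irreducible over GF(7) because 3 is not a square mod 7, store elements as pairs (a0, a1) meaning a0 + a1*X, and replace X^2 by 3 whenever it appears.

P = 7
def mul_gf49(a, b):
    a0, a1 = a
    b0, b1 = b
    # (a0 + a1 X)(b0 + b1 X) = a0 b0 + (a0 b1 + a1 b0) X + a1 b1 X^2, with X^2 -> 3
    return ((a0 * b0 + 3 * a1 * b1) % P, (a0 * b1 + a1 * b0) % P)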
Finally, in the rare case where p^n = 2^(2^k), you get Conway's field of nimbers (look in Conway's "Winning Ways", or in Knuth's Volume 4A, Section 7.1.3), for which there are very efficient algorithms.
Galois Field Arithmetic Library (C++, mod 2 only; doesn't look like it supports other primes)
LinBox (C++)
MPFQ (C++)
I have no personal experience w/ these, however (I have made my own primitive C++ classes for Galois fields of degree 31 or less, nothing too exotic or worth copying). Like one of the commenters mentioned, you might check mathoverflow.net -- just ask nicely and make sure you've done your homework first. Someone there ought to know what kinds of mathematical software are suitable for manipulation of finite fields, and it's close enough to mathoverflow's area of interest that a well-stated question should not get closed down.
Assume the question is about an algorithm performing multiplication in finite fields once a monic irreducible polynomial f(X) has been identified (otherwise consider Rabin's test for irreducibility).
You have two polynomials of degree at most n-1
A(X) = a_0 + a_1*X + a_2*X^2 + ... + a_(n-1)*X^(n-1) and
B(X) = b_0 + b_1*X + b_2*X^2 + ... + b_(n-1)*X^(n-1)
where the coefficients a_k, b_k are taken from the representatives {0, 1, ..., p-1} of Z/pZ.
The product is defined as
C(X) = A(X)*B(X) % f(X),
where the modulo operator "%" is the remainder of the polynomial division A(X)*B(X) / f(X).
Following is an approach with complexity O(n^2)
1.) By distributive law the product can be decomposed to
B(X) * X^(n-1) * a_(n-1)
+ B(X) * X^(n-2) * a_(n-2)
+ ...
+ B(X) * a_0
=
(...(a_(n-1) * B(X) * X
+ a_(n-2) * B(X)) * X
+ a_(n-3) * B(X)) * X
...
+ a_1 * B(X)) * X
+ a_0 * B(X)
2.) Since the %-operator (reduction mod f) is a ring homomorphism from Z/pZ[X] onto GF(p^n), it can be applied in each step of the iteration above:
A(X)*B(X) % f(X) =
(...(a_(n-1) * B(X) * X % f(X)
+ a_(n-2) * B(X)) * X % f(X)
+ a_(n-3) * B(X)) * X % f(X)
...
+ a_1 * B(X)) * X % f(X)
+ a_0 * B(X)
3.) After each multiplication by X, i.e. a shift in the coefficient space, you have a polynomial T_k(X) of degree n with leading term t_kn * X^n. Reduction modulo f(X) is done by
T_k(X) % f(X) = T_k(X) - t_kn*f(X),
which is a polynomial of degree n-1.
Finally, with the reduction polynomial
r(X) := f(X) - X^n and
T_k(X) =: t_kn * X^n + U_(n-1)(X),
T_k(X) % f(X) = t_kn * X^n + U_(n-1)(X) - t_kn * (r(X) + X^n)
             = U_(n-1)(X) - t_kn * r(X)
i.e. all steps can be done with polynomials of maximum degree n-1.
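A rough Python sketch of this O(n^2) scheme (coefficient lists, lowest degree first; the function name is mine), where the accumulator is multiplied by X, reduced by subtracting t_kn * f(X), and then a_k * B(X) is added, exactly as in the steps above:

def mulmod_horner(A, B, f, p):
    """A(X)*B(X) % f(X) over GF(p); A, B have n coefficients, f has n+1 (monic)."""
    n = len(f) - 1
    T = [0] * n                              # accumulator, kept at degree <= n-1
    for k in range(n - 1, -1, -1):
        t_n = T[n - 1]                       # coefficient that spills into X^n ...
        T = [0] + T[:-1]                     # ... after the multiplication by X
        if t_n:
            T = [(T[i] - t_n * f[i]) % p for i in range(n)]   # subtract t_n * f(X)
        T = [(T[i] + A[k] * B[i]) % p for i in range(n)]      # add a_k * B(X)
    return T

# Example in GF(2^3) with f = X^3 + X + 1: X * X^2 = X^3 = X + 1
print(mulmod_horner([0, 1, 0], [0, 0, 1], [1, 1, 0, 1], 2))   # [1, 1, 0]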
I have a power series with all terms non-negative which I want to evaluate to some arbitrarily set precision p (the length in binary digits of an MPFR floating-point mantissa). The result should be faithfully rounded. The issue is that I don't know when to stop adding terms to the result variable, that is, how do I know when I already have p + 32 accurate summed bits of the series? 32 is just an arbitrarily chosen small natural number meant to facilitate more accurate rounding to p binary digits.
This is my original series
0 <= h <= 1
series_orig(h) := sum(n = 0, +inf, a(n) * h^n)
But I actually need to calculate an arbitrary derivative of the above series (m is the order of the derivative):
series(h, m) := sum(n = m, +inf, a(n) * (n - m + 1) * ... * n * h^(n - m))
The rational number sequence a is defined like so:
a(n) := binomial(1/2, n)^2
= (((2*n)!/(n!)) / (n! * 4^n * (2*n - 1)))^2
So how do I know when to stop summing up terms of series?
Is the following maybe a good strategy?
compute with a working precision of p * 4 bits (which is assumed to be greater than p + 32).
at each point be able to recall the current partial sum and the previous one.
stop looping when the previous and current partial sums are equal if rounded to precision p + 32.
round to precision p and return.
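A rough sketch of that strategy in Python, using mpmath in place of MPFR/MPFI (the function name series_deriv and its structure are my own; comparing two consecutive rounded partial sums is exactly the heuristic stop rule proposed above, not a proof of faithful rounding):

from mpmath import mp, mpf, binomial, ff, workprec

def series_deriv(h, m, p):
    """m-th derivative of series_orig at h, aiming at p bits of precision."""
    mp.prec = 4 * p                             # working precision, assumed > p + 32
    h, total, prev, n = mpf(h), mpf(0), None, m
    while True:
        a_n = binomial(mpf(1) / 2, n) ** 2
        total += a_n * ff(n, m) * h ** (n - m)  # ff(n, m) = n*(n-1)*...*(n-m+1)
        with workprec(p + 32):
            cur = +total                        # partial sum rounded to p + 32 bits
        if cur == prev:                         # two consecutive rounded sums agree
            break
        prev, n = cur, n + 1
    with workprec(p):
        return +total                           # final rounding to p bits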
Clarification
I'm doing this with MPFI, an interval arithmetic addon to MPFR. Thus the [mpfi] tag.
Attempts to get relevant formulas and equations
Guided by Eric in the comments, I have managed to derive a formula for the required working precision and an equation for the required number of terms of the series in the sum.
A problem, however, is that a nice formula for the required number of terms is not possible.
Someone more mathematically capable might instead be able to achieve a formula for a useful upper bound, but that seems quite difficult to do for all possible requested result precisions and for all possible values of m (the order of the derivative). Note that the formulas need to be easily computable so they're ready before I start computing the series.
Another problem is that it seems necessary to assume the worst case for h (h = 1) for there to be any chance of a nice formula, but this is wasteful if h is far from the worst case, that is if h is close to zero.
Assuming complex numbers a1..an, what is the angle phi those numbers have to be rotated by (= multiplied by exp(i*phi) ) to maximize the sum of the absolute value of the real parts?
in:=complex[N]
out:=in.*exp(i*phi)
f:=sum(abs(real(out)))
-> which phi maximizes f?
Is there an elegant solution (as in, not iterating over phi)?
It is not difficult to find the angle each number has to be multiplied by to make it real, but weighting those angles to find a single optimal one for all of them is difficult, because the rotation is obviously not linear - something like
sum(phiN.*abs(in))/sum(abs(in))
does not work (produces lower sum than with an angle found by iterating over -pi to pi)
Any ideas are appreciated..
Although there exists an analytic solution, it is usually too hard to calculate (it may be feasible for a small number n of inputs). I'll first go over this solution, then suggest alternatives.
Analytic solution
Given the input numbers (l_1, phi_1), (l_2, phi_2), ..., (l_n, phi_n), where l_i is the length and phi_i the angle of the number, you want to find:
arg max_phi Sum_i abs(l_i * cos(phi_i + phi))
You only have one independent variable. So we start by differentiating the function with respect to phi:
f'(phi) = Sum_i (-l_i * sin(phi_i + phi)) * abs'(l_i * cos(phi_i + phi))
abs'(x) is either +1 or -1. Due to this discontinuity we won't get around trying every combination of signs, so you end up with 2^n variants of f'. The optimum is then one of the (usually four) arguments where f'(phi) = 0. This can be calculated as follows, where s_i denotes the sign of the i-th term, which you need to vary:
numerator = Sum_i s_i * l_i * sin(phi_i)
denominator = (Sum_i l_i^2) + (Sum_i Sum_{j>i} 2 * l_i * l_j * s_i * s_j * cos(phi_i - phi_j))
Then, the four solution candidates are:
phi* = -arc cos( numerator / sqrt(denominator))
phi** = -arc cos(-numerator / sqrt(denominator))
phi*** = arc cos( numerator / sqrt(denominator))
phi**** = arc cos(-numerator / sqrt(denominator))
Find all candidates for every variation and take the one with maximum f(phi). However, as mentioned, this approach is not suitable for large n. You need 2^n variations of f and each variation requires O(n^2) time to construct the solution.
Numerical solution
An alternative is a numerical optimization approach. The challenge is that your function is not convex. Hence, if you find a local maximum, you cannot say if it is the global one. Most algorithms require good initialization. You could find the initial point by sampling the domain of phi and picking the best one. Then, try some of the standard approaches (Newton, Levenberg-Marquardt, BFGS).
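A minimal numerical sketch along those lines (Python with NumPy/SciPy; the helper name best_rotation is mine): f(phi) has period pi, so sample one period on a coarse grid and then polish the best grid point with a bounded scalar minimizer. This is a heuristic, not a guarantee of the global maximum.

import numpy as np
from scipy.optimize import minimize_scalar

def best_rotation(zs, samples=3600):
    zs = np.asarray(zs, dtype=complex)
    f = lambda phi: np.abs(np.real(zs * np.exp(1j * phi))).sum()
    grid = np.linspace(-np.pi / 2, np.pi / 2, samples, endpoint=False)
    phi0 = grid[np.argmax([f(phi) for phi in grid])]        # best coarse sample
    step = np.pi / samples
    res = minimize_scalar(lambda phi: -f(phi),              # refine around it
                          bounds=(phi0 - step, phi0 + step), method='bounded')
    return res.x, f(res.x)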
I face the following problem in a cryptographical application: I have given a set of linear congruences
a[1]*x[1]+a[2]*x[2]+a[3]*x[3] == d[1] (mod p)
b[1]*x[1]+b[2]*x[2]+b[3]*x[3] == d[2] (mod p)
c[1]*x[1]+c[2]*x[2]+c[3]*x[3] == d[3] (mod p)
Here, x is unknown and a, b, c, d are given.
The system is most likely underdetermined, so I have a large solution space. I need an algorithm that finds an equidistributed solution (that means equidistributed in the solution space) to that problem using a pseudo-random number generator (or fails).
Most standard algorithms for linear equation systems that I know from my linear algebra courses are not directly applicable to congruences as far as I can see...
My current, "safe" algorithm works as follows: Find all variables that appear in only one equation, and assign them random values. Now if in each row only one variable is unassigned, assign it the value determined by the congruence. Otherwise fail.
Can anyone give me a clue how to solve this problem in general?
You can use gaussian elimination and similar algorithms just like you learned in your linear algebra courses, but all arithmetic is performed mod p (p is a prime). The one important difference is in the definition of "division": to compute a / b you instead compute a * (1/b) (in words, "a times b inverse"). Consider the following changes to the math operations normally used
addition: a+b becomes a+b mod p
subtraction: a-b becomes a-b mod p
multiplication: a*b becomes a*b mod p
division: a/b becomes: if p divides b, then "error: divide by zero", else a * (1/b) mod p
To compute the inverse of b mod p you can use the extended Euclidean algorithm, or alternatively compute b**(p-2) mod p (Fermat's little theorem).
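A rough Python sketch of the whole procedure (names are mine): Gauss-Jordan elimination mod p as described, with free variables given independent uniform random values, which makes the returned solution equidistributed over the solution space; None is returned when the system is inconsistent.

import random

def solve_random_mod_p(A, d, p):
    """Return a uniformly random solution x of A*x == d (mod p), or None."""
    M = [row[:] + [di] for row, di in zip(A, d)]    # augmented matrix
    rows, cols = len(M), len(M[0]) - 1
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue                                # no pivot here: free column
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)                # inverse via Fermat, p prime
        M[r] = [v * inv % p for v in M[r]]
        for i in range(rows):                       # eliminate column c everywhere else
            factor = M[i][c] % p
            if i != r and factor:
                M[i] = [(vi - factor * vr) % p for vi, vr in zip(M[i], M[r])]
        pivots.append((r, c))
        r += 1
        if r == rows:
            break
    if any(M[i][-1] % p for i in range(r, rows)):   # 0 == nonzero: no solution
        return None
    x = [random.randrange(p) for _ in range(cols)]  # free variables: uniform random
    for i, c in pivots:                             # pivot variables: back-substitute
        x[c] = (M[i][-1] - sum(M[i][j] * x[j] for j in range(cols) if j != c)) % p
    return x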
Rather than trying to roll this yourself, look for an existing library or package. I think maybe Sage can do this, and certainly Mathematica, and Maple, and similar commercial math tools can.
This should be very simple. I have a function f(x), and I want to evaluate f'(x) for a given x in MATLAB.
All my searches have come up with symbolic math, which is not what I need, I need numerical differentiation.
E.g. if I define: fx = inline('x.^2')
I want to find, say, f'(3), which would be 6; I don't want to find 2x.
If your function is known to be twice differentiable, use
f'(x) = (f(x + h) - f(x - h)) / (2h)
which is second order accurate in h. If it is only once differentiable, use
f'(x) = (f(x + h) - f(x)) / h (*)
which is first order in h.
This is theory. In practice, things are quite tricky. I'll take the second formula (first order) as the analysis is simpler. Do the second order one as an exercise.
The very first observation is that you must make sure that (x + h) - x = h, otherwise you get huge errors. Indeed, f(x + h) and f(x) are close to each other (say 2.0456 and 2.0467), and when you subtract them, you lose a lot of significant figures (here it is 0.0011, which has 3 significant figures fewer than x). So any error on h is likely to have a huge impact on the result.
So, first step: fix a candidate h (I'll show you in a minute how to choose it), and take as h for your computation the quantity h' = (x + h) - x. If you are using a language like C, you must take care to define h or x as volatile for that computation not to be optimized away.
Next, the choice of h. The error in (*) has two parts: the truncation error and the roundoff error. The truncation error is because the formula is not exact:
(f(x + h) - f(x)) / h = f'(x) + e1(h)
where e1(h) = h / 2 * sup_{t in [x, x + h]} |f''(t)|.
The roundoff error comes from the fact that f(x + h) and f(x) are close to each other. It can be estimated roughly as
e2(h) ~ epsilon_f |f(x) / h|
where epsilon_f is the relative precision in the computation of f(x) (or f(x + h), which is close). This has to be assessed from your problem. For simple functions, epsilon_f can be taken as the machine epsilon. For more complicated ones, it can be worse than that by orders of magnitude.
So you want h which minimizes e1(h) + e2(h). Plugging everything together and optimizing in h yields
h ~ sqrt(2 * epsilon_f * f / f'')
which has to be estimated from your function. You can take rough estimates. When in doubt, take h ~ sqrt(epsilon) where epsilon = machine accuracy. For the optimal choice of h, the relative accuracy to which the derivative is known is sqrt(epsilon_f), ie. half the significant figures are correct.
In short: too small a h => roundoff error, too large a h => truncation error.
For the second order formula, same computation yields
h ~ (6 * epsilon_f * f / f''')^(1/3)
and a fractional accuracy of (epsilon_f)^(2/3) for the derivative (which is typically one or two significant figures better than the first order formula, assuming double precision).
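Putting the pieces together, a small sketch (in Python, but the same recipe carries over to MATLAB): use the second-order formula with h of the order of eps^(1/3), scaled by |x|, and recompute h as (x + h) - x so that the step actually used is exactly representable. The step choice assumes f/f''' is of order one; adapt it if you have better estimates.

import sys

def dfdx(f, x):
    eps = sys.float_info.epsilon
    h = eps ** (1.0 / 3.0) * max(abs(x), 1.0)   # rough optimum for the 2nd-order formula
    h = (x + h) - x                             # make the step exactly representable
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(dfdx(lambda t: t ** 2, 3.0))              # prints a value very close to 6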
If this is too imprecise, feel free to ask for more methods; there are a lot of tricks to get better accuracy. Richardson extrapolation is a good start for smooth functions. But those methods typically compute f quite a few times, which may or may not be what you want if your function is expensive to evaluate.
If you are going to use numerical derivatives a lot of times at different points, it becomes interesting to construct a Chebyshev approximation.
To get a numerical difference (symmetric difference), you calculate (f(x+dx)-f(x-dx))/(2*dx)
fx = @(x)x.^2;
fPrimeAt3 = (fx(3.1)-fx(2.9))/0.2;
Alternatively, you can create a vector of function values and apply DIFF, i.e.
xValues = 2:0.1:4;
fValues = fx(xValues);
df = diff(fValues)./0.1;
Note that diff takes the forward difference, and that it assumes dx equals 1, which is why we divide by 0.1 above.
However, in your case, you may be better off to define fx as a polynomial, and evaluating the derivative of the function, rather than the function values.
Lacking the symbolic toolbox, nothing stops you from using Derivest, a tool for automatic adaptive numerical differentiation.
derivest(@sin,pi)
ans =
-1
For your example it does very nicely. In fact, it even provides an estimate of the error in the resulting approximation.
fx = inline('x.^2');
[fp,errest] = derivest(fx,3)
fp =
6
errest =
3.6308e-14
Did you try the diff (calculates differences and approximates a derivative), gradient, or polyder (calculates the derivative of a polynomial) functions?
You can read more on these functions by using help <commandname> on MATLAB console, or use the function browser in the Help menu.
For a given function in analytical form, you can evaluate the derivative at a desired point with the following code:
syms x
df = diff(x^2);
df3 = subs(df, 'x', 3);
fprintf('f''(3)=%f\n', df3);
For pure numerical derivatives use the already given solutions by Jonas and posdef.
I'm trying to determine the asymptotic run-time of one of my algorithms, which uses exponents, but I'm not sure of how exponents are calculated programmatically.
I'm specifically looking for the pow() algorithm used for double-precision, floating point numbers.
I've had a chance to look at fdlibm's implementation. The comments describe the algorithm used:
* Method: Let x = 2^n * (1+f)
*   1. Compute and return log2(x) in two pieces:
*          log2(x) = w1 + w2,
*      where w1 has 53-24 = 29 bit trailing zeros.
*   2. Perform y*log2(x) = n+y' by simulating multi-precision
*      arithmetic, where |y'| <= 0.5.
*   3. Return x**y = 2**n * exp(y' * log2)
followed by a listing of all the special cases handled (0, 1, inf, nan).
The most computationally intensive sections of the code, after all the special-case handling, involve the log2 and 2** calculations. And there are no loops in either of those. So, the complexity of floating-point primitives notwithstanding, it looks like an asymptotically constant-time algorithm.
Floating-point experts (of which I'm not one) are welcome to comment. :-)
Unless they've discovered a better way to do it, I believe that approximate values for trig, logarithmic and exponential functions (for exponential growth and decay, for example) are generally calculated using arithmetic rules and Taylor series expansions to produce an approximate result accurate to within the requested precision. (See any calculus book for details on power series, Taylor series, and Maclaurin series expansions of functions.) Please note that it's been a while since I did any of this, so I couldn't tell you, for example, exactly how to calculate the number of terms you need to include to guarantee an error small enough to be negligible in a double-precision calculation.
For example, the Taylor/Maclaurin series expansion for e^x is this:
e^x = SUM_{k=0..+inf} (x^k / k!) = 1 + x + x^2/(2*1) + x^3/(3*2*1) + x^4/(4*3*2*1) + x^5/(5*4*3*2*1) + ...
If you take all of the terms (k from 0 to infinity), this expansion is exact and complete (no error).
However, if you don't take all the terms going to infinity, but you stop after say 5 terms or 50 terms or whatever, you produce an approximate result that differs from the actual e^x function value by a remainder which is fairly easy to calculate.
The good news for exponentials is that it converges nicely and the terms of its polynomial expansion are fairly easy to code iteratively, so you might (repeat, MIGHT - remember, it's been a while) not even need to pre-calculate how many terms you need to guarantee your error is less than precision because you can test the size of the contribution at each iteration and stop when it becomes close enough to zero. In practice, I do not know if this strategy is viable or not - I'd have to try it. There are important details I have long since forgotten about. Stuff like: machine precision, machine error and rounding error, etc.
Also, please note that if you are not using e^x, but you are doing growth/decay with another base like 2^x or 10^x, the approximating polynomial function changes.
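A quick sketch (Python) of the term-by-term strategy just described: build each term from the previous one and stop as soon as adding it no longer changes the double-precision sum. (For a different base you can either change the series or, as a shortcut, evaluate e^(x * ln(base)); for large negative x this naive summation suffers from cancellation.)

import math

def exp_taylor(x):
    """e**x by summing the Maclaurin series until a term stops mattering."""
    term, total, k = 1.0, 1.0, 0                # the k = 0 term is 1
    while True:
        k += 1
        term *= x / k                           # term is now x^k / k!
        if total + term == total:               # contribution below double precision
            return total
        total += term

print(exp_taylor(1.0), math.exp(1.0))           # both print ~2.718281828...
# e.g. 2**x via the base change: exp_taylor(x * math.log(2.0))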
The usual approach to raising a to the b, for an integer exponent, goes something like this:
result = 1
while b > 0
    if b is odd
        result *= a
        b -= 1
    b /= 2
    a = a * a
It is generally logarithmic in the size of the exponent. The algorithm is based on the invariant "a^b*result = a0^b0", where a0 and b0 are the initial values of a and b.
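For reference, a direct Python transcription of that pseudocode (non-negative integer b), with the invariant noted as a comment:

def ipow(a, b):
    result = 1
    while b > 0:                 # invariant: a**b * result == a0**b0
        if b % 2 == 1:
            result *= a
            b -= 1
        b //= 2
        a = a * a
    return result

print(ipow(3, 13))               # 1594323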
For negative or non-integer exponents, logarithms and approximations and numerical analysis are needed. The running time will depend on the algorithm used and what precision the library is tuned for.
Edit: Since there seems to be some interest, here's a version without the extra multiplication.
result = 1
while b > 0
    while b is even
        a = a * a
        b = b / 2
    result = result * a
    b = b - 1
You can use exp(n*ln(x)) for calculating x^n. Both x and n can be double-precision floating-point numbers. The natural logarithm and the exponential function can be calculated using Taylor series. Here you can find the formulas: http://en.wikipedia.org/wiki/Taylor_series
If I were writing a pow function targeting Intel, I would return exp2(log2(x) * y). Intel's microcode for log2 is surely faster than anything I'd be able to code, even if I could remember my first year calculus and grad school numerical analysis.
e^x = (1 + fraction) * (2^exponent), 1 <= 1 + fraction < 2
x * log2(e) = log2(1 + fraction) + exponent, 0 <= log2(1 + fraction) < 1
exponent = floor(x * log2(e))
1 + fraction = 2^(x * log2(e) - exponent) = e^((x * log2(e) - exponent) * ln2) = e^(x - exponent * ln2), 0 <= x - exponent * ln2 < ln2