How are exponents calculated? - math

I'm trying to determine the asymptotic run-time of one of my algorithms, which uses exponents, but I'm not sure of how exponents are calculated programmatically.
I'm specifically looking for the pow() algorithm used for double-precision, floating point numbers.

I've had a chance to look at fdlibm's implementation. The comments describe the algorithm used:
* Method: Let x = 2^n * (1+f)
* 1. Compute and return log2(x) in two pieces:
*    log2(x) = w1 + w2,
*    where w1 has 53-24 = 29 bit trailing zeros.
* 2. Perform y*log2(x) = n+y' by simulating multi-precision
*    arithmetic, where |y'| <= 0.5.
* 3. Return x**y = 2**n*exp(y'*log2)
followed by a listing of all the special cases handled (0, 1, inf, nan).
The most intense sections of the code, after all the special-case handling, involve the log2 and 2** calculations. And there are no loops in either of those. So, the complexity of floating-point primitives notwithstanding, it looks like an asymptotically constant-time algorithm.
Floating-point experts (of which I'm not one) are welcome to comment. :-)

Unless they've discovered a better way to do it, I believe that approximate values for trig, logarithmic and exponential functions (for exponential growth and decay, for example) are generally calculated using arithmetic rules and Taylor series expansions to produce an approximate result accurate to within the requested precision. (See any calculus book for details on power series, Taylor series, and Maclaurin series expansions of functions.) Please note that it's been a while since I did any of this, so I couldn't tell you, for example, exactly how to calculate the number of terms in the series you need to include to guarantee an error small enough to be negligible in a double-precision calculation.
For example, the Taylor/Maclaurin series expansion for e^x is this:
e^x = SUM(k = 0 to +inf) [ x^k / k! ] = 1 + x + x^2/(2*1) + x^3/(3*2*1) + x^4/(4*3*2*1) + x^5/(5*4*3*2*1) + ...
If you take all of the terms (k from 0 to infinity), this expansion is exact and complete (no error).
However, if you don't take all the terms going to infinity, but you stop after say 5 terms or 50 terms or whatever, you produce an approximate result that differs from the actual e^x function value by a remainder which is fairly easy to calculate.
The good news for exponentials is that the series converges nicely, and the terms of its polynomial expansion are fairly easy to code iteratively. So you might (repeat, MIGHT - remember, it's been a while) not even need to pre-calculate how many terms you need to guarantee your error is less than your precision, because you can test the size of the contribution at each iteration and stop when it becomes close enough to zero. In practice, I do not know if this strategy is viable or not - I'd have to try it. There are important details I have long since forgotten about, such as machine precision, machine error, rounding error, etc.
Also, please note that if you are not using e^x, but you are doing growth/decay with another base like 2^x or 10^x, the approximating polynomial function changes.
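As a concrete illustration of the stop-when-the-contribution-is-negligible strategy, here is a minimal Python sketch (the tolerance value and the absence of range reduction are my simplifications; a production libm first reduces the argument, as in the fdlibm notes above):

import math

def exp_taylor(x, rel_tol=1e-16):
    # sum 1 + x + x^2/2! + x^3/3! + ... until the next term no longer
    # contributes relative to the running total
    total, term, k = 1.0, 1.0, 1
    while abs(term) > rel_tol * abs(total):
        term *= x / k          # turns x^(k-1)/(k-1)! into x^k/k!
        total += term
        k += 1
    return total

print(exp_taylor(1.0), math.exp(1.0))   # both print 2.718281828...

For a different base, as noted above, you can either use a different expansion or fall back on 2^x = e^(x * ln 2).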

The usual approach to raising a to the b, for an integer exponent, goes something like this:
result = 1
while b > 0
    if b is odd
        result *= a
        b -= 1
    b /= 2
    a = a * a
It is generally logarithmic in the size of the exponent. The algorithm is based on the invariant "a^b*result = a0^b0", where a0 and b0 are the initial values of a and b.
For negative or non-integer exponents, logarithms and approximations and numerical analysis are needed. The running time will depend on the algorithm used and what precision the library is tuned for.
Edit: Since there seems to be some interest, here's a version without the extra multiplication.
result = 1
while b > 0
    while b is even
        a = a * a
        b = b / 2
    result = result * a
    b = b - 1
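For reference, here is the first variant as runnable Python (a direct transcription of the pseudocode, for non-negative integer exponents):

def ipow(a, b):
    # exponentiation by squaring: O(log b) multiplications,
    # maintaining the invariant a**b * result == a0**b0
    result = 1
    while b > 0:
        if b % 2 == 1:     # b is odd
            result *= a
            b -= 1
        b //= 2
        a = a * a
    return result

print(ipow(3, 13))   # 1594323, the same as 3**13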

You can use exp(n*ln(x)) for calculating x^n. Both x and n can be double-precision floating point numbers. The natural logarithm and the exponential function can be calculated using Taylor series. Here you can find the formulas: http://en.wikipedia.org/wiki/Taylor_series
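For example, a rough Python sketch of that identity (valid only for x > 0, and a little less accurate than the library pow because any error in log is magnified by n):

import math

def pow_via_exp_log(x, n):
    # x**n = e^(n * ln x)
    return math.exp(n * math.log(x))

print(pow_via_exp_log(2.0, 10.5), math.pow(2.0, 10.5))   # nearly identical values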

If I were writing a pow function targeting Intel, I would return exp2(log2(x) * y). Intel's microcode for log2 is surely faster than anything I'd be able to code, even if I could remember my first year calculus and grad school numerical analysis.

One way to see how exp itself can be evaluated is to split the result into a power of two times a factor in [1, 2):
e^x = (1 + fraction) * (2^exponent), 1 <= 1 + fraction < 2
x * log2(e) = log2(1 + fraction) + exponent, 0 <= log2(1 + fraction) < 1
exponent = floor(x * log2(e))
1 + fraction = 2^(x * log2(e) - exponent) = e^((x * log2(e) - exponent) * ln2) = e^(x - exponent * ln2), 0 <= x - exponent * ln2 < ln2
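A small Python check of this decomposition (a sketch only: it leans on math.exp for the reduced argument, whereas a real implementation would apply a polynomial or series approximation to x - exponent * ln2, which lies in [0, ln2)):

import math

def exp_range_reduced(x):
    exponent = math.floor(x * math.log2(math.e))   # exponent = floor(x * log2(e))
    frac = math.exp(x - exponent * math.log(2))    # 1 + fraction = e^(x - exponent*ln2), in [1, 2)
    return math.ldexp(frac, exponent)              # (1 + fraction) * 2^exponent

print(exp_range_reduced(12.34), math.exp(12.34))   # both about 228661.95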

Related

When have enough bits of my series with non-negative terms been calculated?

I have a power series with all terms non-negative which I want to evaluate to some arbitrarily set precision p (the length in binary digits of an MPFR floating-point mantissa). The result should be faithfully rounded. The issue is that I don't know when I should stop adding terms to the result variable, that is, how do I know when I already have p + 32 accurate summed bits of the series? 32 is just an arbitrarily chosen small natural number meant to facilitate more accurate rounding to p binary digits.
This is my original series
0 <= h <= 1
series_orig(h) := sum(n = 0, +inf, a(n) * h^n)
But I actually need to calculate an arbitrary derivative of the above series (m is the order of the derivative):
series(h, m) := sum(n = m, +inf, a(n) * (n - m + 1) * ... * n * h^(n - m))
The rational number sequence a is defined like so:
a(n) := binomial(1/2, n)^2
= (((2*n)!/(n!)) / (n! * 4^n * (2*n - 1)))^2
So how do I know when to stop summing up terms of series?
Is the following maybe a good strategy?
1. Compute with working precision p * 4 (which is assumed to be greater than p + 32).
2. At each point, keep both the current partial sum and the previous one.
3. Stop looping when the previous and current partial sums are equal if rounded to precision p + 32.
4. Round to precision p and return.
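Here is a sketch of that strategy in Python, using mpmath as a stand-in for MPFR/MPFI (the function name and the simplified stopping test, which compares successive partial sums at the working precision rather than at exactly p + 32 bits, are my own choices; it also assumes h < 1 so that the terms eventually drop below one ulp):

from mpmath import mp, mpf, binomial

def series_deriv(h, m, p):
    mp.prec = 4 * p                        # working precision, as in step 1 above
    h = mpf(h)
    total, prev, n = mpf(0), mpf(-1), m
    while total != prev:                   # stop once a term no longer changes the sum
        prev = total
        a_n = binomial(mpf(1) / 2, n) ** 2
        ff = 1
        for k in range(n - m + 1, n + 1):  # the factor (n - m + 1) * ... * n
            ff *= k
        total = prev + a_n * ff * h ** (n - m)
        n += 1
    mp.prec = p
    return +total                          # round to the requested precision p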
Clarification
I'm doing this with MPFI, an interval arithmetic addon to MPFR. Thus the [mpfi] tag.
Attempts to get relevant formulas and equations
Guided by Eric in the comments, I have managed to derive a formula for the required working precision and an equation for the required number of terms of the series in the sum.
A problem, however, is that a nice formula for the required number of terms is not possible.
Someone more mathematically capable might instead be able to achieve a formula for a useful upper bound, but that seems quite difficult to do for all possible requested result precisions and for all possible values of m (the order of the derivative). Note that the formulas need to be easily computable so they're ready before I start computing the series.
Another problem is that it seems necessary to assume the worst case for h (h = 1) for there to be any chance of a nice formula, but this is wasteful if h is far from the worst case, that is if h is close to zero.

how to solve mathematical expectation in hackerrank 20/20 hack february 2014

So this problem was given in the Hackerrank 20/20 Hack February:
Let's consider a random permutation p_1, p_2, ..., p_N of the numbers 1, 2, ..., N and calculate the value F = (X_2 + ... + X_(N-1))^K, where X_i equals 1 if one of the following two conditions holds: p_(i-1) < p_i > p_(i+1) or p_(i-1) > p_i < p_(i+1), and X_i equals 0 otherwise. What is the expected value of F?
Constraints: 1000 <= N <= 10^9, 1 <= K <= 5
I thought it was an Eulerian-number-related problem. As the contest is over, I can see the solutions, but I don't understand any of them. Are there any tricks?
so a few words about my "solution" ;)
What I basically did:
1) write a brute force solver (obviously for N << 20)
-> this solver won't handle high values of N, as given in the constraints
2) analyze the output of the solutions to these (invalid) inputs
-> observe that with K=1, the output follows a straight line
-> K=2, is a quadratic function
-> K=3, is a cubic function, and so on
3) find the parameters for each function (K=1 to 5) by using a solver, or, as I did, Wolfram Alpha ;)
-> additionally I "normalized" each parameter to only have one division afterwards
4) use any programming language / big integer class to solve the correct inputs in O(1)
I'm pretty sure that one can come up with these parameters in a very clever way, but for me, during the contest, this solution was easy and fast enough without having to think too much about the "why" ;)
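For reference, a brute-force checker of the kind described in step 1, feasible only for very small N (a sketch; it returns the exact rational expectation, which you can then feed into a polynomial fit in N):

from itertools import permutations
from fractions import Fraction
from math import factorial

def expected_F(N, K):
    # average F = (X_2 + ... + X_(N-1))^K over all N! permutations
    total = 0
    for p in permutations(range(1, N + 1)):
        x = sum(1 for i in range(1, N - 1)
                if p[i-1] < p[i] > p[i+1] or p[i-1] > p[i] < p[i+1])
        total += x ** K
    return Fraction(total, factorial(N))

print(expected_F(6, 1))   # 8/3, i.e. (N - 2) * 2/3, consistent with the straight line seen for K=1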

how to evaluate derivative of function in matlab?

This should be very simple. I have a function f(x), and I want to evaluate f'(x) for a given x in MATLAB.
All my searches have come up with symbolic math, which is not what I need; I need numerical differentiation.
E.g. if I define: fx = inline('x.^2')
I want to find, say, f'(3), which would be 6; I don't want to find 2x.
If your function is known to be twice differentiable, use
f'(x) = (f(x + h) - f(x - h)) / (2h)
which is second order accurate in h. If it is only once differentiable, use
f'(x) = (f(x + h) - f(x)) / h (*)
which is first order in h.
This is theory. In practice, things are quite tricky. I'll take the second formula (first order) as the analysis is simpler. Do the second order one as an exercise.
The very first observation is that you must make sure that (x + h) - x = h, otherwise you get huge errors. Indeed, f(x + h) and f(x) are close to each other (say 2.0456 and 2.0467), and when you subtract them, you lose a lot of significant figures (here the difference is 0.0011, which has 3 significant figures fewer than x). So any error on h is likely to have a huge impact on the result.
So, first step, fix a candidate h (I'll show you in a minute how to choose it), and take as h for your computation the quantity h' = (x + h) - x. If you are using a language like C, you must take care to define h or x as volatile for that computation not to be optimized away.
Next, the choice of h. The error in (*) has two parts: the truncation error and the roundoff error. The truncation error is because the formula is not exact:
(f(x + h) - f(x)) / h = f'(x) + e1(h)
where e1(h) = h / 2 * sup_{t in [x, x + h]} |f''(t)|.
The roundoff error comes from the fact that f(x + h) and f(x) are close to each other. It can be estimated roughly as
e2(h) ~ epsilon_f |f(x) / h|
where epsilon_f is the relative precision in the computation of f(x) (or f(x + h), which is close). This has to be assessed from your problem. For simple functions, epsilon_f can be taken as the machine epsilon. For more complicated ones, it can be worse than that by orders of magnitude.
So you want h which minimizes e1(h) + e2(h). Plugging everything together and optimizing in h yields
h ~ sqrt(2 * epsilon_f * f / f'')
which has to be estimated from your function. You can take rough estimates. When in doubt, take h ~ sqrt(epsilon) where epsilon = machine accuracy. For the optimal choice of h, the relative accuracy to which the derivative is known is sqrt(epsilon_f), i.e. half the significant figures are correct.
In short: too small an h => roundoff error, too large an h => truncation error.
For the second order formula, same computation yields
h ~ (6 * epsilon_f * f / f''')^(1/3)
and a fractional accuracy of (epsilon_f)^(2/3) for the derivative (which is typically one or two significant figures better than the first order formula, assuming double precision).
If this is too imprecise, feel free to ask for more methods; there are a lot of tricks to get better accuracy. Richardson extrapolation is a good start for smooth functions. But those methods typically evaluate f quite a few times, which may or may not be what you want if your function is complex.
If you are going to use numerical derivatives a lot of times at different points, it becomes interesting to construct a Chebyshev approximation.
To get a numerical difference (symmetric difference), you calculate (f(x+dx)-f(x-dx))/(2*dx)
fx = @(x)x.^2;
fPrimeAt3 = (fx(3.1)-fx(2.9))/0.2;
Alternatively, you can create a vector of function values and apply DIFF, i.e.
xValues = 2:0.1:4;
fValues = fx(xValues);
df = diff(fValues)./0.1;
Note that diff takes the forward difference, and that it assumes dx equals 1.
However, in your case, you may be better off defining fx as a polynomial and evaluating the derivative of the function rather than the function values.
Lacking the symbolic toolbox, nothing stops you from using Derivest, a tool for automatic adaptive numerical differentiation.
derivest(@sin,pi)
ans =
-1
For your example it does very nicely. In fact, it even provides an estimate of the error in the resulting approximation.
fx = inline('x.^2');
[fp,errest] = derivest(fx,3)
fp =
6
errest =
3.6308e-14
Did you try the diff (calculates differences and approximates a derivative), gradient, or polyder (calculates the derivative of a polynomial) functions?
You can read more on these functions by using help <commandname> on MATLAB console, or use the function browser in the Help menu.
For a given function in analytical form, you can evaluate the derivative at a desired point with the following code:
syms x
df = diff(x^2);
df3 = subs(df, 'x', 3);
fprintf('f''(3)=%f\n', df3);
For pure numerical derivatives use the already given solutions by Jonas and posdef.

How to do hypot2(x,y) calculation when numbers can overflow

I'd like to do a hypot2 calculation on a 16-bit processor.
The standard formula is c = sqrt((a * a) + (b * b)). The problem with this is that it overflows for large inputs. E.g. with 200 and 250, multiplying 200 * 200 gives 40,000, which is higher than the maximum signed 16-bit value of 32,767, so it overflows, as does b * b; the overflowed values are added, and the result may as well be useless; it might even signal an error condition because of a negative sqrt.
In my case, I'm dealing with 32-bit numbers, but 32-bit multiply on my processor is very fast, about 4 cycles. I'm using a dsPIC microcontroller. I'd rather not have to multiply with 64-bit numbers because that's wasting precious memory and undoubtedly will be slower. Additionally I only have sqrt for 32-bit numbers, so 64-bit numbers would require another function. So how can I compute a hypot when the values may be large?
Please note I can only really use integer math for this. Using anything like floating point math incurs a speed hit which I'd rather avoid. My processor has a fast integer/fixed point atan2 routine, about 130 cycles; could I use this to compute the hypotenuse length?
Depending on how much accuracy you need you may be able to avoid the squares and the square root operation. There is a section on this topic in Understanding Digital Signal Processing by Rick Lyons (section 10.2, "High-Speed Vector-Magnitude Approximation", starting at page 400 in my edition).
The approximation is essentially:
magnitude = alpha * min + beta * max
where max and min are the maximum and minimum absolute values of the real and imaginary components, and alpha and beta are two constants which are chosen to give a reasonable error distribution over the range of interest. These constants can be represented as fractions with power-of-2 divisors to keep the arithmetic simple/efficient. In the book he suggests alpha = 15/16, beta = 15/32, and you can then simplify the formula to:
magnitude = (15 / 16) * (max + min / 2)
which might be implemented as follows using integer operations:
magnitude = 15 * (max + min / 2) / 16
and of course we can use shifts for the divides:
magnitude = (15 * (max + (min >> 1))) >> 4
Error is +/- 5% over a quadrant.
More information on this technique here: http://www.dspguru.com/dsp/tricks/magnitude-estimator
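A quick sketch of the estimator (Python used here only for readability; on a dsPIC this reduces to a compare, an add, and two shifts):

def mag_estimate(a, b):
    # alpha*max + beta*min with alpha = 15/16, beta = 15/32,
    # folded into (15 * (max + min/2)) / 16 using only shifts and adds
    a, b = abs(a), abs(b)
    mx, mn = (a, b) if a >= b else (b, a)
    return (15 * (mx + (mn >> 1))) >> 4

print(mag_estimate(200, 250))   # 328, vs. the exact value of about 320.2 (within the stated error band)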
This is taken verbatim from this John D. Cook blog post, hence CW:
Here’s how to compute sqrt(x*x + y*y)
without risking overflow.
max = maximum(|x|, |y|)
min = minimum(|x|, |y|)
r = min / max
return max*sqrt(1 + r*r)
If John D. Cook comes along and posts this you should give him the accept :)
Since you essentially can't do any multiplications without overflow you're likely going to lose some precision.
To get the numbers into an acceptable range, pull out some factor x and use
c = x*sqrt( (a/x)*(a/x) + (b/x)*(b/x) )
If x is a common factor, you won't lose precision, but if it's not, you will lose precision.
Update:
Even better, given that you can do some mild work with 64-bit numbers (just one 64-bit addition), you could do the rest of this problem in 32 bits with only a tiny loss of accuracy. To do this: do the two 32-bit multiplications to give you two 64-bit numbers, add these, and then bit-shift the sum as needed to get it back down to 32 bits before taking the square root. If you always shift by 2 bits at a time, just multiply the final result by 2^(half the number of bit shifts), based on the rule above. The truncation should only cause a very small loss of accuracy, no more than about 2^-31, or 0.00000005%, relative error.
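A sketch of that idea (Python's integers are used here only to emulate the 64-bit sum and a 32-bit integer square root; shifting down 2 bits at a time keeps the scaling exact because sqrt(s * 4^k) = 2^k * sqrt(s)):

from math import isqrt

def hypot_shifted(a, b):
    s = a * a + b * b            # the 64-bit intermediate sum
    k = 0
    while s >= (1 << 32):        # shift down in pairs of bits until it fits in 32 bits
        s >>= 2
        k += 1
    return isqrt(s) << k         # undo the scaling: multiply the root by 2^k

print(hypot_shifted(300_000, 400_000))   # 500000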
Aniko and John, it seems to me that you haven't addressed the OP's problem. If a and b are integers, then a*a + b*b is likely to overflow, because integer operations are being performed. The obvious solution is to convert a and b to floating-point values before computing a*a + b*b. But the OP hasn't let us know what language we should use, so we're a bit stuck.
The standard formula is c = sqrt((a * a) + (b * b)). The problem with this is with large inputs it overflows.
The solution for overflows (aside from throwing an error) is to saturate your intermediate calculations.
Calculate C = a*a + b*b. If a and b are signed 16-bit numbers, you will never have an overflow. If they are unsigned numbers, you'll need to right-shift the inputs first to get the sum to fit in a 32-bit number.
If C > (MAX_RADIUS)^2, return MAX_RADIUS, where MAX_RADIUS is the maximum value you can tolerate before encountering an overflow.
Otherwise, use either sqrt() or the CORDIC algorithm, which avoids the cost of square roots in favor of loop iteration + adds + shifts, to retrieve the amplitude of the (a,b) vector.
If you can constrain a and b to be at most 7 bits, you won't get any overflow. You can use a count-leading-zeros instruction to figure out how many bits to throw away.
Assume a>=b.
int bits = 16 - count_leading_zeros(a);
if (bits > 7) {
    a >>= bits - 7;
    b >>= bits - 7;
}
c = sqrt(a*a + b*b);
if (bits > 7) {
    c <<= bits - 7;
}
Lots of processors have this instruction nowadays, and if not, you can use other fast techniques.
Although this won't give you the exact answer, it will be very close (at most ~1% low).
Do you need full precision? If you don't, you can increase your range a little bit by discarding a few least significant bits and multiplying them in afterwards.
Can a and b be anything? How about a lookup table if you only have a few a and b that you need to calculate?
A simple solution to avoid overflow is to divide both a and b by a+b before squaring, and then multiply the square root by a+b. Or do the same with max(a,b).
You can do a little simple algebra to bring the results back into range.
sqrt((a * a) + (b * b))
= 2 * sqrt(((a * a) + (b * b)) / 4)
= 2 * sqrt((a * a) / 4 + (b * b) / 4)
= 2 * sqrt((a/2 * a/2) + (b/2 * b/2))
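In code, the halving can be done with shifts before squaring (a sketch; each halving costs about one bit of precision in the result):

from math import isqrt

def hypot_halved(a, b):
    ah, bh = a >> 1, b >> 1              # a/2 and b/2, so the squares are 4x smaller
    return 2 * isqrt(ah * ah + bh * bh)

print(hypot_halved(30000, 40000))        # 50000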

How could one implement multiplication in finite fields?

If F := GF(p^n) is the finite field with p^n elements, where p is a prime number and n a natural number, is there any efficient algorithm to work out the product of two elements in F?
Here are my thoughts so far:
I know that the standard construction of F is to take an irreducible polynomial f of degree n over GF(p) and then view elements of F as polynomials in the quotient GF(p)[X]/(f), and I have a feeling that this is probably already the right approach, since polynomial multiplication and addition should be easy to implement, but I somehow fail to see how this can actually be done. For example, how would one choose an appropriate f, and how can I get the equivalence class of an arbitrary polynomial?
First pick an irreducible polynomial of degree n over GF[p]. Just generate random ones; a random polynomial is irreducible with probability ~1/n.
To test your random polynomials, you'll need some code to factor polynomials over GF[p], see the wikipedia page for some algorithms.
Then your elements in GF[p^n] are just n-degree polynomials over GF[p]. Just do normal polynomial arithmetic and make sure to compute the remainder modulo your irreducible polynomial.
It's pretty easy to code up simple versions of this scheme. You can get arbitrarily complicated in how you implement, say, the modulo operation. See modular exponentiation, Montgomery multiplication, and multiplication using FFT.
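A simple version of that scheme in Python (a sketch: elements are coefficient lists, lowest degree first, and f is a monic irreducible polynomial of degree n that you are assumed to have already found):

def gf_mul(a, b, f, p):
    n = len(f) - 1
    # schoolbook polynomial multiplication with coefficients mod p
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # reduce modulo f: repeatedly cancel the leading term (f is monic)
    for k in range(len(prod) - 1, n - 1, -1):
        c = prod[k]
        if c:
            for i in range(n + 1):
                prod[k - n + i] = (prod[k - n + i] - c * f[i]) % p
    return (prod + [0] * n)[:n]

# GF(2^3) with f = X^3 + X + 1: (X + 1) * (X^2 + 1) = X^3 + X^2 + X + 1, which reduces to X^2
print(gf_mul([1, 1, 0], [1, 0, 1], [1, 1, 0, 1], p=2))   # [0, 0, 1]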
Whether there is an efficient algorithm to multiply elements in GF(p^n) depends on how you are representing the elements of GF(p^n).
As you say, one way is indeed to work in GF(p)[X]/(f). Addition and multiplication are relatively straightforward here. However, determining a suitable irreducible polynomial f is not easy - as far as I know there isn't an efficient algorithm for calculating a suitable f.
Another way is to use what are called Zech's logarithms. Magma uses pre-computed tables of them for working with small finite fields. It is possible that GAP does too, although its documentation is less clear.
Computing with mathematical structures is often quite tricky. You're certainly not missing anything obvious here.
It depends on your needs and on your field.
When you multiply you have to pick a generator of the multiplicative group of F. When you are adding you have to use the fact that F is a vector space over some smaller F_(p^m). In practice what you do a lot of the time is some mixed approach. E.g. if you are working over F_256, take a generator X of the multiplicative group of F_256, and let G be its minimal polynomial over F_16. You now have
(sum over i < 16 of a_i X^i) * (sum over j < 16 of b_j X^j) = sum over k of (sum over i+j=k of a_i b_j) X^k
All you have to do to make multiplication efficient is store a multiplication table of F_16, and (using G) construct X^m in terms of lower powers of X and elements of F_16.
Finally, in the rare case where p^n = 2^(2^n), you get Conway's field of nimbers (look in Conway's "Winning Ways", or in Knuth's Volume 4A, Section 7.1.3), for which there are very efficient algorithms.
Galois Field Arithmetic Library (C++, mod 2, doesn't look like it supports other prime elements)
LinBox (C++)
MPFQ (C++)
I have no personal experience w/ these, however (have made my own primitive C++ classes for Galois fields of degree 31 or less, nothing too exotic or worth copying). Like one of the commenters mentioned, you might check mathoverflow.net -- just ask nicely and make sure you've done your homework first. Someone there ought to know what kinds of mathematical software is suitable for manipulation of finite fields, and it's close enough to mathoverflow's area of interest that a well-stated question should not get closed down.
Assume this is the question for an algorithm performing multiplication in finite fields when a monic irreducible polynomial f(X) has already been identified (otherwise consider Rabin's test for irreducibility).
You have two polynomials of degree n-1
A(X) = a_0 + a_1*X + a_2*X^2 + ... + a_(n-1)*X^(n-1) and
B(X) = b_0 + b_1*X + b_2*X^2 + ... + b_(n-1)*X^(n-1)
where the coefficients a_k, b_k are taken from the representatives {0,1,...,p-1} of Z/pZ.
The product is defined as
C(X) = A(X)*B(X) % f(X),
where the modulo operator "%" is the remainder of the polynomial division A(X)*B(X) / f(X).
Following is an approach with complexity O(n^2)
1.) By distributive law the product can be decomposed to
B(X) * X^(n-1) * a_(n-1)
+ B(X) * X^(n-2) * a_(n-2)
+ ...
+ B(X) * a_0
=
(...(a_(n-1) * B(X) * X
+ a_(n-2) * B(X)) * X
+ a_(n-3) * B(X)) * X
...
+ a_1 * B(X)) * X
+ a_0 * B(X)
2.) Since the %-operator is a ring homomorphism from Z/pZ[X] onto GF(p^n), it can be applied in each step of the iteration above.
A(X)*B(X) % f(X) =
(...(a_(n-1) * B(X) * X % f(X)
+ a_(n-2) * B(X)) * X % f(X)
+ a_(n-3) * B(X)) * X % f(X)
...
+ a_1 * B(X)) * X % f(X)
+ a_0 * B(X)
3.) After each multiplication with X, i.e. a shift in the coefficient space, you have a polynomial T_k(X) of degree n with leading term t_kn * X^n. Reduction modulo f(X) is done by
T_k(X) % f(X) = T_k(X) - t_kn*f(X),
which is a polynomial of degree n-1.
Finally, with the reduction polynomial
r(X) := f(X) - X^n and
T_k(X) =: t_kn * X^n + U_(n-1)(X)
T_k(X) % f(X) = t_kn * X^n + U_(n-1)(X) - t_kn * (r(X) + X^n)
= U_(n-1)(X) - t_kn * r(X)
i.e. all steps can be done with polynomials of maximum degree n-1.
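Written out in Python, the iteration above looks like this (a sketch: a and b are coefficient lists of length n, lowest degree first, f is the monic irreducible polynomial given as n+1 coefficients, and r(X) = f(X) - X^n is simply the first n coefficients of f):

def gf_mul_horner(a, b, f, p):
    n = len(f) - 1
    r = f[:n]                                   # r(X) = f(X) - X^n
    c = [0] * n                                 # running result, degree < n
    for ak in reversed(a):                      # a_(n-1) first, as in the nesting above
        t = c[n - 1]                            # the coefficient about to be shifted out at X^n
        c = [0] + c[:n - 1]                     # multiply by X
        c = [(ci - t * ri) % p for ci, ri in zip(c, r)]   # subtract t_kn * r(X)
        c = [(ci + ak * bi) % p for ci, bi in zip(c, b)]  # add a_k * B(X)
    return c

# GF(2^3), f = X^3 + X + 1: (X + 1) * (X^2 + 1) reduces to X^2
print(gf_mul_horner([1, 1, 0], [1, 0, 1], [1, 1, 0, 1], p=2))   # [0, 0, 1]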
