Adding floating point precision to qnorm/pnorm? - r

I would be interested in increasing the floating point precision used when calculating qnorm/pnorm beyond its current limit, for example:
x <- pnorm(10) # 1
qnorm(x) # Inf
qnorm(.9999999999999999444) # The highest value I've found that still returns a non-Inf number
Is that possible to do (in a reasonable amount of time)? If so, how?

If the argument is way in the upper tail, you should be able to get better precision by calculating 1-p. Like this:
> x = pnorm(10, lower.tail=F)
> qnorm(x, lower.tail=F)
10
I would expect (though I don't know for sure) that the pnorm() function calls a C or Fortran routine that is limited to whatever floating point size the hardware supports. It's probably better to rearrange your problem so the extra precision isn't needed.
Then, if you're dealing with really really big z-values, you can use log.p=T:
> qnorm(pnorm(100, low=F, log=T), low=F, log=T)
100
Sorry this isn't exactly what you're looking for, but I think it will be more scalable -- pnorm hits 1 so rapidly at high z-values (the tail decays like e^(-x^2/2), after all) that even if you add more bits they will run out fast.
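To get a sense of how quickly that happens, here is a short R sketch (printed values are approximate):
pnorm(10, lower.tail = FALSE)                # ~7.6e-24: still representable as a double
pnorm(40, lower.tail = FALSE)                # 0: the true tail (roughly 1e-350) underflows
pnorm(40, lower.tail = FALSE, log.p = TRUE)  # ~ -804.6: the log scale keeps the information
qnorm(pnorm(40, lower.tail = FALSE, log.p = TRUE), lower.tail = FALSE, log.p = TRUE)  # recovers 40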

Related

Efficiently finding the closest zero of an arbitrary function

In summary, I am trying to start at a given x and find the nearest point in the positive direction where f(x) = 0. For simplicity, solutions are only needed in the interval [initial_x, maximum_x] (the maximum is given), but a larger search range would be even better. Additionally, no specific precision is mandatory; I would like to maximize it, but not at the cost of performance.
While this seems simple, there are a few caveats that make the solution more difficult.
Performance is the first priority, even over some precision. The zero needs to be found in the fewest possible calls to f(x), as this code will be run many times per second.
There are not guaranteed to be any specific number of zeros on this line. There may be zero, one, or many places that the function intersects the x-axis. (This is why a direct binary search will not work.)
The function f(x) cannot be manipulated algebraically, only supporting numerical evaluation at a discrete point. (This is why the solution cannot be found analytically.)
My current strategy is to define a step size that is within an acceptable loss of precision and then test in increments until an interval is found on which there is guaranteed to be at least one zero (in [a,b], a and b are on opposite sides of 0). From there, I use a binary search to narrow down the (more) exact point.
// assuming initial_y != 0
initial_y = f(x);
while (x < maximum_x) {
    y = f(x);
    // test whether y has crossed 0 relative to the starting sign
    if (initial_y > 0) {
        if (y < 0) {
            return binary_search(x - step_size, x);
        }
    } else {
        if (y > 0) {
            return binary_search(x - step_size, x);
        }
    }
    x += step_size;
}
This has several disadvantages, mainly the fact that there is a significant trade-off between resolution and performance (the smaller step_size is, the better it works but the longer it takes). Is there a more efficient formula or strategy I can use? I thought of using the value of y to scale the step size, but I cannot figure out how to preserve precision while doing that.
The solution can be in any language because I am looking more for a strategy to find the zeros, than a specific program.
(edit:)
The function above is assumed to be continuous.
To clarify the question, I understand that this problem may be impossible to solve exactly. I am just asking for ways to improve the speed or precision of the algorithm. The one I am currently using works quite well, even though it fails in many edge cases.
For example, a solution that requires fewer steps with similar precision or another algorithm that increases the precision or reliability with some performance impact would both be extremely helpful.
Your problem is essentially impossible to solve in the general case. For example, no algorithm can find the "first" root of sin(1/x), starting from x=0.
A tentative answer is exponential search, i.e. starting from a small step and increasing it in a geometric progression rather than an arithmetic one, until you find a change of sign. But this will fail if the first root is closer than the initial step, or if the first root is closely followed by a second one so that the sign change is skipped.
Without any information on the behavior of f, I would not try anything beyond a "standard" root finder; it is too hopeless! (But I am sure you do have some information.)
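For what it's worth, here is a minimal R sketch of that geometric-step idea; the function, bounds and growth factor are just placeholders, and uniroot() is used for the bracketed refinement once a sign change is found:
find_zero <- function(f, x0, x_max, step = 1e-6, grow = 2) {
  y0 <- f(x0)
  x  <- x0
  while (x < x_max) {
    x_next <- min(x + step, x_max)
    if (sign(f(x_next)) != sign(y0)) {
      # sign change found: refine inside the bracket [x, x_next]
      return(uniroot(f, c(x, x_next))$root)
    }
    x    <- x_next
    step <- step * grow   # geometric growth instead of a fixed increment
  }
  NA   # no sign change detected before x_max
}
find_zero(cos, 0, 10)   # ~ pi/2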

Managing floating point accuracy

I'm struggling with floating point accuracy issues and could not find a solution.
Here is a short example:
aa<-c(99.93029, 0.0697122)
aa
[1] 99.9302900 0.0697122
aa[1]
99.93029
print(aa[1],digits=20)
99.930289999999999
It would appear that, upon storing the vector, R converted the numbers to something with a slightly different internal representation (yes, I have read circle 1 of the "R inferno" and similar material).
How can I force R to store the input values exactly "as is", with no modification?
In my case, my problem is that the values are processed in such a way that the small errors very quickly grow:
aa[2]/(100-aa[1])*100
[1] 100.0032 ## Should be 100, of course!
print(aa[2]/(100-aa[1])*100,digits=20)
[1] 100.00315593171625
So I need to find a way to get my normalization right.
Thanks
PS- There are many questions on this site and elsewhere, discussing the issue of apparent loss of precision, i.e. numbers displayed incorrectly (but stored right). Here, for instance:
How to stop read.table from rounding numbers with different degrees of precision in R?
This is a distinct issue, as the number is stored incorrectly (but displayed right).
(R version 3.2.1 (2015-06-18), win 7 x64)
Floating point precision has always generated lots of confusion. The crucial idea to remember is: when you work with doubles, there is no way to store each real number "as is", or "exactly right" -- the best you can store is the closest available approximation. So when you type (in R or any other modern language) something like x = 99.93029, you'll get this number represented by 99.930289999999999.
Now when you expect a + b to be "exactly 100", you're being imprecise in your terms. The best you can get is "100 up to N digits after the decimal point" and hope that N is big enough. In your case it would be correct to say 99.9302900 + 0.0697122 is 100 with 5 decimal places of accuracy. Naturally, multiplying that equality by 10^k loses another k digits of accuracy.
So, there are two solutions here:
a. To get more precision in the output, provide more precision in the input.
bb <- c(99.93029, 0.06971)
print(bb[2]/(100-bb[1])*100, digits = 20)
[1] 99.999999999999119
b. If double precision is not enough (which can happen in complex algorithms), use packages that provide extra-precision arithmetic. For instance, the package gmp.
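For example, a rough sketch with gmp (assuming the package is installed), keeping the inputs as exact rationals rather than doubles:
library(gmp)
a1 <- as.bigq(9993029, 100000)    # 99.93029 held exactly as a rational
a2 <- as.bigq(697122, 10000000)   # 0.0697122 held exactly as a rational
a2 / (as.bigq(100) - a1) * as.bigq(100)   # 697122/6971, i.e. about 100.00316:
                                          # even exact arithmetic gives this, so the 0.00316 comes from the inputs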
I guess you have misunderstood here. It's the same case: R is storing the correct value, but the value is displayed according to the digits option chosen when displaying it.
For example:
# the output of below will be:
> print(99.930289999999999,digits=20)
[1] 99.930289999999999395
But
# the output of:
> print(1,digits=20)
[1] 1
Also
> print(1.1,digits=20)
[1] 1.1000000000000000888
In addition to the previous answers, I think a good read on the subject is
R Inferno, by P.Burns
http://www.burns-stat.com/documents/books/the-r-inferno/

Looking for good scale factor for converting log to 8.8 fixed point

I have a range of numbers in (0, 1]
I would like to take the natural log of these numbers, and then store them as 8.8 fixed point.
My formula is K*ln(x) + (1<<16), but I am not sure what the best value is for K.
My thinking is that if x doubles, then ln(x) increases by ln(2), so the fixed point value should increase by 1.0 in fixed point (i.e. by 256 in the raw representation)
So, this would mean K = 256/ln(2)
Does this make sense?
As x approaches 0, ln(x) will diverge to negative infinity. So you are essentially trying to map an infinite domain to a finite range.
If you do so in a linear way, you have to cut off at some point. If you choose your cut-off at too low a value, you'll be wasting precision for the numbers you represent. If you choose too high a cut-off, too many values will be clamped to the minimal element of the range. Without knowledge about the distribution of the points, it will be very hard to guess a suitable balance here.
So perhaps you could apply a non-linear map instead of the linear one you proposed. Something like the exponential function? Which would mean you'd actually store x instead of ln(x). So I'd say if you want to store values from [0,1) in 16 bit without too much loss of information, you'd just use Q0.16, i.e. all the digits in the fractional part. For (0,1] you can either store 1 − x or do a special case for x = 1 so that you encode that as 0 instead. If you have Q8.8 numbers, you'd multiply your numbers by 2^8 = 256 first, but if you have access to the bit representation that multiplication would be a waste of time.
I guess you had a reason you'd want to store logarithms, so this answer may not be what you were hoping for. I don't see an easier way around the underlying problem, though, so you may have to reconsider some of your ideas.
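To make the trade-off concrete, here is a hypothetical R sketch of both encodings: the question's K*ln(x) + (1<<16) scheme with K = 256/ln(2) (so doubling x moves the code by 256, i.e. by 1.0 in Q8.8), and the Q0.16 alternative suggested above. All names and test values are just for illustration.
K         <- 256 / log(2)
enc_log88 <- function(x) pmin(pmax(round(K * log(x)) + 65536, 0), 65535)
dec_log88 <- function(code) exp((code - 65536) / K)
enc_q016  <- function(x) pmin(round(x * 65536), 65535)   # x = 1 clamps to the largest code
dec_q016  <- function(code) code / 65536

x <- c(1e-20, 0.01, 0.5, 1)
rbind(log88 = dec_log88(enc_log88(x)), q016 = dec_q016(enc_q016(x)))
# log88 keeps roughly constant relative error down to about 2^-256 (~1e-77), then clamps
# (and the +65536 bias means x = 1 itself clamps and decodes to ~0.997);
# q016 keeps constant absolute error, so anything much below 1/65536 collapses to 0.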

Understanding Floating point precision analysis for Parallel Reduction

I am trying to analyze how (parallel) reduction can be used to add a large array of floating point numbers, and the precision loss involved in it. Reduction will definitely help in getting more precision compared to serial addition. I'll be really thankful if you can direct me to some detailed source or provide some insight for this analysis. Thanks.
Every primitive floating point operation will have a rounding error; if the result is x then the rounding error is <= c * abs (x) for some rather small constant c > 0.
If you add 1000 numbers, that takes 999 additions. Each addition has a result and a rounding error. The rounding error is small when the result is small. So you want to adjust the order of additions so that the average absolute value of the result is as small as possible. A binary tree is one method. Sorting the values, then adding the smallest two numbers and putting the result back into the sorted list is also quite reasonable. Both methods keep the average result small, and therefore keep the rounding error small.
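A small R sketch of both ideas (the numbers are chosen to make the effect visible):
# order matters: add the small terms first so they are not rounded away one by one
x <- c(1, rep(1e-16, 1e4))
Reduce(`+`, x) - 1          # 0: each 1e-16 is lost against the running total of 1
Reduce(`+`, sort(x)) - 1    # ~1e-12: the small terms accumulate before meeting the 1

# a pairwise (binary-tree) reduction, as in the parallel case
pairwise_sum <- function(v) {
  if (length(v) == 1) return(v)
  mid <- length(v) %/% 2
  pairwise_sum(v[1:mid]) + pairwise_sum(v[(mid + 1):length(v)])
}
pairwise_sum(x) - 1         # ~1e-12 even without sorting: the tree keeps every partial sum small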

log-sum-exp trick why not recursive

I have been researching the log-sum-exp problem. I have a list of numbers stored as logarithms which I would like to sum and store in a logarithm.
the naive algorithm is
import math

def naive(listOfLogs):
    return math.log10(sum(10**x for x in listOfLogs))
many websites including:
logsumexp implementation in C?
and
http://machineintelligence.tumblr.com/post/4998477107/
recommend using
def recommend(listOfLogs):
    maxLog = max(listOfLogs)
    return maxLog + math.log10(sum(10**(x-maxLog) for x in listOfLogs))
aka
def recommend(listOfLogs):
    maxLog = max(listOfLogs)
    return maxLog + naive((x-maxLog) for x in listOfLogs)
What I don't understand is: if the recommended algorithm is better, why don't we call it recursively? Would that provide even more benefit?
def recursive(listOfLogs):
    maxLog = max(listOfLogs)
    return maxLog + recursive((x-maxLog) for x in listOfLogs)
And while I'm asking: are there other tricks to make this calculation more numerically stable?
Some background for others: when you're computing an expression of the following type directly
ln( exp(x_1) + exp(x_2) + ... )
you can run into two kinds of problems:
exp(x_i) can overflow (x_i is too big), resulting in numbers that you can't add together
exp(x_i) can underflow (x_i is too small), resulting in a bunch of zeroes
If all the values are big, or all are small, we can divide by some exp(const) and add const to the outside of the ln to get the same value. Thus if we can pick the right const, we can shift the values into some range to prevent overflow/underflow.
The OP's question is, why do we pick max(x_i) for this const instead of any other value? Why don't we recursively do this calculation, picking the max out of each subset and computing the logarithm repeatedly?
The answer: because it doesn't matter.
The reason? Let's say x_1 = 10 is big, and x_2 = -10 is small. (These numbers aren't even very large in magnitude, right?) The expression
ln( exp(10) + exp(-10) )
will give you a value very close to 10. If you don't believe me, go try it. In fact, in general, ln( exp(x_1) + exp(x_2) + ... ) will be very close to max(x_i) if some particular x_i is much bigger than all the others. (As an aside, this functional form, asymptotically, actually lets you mathematically pick the maximum from a set of numbers.)
Hence, the reason we pick the max instead of any other value is because the smaller values will hardly affect the result. If they underflow, they would have been too small to affect the sum anyway, because it would be dominated by the largest number and anything close to it. In computing terms, the contribution of the small numbers will be less than an ulp after computing the ln. So there's no reason to waste time computing the expression for the smaller values recursively if they will be lost in your final result anyway.
If you wanted to be really persnickety about implementing this, you'd divide by exp(max(x_i) - some_constant) or so to 'center' the resulting values around 1 to avoid both overflow and underflow, and that might give you a few extra digits of precision in the result. But avoiding overflow is much more important than avoiding underflow, because the former determines the result and the latter doesn't, so it's much simpler just to do it this way.
Not really any better to do it recursively. The problem's just that you want to make sure your finite-precision arithmetic doesn't swamp the answer in noise. By dealing with the max on its own, you ensure that any junk is kept small in the final answer because the most significant component of it is guaranteed to get through.
Apologies for the waffly explanation. Try it with some numbers yourself (a sensible list to start with might be [1E-5,1E25,1E-5]) and see what happens to get a feel for it.
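For instance, with that list treated as base-10 logs (sketched in R; the effect is the same in any language):
logs <- c(1e-5, 1e25, 1e-5)
log10(sum(10^logs))                 # Inf: 10^1e25 overflows a double
m <- max(logs)
m + log10(sum(10^(logs - m)))       # 1e25: the two tiny terms underflow to 0, which is fine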
As you have defined it, your recursive function will never terminate. That's because ((x-maxLog) for x in listOfLogs) still has the same number of elements as listOfLogs.
I don't think that this is easily fixable either, without significantly impacting either the performance or the precision (compared to the non-recursive version).

Resources