How to calculate any negative number to the power of some fraction in R?

In the following code:
(-8/27)^(2/3)
I got the result NaN, despite the fact that the correct result should be 4/9 or .444444....
So why does it return NaN? And how can I have it return the correct value?

As documented in help("^"):
Users are sometimes surprised by the value returned, for example
why ‘(-8)^(1/3)’ is ‘NaN’. For double inputs, R makes use of IEC
60559 arithmetic on all platforms, together with the C system
function ‘pow’ for the ‘^’ operator. The relevant standards
define the result in many corner cases. In particular, the result
in the example above is mandated by the C99 standard. On many
Unix-alike systems the command ‘man pow’ gives details of the
values in a large number of corner cases.
So you need to do the operations separately:
R> ((-8/27)^2)^(1/3)
[1] 0.4444444
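More generally, you can split the sign from the magnitude yourself. Here is a minimal sketch (real_pow is a hypothetical helper, not a base R function) for x^(p/q) with integer p and odd q, which is exactly the case where a real root exists:
real_pow <- function(x, p, q) {
  stopifnot(q %% 2 == 1)        # for even q a negative base has no real root
  sign(x)^p * abs(x)^(p / q)    # exponentiate the magnitude, restore the sign
}
real_pow(-8/27, 2, 3)
# [1] 0.4444444
real_pow(-8/27, 1, 3)
# [1] -0.6666667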

Here's the operation in the complex domain, which R does support:
(-8/27+0i)^(2/3)
[1] -0.2222222+0.3849002i
Test:
> ((-8/27+0i)^(2/3) )^(3/2)
[1] -0.2962963+0i
> -8/27 # check
[1] -0.2962963
Furthermore the complex conjugate is also a root:
(-0.2222222-0.3849002i)^(3/2)
[1] -0.2962963-0i
To the question what is the third root of -8/27:
polyroot( c(8/27,0,0,1) )
[1] 0.3333333+0.5773503i -0.6666667-0.0000000i 0.3333333-0.5773503i
The middle value is the real root. Since you are saying -8/27 = x^3, you are really asking for the solutions of the cubic equation:
0 = 8/27 + 0*x + 0*x^2 + x^3
The polyroot function takes those four coefficients in order of increasing degree and returns all the roots, complex and real.
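As a quick sanity check (not part of the original answer), each root returned by polyroot should cube back to -8/27, and the real root can be picked out by its negligible imaginary part:
r <- polyroot(c(8/27, 0, 0, 1))
round(r^3, 7)             # all three cubes equal -0.2962963, i.e. -8/27
Re(r)[abs(Im(r)) < 1e-9]  # the real root, -0.6666667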

Related

Numerical problems with qnorm

I'm having a numeric issue using qnorm(psn()). The problem is numerical. First, the Skew-Normal CDF rounds its result: the true value of psn(9) is not exactly 1, yet R returns 1:
library(sn)
psn(9)
#[1] 1
then
qnorm(psn(9))
#[1] Inf
And see that:
qnorm(.9999999999999999)
#[1] 8.209536
qnorm(.99999999999999999)
#[1] Inf
Note that 8.209536 is not that large, so this rounding loses a great deal of precision.
Then, my final problem is the calculation of qnorm(psn()), that is part of my Copula density. Any hint on how can I avoid these numerical problems?
(This is not a resolution to your dilemma, more of an explanation of why I think you're seeing this and perhaps not likely to find an easy solution.)
I think this is getting into the realm where normal floating-point precision isn't going to work for you. For instance, doing the inverse of your function:
options(digits=22)
pnorm(8.209536)
# [1] 0.99999999999999989
pnorm(8.209536) - 1
# [1] -1.1102230246251565e-16
which is very close to
.Machine$double.eps
# [1] 2.2204460492503131e-16
which, according to ?.Machine, is
double.eps: the smallest positive floating-point number 'x' such that
'1 + x != 1'. It equals 'double.base ^ ulp.digits' if either
'double.base' is 2 or 'double.rounding' is 0; otherwise, it
is '(double.base ^ double.ulp.digits) / 2'. Normally
'2.220446e-16'.
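You can see that definition in action directly (a minimal check):
1 + .Machine$double.eps == 1      # FALSE: eps is just large enough to register
1 + .Machine$double.eps / 2 == 1  # TRUE: half of eps rounds back to 1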
It might be possible to translate what you need into higher-precision using auxiliary packages like gmp or Rmpfr. (I don't know if they support qnorm-like operations.)
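One workaround that stays in double precision, sketched here with plain qnorm(): if you can obtain the upper-tail probability directly (whether sn's psn() exposes a tail or log version is an assumption I have not checked), then qnorm(..., lower.tail = FALSE) avoids ever computing 1 - p:
p_upper <- 1e-17                    # upper-tail probability, assumed known exactly
qnorm(1 - p_upper)                  # 1 - 1e-17 rounds to 1, so this is Inf
qnorm(p_upper, lower.tail = FALSE)  # finite and accurate, roughly 8.49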

Loss of decimal places when calculating mean in R

I have a list entitled SET1Bearing1slope with nine numbers, each with at least 10 decimal places. When I use the mean() function on the list I get one arithmetic mean, yet if I list the numbers individually and then use the mean() function, I get a different output.
I know that this is caused by rounding and that the second mean is more accurate. Is there a way to avoid this issue? What method can I use to avoid rounding errors when calculating the mean?
In R, mean() expects a vector of values, not multiple values. It is also a generic function so it is tolerant of additional parameters it doesn't understand (but doesn't warn you about them). See
mean(c(1,5,6))
# [1] 4
mean(1, 5, 6) # only 1 is used as the data; 5 and 6 are silently matched to other arguments
# [1] 1
So in your example there are no rounding errors, you are just calling the function incorrectly.
Look at the difference in the way you're calling the function:
mean(c(1,2,5))
[1] 2.666667
mean(1,2,5)
[1] 1
As pointed out by MrFlick, in the first case you're passing a vector of numbers (the correct way); in the second, you're passing a list of arguments, and just the first one is used as the data.
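You can see exactly where the extra numbers go by matching the call against mean.default's signature (x, trim, na.rm):
match.call(mean.default, call("mean", 1, 2, 5))
# mean(x = 1, trim = 2, na.rm = 5)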
As for the number of digits, you can specify it using options():
options(digits = 10)
x <- runif(10)
x
[1] 0.49957540398 0.71266139182 0.07266473584 0.90541790240 0.41799820261
[6] 0.59809536533 0.88133668737 0.17078919476 0.92475634208 0.48827998806
mean(x)
[1] 0.5671575214
But remember that a greater number of digits is not necessarily better. There's a reason why R and other languages limit the number of digits displayed. Check this topic: https://en.wikipedia.org/wiki/Significance_arithmetic
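Also note that options(digits = ...) changes only how values are printed, not what is stored; a quick check:
x <- 1/3
options(digits = 7); x   # [1] 0.3333333
options(digits = 15); x  # [1] 0.333333333333333 -- the same double, shown longer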

R Exponent Produces NaN

I am running into an issue when I exponentiate floating point data. It seems like it should be an easy fix. Here is my sample code:
temp <- c(-0.005220092)
temp^1.1
[1] NaN
-0.005220092^1.1
[1] -0.003086356
Is there some obvious error I am making with this? It seems like it might be an oversight on my part with regard to exponents.
Thanks,
Alex
The NaN arises because the result of the exponentiation is complex (a negative base raised to a non-integer power), so you have to pass a complex argument:
as.complex(temp)^1.1
[1] -0.002935299-0.000953736i
# or
(temp + 0i)^1.1
[1] -0.002935299-0.000953736i
Your second expression gives a result because unary - has lower precedence than ^: it is parsed as -(0.005220092^1.1). See ?Syntax.
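If what you actually want is a real-valued, sign-preserving power (an assumption about your intent), one sketch is to apply the exponent to the magnitude and put the sign back afterwards:
temp <- c(-0.005220092)
sign(temp) * abs(temp)^1.1
# [1] -0.003086356, the same value as the unparenthesized expression above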

acos(1) returns NaN for some values, not others

I have a list of latitude and longitude values, and I'm trying to find the distance between them. Using a standard great circle method, I need to find:
acos(sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2) * cos(long2-long1))
And multiply this by the radius of earth, in the units I am using. This is valid as long as the values we take the acos of are in the range [-1,1]. If they are even slightly outside of this range, it will return NaN, even if the difference is due to rounding.
The issue I have is that sometimes, when two lat/long values are identical, this gives me an NaN error. Not always, even for the same pair of numbers, but always the same ones in a list. For instance, I have a person stopped on a road in the desert:
Time |lat |long
1:00PM|35.08646|-117.5023
1:01PM|35.08646|-117.5023
1:02PM|35.08646|-117.5023
1:03PM|35.08646|-117.5023
1:04PM|35.08646|-117.5023
When I calculate the distance between the consecutive points, the third value, for instance, will always be NaN, even though the others are not. This seems to be a weird bug with R rounding.
Can't tell exactly without seeing your data (try dput), but this is most likely a consequence of R FAQ 7.31.
(x1 <- 1)
## [1] 1
(x2 <- 1+1e-16)
## [1] 1
(x3 <- 1+1e-8)
## [1] 1
acos(x1)
## [1] 0
acos(x2)
## [1] 0
acos(x3)
## [1] NaN
That is, even if your values are so similar that their printed representations are the same, they may still differ: some will be within .Machine$double.eps and others won't ...
One way to make sure the input values are bounded by [-1,1] is to use pmax and pmin: acos(pmin(pmax(x,-1.0),1.0))
A simple workaround is to use pmin(), like this:
acos(pmin(sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2) * cos(long2-long1),1))
This ensures that precision loss can never push the argument above exactly 1.
This doesn't explain what is happening, however.
(Edit: Matthew Lundberg pointed out I need to use pmin to get it to work with vectorized inputs. This fixes the problem with getting it to work, but I'm still not sure why it is rounding incorrectly.)
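Wrapping the idiom in a small helper (safe_acos is a hypothetical name) keeps the clamping in one place:
safe_acos <- function(x) acos(pmin(pmax(x, -1), 1))
safe_acos(1 + 1e-8)  # 0 instead of NaN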
I just encountered this. It is caused by input slightly larger than 1: due to computational error, my inner product between unit vectors comes out a bit above 1 (like 1 + 0.00001), and acos() only accepts values in [-1, 1]. Clamping the upper bound to exactly 1 solves the problem.
For numpy: np.clip(your_input, -1, 1)
For Pytorch: torch.clamp(your_input, -1, 1)
