Imprecise math in R when dealing with infinite fractions

The deviations from the mean should always sum to 0.
However, when the mean has many decimal digits, possibly infinitely many as with 20/7 here, R fails to calculate the sum exactly.
x <- c(1,2,2,3,3,4,5)
sum(x - mean(x))
[1] -4.440892e-16
I am quite a newbie and have not found any information about this so far; maybe I was not searching for the right terms.
Is it possible to calculate with infinitely long numbers in R?
I am asking this out of theoretical interest.

The problem you have described is a general problem in all programming languages: internally, floating-point numbers follow the IEEE 754 standard, which cannot represent most fractions exactly. You can read more about it here.
As far as I know there is no easy way around these small errors, except for using number representations with higher precision.
EDIT: R already uses the double-precision representation of floating-point numbers. To read more about it, have a look at the R FAQ and this SO question.
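As a minimal illustration of the limitation (my sketch, not part of the original answer): the mean 20/7 is stored as the nearest representable double, and the usual practical remedy is to compare against zero with a tolerance rather than exactly.
x <- c(1, 2, 2, 3, 3, 4, 5)
print(mean(x), digits = 17)             # 2.8571428571428572: the double nearest to 20/7
isTRUE(all.equal(sum(x - mean(x)), 0))  # TRUE: zero up to floating-point tolerance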

If you deal with rational numbers only, such as your example, you can use the gmp package.
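For instance, here is a short sketch using gmp's exact rational class (my example, assuming the gmp package is installed): the mean is kept as the exact fraction 20/7, so the deviations really do sum to zero.
library(gmp)
x <- as.bigq(c(1, 2, 2, 3, 3, 4, 5))  # exact rational representation
m <- sum(x) / length(x)               # exactly 20/7
sum(x - m)                            # exactly 0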
You can use the Rmpfr package to deal with numbers with an arbitrary precision (that you have to set).
Another possibility is the lazyNumbers package, freshly released on CRAN:
library(lazyNumbers)
# create a vector of lazy numbers
x <- lazyvec(c(1, 2, 2, 3, 3, 4, 5))
# compute its mean
m <- sum(x) / length(x)
# sum expected to be 0
y <- sum(x - m)
# convert it to double
as.double(y)
## 0

Related

how many digits does R carry in a numeric calculation (how to increase number of digits in R Numeric) [duplicate]

There is an option in R to get control over digit display. For example:
options(digits=10)
is supposed to give the calculation results in 10 digits until the end of the R session. In R's help file, the digits parameter is defined as follows:
digits: controls the number of digits
to print when printing numeric values.
It is a suggestion only. Valid values
are 1...22 with default 7
So, it says this is a suggestion only. What if I'd like to always display 10 digits, no more and no less?
My second question is: what if I'd like to display more than 22 digits, e.g. 100 digits for more precise calculations? Is it possible with base R, or do I need an additional package/function for that?
Edit: Thanks to jmoy's suggestion, I tried sprintf("%.100f",pi) and it gave
[1] "3.1415926535897931159979634685441851615905761718750000000000000000000000000000000000000000000000000000"
which has 48 decimals. Is this the maximum limit R can handle?
The reason it is only a suggestion is that you could quite easily write a print function that ignored the options value. The built-in printing and formatting functions do use the options value as a default.
As to the second question: since R uses finite-precision arithmetic, your answers aren't accurate beyond 15 or 16 decimal places, so in general more aren't required. The gmp and rcdd packages deal with multiple-precision arithmetic (via an interface to the gmp library), but this is mostly aimed at big integers rather than at more decimal places for your doubles.
Mathematica or Maple will allow you to give as many decimal places as your heart desires.
EDIT:
It might be useful to think about the difference between decimal places and significant figures. If you are doing statistical tests that rely on differences beyond the 15th significant figure, then your analysis is almost certainly junk.
On the other hand, if you are just dealing with very small numbers, that is less of a problem, since R can handle numbers as small as .Machine$double.xmin (usually around 2.2e-308).
Compare these two analyses.
x1 <- rnorm(50, 1, 1e-15)
y1 <- rnorm(50, 1 + 1e-15, 1e-15)
t.test(x1, y1) #Should throw an error
x2 <- rnorm(50, 0, 1e-15)
y2 <- rnorm(50, 1e-15, 1e-15)
t.test(x2, y2) #ok
In the first case, differences between numbers only appear after many significant figures, so the data are "nearly constant". In the second case, although the differences between numbers are the same size, they are large compared to the magnitudes of the numbers themselves.
As mentioned by e3bo, you can use multiple-precision floating point numbers using the Rmpfr package.
mpfr("3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825")
These are slower and more memory intensive to use than regular (double precision) numeric vectors, but can be useful if you have a poorly conditioned problem or unstable algorithm.
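As a quick sketch of how you would actually compute with these (my illustration; precBits sets the binary precision you request):
library(Rmpfr)
m <- mpfr(20, precBits = 120) / 7  # 20/7 carried to ~36 significant decimal digits
m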
If you are producing the entire output yourself, you can use sprintf(), e.g.
> sprintf("%.10f",0.25)
[1] "0.2500000000"
specifies that you want to format a floating-point number with ten decimal places (in %.10f the f is for float and the .10 specifies ten decimal places).
I don't know of any way of forcing R's higher level functions to print an exact number of digits.
Displaying 100 digits does not make sense if you are printing R's usual numbers, since the best accuracy you can get using 64-bit doubles is around 16 decimal digits (look at .Machine$double.eps on your system). The remaining digits will just be junk.
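You can check both limits on your own machine (my addition, not from the original answer):
.Machine$double.eps     # ~2.220446e-16, the machine epsilon for 64-bit doubles
print(pi, digits = 22)  # digits past roughly the 16th are meaningless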
One more solution lets you control how many decimal digits are printed based on need (if you don't want to print redundant zeros). For example, suppose you have a vector elements and would like its sum:
elements <- c(-1e-05, -2e-04, -3e-03, -4e-02, -5e-01, -6e+00, -7e+01, -8e+02)
sum(elements)
## -876.5432
Apparently the last digit, 1, has been truncated; the ideal result should be -876.54321. But a fixed decimal-printing option, e.g. sprintf("%.10f", sum(elements)), generates redundant zeros: -876.5432100000. Following the tutorial here: printing decimal numbers — if you can work out how many decimal digits a given number has (here -876.54321 has 5), you can pass that as a parameter to the formatC function as below:
decimal_length <- 5
formatC(sum(elements), format = "f", digits = decimal_length)
## -876.54321
We can change decimal_length for each query, so it can satisfy different decimal-printing requirements.
If you work primarily with tibbles, there is a function that enforces digits: num().
Here is an example:
library(tidyverse)
data <- tribble(
  ~weight, ~weight_selfreport,
  81.5,  81.66969147005445,
  72.6,  72.59528130671505,
  92.9,  93.01270417422867,
  79.4,  79.4010889292196,
  94.6,  96.64246823956442,
  80.2,  79.4010889292196,
  116.2, 113.43012704174228,
  95.4,  95.73502722323049,
  99.5,  99.8185117967332
)
data <- data %>%
  mutate(across(where(is.numeric), ~ num(., digits = 3)))
data
#> # A tibble: 9 × 2
#> weight weight_selfreport
#> <num:.3!> <num:.3!>
#> 1 81.500 81.670
#> 2 72.600 72.595
#> 3 92.900 93.013
#> 4 79.400 79.401
#> 5 94.600 96.642
#> 6 80.200 79.401
#> 7 116.200 113.430
#> 8 95.400 95.735
#> 9 99.500 99.819
Thus you can even decide to have different rounding options depending on what your needs are. I find it very helpful and a rather quick solution for printing data frames.

How can I compute ratios of large factorials without unnecessary overflows?

I'm writing a program (in R, in case that matters) in which I need to compute the number of unique permutations of a vector of elements, which can contain repeated values. The mathematical formula for this is straightforward: the factorial of the total number of elements divided by the product of the factorials of the counts of each unique element. However, calculating the result naively is very likely to lead to overflows even when the actual answer is not very large. For example:
# x has 200 elements, but 199 of them are identical
x <- c(rep(1, 199), 2)
num_unique_permutations <- factorial(length(x)) / prod(factorial(table(x)))
If this didn't overflow, then num_unique_permutations would be 200!/(199!*1!) = 200. However, both 200! and 199! overflow the max value of a double, so the actual result is NaN. Is there a good way to do this calculation that will always avoid overflows (or underflows) as long as the answer itself doesn't overflow? (Or perhaps, as long as it doesn't come within a factor of length(x) of overflowing?)
(Note that R uses doubles for most numerical calculations, but the problem is not specific to doubles. Any numeric type with a range has the same problem. Also, I don't care about losing a bit of precision to floating point math, since I'm just using this to get a rough upper bound on something.)
In base R, use lfactorial to compute the logarithms of the numerator and of the denominator, then exponentiate the difference.
numer <- lfactorial(length(x))
denom <- sum(lfactorial(table(x)))
exp(numer - denom)
#[1] 200
This can be easily written as a function.
num_unique_permutations <- function(x){
numer <- lfactorial(length(x))
denom <- sum(lfactorial(table(x)))
exp(numer - denom)
}
num_unique_permutations(x)
#[1] 200
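One small caveat (my addition): exp(numer - denom) is computed in floating point, so for large inputs you may want to round the result back to an integer, since the true count is always a whole number.
round(num_unique_permutations(x))
#[1] 200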
You can use the gmp library.
library(gmp)
factorial(as.bigz(length(x))) / prod(factorial(as.bigz(table(x))))
#[1] 200

Is there a way to handle calculations involving exponentials of big values in R?

I have looked around online and on this site but did not find a solution. My problem is relatively simple, so if you could point me to a possible solution, much appreciated.
test_vec <- c(2,8,709,600)
mean(exp(test_vec))
test_vec_bis <- c(2,8,710,600)
mean(exp(test_vec_bis))
exp(709)
exp(710)
# R's doubles overflow just above exp(709); exp(710) returns Inf
How can I calculate the mean of my vector and deal with the Inf values, knowing that R could probably represent the mean itself but not every term in the numerator of the mean calculation?
There is an edge case where you can solve your problem by simply restating it mathematically, but it only helps when your vector is long and/or your largest exponents are just past the numeric limit: since the mean sum(x)/n can be written as sum(x/n), and since exp(x)/exp(y) = exp(x - y), you can calculate sum(exp(x - log(n))), which buys you headroom of log(n).
mean(exp(test_vec))
[1] 2.054602e+307
sum(exp(test_vec - log(length(test_vec))))
[1] 2.054602e+307
sum(exp(test_vec_bis - log(length(test_vec_bis))))
[1] 5.584987e+307
While this works for your example, most likely this won't work for your real vector.
In this case, you will have to consult packages like Rmpfr as suggested by #fra.
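A more general workaround, if the logarithm of the mean is an acceptable result, is the standard log-sum-exp trick (my sketch, not from the original answers): factor the largest exponent out of the sum so that every exp() argument is non-positive.
# Computes log(mean(exp(x))) without overflow
log_mean_exp <- function(x) {
  m <- max(x)                                 # largest exponent
  m + log(sum(exp(x - m))) - log(length(x))   # exp() arguments are all <= 0
}
log_mean_exp(c(2, 8, 710, 600))
[1] 708.6137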
Here's one way, where you filter test_vec to keep only the values whose exponential is finite (< Inf):
mean(exp(test_vec)[which(exp(test_vec) < Inf)])
[1] 1.257673e+260
t2 <- c(2,8,600)
mean(exp(t2))
[1] 1.257673e+260
This assumes you were looking to exclude values that result in Inf, of course.

Converting matrix multiplication and sum function from Matlab to R

I'm converting a rather complicated set of code from Matlab to R. I have zero experience in Matlab and am a functioning novice in R.
I have a segment of code which reads (in matlab):
dSii=(sum(tao.*Sik,1))'-(sum(m'))'.*Sii-beta.*Sii./N.*(Iii+sum(Iik)');
Which I've simplified and will focus on the first segment (if I can solve the first segment I'm confident I can perform the rest):
J = (sum(A.*B,1))' - ...
tao (or A) and Sik (or B) are matrices. So my assumption is I'm performing matrix multiplication here (A * B) and summing the resultant column. The '1' is what is throwing me off in that statement. In R, that 1 would likely indicate we're talking about a sum over rows as opposed to columns (indicated by 2). But I can't find any supporting documentation for that kind of Matlab statement.
I was thinking of using a statement like this (but of course, too many '1's and ',')
J<- (apply(A*B, 1), 1, sum)
Thanks for all your help. I searched for other examples here and elsewhere and couldn't find an answer. I'm willing to work for it but this is akin to me studying French (which I don't know) to translate in Spanish (which I'm moderate in) while interpreting the whole process in English. :D
Because of the different conventions in R and Matlab, the idiosyncrasies have to be learned for each (just like your language analogy!). The Matlab command sum(A.*B,1) means: multiply A and B element-wise (so they must be the same shape), then sum along dimension 1, i.e. sum down each column to get the column sums. Dimension 1 is the default, so sum(A.*B) would do the same thing as sum(A.*B,1). Because R's * operator is likewise element-wise for matrices, the following Matlab and R codes produce the same column of numbers in J:
Matlab:
A=[[1,2,3];[4,5,6];[7,8,9]];
B=[[10,11,12];[13,14,15];[16,17,18]];
J=sum(A.*B,1)'; %the ' means to transpose the column sums to be a 3x1 matrix
R:
A <- matrix(c(1, 2, 3, 4, 5, 6, 7, 8, 9), 3, byrow = TRUE)
B <- matrix(c(10, 11, 12, 13, 14, 15, 16, 17, 18), 3, byrow = TRUE)
J <- matrix(colSums(A * B))  # no transpose needed here: nrow(J) == 3
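As a quick sanity check (my addition, using the matrices above), both versions produce the same column sums:
colSums(A * B)
[1] 174 228 288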
