Why does R display the wrong number with large numbers?

When I save a large number in R as an object, the wrong number is stored. Why is that?
options("scipen"=100, "digits"=4)
num <- 201912030032451613
num
#> [1] 201912030032451616
Created on 2019-12-12 by the reprex package (v0.2.1.9000)

As @Roland says, this is a floating point issue (the Wikipedia page on floating point numbers is as good a reference as any). Unpacking it a bit, though: R has a specific integer format, but it is limited to 32-bit integers:
> str(-2147483647L)
int -2147483647
> str(2147483647L)
int 2147483647
> str(21474836470L)
num 21474836470
Warning message:
non-integer value 21474836470L qualified with L; using numeric value
So, when R gets your number it stores it as a floating point number, not an integer. Floating point numbers are limited in how much precision they can store, typically about 15-17 significant decimal digits. Because your number has more significant digits than that, there is loss of precision. Losing precision in the smallest digits usually doesn't matter for computer arithmetic, but if your big number is a key of some kind (or a timestamp) then you are in more trouble. The bit64 package is designed with this kind of use case in mind (see the sketch below), or you could import the value as a string, depending on what you want to do.
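A minimal sketch of the bit64 route, assuming the value arrives as a character string (a plain numeric literal would already be rounded before bit64 ever saw it):
library(bit64)
num <- as.integer64("201912030032451613")  # parse from character, not a double literal
num
#> integer64
#> [1] 201912030032451613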

Related

Do not want large numbers to be rounded off in R

options(scipen=999)
625075741017804800
625075741017804806
When I type the above into the R console, I get the same output for the two numbers listed above: 625075741017804800
How do I avoid that?
Numbers greater than 2^53 are not going to be unambiguously stored in R's numeric-classed vectors. There was a recent change to allow integer values to be stored exactly in the numeric type up to that limit, but your number is larger than that increased capacity for precision:
625075741017804806 > 2^53
[1] TRUE
Prior to that change, integers could only be stored up to .Machine$integer.max == 2147483647. Numbers larger than that value are silently coerced to the 'numeric' class. You will either need to work with them as character values or install a package capable of arbitrary precision. Rmpfr and gmp are two that come to mind.
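A quick way to see that 2^53 boundary directly, in plain base R:
2^53 == 2^53 + 1   # TRUE: 2^53 + 1 is not representable, it rounds back to 2^53
2^52 == 2^52 + 1   # FALSE: below 2^53 every integer is still exact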
You can use package Rmpfr for arbitrary precision:
library(Rmpfr)
dig <- mpfr("625075741017804806")
print(dig, 18)
# 1 'mpfr' number of precision 60 bits
# [1] 6.25075741017804806e17

Calculations precision level in R

I am working in R with very small numbers that reflect probabilities in a Maximum Likelihood Estimation algorithm. Some of these numbers are as small as 1e-155 (or smaller). However, when something as simple as a summation takes place, the precision gets truncated to that of the least precise term, which ruins the precision of my calculations and produces meaningless results.
Example:
> sum(c(7.831908e-70,6.002923e-26,6.372573e-36,5.025015e-38,5.603268e-38,1.118121e-14, 4.512098e-07,4.400717e-05,2.300423e-26,1.317602e-58))
[1] 4.445838e-05
As seen in the example, the result is reported on the scale of 1e-5, which very crudely rounds away the more sensitive terms of the calculation.
Is there a way around this? Why is R choosing such a strange automatic behavior? Perhaps it is not really doing this, I just see the result in the truncated form? In this case, is the actual number with correct precision stored in the variable?
There is no precision loss in your sum. But if you're worried about it, you should use a multiple-precision library:
library("Rmpfr")
x <- c(7.831908e-70,6.002923e-26,6.372573e-36,5.025015e-38,5.603268e-38,1.118121e-14, 4.512098e-07,4.400717e-05,2.300423e-26,1.317602e-58)
sum(mpfr(x, 1024))
# 1 'mpfr' number of precision 1024 bits
# [1] 4.445837981118120898327314579322617633703674840117902103769961398533293289165193843930280422747754618577451267010103975610356319174778512980120125435961577770470993217990999166176083700886405875414277348471907198346293122011042229843450802884152750493740313686430454254150390625000000000000000000000000000000000e-5
Your results are only truncated in the display.
Try:
x <- sum(c(7.831908e-70,6.002923e-26,6.372573e-36,5.025015e-38,5.603268e-38,1.118121e-14, 4.512098e-07,4.400717e-05,2.300423e-26,1.317602e-58))
print(x, digits=22)
[1] 4.445837981118121081878e-05
You can read more about the behaviour of print at ?print.default
You can also set an option; this will affect all calls to print:
options(digits=22)
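If you only want the extra digits for a single value, a lighter-weight alternative (base R only, using the x computed above) is to format that one number instead of changing the global option:
sprintf("%.20e", x)     # scientific notation with 20 digits after the point
format(x, digits = 22)  # the same idea via format()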
Have you ever heard about floating point numbers?
There is no loss of precision (significant figures) in multiplication or division as long as the result stays between about 4.9·10^-324 and 1.7976931348623157·10^308 in magnitude (see the link below for detail).
So if you do 1.0e-30 * 1.0e-10 the result will be 1.0e-40,
but if you do 1.0e-30 + 1.0e-10 the result will be 1.0e-10.
Why? A computer can only represent a finite set of numbers: 64 bits allow at most 2^64 distinct representations. Instead of a direct mapping as with integer numbers (a 64-bit signed integer covers -2^63 to 2^63-1, roughly ±9.2·10^18), floating point uses a cleverer encoding that trades uniform spacing for a far wider range, representing or approximating rational numbers across that whole span.
So in floating point, to achieve the wider range, precision in sums is sacrificed. There is loss of precision during sums or subtractions because the significant figures that can be held in the 52-bit fraction part of a 64-bit floating point number (53 bits counting the implicit leading bit) amount to only log10(2^53) ≈ 16 decimal digits.
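To see the absorption described above in a live session (plain base R):
(1.0e-30 + 1.0e-10) == 1.0e-10   # TRUE: the 1e-30 term is below one ulp of 1e-10, so it vanishes
1.0e-30 * 1.0e-10                # 1e-40: multiplication only adds exponents, nothing is lost
log10(2^53)                      # 15.95459: roughly 16 significant decimal digits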
If you look for a basic everyday example: in summary(lm(...)) output, when the p-value of a parameter is near zero, summary() prints < 2.2e-16 (no coincidence: that is the value of .Machine$double.eps).
Why limited to 64 bits? CPUs have execution units built specifically for 64-bit floating point arithmetic (the 64-bit IEEE 754 standard). If you use a higher precision such as 128-bit floating point, performance drops by a factor of 10 or more, because the CPU has to split the data and operations into multiple 64-bit pieces.
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
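A rough way to feel that cost is to time a hardware-double sum against a software multiple-precision sum. This is only a sketch, assuming the Rmpfr package is installed; absolute timings will vary by machine:
library(Rmpfr)
x <- runif(1e4)
system.time(for (i in 1:100) sum(x))             # hardware 64-bit doubles: near-instant
system.time(for (i in 1:100) sum(mpfr(x, 128)))  # 128-bit software floats: far slower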

Why does R regard large numbers as EVEN

From "http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-the" 7.31
We already know that large numbers (over 2^53) can produce errors in the modulo operation.
However, I cannot understand why every large number is regarded as even: I have never seen an "odd" result for a large integer over 2^53, even allowing for some approximation error.
(2^53+1)%%2
(2^100-1)%%2
(the warning message "probable complete loss of accuracy in modulus" can be ignored)
etc.
are all 0, not 1.
Why is that? (I know there is some approximation involved, but I need to know the reason concretely.)
> print(2^54,22)
[1] 18014398509481984.00000
> print(2^54+1,22)
[1] 18014398509481984.00000
> print(2^54+2,22)
[1] 18014398509481984.00000
> print(2^54+3,22)
[1] 18014398509481988.00000
An IEEE double precision value has a 53-bit mantissa. Any number requiring more than 53 binary digits of precision will be rounded, i.e. the digits from 54 onwards will be implicitly set to zero. Thus any number with magnitude greater than 2^53 will necessarily be even (since the least-significant bit of its integer representation is beyond the floating-point precision, and is therefore zero).
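For comparison, exact integer arithmetic gets the parity right. A small sketch, assuming the gmp package is installed:
library(gmp)
(as.bigz(2)^53 + 1) %% 2   # 1: the true answer, since bigz never rounds
(as.bigz(2)^100 - 1) %% 2  # 1
(2^53 + 1) %% 2            # 0: the double 2^53 + 1 has already rounded to 2^53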
There is no such thing as an "integer" in versions of R at or earlier than v2.15.3 whose magnitude is greater than 2^31-1. You are working with "numeric" or "double" entities. You are probably "rounding down", i.e. trunc-ating, your values.
?`%%`
The soon-to-be but as yet unreleased version 3.0 of R was expected to get 8-byte integers, in which case the problem would not arise until you went out beyond 2^((8*8)-1)-1. (As it turned out, R 3.0.0 shipped long vectors rather than a native 64-bit integer type, and base R integers remain 32-bit.) At the moment coercion to integer fails at that level:
> as.integer(2^((8*4)-1)-1)
[1] 2147483647
> as.integer(2^((8*8)-1)-1)
[1] NA
Warning message:
NAs introduced by coercion
So your first example may return the proper result, but your second example may still fail.

Weird error in R when importing (64-bit) integer with many digits

I am importing a csv that has a single column which contains very long integers (for example: 2121020101132507598)
a<-read.csv('temp.csv',as.is=T)
When I import these integers as strings they come through correctly, but when imported as integers the last few digits are changed. I have no idea what is going on...
1 "4031320121153001444" 4031320121153001472
2 "4113020071082679601" 4113020071082679808
3 "4073020091116779570" 4073020091116779520
4 "2081720101128577687" 2081720101128577792
5 "4041720081087539887" 4041720081087539712
6 "4011120071074301496" 4011120071074301440
7 "4021520051054304372" 4021520051054304256
8 "4082520061068996911" 4082520061068997120
9 "4082620101129165548" 4082620101129165312
As others have noted, you can't represent integers that large. But R isn't reading those values into integers, it's reading them into double precision numerics.
Double precision can only represent numbers accurately to about 16 significant digits, which is why you see your numbers rounded after 16 digits. See the gmp, Rmpfr, and int64 packages for potential solutions. Though I don't see a function to read from a file in any of them, maybe you could cook something up by looking at their sources.
UPDATE:
Here's how you can get your file into an int64 object:
# This assumes your numbers are the only column in the file
# Read them in however you like; just ensure they're read in as character
library(int64)
a <- scan("temp.csv", what = "")
ia <- as.int64(a)
R's maximum integer value is about 2e9 (2147483647, to be exact). As @Joshua mentions in another answer, one of the potential solutions is the int64 package.
Import the values as character instead. Then convert to type int64.
require(int64)
a <- read.csv('temp.csv', colClasses = 'character', header=FALSE)[[1]]
a <- as.int64(a)
print(a)
[1] 4031320121153001444 4113020071082679601 4073020091116779570
[4] 2081720101128577687 4041720081087539887 4011120071074301496
[7] 4021520051054304372 4082520061068996911 4082620101129165548
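If int64 proves hard to obtain (it has at times been archived on CRAN), the bit64 package supports the same pattern. A sketch under that assumption:
library(bit64)
a <- read.csv('temp.csv', colClasses = 'character', header = FALSE)[[1]]
a <- as.integer64(a)   # exact 64-bit integers parsed from character
print(a)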
You simply cannot represent integers that big. See
.Machine
which on my box has
$integer.max
[1] 2147483647
The maximum value of a 32-bit signed integer is 2,147,483,647. Your numbers are much larger.
Try importing them as floating point values instead.
There are a few caveats to be aware of when dealing with floating point arithmetic in R or any other language:
http://blog.revolutionanalytics.com/2009/11/floatingpoint-errors-explained.html
http://blog.revolutionanalytics.com/2009/03/when-is-a-zero-not-a-zero.html
http://floating-point-gui.de/basic/

Is there a way to store a large number precisely in R?

Is there a way to store a large number precisely in R?
A double is stored as a binary fraction whose precision varies with the value, while an integer has the limited range of 4 bytes.
What if I wanted to store a very large number precisely?
You can try the bigz class from the gmp package:
> library("gmp")
> 2^10000
[1] Inf
> 2^(as.bigz(10000))
[1] "199506.... and a LOT of more numbers!
It stores the number in arbitrary-precision form (via the GMP library) rather than in a fixed-size integer or double, thereby avoiding those limits.
It depends on what you mean by large number:
If you want numbers above the top end of double precision arithmetic, there is the Brobdingnag package (a small sketch follows after this list)
If you want more precision there are the gmp and related Rmpfr packages.
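For the first case, a minimal sketch assuming the Brobdingnag package, which stores a number by its logarithm so the representable magnitude range vastly exceeds a double's:
library(Brobdingnag)
as.brob(2)^10000   # about 1.995e3010, far past double's ~1.8e308 ceiling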
