Do not want large numbers to be rounded off in R

options(scipen=999)
625075741017804800
625075741017804806
When I type the above in the R console, I get the same output for the two numbers listed above. The output being: 625075741017804800
How do I avoid that?

Numbers greater than 2^53 are not going to be stored unambiguously in R's numeric vectors. Although doubles can represent integer values exactly, they can only do so up to 2^53, and your number is larger than that capacity for precision:
625075741017804806 > 2^53
[1] TRUE
R's integer type itself only goes up to .Machine$integer.max == 2147483647; numbers larger than that value get silently coerced to the 'numeric' class. You will either need to work with them as character values or install a package capable of arbitrary precision. Rmpfr and gmp are two that come to mind.

You can use the Rmpfr package for arbitrary precision:
dig <- mpfr("625075741017804806")
print(dig, 18)
# 1 'mpfr' number of precision 60 bits
# [1] 6.25075741017804806e17
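A similar sketch with the gmp package (assuming it is installed), building the value from a string so it never passes through a double:
library(gmp)
big <- as.bigz("625075741017804806")  # parse from a string: no double rounding
big + 1
# Big Integer ('bigz') :
# [1] 625075741017804807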

Related

Why does R display wrong number with large numbers?

When I save a large number in R as an object, the wrong number is saved. Why is that?
options("scipen"=100, "digits"=4)
num <- 201912030032451613
num
#> [1] 201912030032451616
Created on 2019-12-12 by the reprex package (v0.2.1.9000)
As @Roland says, this is a floating point issue (the Wikipedia page on floating point numbers is as good as anything). Unpacking it a bit though, R has a specific integer format but it is limited to 32-bit integers:
> str(-2147483647L)
int -2147483647
> str(2147483647L)
int 2147483647
> str(21474836470L)
num 21474836470
Warning message:
non-integer value 21474836470L qualified with L; using numeric value
So, when R gets your number, it is storing it as a floating point number, not an integer. Floating point numbers are limited in how much precision they can store, typically about 15 to 17 significant decimal digits. Because your number has more significant digits than that, there is loss of precision. Losing precision in the smallest digits doesn't usually matter for computer arithmetic, but if your big number is a key of some kind (or a date stamp), then you are in more trouble. The bit64 package is designed with this kind of use case in mind, or you could import the value as a string, depending on what you want to do.
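As a sketch of the bit64 route (assuming the package is installed), parse the value from a string so the full 64-bit precision survives:
library(bit64)
num <- as.integer64("201912030032451613")  # parse from a string: no double rounding
num
# integer64
# [1] 201912030032451613
num + 1L
# integer64
# [1] 201912030032451614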

R addition of large and small numbers is ignoring the smaller values

I'm encountering a problem when adding large numbers in R: the smaller values are getting ignored, producing an incorrect result.
For example, I've been using a binary to decimal converter found here: Convert binary string to binary or decimal value. The penultimate step looks like this:
2^(which(rev(unlist(strsplit(as.character(MyData$Index[1]), "")) == 1))-1)
[1] 1 2 32 64 256 2048 ...
I didn't include all the numbers for the sake of length, but when these numbers are summed, they will yield the integer value of the binary number. The correct result should be 4,919,768,674,277,575,011, but R is giving me a result of 4,919,768,674,277,574,656. Notice that this number is off by 355, which is the sum of the first 5 listed numbers.
I had thought it might have to do with an integer limit, but I tested it and R can handle larger numbers than what I need. Here's an example of something I tried, which again yielded an incorrect result:
2^64
[1] 18446744073709551616 #Correct Value
2^65
[1] 36893488147419103232 #Correct Value
2^64 + 2^65
[1] 55340232221128654848 #Correct Value
2^64 + 2^65 + 1
[1] 55340232221128654848 #Incorrect Value
It seems like there's some sort of problem with precision of large number addition, but I don't know how I can fix this so that I can get the desired result.
Any help would be greatly appreciated. And I apologize if anything is formatted improperly, this is my first post on the site.
For large integers, we could use as.bigz from gmp:
library(gmp)
as.bigz(2^64) + as.bigz(2^65) + 1
# Big Integer ('bigz') :
#[1] 55340232221128654849
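Note that as.bigz(2^64) only works here because powers of two happen to be exactly representable as doubles; for arbitrary large inputs it is safer to keep the arithmetic inside gmp from the start, as in this sketch:
library(gmp)
as.bigz(2)^64 + as.bigz(2)^65 + 1  # exponentiation happens in bigz, not in double
# Big Integer ('bigz') :
# [1] 55340232221128654849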

What's the difference between integer class and numeric class in R

I want to preface this by saying I'm an absolute programming beginner, so please excuse how basic this question is.
I'm trying to get a better understanding of "atomic" classes in R and maybe this goes for classes in programming in general. I understand the difference between a character, logical, and complex data classes, but I'm struggling to find the fundamental difference between a numeric class and an integer class.
Let's say I have a simple vector x <- c(4, 5, 6, 6) of whole numbers; it would make sense for this to be an integer class. But when I do class(x) I get [1] "numeric". Then if I convert this vector to an integer class with x <- as.integer(x), it returns the exact same vector of numbers, except the class is different.
My question is: why is this the case, why is the default class for a set of whole numbers numeric, and what are the advantages and/or disadvantages of having a set of integers stored as numeric instead of integer?
There are multiple classes that are grouped together as "numeric" classes, the 2 most common of which are double (for double precision floating point numbers) and integer. R will automatically convert between the numeric classes when needed, so for the most part it does not matter to the casual user whether the number 3 is currently stored as an integer or as a double. Most math is done using double precision, so that is often the default storage.
Sometimes you may want to specifically store a vector as integers if you know that they will never be converted to doubles (used as ID values or indexing) since integers require less storage space. But if they are going to be used in any math that will convert them to double, then it will probably be quickest to just store them as doubles to begin with.
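A short illustration of that automatic conversion:
x <- 1:3      # the colon operator returns integers
typeof(x)
# [1] "integer"
y <- x / 2    # division promotes the result to double
typeof(y)
# [1] "double"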
Patrick Burns on Quora says:
First off, it is perfectly feasible to use R successfully for years
and not need to know the answer to this question. R handles the
differences between the (usual) numerics and integers for you in the
background.
> is.numeric(1)
[1] TRUE
> is.integer(1)
[1] FALSE
> is.numeric(1L)
[1] TRUE
> is.integer(1L)
[1] TRUE
(Putting capital 'L' after an integer forces it to be stored as an
integer.)
As you can see "integer" is a subset of "numeric".
> .Machine$integer.max
[1] 2147483647
> .Machine$double.xmax
[1] 1.797693e+308
Integers only go to a little more than 2 billion, while the other
numerics can be much bigger. They can be bigger because they are
stored as double precision floating point numbers. This means that
the number is stored in two pieces: the exponent (like 308 above,
except in base 2 rather than base 10), and the "significand" (like
1.797693 above).
Note that 'is.integer' is not a test of whether you have a whole
number, but a test of how the data are stored.
One thing to watch out for is that the colon operator, :, will return integers if the start and end points are whole numbers. For example, 1:5 creates an integer vector of numbers from 1 to 5. You don't need to append the letter L.
> class(1:5)
[1] "integer"
Reference: https://www.quora.com/What-is-the-difference-between-numeric-and-integer-in-R
To quote the help page (try ?integer), bolded portion mine:
Integer vectors exist so that data can be passed to C or Fortran code which expects them, and so that (small) integer data can be represented exactly and compactly.
Note that current implementations of R use 32-bit integers for integer vectors, so the range of representable integers is restricted to about +/-2*10^9: doubles can hold much larger integers exactly.
Like the help page says, R's integers are signed 32-bit numbers so can hold between -2147483648 and +2147483647 and take up 4 bytes.
R's numeric is identical to a 64-bit double conforming to the IEEE 754 standard. R has no single precision data type (source: help pages of numeric and double). A double can store all integers between -2^53 and 2^53 exactly without losing precision.
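A quick demonstration of that 2^53 boundary:
2^53 == 2^53 + 1  # the +1 is lost above the exact-integer range
# [1] TRUE
2^53 - 1 == 2^53  # below the boundary, consecutive integers stay distinct
# [1] FALSE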
We can see the data type sizes, including the overhead of a vector:
> object.size(1:1000)
4040 bytes
> object.size(as.numeric(1:1000))
8040 bytes
To my understanding, we do not declare a variable with a data type in R, so by default any number written without an L suffix is set to be a numeric.
If you wrote:
> x <- c(4L, 5L, 6L, 6L)
> class(x)
>"integer" #it would be correct
Example of an integer:
> x <- 2L
> print(x)
[1] 2
Example of a numeric (kind of like double/float in other programming languages):
> x <- 3.4
> print(x)
[1] 3.4
Numeric is an umbrella term for several types of classes (e.g. double and integer). Integers are numbers which do not have decimal points and thus are stored with minimal space in memory. Use the integer class only when doing computations with such numbers, otherwise revert to numeric.
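One caveat when you do opt for integers: arithmetic that exceeds .Machine$integer.max overflows to NA instead of silently promoting to double, for example:
.Machine$integer.max + 1L  # integer arithmetic overflows
# [1] NA
# Warning message: NAs produced by integer overflow
.Machine$integer.max + 1   # adding a double promotes the result instead
# [1] 2147483648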

How do you cast a double to an integer in R?

My question is: suppose you have an algorithm that computes a number of iterations, and you would like to print that number out. But the output always has many decimal places, like the following:
64.00000000
Is it possible to get an integer by doing type casting in R? How would you do it?
There are some gotchas in coercing to integer mode. Presumably you have a variety of numbers in some structure. If you are working with a matrix, then the print routine will display all the numbers at the same precision. However, you can change that level. If you have calculated this result with an arithmetic process, it may actually be slightly less than 64 but display as that value:
> 64.00000000-.00000099999
[1] 64
> 64.00000000-.0000099999
[1] 63.99999
So, assuming you want all the values in whatever structure this is part of to be displayed as integers, the safest would be:
round(64.000000, 0)
... since otherwise this could happen:
> as.integer(64.00000000-.00000000009)
[1] 63
The other gotcha is that the range of value for integers is considerably less than the range of floating point numbers.
The function is.integer can be used to test for integer mode.
is.integer(3)
[1] FALSE
is.integer(3L)
[1] TRUE
Neither round nor trunc will return a vector in integer mode:
is.integer(trunc(3.4))
[1] FALSE
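If you want an actual integer vector rather than just integer-looking display, a pattern that combines the two points above is to round first and then coerce:
x <- 64.00000000 - .00000000009
as.integer(x)          # truncation alone can be off by one
# [1] 63
as.integer(round(x))   # round to the nearest whole number, then coerce
# [1] 64
is.integer(as.integer(round(x)))
# [1] TRUE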
Instead of trying to convert the output into an integer, find out why it is not an integer in the first place, and fix it there.
Did you initialize it as an integer, e.g. num.iterations <- 0L or num.iterations <- integer(1) or did you make the mistake of setting it to 0 (a numeric)?
When you incremented it, did you add 1 (a numeric) or 1L (an integer)?
If you are not sure, go through your code and check your variable's type using the class function.
Fixing the problem at the root could save you a lot of trouble down the line. It could also make your code more efficient, as numerous operations are faster on integers than on numerics.
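A minimal sketch of a counter kept in integer mode throughout (the loop bound of 64 here is just an illustrative stand-in for your algorithm's stopping condition):
num.iterations <- 0L                      # initialize as an integer, not 0
for (i in 1:64) {
  num.iterations <- num.iterations + 1L   # increment with an integer literal
}
num.iterations
# [1] 64
class(num.iterations)
# [1] "integer"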
The function as.integer() truncates toward zero, so for positive numbers you can add 0.5 before converting to get proper rounding:
dd<-64.00000000
as.integer(dd+0.5)
If you have a numeric matrix you wish to coerce to an integer matrix (e.g., you are creating a set of dummy variables from a factor), as.integer(matrix_object) will coerce the matrix to a vector, which is not what you want. Instead, you can use storage.mode(matrix_object) <- "integer" to maintain the matrix form.
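A quick sketch of the difference, using a hypothetical 2x2 dummy matrix:
m <- matrix(c(1, 0, 0, 1), nrow = 2)  # a numeric matrix
as.integer(m)                         # coercion drops the dim attribute
# [1] 1 0 0 1
storage.mode(m) <- "integer"          # converts in place, keeping the matrix form
is.integer(m)
# [1] TRUE
dim(m)
# [1] 2 2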

Is there a way to store a large number precisely in R?

Is there a way to store a large number precisely in R?
double is stored as a binary fraction whose precision varies with the value, and integer is limited to the range of a 4-byte signed integer.
What if I wanted to store a very large number precisely?
You can try the bigz class from the gmp package:
> library("gmp")
> 2^10000
[1] Inf
> 2^(as.bigz(10000))
[1] "199506.... and a LOT of more numbers!
It returns the number as an arbitrary-precision integer (printed as a string), thereby avoiding the integer/double limits.
It depends on what you mean by large number:
If you want numbers above the top end of double precision arithmetic, there is the Brobdingnag package
If you want more precision there are the gmp and related Rmpfr packages.
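A rough sketch of both options (assuming the packages are installed):
library(Brobdingnag)
as.brob(2)^10000               # far beyond double's ~1.8e308 ceiling
# [1] +exp(6931.472)
library(Rmpfr)
mpfr(2, precBits = 200)^10000  # 200 bits of precision instead of double's 53
# 1 'mpfr' number of precision 200 bits
# [1] 1.99506...e3010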
