R: limit the number of decimal places of the mantissa in scientific notation

In R, is it possible to limit the number of decimal places of the mantissa/significand? E.g. given 1.43566334245e-9, I want to round it to 1.44e-9.
I do not want to simply keep N digits after the decimal point, because if another number in the dataset is 5.2340972e-5, I want it to become 5.23e-5, not 5.234097e-5. So the limit should apply only to the mantissa's decimal places, not to the number as a whole.

If I understood you correctly:
signif(1.43566334245e-9,3)
[1] 1.44e-09
signif(5.2340972e-5,3)
[1] 5.23e-05

Representing decimal numbers in binary

How do I represent an integer, for example 23647, in two bytes, where one byte contains the last two digits (47) and the other contains the remaining digits (236)?
There are several ways to do this.
One way is to use Binary Coded Decimal (BCD). This encodes the decimal digits, rather than the number as a whole, into binary. The packed form puts two decimal digits into a byte. However, your example value 23647 has five decimal digits and will not fit into two bytes in BCD; this method fits values only up to 9999.
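As a minimal sketch of packed BCD (the helper name `packed_bcd` is mine, not from the answer), two digits go into each byte, one per nibble:

```python
def packed_bcd(n):
    """Pack a non-negative decimal number (0-9999) into two bytes,
    two decimal digits per byte, one digit per 4-bit nibble."""
    if not 0 <= n <= 9999:
        raise ValueError("packed BCD in two bytes holds at most four digits")
    # Extract the four decimal digits, most significant first.
    d = [(n // 10**i) % 10 for i in (3, 2, 1, 0)]
    # High nibble holds the first digit of each pair, low nibble the second.
    return bytes([d[0] * 16 + d[1], d[2] * 16 + d[3]])

packed_bcd(4795)  # bytes 0x47 0x95 -- each nibble is one decimal digit
```

Note how the hex representation of the result reads off the decimal digits directly, which is the defining property of BCD.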
Another way is to put each of your two parts in binary and place each part into a byte. You can do integer division by 100 to get the upper part, so in Python you could use
upperbyte = 23647 // 100
Then the lower part can be gotten by the modulus operation:
lowerbyte = 23647 % 100
Python will directly convert the results into binary and store them that way. You can do all this in one step in Python and many other languages:
upperbyte, lowerbyte = divmod(23647, 100)
You are guaranteed that the lowerbyte value fits, but if the given value is too large the upperbyte value may not actually fit into a byte. All this assumes the value is positive; negative values would complicate things.
(This following answer was for a previous version of the question, which was to fit a floating-point number like 36.47 into two bytes, one byte for the integer part and another byte for the fractional part.)
One way to do that is to "shift" the number so you consider those two bytes to be a single integer.
Take your value (36.47), multiply it by 256 (the number of values that fit into one byte), round it to the nearest integer, convert that to binary. The bottom 8 bits of that value are the "decimal numbers" and the next 8 bits are the "integer value." If there are any other bits still remaining, your number was too large and there is an overflow condition.
This assumes you want to handle only non-negative values. Handling negatives complicates things somewhat. The final result is only an approximation to your starting value, but that is the best you can do.
Doing those calculations on 36.47 gives the binary integer
10010001111000
So the "decimal byte" is 01111000 and the "integer byte" is 100100 or 00100100 when filled out to 8 bits. This represents the float number 36.46875 exactly and your desired value 36.47 approximately.
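The calculation above can be sketched in Python (the function name `to_fixed_8_8` is my own label for the answer's scheme of one integer byte and one fraction byte):

```python
def to_fixed_8_8(x):
    """Encode a non-negative float as 8.8 fixed point: one byte for the
    integer part, one byte for the fractional part in 1/256 steps."""
    scaled = round(x * 256)           # shift the fraction into the integer
    if scaled >= 1 << 16:             # more than 16 bits: overflow
        raise OverflowError("value too large for two bytes")
    integer_byte, fraction_byte = divmod(scaled, 256)
    return integer_byte, fraction_byte

hi, lo = to_fixed_8_8(36.47)
# hi is 36 (0b00100100), lo is 120 (0b01111000);
# decoding gives hi + lo/256 = 36.46875, the nearest representable value
```

Decoding is simply `hi + lo / 256`, which reproduces the 36.46875 approximation derived above.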

write.csv precision R

I am dealing with very precise numbers (maximum number of digits).
I noticed that write.csv(x) in R sometimes rounds the numbers.
Has anyone noticed something like that?
What is the default number of digits saved?
As written in the documentation,
In almost all cases the conversion of numeric quantities is governed
by the option "scipen" (see options), but with the internal equivalent
of digits = 15. For finer control, use format to make a character
matrix/data frame, and call write.table on that.
So the simple solution is to change the options, i.e.
options(digits = DESIRED_VALUE)
and the customized solution is to convert your variables to character type with format, e.g.
dat <- mtcars
dat$wt <- format(dat$wt, digits = 20)
and save it like this. Note, however, that when using computers we are always dealing with rounded numbers (see Goldberg, 1991, What Every Computer Scientist Should Know About Floating-Point Arithmetic), and you can find tricky outcomes due to the computer's precision, e.g.
format(2.620, digits = 20)
## [1] "2.6200000000000001066"
So there is nothing "bad" with rounded values as you probably need them only to be precise up to some number of decimal places. Moreover, your measurements are also affected with measurement errors, so the precision can be illusory.

R: force format(scientific=FALSE) not to round

I have been playing around with this command for a while and cannot seem to make it work the way I would like it to. I would like format to give me the full list of numbers as a text without any rounding even when the whole number portion is large. For example:
format(2290000000000000000.000081 , scientific=FALSE)
[1] "2290000000000000000"
While what I want returned is:
"2290000000000000000.000081"
As noted, you can't store that number exactly using double precision. You'll need to use multiple-precision floating point numbers.
library(Rmpfr)
mpfr("2290000000000000000.000081", precBits=85)
## 1 'mpfr' number of precision 85 bits
## [1] 2290000000000000000.000081

unable to get desired precision of the output from division of 2 integers in R

I am dividing two numbers in R. The numerator is a big integer (in the millions) divided by 13.000001.
R appears to take 13.000001 as 13, and the output that comes out is limited to only 1 decimal place.
I require the output to 2 decimal places, which is not happening.
I tried round, format and as.numeric, but it is fruitless:
round does not change anything (round(division, 1))
format(nsmall = 2) gives 2 decimal places but converts the result to character
as.numeric converts it back from character, but the 2 decimal places are replaced by 1 decimal place
Is there any way to get 2 decimal places when I divide an integer by a number like 13.000001?
Be careful not to confuse output with internal precision:
x <- 13e7/13.000001
sprintf("%10.20f",x)
#[1] "9999999.23076928965747356415"
sprintf("%10.10f",x*13)
#[1] "129999990.0000007600"
sprintf("%10.10f",x*13.000001)
#[1] "129999999.9999999851"
Differences to the expected output are due to the limited floating point precision.

Is there a way to store a large number precisely in R?

Is there a way to store a large number precisely in R?
double is stored as a binary fraction and its precision varies with the value, and integer has limited range of 4 bytes.
What if I wanted to store a very large number precisely?
You can try the bigz class from the gmp package:
> library("gmp")
> 2^10000
[1] Inf
> 2^(as.bigz(10000))
[1] "199506... and a LOT more digits!
It basically stores the number as a string, thus avoiding the integer/double limits.
It depends on what you mean by large number:
If you want numbers above the top end of double precision arithmetic, there is the Brobdingnag package
If you want more precision there are the gmp and related Rmpfr packages.
