I use the following multiplication in R (v. 3.6.1): 115*1.044. I get 120.1. In Excel I get 120.06, which is also what I get by hand.
I set options(digits=4) in R, but I still get the same result: 120.1.
Why does R behave like this? I used to trust it more than Excel, but here Excel seems to be more accurate in what it returns. Is there a way to force R to return the digits I would get when multiplying by hand?
The digits argument of format refers to the total number of significant digits in the number as a whole (integer and decimal parts combined):
> format(115*1.044, digits = 5)
[1] "120.06"
> format(115*1.044, digits = 4)
[1] "120.1"
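To see that the full value is still there and only the display is rounded, you can print with more digits, or use sprintf for a fixed number of decimal places (a quick check, base R only):

```r
x <- 115 * 1.044
print(x, digits = 10)   # 120.06 -- the stored value, shown with more significant digits
sprintf("%.3f", x)      # "120.060" -- fixed decimal places, independent of options(digits)
```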
I'm trying to read an Excel file into R.
I used read_excel function of the readxl package with parameter col_types = "text" since the columns of the Excel sheet contain mixed data types.
df <- read_excel("Test.xlsx",sheet="Sheet1",col_types = "text")
But a very slight difference is introduced into some numeric values. It's always the same few values, so I think it's some hidden attribute in Excel.
I tried formatting those values as numbers in Excel, and also tried adding 0s after the number, but it didn't work.
I changed the numeric value of a cell from 2.3 to 2.4, and it was read correctly by R.
This is a consequence of floating-point imprecision, but it's a little tricky. When you enter the number 1.2 (for example) into R or Excel, it's not represented exactly as 1.2:
print(1.2,digits=22)
## [1] 1.199999999999999955591
Excel and R usually try to shield you from these details, which are inevitable if you're using fixed-precision floating-point values (as most computer systems do), by limiting the printed precision to a level that hides those floating-point imprecisions. When you explicitly convert to character, however, R figures you don't want to lose information, so it gives you all the digits. Numbers that can be represented exactly in binary, such as 2.375, don't gain all those extra digits.
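You can check the claim about exactly-representable numbers directly; 2.375 is 2 + 1/4 + 1/8, so it has a finite binary expansion:

```r
print(1.2,   digits = 22)  # 1.199999999999999955591 -- no finite binary form
print(2.375, digits = 22)  # 2.375 -- exact in binary, no extra digits appear
```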
However, there's a simple solution in this case:
readxl::read_excel("Test.xlsx", na="ND")
This tells R that the string "ND" should be treated as a special "not available" value, so all of your numeric values get handled properly. When you examine your data, the tiny imprecisions will still be there, but R will print the numbers the same way that Excel does.
I feel like there's probably a better way to approach this (mixed-type columns are really hard to deal with), but if you need to 'fix' the format of the numbers you can try something like this:
x <- c(format(1.2,digits=22),"abc")
## [1] "1.199999999999999955591" "abc"
fix_nums <- function(x) {
  # values that parse as numeric get reformatted; everything else is left alone
  nn <- suppressWarnings(as.numeric(x))
  x[!is.na(nn)] <- format(nn[!is.na(nn)])
  return(x)
}
fix_nums(x)
## [1] "1.2" "abc"
Then if you're using the tidyverse you can use my_data %>% mutate_all(fix_nums)
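A sketch of the same idea in current dplyr, where mutate_all is superseded by across() (this assumes fix_nums as defined above; the tibble is just illustrative data):

```r
library(dplyr)
my_data <- tibble::tibble(a = c("1.199999999999999955591", "abc"),
                          b = c("2.375", "xyz"))
# apply fix_nums to every column
my_data %>% mutate(across(everything(), fix_nums))
```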
I have a number in an Excel file that is equal to -29998,1500000003.
When I try to open it in R I get:
> library(openxlsx)
> posotest <- as.character(read.xlsx("sofile.xlsx"))
> posotest
[1] "-29998.1500000004"
Any help? Desired result: -29998,1500000003
EDIT: with options(digits=13) I get -29998.150000000373, which could explain why the rounding happens; however, even with options(digits=13) I get
> as.character(posotest)
[1] "-29998.1500000004"
Do you have any function that would allow me to get the full number in characters?
EDIT2: format does this, but it adds artificial noise at the end.
x <- -29998.150000000373
format(x,digits=22)
[1] "-29998.15000000037252903"
How can I know how many digits to use in format since nchar will give me a wrong value?
You can get a string with up to 22 digits of precision via format():
x <- -29998.150000000373
format(x,digits=22)
[1] "-29998.15000000037252903"
Of course, this will show you all sorts of ugliness related to trying to represent a decimal number in a binary representation with finite precision ...
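If you know how many decimal places you want, sprintf sidesteps the significant-digits question entirely (a sketch; %.10f is just an example width, chosen to match the number of decimals in the Excel display):

```r
x <- -29998.150000000373
sprintf("%.10f", x)   # "-29998.1500000004" -- fixed decimals, no trailing binary noise
```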
Because Fortran, I need to write a program that, among other wonderful features, will force R to output a number in scientific notation with 3 significant figures after the decimal point.
Specifically, I need 170.5 to be written to output as 1.705e+02.
Using options(scipen = -999), I can force R to write 170.5 as 1.71e+02. However, this is insufficient for my use; I really need that third digit after the decimal point.
I think this question ought to be easy, but I am unable to figure out how to solve it. Thanks.
Perhaps try
options(scipen = -999, digits = 4)
Look at either sprintf or formatC.
sprintf("%.3e", 170.5)
[1] "1.705e+02"
print(sprintf("%.3e", 170.5), quote=FALSE)
[1] 1.705e+02
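formatC gives the same result, with an explicit count of digits after the decimal point:

```r
formatC(170.5, format = "e", digits = 3)  # "1.705e+02"
```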
I have a numeric variable imported from Oracle with 17 digits, for example: 20172334534654667.
Now I import it from Oracle using dbGetQuery() in R, but R uses scientific notation: 2.01723e+16
If I try to convert the number using:
mydata$var <- format(mydata$a, scientific=FALSE)
I obtain 20172334534654600 instead of 20172334534654667
So, the last two numbers are always substituted with 00.
How can I solve it, possibly without using additional packages?
I was unable to replicate your issue, but I think it would probably be best to use formatC rather than format.
For your case, it could be:
numb <- 20172334534654667
numb
formatC(numb, format = "f", digits = 0)
Which gives:
[1] "20172334534654668"
Hopefully that works for you!
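A caveat: a double only carries about 15-17 significant decimal digits (values above 2^53 are not all representable), so the final 667 is already lost by the time the value reaches R; no formatting call can recover it. If you control the query, one option is to convert in Oracle itself so the value arrives as text (a sketch; con and mytable are placeholders for your connection and table):

```r
library(DBI)
# TO_CHAR makes Oracle send the value as a string, so R never coerces it to double
mydata <- dbGetQuery(con, "SELECT TO_CHAR(var) AS var FROM mytable")
```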
As part of my dataset, one of the columns is a series of 24-digit numbers.
Example:
bigonumber <- 429382748394831049284934
When I import it using either data.table::fread or read.csv, it shows up as numeric in exponential format (e.g. 4.293827e+23).
options(digits=...) won't work since the number is longer than 22 digits.
When I do
as.character(bigonumber)
what I get is "4.29382748394831e+23"
Is there a way to get bigonumber converted to a character string and show all of the digits as characters? I don't need to do any math on it, but I do need to search against it and do dplyr joins on it.
I need to do this after import, since the column number varies from month to month.
(Yes, in the perfect world, my upstream data provider would use a hash instead of a long number and a static number of columns that stay the same every month, but I don't get to dictate that to them.)
You can specify colClasses in your fread or read.csv call. With a file ~/Desktop/bignums.txt containing:
bignums
429382748394831049284934
429382748394831049284935
429382748394831049284936
429382748394831049284937
429382748394831049284938
429382748394831049284939
you can read every value as character:
bignums <- read.csv("~/Desktop/bignums.txt", sep="", colClasses = 'character')
You can suppress the scientific notation with
options(scipen=999)
If you define the number then
bigonumber <- 429382748394831049284934
you can convert it into a string:
big.o.string <- as.character(bigonumber)
Unfortunately, this does not work because R converts the number to a double, thereby losing precision:
#[1] "429382748394831019507712"
The last digits are not preserved, as pointed out by @SabDeM. Even setting
options(digits=22)
doesn't help; in any case, 22 is the largest value allowed, and your number has 24 digits. So it seems you will have to read the data directly as character or factor. Great answers have been posted showing how this can be achieved.
As a side note, there is a package called gmp that allows using arbitrarily large integer numbers. However, there is a catch: they have to be read as characters (again, in order to prevent R's internal conversion into double).
library(gmp)
bigonumber <- as.bigz("429382748394831049284934")
> bigonumber
Big Integer ('bigz') :
[1] 429382748394831049284934
> class(bigonumber)
[1] "bigz"
The advantage is that you can indeed treat these entries as numbers and perform calculations while preserving all the digits.
> bigonumber * 2
#Big Integer ('bigz') :
#[1] 858765496789662098569868
This package and my answer here may not solve your problem, because reading the numbers directly as characters is an easier way to achieve your goal, but I thought I might post this anyway as information for users who may need to use large integers with more than 22 digits.
Use digest::digest on bigonumber to generate an md5 hash of the number yourself?
bigonumber <- 429382748394831049284934
hash_big <- digest::digest(bigonumber)
hash_big
# "e47e7d8a9e1b7d74af6a492bf4f27193"
set options(scipen) to a big value so that scientific notation is suppressed (note that the trailing digits still come out wrong, because precision was already lost when the literal was stored as a double):
options(scipen = 999)
bigonumber <- 429382748394831049284934
bigonumber
# [1] 429382748394831019507712
as.character(bigonumber)
# [1] "429382748394831019507712"
Use "scan" to read the file; its "what" parameter lets you define the input type of each column.
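A minimal sketch, assuming the one-column file from the answer above (skip = 1 skips the header line; what = character() reads each value as a string):

```r
vals <- scan("~/Desktop/bignums.txt", what = character(), skip = 1)
```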
If you want numbers as numbers, you can't print all the digits. The digits option allows a maximum of 22 digits (the range is 1 to 22) and is used by the print.default method. You can set it with:
options( digits = 22 )
Even with this option, the printed numbers will still change: a double holds only about 15-17 significant decimal digits, so anything beyond that was already lost when the value was stored and cannot be printed back.