Define variables in an implicit way - r

I wanted to know how I can define variables in an implicit way in R.
For example, let's assume I have z <- 0.5 and x <- 2, and I want to define y such that the following holds: z = beta(x, y).
Obviously, if I enter z <- beta(x, y), I get the following error: Error in beta(x, y) : object 'y' not found.
I tried to find a solution on Google but, strangely, I didn't find anything.
Thank you in advance!

For your example you could use uniroot to find the value of y:
(y <- uniroot(function(y) beta(x,y)-z, interval=c(0,100)))
$root
[1] 1
$f.root
[1] -1.08689e-07
$iter
[1] 13
$estim.prec
[1] 6.103516e-05
beta(x,y$root)==z
[1] FALSE
all.equal(beta(x,y$root),z, tol=1e-5)
[1] TRUE
beta(x,1)==z
[1] TRUE
However, this relies on a number of assumptions, such as there being only one value that satisfies the equation and your being able to give uniroot a sensible interval. In general your function may not admit a solution, and the search may be slow if you need to compute a large number of y values. You also need to bear in mind that a numerical solution will not be exact, so comparisons must be made with care.
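If you need to do this repeatedly, you could wrap the root-finding in a small helper; here is a minimal sketch (solve_y is a hypothetical name, and the interval still has to be chosen so that it brackets a root):
solve_y <- function(x, z, interval = c(1e-8, 100)) {
  uniroot(function(y) beta(x, y) - z, interval = interval, tol = 1e-10)$root
}
solve_y(2, 0.5)                                          # close to 1, as above
mapply(solve_y, x = c(2, 3, 4), z = c(0.5, 0.2, 0.1))    # several (x, z) pairs at once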

Related

Why does this xlim error occur in circlize initialization?

I want to initialize a new chord diagram with circlize, but I'm getting an error that doesn't seem to make any sense given the data I'm feeding into it:
Error: Since `xlim` is a matrix, it should have same number of rows as the length of the level of `sectors` and number of columns of 2.
I understand the requirement, but when I try to produce different plots, it fails for some but not others. Here's the relevant code snippet with some output for debugging:
dev.new()
circos.clear()
circos.par(cell.padding=c(0,0,0,0), track.margin=c(0,0.01), gap.degree=1)
xlim = cbind(0, regionTotal)
print(class(region))
print(length(region))
print(class(xlim))
print(dim(xlim))
circos.initialize(factors=region, xlim=xlim)
The output for a plot that works fine:
[1] "character"
[1] 24
[1] "matrix" "array"
[1] 24 2
And for one that returns the error:
[1] "character"
[1] 50
[1] "matrix" "array"
[1] 50 2
Error: Since `xlim` is a matrix, it should have same number of rows as the length of the level of `sectors` and number of columns of 2.
I am aware of these questions:
this one led me to check the class
and this one led me to check my circlize version (0.4.11)
What am I missing? Thanks for any help you can provide.
After a lot of hair pulling, I figured out the problem: there was a repeated value in my region variable (the factors or sectors argument in circos.initialize), so the effective number of sectors was lower than the length of the variable. Hopefully nobody else is dumb enough to make this mistake, but just in case they are, now they have an additional thing to check if they come across this error.
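A quick sanity check along these lines would have caught it before the call; this is just a sketch, using the region and xlim variables from the question:
any(duplicated(region))               # TRUE in the failing case: a sector name repeats
length(unique(region)) == nrow(xlim)  # must be TRUE for circos.initialize to accept a matrix xlim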

Numerical problems with qnorm

I'm having a numerical issue with qnorm(psn()).
Firstly, the skew-normal CDF rounds the result: mathematically psn(9) is not 1, yet it returns 1:
library(sn)
psn(9)
#[1] 1
then
qnorm(psn(9))
#[1] Inf
And see that:
qnorm(.9999999999999999)
#[1] 8.209536
qnorm(.99999999999999999)
#[1] Inf
Note that 8.209536 is not that big, so this rounding is very imprecise.
My final goal is to calculate qnorm(psn()), which is part of my copula density. Any hint on how I can avoid these numerical problems?
(This is not a resolution to your dilemma, more an explanation of why I think you're seeing this and why you're perhaps not likely to find an easy solution.)
I think this is getting into the realm where normal floating-point precision isn't going to work for you. For instance, doing the inverse of your function:
options(digits=22)
pnorm(8.209536)
# [1] 0.99999999999999989
pnorm(8.209536) - 1
# [1] -1.1102230246251565e-16
which is very close to
.Machine$double.eps
# [1] 2.2204460492503131e-16
which, according to ?.Machine, is
double.eps: the smallest positive floating-point number 'x' such that
'1 + x != 1'. It equals 'double.base ^ ulp.digits' if either
'double.base' is 2 or 'double.rounding' is 0; otherwise, it
is '(double.base ^ double.ulp.digits) / 2'. Normally
'2.220446e-16'.
It might be possible to translate what you need into higher precision using auxiliary packages like gmp or Rmpfr. (I don't know if they support qnorm-like operations.)
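One thing that does help within ordinary doubles, at least on the normal side, is to avoid representing probabilities close to 1 directly: base R's pnorm() and qnorm() accept lower.tail = FALSE (and log.p = TRUE). A minimal sketch follows; note this only helps if the skew-normal CDF can also be evaluated in the upper tail or on the log scale, which I'm not sure sn::psn supports:
pnorm(9)                                       # rounds to 1, so qnorm() of it is Inf
# [1] 1
pnorm(9, lower.tail = FALSE)                   # the tiny upper-tail probability survives
# [1] 1.128588e-19
qnorm(pnorm(9, lower.tail = FALSE), lower.tail = FALSE)
# [1] 9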

R: How to convert long number to string to save precision

I have a problem converting a long number to a string in R. How can I easily convert a number to a string while preserving precision? I have a simple example below.
a = -8664354335142704128
toString(a)
[1] "-8664354335142704128"
b = -8664354335142703762
toString(b)
[1] "-8664354335142704128"
a == b
[1] TRUE
I expected toString(a) and toString(b) to be different, but I got the same value for both. I suppose toString() converts the number to a float or something like that before converting it to a string.
Thank you for your help.
Edit:
> -8664354335142704128 == -8664354335142703762
[1] TRUE
> along = bit64::as.integer64(-8664354335142704128)
> blong = bit64::as.integer64(-8664354335142703762)
> along == blong
[1] TRUE
> blong
integer64
[1] -8664354335142704128
I also tried:
> as.character(blong)
[1] "-8664354335142704128"
> sprintf("%f", -8664354335142703762)
[1] "-8664354335142704128.000000"
> sprintf("%f", blong)
[1] "-0.000000"
Edit 2:
My question at first was whether I can convert a long number to a string without loss. Then I realized that in R it is impossible to get the real value of a long number passed into a function, because R automatically reads the value with the loss already applied.
For example, I have the function:
> my_function <- function(long_number){
+ string_number <- toString(long_number)
+ print(string_number)
+ }
If someone uses it and passes a long number, I am not able to tell which number was passed exactly.
> my_function(-8664354335142703762)
[1] "-8664354335142704128"
For example, if I read some numbers from a file, it is easy. But that is not my case. I just need to use whatever a user passes in.
I am not an R expert, so I was just curious why this works in other languages but not in R. For example, in Python:
>>> def my_function(long_number):
... string_number = str(long_number)
... print(string_number)
...
>>> my_function(-8664354335142703762)
-8664354335142703762
Now I know the problem is how R reads and stores numbers. Every language can do it differently. I have to change the way I pass numbers to the R function, and that solves my problem.
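In practice that change can be as small as accepting the number as a string and converting it inside the function; a sketch of such a rewrite of my_function (using bit64, as in the answer below):
my_function <- function(long_number_string){
  along <- bit64::as.integer64(long_number_string)
  print(along)
}
my_function("-8664354335142703762")
# integer64
# [1] -8664354335142703762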
So the correct answer to my question is:
"'I suppose toString() converts the number to float', nope, you did it yourself (even if unintentionally)." - Nope, R did it itself; that is the way R reads numbers.
So I marked r2evans's answer as the best answer because this user helped me find the right solution. Thank you!
Bottom line up front: you must (in this case) read in your large numbers as strings before converting them to 64-bit integers:
bit64::as.integer64("-8664354335142704128") == bit64::as.integer64("-8664354335142703762")
# [1] FALSE
Some points about what you've tried:
"I suppose toString() converts the number to float", nope, you did it yourself (even if unintentionally). In R, when creating a number, 5 is a float and 5L is an integer. Even if you had tried to create it as an integer, it would have complained and lost precision anyway:
class(5)
# [1] "numeric"
class(5L)
# [1] "integer"
class(-8664354335142703762)
# [1] "numeric"
class(-8664354335142703762L)
# Warning: non-integer value 8664354335142703762L qualified with L; using numeric value
# [1] "numeric"
More appropriately, when you type it in as a number and then try to convert it, R processes the inside of the parentheses first. That is, with
bit64::as.integer64(-8664354335142704128)
R first has to parse and "understand" everything inside the parentheses before it can be passed to the function. (This is typically a compiler/language-parsing thing, not just an R thing.) In this case, it sees that it appears to be a (large) negative float, so it creates a class numeric (float). Only then does it send this numeric to the function, but by this point the precision has already been lost. Ergo the otherwise-illogical
bit64::as.integer64(-8664354335142704128) == bit64::as.integer64(-8664354335142703762)
# [1] TRUE
In this case, it just happens that the 64-bit version of that number is equal to what you intended.
bit64::as.integer64(-8664254335142704128) # ends in 4128
# integer64
# [1] -8664254335142704128 # ends in 4128, yay! (coincidence?)
If you subtract one, it results in the same effective integer64:
bit64::as.integer64(-8664354335142704127) # ends in 4127
# integer64
# [1] -8664354335142704128 # ends in 4128 ?
This continues for quite a while, until it finally shifts to the next rounding point
bit64::as.integer64(-8664254335142703617)
# integer64
# [1] -8664254335142704128
bit64::as.integer64(-8664254335142703616)
# integer64
# [1] -8664254335142703104
It is unlikely to be coincidence that the difference is 1024, or 2^10: a double carries a 53-bit significand, so for numbers between 2^62 and 2^63 (and this one is about 2^62.9) the spacing between adjacent representable doubles is 2^(62-52) = 1024.
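You can check that spacing from the floating-point parameters themselves; a short sketch relying only on .Machine and the number from the question:
.Machine$double.digits                        # bits in the double significand
# [1] 53
2^(floor(log2(8664354335142703762)) - 52)     # spacing of adjacent doubles near 8.66e18
# [1] 1024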
Fortunately, bit64::as.integer64 has several S3 methods, useful for converting different formats/classes to integer64:
library(bit64)
methods(as.integer64)
# [1] as.integer64.character as.integer64.double as.integer64.factor
# [4] as.integer64.integer as.integer64.integer64 as.integer64.logical
# [7] as.integer64.NULL
So, bit64::as.integer64.character can be useful, since precision is not lost when you type it or read it in as a string:
bit64::as.integer64("-8664354335142704128")
# integer64
# [1] -8664354335142704128
bit64::as.integer64("-8664354335142704128") == bit64::as.integer64("-8664354335142703762")
# [1] FALSE
FYI, your number is already near the 64-bit boundary:
-.Machine$integer.max
# [1] -2147483647
-(2^31-1)
# [1] -2147483647
log(8664354335142704128, 2)
# [1] 62.9098
-2^63 # the approximate +/- range of 64-bit integers
# [1] -9.223372e+18
-8664354335142704128
# [1] -8.664354e+18

Loss of decimal places when calculating mean in R

I have a list entitled SET1Bearing1slope with nine numbers, and each number has at least 10 decimal places. When I use the mean() function on the list, I get an arithmetic mean. Yet if I list the numbers individually and then use the mean() function, I get a different output.
I know that this is caused by rounding and that the second mean is more accurate. Is there a way to avoid this issue? What method can I use to avoid rounding errors when calculating the mean?
In R, mean() expects a vector of values, not multiple values. It is also a generic function so it is tolerant of additional parameters it doesn't understand (but doesn't warn you about them). See
mean(c(1,5,6))
# [1] 4
mean(1, 5, 6) #only "1" is used here, 5 and 6 are ignored.
# [1] 1
So in your example there are no rounding errors; you are just calling the function incorrectly.
Look at the difference in the way you're calling the function:
mean(c(1,2,5))
[1] 2.666667
mean(1,2,5)
[1] 1
As pointed out by MrFlick, in the first case you're passing a vector of numbers (the correct way); in the second, you're passing a list of arguments, and just the first one is considered.
As for the number of digits, you can specify it using options():
options(digits = 10)
x <- runif(10)
x
[1] 0.49957540398 0.71266139182 0.07266473584 0.90541790240 0.41799820261
[6] 0.59809536533 0.88133668737 0.17078919476 0.92475634208 0.48827998806
mean(x)
[1] 0.5671575214
But remember that a greater number of digits is not necessarily better. There's a reason why R and other languages limit the number of digits. Check this topic: https://en.wikipedia.org/wiki/Significance_arithmetic
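Note that only the printed display is rounded; the value returned by mean(x) is the same double either way. If you only want more digits for a single result, without changing the global option, something like this works (a sketch):
print(mean(x), digits = 15)
sprintf("%.15f", mean(x))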

exponents and negative numbers

I do not know if other R users have found the following problem.
Within R I do the following operation:
> (3/-2)^(1/3)
[1] NaN
I obtain a NaN result.
In a similar way, if I set:
> w<-(3/-2)
> g<-1/3
> w^g
[1] NaN
However, if I do:
> 3/-2
[1] -1.5
> -1.5^(1/3)
[1] -1.144714
Is there anybody who can explain this contradiction?
Where do you see a problem? -1.5^(1/3) is not the same as (-1.5)^(1/3). If you have basic maths education you shouldn't expect these to be the same.
Read help("Syntax") to learn that ^ has higher precedence than - in R.
This is due to the mathematical definition of exponentiation. For the continuous real exponentiation operator, you are not allowed to have a negative base.
Begin by doing (3/2)^(1/3) and afterwards add the "-".
You can't compute a fractional power of a negative base with ^!
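If what you actually want is the real cube root of a negative number, a common workaround (a sketch, not taken from the answers here) is to take the root of the absolute value and put the sign back:
x <- -1.5
sign(x) * abs(x)^(1/3)
# [1] -1.144714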
If you really want the answer you can do the computation over the complex numbers, i.e. get the cube root of -1.5+0i:
complex(real=-1.5,im=0)^(1/3)
## [1] 0.5723571+0.9913516i
This is actually only one of three complex roots of x^3+1.5==0:
polyroot(c(1.5,0,0,1))
[1] 0.5723571+0.9913516i -1.1447142+0.0000000i 0.5723571-0.9913516i
but the other answers probably come closer to addressing your real question.
