I have worked out a probability in R (equal to 1.7715e-06) and I would like to view it to 10 decimal places. Does anybody know how to display it with 10 decimal places? I've tried the round() and signif() functions but they aren't what I need.
I got this answer (1.7715e-06) using:
dbinom(11, 11, 0.3)
A possible solution:
format(round(dbinom(11, 11, 0.3), 10), scientific = FALSE)
#> [1] "0.0000017715"
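Another option, if you'd rather avoid combining round() and format(), is formatC(): with format = "f" it prints fixed (non-scientific) notation, and digits then counts decimal places.

```r
# format = "f" forces fixed notation; digits = 10 gives 10 decimal places
formatC(dbinom(11, 11, 0.3), format = "f", digits = 10)
#> [1] "0.0000017715"
```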
You can also do this:
options(scipen = 999)
Then:
.3^11
[1] 0.00000177147
or
round(.3^11, 10)
[1] 0.0000017715
or
dbinom(11, 11, .3)
[1] 0.00000177147
or
round(dbinom(11, 11, .3), 10)
[1] 0.0000017715
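One thing to keep in mind with this approach: scipen is a global option, so it affects all subsequent printing in the session. Since options() returns the previous values, it is easy to restore the default afterwards:

```r
# options() invisibly returns the old settings, so you can put them back later
old <- options(scipen = 999)
round(dbinom(11, 11, 0.3), 10)
# [1] 0.0000017715
options(old)  # back to the default scientific formatting
```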
Related
In R, I have the following vector of numbers:
numbers <- c(0.0193738397702257, 0.0206218006695066, 0.021931558829559,
0.023301378178208, 0.024728095594751, 0.0262069239112787, 0.0277310799996657,
0.0292913948762414, 0.0308758879014822, 0.0324693108459748, 0.0340526658271053,
0.03560271425176, 0.0370915716288017, 0.0384863653635563, 0.0397490272396821,
0.0408363289939899, 0.0417002577578561, 0.0422890917131629, 0.0425479537267193,
0.0424213884467212, 0.0418571402964338, 0.0408094991140723, 0.039243951482081,
0.0371450856007627, 0.0345208537496488, 0.0314091884865658, 0.0278854381969885,
0.0240607638577763, 0.0200808932436969, 0.0161193801903312, 0.0123615428382314,
0.00920410652651576, 0.00628125319205829, 0.0038816517651031,
0.00214210795679701, 0.00103919307280354, 0.000435532895812429,
0.000154730641092234, 4.56593150728962e-05, 1.09540661898799e-05,
2.08952167815574e-06, 3.10045314287095e-07, 3.51923218134997e-08,
3.02121734299694e-09, 1.95269500257237e-10, 9.54697530552714e-12,
3.5914029230041e-13, 1.07379981978647e-14, 2.68543048763588e-16,
6.03891613157815e-18, 1.33875697089866e-19, 3.73885699170518e-21,
1.30142752487978e-22, 5.58607581840324e-24, 2.92551478380617e-25,
1.85002124085815e-26, 1.39826890505611e-27, 1.25058972437096e-28,
1.31082961467944e-29, 1.59522437605631e-30, 2.23371981458205e-31,
3.5678974253211e-32, 6.44735482309705e-33, 1.30771083084868e-33,
2.95492180915218e-34, 7.3857554006177e-35, 2.02831084124162e-35,
6.08139499028838e-36, 1.97878175996974e-36, 6.94814886769478e-37,
2.61888070029751e-37, 1.05433608968287e-37, 4.51270543356897e-38,
2.04454840598946e-38, 9.76544451781597e-39, 4.90105271869773e-39,
2.5743371658684e-39, 1.41165292292001e-39, 8.06250933233367e-40,
4.78746160076622e-40, 2.94835809615626e-40, 1.87667170875529e-40,
1.22833908072915e-40, 8.21091993733535e-41, 5.53869254991177e-41,
3.74485710867631e-41, 2.52485401054841e-41, 1.69027430542613e-41,
1.12176290106797e-41, 7.38294520887852e-42, 4.8381070000246e-42,
3.20123319815522e-42, 2.16493953538386e-42, 1.50891804884267e-42,
1.09057070511506e-42, 8.1903023226717e-43, 6.3480235351625e-43,
5.13533594742621e-43, 4.25591269645348e-43, 3.57422485839717e-43,
3.0293235331048e-43, 2.58514651313175e-43, 2.21952686649801e-43,
1.91634521841049e-43, 1.66319240529025e-43, 1.45043336371471e-43,
1.27052593975384e-43, 1.11752052211757e-43, 9.86689196888877e-44,
8.74248543892126e-44)
I use cumsum to get the cumulative sum. Due to the limits of R's double-precision arithmetic, many of the numbers towards the end of the cumulative vector are now stored as exactly 1 (even though mathematically they are not exactly 1, just very close to it).
So then when I try to recover my original numbers by using diff(cumulative), I get a lot of 0s instead of a very small number. How can I prevent R from "rounding"?
cumulative <- cumsum(numbers)
diff(cumulative)
I think the Rmpfr package does what you want:
library(Rmpfr)
x <- mpfr(numbers,200) # set arbitrary precision that's greater than R default
cumulative <- cumsum(x)
diff(cumulative)
Here's the top and bottom of the output:
> diff(cumulative)
109 'mpfr' numbers of precision 200 bits
[1] 0.02062180066950659862445860426305443979799747467041015625
[2] 0.021931558829559001655429284483034280128777027130126953125
[3] 0.02330137817820800150148130569505156017839908599853515625
[4] 0.0247280955947510004688805196337852976284921169281005859375
...
[107] 1.117520522117570086014450710640040701536080790307716261438975e-43
[108] 9.866891968888769759087690539062888824928577731689952701181586e-44
[109] 8.742485438921260418707338389502002282130643811990663213422948e-44
You can adjust the precision as you like by changing the second argument to mpfr.
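As a quick sanity check, you can confirm that the high-precision diff() recovers the original small numbers, where plain doubles returned zeros. (asNumeric() converts back to ordinary doubles, so the comparison is only at double precision.)

```r
library(Rmpfr)
x <- mpfr(numbers, 200)                    # 200-bit precision
recovered <- asNumeric(diff(cumsum(x)))    # round-trip back to doubles
all.equal(recovered, numbers[-1])          # TRUE: the small values survive
```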
You might want to try out the package Rmpfr.
It may seem a silly question, but I have searched online and still have not found a satisfactory answer.
My question is: suppose we have a matrix M and we apply the scale() function to it. How can we extract the center and scale of each column with a single line of code? (I know we can see the centers and scales, but my matrix has many columns, so doing it manually is cumbersome.)
Any ideas? Many thanks!
You are looking for the attributes() function:
set.seed(1)
mat <- matrix(rnorm(1000), ncol = 10)  # suppose you have 10 columns
s <- scale(mat)                        # scale your data
attributes(s)  # this shows the means and the standard deviations
$`dim`
[1] 100 10
$`scaled:center`
[1] 0.1088873669 -0.0378080766 0.0296735350 0.0516018586 -0.0391342406 -0.0445193567 -0.1995797418
[8] 0.0002549694 0.0100772648 0.0040650015
$`scaled:scale`
[1] 0.8981994 0.9578791 1.0342655 0.9916751 1.1696122 0.9661804 1.0808358 1.0973012 1.0883612 1.0548091
These values can also be obtained as:
colMeans(mat)
[1] 0.1088873669 -0.0378080766 0.0296735350 0.0516018586 -0.0391342406 -0.0445193567 -0.1995797418
[8] 0.0002549694 0.0100772648 0.0040650015
sqrt(diag(var(mat)))
[1] 0.8981994 0.9578791 1.0342655 0.9916751 1.1696122 0.9661804 1.0808358 1.0973012 1.0883612 1.0548091
attributes() returns a list that you can subset however you want. Alternatively, you can extract the individual attributes directly:
attr(s,"scaled:center")
[1] 0.1088873669 -0.0378080766 0.0296735350 0.0516018586 -0.0391342406 -0.0445193567 -0.1995797418
[8] 0.0002549694 0.0100772648 0.0040650015
attr(s,"scaled:scale")
[1] 0.8981994 0.9578791 1.0342655 0.9916751 1.1696122 0.9661804 1.0808358 1.0973012 1.0883612 1.0548091
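A common reason for wanting these attributes is to apply the same transformation to new data (e.g. a test set). A minimal sketch, where new_mat is hypothetical new data with the same 10 columns:

```r
centers <- attr(s, "scaled:center")
scales  <- attr(s, "scaled:scale")

# scale new data with the original centers/scales rather than its own
new_mat    <- matrix(rnorm(50), ncol = 10)   # hypothetical new data
new_scaled <- scale(new_mat, center = centers, scale = scales)
```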
I have calculated the required safety stock using Excel's Goal Seek function; the image below shows the setup.
But now I want to do the same using R.
The function below reproduces the Excel result when I enter the safety stock (SS) and standard deviation (SD). Now I need the reverse calculation: whenever I provide x and SD, I need the SS. Could someone help me with this?
I tried Optix and other similar R packages but couldn't succeed.
opt <- function(SS, SD) {
  x <- -SS * (1 - pnorm(SS/SD)) + SD * dnorm(SS/SD, mean = 0, sd = 1)
  print(x)
}
(Image: Excel Goal Seek)
Solving f(x)=c for x is the same as solving f(x)-c=0. You can use uniroot to find the root:
f <- function(SS, SD, ESC) {
  -SS * (1 - pnorm(SS/SD)) + SD * dnorm(SS/SD, mean = 0, sd = 1) - ESC
}
zero <- uniroot(f, c(0, 1000), SD = 600, ESC = 39.3)
zero$root
The second argument is the interval to search: between 0 and 1000. This returns
674.0586
The zero object contains more useful information:
$root
[1] 674.0586
$f.root
[1] 1.933248e-08
$iter
[1] 8
$init.it
[1] NA
$estim.prec
[1] 6.103516e-05
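As a check, plugging the root back into the left-hand side should reproduce the target value (here 39.3). A minimal sketch using the same expression as f above, without the ESC offset:

```r
# same formula as f, but returning the raw value instead of the offset
g <- function(SS, SD) -SS * (1 - pnorm(SS/SD)) + SD * dnorm(SS/SD)
g(674.0586, 600)
# approximately 39.3, matching the Goal Seek target
```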
I want to know how I can multiply very large values in R.
R returns Inf!
For example:
6.350218e+277*2.218789e+215
[1] Inf
Let me clarify the problem further:
consider the following code and the results of the outFunc function:
library(hypergeo)
poch <- function(a, b) gamma(a + b) / gamma(a)
n<-c(37 , 41 , 4 , 9 , 12 , 13 , 2 , 5 , 23 , 73 , 129 , 22 , 121 )
v<-c(90.2, 199.3, 61, 38, 176.3, 293.6, 318.6, 328.7, 328.1, 313.3, 142.4, 92.9, 95.5)
DF<-data.frame(n,v)
outFunc <- function(k, w, r, lam, a, b) {
  ((((w * lam)^k) * poch(r, k) * poch(a, b)) *
    hypergeo(r + k, a + k, a + b + k, -(w * lam))) / (poch(a + k, b) * factorial(k))
}
and the function returns:
outFunc(DF$n,DF$v,0.2, 1, 3, 1)
[1] 0.002911330+ 0i 0.003047594+ 0i 0.029886646+ 0i 0.013560599+ 0i 0.010160073+ 0i
[6] 0.008928524+ 0i 0.040165795+ 0i 0.019402318+ 0i 0.005336008+ 0i 0.001689114+ 0i
[11] Inf+NaNi 0.005577985+ 0i Inf+NaNi
As can be seen above, outFunc returns Inf+NaNi for n values of 129 and 121.
I checked the code section by section and found that the result of (w*lam)^k * poch(r,k) for these n values is Inf. I also checked my code against equivalent Mathematica code, where everything is OK:
in: out[indata[[All, 1]], indata[[All, 2]], 0.2, 1, 3, 1]
out: {0.00291133, 0.00304759, 0.0298866, 0.0135606, 0.0101601, 0.00892852, \
0.0401658, 0.0194023, 0.00533601, 0.00168911, 0.000506457, \
0.00557798, 0.000365445}
Now please let me know how we can solve this issue as simply as it is solved in Mathematica. Regards.
One option available in base R, which does not require a special library, is to convert the two numbers to a common logarithmic base and then add the exponents to get the final result:
> x <- log(6.350218e+277, 10)
> x
[1] 277.8028
> y <- log(2.218789e+215, 10)
> y
[1] 215.3461
> x + y
[1] 493.1489
Since 10^x * 10^y = 10^(x+y), your final answer is 10^493.1489
Note that this solution does not let you actually store numbers that R would normally treat as Inf. In this example you still cannot compute 10^493.1489 directly, but you can work out what the product would be.
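If you also want the mantissa, split the summed logarithm into its integer and fractional parts (a sketch of the same idea):

```r
z <- log10(6.350218e+277) + log10(2.218789e+215)
mantissa <- 10^(z - floor(z))  # fractional part gives the mantissa
exponent <- floor(z)           # integer part gives the power of ten
c(mantissa, exponent)
# roughly 1.409 and 493, i.e. the product is about 1.409e+493
```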
First, I'd recommend two useful reads: logarithms and how floating-point values are handled by a computer. These are pertinent because with some "tricks" you can handle much bigger values than you might think. For instance, your definition of the poch function is problematic: the fraction can be simplified a lot, but a computer evaluates the numerator first, and if that overflows the result is useless. That's why R provides the lgamma function alongside gamma: it calculates the logarithm of gamma directly and can handle much bigger values. So we calculate the log of each factor in your function and then use exp to restore the intended values. Try this:
# redefine poch properly, on the log scale
poch <- function(a, b) lgamma(a + b) - lgamma(a)
# redefine outFunc to work in logs, exponentiating at the end
outFunc <- function(k, w, r, lam, a, b) {
  exp((k * (log(w) + log(lam)) + poch(r, k) + poch(a, b)) +
      log(hypergeo(r + k, a + k, a + b + k, -(w * lam))) -
      poch(a + k, b) - lgamma(k + 1))
}
# now we go
outFunc(DF$n, DF$v, 0.2, 1, 3, 1)
#[1] 0.0029113299+0i 0.0030475939+0i 0.0298866458+0i 0.0135605995+0i
#[5] 0.0101600732+0i 0.0089285243+0i 0.0401657947+0i 0.0194023182+0i
#[9] 0.0053360084+0i 0.0016891144+0i 0.0005064566+0i 0.0055779850+0i
#[13] 0.0003654449+0i
> library(gmp)
> x<- pow.bigz(6.350218,277)
> y<- pow.bigz(2.218789,215)
> x*y
Big Integer ('bigz') :
[1] 18592826814872791919942226542714580401488894909642693257011204682802122918146288728149155739011270579948954646130492024596687919148494136290260248656581476275790189359808616520170359345612068099238508437236172770752199936303947098513476300142414338199993261924467166943683593371648
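One caveat with this gmp approach: bigz is integer arithmetic, so the fractional parts of the mantissas are not carried exactly. For floating-point multiplication at arbitrary precision, the Rmpfr package mentioned in an earlier answer also handles exponents far beyond the double range (a sketch):

```r
library(Rmpfr)
# mpfr floats carry their own exponent range, well past double's ~1e308 limit
mpfr(6.350218e+277, 120) * mpfr(2.218789e+215, 120)
# roughly 1.409e+493
```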
When I print a vector in R line by line with cat(), the results are rounded differently than in the usual print output:
> dbinom(0:10, 10, 0.95)
[1] 9.765625e-14 1.855469e-11 1.586426e-09 8.037891e-08 2.672599e-06
[6] 6.093525e-05 9.648081e-04 1.047506e-02 7.463480e-02 3.151247e-01
[11] 5.987369e-01
> options(scipen=999)
> dbinom(0:10, 10, 0.95)
[1] 0.00000000000009765625 0.00000000001855468750 0.00000000158642578125
[4] 0.00000008037890625000 0.00000267259863281252 0.00006093524882812524
[7] 0.00096480810644531680 0.01047505944140628489 0.07463479852001964066
[10] 0.31512470486230492739 0.59873693923837867370
> cat(dbinom(0:10, 10, 0.95), sep = "\n")
0.00000000000009765625
0.00000000001855469
0.000000001586426
0.00000008037891
0.000002672599
0.00006093525
0.0009648081
0.01047506
0.0746348
0.3151247
0.5987369
How can I preserve the decimal places?
Try this, using sprintf():
> cat(sprintf("%.20f", dbinom(0:10, 10, 0.95)),sep="\n")
0.00000000000009765625
0.00000000001855468750
0.00000000158642578125
0.00000008037890625000
0.00000267259863281251
0.00006093524882812514
0.00096480810644531680
0.01047505944140628489
0.07463479852001966841
0.31512470486230481637
0.59873693923837867370
I should also mention that any precision beyond about 15 significant digits is probably spurious when using double-precision floating point. Notice that 0.31512470486230492739 in your output and 0.31512470486230481637 in mine don't match beyond 15 digits.
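The ~15-digit limit comes from double precision itself. A quick way to see it (a minimal sketch):

```r
# .Machine$double.eps is the smallest x with 1 + x != 1 in double precision
.Machine$double.eps
# [1] 2.220446e-16  -- about 15-16 significant decimal digits

print(0.95^10, digits = 17)
# digits beyond roughly the 16th are not meaningful
```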