How can I get the value of the non-centrality parameter that gives a probability of exactly 0.9 for different critical values and degrees of freedom?
For example, with significance level 0.05 and 1 degree of freedom (critical value 3.841459), the ncp must equal 10.50742 to give a probability of 0.9:
1 - pchisq(3.841459, 1, 10.50742)
[1] 0.9
Rearrange 1 - pchisq(3.841459, 1, ncp) = 0.9 to pchisq(3.841459, 1, ncp) = 0.1, then wrap abs() around the difference to construct an objective that optim can minimize:
optim( 1, function(x) abs(pchisq(3.841459, 1, x) - 0.1) )
#-------
$par
[1] 10.50742
$value
[1] 1.740301e-08
$counts
function gradient
56 NA
$convergence
[1] 0
$message
NULL
To do a sensitivity analysis, you can serially alter the values of the other parameters:
for( crit.val in seq(2.5, 3.5, by=0.1)) {
print( optim( 1,
function(x) abs(pchisq(crit.val, 1, x) - 0.1),
method="Brent" , lower=0, upper=20)$par)}
[1] 8.194852
[1] 8.375145
[1] 8.553901
[1] 8.731204
[1] 8.907135
[1] 9.081764
[1] 9.255156
[1] 9.427372
[1] 9.598467
[1] 9.768491
[1] 9.937492
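To keep the results instead of just printing them, the same search can be written with sapply (a sketch under the same settings):
# collect the ncp for each critical value into a named vector
crit.vals <- seq(2.5, 3.5, by = 0.1)
ncp <- sapply(crit.vals, function(crit.val)
  optim(1, function(x) abs(pchisq(crit.val, 1, x) - 0.1),
        method = "Brent", lower = 0, upper = 20)$par)
setNames(ncp, crit.vals)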
I'm currently developing a package, NCdistributions, which provides a function doing exactly what you want:
> library(NCdistributions)
> find_chisq_ncp(nu = 1, q = 3.841459, p = 0.1)
[1] 10.50742
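If you prefer base R, note that pchisq(q, df, ncp) is strictly decreasing in ncp, so uniroot can find the same value directly (a sketch, assuming the answer lies in (0, 100)):
# solve pchisq(3.841459, 1, ncp) = 0.1 for ncp
uniroot(function(ncp) pchisq(3.841459, 1, ncp) - 0.1,
        interval = c(0, 100))$root
# approximately 10.50742, matching the optim result above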
The function is:
f1 = function(x) {
-1.3 * (x-0.1)^2+0.5 * (x-0.1)^5
}
I am trying to find the maximum in the interval [-1, 1].
The optimize function returns the right value:
optimize(f1, interval = c(-1, 1), maximum = T)
This gives the result (which is correct):
$maximum
[1] 0.09999769
$objective
[1] -6.942984e-12
However, I am being asked (this question is for homework) to use optim and uniroot. Trying to use optim:
optim(par = c(-1, 1), fn = f1)
results in this error message (and this happens no matter what I do):
Error in optim(par = c(-1, 1), fn = f1) :
objective function in optim evaluates to length 2 not 1
While uniroot gives an answer that is clearly wrong:
uniroot(f1, lower = -1, upper = 1, f.lower = -1, f.upper = 1)
The result (clearly wrong):
$root
[1] 0.9998795
$f.root
[1] -0.7576706
So what is the issue here?
You can try this one:
> optim(0, f1, lower = -1, upper = 1, method = "L-BFGS-B", control = list(fnscale = -1))
$par
[1] 0.1
$value
[1] -4.806839e-26
$counts
function gradient
5 5
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
My function is: function(x) {10*x - x^3 + 6*x^2 - 15*x}
The value of x that maximizes this function is about 3.528; see:
10*3.528 - 3.528^3 + 6*3.528^2 - 15*3.528
But why, when I try to find the value of x that maximizes the function, does optim give me the wrong value?
> optim(c(0), function(x) {10*x - x^3 + 6*x^2 - 15*x}, control = list(fnscale = -1))$par
-180925139433306515188282888004820028006042404082666062660624248000000026088
Try with optimize():
optimize(function(x) {10*x - x^3 + 6*x^2 - 15*x}, interval = c(-2000, 2000), maximum = T)
The maximum and the value are also wrong:
$maximum
[1] -2000
$objective
[1] 8024008559
Seems like there's no global maximum: since the leading term is -x^3, the function grows without bound as x → -∞, so only the local maximum near x ≈ 3.53 exists on the positive side.
Here are some workarounds:
With optimize
optimize(
function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
c(0, .Machine$double.eps**-1),
maximum = TRUE
)
giving
$maximum
[1] 3.527528
$objective
[1] 13.12845
With optim
optim(
0,
fn = function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
method = "Brent",
lower = 0,
upper = .Machine$double.eps**-1,
control = list(fnscale = -1)
)
giving
$par
[1] 3.527525
$value
[1] 13.12845
$counts
function gradient
NA NA
$convergence
[1] 0
$message
NULL
If you don't mind using external packages, you can try fminbnd from package pracma, e.g.,
pracma::fminbnd(
function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
0,
.Machine$double.eps**-1,
maximum = TRUE
)
which gives
$xmin
[1] 3.527525
$fmin
[1] -13.12845
$niter
[1] 103
$estim.prec
[1] 3.527525e-07
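For this polynomial you can also confirm the critical points analytically: the derivative of 10*x - x^3 + 6*x^2 - 15*x is -3*x^2 + 12*x - 5, whose roots are 2 ± sqrt(21)/3:
# roots of the derivative, coefficients in increasing order: -5 + 12x - 3x^2
polyroot(c(-5, 12, -3))
# 2 - sqrt(21)/3 ≈ 0.472 is a local minimum; 2 + sqrt(21)/3 ≈ 3.5275 is the local maximum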
#attempt
optim(c(0.1,0.1),
function(x){x[1]^2*0.05126875+2*((x[1]*x[2])*-0.00809375)+x[2]^2*0.03376875})
How can I create a function that tries values from 0.01 to 1 for x[1] and x[2] and returns the lowest result, subject to the condition x[1] + x[2] = 1?
When you have a constraint and still want to use optim, you can reformulate your constrained optimization problem by substituting x[2] = 1 - x[1], e.g.,
optim(0.1,
function(x) x^2*0.05126875+2*((x*(1-x))*-0.00809375)+(1-x)^2*0.03376875,
lower = 0,
upper = 1,
method = "L-BFGS-B")
which gives
$par
[1] 0.4135589
$value
[1] 0.01645614
$counts
function gradient
4 4
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
For your case, the solution is
x1 = 0.4135589
x2 = 1-x1
x = c(x1,x2)
> x
[1] 0.4135589 0.5864411
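If you literally want to scan values from 0.01 to 1 as described in the question, a brute-force grid search gives essentially the same answer (a sketch):
# evaluate the objective on a grid of x1 values, with x2 = 1 - x1
x1 <- seq(0.01, 1, by = 0.01)
obj <- x1^2 * 0.05126875 + 2 * (x1 * (1 - x1)) * -0.00809375 + (1 - x1)^2 * 0.03376875
x1[which.min(obj)]  # about 0.41 on this grid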
I'm trying to set up a "Solver"-style function that drives the value of "gfc" to zero by varying (and thereby finding) the variable "fc" in the equation below. The parameters are given.
f0 = 6
f1 = 1
k = 2
ft = 0.3
gfc = ft-((f0-fc)/k)+((f1/k)*ln((fc-f1)/(f0-f1)))
Solving this in Excel, I found the value fc = 5.504.
You can use uniroot to find where a function equals zero:
f0 = 6
f1 = 1
k = 2
ft = 0.3
gfc = function(fc) {
ft - ((f0 - fc) / k) + ((f1 / k) * log((fc - f1) / (f0 - f1)))
}
uniroot(gfc, interval = c(f0, f1))
#> $root
#> [1] 5.504386
#>
#> $f.root
#> [1] 6.72753e-09
#>
#> $iter
#> [1] 5
#>
#> $init.it
#> [1] NA
#>
#> $estim.prec
#> [1] 6.103516e-05
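If you need more precision than the default (see estim.prec above), uniroot accepts a tol argument (a sketch):
# tighten the convergence tolerance
uniroot(gfc, interval = c(f0, f1), tol = 1e-10)$root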
I assume that what you mean is that you want to solve for the value of fc for which gfc equals zero. We assume fc lies between f0 and f1. In that case using the constants in the question we have the following base R solutions. (Additionally packages with such functionality include nleqslv and rootSolve.)
1) optimize. We can minimize gfc^2:
gfc <- function(fc) ft-((f0-fc)/k)+((f1/k)*log((fc-f1)/(f0-f1)))
optimize(function(x) gfc(x)^2, c(f0, f1))
giving:
$minimum
[1] 5.504383
$objective
[1] 4.777981e-12
2) uniroot. Or we can do it directly using uniroot:
u <- uniroot(gfc, c(f0, f1))
giving:
> u
$root
[1] 5.504386
$f.root
[1] 6.72753e-09
$iter
[1] 5
$init.it
[1] NA
$estim.prec
[1] 6.103516e-05
3) fixed-point iteration. We can also solve this directly, without optimize or uniroot, by rewriting
gfc(fc) = 0
as the form below, obtained by isolating fc in the (f0 - fc)/k term and moving everything else to the right-hand side.
fc = f0 - k*(ft + ((f1/k)*log((fc-f1)/(f0-f1))))
Writing this as:
fc = f(fc)
we just iterate f.
f <- function(fc) f0 - k*(ft + ((f1/k)*log((fc-f1)/(f0-f1))))
fc <- (f0 + f1)/2 # starting value
for(i in 1:10) fc <- f(fc)
fc
## [1] 5.504386
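Rather than a fixed 10 iterations, you can also iterate until successive values agree to a tolerance (a sketch; this converges here because the update is a contraction near the root):
fc <- (f0 + f1)/2  # starting value
repeat {
  fc.new <- f(fc)
  if (abs(fc.new - fc) < 1e-10) break  # stop once the update is negligible
  fc <- fc.new
}
fc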
4) brute force. Another approach is to evaluate gfc at many points and just pick the one for which gfc^2 is smallest. The finer you subdivide the interval, the more accurate the answer.
s <- seq(f0, f1, length = 100000)
g <- gfc(s)
s[which.min(g^2)]
## [1] 5.504395
Graphics
We can show the solution:
curve(gfc, f0, f1)
abline(h = 0, v = u$root, lty = 2)
axis(1, u$root, round(u$root, 3))
I want to maximize the function below (implemented as EPPF in the code). So I want to find the Theta and Sigma that maximize the function.
The constraints are:
> Theta > -Sigma
> -1 < Sigma < 1
So one of my problems is that I don't know how to deal with the fact that one parameter appears in the constraint on the other parameter that I want to optimize over.
I tried optim(), constrOptim(), and dfoptim!
Using optim():
k=8
i=1:(k-1)
x=c(5,0.2)
n=24
nj=c(3,4,8,1,1,4,2,1)
EPPF <- function(x,n,nj) {
y=(x[1]+1):(x[1]+1+(n-1)-1)
z=-(prod(x[1]+i*x[2])/(prod(y))*prod(sapply(nj, hfun)))
return(z)}
hfun <- function(p){
h=(1-x[2]):((1-x[2])+p-1)
hfun=prod(h)
return(hfun)
}
> optim(c(6,0.3), fn=EPPF,method = "L-BFGS-B", n=n,nj=nj, lower = c(-x[1],-1), upper = c(Inf,1))
$par
[1] 6.0 0.3
$value
[1] -1.258458e-15
$counts
function gradient
2 2
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
I also tried using a constructor function:
make.EPPF <- function(n,nj,fixed=c(FALSE,FALSE)){
params <-fixed
function(p) {
hfun <- function(y){
h=(1-sigma):((1-sigma)+y-1)
hfun=prod(h)
return(hfun)
}
params[!fixed] <- p
theta <- params[1]
sigma <- params[2]
y=(theta+1):(theta+1+(n-1)-1)
z=(prod(theta+i*sigma)/(prod(y))*prod(sapply(nj, hfun)))
z
}
}
EPPF <-make.EPPF (n,nj)
> optim(c(theta=6, sigma=0.5), fn=EPPF,method = "L-BFGS-B",lower = c(-sigma,-1), upper = c(Inf,1))
Error in optim(c(theta = 6, sigma = 0.5), fn = EPPF, method = "L-BFGS-B", :
object 'sigma' not found
Using constrOptim():
> A <- matrix(c(1,1,0,1,0,-1),3,2,byrow=T)
> b <- c(0,-1,-1)
>
> constrOptim(c(3,0.3),EPPF,NULL,A,b, control=list(fnscale=-1))
$par
[1] 3.0 0.3
$value
[1] 9.712117e-16
$counts
[1] 0
$convergence
[1] 0
$message
NULL
$outer.iterations
[1] 1
$barrier.value
[1] 7.313452e-05
Using Package dfoptim:
> library(dfoptim)
> nmkb(x=c(6,0.3), EPPF, lower=c(-x[2],-1), upper=c(Inf, 1 ))
Error in par < lower :
comparison (3) is possible only for atomic and list types
Either there is for some reason no convergence, or there are other errors. I am relatively new to programming and R and would really appreciate it if someone could help me. Thanks!
These are 3 linear inequality constraints:
sigma + theta > 0
sigma + 1 > 0
-sigma + 1 > 0
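maxLik expects such constraints as a list(ineqA, ineqB) with the convention ineqA %*% par + ineqB > 0; for par = c(theta, sigma), the three inequalities above translate to:
# rows: theta + sigma > 0, sigma + 1 > 0, -sigma + 1 > 0
ineqA <- matrix(c(1,  1,
                  0,  1,
                  0, -1), nrow = 3, byrow = TRUE)
ineqB <- c(0, 1, 1)
This is the same matrix built column-wise in the code below.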
You can do this in maxLik. But note that maxLik maximizes the function, hence remove the '-' in front of z. Here is the code that works for me (using Rscript):
k=8
i=1:(k-1)
x=c(5,0.2)
n=24
nj=c(3,4,8,1,1,4,2,1)
EPPF <- function(x, n, nj) {
  theta <- x[1]
  sigma <- x[2]
  y <- (theta + 1):(theta + n - 1)
  z <- prod(theta + i*sigma) / prod(y) * prod(sapply(nj, hfun))
  z <- log(z)  # work on the log scale: the numbers are better behaved
  return(z)
}
# note: hfun takes sigma from the global x (as in the question's code),
# not from the parameter vector currently being optimized
hfun <- function(p) {
  h <- (1 - x[2]):((1 - x[2]) + p - 1)
  prod(h)
}
library(maxLik)
constraints <- list(ineqA=matrix(c(1,0,0,1,1,-1),3,2), ineqB=c(0,1,1))
m <- maxBFGS(EPPF, start=c(6,0.3), constraints=constraints, n=n, nj=nj)
print(summary(m))
I also took the logarithm of the objective, as this leads to more "human" numbers; otherwise you have to re-tune the stopping conditions. The answer seems to be -1, 1.