Minimum with condition x[1] + x[2] = 1 using optim

# attempt
optim(c(0.1, 0.1),
      function(x) {x[1]^2*0.05126875 + 2*((x[1]*x[2])*-0.00809375) + x[2]^2*0.03376875})
How can I create a function that generates values from 0.01 to 1 for x[1] and x[2] and returns the lowest result, subject to the condition x[1] + x[2] = 1?

When you have a constraint and still want to use optim, you can reformulate your constrained optimization problem: since x[2] = 1 - x[1], substitute 1 - x into the objective and optimize over a single variable, e.g.,
optim(0.1,
      function(x) x^2*0.05126875 + 2*((x*(1-x))*-0.00809375) + (1-x)^2*0.03376875,
      lower = 0,
      upper = 1,
      method = "L-BFGS-B")
which gives
$par
[1] 0.4135589
$value
[1] 0.01645614
$counts
function gradient
4 4
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
For your case, the solution is
x1 = 0.4135589
x2 = 1-x1
x = c(x1,x2)
> x
[1] 0.4135589 0.5864411
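If you literally want to generate values from 0.01 to 1 as described in the question, a small grid search works too; a sketch (mine, assuming a step of 0.01, with the constraint enforced by setting x[2] = 1 - x[1]):
# evaluate the objective on the grid x1 = 0.01, 0.02, ..., 1 with x2 = 1 - x1
f <- function(x) x[1]^2*0.05126875 + 2*((x[1]*x[2])*-0.00809375) + x[2]^2*0.03376875
x1 <- seq(0.01, 1, by = 0.01)
vals <- sapply(x1, function(a) f(c(a, 1 - a)))
x1[which.min(vals)]  # 0.41, close to the continuous optimum 0.4135589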

Related

Find maximum value of an equation with optim()

My equation is: function(x){10*x - x^3 + 6*x^2 - 15*x}
The value of x that maximizes this function is about 3.528; see:
10*3.528 - 3.528^3 + 6*3.528^2 - 15*3.528
But why, when I try to find the value of x that maximizes the function, does optim give me the wrong value?
> optim(c(0), function(x){10*x - x^3 + 6*x^2 - 15*x}, control = list(fnscale = -1))$par
-180925139433306515188282888004820028006042404082666062660624248000000026088
Try with optimize():
optimize(function(x){10*x - x^3 + 6*x^2 - 15*x}, interval = c(-2000, 2000), maximum = TRUE)
The maximum and the value are also wrong:
$maximum
[1] -2000
$objective
[1] 8024008559
It seems there's no global maximum: the objective simplifies to -x^3 + 6*x^2 - 5*x, which is unbounded above as x decreases (hence optimize running off to the endpoint -2000), so there is only a local maximum near x = 3.53.
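To see where that local maximum sits, you can (this is my addition, not part of the original answer) locate the critical points analytically with polyroot():
# f'(x) = -3*x^2 + 12*x - 5; polyroot() takes coefficients in increasing
# order of degree. The larger root, x ~ 3.5275, is the local maximum,
# matching the numeric results below; the smaller is the local minimum.
sort(Re(polyroot(c(-5, 12, -3))))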
Here are some workarounds:
With optimize
optimize(
function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
c(0, .Machine$double.eps**-1),
maximum = TRUE
)
giving
$maximum
[1] 3.527528
$objective
[1] 13.12845
With optim
optim(
0,
fn = function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
method = "Brent",
lower = 0,
upper = .Machine$double.eps**-1,
control = list(fnscale = -1)
)
giving
$par
[1] 3.527525
$value
[1] 13.12845
$counts
function gradient
NA NA
$convergence
[1] 0
$message
NULL
If you don't mind using external packages, you can try fminbnd from package pracma, e.g.,
pracma::fminbnd(
function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
0,
.Machine$double.eps**-1,
maximum = TRUE
)
which gives (with maximum = TRUE, fminbnd minimizes the negated function, so the reported fmin is -1 times the maximum):
$xmin
[1] 3.527525
$fmin
[1] -13.12845
$niter
[1] 103
$estim.prec
[1] 3.527525e-07

Optimize a Solver function

I'm trying to set up a "Solver"-style function to drive the value of "gfc" to zero by varying (and finding) the variable "fc" in the equation below. The parameters are given.
f0 = 6
f1 = 1
k = 2
ft = 0.3
gfc = ft-((f0-fc)/k)+((f1/k)*ln((fc-f1)/(f0-f1)))
Solving this in Excel, I found the value fc = 5.504.
You can use uniroot to find where a function equals zero (uniroot takes lower = min(interval) and upper = max(interval), so interval = c(f0, f1) works even though f0 > f1):
f0 = 6
f1 = 1
k = 2
ft = 0.3
gfc = function(fc) {
  ft - ((f0 - fc) / k) + ((f1 / k) * log((fc - f1) / (f0 - f1)))
}
uniroot(gfc, interval = c(f0, f1))
#> $root
#> [1] 5.504386
#>
#> $f.root
#> [1] 6.72753e-09
#>
#> $iter
#> [1] 5
#>
#> $init.it
#> [1] NA
#>
#> $estim.prec
#> [1] 6.103516e-05
I assume that what you mean is that you want to solve for the value of fc at which gfc equals zero. We assume fc lies between f1 and f0. In that case, using the constants in the question, we have the following base R solutions. (Packages with such functionality include nleqslv and rootSolve; see the sketch at the end of this answer.)
1) optimize: we can minimize gfc^2:
gfc <- function(fc) ft-((f0-fc)/k)+((f1/k)*log((fc-f1)/(f0-f1)))
optimize(function(x) gfc(x)^2, c(f0, f1))
giving:
$minimum
[1] 5.504383
$objective
[1] 4.777981e-12
2) uniroot: we can do it directly using uniroot:
u <- uniroot(gfc, c(f0, f1))
giving:
> u
$root
[1] 5.504386
$f.root
[1] 6.72753e-09
$iter
[1] 5
$init.it
[1] NA
$estim.prec
[1] 6.103516e-05
3) fixed point: we can also solve this directly, without optimize or uniroot, by rewriting
gfc(fc) = 0
as the following, where we have moved the (f0 - fc)/k term to the LHS and then isolated fc, putting everything else on the RHS:
fc = f0 - k*(ft + ((f1/k)*log((fc-f1)/(f0-f1))))
Writing this as:
fc = f(fc)
we just iterate f.
f <- function(fc) f0 - k*(ft + ((f1/k)*log((fc-f1)/(f0-f1))))
fc <- (f0 + f1)/2 # starting value
for(i in 1:10) fc <- f(fc)
fc
## [1] 5.504386
4) brute force: another approach is to evaluate gfc at many points and just pick the one for which gfc^2 is smallest. The finer you subdivide the interval, the more accurate the answer.
s <- seq(f0, f1, length = 100000)
g <- gfc(s)
s[which.min(g^2)]
## [1] 5.504395
Graphics
We can show the solution:
curve(gfc, f0, f1)
abline(h = 0, v = u$root, lty = 2)
axis(1, u$root, round(u$root, 3))
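As for the packages mentioned above, a minimal sketch (mine, reusing gfc and the constants defined earlier; the starting value 5 for nleqslv is an arbitrary choice):
library(rootSolve)  # for uniroot.all
library(nleqslv)
# uniroot.all() scans the interval for every sign change, which helps
# when a function may have several roots:
uniroot.all(gfc, c(f1, f0))  # one root, near 5.504386
# nleqslv() solves systems F(x) = 0; here a one-equation "system":
nleqslv(5, gfc)$x  # also converges to about 5.504386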

Solving a non-linear equation in R

I need to solve the following function for P and K. I would like to find all (or at least a handful) of the P's and K's that satisfy the equation.
I have tried using R's nleqslv package, but something is going wrong.
library(nleqslv)

MPK <- function(X) {
  Y <- numeric(2)
  Y[1] <- 4.34783*((.3*(X[1]^.23)+.7*(X[2]^.23))^3.34783)*0.069*(X[1]^-.77)
  Y[2] <- 0.3*((1-X[2])/(1-X[1]))^.7
  Y
}

# solve for K, P
xstart <- c(.5, .5)
nleqslv(x = xstart, fn = MPK)
What I get is the following:
$x
[1] 1.214578e+10 1.006411e+00
$fvec
[1] 5.531138e-03 7.636165e-10
$termcd
[1] 5
$message
[1] "Jacobian is too ill-conditioned (1/condition=8.9e-013) (see
allowSingular option)"
$scalex
[1] 1 1
$nfcnt
[1] 142
$njcnt
[1] 7
$iter
[1] 70
How do I specify that I need solutions where Y[1] is equal to Y[2]?
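A sketch of one possible approach (mine, assuming the goal really is Y[1] = Y[2]): nleqslv solves F(x) = 0, so the call above asks for both components to vanish, which is why the iterates wander off and the Jacobian degenerates. The condition Y[1] = Y[2] is one equation in two unknowns, so it defines a curve of solutions; one way to trace a handful of them is to fix X[2] on a grid and root-find over X[1]:
# solve Y1(X) - Y2(X) = 0 for X[1] at each fixed X[2]; the interval
# (0.01, 0.99) keeps X[1]^-0.77 and 1/(1 - X[1]) well-defined, and
# tryCatch() returns NA where the interval contains no sign change
mpk_diff <- function(x1, x2) {
  y1 <- 4.34783*((.3*(x1^.23)+.7*(x2^.23))^3.34783)*0.069*(x1^-.77)
  y2 <- 0.3*((1-x2)/(1-x1))^.7
  y1 - y2
}
x2_grid <- seq(0.1, 0.9, by = 0.1)
x1_root <- sapply(x2_grid, function(x2)
  tryCatch(uniroot(mpk_diff, c(0.01, 0.99), x2 = x2)$root,
           error = function(e) NA_real_))
cbind(x2 = x2_grid, x1 = x1_root)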

non-central chi-square probability and non-centrality parameter

How can I get the value of the non-centrality parameter that gives a probability of exactly 0.9 for different critical values and degrees of freedom?
For example, with the significance level = 0.05 and 1 degree of freedom (critical value = 3.84), the ncp must be equal to 10.50742 in order to get a probability of 0.9:
1 - pchisq(3.841459, 1, 10.50742)
[1] 0.9
Rearrange terms in 1 - pchisq(3.841459, 1, ncp) = 0.9 to get pchisq(3.841459, 1, ncp) = 0.1, and wrap abs around the difference to construct a minimization objective:
optim( 1, function(x) abs(pchisq(3.841459, 1, x) - 0.1) )
#-------
$par
[1] 10.50742
$value
[1] 1.740301e-08
$counts
function gradient
56 NA
$convergence
[1] 0
$message
NULL
To do a sensitivity analysis, you can serially alter the values of the other parameters:
for (crit.val in seq(2.5, 3.5, by = 0.1)) {
  print(optim(1,
              function(x) abs(pchisq(crit.val, 1, x) - 0.1),
              method = "Brent", lower = 0, upper = 20)$par)
}
[1] 8.194852
[1] 8.375145
[1] 8.553901
[1] 8.731204
[1] 8.907135
[1] 9.081764
[1] 9.255156
[1] 9.427372
[1] 9.598467
[1] 9.768491
[1] 9.937492
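Since pchisq is monotone decreasing in the ncp, root finding is an alternative to minimizing abs(); a sketch (mine, not from the answers; the upper bound 100 on the search interval is an assumption):
# solve pchisq(q, df, ncp) = p directly for the ncp
find_ncp <- function(q, df, p = 0.1, upper = 100) {
  uniroot(function(ncp) pchisq(q, df, ncp) - p,
          c(0, upper), tol = 1e-8)$root
}
find_ncp(3.841459, 1)  # 10.50742, as above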
I'm currently developing a package, NCdistributions, which provides a function doing exactly what you want:
> library(NCdistributions)
> find_chisq_ncp(nu = 1, q = 3.841459, p = 0.1)
[1] 10.50742

Use of constraints in R

Simple one here; I once knew this, but it has been lost over the years.
A simple equation, easy to code in R:
f(x,y) = 2x^2 + 4y^2 + 6x - 8y + 15
And I have constraints of x > 1 and y > -1.
I can't for the life of me remember how to write the constraints properly in R, and the book I have is no use.
Cheers for any help.
Looking for the minimum and maximum
Define your function that takes a single vector of arguments:
myfun <- function(xy) {
  x <- xy[1]
  y <- xy[2]
  2*x^2 + 4*y^2 + 6*x - 8*y + 15
}
Supply starting values to optim and specify your lower bounds for x and y:
starting_values <- c(0, 0)
optim(starting_values, myfun, lower=c(1, -1), method='L-BFGS-B')
optim output:
$par
[1] 1 1
$value
[1] 19
$counts
function gradient
2 2
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
