I'm trying to set up a "Solver"-style calculation that drives "gfc" to zero by varying (and thereby finding) the variable "fc" in the equation below. The parameters are given.
f0 = 6
f1 = 1
k = 2
ft = 0.3
gfc = ft-((f0-fc)/k)+((f1/k)*ln((fc-f1)/(f0-f1)))
Solving this with Excel's Solver, I found the value fc = 5.504.
You can use uniroot to find where a function equals zero:
f0 = 6
f1 = 1
k = 2
ft = 0.3
# gfc as a function of fc; f0, f1, k and ft are taken from the global environment
gfc <- function(fc) {
ft - ((f0 - fc) / k) + ((f1 / k) * log((fc - f1) / (f0 - f1)))
}
uniroot(gfc, interval = c(f0, f1))  # uniroot uses min/max of the interval, so f0 > f1 is fine
#> $root
#> [1] 5.504386
#>
#> $f.root
#> [1] 6.72753e-09
#>
#> $iter
#> [1] 5
#>
#> $init.it
#> [1] NA
#>
#> $estim.prec
#> [1] 6.103516e-05
I assume you mean that you want to solve for the value of fc at which gfc equals zero, and we assume fc lies between f1 and f0. In that case, using the constants in the question, we have the following base R solutions. (Packages with such functionality include nleqslv and rootSolve; see 5) below.)
1) optimize We can minimize gfc^2:
gfc <- function(fc) ft-((f0-fc)/k)+((f1/k)*log((fc-f1)/(f0-f1)))
optimize(function(x) gfc(x)^2, c(f0, f1))
giving:
$minimum
[1] 5.504383
$objective
[1] 4.777981e-12
2) uniroot Or we can find the root directly using uniroot:
u <- uniroot(gfc, c(f0, f1))
giving:
> u
$root
[1] 5.504386
$f.root
[1] 6.72753e-09
$iter
[1] 5
$init.it
[1] NA
$estim.prec
[1] 6.103516e-05
3) fixed point We can also solve this directly, without any function like optimize or uniroot, by rewriting
gfc(fc) = 0
as the equation below, where the (f0-fc)/k term of gfc has been moved to the LHS, fc isolated in that term, and everything else put on the RHS:
fc = f0 - k*(ft + ((f1/k)*log((fc-f1)/(f0-f1))))
Writing this as:
fc = f(fc)
we just iterate f:
f <- function(fc) f0 - k*(ft + ((f1/k)*log((fc-f1)/(f0-f1))))
fc <- (f0 + f1)/2 # starting value
for(i in 1:10) fc <- f(fc)
fc
## [1] 5.504386
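If you prefer not to hard-code the number of iterations, a small variant (the tolerance 1e-10 is an arbitrary choice) iterates until successive values agree:
fc <- (f0 + f1)/2                      # starting value
repeat {
  fc_new <- f(fc)
  if (abs(fc_new - fc) < 1e-10) break  # stop once the iterates agree
  fc <- fc_new
}
fc
## [1] 5.504386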
4) brute force Another approach is to evaluate gfc at many points and pick the one for which gfc^2 is smallest. The finer you subdivide the interval, the more accurate the answer.
s <- seq(f0, f1, length = 100000)
g <- gfc(s)
s[which.min(g^2)]
## [1] 5.504395
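5) external packages As noted at the top, nleqslv and rootSolve offer similar functionality. A minimal hedged sketch of each (the start value 3 and the small offset above f1 are arbitrary choices that keep the log defined):
library(nleqslv)
nleqslv(3, gfc)$x                    # Newton-type solver from a single start value
## [1] 5.504386 (approximately)
library(rootSolve)
uniroot.all(gfc, c(f1 + 1e-8, f0))   # scans the interval for sign changes
## [1] 5.504386 (approximately)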
Graphics
We can show the solution:
curve(gfc, f0, f1)
abline(h = 0, v = u$root, lty = 2)
axis(1, u$root, round(u$root, 3))
Related
I am trying to find the optimal quantity, for which I have to equate the derivative of the total revenue equation with the marginal cost. I don't know how to solve for x here. D() works on an expression-type variable and returns another one, while solve() takes only numeric coefficients. I don't want to input the coefficients manually.
TR = expression(Quantity * (40- 3*Quantity))
MR = D(TR,"Quantity")
Optimal_Quantity = solve(MR-MC) to get Q
The last line is pseudocode for what I want to achieve. Please help. I can enter the values manually, but I wish to make it universal. MC is a constant numeric value on the RHS.
I am not completely sure I understand, but if you want to optimize a function, find that function's derivative and then find the derivative's zeros.
TR <- expression(Quantity * (40- 3*Quantity))
MR <- D(TR,"Quantity")
class(MR)
#> [1] "call"
dTR <- function(x, const) {
  e <- new.env()
  e$Quantity <- x                 # bind x to the name used in the expression
  eval(MR, envir = e) - const     # evaluate MR at x and subtract marginal cost
}
MC <- 0
u <- uniroot(dTR, interval = c(-10, 10), const = MC)
u
#> $root
#> [1] 6.666667
#>
#> $f.root
#> [1] 0
#>
#> $iter
#> [1] 1
#>
#> $init.it
#> [1] NA
#>
#> $estim.prec
#> [1] 16.66667
curve(dTR(x, const = MC), from = -10, to = 10)
abline(h = 0)
points(u$root, u$f.root, pch = 16, col = "red")
Created on 2022-11-19 with reprex v2.0.2
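As a side note, eval() also accepts a named list as its envir argument, so the same computation can be written without creating an environment by hand. A sketch (dTR2 is a hypothetical name, not part of the answer above):
dTR2 <- function(x, const) eval(MR, list(Quantity = x)) - const
dTR2(u$root, const = MC)   # essentially zero at the root found above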
Edit
To make the function dTR more general purpose, I have included an argument FUN. Above, it would only evaluate MR; it can now evaluate any expression passed to it.
The code below plots dTR over a large range of values, from -10 to 100, hoping to catch negative and positive end points. Then, after drawing the horizontal axis, it boxes the root between 20 and 30.
dTR <- function(x, FUN, const) {
  e <- new.env()
  e$Quantity <- x
  eval(FUN, envir = e) - const    # FUN is an expression such as marginal.revenue
}
total.revenue <- expression(Quantity * (10- Quantity/5))
marginal.revenue <- D(total.revenue, "Quantity")
marginal.cost <- 1
curve(dTR(x, FUN = marginal.revenue, const = marginal.cost), from = -10, to = 100)
abline(h = 0)
abline(v = c(20, 30), lty = "dashed")
u <- uniroot(dTR, interval = c(20, 30), FUN = marginal.revenue, const = marginal.cost)
u
#> $root
#> [1] 22.5
#>
#> $f.root
#> [1] 0
#>
#> $iter
#> [1] 1
#>
#> $init.it
#> [1] NA
#>
#> $estim.prec
#> [1] 7.5
Created on 2022-11-22 with reprex v2.0.2
My equation is: function(x) {10*x - x^3 + 6*x^2 - 15*x}
The value of x that maximizes this function is 3.528; see:
10*3.528 - 3.528^3 + 6*3.528^2 - 15*3.528
But why, when I try to find the value of x that maximizes the function, does optim give me the wrong value?
> optim(c(0), function(x){10*x-x^3+6*x^2-15*x}, control=list(fnscale=-1))$par
-180925139433306515188282888004820028006042404082666062660624248000000026088
Try with optimize():
optimize(function(x) 10*x - x^3 + 6*x^2 - 15*x, interval = c(-2000, 2000), maximum = TRUE)
The maximum and the objective value are also wrong:
$maximum
[1] -2000
$objective
[1] 8024008559
It seems there is no global maximum: because of the leading -x^3 term, the function grows without bound as x decreases, so only a local maximum exists (near x = 3.53).
Here are some workarounds:
With optimize
optimize(
  function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
  c(0, .Machine$double.eps**-1),
  maximum = TRUE
)
giving
$maximum
[1] 3.527528
$objective
[1] 13.12845
With optim
optim(
  0,
  fn = function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
  method = "Brent",
  lower = 0,
  upper = .Machine$double.eps**-1,
  control = list(fnscale = -1)
)
giving
$par
[1] 3.527525
$value
[1] 13.12845
$counts
function gradient
NA NA
$convergence
[1] 0
$message
NULL
If you don't mind using external packages, you can try fminbnd from package pracma, e.g.,
pracma::fminbnd(
  function(x) 10 * x - x^3 + 6 * x^2 - 15 * x,
  0,
  .Machine$double.eps**-1,
  maximum = TRUE
)
which gives
$xmin
[1] 3.527525
$fmin
[1] -13.12845
$niter
[1] 103
$estim.prec
[1] 3.527525e-07
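As a sanity check, the stationary points are available in closed form: the derivative of 10*x - x^3 + 6*x^2 - 15*x is -3*x^2 + 12*x - 5, and the quadratic formula gives its roots:
(12 + c(-1, 1) * sqrt(12^2 - 4*3*5)) / (2*3)   # roots of 3*x^2 - 12*x + 5 = 0
## [1] 0.4724748 3.5275252
The larger root matches the local maximum found above.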
# attempt
optim(c(0.1, 0.1),
      function(x) x[1]^2*0.05126875 + 2*(x[1]*x[2])*(-0.00809375) + x[2]^2*0.03376875)
How can I create a function that generates values from 0.01 to 1 for x[1] and x[2] and returns the lowest result, subject to the condition x[1] + x[2] = 1?
When you have a constraint and still want to use optim, you can reformulate your constrained optimization problem as an unconstrained one by substituting x[2] = 1 - x[1], e.g.,
optim(0.1,
      function(x) x^2*0.05126875 + 2*(x*(1-x))*(-0.00809375) + (1-x)^2*0.03376875,
      lower = 0,
      upper = 1,
      method = "L-BFGS-B")
which gives
$par
[1] 0.4135589
$value
[1] 0.01645614
$counts
function gradient
4 4
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
For your case, the solution is
x1 = 0.4135589
x2 = 1-x1
x = c(x1,x2)
> x
[1] 0.4135589 0.5864411
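Alternatively, to stay close to the grid idea in the question, a brute-force sketch evaluates the objective on a grid of x[1] values (step 0.01, as in the question) with x[2] = 1 - x[1]; it should land near the continuous optimum 0.4135589:
x1 <- seq(0.01, 1, by = 0.01)
x2 <- 1 - x1
obj <- x1^2*0.05126875 + 2*(x1*x2)*(-0.00809375) + x2^2*0.03376875
c(x1 = x1[which.min(obj)], x2 = x2[which.min(obj)], value = min(obj))
## x1 = 0.41, x2 = 0.59 (approximately)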
I need to numerically integrate a multiple integral with functional (nested) limits; the formula appeared as an image in the original post.
I tried cubature and pracma, but they don't seem to support functional integration limits. I found an attempt that uses cubature:
library(cubature)
integrand <- function(arg) {
  x <- arg[1]
  y <- arg[2]
  z <- arg[3]
  w <- arg[4]
  v <- arg[5]
  # the logical factor is an indicator implementing the constraint on the sum
  ff <- dnorm(x, 10, 2)*dnorm(y, 10, 2)*dnorm(z, 10, 2)*dnorm(w, 10, 2)*
    dnorm(v, 10, 2)*(x+y+z+w+v <= 52)
  return(ff)
}
R <- cuhre(f = integrand,
           lowerLimit = c(0, 0, 0, 0, 0),
           upperLimit = c(20, 20, 20, 20, 20),
           relTol = 1e-5, absTol = 1e-5)
But the author of that attempt doesn't guarantee that it is correct.
Is there a way to numerically integrate multiple integrals with functional limits in R?
The domain of integration is the canonical simplex scaled by the factor 42. To evaluate an integral on a simplex, use the SimplicialCubature package:
integrand <- function(arg) {
  x <- arg[1]
  y <- arg[2]
  z <- arg[3]
  w <- arg[4]
  v <- arg[5]
  dnorm(x, 10, 2) *
    dnorm(y, 10, 2) *
    dnorm(z, 10, 2) *
    dnorm(w, 10, 2) *
    dnorm(v, 10, 2)
}
library(SimplicialCubature)
Simplex <- 42 * CanonicalSimplex(5)
Here is the command to run:
adaptIntegrateSimplex(integrand, S = Simplex)
# $integral
# [1] 0.03252553
#
# $estAbsError
# [1] 0.3248119
#
# $functionEvaluations
# [1] 9792
#
# $returnCode
# [1] 1
#
# $message
# [1] "error: maxEvals exceeded - too many function evaluations"
The algorithm reached the maximal number of function evaluations: the estimated value of the integral is 0.03252553, but the estimated absolute error is 0.3248119, which is huge relative to the estimate.
We can increase the maximum number of function evaluations allowed. Taking 1e6, the computation is a bit slow and we get:
adaptIntegrateSimplex(integrand, S = Simplex, maxEvals = 1e6)
# $integral
# [1] 0.03682535
#
# $estAbsError
# [1] 0.001004083
#
# $functionEvaluations
# [1] 999811
#
# $returnCode
# [1] 1
#
# $message
# [1] "error: maxEvals exceeded - too many function evaluations"
The estimated error has decreased to 0.001004083, which is much better.
Note that we can also approximate this integral by simulation, because the integral is the measure of the simplex under a multivariate normal distribution:
library(mvtnorm)
Sigma <- 2^2 * diag(5)
Mean <- rep(10, 5)
set.seed(666)
sims <- rmvnorm(1e6, mean = Mean, sigma = Sigma)
f <- function(X) {
  # test whether all coordinates are positive and every partial sum stays below 42
  all(X > 0 & cumsum(X) < 42)
}
mean(apply(sims, 1, f))
# 0.037083
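Since each simulated point contributes a 0/1 indicator, the Monte Carlo standard error can also be estimated, which confirms that the simulation and the cubature result agree within the noise:
p <- mean(apply(sims, 1, f))
se <- sqrt(p * (1 - p) / nrow(sims))   # binomial standard error of the estimate
c(estimate = p, std.error = se)
## estimate about 0.037, std.error about 2e-4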
I've been trying to code this problem:
https://sg.answers.yahoo.com/question/index?qid=20110127015240AA9RjyZ
I believe there is an R function somewhere to solve for the root values of the following equations:
(x+3)^2 + (y-50)^2 = 1681
(x-11)^2 + (y+2)^2 = 169
(x-13)^2 + (y-34)^2 = 625
I tried using the 'solve' function, but it's only for linear equations(?)
I also tried 'nls':
dt = data.frame(a=c(-3,11,13), b = c(50, -2, 34), c = c(1681,169,625))
nls(c~(x-a)^2 + (y-b)^2, data = dt, start = list(x = 1, y = 1))
but I am getting an error all the time (and yes, I already tried increasing the maximum number of iterations):
Error in nls(c ~ (x - a)^2 + (y - b)^2, data = dt, start = list(x = 1, :
number of iterations exceeded maximum of 50
How do you solve for the root values in R?
nls does not work with zero-residual data -- see ?nls, where this is mentioned. nlxb in the nlmrt package takes mostly the same input arguments as nls and does support zero-residual data. Using dt from the question, just replace nls with nlxb:
library(nlmrt)
nlxb(c~(x-a)^2 + (y-b)^2, data = dt, start = list(x = 1, y = 1))
giving:
nlmrt class object: x
residual sumsquares = 2.6535e-20 on 3 observations
after 5 Jacobian and 6 function evaluations
name coeff SE tstat pval gradient JSingval
x 6 7.21e-12 8.322e+11 7.649e-13 -1.594e-09 96.93
y 10 1.864e-12 5.366e+12 1.186e-13 -1.05e-08 22.45
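The fitted values x = 6, y = 10 can be checked directly against the three circle equations:
x <- 6; y <- 10
c((x+3)^2 + (y-50)^2, (x-11)^2 + (y+2)^2, (x-13)^2 + (y-34)^2)
## [1] 1681  169  625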
You cannot always solve three equations for two variables. You can solve two equations for two variables and then test whether the solution satisfies the third equation.
Use package nleqslv as follows.
library(nleqslv)
f1 <- function(z) {
  f <- numeric(2)
  x <- z[1]
  y <- z[2]
  f[1] <- (x+3)^2 + (y-50)^2 - 1681
  f[2] <- (x-11)^2 + (y+2)^2 - 169
  f
}
f2 <- function(z) {
  x <- z[1]
  y <- z[2]
  (x-13)^2 + (y-34)^2 - 625
}
zstart <- c(0,0)
z1 <- nleqslv(zstart,f1)
z1
f2(z1$x)
which gives you the following output:
>z1
$x
[1] 6 10
$fvec
[1] 7.779818e-09 7.779505e-09
$termcd
[1] 1
$message
[1] "Function criterion near zero"
$scalex
[1] 1 1
$nfcnt
[1] 9
$njcnt
[1] 1
$iter
[1] 9
>f2(z1$x)
[1] 5.919242e-08
So a solution has been found; it is given by the vector z1$x. Inserting z1$x into the function f2 also gives almost zero, so the solution satisfies the third equation as well.
You could also try package BB.
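A minimal sketch with BB, reusing the f1 function from above (BBsolve is a derivative-free spectral solver; the zero start vector is the same one used for nleqslv):
library(BB)
BBsolve(c(0, 0), f1, quiet = TRUE)$par
## [1]  6 10 (approximately)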
Just go through the rootSolve package vignette and you will be done:
https://cran.r-project.org/web/packages/rootSolve/vignettes/rootSolve.pdf
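For example, a minimal sketch with rootSolve::multiroot, again reusing f1 from the previous answer:
library(rootSolve)
multiroot(f1, start = c(0, 0))$root
## [1]  6 10 (approximately)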