As a test of my understanding of nonlinear optimization using Julia's JuMP modeling language, I am trying to minimize the Rosenbrock function in 10 dimensions with the constraints 0 <= x[i] <= 0.5. First, the Rosenbrock function with variable arguments:
function rosen(x...)
    n = length(x)
    s = 0.0
    for i in 1:n-1
        s += 100*(x[i+1] - x[i]^2)^2 + (x[i] - 1)^2
    end
    return s
end
## rosen (generic function with 1 method)
Define the optimization model with Ipopt as solver,
using JuMP; using Ipopt
m = Model(solver = IpoptSolver())
## Feasibility problem with:
## * 0 linear constraints
## * 0 variables
## Solver is Ipopt
and the variables with bound constraints and starting values x[i] = 0.1:
@variable(m, 0.0 <= x[1:10] <= 0.5)
for i in 1:10 setvalue(x[i], 0.1); end
Now I understand that I have to register the objective function.
JuMP.register(m, :rosen, 10, rosen, autodiff=true)
I am uncertain here whether I can do it like this, or if I need to define and register a mysquare function, as is done in the "User-defined Functions" section of the JuMP manual.
@NLobjective(m, Min, rosen(x[1],x[2],x[3],x[4],x[5],x[6],x[7],x[8],x[9],x[10]))
How can I write this more compactly? An expression like
@NLobjective(m, Min, rosen(x[1:10]))
## ERROR: Incorrect number of arguments for "rosen" in nonlinear expression.
gives an error. What if I would like to solve this problem with 100 variables?
Now we solve this model problem and it returns a solution, and indeed the correct solution, as I know from solving it with the NEOS IPOPT solver.
sol = solve(m);
## ...
## EXIT: Optimal Solution Found.
As I am only interested in the exact value of x[10], I extract it thus:
getvalue(x[10])
## 0.00010008222367154784
Can this be simplified somehow? Think of how easy it is to solve this problem with fminsearch in MATLAB or optim in R.
R> optim(rep(0.1,10), fnRosenbrock, method="L-BFGS-B",
lower=rep(0.0,10), upper=rep(0.5,10),
control=list(factr=1e-12, maxit=5000))
## $par
## [1] 0.50000000 0.26306537 0.08003061 0.01657414 0.01038065
## [6] 0.01021197 0.01020838 0.01020414 0.01000208 0.00000000
Except, of course, it says $par[10] is 0.0, which is not true.
I am working with the R programming language.
I am trying to learn more about optimization algorithms, and as a learning exercise I would like to try to optimize a mathematical function using the (famous) gradient descent algorithm in the R programming language.
For instance, I would like to try to "optimize" (i.e. find the values of x1 and x2 that produce the smallest possible value of y) the following function (this function is called the Rastrigin function, and it is a popular function for testing optimization algorithms due to its irregular and complicated shape):
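f(x1, x2) = 20 + x1^2 + x2^2 - 10*(cos(2*pi*x1) + cos(2*pi*x2))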
I first defined this function in R:
Rastrigin <- function(x)
{
  return(20 + x[1]^2 + x[2]^2 - 10*(cos(2*pi*x[1]) + cos(2*pi*x[2])))
}
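A quick check at the known global minimum:
Rastrigin(c(0, 0))   # returns 0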
Then, I tried to do some research and see if there are any standard and common implementations of gradient descent in R. For example, I found out about the "optim()" function in (base) R, which provides many choices of popular optimization algorithms such as "BFGS", "Simulated Annealing" and "Nelder-Mead". For instance, below I used a variant of the "BFGS" algorithm to optimize the Rastrigin function:
#run BFGS optimization algorithm:
optim(par = c(2,2), Rastrigin, lower = c(-5,-5), upper = c(5,5), method = "L-BFGS-B")
$par
[1] 5.453531e-15 5.453531e-15
$value
[1] 0
$counts
function gradient
7 7
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
Based on the results of the above code, it seems the BFGS algorithm was able to successfully find the minimum of this function, returning values of x1 and x2 that are very close to the true minimizer (at x1 = x2 = 0, f(x1,x2) = 20 + 0 + 0 - 10*(cos(0) + cos(0)) = 20 - 10*2 = 0).
My Question: I tried looking for a standard function in R that would allow you to perform gradient descent optimization, but I could not find anything.
Does anyone know if there are any standard functions in R for gradient descent optimization? Can someone please show me how to do this?
Thanks!
References:
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/optim.html
As indicated in the comments, (I just learned that) "gradient descent" is the same as "steepest descent":
library(pracma)
> steep_descent(c(1, 1), Rastrigin)
$xmin
[1] 0.9949586 0.9949586
$fmin
[1] 1.989918
$niter
[1] 3
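If you want to see the algorithm itself rather than call a package, below is a minimal hand-rolled gradient-descent loop with a central-difference numerical gradient. This is only a sketch: the function name, step size, tolerance and iteration cap are arbitrary choices for illustration, not a standard R routine.
grad_descent <- function(f, x0, step = 0.001, tol = 1e-8, maxit = 10000) {
  # central-difference numerical gradient of f at x
  num_grad <- function(x, h = 1e-6) {
    sapply(seq_along(x), function(i) {
      e <- replace(numeric(length(x)), i, h)
      (f(x + e) - f(x - e)) / (2 * h)
    })
  }
  x <- x0
  for (k in 1:maxit) {
    x_new <- x - step * num_grad(x)   # step against the gradient
    if (sum(abs(x_new - x)) < tol) break
    x <- x_new
  }
  list(xmin = x, fmin = f(x), niter = k)
}
grad_descent(Rastrigin, c(1, 1))
Like steep_descent, starting from c(1, 1) this walks into the nearby local minimum of the Rastrigin function rather than the global minimum at c(0, 0), which is the usual caveat with plain gradient descent.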
Currently doing a homework exercise based on root finding algorithms:
A root finding algorithm can also be used to approximate certain functions. Show mathematically how the evaluation of the square root function f(x) = √x can be expressed as a root finding problem. Use both Newton's method and the bisection method to approximate √x for different values of x. Compare your approximations with the R function sqrt. For which values of x does the approximation work well? Does Newton's method or the bisection method perform better? How do the answers to these questions depend on your starting value?
I have the following code that worked for every function so far:
newton.function <- function(f, fPrime, nmax, eps, x0){
  n <- 1
  x1 <- x0
  result <- c()
  # iterate until nmax steps are reached or |f(x1)| < eps
  while((n <= nmax) && (abs(f(x1)) >= eps)){
    x1 <- (x0 - (f(x0)/fPrime(x0)))   # Newton step
    result <- c(result, x1)
    n <- n + 1
    x0 <- x1
  }
  iterations <- n - 1
  return(c(iterations, result[length(result)]))
}
The square root function and its derivative:
g <- function(x){
  x^(1/2)
}
gPrime <- function(x){
  1/(2*x^(1/2))
}
When I execute the function I either get Error in if (abs(f(x1)) <= eps) break :
missing value where TRUE/FALSE needed, or, if x0 = 0, I get 1 and 0 as the result.
newton.function(f = g, fPrime = gPrime, nmax = 1000, eps = 1E-8, x0 = 0)
My bisection function works equally badly, and I am stuck on answering the question.
From a programming point of view, your code works as expected.
If you start with 0, which is the exact solution, you get 0, fine.
Now look what happens when starting with any other number:
x1 <- (x0 - (f(x0)/fPrime(x0))) = x0 - x0^(1/2)/(1/(2*x0^(1/2)))
= x0 - 2*x0 = -x0
So if you start with a positive number, x1 will be negative after the first iteration, and the next call to f(x1) returns NaN, since you are asking for the square root of a negative number.
The error message tells you that R cannot evaluate abs(f(x1)) >= eps to TRUE or FALSE: abs(f(x1)) returns NaN, and comparing NaN with >= does not yield TRUE or FALSE either. This is exactly what the error message says.
So I advise you to consult a mathematics source to check your algorithm; the R part is fine.
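For the mathematical reformulation the exercise asks for, one standard choice (my suggestion, not part of the question) is: to approximate sqrt(c) for a given c > 0, search for the positive root of f(x) = x^2 - c. Newton's method then gives the iteration x1 = x0 - (x0^2 - c)/(2*x0) = (x0 + c/x0)/2, the classical Babylonian method. With that formulation your newton.function works directly, for example:
c0 <- 2                        # value whose square root we want
f  <- function(x) x^2 - c0     # its positive root is sqrt(c0)
fP <- function(x) 2*x          # derivative of f
newton.function(f = f, fPrime = fP, nmax = 1000, eps = 1e-8, x0 = 1)
The second element of the result is an approximation of sqrt(2) ≈ 1.414214. Note that x0 = 0 is still a bad starting value here, since fPrime(0) = 0.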
Many users have asked how to solve the Heat Equation, u_t = u_xx, with non-zero Dirichlet BCs and with conjugate gradients for the internal linear solver. This is a common simplified PDE problem before moving to more difficult versions of parabolic PDEs. How is this done in DifferentialEquations.jl?
Let's solve this problem in steps. First, let's build the linear operator for the discretized Heat Equation with Dirichlet BCs. A discussion of the discretization can be found on this Wiki page which shows that the central difference method gives a 2nd order discretization of the second derivative by (u[i-1] - 2u[i] + u[i+1])/dx^2. This is the same as multiplying by the Tridiagonal matrix of [1 -2 1]*(1/dx^2), so let's start by building this matrix:
using LinearAlgebra, OrdinaryDiffEq
x = collect(-π : 2π/511 : π)
## Dirichlet 0 BCs
u0 = @. -(x).^2 + π^2
n = length(x)
A = 1/(2π/511)^2 * Tridiagonal(ones(n-1),-2ones(n),ones(n-1))
Notice that we have implicitly simplified the end, since (u[0] - 2u[1] + u[2])/dx^2 = (- 2u[1] + u[2])/dx^2 when the left BC is zero, so the term is dropped from the matmul. We then use this discretization of the derivative to solve the Heat Equation:
function f(du,u,A,t)
mul!(du,A,u)
end
prob = ODEProblem(f,u0,(0.0,10.0),A)
sol = solve(prob,ImplicitEuler())
using Plots
plot(sol[1])
plot!(sol[end])
Now we make the BCs non-zero. Notice that we just have to add back the u[0]/dx^2 that we previously dropped off, so we have:
## Dirichlet non-zero BCs
## Note that the operator is no longer linear
## To handle affine BCs, we add the dropped term
u0 = @. (x - 0.5).^2 + 1/12
n = length(x)
A = 1/(2π/511)^2 * Tridiagonal(ones(n-1),-2ones(n),ones(n-1))
function f(du,u,A,t)
mul!(du,A,u)
# Now do the affine part of the BCs
du[1] += 1/(2π/511)^2 * u0[1]
du[end] += 1/(2π/511)^2 * u0[end]
end
prob = ODEProblem(f,u0,(0.0,10.0),A)
sol = solve(prob,ImplicitEuler())
plot(sol[1])
plot!(sol[end])
Now let's swap out the linear solver. The documentation suggests that you should use LinSolveCG here, which looks like:
sol = solve(prob,ImplicitEuler(linsolve=LinSolveCG()))
There are some advantages to this, since it includes norm handling that helps conditioning. However, the documentation also states that you can build your own linear solver routine. This is done by giving a Val{:init} dispatch that returns the function to use as the linear solver, so we do:
## Create a linear solver for CG
using IterativeSolvers
function linsolve!(::Type{Val{:init}},f,u0;kwargs...)
function _linsolve!(x,A,b,update_matrix=false;kwargs...)
cg!(x,A,b)
end
end
sol = solve(prob,ImplicitEuler(linsolve=linsolve!))
plot(sol[1])
plot!(sol[end])
And there we are, non-zero Dirichlet Heat Equation with a Krylov method (conjugate gradients) for the linear solver, making it a Newton-Krylov method.
I'm trying to tackle a nonlinear optimization problem where the objective function is nonlinear and the constraints are linear. I read a bit about the ROI package in R and decided to use it. However, I am facing a problem while solving the optimization problem.
I am essentially trying to minimize the area under a supply-demand curve. The equations for the supply and demand curves are defined in the code:
Objective function: minimize (Integral of supply curve + integral of demand curve),
subject to constraints q greater than or equal to 34155 (stored in a variable called ICR),
q greater than or equal to 0
and q less than or equal to 40000.
I have tried to run this through the ROI package in RStudio and I keep getting an error telling me that there is no solver to be found.
library(tidyverse)
library(ROI)
library(rSymPy)
library(mosaicCalc)
# Initializing parameters for demand curve
A1 <- 6190735.2198302800
B1 <- -1222739.9618776600
C1 <- 103427.9556133250
D1 <- -4857.0627045073
E1 <- 136.7660814828
# Initializing parameters for Supply Curve
S1 <- -1.152
S2 <- 0.002
S3 <- a-9.037e-09
S4 <- 2.082e-13
S5 <- -1.64e-18
ICR <- 34155
demand_curve_integral <- antiD(A1 + B1*q + C1*(q^2)+ D1*(q^3) + E1*(q^4) ~q)
supply_curve_integral <- antiD(S1 + S2*(q) + S3*(q^2) + S4*(q^3) + S5*(q^4)~q)
# Setting up the objective function
obj_func <- function(q){ (18.081*demand_curve_integral(q))+supply_curve_integral(q)}
# Setting up the optimization Problem
lp <- OP(objective = F_objective(obj_func, n=1L),
constraints=L_constraint(L=matrix(c(1, 1, 1), nrow=3),
dir=c(">=", ">=", "<="),
rhs=c(ICR, 0, 40000, 1)),
maximum = FALSE)
sol <- ROI_solve(lp)
This is the error that I keep getting in RStudio:
Error in ROI_solve(lp) : no solver found for this signature:
objective: F
constraints: L
bounds: V
cones: X
maximum: FALSE
C: TRUE
I: FALSE
B: FALSE
What should I do to rectify this error?
In general you could use ROI.plugin.alabama or ROI.plugin.nloptr for this optimization problem.
But I looked at the problem and this raised several questions.
a is not defined in the code.
You state that q has length 1, but you add 3 linear constraints. The constraints say
q >= 34155, q >= 0, q <= 40000 (or q <= 1?)
I am not entirely sure, since the length of rhs is 4 while L and dir suggest
there are only 3 linear constraints.
What should the constraint look like?
34155 <= q <= 40000?
Then you could specify the constraint as bounds and use ROI.plugin.optimx,
or, since you have a one-dimensional optimization problem, just use optimize
from the stats package: https://stat.ethz.ch/R-manual/R-devel/library/stats/html/optimize.html
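For example, a minimal sketch assuming the constraint really is 34155 <= q <= 40000, reusing obj_func and ICR from the question (after removing the stray a from S3):
sol <- optimize(obj_func, interval = c(ICR, 40000))  # 1-D bounded minimization
sol$minimum    # the minimizing q
sol$objective  # the objective value at that q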
I haven't run an NLP through ROI. But you have to install an ROI solver plug-in and then load that library in your code. Some of the available solver plug-ins are:
library(ROI.plugin.glpk)
library(ROI.plugin.lpsolve)
library(ROI.plugin.neos)
library(ROI.plugin.symphony)
library(ROI.plugin.cplex)
NEOS provides access to NLP solvers, but I don't know how to pass solver parameters via an ROI plug-in function call.
https://neos-guide.org/content/nonlinear-programming
I have a functional like this:
v[y] = \int_0^2 (y'^2 + 23yy' + 12y^2 + 3ye^{2t}) dt
with given start and end conditions y(0)=-1, y(2)=18.
How can I find the extreme values of this functional in R? I see how it could be done in Excel, for example, but I did not find an appropriate solution in R.
Before trying to solve such a task in a numerical setting, it might be better to lean back and think about it for a moment.
This is a problem typically treated in the mathematical discipline of "variational calculus". A necessary condition for a function y(t) to be an extremum of the functional (i.e. the integral) is the so-called Euler-Lagrange equation, see
Calculus of Variations at Wolfram MathWorld.
Applying it to f(t, y, y') as the integrand in your request, I get (please check, I can easily have made a mistake)
y'' - 12*y - 3/2*exp(2*t) = 0
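Spelled out as a check, with L = y'^2 + 23*y*y' + 12*y^2 + 3*y*exp(2*t):
dL/dy = 23*y' + 24*y + 3*exp(2*t) and dL/dy' = 2*y' + 23*y, so d/dt(dL/dy') = 2*y'' + 23*y'.
The Euler-Lagrange equation dL/dy - d/dt(dL/dy') = 0 then reduces to 24*y + 3*exp(2*t) - 2*y'' = 0, which is the equation above after dividing by -2.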
You can now go and find a symbolic solution for this differential equation (with the help of a textbook, or some CAS), or solve it numerically with the help of an R package such as 'deSolve'.
PS: Solving this as an optimization problem based on discretization is possible, but may lead you down a long and stony road. I remember solving the "brachistochrone problem" to a satisfactory accuracy only by using several hundred variables (not in R).
Here is a numerical solution in R. First the functional:
f <- function(y, t = head(seq(0, 2, len = length(y)), -1)){
  len <- length(y) - 1
  dy <- diff(y)*len/2              # approximate derivative on each sub-interval (dt = 2/len)
  y0 <- (head(y, -1) + y[-1])/2    # midpoint values of y
  2*sum(dy^2 + 23*y0*dy + 12*y0^2 + 3*y0*exp(2*t))/len
}
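For example, the value of the discretized functional along the straight line between the boundary values can be computed as:
f(seq(-1, 18, len = 500))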
Now the function that does the actual optimization. The best results I got were using the BFGS optimization method, and parametrizing using dy rather than y:
findMinY <- function(points = 100,           ## number of points of evaluation
                     boundary = c(-1, 18),   ## boundary values
                     y0 = NULL,              ## optional initial value
                     method = "Nelder-Mead", ## optimization method
                     dff = TRUE)             ## if TRUE, optimizes based on dy rather than y
{
  t <- head(seq(0, 2, len = points), -1)
  if(is.null(y0) || length(y0) != points)
    y0 <- seq(boundary[1], boundary[2], len = points)
  if(dff)
    y0 <- diff(y0)
  else
    y0 <- y0[-1]
  y0 <- head(y0, -1)
  ff <- function(z){
    if(dff)
      y <- c(cumsum(c(boundary[1], z)), boundary[2])
    else
      y <- c(boundary[1], z, boundary[2])
    f(y, t)
  }
  res <- optim(y0, ff, control = list(maxit = 1e9), method = method)
  cat("Iterations:", res$counts, "\n")
  ymin <- res$par
  if(dff)
    c(cumsum(c(boundary[1], ymin)), boundary[2])
  else
    c(boundary[1], ymin, boundary[2])
}
With 500 points of evaluation, it only takes a few seconds with BFGS:
> system.time(yy<-findMinY(500,method="BFGS"))
Iterations: 90 18
user system elapsed
2.696 0.000 2.703
The resulting function looks like this:
plot(seq(0,2,len=length(yy)),yy,type='l')
And now a solution that numerically integrates the Euler-Lagrange equation.
As @HansWerner pointed out, this problem boils down to applying the Euler-Lagrange equation to the integrand in the OP's question, and then solving that differential equation, either analytically or numerically. In this case the relevant ODE is
y'' - 12*y = 3/2*exp(2*t)
subject to:
y(0) = -1
y(2) = 18
So this is a boundary value problem, best approached using bvpcol(...) in package bvpSolve.
library(bvpSolve)
F <- function(t, y.in, pars){
  dy  <- y.in[2]
  d2y <- 12*y.in[1] + 1.5*exp(2*t)
  return(list(c(dy, d2y)))
}
init <- c(-1,NA)
end <- c(18,NA)
t <- seq(0, 2, by = 0.01)
sol <- bvpcol(yini = init, yend = end, x = t, func = F)
y <- function(t){ # analytic solution...
  b <- sqrt(12)
  a <- 1.5/(4 - b*b)
  u <- exp(2*b)
  C1 <- ((18*u + 1) - a*(exp(4)*u - 1))/(u*u - 1)
  C2 <- -1 - a - C1
  return(a*exp(2*t) + C1*exp(b*t) + C2*exp(-b*t))
}
par(mfrow=c(1,2))
plot(t,y(t), type="l", xlim=c(0,2),ylim=c(-1,18), col="red", main="Analytical Solution")
plot(sol[,1],sol[,2], type="l", xlim=c(0,2),ylim=c(-1,18), xlab="t", ylab="y(t)", main="Numerical Solution")
It turns out that in this very simple example, there is an analytical solution:
y(t) = a * exp(2*t) + C1 * exp(sqrt(12)*t) + C2 * exp(-sqrt(12)*t)
where a = -3/16 and C1 and C2 are determined to satisfy the boundary conditions. As the plots show, the numerical and analytic solutions agree completely, and also agree with the solution provided by @mrip.
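A quick numerical check of that agreement (using sol and y as defined above):
max(abs(sol[,2] - y(sol[,1])))   # maximum pointwise difference between the two solutions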