I have a complex objective function that I am looking to optimize, and the optimization takes a considerable amount of time. Fortunately, I do have the gradient and the Hessian of the function available.
Is there an optimization package in R that can take all three of these inputs? The function optim() does not accept the Hessian. I have scanned the CRAN Optimization task view and nothing pops out.
For what it's worth, I am able to perform the optimization in MATLAB using fminunc with the 'GradObj' and 'Hessian' arguments.
I think the package trust, which does trust region optimization, will do the trick. From the documentation of trust, you see that
This function carries out a minimization or maximization of a function
using a trust region algorithm... (it accepts) an R function that
computes value, gradient, and Hessian of the function to be minimized
or maximized and returns them as a list with components value,
gradient, and hessian.
In fact, I think it uses the same algorithm used by fminunc.
By default fminunc chooses the large-scale algorithm if you supply the
gradient in fun and set GradObj to 'on' using optimset. This algorithm
is a subspace trust-region method and is based on the
interior-reflective Newton method described in [2] and [3]. Each
iteration involves the approximate solution of a large linear system
using the method of preconditioned conjugate gradients (PCG). See
Large Scale fminunc Algorithm, Trust-Region Methods for Nonlinear
Minimization and Preconditioned Conjugate Gradient Method.
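For reference, a minimal sketch of the calling convention (the Rosenbrock function here is only a stand-in for your own, presumably more expensive, objective; parinit is the starting point and rinit/rmax are the initial and maximal trust region radii):

library(trust)

# The objective must return a list with components value, gradient and hessian.
objfun <- function(x) {
  f <- 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
  g <- c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
          200 * (x[2] - x[1]^2))
  h <- matrix(c(-400 * x[2] + 1200 * x[1]^2 + 2, -400 * x[1],
                -400 * x[1],                      200),
              nrow = 2, byrow = TRUE)
  list(value = f, gradient = g, hessian = h)
}

res <- trust(objfun, parinit = c(-1.2, 1), rinit = 1, rmax = 5)
res$argument   # should be close to c(1, 1)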
Both stats::nlm() and stats::nlminb() take analytical gradients and hessians. Note, however, that the former (nlm()) currently does not update the analytical gradient correctly but that this is fixed in the current development version of R (since R-devel, svn rev 72555).
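The two functions take the derivatives in different forms; a minimal sketch, with a toy quadratic standing in for the real objective:

# Toy objective: f(x) = sum((x - c(1, 2))^2), minimum at (1, 2)
f  <- function(x) sum((x - c(1, 2))^2)
gr <- function(x) 2 * (x - c(1, 2))
he <- function(x) diag(2, 2)

# nlm(): attach the gradient and Hessian as attributes of the returned value
fgh <- function(x) {
  out <- f(x)
  attr(out, "gradient") <- gr(x)
  attr(out, "hessian")  <- he(x)
  out
}
nlm(fgh, p = c(0, 0))

# nlminb(): pass them as separate functions
nlminb(start = c(0, 0), objective = f, gradient = gr, hessian = he)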
Related
This is a more general question, somewhat independent of data, so I do not have an MWE.
I often have functions fn(.) that implement algorithms that are not differentiable but that I want to optimize. I usually use optim(.) with its standard method, which works fine for me in terms of speed and results.
However, I now have a problem that requires me to set bounds on one of the several parameters of fn. From what I understand, optim(method="L-BFGS-B",...) allows me to set limits on parameters but also requires a gradient. Because fn(.) is not a mathematical function but an algorithm, I suspect it does not have a gradient that I could derive through differentiation. This leads me to ask whether there is a way of performing constrained optimization in R that does not require me to supply a gradient.
I have looked at some sources, e.g. John C. Nash's texts on this topic, but as far as I understand them, they mostly concern differentiable functions for which gradients can be supplied.
Summarizing the comments so far (which are all things I would have said myself):
you can use method="L-BFGS-B" without providing explicit gradients (the gr argument is optional); in that case, R will compute finite-difference approximations to the gradient (@G.Grothendieck). This is the simplest solution, because it works "out of the box": you can try it and see if it works for you. However:
L-BFGS-B is probably the finickiest of the methods provided by optim() (e.g. it can't handle the case where a trial set of parameters evaluates to NA)
finite-difference approximations are relatively slow and numerically unstable (but, fine for simple problems)
for simple cases you can fit the parameter on a transformed scale, e.g. if b is a parameter that must be positive, you can use log_b as a parameter (and transform it via b <- exp(log_b) in your objective function) (@SamMason; see the sketch after this list). But:
there isn't always a simple transformation that will achieve the constraint you want
if the optimal solution is on the boundary, transforming will cause problems
there are a variety of derivative-free optimizers with constraints (typically "box constraints", i.e. independent lower and/or upper bounds on one or more parameters) (@ErwinKalvelagen): dfoptim has a few, I have used the nloptr package (and its BOBYQA optimizer) extensively, and minqa has some as well. This is the solution I would recommend.
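To make the last two points concrete, a minimal sketch with a toy non-smooth objective standing in for fn(.) (dfoptim::nmkb and nloptr's bobyqa() wrapper are only two of the available options):

# Toy objective standing in for a non-differentiable fn(.)
fn <- function(p) sum(abs(p - c(0.3, 2)))

# (a) Reparameterize so the second parameter stays positive
fn_trans <- function(p) fn(c(p[1], exp(p[2])))   # optimize over log_b
opt <- optim(c(0, 0), fn_trans)                  # default Nelder-Mead, unconstrained
c(opt$par[1], exp(opt$par[2]))                   # back on the original scale

# (b) Derivative-free optimizers with box constraints
library(dfoptim)
nmkb(par = c(0.5, 1), fn = fn, lower = c(-1, 0), upper = c(1, 5))

library(nloptr)
bobyqa(x0 = c(0.5, 1), fn = fn, lower = c(-1, 0), upper = c(1, 5))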
I have an objective function (a log likelihood), which I want to maximize using R for a vector of inputs (parameters). I have the gradient of the function (i.e. the score vector), and I also happen to know the Hessian of the function.
In Matlab, I can maximize the function easily, and the performance is drastically improved by supplying both the gradient and the Hessian to the optimizer using optimset('GradObj', 'on') and optimset('Hessian', 'on'). In particular, the latter makes a huge difference in this case.
However, I want to do this in R. In optim, I can supply the gradient, but as far as I can tell I can only request the Hessian.
My question: is there a straightforward way of including the Hessian in optimization problems in R, as there is in Matlab?
I'm porting some Matlab code that uses rcond() to test for singularity, as also recommended here (for Matlab singularity testing).
I see that there is a cond() function in Julia (as also in Matlab), but rcond() doesn't appear to be available by default:
ERROR: rcond not defined
I'd assume that rcond(), like the Matlab version, is more efficient than 1/cond(). Is there such a function in Julia, perhaps in an add-on module?
Julia calculates the condition number as the ratio of the maximum to the minimum of the singular values (got to love open source, no more MATLAB black boxes!)
Julia doesn't have an rcond function in Base, and I'm unaware of one in any package. If it did, it'd just be the ratio of the minimum to the maximum instead. I'm not sure why it's efficient in MATLAB, but it's quite possible that whatever the reason is, it doesn't carry through to Julia.
Matlab's rcond is an optimization based upon the fact that it's an estimate of the condition number for square matrices. In my testing, and given that its help mentions LAPACK's 1-norm estimator, it appears as though it uses LAPACK's dgecon.f. In fact, this is exactly what Julia does when you ask for the condition number of a square matrix with the 1- or Inf-norm.
So you can simply define
rcond(A::StridedMatrix) = 1/cond(A,1)
You can save Julia from twice-inverting LAPACK's results by manually combining cond(::StridedMatrix) and cond(::LU), but the savings here will almost certainly be immeasurable. Where there is a measurable saving, however, is that you can take norm(A) directly instead of reconstructing a matrix similar to A through its LU factorization.
rcond(A::StridedMatrix) = LAPACK.gecon!('1', lufact(A).factors, norm(A, 1))
In my tests, this behaves identically to Matlab's rcond (2014b), and provides a decent speedup.
I have done it in Excel but need to run a proper simulation in R.
I need to minimize a function F(x) (x is a vector) subject to the constraints that sum(x) = 1, all values in x are in [0, 1], and another function G(x) > G_0.
I have tried optim and constrOptim. Neither of them gives this option.
The problem you are referring to is (presumably) a non-linear optimization with non-linear constraints. This is one of the most general optimization problems.
The package I have used for these purposes is called nloptr: see here. In my experience, it is both versatile and fast. You can specify both equality and inequality constraints by setting eval_g_eq and eval_g_ineq, respectively. If the Jacobians are known explicitly (i.e. can be derived analytically), supply them for faster convergence; otherwise, a numerical approximation is used.
Use this list as a general reference to optimization problems.
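A minimal sketch for the problem as stated (F() and G() below are placeholders for your own functions; NLOPT_GN_ISRES is one of the few derivative-free algorithms in nloptr that accepts both equality and inequality constraints, and it needs finite box bounds):

library(nloptr)

F  <- function(x) sum((x - c(0.2, 0.3, 0.5))^2)   # placeholder objective
G  <- function(x) x[1] * x[2] * x[3]              # placeholder constraint function
G0 <- 0.01

res <- nloptr(
  x0          = rep(1/3, 3),
  eval_f      = F,
  lb          = rep(0, 3),
  ub          = rep(1, 3),
  eval_g_eq   = function(x) sum(x) - 1,     # equality constraints as h(x) = 0
  eval_g_ineq = function(x) G0 - G(x),      # G(x) > G0 rewritten as g(x) <= 0
  opts = list(algorithm = "NLOPT_GN_ISRES",
              xtol_rel  = 1e-8,
              maxeval   = 50000)
)
res$solution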
Write out the set of equations using Lagrange multipliers, then solve them using the R command nlm.
You can do this with the OpenMx package (currently hosted at the site listed below; a 2.0 release on CRAN is planned this year).
It is a general-purpose package mostly used for structural equation modelling, but it handles nonlinear constraints.
For your case, make an mxModel() with your algebras expressed in mxAlgebra() calls and the constraints in mxConstraint() calls.
When you mxRun() the model, the algebras will be solved within the constraints, if possible.
http://openmx.psyc.virginia.edu/
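A rough sketch of that recipe, assuming OpenMx 2.0 and its mxFitFunctionAlgebra() (all model, matrix and algebra names here are arbitrary), minimizing a toy quadratic subject to an equality constraint:

library(OpenMx)

model <- mxModel(
  "constrainedFit",
  # free parameters live in an MxMatrix
  mxMatrix(type = "Full", nrow = 1, ncol = 2, free = TRUE,
           values = c(0.5, 0.5), labels = c("x1", "x2"), name = "X"),
  # the objective, written as an algebra evaluating to a 1x1 matrix
  mxAlgebra((X[1, 1] - 1)^2 + (X[1, 2] - 2)^2, name = "obj"),
  # the constraint: the parameters must sum to one
  mxAlgebra(X[1, 1] + X[1, 2], name = "sumX"),
  mxConstraint(sumX == 1, name = "sumsToOne"),
  mxFitFunctionAlgebra("obj")
)

fit <- mxRun(model)
omxGetParameters(fit)   # should be close to x1 = 0, x2 = 1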
Is there a linear program optimizer in R that supports upper and lower bound constraints?
The libraries limSolve and lpSolve do not support bound constraints.
It is not at all clear from the R CRAN Optimization Task View page which LP optimizers support bound constraints.
Please note that linear programming solvers typically assume their variables are nonnegative. If you need different lower bounds, the easiest thing is to perform a linear transformation of the variables, apply lpSolve (or Rglpk), and transform the variables back. This has been explained in a posting to R-help some time ago -- which I am not able to find at the moment.
By the way, Rglpk has a parameter 'bounds' that allows you to define upper and lower bounds through vectors rather than additional constraint rows. That may ease your concern about the constraint matrix growing too fast.
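For example (a small sketch; the bounds list only needs the indices and values of the variables whose bounds differ from the default 0 <= x_j < Inf):

library(Rglpk)

# maximize 2 x1 + 4 x2 + 3 x3 subject to three <= constraints
obj <- c(2, 4, 3)
mat <- matrix(c(3, 2, 1,
                4, 1, 3,
                2, 2, 2), nrow = 3, byrow = TRUE)
dir <- c("<=", "<=", "<=")
rhs <- c(60, 40, 80)

# let x1 range over [-10, 10] and give x2 an upper bound of 5
bounds <- list(lower = list(ind = 1L,        val = -10),
               upper = list(ind = c(1L, 2L), val = c(10, 5)))

Rglpk_solve_LP(obj, mat, dir, rhs, bounds = bounds, max = TRUE)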
The functions in the Rglpk package support such bound constraints.
Or consider the General Purpose Continuous Solvers section of the task view:
Package stats offers several general purpose optimization routines. First, function optim() provides an implementation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, bounded BFGS, conjugate gradient, Nelder-Mead, and simulated annealing (SANN) optimization methods. It utilizes gradients, if provided, for faster convergence. Typically it is used for unconstrained optimization but includes an option for box-constrained optimization.
Additionally, for minimizing a function subject to linear inequality constraints stats contains the routine constrOptim().
nlminb() offers unconstrained and constrained optimization using PORT routines.
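To make these concrete, a small sketch of the same box constraint 0 <= x <= 1 expressed three ways (constrOptim() wants linear inequalities in the form ui %*% x - ci >= 0):

f <- function(x) sum((x - c(0.2, 1.5))^2)   # toy objective

# optim() with box constraints
optim(c(0.5, 0.5), f, method = "L-BFGS-B", lower = 0, upper = 1)

# constrOptim(): box constraints as linear inequalities
ui <- rbind(diag(2), -diag(2))   #  x >= lower  and  -x >= -upper
ci <- c(0, 0, -1, -1)
constrOptim(c(0.5, 0.5), f, grad = NULL, ui = ui, ci = ci)

# nlminb() with PORT routines
nlminb(c(0.5, 0.5), f, lower = 0, upper = 1)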