Is there a linear program optimizer in R that supports upper and lower bound constraints?
The libraries limSolve and lpSolve do not support bound constraints.
It is not at all clear from the CRAN Optimization Task View which LP optimizers support bound constraints.
Please note that linear programming solvers by default assume their variables are non-negative. If you need different lower bounds, the easiest thing is to perform a linear transformation on the variables, apply lpSolve (or Rglpk), and then transform the solution back. This was explained in a posting to R-help some time ago -- which I am not able to find at the moment.
By the way, Rglpk has a parameter 'bounds' that lets you define upper and lower bounds through vectors, not matrices. That may ease your concern about the constraint matrix growing too fast.
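For illustration, here is a minimal sketch of the bounds argument of Rglpk_solve_LP(); the objective, constraint, and bound values are made up.

library(Rglpk)

## maximize 2*x1 + 4*x2 + 3*x3 subject to one linear constraint,
## with explicit lower/upper bounds on x1 and x3 (made-up numbers)
obj <- c(2, 4, 3)
mat <- matrix(c(3, 2, 1), nrow = 1)
dir <- "<="
rhs <- 60

## bounds are given as index/value vectors, not as extra constraint rows
bounds <- list(lower = list(ind = c(1L, 3L), val = c(-10, 2)),
               upper = list(ind = c(1L, 3L), val = c(4, 100)))

Rglpk_solve_LP(obj, mat, dir, rhs, bounds = bounds, max = TRUE)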
The functions in the Rglpk package handle such bound constraints.
Or consider the General Purpose Continuous Solvers:
Package stats offers several general purpose optimization routines. First, function optim() provides implementations of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, bounded BFGS (L-BFGS-B), conjugate gradient, Nelder-Mead, and simulated annealing (SANN) optimization methods. It utilizes gradients, if provided, for faster convergence. Typically it is used for unconstrained optimization but includes an option for box-constrained optimization.
Additionally, for minimizing a function subject to linear inequality constraints stats contains the routine constrOptim().
nlminb() offers unconstrained and constrained optimization using PORT routines.
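As a minimal sketch of the box-constrained option in optim() (the quadratic objective is made up, purely for illustration):

## minimize a simple quadratic subject to box constraints -1 <= x_i <= 1
f <- function(x) sum((x - c(2, -3))^2)

res <- optim(par = c(0, 0), fn = f, method = "L-BFGS-B",
             lower = c(-1, -1), upper = c(1, 1))
res$par  # the unconstrained optimum c(2, -3) is clipped to the box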
I am trying to optimize the averaged prediction of two logistic regressions in a classification task using a super learner.
My measure of interest is classif.auc
The mlr3 help file tells me (?mlr_learners_avg)
Predictions are averaged using weights (in order of appearance in the
data) which are optimized using nonlinear optimization from the
package "nloptr" for a measure provided in measure (defaults to
classif.acc for LearnerClassifAvg and regr.mse for LearnerRegrAvg).
Learned weights can be obtained from $model. Using non-linear
optimization is implemented in the SuperLearner R package. For a more
detailed analysis the reader is referred to LeDell (2015).
I have two questions regarding this information:
When I look at the source code, it seems that LearnerClassifAvg$new() defaults to "classif.ce". Is that true?
I think I could set it to classif.auc with param_set$values <- list(measure="classif.auc",optimizer="nloptr",log_level="warn")
The help file refers to the SuperLearner package and LeDell (2015). If I understand it correctly, the "AUC-Maximizing Ensembles through Metalearning" approach proposed in that paper is, however, not implemented in mlr3? Or am I missing something? Could this approach be applied in mlr3? In the mlr3 book I found a paragraph about calling an external optimization function; would that be possible for SuperLearner?
As far as I understand it, LeDell (2015) proposes and evaluates a general strategy that optimizes AUC as a black-box function by learning optimal weights. The paper does not really propose a single best strategy or any concrete defaults, so I looked into the defaults of the SuperLearner package's AUC optimization strategy.
Assuming I understood the paper correctly:
The LearnerClassifAvg basically implements what is proposed in LeDell (2015), namely optimizing the weights for any metric using non-linear optimization; LeDell (2015) focuses on the special case of optimizing AUC. As you rightly pointed out, by setting the measure to "classif.auc" you get a meta-learner that optimizes AUC. The default optimization routine differs between mlr3pipelines and the SuperLearner package: we use NLOPT_LN_COBYLA, whereas SuperLearner "... uses the Nelder-Mead method via the optim function to minimize rank loss" (from its documentation).
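A minimal sketch of that configuration, reusing the parameter names from your question (measure, optimizer, log_level; please verify them against your installed version of mlr3pipelines):

library(mlr3pipelines)

## configure the averaging learner to optimize AUC instead of the default measure
lrn_avg <- LearnerClassifAvg$new()
lrn_avg$param_set$values <- list(measure   = "classif.auc",
                                 optimizer = "nloptr",
                                 log_level = "warn")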
So in order to get exactly the same behaviour, you would need to implement a Nelder-Mead bbotk::Optimizer (similar to here) that simply wraps stats::optim with method "Nelder-Mead", and carefully compare settings and stopping criteria. I am fairly confident that NLOPT_LN_COBYLA delivers somewhat comparable results; LeDell (2015) contains a comparison of the different optimizers for further reference.
Thanks for spotting the error in the documentation. I agree that the description is a little unclear, and I will try to improve it!
I need to estimate the parameters of a continuous-discrete nonlinear stochastic dynamic system using Kalman filtering techniques.
I'm going to use Julia's ode45() from the ODE package and implement an Extended Kalman Filter myself to compute the log-likelihood. ODE is written fully in Julia, and ForwardDiff supports differentiation of native Julia functions, including nested differentiation, which I also need because I want to use ForwardDiff inside my EKF implementation.
Will ForwardDiff handle differentiation of a function as involved as the log-likelihood I've described?
ODE.jl is in maintenance mode, so I would recommend using DifferentialEquations.jl instead. In the DiffEq FAQ there is an explanation of using ForwardDiff through the ODE solvers. It works, but as stated in the FAQ I would recommend using sensitivity analysis, since that's a better way of calculating the derivatives (it will take a lot less compilation time). But yes, DiffEqParamEstim.jl is a whole repository for parameter estimation of ODEs/SDEs/DAEs/DDEs, and it uses ForwardDiff.jl through the solvers.
(BTW, what you're looking to do sounds interesting. Feel free to get in touch with us in the JuliaDiffEq channel to talk about the development of parameter estimation tooling!)
I need help with a price model optimization.
I am trying to maximize Sale based on several conditions. I have already done the optimization in Excel using Solver (GRG Nonlinear) but want to do it in R, since Solver has limitations (Microsoft Excel Solver has a limit of 200 decision variables, for both linear and nonlinear problems).
Excel's NLP solver is based on Lasdon's GRG2 solver. I don't think this is available in R. We don't know the exact form of your model or its size (details like whether the constraints are linear or not, and whether the objective is linear, quadratic or otherwise nonlinear), so it is difficult to recommend a particular solver. Here is a list of solvers available in R. As opposed to good LP solvers, which can basically solve whatever you throw at them, NLP solvers are a bit more fragile and may require a bit more hand-holding (things like scaling, the initial point, and bounds come to mind).
I have done it in Excel but need to run a proper simulation in R.
I need to minimize a function F(x) (x is a vector) subject to the constraints that sum(x) = 1, all values in x are in [0, 1], and another function G(x) > G_0.
I have tried optim and constrOptim. Neither of them gives you this option.
The problem you are referring to is (presumably) a non-linear optimization with non-linear constraints. This is one of the most general optimization problems.
The package I have used for these purposes is called nloptr: see here. From my experience, it is both versatile and fast. You can specify both equality and inequality constraints by setting eval_g_eq and eval_g_ineq, respectively. If the Jacobians are known explicitly (can be derived analytically), specify them for faster convergence; otherwise, a numerical approximation is used.
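A minimal sketch, with made-up placeholders for F(), G() and G_0. nloptr expects inequality constraints in the form g(x) <= 0 and equality constraints in the form h(x) = 0; the global ISRES algorithm is used here because it accepts both kinds of constraints without derivatives.

library(nloptr)

## placeholders -- substitute your own functions and threshold
F_obj <- function(x) sum(x^2)
G     <- function(x) prod(1 + x)
G0    <- 1.5

res <- nloptr(
  x0          = rep(1/3, 3),
  eval_f      = F_obj,
  lb          = rep(0, 3),                  # x_i in [0, 1]
  ub          = rep(1, 3),
  eval_g_eq   = function(x) sum(x) - 1,     # sum(x) = 1
  eval_g_ineq = function(x) G0 - G(x),      # G(x) > G_0  <=>  G0 - G(x) <= 0
  opts        = list(algorithm = "NLOPT_GN_ISRES",
                     xtol_rel  = 1e-8,
                     maxeval   = 50000)
)
res$solution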
Use this list as a general reference to optimization problems.
Write out the set of equations using Lagrange multipliers, then solve them using the R command nlm.
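One way to read that suggestion, sketched here for the equality constraint sum(x) = 1 only (handling G(x) > G_0 as well would require the full KKT conditions): form the Lagrangian L(x, lambda) = F(x) + lambda * (sum(x) - 1), write out its stationarity equations, and let nlm() minimize their sum of squares. The objective below is a made-up placeholder.

F_obj <- function(x) sum(x^2)    # placeholder for F(x)
gradF <- function(x) 2 * x

## stationarity of the Lagrangian: dL/dx = 0 and dL/dlambda = 0
residuals <- function(p) {
  x <- p[1:3]; lambda <- p[4]
  c(gradF(x) + lambda,
    sum(x) - 1)
}

sol <- nlm(function(p) sum(residuals(p)^2), p = c(rep(1/3, 3), 0))
sol$estimate[1:3]   # should be close to c(1/3, 1/3, 1/3)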
You can do this with the OpenMx package (currently hosted at the site listed below; a 2.0 release on CRAN is planned for this year).
It is a general purpose package mostly used for structural equation modelling, but it handles nonlinear constraints.
For your case, make an mxModel() with your algebras expressed in mxAlgebra() calls and the constraints in mxConstraint() calls.
When you mxRun() the model, the algebras will be solved within the constraints, if possible.
http://openmx.psyc.virginia.edu/
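An untested sketch of that structure, with a made-up objective standing in for F(x) and only the sum(x) = 1 constraint shown; the relevant function names should be mxMatrix(), mxAlgebra(), mxConstraint(), and mxFitFunctionAlgebra(), but please check them against your OpenMx version.

library(OpenMx)

model <- mxModel("constrained",
  ## free parameters with box bounds 0 <= x_i <= 1
  mxMatrix(type = "Full", nrow = 1, ncol = 3, free = TRUE,
           values = 1/3, lbound = 0, ubound = 1, name = "x"),
  ## made-up objective standing in for F(x)
  mxAlgebra(sum(x * x), name = "obj"),
  ## equality constraint sum(x) = 1
  mxConstraint(sum(x) == 1, name = "simplex"),
  ## minimize the 'obj' algebra
  mxFitFunctionAlgebra("obj"))

fit <- mxRun(model)
fit$output$estimate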
I have a complex objective function I am looking to optimize. The optimization problem takes considerable time to solve. Fortunately, I do have the gradient and the Hessian of the function available.
Is there an optimization package in R that can take all three of these inputs? The function optim() does not accept the Hessian. I have scanned the CRAN Optimization Task View and nothing pops out.
For what it's worth, I am able to perform the optimization in MATLAB using fminunc with the 'GradObj' and 'Hessian' options.
I think the package trust, which does trust region optimization, will do the trick. From the documentation of trust, you see that
This function carries out a minimization or maximization of a function
using a trust region algorithm... (it accepts) an R function that
computes value, gradient, and Hessian of the function to be minimized
or maximized and returns them as a list with components value,
gradient, and hessian.
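A minimal sketch of that interface; the quadratic objective is just a stand-in for your function.

library(trust)

## the objective must return value, gradient and Hessian in one list
objfun <- function(x) {
  value    <- sum((x - c(1, 2))^2)
  gradient <- 2 * (x - c(1, 2))
  hessian  <- diag(2, length(x))
  list(value = value, gradient = gradient, hessian = hessian)
}

res <- trust(objfun, parinit = c(0, 0), rinit = 1, rmax = 5)
res$argument   # should be close to c(1, 2)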
In fact, I think it uses the same algorithm used by fminunc.
By default fminunc chooses the large-scale algorithm if you supply the
gradient in fun and set GradObj to 'on' using optimset. This algorithm
is a subspace trust-region method and is based on the
interior-reflective Newton method described in [2] and [3]. Each
iteration involves the approximate solution of a large linear system
using the method of preconditioned conjugate gradients (PCG). See
Large Scale fminunc Algorithm, Trust-Region Methods for Nonlinear
Minimization and Preconditioned Conjugate Gradient Method.
Both stats::nlm() and stats::nlminb() accept analytical gradients and Hessians. Note, however, that the former (nlm()) currently does not update the analytical gradient correctly, but this is fixed in the current development version of R (R-devel, since svn rev 72555).
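A minimal sketch of both interfaces, using a made-up quadratic: nlm() reads the derivatives from attributes attached to the objective's return value, while nlminb() takes them as separate functions.

f  <- function(x) sum((x - c(1, 2))^2)
gr <- function(x) 2 * (x - c(1, 2))
he <- function(x) diag(2, length(x))

## nlm(): attach "gradient" and "hessian" attributes to the function value
f_nlm <- function(x) {
  out <- f(x)
  attr(out, "gradient") <- gr(x)
  attr(out, "hessian")  <- he(x)
  out
}
nlm(f_nlm, p = c(0, 0))$estimate

## nlminb(): pass gradient and Hessian as separate arguments
nlminb(start = c(0, 0), objective = f, gradient = gr, hessian = he)$par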