Optimization in R with arbitrary constraints

I have done it in Excel but need to run a proper simulation in R.
I need to minimize a function F(x) (x is a vector) subject to the constraints that sum(x) = 1, every component of x lies in [0, 1], and another function G(x) > G_0.
I have tried optim and constrOptim; neither supports this combination of constraints.

The problem you are referring to is (presumably) a non-linear optimization with non-linear constraints. This is one of the most general classes of optimization problem.
The package I have used for these purposes is nloptr. From my experience, it is both versatile and fast. You can specify both equality and inequality constraints by setting eval_g_eq and eval_g_ineq, respectively. If the Jacobians are known explicitly (can be derived analytically), specify them for faster convergence; otherwise, a numerical approximation is used.
Use the CRAN Optimization Task View as a general reference to optimization problems.
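A minimal sketch of how those arguments fit together. The objective F, the constraint function G, and the threshold G0 below are placeholders, and NLOPT_GN_ISRES is just one algorithm that accepts nonlinear equality constraints without derivatives:

library(nloptr)

# Placeholder objective and constraint; substitute your own F and G
F <- function(x) sum((x - 0.3)^2)
G <- function(x) sum(sqrt(x))
G0 <- 1.5
n <- 3

res <- nloptr(
  x0          = rep(1/n, n),
  eval_f      = F,
  lb          = rep(0, n),                # each x_i >= 0
  ub          = rep(1, n),                # each x_i <= 1
  eval_g_eq   = function(x) sum(x) - 1,   # sum(x) = 1
  eval_g_ineq = function(x) G0 - G(x),    # G(x) >= G0, written as G0 - G(x) <= 0
  opts = list(algorithm = "NLOPT_GN_ISRES", xtol_rel = 1e-8, maxeval = 1e5)
)
res$solution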

Write out the stationarity equations using Lagrange multipliers, then solve them with the R command nlm.
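A minimal sketch of that idea, handling only the equality constraint sum(x) = 1 (the objective gradient below is a placeholder; the box and G(x) constraints would still need separate treatment):

# Stationarity: dF/dx_i - lambda = 0 for all i, plus the constraint sum(x) = 1.
# Minimize the squared residuals of this system with nlm().
F_grad <- function(x) 2 * (x - 0.3)   # gradient of the placeholder F(x) = sum((x - 0.3)^2)

lagrange_residual <- function(par) {
  n <- length(par) - 1
  x <- par[1:n]
  lambda <- par[n + 1]
  sum((F_grad(x) - lambda)^2) + (sum(x) - 1)^2
}

fit <- nlm(lagrange_residual, p = c(rep(1/3, 3), 0))
fit$estimate   # first n entries are x, the last is lambda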

You can do this in the OpenMx package (currently hosted at the site listed below; a 2.0 release on CRAN is planned for this year).
It is a general-purpose package mostly used for structural equation modelling, but it handles nonlinear constraints.
For your case, build an mxModel() with your expressions written as mxAlgebra()s and the constraints as mxConstraint()s.
When you mxRun() the model, the algebras will be optimized subject to the constraints, if possible.
http://openmx.psyc.virginia.edu/
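A rough sketch of that structure, assuming the OpenMx 2.0 fit-function API (mxFitFunctionAlgebra) and a placeholder objective:

library(OpenMx)

model <- mxModel("constrained",
  # free parameter vector x with box constraints [0, 1]
  mxMatrix("Full", nrow = 1, ncol = 3, free = TRUE,
           values = 1/3, lbound = 0, ubound = 1, name = "x"),
  # placeholder objective F(x); write yours as an mxAlgebra
  mxAlgebra(sum((x - 0.3) * (x - 0.3)), name = "F"),
  # equality constraint sum(x) = 1
  mxConstraint(sum(x) == 1, name = "simplex"),
  mxFitFunctionAlgebra("F")
)
fit <- mxRun(model)
mxEval(x, fit)   # the constrained solution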

Related

What is the best package/algorithm to solve a large optimization problem in R?

The problem has an exponential objective and an exponential equality constraint function. Both my objective and constraint function are differentiable.
I want to solve it using the Lagrange method, and I also want the function to output the Lagrange multiplier. I am currently exploring auglag in nloptr and solnp in Rsolnp.
First question:
Would you suggest these packages, or should I explore some other package?
Second question:
I know that Rsolnp and nloptr mainly perform local optimisation, but if I want to do global optimisation on a similar problem, can you suggest a package/algorithm?
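For what it's worth, Rsolnp's solnp() does report the multipliers at the solution, which matches the second requirement. A minimal sketch with a made-up exponential objective and equality constraint (fn, heq and the target value eqB are placeholders):

library(Rsolnp)

fn  <- function(x) exp(x[1]) + exp(2 * x[2])   # placeholder objective
heq <- function(x) exp(x[1] + x[2])            # constrained to equal eqB below

fit <- solnp(pars = c(0, 0), fun = fn, eqfun = heq, eqB = exp(1))
fit$pars       # solution
fit$lagrange   # Lagrange multiplier(s) for the equality constraint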

constrained optimization without gradient

This is a more general question, somewhat independent of data, so I do not have a MWE.
I often have functions fn(.) that implement algorithms that are not differentiable but that I want to optimize. I usually use optim(.) with its standard method, which works fine for me in terms of speed and results.
However, I now have a problem that requires me to set bounds on one of the several parameters of fn. From what I understand, optim(method="L-BFGS-B",...) allows me to set limits on parameters but also requires a gradient. Because fn(.) is not a mathematical function but an algorithm, I suspect it does not have a gradient that I could derive through differentiation. This leads me to ask whether there is a way of performing constrained optimization in R that does not require me to supply a gradient.
I have looked at some sources, e.g. John C. Nash's texts on this topic, but as far as I understand them, they mostly concern differentiable functions where gradients can be supplied.
Summarizing the comments so far (which are all things I would have said myself):
you can use method="L-BFGS-B" without providing explicit gradients (the gr argument is optional); in that case, R will approximate the derivatives by finite differencing (@G.Grothendieck). This is the simplest solution, because it works "out of the box": you can try it and see if it works for you. However:
L-BFGS-B is probably the finickiest of the methods provided by optim() (e.g. it can't handle the case where a trial set of parameters evaluates to NA)
finite-difference approximations are relatively slow and numerically unstable (but fine for simple problems)
for simple cases you can fit the parameter on a transformed scale, e.g. if b is a parameter that must be positive, you can use log_b as a parameter (and transform it via b <- exp(log_b) in your objective function) (@SamMason). But:
there isn't always a simple transformation that will achieve the constraint you want
if the optimal solution is on the boundary, transforming will cause problems
there are a variety of derivative-free optimizers with constraints (typically "box constraints", i.e. independent lower and/or upper bounds on one or more parameters) (@ErwinKalvelagen): dfoptim has a few, I have used the nloptr package (and its BOBYQA optimizer) extensively, and minqa has some as well. This is the solution I would recommend; see the sketch after this list.
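A minimal sketch of that recommended route using dfoptim's nmkb(), a bounded Nelder-Mead that needs no gradients (the objective below is a made-up non-smooth function, and note that nmkb() requires a starting point strictly inside the bounds):

library(dfoptim)

# Made-up non-differentiable objective, for illustration only
fn <- function(p) sum(abs(p - c(0.5, 2))) + 0.1 * round(p[1] * 10)

fit <- nmkb(par = c(0.3, 1), fn = fn, lower = c(0, 0), upper = c(1, 5))
fit$par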

Mathematical constrained optimization in R

I have a mathematical optimization problem which I wish to solve in R. Consider this system/problem:
[model statement given as an image in the original post]
How can I solve this problem in R?
In this model, Budget, p_l (for all l) and mu_target are fixed constants, while mu is a given m-dimensional vector and R is a given n-by-m matrix.
I have looked into constrOptim and lp, but I don't have the imagination to implement the constraints.
Those functions require a "constraint" matrix, but my problem is that I simply don't know how to design that matrix. There are not many examples with decision variables on both sides of the equations.
Have a look at the nloptr package. It has quite extensive documentation with examples, and lots of algorithms to choose from, depending on what problem you are trying to solve.
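As an aside on the constrOptim() route mentioned in the question: its linear constraints are encoded as ui %*% x >= ci, with one row of ui per constraint. A hypothetical sketch (the objective and the Budget value are placeholders):

# Constraints: x_i >= 0 for i = 1..3, and sum(x) <= Budget
Budget <- 1
ui <- rbind(diag(3),      # x_1 >= 0, x_2 >= 0, x_3 >= 0
            rep(-1, 3))   # -sum(x) >= -Budget, i.e. sum(x) <= Budget
ci <- c(rep(0, 3), -Budget)

F <- function(x) sum((x - 0.2)^2)   # placeholder objective

# the starting point must satisfy the constraints strictly
constrOptim(theta = rep(0.1, 3), f = F, grad = NULL, ui = ui, ci = ci)$par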

Is there an equivalent to Matlab's rcond() function in Julia?

I'm porting some Matlab code that uses rcond() to test for singularity, as also recommended here (for Matlab singularity testing).
I see that there is a cond() function in Julia (as in Matlab), but rcond() doesn't appear to be available by default:
ERROR: rcond not defined
I'd assume that rcond(), like the Matlab version, is more efficient than 1/cond(). Is there such a function in Julia, perhaps in an add-on module?
Julia calculates the condition number using the ratio of the maximum to the minimum of the singular values (got to love open source, no more MATLAB black boxes!).
Julia doesn't have an rcond function in Base, and I'm unaware of one in any package. If it did, it would just be the ratio of the minimum to the maximum instead. I'm not sure why it's efficient in MATLAB, but it's quite possible that whatever the reason is, it doesn't carry through to Julia.
Matlab's rcond is an optimization based on the fact that it's an estimate of the reciprocal condition number for square matrices. In my testing, and given that its help mentions LAPACK's 1-norm estimator, it appears to use LAPACK's dgecon.f. In fact, this is exactly what Julia does when you ask for the condition number of a square matrix with the 1- or Inf-norm.
So you can simply define
rcond(A::StridedMatrix) = 1/cond(A,1)
You can save Julia from twice-inverting LAPACK's result by manually combining cond(::StridedMatrix) and cond(::LU), but the savings will almost certainly be immeasurable. Where there is a measurable saving, however, is that you can take norm(A) directly instead of reconstructing a matrix similar to A from its LU factorization:
rcond(A::StridedMatrix) = LAPACK.gecon!('1', lufact(A).factors, norm(A, 1))
In my tests, this behaves identically to Matlab's rcond (R2014b) and provides a decent speedup.

linear program solver with box/bound constraints?

Is there a linear program optimizer in R that supports upper and lower bound constraints?
The libraries limSolve and lpSolve do not support bound constraints.
It is not at all clear from the CRAN Optimization Task View page which LP optimizers support bound constraints.
Please note that linear programming solvers generally assume their variables are non-negative by default. If you need different lower bounds, the easiest approach is to perform a linear transformation of the variables, apply lpSolve (or Rglpk), and transform the solution back. This was explained in a posting to R-help some time ago -- which I am not able to find at the moment.
By the way, Rglpk has a parameter 'bounds' that lets you define upper and lower bounds through vectors, not matrices. That may ease your concern about matrices growing too fast.
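A minimal sketch of that parameter, using a made-up two-variable LP (the objective, constraint and bound values are placeholders):

library(Rglpk)

# Maximize 2x + 3y subject to x + y <= 4,
# with box constraints -1 <= x <= 3 and 0.5 <= y <= 2
obj <- c(2, 3)
mat <- matrix(c(1, 1), nrow = 1)
dir <- "<="
rhs <- 4

# bounds overrides the default 0 <= x_j < Inf for the listed indices
bounds <- list(lower = list(ind = c(1L, 2L), val = c(-1, 0.5)),
               upper = list(ind = c(1L, 2L), val = c(3, 2)))

Rglpk_solve_LP(obj, mat, dir, rhs, bounds = bounds, max = TRUE)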
The Rglpk package handles bound constraints directly.
Or consider the General Purpose Continuous Solvers section of the Optimization Task View:
Package stats offers several general purpose optimization routines. First, function optim() provides an implementation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, bounded BFGS, conjugate gradient, Nelder-Mead, and simulated annealing (SANN) optimization methods. It utilizes gradients, if provided, for faster convergence. Typically it is used for unconstrained optimization but includes an option for box-constrained optimization.
Additionally, for minimizing a function subject to linear inequality constraints stats contains the routine constrOptim().
nlminb() offers unconstrained and box-constrained optimization using PORT routines.
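A minimal sketch of nlminb()'s box-constraint interface (the objective and bounds are placeholders; no gradient is supplied, so it is approximated numerically):

# Box-constrained minimization without an explicit gradient
fn <- function(p) (p[1] - 2)^2 + (p[2] + 1)^2
nlminb(start = c(0, 0), objective = fn,
       lower = c(-1, -0.5), upper = c(1, 0.5))$par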
