I need help with a price model optimization.
I am trying to maximize Sale subject to several conditions. I have already done the optimization in Excel using Solver (GRG Nonlinear) but want to do it in R, since Solver has limitations (Microsoft Excel Solver has a limit of 200 decision variables, for both linear and nonlinear problems).
Excel's NLP solver is based on Lasdon's GRG2 solver; I don't think that is available under R. We don't know the exact form of your model or its size (details like whether the constraints are linear or not, and whether the objective is linear, quadratic or otherwise nonlinear), so it is difficult to recommend a particular solver. The CRAN Optimization Task View has a list of solvers available under R. As opposed to good LP solvers, which can basically solve whatever you throw at them, NLP solvers are a little bit more fragile and may require a little bit more hand-holding (things like scaling, the initial point, and bounds come to mind).
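To give a feel for what that hand-holding looks like, here is a minimal sketch of a small NLP in R using nloptr's derivative-free COBYLA algorithm. The objective, constraint, bounds and starting point are all invented for illustration, not taken from your model:

```r
library(nloptr)

# toy NLP: minimise (x - 1)^2 + (y - 2)^2 subject to x + y <= 2,
# with both variables bounded to [0, 2]
eval_f <- function(x) (x[1] - 1)^2 + (x[2] - 2)^2
eval_g <- function(x) x[1] + x[2] - 2   # nloptr expects g(x) <= 0

sol <- nloptr(x0 = c(0.5, 0.5),          # a feasible starting point
              eval_f = eval_f,
              eval_g_ineq = eval_g,
              lb = c(0, 0), ub = c(2, 2),
              opts = list(algorithm = "NLOPT_LN_COBYLA",
                          xtol_rel = 1e-8, maxeval = 1000))
sol$solution   # the constrained minimiser, here (0.5, 1.5)
```

Note how the starting point, bounds and stopping tolerance all have to be supplied explicitly; with an NLP solver it is worth experimenting with these when results look off.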
I had to add some circular dependencies to my model, and thus added NonlinearBlockGS and LinearBlockGS to the group with the circular dependency. I get messages like this:
LN: LNBGSSolver 'LN: LNBGS' on system 'XXX' failed to converge in 10 iterations.
in the phase where it's finding the coloring of the problem. There is a Dymos trajectory as part of the problem, but the circular dependency is not in the Trajectory group; it's upstream. It nevertheless converges very easily when actually solving the problem. The number of FWD solves is the same as it was before, and everything seems to work fine. Should I be worried about anything?
The way our total-derivative coloring works is that we replace partial derivatives with random numbers and then solve the linear system, so the linear solver should be converging. Now, whether or not it should converge with LNBGS in 10 iterations... probably not.
It's hard to speak definitively when putting random numbers into a matrix to invert it... but generally speaking it should remain invertible (though we can't promise that). That does not mean it will remain easily invertible. How close does the linear residual get during the coloring? It is decreasing, but slowly. Would more iterations let it get there?
If your problem is working well, I don't think you need to freak out about this. If you would like it to converge better, that won't hurt anything and might give you a better coloring. You can increase the iprint of that solver to get more information on the convergence history.
Another option, if your system is small enough, is to try using the DirectSolver instead of LNBGS. For most models with fewer than 10,000 variables, a DirectSolver will be faster overall than LNBGS. There is a nice symmetry to using LNBGS with NLBGS... but while the nonlinear solver tends to be a good choice (i.e. fast and stable) for cyclic dependencies, the same can't be said for its linear counterpart.
So my go-to combination is NLBGS and DirectSolver. You can't always use the DirectSolver: if you have distributed components in your model, or components that use the matrix-free derivative APIs (apply_linear, compute_jacvec_product), then LNBGS is a good option. But if everything is explicit components with compute_partials, or implicit components that provide partials in the linearize method, then I suggest using the DirectSolver as your first option.
I think you may have discovered a coloring performance issue in OpenMDAO. When we compute coloring, internally we replace the component partials with random arrays matching the declared sparsity. Since we're not trying to find an actual solution when we compute coloring, we probably don't need to iterate more than once in any given system. And we shouldn't be generating convergence warnings when computing the coloring. I don't think you need to be worried in this case. I'll put a story in our bug tracker to look into this.
I want to train SVMs in R, and I know there are functions such as e1071::tune.svm() that can be used to find the optimal parameters for an SVM. However, it seems there are some formulas out there (e.g. the ones used in this report) that can give you a reasonable estimate of these parameters.
Since a grid search for the parameters can take quite a lot of time on larger datasets, and one usually has to provide a range of possible values anyway, I wondered whether there is a package that implements such formulas to get a quick estimate of the gamma and cost parameters for the SVM?
So far, I've found that caret::train() might use such an approach to estimate sigma (which should be the reciprocal of 2*gamma^2), but I haven't tried it yet, since other calculations are still running (and probably will be for the next few days). Is there also an implementation to estimate cost, or at least to give a range of reasonable values?
I have found a similar question that asks about alternatives to grid search in general. However, I would be interested in an R implementation of such alternatives, and I also hope things have developed further since that more general question was posted years ago.
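For the sigma part specifically, caret's default sigma estimate comes from kernlab::sigest(), which you can also call directly without waiting for train(). A minimal sketch (the iris data and the frac value are just for illustration):

```r
library(kernlab)

x <- as.matrix(iris[, 1:4])
# sigest() estimates a plausible range for the RBF kernel's sigma from
# the distribution of pairwise squared distances in (a fraction of) the
# data; it returns the 10%, 50% and 90% quantiles of that estimate
sig <- sigest(x, frac = 0.5, scaled = TRUE)
sig   # three positive values spanning a reasonable sigma range
```

For cost there is, as far as I know, no comparable closed-form estimate; the usual pragmatic advice (e.g. in the libsvm practical guide) is a coarse log-scale search, say over 2^-5 to 2^15.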
I am trying to find the extremum of a linear objective function with linear equality, linear inequality and nonlinear (quadratic) inequality constraints. The problem is that I have already tried many algorithms from packages like nloptr, Rsolnp and NlcOptim, and each time I have obtained different results. What is more, the results differ (in many cases) from those of the GRG algorithm in Excel, which can find better results in terms of minimising the objective function.
So far solnp (Rsolnp package) gives some good results, and after proper calibration the results are even better than those from Excel's GRG algorithm. Results from solnl (NlcOptim) are average and very different, even if the input data are changed only slightly.
nloptr (nloptr package) has a variety of algorithms implemented. I tried a few (I do not remember exactly which) and the results were still average and completely different from those obtained so far from the other algorithms.
My knowledge of optimisation algorithms is really poor, and my attempts are rather based on a random selection of algorithms. Could you therefore advise on some algorithms implemented in R that can handle such a problem? And which one is better than another, and why? Maybe there is some framework or decision tree for choosing a proper optimisation algorithm.
If this helps: I am trying to find the optimal weights of portfolio assets, where the objective is to minimise portfolio risk (standard deviation), with all asset weights summing to 1 and greater than or equal to 0, and with a defined portfolio return as a constraint.
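Note that the portfolio problem as described (minimise w'Σw subject to sum(w) = 1, w >= 0 and a target return) can be posed as a quadratic program with only linear constraints, which quadprog::solve.QP solves deterministically, with no starting point or calibration needed. A sketch with made-up covariance and return figures:

```r
library(quadprog)

# toy inputs: 4 assets; the covariance matrix and expected returns
# below are invented for illustration
Sigma  <- diag(4) * 0.04 + 0.01        # positive-definite toy covariance
mu     <- c(0.06, 0.08, 0.10, 0.07)    # toy expected returns
target <- 0.08                         # required portfolio return

# solve.QP minimises (1/2) w' D w - d'w subject to t(A) w >= b,
# with the first meq rows treated as equalities
Amat <- cbind(rep(1, 4),               # sum(w) = 1
              mu,                      # mu'w  = target
              diag(4))                 # w >= 0
bvec <- c(1, target, rep(0, 4))
sol  <- solve.QP(Dmat = 2 * Sigma, dvec = rep(0, 4),
                 Amat = Amat, bvec = bvec, meq = 2)
w <- sol$solution                      # minimum-variance weights
```

Minimising the variance w'Σw also minimises the standard deviation, so the quadratic objective is equivalent to your stated one, and a QP solver will give the same answer every run.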
I am currently looking at rewriting a commercial "black-box" portfolio optimiser: data in -> results out. I want to move away from it and use my own R version; so far I have two implementations working for my equality constraints, solve.QP and constrOptim.
My problem now is that the more I move towards nonlinear constraints (especially turnover limitations and transaction costs), the less information I find. It would be great if someone could recommend a package, ideally a finance package, or else a more general mathematical one. The few packages I have read about along those lines so far were nloptr, fPortfolio and sometimes Rmetrics.
Any examples would also be highly appreciated.
thanks
Turnover constraints involve an absolute value. This can be linearized, so you can use your existing solver.
Linear transaction costs: same story. If your transaction costs have a fixed-cost structure, then things become more complicated; that may require an MIQP solver.
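A sketch of the linearization in R with lpSolve, using made-up numbers: write the new weights as w = w0 + b - s with buys b >= 0 and sells s >= 0, so that sum(b + s) bounds the turnover sum(|w - w0|):

```r
library(lpSolve)

n   <- 3
mu  <- c(0.05, 0.07, 0.06)   # toy expected returns (invented)
w0  <- c(0.40, 0.30, 0.30)   # current weights, summing to 1
tau <- 0.10                  # maximum total turnover allowed

# decision variables: x = c(b, s), buys then sells, both >= 0 by default
obj <- c(mu, -mu)            # maximise mu'(w0 + b - s), dropping mu'w0
con <- rbind(
  c(rep(1, n), rep(-1, n)),  # sum(b) - sum(s) = 0, so weights still sum to 1
  c(rep(1, n), rep(1, n)),   # sum(b) + sum(s) <= tau  (turnover budget)
  cbind(diag(n), -diag(n))   # b - s >= -w0, so w = w0 + b - s stays >= 0
)
dir <- c("=", "<=", rep(">=", n))
rhs <- c(0, tau, -w0)

sol <- lp("max", obj, con, dir, rhs)
w <- w0 + sol$solution[1:n] - sol$solution[(n + 1):(2 * n)]
```

Here the LP shifts as much weight as the turnover budget allows from the lowest-return asset to the highest-return one. The same buy/sell split works inside a QP, so the solve.QP setup you already have can absorb a turnover limit without any nonlinear machinery.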
Is there a linear program optimizer in R that supports upper and lower bound constraints?
The libraries limSolve and lpSolve do not support bound constraints.
It is not at all clear from the CRAN Optimization Task View page which LP optimizers support bound constraints.
Please note that linear programming solvers typically assume their variables are nonnegative. If you need different lower bounds, the easiest thing is to perform a linear transformation on the variables, apply lpSolve (or Rglpk), and transform the variables back. This was explained in a posting to R-help some time ago, which I am not able to find at the moment.
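The transformation is just a shift: substitute x = x' + l so the new variable x' is nonnegative, rewrite the constraints in terms of x', and shift back afterwards. A toy example with lpSolve (the LP itself is invented):

```r
library(lpSolve)

# maximise 2x + 3y subject to x + y <= 10, with the lower bound x >= -5.
# Substitute x = x' - 5 with x' >= 0; the constraint x + y <= 10
# becomes (x' - 5) + y <= 10, i.e. x' + y <= 15.
obj <- c(2, 3)
con <- matrix(c(1, 1), nrow = 1)
sol <- lp("max", obj, con, "<=", 15)

x <- sol$solution[1] - 5     # undo the shift to recover the original x
y <- sol$solution[2]
```

The optimum lands at x = -5, y = 15, which the untransformed problem could not express because lpSolve assumes x >= 0.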
By the way, Rglpk has a parameter 'bounds' that lets you define upper and lower bounds through vectors rather than matrices. That may ease your concern about matrices growing too fast. The functions in the Rglpk package handle such bound constraints directly.
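For example, here is a toy LP with a negative lower bound passed directly through Rglpk's 'bounds' argument (the numbers are made up; variables without an explicit bound default to [0, Inf)):

```r
library(Rglpk)

# maximise 2x + 3y subject to x + y <= 10, with x >= -5 and y >= 0
obj <- c(2, 3)
mat <- matrix(c(1, 1), nrow = 1)
# 'bounds' takes index/value vectors, so no extra constraint rows are needed
bounds <- list(lower = list(ind = 1L, val = -5))
sol <- Rglpk_solve_LP(obj, mat, "<=", 10, bounds = bounds, max = TRUE)

sol$solution   # x = -5, y = 15
sol$optimum    # 35
```

This gives the same answer as the manual variable shift, but the bounds stay a pair of sparse vectors rather than extra rows in the constraint matrix.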
Or consider the General Purpose Continuous Solvers:
Package stats offers several general-purpose optimization routines. First, function optim() provides implementations of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, bounded BFGS (L-BFGS-B), conjugate gradient, Nelder-Mead, and simulated annealing (SANN). It uses gradients, if provided, for faster convergence. It is typically used for unconstrained optimization but includes an option for box-constrained optimization.
Additionally, for minimizing a function subject to linear inequality constraints stats contains the routine constrOptim().
nlminb() offers unconstrained and box-constrained optimization using PORT routines.
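For instance, a box-constrained minimisation of the classic Rosenbrock test function via optim()'s "L-BFGS-B" method (the bounds and starting point are chosen arbitrarily for the example):

```r
# Rosenbrock function: minimised at (1, 1), where it equals 0
rosenbrock <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

# L-BFGS-B is the optim() method that accepts 'lower' and 'upper' bounds;
# no analytic gradient is supplied, so optim() uses finite differences
fit <- optim(par = c(-1, 2), fn = rosenbrock, method = "L-BFGS-B",
             lower = c(-2, -2), upper = c(2, 2))
fit$par   # close to (1, 1)
```

Since the true minimum lies inside the box here, the bounds are inactive; tightening them (e.g. upper = c(0.5, 2)) would pin the solution to the boundary instead.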