Maximization in quadratic programming using CGAL

I am using CGAL to solve some quadratic programming problems.
Assume I want to minimize x^2 for x taking values from -oo (minus infinity) to
+oo. This can easily be solved by doing:
Program qp (CGAL::SMALLER, false, 0, false, 0);
qp.set_d(0, 0, 2);
Solution s = CGAL::solve_quadratic_program(qp, ET());
which of course returns 0 as the result. Now suppose I want to maximize
x^2. In order to do so, I have to minimize -x^2. But the following does not "work"
in CGAL:
Program qp (CGAL::SMALLER, false, 0, false, 0);
qp.set_d(0, 0, -2);
Solution s = CGAL::solve_quadratic_program(qp, ET());
because the matrix D = [-2] is now not positive semidefinite (the API for a quadratic programming problem requires D to be positive semidefinite). Running the above snippet returns the wrong result 0 instead of -oo.
What should I do in order to maximize an objective function like x^2 in CGAL?

CGAL's documentation says that the objective function must be a convex function to be minimized. You are trying to minimize -x^2, which is not convex, so you cannot do this with CGAL.
Furthermore, section 10.2.2 of the documentation I've linked says that trying to minimize a non-convex function may not even produce a warning that the problem is non-convex, and may instead return a message that an optimal solution was found. That is, if you're going to use CGAL for QP, make sure the problem is convex quadratic, or you're going to get bad answers.
You might consider a solver that can handle non-convex nonlinear optimization. IPOPT is open source, and will work if your objective function and constraints are twice continuously differentiable. COIN-OR has several solvers (see "Optimization deterministic nonlinear") that might work for you. KNITRO is an excellent commercial solver.

Related

How to do interval-arithmetic-based integration using the "IntervalArithmetic" package in Julia?

How do I integrate an interval-valued nonlinear function (for example, dF(X)/dX = a/(1+cX), where a = [1, 2] and c = [2, 3] are interval constants) using the "IntervalArithmetic" package in Julia? Could you please give an example, or point me to a relevant document? F(X) should come out as an interval with bounds, F(X) = [p, q].
Just numerical integration?
As long as the integrating code is written in Julia (otherwise I suspect it will struggle to understand IntervalArithmetic) and there isn't some snag about how it should interpret tolerances, it should just work, more or less the way you might expect it to handle e.g. complex numbers.
using IntervalArithmetic
f(x) = interval(1,2)/(1+interval(2,3)*x)
and combined with e.g.
using QuadGK
quadgk(f, 0, 1)
gives ([0.462098, 1.09862], [0, 0.636515]) (so... I guess the interpretation here is that the error estimate lies in the interval 0 to 0.636515 :))
Just as a sanity check, let's go with the good old trapezoidal rule.
using Trapz
xs = range(0, 1, length=100)
trapz(xs, f.(xs))
again gives us the expected interval [0.462098, 1.09862]
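As an extra cross-check on those bounds (my own arithmetic, not part of the original answer): at each x in [0,1] the interval value is [1/(1+3x), 2/(1+2x)], and integrating the two endpoint functions over [0,1] gives (1/3)*log(4) ≈ 0.462098 and log(3) ≈ 1.098612, which is exactly the interval both methods return.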

Solve system of implicit ODE with Scilab

I'm modelling an overhead crane and obtained its equations of motion (posted as an image that is not reproduced here; per the answer below, they form a system of four coupled second-order ODEs in x, L, theta and q).
I'm a noob when it comes to Scilab, and so far I have only simulated (using ode) linear systems with no more than two degrees of freedom, which are simple systems that I can easily convert to a matrix and integrate with ode.
But I have no clue how to simulate this particular system, not because of the sin and cos functions, but because I don't know how to put it into state-space form.
I've looked at a few tutorials (listed below) but didn't understand any of them. Can somebody tell me how to do it, or at least point me to where I could learn it?
http://www.openeering.com/sites/default/files/Nonlinear_Systems_Scilab.pdf
http://www.math.univ-metz.fr/~sallet/ODE_Scilab.pdf
Thank you, and sorry about my English.
The usual form means writing the system in terms of first-order derivatives, so each second-derivative term is rewritten as
x'' = d(x')/dt
For example, introducing v := x', the second-order equation x'' = f(x, x', ...) becomes the pair x' = v, v' = f(x, v, ...).
Substitute these into the equations you have. You'll end up with eight simultaneous ODEs to solve instead of four, with appropriate initial conditions.
Although this ODE system is implicit, you can solve it with a classical (explicit) ODE solver by reformulating it this way: if you define X = (x, L, theta, q)^T, then your system can be rewritten using matrix algebra as A(X,X') * X'' = B(X,X'). Please note that the first-order form of this system is
d/dt(X,X') = ( X', A(X,X')^(-1)*B(X,X') )
Suppose now that you have defined two Scilab functions A and B which actually compute their values from the values of X and X':
function out = A(X,Xprime)
    // unpack positions
    x = X(1)
    L = X(2)
    theta = X(3)
    q = X(4)
    // unpack velocities
    xd = Xprime(1)
    Ld = Xprime(2)
    thetad = Xprime(3)
    qd = Xprime(4);
    ...
endfunction
function out = B(X,Xprime)
    ...
endfunction
then the right-hand side of the system of 8 ODEs, as it can be given to the ode function of Scilab, can be coded as follows:
function dstate_dt = rhs(t,state)
    X = state(1:4);       // positions (x, L, theta, q)
    Xprime = state(5:8);  // velocities
    dstate_dt = [ Xprime
                  A(X,Xprime) \ B(X,Xprime) ]
endfunction
Writing the code of A() and B() according to the given equations is the only remaining (but quite easy) task.

How to handle boundary constraints when using `nls.lm` in R

I asked this question a while ago. I am not sure whether I should post this as an answer or a new question. I do not have an answer, but I "solved" the problem by applying the Levenberg-Marquardt algorithm using nls.lm in R and, when the solution is at the boundary, running the trust-region-reflective algorithm (TRR, implemented in R) to step away from it. Now I have new questions.
In my experience, done this way the program reaches the optimum and is not so sensitive to the starting values. But this is only a practical method to sidestep the issues I encountered using nls.lm and other optimization functions in R. I would like to know why nls.lm behaves this way for optimization problems with boundary constraints, and how to handle boundary constraints when using nls.lm in practice.
Below I give an example illustrating two issues with nls.lm:
It is sensitive to starting values.
It stops when some parameter reaches the boundary.
A Reproducible Example: FOCUS Dataset D
library(devtools)
install_github("KineticEval","zhenglei-gao")
library(KineticEval)
data(FOCUS2006D)
km <- mkinmod.full(parent=list(type="SFO",M0 = list(ini = 0.1,fixed = 0,lower = 0.0,upper =Inf),to="m1"),m1=list(type="SFO"),data=FOCUS2006D)
system.time(Fit.TRR <- KinEval(km,evalMethod = 'NLLS',optimMethod = 'TRR'))
system.time(Fit.LM <- KinEval(km,evalMethod = 'NLLS',optimMethod = 'LM',ctr=kingui.control(runTRR=FALSE)))
compare_multi_kinmod(km,rbind(Fit.TRR$par,Fit.LM$par))
dev.print(jpeg,"LMvsTRR.jpeg",width=480)
The differential equations that describe the model/system are:
"d_parent = - k_parent * parent"
"d_m1 = - k_m1 * m1 + k_parent * f_parent_to_m1 * parent"
In the graph, on the left is the model with initial values, in the middle the fitted model using "TRR" (similar to the algorithm in Matlab's lsqnonlin function), and on the right the fitted model using "LM" with nls.lm. Looking at the fitted parameters (Fit.LM$par), you will find that one fitted parameter (f_parent_to_m1) is at the boundary 1. If I change the starting value of the parameter M0_parent from 0.1 to 100, I get the same results from nls.lm and lsqnonlin. I have many cases like this one.
newpars <- rbind(Fit.TRR$par,Fit.LM$par)
rownames(newpars)<- c("TRR(lsqnonlin)","LM(nls.lm)")
newpars
M0_parent k_parent k_m1 f_parent_to_m1
TRR(lsqnonlin) 99.59848 0.09869773 0.005260654 0.514476
LM(nls.lm) 84.79150 0.06352110 0.014783294 1.000000
Apart from the above problems, it often happens that the Hessian returned by nls.lm is not invertible (especially when some parameters are on the boundary), so I cannot get an estimate of the covariance matrix. On the other hand, the "TRR" algorithm (in Matlab) almost always gives an estimate by calculating the Jacobian at the solution point. I think this is useful, but I am also sure that the R optimization algorithms (the ones I have tried) did not do this for a reason. I would like to know whether I am wrong to use the Matlab way of calculating the covariance matrix to get standard errors for the parameter estimates.
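For reference, the "Matlab way" mentioned here is the usual Gauss-Newton approximation of the covariance from the Jacobian at the solution; a minimal sketch (my own illustration, assuming you have the residual vector resid and Jacobian J from the fit, not code from lsqnonlin or nls.lm):
# cov(par) ~ s^2 * (J'J)^{-1}, with s^2 = RSS / (n - p)
cov_from_jacobian <- function(J, resid) {
  s2 <- sum(resid^2) / (length(resid) - ncol(J))  # residual variance
  s2 * solve(crossprod(J))                        # s^2 * (J'J)^{-1}
}
Note that this is exactly what breaks down at a boundary: there J'J tends to be (near-)singular, so the resulting standard errors are not trustworthy.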
One last note: I claimed in my previous post that Matlab's lsqnonlin outperforms R's optimization functions in almost all cases. I was wrong. The "Trust-Region-Reflective" algorithm used in Matlab is in fact slower (sometimes much slower) when implemented in R, as you can see from the above example. However, it is still more stable and reaches a better solution than R's basic optimization algorithms.
First off, I am not an expert on Matlab and optimisation, and have never used R.
I am not sure I see what your actual question is, but maybe I can shed some light on your puzzlement:
LM is a slightly enhanced Gauss-Newton approach; for problems with several local minima it is very sensitive to initial states. Including boundaries typically generates more of those minima.
TRR is akin to LM, but more robust. It has better capabilities for "jumping out of" bad local minima. It is quite plausible that it will behave better but perform worse than LM. Actually explaining why is very hard; you would need to study the algorithms in detail and look at how they behave in this situation.
I cannot explain the difference between Matlab's and R's implementations, but there are several extensions to TRR that Matlab may use and R does not.
Does your approach of alternating LM and TRR converge better than TRR alone?
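To make the Gauss-Newton connection in the first point concrete, here is a minimal sketch (in R; my own illustration, not the nls.lm internals) of a single LM step:
lm_step <- function(r, J, lambda) {
  # solve the damped normal equations (J'J + lambda*I) * delta = -J' * r,
  # where r is the residual vector and J its Jacobian at the current parameters
  A <- crossprod(J) + lambda * diag(ncol(J))
  solve(A, -crossprod(J, r))
}
As lambda -> 0 this recovers a pure Gauss-Newton step; a large lambda gives a short, gradient-descent-like step, which is where LM's extra robustness over plain Gauss-Newton comes from.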
Using the mkin package, you can find the parameters using the "Port" algorithm (which is also a kind of TRR algorithm, as far as I can tell from its documentation), or the "Marq" algorithm, which uses nls.lm in the background. Then you can use "normal" starting values or "bad" starting values.
library(mkin)
packageVersion("mkin")
Recent mkin versions can speed up the process considerably, as they compile the models from automatically generated C code if a compiler is available on your system (e.g. you have r-base-dev installed on Debian/Ubuntu, or Rtools on Windows).
This defines the model:
m <- mkinmod(parent = mkinsub("SFO", "m1"),
             m1 = mkinsub("SFO"),
             use_of_ff = "max")
You can check that the differential equations are correct:
cat(m$diffs, sep = "\n")
Then we fit in four variants, Port and LM, with or without M0 fixed to 0.1:
f.Port = mkinfit(m, FOCUS_2006_D)
f.Port.M0 = mkinfit(m, FOCUS_2006_D, state.ini = c(parent = 0.1, m1 = 0))
f.LM = mkinfit(m, FOCUS_2006_D, method.modFit = "Marq")
f.LM.M0 = mkinfit(m, FOCUS_2006_D, state.ini = c(parent = 0.1, m1 = 0),
                  method.modFit = "Marq")
Then we look at the results:
results <- sapply(list(Port = f.Port, Port.M0 = f.Port.M0, LM = f.LM, LM.M0 = f.LM.M0),
                  function(x) round(summary(x)$bpar[, "Estimate"], 5))
which are
Port Port.M0 LM LM.M0
parent_0 99.59848 99.59848 99.59848 39.52278
k_parent 0.09870 0.09870 0.09870 0.00000
k_m1 0.00526 0.00526 0.00526 0.00000
f_parent_to_m1 0.51448 0.51448 0.51448 1.00000
So we can see that the Port algorithm finds the best solution (to the best of my knowledge) even with bad starting values. The speed issue that one may have with more complicated models is alleviated by the automatic generation of C code.

Finding the Constraints Matrix in constrOptim (R)

I would like to set the constraints for constrOptim to optimize the following function:
logistic<-function(b,x,target){
b1<-b[1]
b2<-b[2]
log<-function(x){1/(1+exp(-(b1+b2*x)))}
abs(mean(log(x))-target)
}
Optimization to target=.75 can generally be done using
optim(c(1,1),logistic,x=data,target=.75,hessian=TRUE,method='SANN')
Method SANN seems necessary, as the function is not differentiable.
The constraint is b2>0 (condition 1) or b2<0 (condition 2), while b1 can be any real number. But how can I extend this to constrOptim? Specifically, I do not know how to specify the arguments ui and ci.
Alternative approaches to optimization under these constraints also welcome. Thanks.
EDIT
I think I found a workaround. We can enforce the constraint b2<0 by redefining logistic as
logistic_lt<-function(b,x,target){
b1<-b[1]
b2<-b[2]
if(b2>0){b2<-b2*(-1)}
log<-function(x){1/(1+exp(-(b1+b2*x)))}
abs(mean(log(x))-target)
}
Still I'd be interested in a solution involving constrOptim.
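For what it's worth, a minimal sketch of how ui and ci could be specified for condition 1 (b2 > 0), assuming the logistic and data objects from above (my own suggestion, untested against the original data): constrOptim encodes linear constraints as ui %*% b - ci >= 0, the starting value must satisfy them strictly, and with grad = NULL it uses Nelder-Mead, which also sidesteps the differentiability issue.
ui <- matrix(c(0, 1), nrow = 1)  # selects b2: constraint is 0*b1 + 1*b2 >= 0
ci <- 0                          # start strictly inside the region (b2 = 1 > 0)
fit <- constrOptim(theta = c(1, 1), f = logistic, grad = NULL,
                   ui = ui, ci = ci, x = data, target = .75)
# for condition 2 (b2 < 0), flip the sign and start e.g. at theta = c(1, -1):
# ui <- matrix(c(0, -1), nrow = 1)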

How to solve nested ODE equations

We can use the deSolve package in R for ordinary differential equations (ODEs); however, I can't find a way to solve two nested ODE equations. Suppose
b'(t) = beta - k*b(t);
a'(t) = alpha -b(t)*gamma;
where ' denotes differentiation. How can we solve for a and b then? Since a' is a function of b, we have to solve a and b simultaneously.
I got the error
Error in lsoda(y, times, func, parms, ...) : The used combination of solvers cannot be nested.
when I tried to nest the solver for b inside the ODE function for a.
I may be confused, but you seem to be describing coupled equations, which lsoda can handle perfectly well, as follows. (I implemented your ODEs but made up some parameter values, since I didn't know what you had in mind.)
gfun <- function(t,y,parms,...) {
  ## the 'with' trick lets us write the gradient in terms of variable/parameter names
  with(as.list(c(y,parms)),
       list(c(b=beta-k*b,a=alpha-b*gamma),NULL))
}
library(deSolve)
L1 <- lsoda(y=c(b=1,a=1),
            times=seq(0,10,by=0.1),
            func=gfun,
            parms=c(alpha=0.1,beta=0.2,gamma=0.05,k=0.01))
matplot(L1[,1],L1[,-1],type="l",lty=1,bty="l",las=1)
PS: this seems to be a set of coupled linear ODEs, so you should actually be able to get a full closed-form solution rather than solving them numerically. (I'm too lazy to do that right now; b(t) can be solved immediately (an "affine" equation), a(t) can be solved by integration.)
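For the record, that closed form is quick to write down (my own derivation, not from the original answer): b(t) = beta/k + (b0 - beta/k)*exp(-k*t), and integrating a'(t) = alpha - gamma*b(t) gives a(t) = a0 + (alpha - gamma*beta/k)*t - (gamma/k)*(b0 - beta/k)*(1 - exp(-k*t)). A quick check against the lsoda solution above:
b_exact <- function(t, b0, beta, k) beta/k + (b0 - beta/k) * exp(-k * t)
a_exact <- function(t, a0, b0, alpha, beta, gamma, k)
  a0 + (alpha - gamma * beta/k) * t - (gamma/k) * (b0 - beta/k) * (1 - exp(-k * t))
tt <- seq(0, 10, by = 0.1)
max(abs(L1[, "b"] - b_exact(tt, b0 = 1, beta = 0.2, k = 0.01)))  # ~ solver tolerance
max(abs(L1[, "a"] - a_exact(tt, a0 = 1, b0 = 1, alpha = 0.1,
                            beta = 0.2, gamma = 0.05, k = 0.01)))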
