univariate nonlinear optimization with quadratic constraint in R

I have a quadratic function f, where f = function(x) {2 + .1*x + .23*(x*x)}. Let's say I have another quadratic function g, where g = function(x) {3 + .4*x - .60*(x*x)}.
Now I want to maximize f subject to the constraints 1. g(x) > 0 and 2. 600 < x < 650.
I have tried the packages optim, constrOptim and optimize. optimize does one-dimensional optimization, but without constraints, and I couldn't understand constrOptim. I need to do this in R. Please help.
P.S. In this example the values may be erratic, as I have given two random quadratic functions, but basically I want to maximize a quadratic function subject to a quadratic constraint.

If you solve g(x) = 0 for x with the usual quadratic formula, that just gives you another set of bounds on x. If the x^2 coefficient is negative then g(x) > 0 between the two roots; otherwise g(x) > 0 outside them, i.e. on (-Inf, x1) and (x2, Inf).
In this case g(x) > 0 for -1.927 < x < 2.59, so both of your constraints cannot be satisfied simultaneously (g(x) is LESS THAN 0 for 600 < x < 650).
But suppose your second condition were 1 < x < 5: then you would just intersect the solution of g(x) > 0 with that interval to get 1 < x < 2.59, and maximise f on that interval using standard univariate optimisation.
And you don't even need to run an optimisation algorithm. Your target f is quadratic. If the coefficient of x^2 is positive, the maximum is going to be at one of your limits of x, so you only have a small number of values to try. If the coefficient of x^2 is negative, the maximum is either at a limit or at the point where f(x) peaks (solve f'(x) = 0), if that point is within your limits.
So you can do this exactly: there are just a few conditions to test, then some intervals to compute, and then some values of f at those interval endpoints to calculate.
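For instance, a minimal sketch in R using the questioner's f and g, with the bound constraint relaxed to 1 < x < 5 (as above, since 600 < x < 650 is infeasible here):

f <- function(x) 2 + 0.1 * x + 0.23 * x^2
g <- function(x) 3 + 0.4 * x - 0.60 * x^2

# Roots of g(x) = 0 by the quadratic formula; the x^2 coefficient of g is
# negative, so g(x) > 0 between the two roots.
qa <- -0.60; qb <- 0.4; qc <- 3
roots <- sort((-qb + c(-1, 1) * sqrt(qb^2 - 4 * qa * qc)) / (2 * qa))

# Intersect the g(x) > 0 interval with the bound constraint 1 < x < 5.
lower <- max(1, roots[1])
upper <- min(5, roots[2])

# f has a positive x^2 coefficient, so its maximum on a closed interval is at
# an endpoint; evaluate both (optimize(f, c(lower, upper), maximum = TRUE)
# would work just as well).
cands <- c(lower, upper)
best  <- cands[which.max(f(cands))]
c(x = best, f = f(best))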

Related

Solve quadratic optimization with nonlinear constraints [duplicate]

I am trying to solve the following problem with an inequality constraint:
Given time-series data for N stocks, I am trying to construct a portfolio weight vector to minimize the variance of the returns.
The objective function:
min w^{T} \Sigma w
s.t. e_{n}^{T} w = 1
\left\| w \right\|_2 \leq C
where w is the vector of weights, \Sigma is the covariance matrix, e_{n} is a vector of ones, and C is a constant. The second constraint is an inequality constraint on the 2-norm of the weights.
I tried using the nloptr() function but it gives me an error: Incorrect algorithm supplied. I'm not sure how to select the correct algorithm and I'm also not sure if this is the right method of solving this inequality constraint.
I am also open to using other functions as long as they solve this constraint.
Here is my attempted solution:
data <- replicate(4, rnorm(100))
N <- 4
fn <- function(x) { cov.Rt <- cov(data); return(as.numeric(t(x) %*% cov.Rt %*% x)) }
eqn <- function(x) { one.vec <- matrix(1, ncol = N, nrow = 1); return(-1 + as.numeric(one.vec %*% x)) }
C <- 1.5
ineq <- function(x) {
  z1 <- t(x) %*% x
  return(as.numeric(z1 - C))
}
uh <- rep(C^2, N)
lb <- rep(0, N)
x0 <- rep(1, N)
local_opts <- list("algorithm" = "NLOPT_LN_AUGLAG,", xtol_rel = 1.0e-7)
opts <- list("algorithm" = "NLOPT_LN_AUGLAG,",
             "xtol_rel" = 1.0e-8, local_opts = local_opts)
sol1 <- nloptr(x0, eval_f = fn, eval_g_eq = eqn, eval_g_ineq = ineq, ub = uh, lb = lb, opts = opts)
This looks like a simple QP (Quadratic Programming) problem. It may be easier to use a QP solver instead of a general-purpose NLP (NonLinear Programming) solver (no need for derivatives, extra functions, etc.). R has a QP solver called quadprog. It is not totally trivial to set up a problem for quadprog, but here is a very similar portfolio example showing how to solve this. It has the same objective (minimize risk), the same budget constraint, and the lower and upper bounds. The example just has an extra constraint that specifies a minimum required portfolio return.
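A minimal sketch of such a quadprog setup, with toy data and a made-up minimum-return target (not the linked example itself):

library(quadprog)

set.seed(42)
R      <- replicate(4, rnorm(100, 0.001, 0.01))  # toy daily returns for 4 assets
Sigma  <- cov(R)
mu     <- colMeans(R)
N      <- ncol(R)
target <- median(mu)                             # hypothetical required return

# Minimise w' Sigma w  s.t.  sum(w) = 1 (equality), mu'w >= target, 0 <= w <= 0.5.
# solve.QP minimises (1/2) w'Dw - d'w subject to A'w >= b, with the first meq
# constraints treated as equalities.
Amat <- cbind(rep(1, N),   # budget constraint
              mu,          # minimum required return
              diag(N),     # w >= 0
              -diag(N))    # w <= 0.5, written as -w >= -0.5
bvec <- c(1, target, rep(0, N), rep(-0.5, N))
sol  <- solve.QP(Dmat = 2 * Sigma, dvec = rep(0, N),
                 Amat = Amat, bvec = bvec, meq = 1)
round(sol$solution, 4)     # optimal weights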
Actually, I misread the question: the second constraint is ||x|| <= C. I think we can express the whole model as:
min w^{T} \Sigma w
s.t. e^{T} w = 1
\left\| w \right\|_2 \leq C
This actually looks like a convex model. I could solve it with "big" solvers like Cplex, Gurobi and Mosek; these solvers support convex quadratically constrained problems. I also believe this can be formulated as a cone programming problem, opening up more possibilities.
An example can be written with the R package cccp, which stands for Cone Constrained Convex Problems and is a port of CVXOPT.
The 2-norm of the weights doesn't make much sense here; it should arguably be the 1-norm, which is essentially a constraint on the leverage of the portfolio: 1-norm(w) <= 1.6 implies the portfolio is at most 130/30 (sorry for using finance language here). You will want to read about quadratic cones though: w'Cov w = w'L'L w (Cholesky decomposition), and hence w'Cov w = (2-norm(Lw))^2. So you can introduce the linear constraint y - Lw = 0 and t >= 2-norm(y) (this defines a quadratic cone), and then minimize t. The 1-norm can also be replaced by cones, since abs(x_i) = sqrt(x_i^2) = 2-norm(x_i); so introduce a quadratic cone for each element of the vector x.
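A quick numerical check of that identity in R (toy data; chol() returns the upper-triangular factor L with t(L) %*% L equal to the covariance matrix):

set.seed(1)
R     <- replicate(4, rnorm(100))
Sigma <- cov(R)
L     <- chol(Sigma)               # Sigma = t(L) %*% L
w     <- c(0.4, 0.3, 0.2, 0.1)

# w' Sigma w equals the squared 2-norm of L %*% w, which is what lets the
# quadratic risk term be rewritten as a second-order (quadratic) cone constraint.
c(quadratic_form = drop(t(w) %*% Sigma %*% w),
  norm_squared   = sum((L %*% w)^2))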

I want to maximize returns on a portfolio ensuring risk is below a certain level. Which function can I use for optimization?

Objective function to be maximized: pos %*% mu, where pos is the weights row vector and mu is the column vector of mean returns of the d stocks.
Constraints: 1) ones%*%pos = 1, where ones is a row vector of 1's of size 1*d (d is the number of stocks)
2) pos%*%cov%*%t(pos) = rb^2, where cov is the covariance matrix of size d*d and rb is the risk budget, the free parameter whose value will be varied to draw the efficient frontier
I want to write code for this optimization problem in R but I can't think of any function or library that would help.
PS: solve.QP in the library quadprog has been used to minimize covariance subject to a target return. Can this function also be used to maximize return subject to a risk budget? How should I specify the Dmat matrix and dvec vector for this problem?
EDIT :
library(quadprog)
mu <- matrix(c(0.01,0.02,0.03),3,1)
cov # predefined covariance matrix of size 3*3
pos <- matrix(c(1/3,1/3,1/3),1,3) # random weights vector
edr <- pos%*%mu # expected daily return on portfolio
m1 <- matrix(1,1,3) # constraint no.1 ( sum of weights = 1 )
m2 <- pos%*%cov # constraint no.2
Amat <- rbind(m1,m2)
bvec <- matrix(c(1,0.1),2,1)
solve.QP(Dmat= ,dvec= ,Amat=Amat,bvec=bvec,meq=2)
How should I specify Dmat and dvec? I want to optimize over pos.
Also, I think I have not specified constraint no. 2 correctly. It should make the variance of the portfolio equal to the risk budget.
(Disclaimer: There may be a better way to do this in R. I am by no means an expert in anything related to R, and I'm making a few assumptions about how R is doing things, notably that you're using an interior-point method. Also, there is likely an R package for what you're trying to do, but I don't know what it is or how to use it.)
Minimising risk subject to a target return is a linearly-constrained problem with a quadratic objective, looking like this:
min x^T Q x
subject to sum x_i = 1
sum ret_i x_i >= target
(and x >= 0 if you want to be long-only).
Maximising return subject to a risk budget is quadratically-constrained, however; it looks like this:
max ret^T x
subject to sum x_i = 1
x^T Q x <= riskbudget
(and maybe x >= 0).
Convex quadratic terms in the objective impose less of a computational cost in an interior-point method compared to introducing a convex quadratic constraint. With a quadratic objective term, the Q matrix just shows up in the augmented system. With a convex quadratic constraint, you need to optimise over a more complicated cone containing a second-order cone factor and you need to be careful about how you solve the linear systems that arise.
I would suggest you use the risk-minimisation formulation repeatedly, doing a binary search on the target parameter until you've found a portfolio approximately maximising return subject to your risk budget. I am suggesting this approach because it is likely sufficient for your needs.
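A rough sketch of that idea with quadprog (toy data, long-only weights, and a made-up risk budget chosen so the search has something to find):

library(quadprog)

set.seed(1)
d     <- 4
rets  <- matrix(rnorm(200 * d, 0.001, 0.01), ncol = d)   # toy return series
mu    <- colMeans(rets)
Sigma <- cov(rets)

# Minimum-variance long-only portfolio achieving at least `target` return:
# min w' Sigma w  s.t.  sum(w) = 1, mu'w >= target, w >= 0.
min_var_portfolio <- function(target) {
  Amat <- cbind(rep(1, d), mu, diag(d))
  bvec <- c(1, target, rep(0, d))
  solve.QP(Dmat = 2 * Sigma, dvec = rep(0, d),
           Amat = Amat, bvec = bvec, meq = 1)$solution
}
risk <- function(w) sqrt(drop(t(w) %*% Sigma %*% w))

# Binary search on the target return: find the highest target whose
# minimum-variance portfolio still fits within the risk budget.
riskbudget <- 1.1 * risk(min_var_portfolio(min(mu)))     # made-up budget
lo <- min(mu); hi <- max(mu)
for (i in 1:50) {
  mid <- (lo + hi) / 2
  if (risk(min_var_portfolio(mid)) <= riskbudget) lo <- mid else hi <- mid
}
w_opt <- min_var_portfolio(lo)
c(return = sum(w_opt * mu), risk = risk(w_opt))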
If you really want to solve your problem directly, I would suggest using an interface to Todd, Toh, and Tutuncu's SDPT3. This really is overkill; SDPT3 lets you formulate and solve symmetric cone programs of your choosing. I would also note that portfolio optimisation problems are rather special cases of symmetric cone programs, and other approaches exist that are reportedly very successful; unfortunately, I'm not studied up on them.

Efficiently calculating integral of a multivariate function on non-rectangular region?

I want to compute the expected value of a multivariate function f(x) with respect to the Dirichlet distribution. My problem is "penta-nomial" (i.e. 5 variables), so calculating the explicit form of the expected value seems unreasonable. Is there a way to numerically integrate it efficiently?
f(x) = \sum_{i=0}^{4} x_i \log(n / x_i)
where x = <x_0, x_1, x_2, x_3, x_4> and n is a constant.
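One straightforward option is plain Monte Carlo: draw from the Dirichlet, evaluate f, and average. A minimal sketch in R (the Dirichlet parameters and n below are made-up placeholders; rdirichlet() is from the gtools package):

library(gtools)   # for rdirichlet()

n_const <- 100                      # the constant n in f
alpha   <- c(2, 3, 1, 4, 2)         # hypothetical Dirichlet parameters
f       <- function(x) sum(x * log(n_const / x))

draws <- rdirichlet(1e5, alpha)     # each row is one draw of (x_0, ..., x_4)
vals  <- apply(draws, 1, f)

c(estimate = mean(vals), std_error = sd(vals) / sqrt(length(vals)))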

Possible infinite loop on math equation?

I have the following problem, and am having trouble understanding part of the equation:
Monte Carlo methods estimate an integral by, basically, taking a lot of random samples and determining a weighted average. For example, the integral of f(x) can be estimated from N independent random samples xr by
[equation image (http://www.goftam.com/images/area.gif), roughly: integral_{x1}^{x2} f(x) dx ≈ ((x2 - x1) / N) * sum_{i=1}^{N} f(xr)]
for a uniform probability distribution of xr in the range [x1, x2]. Since each
function evaluation f(xr) is independent, it is easy to distribute this work
over a set of processes.
What I don't understand is what f(xr) is supposed to do? Does it feed back into the same equation? Wouldn't that be an infinite loop?
It should say f(xi)
f() is the function we are trying to integrate via the numerical Monte Carlo method, which estimates an integral (and its error) by evaluating the function at randomly chosen points from the integration region.
Your goal is to compute the integral of f from x1 to x2. For example, you may wish to compute the integral of sin(x) from 0 to pi.
Using Monte Carlo integration, you can approximate this by sampling random points in the interval [x1,x2] and evaluating f at those points. Perhaps you'd like to call this MonteCarloIntegrate( f, x1, x2 ).
So no, MonteCarloIntegrate does not "feed back" into itself. It calls a function f, the function you are trying to numerically integrate, e.g. sin.
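For instance, a minimal sketch of such a MonteCarloIntegrate in R (the name is just the one used above, not an existing library function):

MonteCarloIntegrate <- function(f, x1, x2, N = 1e5) {
  xr <- runif(N, min = x1, max = x2)   # N uniform random samples in [x1, x2]
  (x2 - x1) * mean(f(xr))              # interval width times the average height of f
}

MonteCarloIntegrate(sin, 0, pi)        # exact answer is 2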
Replace f(x_r) by f(x_r_i) (read: f evaluated at x sub r sub i). The x_r_i are chosen uniformly at random from the interval [x_1, x_2].
The point is this: the area under f on [x_1, x_2] is equal to (x_2 - x_1) times the average of f on the interval [x_1, x_2]. That is
A = (x_2 - x_1) * [(1 / (x_2 - x_1)) * int_{x_1}^{x_2} f(x)\, dx]
The portion in square brackets is the average of f on [x_1, x_2] which we will denote avg(f). How can we estimate the average of f? By sampling it at N random points and taking the average value of f evaluated at those random points. To wit:
avg(f) ~ (1 / N) * sum_{i=1}^{N} f(x_r_i)
where x_r_1, x_r_2, ..., x_r_N are points chosen uniformly at random from [x_1, x_2].
Then
A = (x_2 - x_1) * avg(f) ~ (x_2 - x_1) * (1 / N) * sum_{i=1}^{N} f(x_r_i).
Here is another way to think about this equation: the area under f on the interval [x_1, x_2] is the same as the area of a rectangle with length (x_2 - x_1) and height equal to the average height of f. The average height of f is approximately
(1 / N) * sum_{i=1}^{N} f(x_r_i)
which is the value we produced previously.
Whether it's xi or xr is irrelevant - it's the random number that we're feeding into function f().
I'm more likely to write the function (aside from formatting) as follows:
(x2-x1) * sum(f(xi))/N
That way, we can see that we're taking the average of N samples of f(x) to get an average height of the function, then multiplying by the width (x2-x1).
Because, after all, integration is just calculating the area under the curve. (Nice pictures at http://hyperphysics.phy-astr.gsu.edu/Hbase/integ.html#c4.)
x_r is a random value from the integral's range.
Substituting Random(x_1, x_2) for x_r would give an equivalent equation.
