Minimizing quadratic function subject to norm inequality constraint - r

I am trying to solve an optimization problem with a norm inequality constraint.
Given time-series data for N stocks, I am trying to construct a portfolio weight vector that minimizes the variance of the returns.
The optimization problem is:
\min_{w} \; w^{T}\Sigma w
\text{s.t.} \; e_{n}^{T}w = 1
\left\| w \right\|_{2} \leq C
where w is the vector of weights, \Sigma is the covariance matrix, e_{n} is a vector of ones, and C is a constant. The second constraint, \left\| w \right\|_{2} \leq C, is an inequality constraint on the 2-norm of the weights.
I tried using the nloptr() function, but it gives me an error: "Incorrect algorithm supplied". I'm not sure how to select the correct algorithm, and I'm also not sure whether this is the right way to handle this inequality constraint.
I am also open to using other functions, as long as they can handle this constraint.
Here is my attempted solution:
data <- replicate(4, rnorm(100))
N <- 4
fn <- function(x) {cov.Rt <- cov(data); return(as.numeric(t(x) %*% cov.Rt %*% x))}
eqn <- function(x) {one.vec <- matrix(1, ncol = N, nrow = 1); return(-1 + as.numeric(one.vec %*% x))}
C <- 1.5
ineq <- function(x) {
  z1 <- t(x) %*% x
  return(as.numeric(z1 - C))
}
uh <- rep(C^2, N)
lb <- rep(0, N)
x0 <- rep(1, N)
local_opts <- list("algorithm" = "NLOPT_LN_AUGLAG,", xtol_rel = 1.0e-7)
opts <- list("algorithm" = "NLOPT_LN_AUGLAG,",
             "xtol_rel" = 1.0e-8, local_opts = local_opts)
sol1 <- nloptr(x0, eval_f = fn, eval_g_eq = eqn, eval_g_ineq = ineq, ub = uh, lb = lb, opts = opts)

This looks like a simple QP (Quadratic Programming) problem. It may be easier to use a QP solver instead of a general-purpose NLP (NonLinear Programming) solver (no need for derivatives, helper functions, etc.). R has a QP solver in the quadprog package. It is not totally trivial to set up a problem for quadprog, but here is a very similar portfolio example with complete R code showing how to solve it. It has the same objective (minimize risk), the same budget constraint, and the same lower and upper bounds; the example just has an extra constraint that specifies a minimum required portfolio return.
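A minimal sketch of such a quadprog setup is below (budget equality plus lower and upper bounds only; the norm constraint is deliberately left out, since solve.QP handles only linear constraints). The toy data mirrors the question's replicate(4, rnorm(100)):
library(quadprog)

set.seed(1)
data  <- replicate(4, rnorm(100))   # toy return series, as in the question
Sigma <- cov(data)
N     <- ncol(data)

# solve.QP minimizes 1/2 b'Db - d'b subject to t(Amat) %*% b >= bvec,
# with the first `meq` constraints treated as equalities.
Dmat <- 2 * Sigma                   # objective becomes b' Sigma b (scaling doesn't change the argmin)
dvec <- rep(0, N)                   # no linear term
Amat <- cbind(rep(1, N),            # sum(w) = 1  (budget, equality)
              diag(N),              # w_i >= 0    (lower bounds)
              -diag(N))             # w_i <= 1    (upper bounds, written as -w_i >= -1)
bvec <- c(1, rep(0, N), rep(-1, N))
sol  <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
round(sol$solution, 4)              # minimum-variance weights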
Actually I misread the question: the second constraint is ||w|| <= C. I think we can express the whole model as:
\min_{w} \; w^{T}\Sigma w
\text{s.t.} \; e_{n}^{T}w = 1
w^{T}w \leq C^{2}
This actually looks like a convex model. I could solve it with "big" solvers like Cplex, Gurobi, and Mosek; these solvers support convex Quadratically Constrained problems. I also believe this can be formulated as a cone programming problem, opening up more possibilities.
Here is an example where I use the package cccp in R. cccp stands for Cone Constrained Convex Problems and is a port of CVXOPT.
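The original code isn't reproduced here, but a minimal sketch of how such a cccp call might look is below. It assumes that socc(F, g, d, f) encodes the second-order cone constraint ||F x + g||_2 <= d'x + f, that cccp() minimizes (1/2) x'Px + q'x subject to A x = b and the cones in cList (as in CVXOPT), and that getx() extracts the solution; check ?cccp and ?socc if these assumptions are off.
library(cccp)

set.seed(1)
data  <- replicate(4, rnorm(100))
Sigma <- cov(data)
N     <- ncol(data)
C     <- 1.5

# min w' Sigma w  s.t.  e'w = 1,  ||w||_2 <= C
prob <- cccp(P = 2 * Sigma, q = rep(0, N),                  # quadratic objective (CVXOPT-style scaling assumed)
             A = matrix(1, nrow = 1, ncol = N), b = 1,      # budget equality e'w = 1
             cList = list(socc(F = diag(N), g = rep(0, N),  # ||I w + 0||_2 <= 0'w + C
                               d = rep(0, N), f = C)))
getx(prob)                                                  # optimal weight vector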

The 2-norm of the weights doesn't make much sense here; it usually has to be the 1-norm. That is essentially a constraint on the leverage of the portfolio: ||w||_1 <= 1.6 implies that the portfolio is at most 130/30 (sorry for using finance language here). You want to read about quadratic cones though: w'\Sigma w = w'L'Lw (Cholesky decomposition \Sigma = L'L), and hence w'\Sigma w = ||Lw||_2^2. Hence you can introduce the linear constraint y - Lw = 0 and t >= ||y||_2 (this defines a quadratic cone), and then minimize t. The 1-norm can also be replaced by cones, since |x_i| = sqrt(x_i^2) = ||x_i||_2, so introduce a quadratic cone for each element of the vector x.
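Written out, that reformulation is the second-order cone program below (a sketch in the question's notation, with \Sigma = L^{T}L from a Cholesky factorization):
\begin{aligned}
\min_{w,\,y,\,t} \quad & t \\
\text{s.t.} \quad & e_{n}^{T} w = 1, \\
& y = L w, \\
& \lVert y \rVert_{2} \leq t \quad \text{(quadratic cone)}, \\
& \lVert w \rVert_{2} \leq C .
\end{aligned}
Minimizing t gives the same optimal w as minimizing w^{T}\Sigma w, because at the optimum t = \lVert L w \rVert_{2} and squaring is monotone for t \geq 0.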

Related

Solve quadratic optimization with nonlinear constraints [duplicate]


Simple Orthographic Structure from Motion using R -- Determining Metric Constraints

I would like to build a simple structure from motion program according to Tomasi and Kanade [1992]. The article can be found below:
https://people.eecs.berkeley.edu/~yang/courses/cs294-6/papers/TomasiC_Shape%20and%20motion%20from%20image%20streams%20under%20orthography.pdf
This method seems elegant and simple, however, I am having trouble calculating the metric constraints outlined in equation 16 of the above reference.
I am using R and have outlined my work thus far below:
Given a set of images, I want to track the corners of the three cabinet doors and the one picture (black points on the images). First we read in the tracked points as a measurement matrix w, whose first F rows contain the x-coordinates and whose last F rows contain the y-coordinates of the P tracked points across the F frames.
Ultimately, we want to factorize w into a rotation matrix R and shape matrix S that describe the 3 dimensional points. I will spare as many details as I can but a complete description of the maths can be gleaned from the Tomasi and Kanade [1992] paper.
I supply w below:
w.vector=c(0.2076,0.1369,0.1918,0.1862,0.1741,0.1434,0.176,0.1723,0.2047,0.233,0.3593,0.3668,0.3744,0.3593,0.3876,0.3574,0.3639,0.3062,0.3295,0.3267,0.3128,0.2811,0.2979,0.2876,0.2782,0.2876,0.3838,0.3819,0.3819,0.3649,0.3913,0.3555,0.3593,0.2997,0.3202,0.3137,0.31,0.2718,0.2895,0.2867,0.825,0.7703,0.742,0.7251,0.7232,0.7138,0.7345,0.6911,0.1937,0.1248,0.1723,0.1741,0.1657,0.1313,0.162,0.1657,0.8834,0.8118,0.7552,0.727,0.7364,0.7232,0.7288,0.6892,0.4309,0.3798,0.4021,0.3965,0.3844,0.3546,0.3695,0.3583,0.314,0.3065,0.3989,0.3876,0.3857,0.3781,0.3989,0.3593,0.5184,0.4849,0.5147,0.5193,0.5109,0.4812,0.4979,0.4849,0.3536,0.3517,0.4121,0.3951,0.3951,0.3781,0.397,0.348,0.5175,0.484,0.5091,0.5147,0.5128,0.4784,0.4905,0.4821,0.7722,0.7326,0.7326,0.7232,0.7232,0.7119,0.7402,0.7006,0.4281,0.3779,0.3918,0.3863,0.3825,0.3472,0.3611,0.3537,0.8043,0.7628,0.7458,0.7288,0.727,0.7213,0.7364,0.6949,0.5789,0.5491,0.5761,0.5817,0.5733,0.5444,0.5537,0.5379,0.3649,0.3536,0.4177,0.3951,0.3857,0.3819,0.397,0.3461,0.697,0.671,0.6821,0.6821,0.6719,0.6412,0.6468,0.6235,0.3744,0.3649,0.4159,0.3819,0.3781,0.3612,0.3763,0.314,0.7008,0.6691,0.6794,0.6812,0.6747,0.6393,0.6412,0.6235,0.7571,0.7345,0.7439,0.7496,0.7402,0.742,0.7647,0.7213,0.5817,0.5463,0.5696,0.5779,0.5761,0.5398,0.551,0.5398,0.7665,0.7326,0.7439,0.7345,0.7288,0.727,0.7515,0.7062,0.8301,0.818,0.8571,0.8878,0.8766,0.8561,0.858,0.8394,0.4121,0.3876,0.4347,0.397,0.38,0.3631,0.3668,0.2971,0.912,0.8962,0.9185,0.939,0.9259,0.898,0.8887,0.8571,0.3989,0.3781,0.4215,0.3725,0.3612,0.3461,0.3423,0.2782,0.9092,0.8952,0.9176,0.9399,0.925,0.8971,0.8887,0.8571,0.4743,0.4536,0.4894,0.4517,0.446,0.4328,0.4385,0.3706,0.8273,0.8171,0.8571,0.8878,0.8766,0.8543,0.8561,0.8394,0.4743,0.4554,0.4969,0.4668,0.4536,0.4404,0.4536,0.3857)
w=matrix(w.vector,ncol=16,nrow=16,byrow=FALSE)
Then create the registered measurement matrix wm according to equation 2, by subtracting the row means:
wm = w - rowMeans(w)
We can decompose wm into a 2FxP matrix o1, a diagonal PxP matrix e, and a PxP matrix o2 by using a singular value decomposition:
svdwm <- svd(wm)
o1 <- svdwm$u
e <- diag(svdwm$d)
o2 <- t(svdwm$v) ## don't forget the transpose!
However, because of noise, we only pay attention to the first 3 columns of o1, first 3 values of e and the first 3 rows of o2 by:
o1p <- svdwm$u[,1:3]
ep <- diag(svdwm$d[1:3])
o2p <- t(svdwm$v)[1:3,] ## don't forget the transpose!
Now we can solve for our rhat and shat in equation (14) by:
rhat <- o1p %*% ep^(1/2)
shat <- ep^(1/2) %*% o2p
However, these results are not unique, and we still need to solve for the true R and S via equation (15), i.e. R = rhat %*% Q and S = solve(Q) %*% shat, by using the metric constraints of equation (16).
Now I need to find Q. I believe there are two potential methods but am unclear how to employ either.
Method 1 involves solving for B, where B = Q %*% t(Q), and then using a Cholesky decomposition of B to find Q. Method 1 appears to be the common choice in the literature; however, little detail is given as to how to actually solve the linear system. It is apparent that B is a 3x3 symmetric matrix with 6 unknowns. However, given the metric constraints (equations 16), I don't know how to solve for 6 unknowns given 3 equations. Am I forgetting a property of symmetric matrices?
Method 2 involves using non-linear methods to estimate Q and is less commonly used in the structure-from-motion literature.
Can anyone offer some advice as to how to go about solving this problem? Thanks in advance and let me know if I need to be more clear in my question.
Write B = Q %*% t(Q). B is a symmetric 3x3 matrix, so it has only 6 distinct unknowns B11, B12, B13, B22, B23, B33. Let ih_f and jh_f denote the f-th and (F+f)-th rows of rhat. Then the metric constraints of equation (16) can be written as
ih_f^T B ih_f = 1
jh_f^T B jh_f = 1
ih_f^T B jh_f = 0
so our equations are three per frame, i.e. 3F equations in total, not 3. Each of them is linear in the unknown entries of B, because for any vectors a and b
a^T B b = a1*b1*B11 + (a1*b2 + a2*b1)*B12 + (a1*b3 + a3*b1)*B13 + a2*b2*B22 + (a2*b3 + a3*b2)*B23 + a3*b3*B33.
To keep it short, define the vector of unknowns l = (B11, B12, B13, B22, B23, B33)^T.
So for all equations in all the different frames f we can write one big linear system G l = c, where G is a 3F x 6 matrix whose rows are built from ih_f and jh_f as above, and c is the vector containing F ones, F ones, and F zeros.
Now you just need to solve that system for l (e.g. by least squares), reassemble the symmetric matrix B, and factor it with a Cholesky decomposition to obtain Q.
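Below is a hedged R sketch of that construction, continuing from the question's rhat and shat, under the assumption that the first half of the rows of w (and hence of rhat) hold the x-coordinates and the second half the y-coordinates, so nF = 8 for the 16x16 w above:
nF <- nrow(rhat) / 2                          # number of frames
ih <- rhat[1:nF, , drop = FALSE]              # ih_f: rows for the x-coordinates
jh <- rhat[(nF + 1):(2 * nF), , drop = FALSE] # jh_f: rows for the y-coordinates

# One row of the linear system for a^T B b, with B symmetric 3x3 and
# unknowns ordered (B11, B12, B13, B22, B23, B33)
g_row <- function(a, b) {
  c(a[1] * b[1],
    a[1] * b[2] + a[2] * b[1],
    a[1] * b[3] + a[3] * b[1],
    a[2] * b[2],
    a[2] * b[3] + a[3] * b[2],
    a[3] * b[3])
}

G <- rbind(t(sapply(1:nF, function(f) g_row(ih[f, ], ih[f, ]))),  # ih' B ih = 1
           t(sapply(1:nF, function(f) g_row(jh[f, ], jh[f, ]))),  # jh' B jh = 1
           t(sapply(1:nF, function(f) g_row(ih[f, ], jh[f, ]))))  # ih' B jh = 0
cvec <- c(rep(1, nF), rep(1, nF), rep(0, nF))

l <- qr.solve(G, cvec)                        # least-squares solution of G l = c
B <- matrix(c(l[1], l[2], l[3],
              l[2], l[4], l[5],
              l[3], l[5], l[6]), 3, 3)        # reassemble symmetric B

Q <- t(chol(B))   # B = Q %*% t(Q); chol() returns the upper triangle
                  # (with noisy data B may not be positive definite, in which case this step fails)
R <- rhat %*% Q                               # metric rotation matrix (equation 15)
S <- solve(Q) %*% shat                        # metric shape matrix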

r optimization function with formula for constraints

I need to optimize a function, say g(x), where x is vector and g is an analytic function.
The problem is that I need to optimize with an (analytic) constraint function c(x) which gives a scalar as output, i.e. the constraint is c(x) > k for some k > 0.
constrOptim only allows a constraint to be given for each field separately.
Any advice?
Found the right tool: the nloptr package. A very robust package where I can define functions for the optimization (g) and the constraint (c). I can also define upper and lower bounds for each of the variables separately, and use different kinds of optimization methods.
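As an illustration, here is a minimal sketch with made-up g, c_fun, and k (the question does not give them). nloptr expects inequality constraints in the form h(x) <= 0 and minimizes by default, so c(x) > k is passed as k - c(x) <= 0:
library(nloptr)

# Hypothetical objective and constraint; replace with your own g and c.
g     <- function(x) sum((x - c(1, 2))^2)      # function to minimize
c_fun <- function(x) x[1] + x[2]^2             # scalar constraint function
k     <- 3

sol <- nloptr(
  x0          = c(0, 0),
  eval_f      = g,
  eval_g_ineq = function(x) k - c_fun(x),      # enforces c_fun(x) >= k
  lb          = c(-10, -10),
  ub          = c(10, 10),
  opts        = list(algorithm = "NLOPT_LN_COBYLA",  # derivative-free, handles inequalities
                     xtol_rel  = 1e-8, maxeval = 1000)
)
sol$solution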

I want to maximize returns on a portfolio ensuring risk is below a certain level. Which function can I use for optimization?

Objective function to be maximized: pos%*%mu, where pos is the weights row vector and mu is the column vector of mean returns of the d stocks.
Constraints: 1) ones%*%pos = 1 where ones is a row vector of 1's of size 1*d (d is the number of stocks)
2) pos%*%cov%*%t(pos) = rb^2 # where cov is the covariance matrix of size d*d and rb is risk budget which is the free parameter whose values will be changed to draw the efficient frontier
I want to write a code for this optimization problem in R but I can't think of any function or library for help.
PS: solve.QP in the quadprog library has been used to minimize variance subject to a target return. Can this function also be used to maximize return subject to a risk budget? How should I specify the Dmat matrix and dvec vector for this problem?
EDIT :
library(quadprog)
mu <- matrix(c(0.01,0.02,0.03),3,1)
cov # predefined covariance matrix of size 3*3
pos <- matrix(c(1/3,1/3,1/3),1,3) # random weights vector
edr <- pos%*%mu # expected daily return on portfolio
m1 <- matrix(1,1,3) # constraint no.1 ( sum of weights = 1 )
m2 <- pos%*%cov # constraint no.2
Amat <- rbind(m1,m2)
bvec <- matrix(c(1,0.1),2,1)
solve.QP(Dmat= ,dvec= ,Amat=Amat,bvec=bvec,meq=2)
How should I specify Dmat and dvec ? I want to optimize over pos
Also, I think I have not specified constraint no.2 correctly. It should make the variance of portfolio equal to the risk budget.
(Disclaimer: There may be a better way to do this in R. I am by no means an expert in anything related to R, and I'm making a few assumptions about how R is doing things, notably that you're using an interior-point method. Also, there is likely an R package for what you're trying to do, but I don't know what it is or how to use it.)
Minimising risk subject to a target return is a linearly-constrained problem with a quadratic objective, looking like this:
min x^T Q x
subject to sum x_i = 1
sum ret_i x_i >= target
(and x >= 0 if you want to be long-only).
Maximising return subject to a risk budget is quadratically-constrained, however; it looks like this:
max ret^T x
subject to sum x_i = 1
x^T Q x <= riskbudget
(and maybe x >= 0).
Convex quadratic terms in the objective impose less of a computational cost in an interior-point method compared to introducing a convex quadratic constraint. With a quadratic objective term, the Q matrix just shows up in the augmented system. With a convex quadratic constraint, you need to optimise over a more complicated cone containing a second-order cone factor and you need to be careful about how you solve the linear systems that arise.
I would suggest you use the risk-minimisation formulation repeatedly, doing a binary search on the target parameter until you've found a portfolio approximately maximising return subject to your risk budget. I am suggesting this approach because it is likely sufficient for your needs.
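A hedged sketch of that binary search is below, using solve.QP from quadprog. The data, the helper names, and the choice to measure risk as the portfolio standard deviation are all assumptions for illustration; it also assumes the risk budget is at least the risk of the global minimum-variance portfolio.
library(quadprog)

min_var_portfolio <- function(Q, ret, target) {
  d <- length(ret)
  # solve.QP: min 1/2 x'Dx - d'x  s.t.  t(Amat) %*% x >= bvec (first meq rows as equalities)
  Amat <- cbind(rep(1, d), ret, diag(d))       # sum(x) = 1, ret'x >= target, x >= 0
  bvec <- c(1, target, rep(0, d))
  solve.QP(Dmat = 2 * Q, dvec = rep(0, d), Amat = Amat, bvec = bvec, meq = 1)$solution
}

# Bisection on the target return until the portfolio risk is close to the budget.
max_return_given_risk <- function(Q, ret, riskbudget, iters = 50) {
  lo <- min(ret); hi <- max(ret)               # long-only portfolio returns lie in this range
  for (i in seq_len(iters)) {
    mid <- (lo + hi) / 2
    x   <- try(min_var_portfolio(Q, ret, mid), silent = TRUE)
    if (inherits(x, "try-error") || sqrt(drop(t(x) %*% Q %*% x)) > riskbudget) {
      hi <- mid                                # target too ambitious: infeasible or over budget
    } else {
      lo <- mid                                # within budget: push the target return up
    }
  }
  min_var_portfolio(Q, ret, lo)
}

set.seed(42)
rets <- replicate(3, rnorm(250, mean = 0.02, sd = 0.05))   # toy return series
Q    <- cov(rets)
mu   <- colMeans(rets)
w    <- max_return_given_risk(Q, mu, riskbudget = 0.04)
w; sqrt(drop(t(w) %*% Q %*% w)); sum(w * mu)
Note that in this formulation Dmat is (twice) the covariance matrix and dvec is a zero vector; the expected returns only enter through the constraint matrix.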
If you really want to solve your problem directly, I would suggest using an interface to Todd, Toh, and Tütüncü's SDPT3. This really is overkill; SDPT3 lets you formulate and solve symmetric cone programs of your choosing. I would also note that portfolio optimisation problems are rather special cases of symmetric cone programs, and other approaches exist that are reportedly very successful; unfortunately, I'm not studied up on them.

univariate nonlinear optimization with quadratic constraint in R

I have a quadratic function f, where f = function(x) {2 + .1*x + .23*(x*x)}. Let's say I have another quadratic function g, where g = function(x) {3 + .4*x - .60*(x*x)}.
Now I want to maximize f given the constraints 1. g(x) > 0 and 2. 600 < x < 650.
I have tried the packages optim, constrOptim, and optimize. optimize does one-dimensional optimization, but without constraints, and I couldn't understand constrOptim. I need to do this using R. Please help.
P.S. In this example the values may be erratic, as I have given two random quadratic functions, but basically I want to maximize a quadratic function given a quadratic constraint.
If you solve g(x) = 0 for x by the usual quadratic formula, that just gives you another set of bounds on x. If your x^2 coefficient is negative then g(x) > 0 between the solutions; otherwise g(x) > 0 outside the solutions, i.e. within (-Inf, x1) and (x2, Inf).
In this case, g(x)>0 for -1.927 < x < 2.59. So in this case both your constraints cannot be simultaneously achieved (g(x) is LESS THAN 0 for 600<x<650).
But supposing your second condition was 1 < x < 5, then you'd just combine the solution from g(x)>0 with that interval to get 1 < x < 2.59, and then maximise f in that interval using standard univariate optimisation.
And you don't even need to run an optimisation algorithm. Your target f is quadratic. If the coefficient of x^2 is positive, the maximum is going to be at one of your limits of x, so you only have a small number of values to try. If the coefficient of x^2 is negative, the maximum is either at a limit or at the point where f(x) peaks (solve f'(x) = 0), if that point is within your limits.
So you can do this precisely, there's just a few conditions to test and then some intervals to compute and then some values of f at those interval limits to calculate.
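A small R sketch of that recipe, using the question's f and g and the hypothetical interval 1 < x < 5 mentioned above:
f <- function(x) 2 + .1 * x + .23 * x^2
g <- function(x) 3 + .4 * x - .60 * x^2

# Roots of g(x) = 0 via the quadratic formula (a = -0.60, b = 0.4, c = 3)
a <- -0.60; b <- 0.4; c <- 3
roots <- sort((-b + c(-1, 1) * sqrt(b^2 - 4 * a * c)) / (2 * a))   # about -1.93 and 2.59

# a < 0, so g(x) > 0 between the roots; intersect with the bounds 1 < x < 5
lo <- max(roots[1], 1)
hi <- min(roots[2], 5)     # feasible interval is roughly (1, 2.59)

# f has a positive x^2 coefficient, so its maximum on an interval is at an endpoint;
# if that coefficient were negative, also evaluate the vertex (solve f'(x) = 0) when it lies inside.
candidates <- c(lo, hi)
best <- candidates[which.max(f(candidates))]
best; f(best)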
