Why don't solve.QP and portfolio.optim generate identical results?

The documentation for portfolio.optim {tseries} says that solve.QP {quadprog} is used to generate the solution for finding the tangency portfolio that maximizes the Sharpe ratio. That implies the two functions should produce identical results. I'm probably overlooking something, but in this simple example I get similar yet not identical estimates of the optimal portfolio weights from portfolio.optim and solve.QP. Shouldn't the results be identical? If so, where am I going wrong? Here's the code:
library(tseries)
library(quadprog)
# 1. Generate solution with solve.QP via: comisef.wikidot.com/tutorial:tangencyportfolio
# create artificial data
set.seed(1)
nO <- 100 # number of observations
nA <- 10 # number of assets
mData <- array(rnorm(nO * nA, mean = 0.001, sd = 0.01), dim = c(nO, nA))
rf <- 0.0001 # riskfree rate per period (~2.5% pa for daily data)
mu <- apply(mData, 2, mean) # means
mu2 <- mu - rf # excess means
# qp
aMat <- as.matrix(mu2)
bVec <- 1
zeros <- array(0, dim = c(nA,1))
solQP <- solve.QP(cov(mData), zeros, aMat, bVec, meq = 1)
# rescale variables to obtain weights
w <- as.matrix(solQP$solution/sum(solQP$solution))
# 2. Generate solution with portfolio.optim (using artificial data from above)
port.1 <- portfolio.optim(mData, riskless = rf)
port.1.w <- port.1$pw
port.1.w <- matrix(port.1.w)
# 3. Compare portfolio weights from the two methodologies:
compare <- cbind(w, port.1$pw)
compare
[,1] [,2]
[1,] 0.337932967 0.181547633
[2,] 0.073836572 0.055100415
[3,] 0.160612441 0.095800361
[4,] 0.164491490 0.102811562
[5,] 0.005034532 0.003214622
[6,] 0.147473396 0.088792283
[7,] -0.122882875 0.000000000
[8,] 0.127924865 0.067705050
[9,] 0.026626940 0.012507530
[10,] 0.078949672 0.054834759

The one and only way to deal with such situations is to browse the source. In your case, it is accessible via tseries:::portfolio.optim.default.
Now, to find the difference between the two calls, we can narrow down the issue by defining an equivalent helper function:
foo <- function(x, pm = mean(x), covmat = cov(x), riskless = FALSE, rf = 0)
{
    k    <- dim(x)[2]
    Dmat <- covmat
    dvec <- rep.int(0, k)
    a1   <- colMeans(x) - rf
    a2   <- diag(k)                 # identity block: one w_i >= 0 row per asset
    b2   <- rep.int(0, k)
    Amat <- t(rbind(a1, a2))
    b0   <- c(pm - rf, b2)
    solve.QP(Dmat, dvec, Amat, bvec = b0, meq = 1)$solution
}
identical(portfolio.optim(mData, riskless = TRUE, rf = rf)$pw,
          foo(mData, riskless = TRUE, rf = rf))
#[1] TRUE
With that, you can see that (1) riskless=rf is not the intended usage; riskless=TRUE, rf=rf is the correct call; and (2) there are several differences in Amat and bvec.
I am not an expert in portfolio optimization, so I do not know the explanation behind these additional constraints or whether they should be there in the first place, but at least you can see exactly what causes the difference.

The difference in your example occurs because of the default value shorts=FALSE in tseries::portfolio.optim(). You would therefore have to either change that argument or add a non-negativity restriction to your solve.QP problem to reach the same results.
EDIT: While the answer above still holds true, there are some other surprising default values in tseries::portfolio.optim(). For example, it sets the minimum return requirement to pm = mean(x), which yields an arbitrary portfolio on the efficient frontier instead of the global minimum variance portfolio when no return requirement is given.
Bottom line: stay with your quadprog::solve.QP solution. Enclosed is an example of the wrapper function I use (I just started working with R, and while I am quite sure this delivers mathematically correct results, it might not be the cleanest piece of code):
# --------------------------------------------------------------------------
#' Quadratic Optimization
#' @description Wrapper for quadratic optimization to calculate the general
#'   mean-variance portfolio.
#' @param S [matrix] Covariance matrix.
#' @param mu [numeric] Optional. Vector of expected returns.
#' @param wmin [numeric] Optional. Min weight per asset.
#' @param wmax [numeric] Optional. Max weight per asset.
#' @param mu_target [numeric] Optional. Required return; if empty, the
#'   optimization returns the global minimum variance portfolio.
#' @return The mean-variance portfolio or the global minimum variance portfolio.
# --------------------------------------------------------------------------
meanvar.pf <- function(S,
                       mu = NULL,
                       wmin = -1000,
                       wmax = 1000,
                       mu_target = NULL) {
  if (!requireNamespace("quadprog", quietly = TRUE))
    stop("Execute 'install.packages(\"quadprog\")' and try again")
  if (missing(S))
    stop("Covariance matrix is missing")
  if (!is.null(mu) && dim(S)[1] != length(mu))
    stop("S and mu have non-conformable dimensions")
  N <- ncol(S)
  if (wmin >= 1/N)
    stop("wmin >= 1/N is not feasible")
  if (wmax <= 1/N)
    stop("wmax <= 1/N is not feasible")
  meq  <- 1
  bvec <- c(1, rep(wmin, N), -rep(wmax, N))   # sum(w) = 1, w >= wmin, -w >= -wmax
  Amat <- cbind(rep(1, N), diag(N), -diag(N))
  if (!is.null(mu_target)) {
    if (is.null(mu))
      stop("Vector of asset returns is missing")
    Amat <- cbind(mu, Amat)                   # add return equality mu'w = mu_target
    bvec <- c(mu_target, bvec)
    meq  <- 2
  }
  result <- quadprog::solve.QP(Dmat = S,
                               dvec = rep(0, N),
                               Amat = Amat,
                               bvec = bvec,
                               meq  = meq)
  return(result)
}

Related

Optimization code in R, am I missing something?

I'm trying to optimize the parameters of a simple strategy, given in the code below, by maximizing the Sharpe ratio. The output results are clearly wrong. Can you please provide some help?
library(xts)
library(zoo)
library(quantmod)
library(PerformanceAnalytics)
library(TTR)
f_opt <- function(x, data){
  a <- x[1]
  b <- x[2]
  sma <- SMA(Cl(data), n = a)
  fma <- EMA(Cl(data), n = b)
  signal <- Lag(ifelse(sma < fma, 1, -1))
  ret <- Return.calculate(data, method = "discrete") * signal
  colnames(ret) <- c("MA Strategy")
  ret <- na.omit(ret)
  sharpe <- SharpeRatio.annualized(ret, Rf = 0, scale = 252) * -1
  return(as.numeric(sharpe))
}
SYMBL <- getSymbols("^GSPC", auto.assign=F, from="2011-01-01", to="2021-02-08")
data <- na.omit(SYMBL[,4])
optim(par = c(1,1), fn = f_opt, data = data, method = "L-BFGS-B", lower = 1, upper = 200)
OUTPUT
$par
[1] 1.869527 1.000000

$value
[1] -0.6721263

$counts
function gradient
       7        7

$convergence
[1] 0

$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
optim() and related methods are designed to find optima of smooth surfaces. With only two parameters it's easy enough to compute the objective function over a grid by brute force (I use functions from the emdbook and plot3D packages for convenience, but you could easily do this with for() loops and the built-in persp() function; code below).
I know nothing about your subject area (finance?) or what's going on under the hood in the objective function, but it's not at all surprising that the optimization didn't work.
I was concerned that the problem might be non-integer values of the parameters (which would present a problem for optim() in any case, but might suggest other approaches), but even restricting to integer values in the range (5-20, 180-200) we still get a rough-looking surface.
I have found the DEoptim (optimization by differential evolution) function useful for problems like this.
library(DEoptim)
d1 <- DEoptim(f_opt, data = data, lower = c(1, 1), upper = c(200, 200))
## $optim
## $optim$bestmem
##      par1      par2
##  12.87796 190.91548
##
## $optim$bestval
## [1] -1.158693
library(emdbook)
## this step takes a while
system.time(
  cc <- curve3d(f_opt(c(x, y), data = data),
                from = c(1, 1), to = c(200, 200),
                n = 61,
                sys3d = "none",
                .progress = "text")
)
## Cairo::Cairo(file="plot3d.png")
library(plot3D)
with(cc, persp3D(x = replicate(61, x),
                 y = t(replicate(61, y)),
                 z,
                 border = "black"))
## dev.off()
cc2 <- curve3d(f_opt(c(x, y), data = data),
               from = c(5, 180), to = c(20, 200),
               n = c(16, 21),
               sys3d = "none",
               .progress = "text")
## Cairo::Cairo(file="plot3dB.png", width=1280, height=960)
with(cc2, persp3D(x = replicate(21, x),
                  y = t(replicate(16, y)),
                  cc2$z,
                  border = "black"))
## dev.off()

Maximum likelihood estimation of a multivariate normal distribution of arbitrary dimension in R - THE ULTIMATE GUIDE?

Searching through Stack Overflow for similar questions, I notice that this has been asked several times but hasn't really been properly answered. Perhaps, with help from other users, this post can become a helpful guide to programming a numerical estimate of the parameters of a multivariate normal distribution.
I know, I know! The closed form solutions are available and trivial to implement. In my case I am interested in modifying the likelihood function for a specific purpose and I don't expect an exact analytic solution so this is a test case to check the procedure.
So here is my attempt. Please comment, especially if I am missing opportunities for optimization. Note: I'm not a statistician, so I'd appreciate any pointers.
ll_multN <- function(theta, X) {
  # theta = c(mu, diag(Sigma), Sigma[upper.tri(Sigma)])
  # X is an n x k dataset
  # log L = -(nk/2)*log(2*pi) - (n/2)*log(det(Sigma))
  #         - (1/2)*sum_i( t(X_i - mu) %*% Sigma^-1 %*% (X_i - mu) )
  # the summation over i is performed with an apply call
  n <- nrow(X)
  k <- ncol(X)
  # def mu
  mu.vec <- theta[1:k]
  # def Sigma (rebuilt from its diagonal and upper triangle)
  Sigma.diag <- theta[(k+1):(2*k)]
  Sigma.offd <- theta[(2*k+1):length(theta)]
  Sigma <- matrix(NA, k, k)
  Sigma[upper.tri(Sigma)] <- Sigma.offd
  Sigma <- t(Sigma)
  Sigma[upper.tri(Sigma)] <- Sigma.offd
  diag(Sigma) <- Sigma.diag
  # invert Sigma once, rather than once per observation
  Sigma.inv <- solve(Sigma)
  # compute summation
  sum_i <- sum(apply(X, 1, function(x) {
    d <- matrix(x, 1, k) - mu.vec
    d %*% Sigma.inv %*% t(d)
  }))
  # compute log likelihood
  logl <- -.5*n*k*log(2*pi) - .5*n*log(det(Sigma)) - .5*sum_i
  return(-logl)
}
The simulated dataset is generated using the rmvnorm() function from the package mvtnorm, with a random positive definite covariance matrix produced by the additional function Posdef() (taken from here: https://stat.ethz.ch/pipermail/r-help/2008-February/153708).
library(mvtnorm)
Posdef <- function (n, ev = runif(n, 0, 5)) {
  # generates a random positive definite covariance matrix
  Z <- matrix(ncol = n, rnorm(n^2))
  decomp <- qr(Z)
  Q <- qr.Q(decomp)
  R <- qr.R(decomp)
  d <- diag(R)
  ph <- d / abs(d)
  O <- Q %*% diag(ph)
  Z <- t(O) %*% diag(ev) %*% O
  return(Z)
}
set.seed(2)
n <- 1000 # number of data points
k <- 3 # number of variables
mu.tru <- sample(0:3, k, replace=T) # random mean vector
Sigma.tru <- Posdef(k) # random covariance matrix
eigen(Sigma.tru)$val # check positive def (all lambda > 0)
# Generate simulated dataset
X <- rmvnorm(n, mean=mu.tru, sigma=Sigma.tru)
# initial parameter values
pars.init <- c(mu = rep(0, k), sig_ii = rep(1, k), sig_ij = rep(0, k*(k-1)/2))
# limits for optimization algorithm
eps <- .Machine$double.eps # a small value bounding the parameter space, to avoid things such as log(0)
lower.bound <- c(rep(-Inf, k),          # bound on mu
                 rep(eps, k),           # bound on sigma_ii
                 rep(-Inf, k*(k-1)/2))  # bound on sigma_ij, i =/= j
                                        # (k*(k-1)/2 off-diagonal terms; equal to k only when k = 3)
upper.bound <- c(rep(Inf, k),           # bound on mu
                 rep(100, k),           # bound on sigma_ii
                 rep(100, k*(k-1)/2))   # bound on sigma_ij, i =/= j
system.time(
  o <- optim(pars.init,
             ll_multN, X = X, method = "L-BFGS-B",
             lower = lower.bound,
             upper = upper.bound)
)
plot(x = c(mu.tru, diag(Sigma.tru), Sigma.tru[upper.tri(Sigma.tru)]),
     y = o$par,
     xlab = "Parameter",
     ylab = "Estimate",
     pch = 20)
abline(c(0, 1), col = "red", lty = 2)
This currently runs on my laptop in
   user  system elapsed
 47.852  24.014  24.611
and gives this graphical output:
[figure: "Estimated mean and variance" - the estimates plotted against the true parameter values]
In particular, any advice on limit setting or algorithm choice would be much appreciated.
Thanks

How does ar.yw estimate the variance

In R, how does the function ar.yw estimate the variance? Specifically, where does the number var.pred come from? It does not seem to come from the usual Yule-Walker (YW) estimate of the variance, nor from the sum of squared residuals divided by a df (and even though there is disagreement about what the df should be, none of the choices gives an answer equivalent to var.pred). And yes, I know there are better methods than YW; I am just trying to figure out what R is doing.
set.seed(82346)
temp <- arima.sim(n=10, list(ar = 0.5), sd=1)
fit <- ar(temp, method = "yule-walker", demean = FALSE, aic=FALSE, order.max=1)
## R's estimate of the sigma squared
fit$var.pred
## YW estimate
sum(temp^2)/10 - fit$ar*sum(temp[2:10]*temp[1:9])/10
## YW if there was a mean
sum((temp-mean(temp))^2)/10 - fit$ar*sum((temp[2:10]-mean(temp))*(temp[1:9]-mean(temp)))/10
## estimate based on residuals, different possible df.
sum(na.omit(fit$resid^2))/10
sum(na.omit(fit$resid^2))/9
sum(na.omit(fit$resid^2))/8
sum(na.omit(fit$resid^2))/7
You need to read the code if it's not documented.
?ar.yw
This says: "In ar.yw the variance matrix of the innovations is computed from the fitted coefficients and the autocovariance of x." If that is not enough explanation, then you need to look at the code:
methods(ar.yw)
#[1] ar.yw.default* ar.yw.mts*
#see '?methods' for accessing help and source code
getAnywhere(ar.yw.default)
# there are two cases that I see
x <- as.matrix(x)
nser <- ncol(x)
if (nser > 1L) # .... not your situation
    #....
else {
    r <- as.double(drop(xacf))
    z <- .Fortran(C_eureka, as.integer(order.max), r, r,
                  coefs = double(order.max^2), vars = double(order.max),
                  double(order.max))
    coefs <- matrix(z$coefs, order.max, order.max)
    partialacf <- array(diag(coefs), dim = c(order.max, 1L, 1L))
    var.pred <- c(r[1L], z$vars)
    #.......
    order <- if (aic)
        (0L:order.max)[xaic == 0L]
    else order.max
    ar <- if (order)
        coefs[order, seq_len(order)]
    else numeric()
    var.pred <- var.pred[order + 1L]
    var.pred <- var.pred * n.used/(n.used - (order + 1L))
So you now need to find the Fortran code for C_eureka. I think it is here: https://svn.r-project.org/R/trunk/src/library/stats/src/eureka.f. This is the code that I think returns the var.pred estimate. I'm not a time series guy, and it's your responsibility to review this process for applicability to your problem.
      subroutine eureka (lr,r,g,f,var,a)
c
c      solves Toeplitz matrix equation toep(r)f=g(1+.)
c      by Levinson's algorithm
c      a is a workspace of size lr, the number
c      of equations
c
[snipped]
c      estimate the innovations variance
      var(l) = var(l-1) * (1 - f(l,l)*f(l,l))
      if (l .eq. lr) return
      d = 0.0d0
      q = 0.0d0
      do 50 i = 1, l
         k = l-i+2
         d = d + a(i)*r(k)
         q = q + f(l,i)*r(k)
   50 continue

Portfolio optimization

I am trying to build a portfolio which is optimized with respect to another in R.
I am trying to minimize the objective function
$$\min_w \; \mathrm{Var}(r_p - r'w_{bm})$$
with the constraints
$$\mathbf{1}_n'w = 1, \qquad w \ge 0.005, \qquad w \le 0.8$$
with $w$ being the vector of portfolio weights and $r_p = r'w$ the portfolio return. There are 10 securities, so I set the benchmark weights at 0.1 each.
I know that
$$\mathrm{Var}(r_p - r'w_{bm}) = \mathrm{Var}(r'w) + \mathrm{Var}(r'w_{bm}) - 2\,\mathrm{Cov}(r'w,\, r'w_{bm})$$
and, since $\mathrm{Var}(r'w_{bm})$ does not depend on $w$, minimizing this is equivalent to minimizing
$$w'\,\mathrm{Var}(r)\,w - 2\,\mathrm{Cov}(r,\, r'w_{bm})'\,w$$
The last term is of the form I need, so I tried to solve this with solve.QP in R; the constraints are giving me a problem though.
here is my code
obs <- 100     # illustrative value; not specified in the original post
assets <- 10   # there are 10 securities
trackport <- array(rnorm(obs * assets, mean = .2, sd = .15),
                   dim = c(obs, assets)) # the portfolio the assets are tracked against
wbm <- matrix(rep(1/assets, assets)) # benchmark weights
Aeq <- t(matrix(rep(1, assets), nrow = assets, ncol = 1)) # row of 1's for the
                                                          # sum-of-weights constraint
Beq <- 1 # weights should sum to 1
H <- 2 * cov(trackport) # times 2 because of solve.QP's 1/2 b'Db syntax
# multiply the returns by the coefficients to create a vector of returns for
# the benchmark
rbm <- trackport %*% wbm
# covariance between the tracking portfolio and benchmark returns
eff <- cov(trackport, rbm)
# constraints
Amatrix <- t(matrix(c(Aeq, diag(assets), -diag(assets)), ncol = assets,
                    byrow = TRUE))
Bvector <- matrix(c(1, rep(.005, assets), rep(.8, assets)))
zeros <- rep(0, assets) # also not defined in the original post
# solve
solQP3 <- solve.QP(Dmat = H,
                   dvec = zeros, # reduces to the min-var portfolio, for
                                 # troubleshooting purposes
                   Amat = Amatrix,
                   bvec = Bvector,
                   meq = 1)
The error I am getting is "constraints are inconsistent, no solution!", but I can't find what's wrong with my A matrix.
My (transposed) A matrix looks like this
[1,1,...,1]
[1,0,...,0]
[0,1,...,0]
...
[0,0,...,1]
[-1,0,...,0]
[0,-1,...,0]
...
[0,0,...,-1]
and my $b_0$ looks like this
[1]
[.005]
[.005]
...
[.005]
[.8]
[.8]
...
[.8]
so I'm not sure why it isn't finding a solution, could anyone take a look?
I'm not familiar with the package, but I just took a quick look at https://cran.r-project.org/web/packages/quadprog/quadprog.pdf, which apparently is what you are using.
Your RHS values of .8 should be -0.8, because this function uses ≥ inequalities. So you have been constraining the variables to be ≥ .005 and ≤ -0.8, which of course is not what you want and is infeasible.
So leave transposed A as is and make
b0:
[1]
[.005]
[.005]
...
[.005]
[-.8]
[-.8]
...
[-.8]

Mean Variance Optimisation Under Weights Constraints

I would like to find the weights for the portfolio that maximises the Sharpe ratio in a 3-risky-assets case. The sum of the weights of all assets should equal 2, the weight of asset 1 is forced to 1, and all asset weights must be >= 0 (i.e., the problem is to maximise the portfolio's risk-adjusted return by adjusting only the weights of assets 2 and 3, subject to them being no more than 1 and >= 0). Is this the right way of programming the problem using quadprog?
library(quadprog)
covmat <- matrix(c( 3.235343e-02, -3.378191e-03, -1.544574e-05,
                   -3.378191e-03,  8.769166e-03,  1.951734e-06,
                   -1.544574e-05,  1.951734e-06,  2.186799e-06), 3, 3)
A <- rbind(c(1,1,1), diag(3))
b <- c(2, 1, 0, 0)   # the constraints: weights sum to 2 and the weight of asset 1 = 1
c <- c(0, 0.1, 0.05) # the asset returns; asset 1 has a zero return, but I want it to
                     # have a 100% weight out of the available 200%
# solve QP model
solve.QP(covmat, dvec = c, Amat = t(A), bvec = b, meq = 2)$solution
solve.QP is not optimizing the Sharpe ratio (SR). As explained in the help (?solve.QP), it minimizes
$$-d^T b + \tfrac{1}{2}\, b^T D b \quad \text{subject to} \quad A^T b \ge b_0.$$
If you want to maximize the SR, try this: http://comisef.wikidot.com/tutorial%3atangencyportfolio
But apparently that approach is for optimization without restrictions.
Well, it is possible to minimize the risk for a sequence of given returns. In other words, identify the relevant segment of the efficient frontier (EF) under the restrictions and compute the Sharpe ratio (SR) at each point; the tangency portfolio is the one that maximizes the SR.
Given your data and restrictions:
# Find the EF
# Minimum variance portfolio
excret <- c(0, 0.1, 0.05) # the question's expected returns (its vector 'c'),
                          # renamed so the base function c() is not masked
aMat  <- cbind(rep(1, nrow(covmat)), diag(1, nrow(covmat)))
bVec  <- c(2, 1, 0, 0)
zeros <- array(0, dim = c(nrow(covmat), 1))
solQP <- solve.QP(covmat, zeros, aMat, bVec, meq = 1)
# weights and return for the minimum variance portfolio
w.mv <- solQP$solution
r.mv <- t(w.mv) %*% excret
# Identify the tangent approximately:
# find the min-variance portfolio for a relevant sequence of returns,
# adding a return restriction to the optimization
sret <- seq(r.mv, max(excret) * 1.1, length.out = 50) # the maximum found by trial and error
sret <- sort(unique(c(max(excret), sret)))
rp <- array(r.mv, 1)
sp <- sqrt(t(w.mv) %*% covmat %*% w.mv)
wp <- t(matrix(w.mv))
aMatt <- cbind(excret, aMat)
# solve the min-variance problem for every given return
for (ri in sret[-1]) {
  bVect <- c(ri, bVec)
  solQP <- solve.QP(covmat, zeros, aMatt, bVect, meq = 2)
  wp <- rbind(wp, solQP$solution)
  rp <- c(rp, t(solQP$solution) %*% excret)
  sp <- c(sp, sqrt(t(solQP$solution) %*% covmat %*% solQP$solution))
}
IS <- rp/sp # Sharpe index (rf = 0)
cbind(wp, sp, rp, IS)
wp[which.max(IS), ] # tangent
cbind(wp, sp, rp, IS)[which.max(IS), ]
plot(c(sp, diag(covmat)^.5), c(rp, excret))
# As you can see in the plot, you have a corner solution
(c.sol <- c(1, 1, 0))
c(st  = sqrt(t(c.sol) %*% covmat %*% c.sol),
  rt  = t(c.sol) %*% excret,
  ISt = t(c.sol) %*% excret / sqrt(t(c.sol) %*% covmat %*% c.sol))
The output will be:
> wp[which.max(IS),] #tangent
[1] 1.00000000 0.98339763 0.01660237
> cbind(wp, sp, rp, IS)[which.max(IS), ]
                                         sp         rp         IS
 1.00000000  0.98339763  0.01660237  0.18490315  0.09916988  0.53633418
> plot(c(sp,diag(covmat)^.5),c(rp,excret))
> (c.sol<-c(1,1,0))
[1] 1 1 0
> c(st=sqrt(t(c.sol) %*% covmat %*% c.sol),
+ rt=t(c.sol) %*% excret,
+ ISt=t(c.sol) %*% excret/sqrt(t(c.sol) %*% covmat %*% c.sol))
st rt ISt
0.1853813 0.1000000 0.5394288
