How to leverage Convex Optimization for Portfolio Optimization in Julia

I'm trying to use Julia (0.5) and Convex.jl (with the ECOS solver) to figure out, given a portfolio of 2 stocks, how to distribute my allocation (in percent) across both stocks so that I maximize my portfolio return and minimize my risk (the standard deviation of returns). Concretely, I want to MAXIMIZE the Sharpe ratio, a quantity computed from the percentages I hold in each of the 2 stocks, and have the solver figure out the optimal allocation (I want it to tell me I need x% of stock 1 and 1-x% of stock 2). The only real constraint is that the percentage allocations sum to 100%. The code below runs, but does not give me the optimal weights/allocations I'm expecting (36.3% for Supertech and 63.7% for Slowpoke); the solver instead comes back with 50/50.
My intuition is that I have either modeled the objective function incorrectly for the solver, or I need to do more with the constraints. I don't have a good grasp of convex optimization, so I'm winging it. Also, my objective function uses the variable's .value attribute to get the correct output, and I suspect I should be working with the Variable expression object instead.
The question is: is what I'm trying to achieve something Convex.jl is designed for, so that I just have to model the objective function and constraints better, or do I have to iterate over the weights and brute-force it?
Code with comments:
using Convex, ECOS
Supertech = [-.2; .1; .3; .5];
Slowpoke = [.05; .2; -.12; .09];
A = reshape([Supertech; Slowpoke],4,2)
mlen = size(A)[1]
R = vec(mean(A,1))
n=rank(A)
w = Variable(n)
c1 = sum(w) == 1;
λ = .01
w.value = [λ; 1-λ]
sharpe_ratio = sqrt(mlen) * (w.value' * R) / sqrt(sum(vec(w.value' .* w.value) .* vec(cov(A,1,false))))
# sharpe_ratio will be maximized at 1.80519 when w.value = [λ, 1-λ] where λ = .363
p = maximize(sharpe_ratio,c1);
solve!(p, ECOSSolver(verbose = false)); # when verbose=true, ECOS says the problem is 'degenerate' because I don't have enough constraints...
println(w.value) # expecting to get [.363; .637] here but I get [0.5; 0.5]
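For reference, here is a minimal brute-force cross-check of the expected optimum, written in R (the language of the related answers below) purely as an illustrative sketch; it grid-searches the allocation and reproduces the 0.363 / 1.80519 figures quoted in the comments above:
# Brute-force sketch (not Convex.jl): scan lambda over [0, 1] and evaluate
# the same Sharpe expression, using the population covariance
# (the analogue of cov(A, 1, false) in the Julia code above).
Supertech <- c(-0.20, 0.10, 0.30, 0.50)
Slowpoke  <- c( 0.05, 0.20, -0.12, 0.09)
A <- cbind(Supertech, Slowpoke)
R <- colMeans(A)
S <- cov(A) * (nrow(A) - 1) / nrow(A)   # population covariance
sharpe <- function(lambda) {
  w <- c(lambda, 1 - lambda)
  sqrt(nrow(A)) * sum(w * R) / sqrt(drop(t(w) %*% S %*% w))
}
grid <- seq(0, 1, by = 0.001)
vals <- sapply(grid, sharpe)
grid[which.max(vals)]  # ~0.363
max(vals)              # ~1.80519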

Related


Constrained optimization in R with constraints on result

I am attempting to optimize a function in R that has constraints on the input parameters, but also a constraint on a quantity computed inside the function. I am using constrOptim.
The function I have to optimize is below:
fn.opt <- function(x){
  fnLambda <- x[[1]]
  fnSigma  <- x[[2]]
  fn.norm  <- fnLambda * dnorm(pz$x, mean = mxMu, sd = fnSigma)
  fn.res   <- sum((density$y - fn.norm)^2)
  return(fn.res)
}
Here density is a kernel density estimate of some data and fn.norm is a normal approximation to it. I would like to minimize the sum of squared residuals, which the following does successfully using constrOptim:
theta = c(mxLambda,mxSigma)
ui = rbind(c(1,0),c(-1,0),c(0,1))
ci = c(0,-1,0)
pz.opt = constrOptim(theta=theta,f=fn.opt,ui=ui,ci=ci,grad=NULL)
pz.opt
Constraints: 0 <= mxLambda <= 1 and 0 < mxSigma.
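As an aside on how ui and ci work: constrOptim enforces ui %*% theta - ci >= 0 row by row, which is how the bounds above are encoded. A minimal self-contained check with a hypothetical toy objective (only ui, ci, and the bounds come from the question; everything else is illustrative):
# Each row i of ui with entry ci[i] encodes ui[i, ] %*% theta - ci[i] >= 0:
#   ( 1  0) theta - ( 0) >= 0   =>  mxLambda >= 0
#   (-1  0) theta - (-1) >= 0   =>  mxLambda <= 1
#   ( 0  1) theta - ( 0) >= 0   =>  mxSigma  >= 0
ui <- rbind(c(1, 0), c(-1, 0), c(0, 1))
ci <- c(0, -1, 0)
toy <- function(x) (x[1] - 2)^2 + (x[2] - 0.5)^2  # unconstrained optimum at (2, 0.5)
fit <- constrOptim(theta = c(0.5, 1), f = toy, grad = NULL, ui = ui, ci = ci)
fit$par  # first parameter pushed toward its upper bound of 1 by the log-barrier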
However, I would also like the normal approximation fn.norm to stay strictly below the density of the data, density$y, at every point.
Is there any way I could add a constraint enforcing fn.norm < density$y while still minimizing fn.res?
Thanks in advance.

how to specify final value (rather than initial value) for solving differential equations

I would like to solve a differential equation in R (with deSolve?) for which I do not have the initial condition, but only the final condition of the state variable. How can this be done?
The typical code is: ode(times, y, parameters, function ...) where y is the initial condition and function defines the differential equation.
Are your equations time reversible, that is, can you change your differential equations so they run backward in time? Most typically this just means reversing the sign of the gradient. For example, for a simple exponential growth model with rate r (gradient of x = r*x), flipping the sign makes the gradient -r*x and generates exponential decay rather than exponential growth.
If so, all you have to do is use your final condition(s) as your initial condition(s), change the signs of the gradients, and you're done.
As suggested by @LutzLehmann, there's an even easier answer: ode can handle negative time steps, so just enter your time vector as (t_end, 0). Here's an example, using x'(t) = r*x (i.e. exponential growth). If x(1) = 3 and r = 1, and we want the value at t = 0, analytically we would say:
x(T) = x(0) * exp(r*T)
x(0) = x(T) * exp(-r*T)
     = 3 * exp(-1*1)
     = 1.103638
Now let's try it in R:
library(deSolve)
g <- function(t, y, parms) { list(parms*y) }
res <- ode(3, times = c(1, 0), func = g, parms = 1)
print(res)
##   time        1
## 1    1 3.000000
## 2    0 1.103639
I initially misread your question as stating that you knew both the initial and final conditions. This type of problem is called a boundary value problem and requires a separate class of numerical algorithms from standard (more elementary) initial-value problems.
library(sos)
findFn("{boundary value problem}")
tells us that there are several R packages on CRAN (bvpSolve looks the most promising) for solving these kinds of problems.
Given a differential equation
y'(t) = F(t, y(t))
over the interval [t0, tf], where the terminal value y(tf) = yf is what is prescribed, one can transform this into the standard initial-value form by considering
x(s) = y(tf - s)
==> x'(s) = -y'(tf - s) = -F(tf - s, y(tf - s))
x'(s) = -F(tf - s, x(s))
now with the initial condition
x(0) = x0 = yf.
This is easy to code using wrapper functions, with some list reversal at the end to get from x back to y; see the sketch below.
Some ODE solvers also allow negative step sizes, so one can simply supply the times in descending order, tf to t0, and construct y directly without the intermediary x.
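A minimal sketch of the wrapper approach, reusing the exponential-growth example from above (F(t, y) = r*y with y(1) = 3); the function names here are illustrative:
library(deSolve)
# Original right-hand side: y'(t) = F(t, y) = r * y
F_rhs <- function(t, y, parms) list(parms * y)
tf <- 1  # terminal time, with known terminal value y(tf) = 3
# Wrapper implementing x(s) = y(tf - s), so x'(s) = -F(tf - s, x(s))
F_rev <- function(s, x, parms) {
  d <- F_rhs(tf - s, x, parms)
  list(-d[[1]])
}
# Integrate x forward from x(0) = y(tf) = 3 over s in [0, tf]
res <- ode(y = 3, times = seq(0, tf, by = 0.1), func = F_rev, parms = 1)
# Map back to y: t = tf - s, then reverse the rows so t is ascending
res[, "time"] <- tf - res[, "time"]
res <- res[nrow(res):1, ]
res[1, ]  # y(0), close to the analytic 3 * exp(-1) = 1.103638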

Optimization using package "nloptr"

I am trying to replicate results in R from Excel's "Solver" add-in. I don't know about the inner workings of optimization (mathematically), hence my confusion at most posted results as well as the error messages I am receiving. I tried using the optimx package, but apparently that doesn't allow enough control over the constraints in the optimization, so now I'm trying out the nloptr package.
Basically, what I'm trying to do is replicate an optimum portfolio calculation (financial). Below is a sample of my code:
ret.cov <- cov(as.matrix(ret.p[, 1:30]))
wts <- rep(1/portfolioSize, times = portfolioSize)
sharpe <- function(wts) {
  mean.p <- sum(colMeans(ret.p[, 1:30]) * wts)
  var.p  <- t(wts) %*% (ret.cov %*% wts)
  sd.p   <- sqrt(var.p)
  SR     <- (mean.p - Rf) / sd.p
  return(as.numeric(SR))
}
fun.eq <- function(wts) {
  sum(wts) == 1
}
optim.p <- nloptr(x0 = wts, eval_f = sharpe, lb = 0, ub = 1, eval_g_eq = fun.eq)
sharpe(as.numeric(optim.p$solution))
This code:
1. Calculates the covariance matrix of the 30 stocks from their returns.
2. Initializes the weights of those stocks to optimize (equally weighted to start).
3. Sets up the function to maximize, which calculates the portfolio's Sharpe ratio.
4. Tries (???) to specify the equality function for nloptr stating that the sum of the wts vector must equal 1.
5. Tries to maximize the function (though I think it's minimizing by default, and I don't know how to change that to maximize instead).
6. Checks the resulting, maximized Sharpe ratio.
The Sharpe calculation function works fine when I try it outside of the nloptr function. The issues are various, from needing to specify the proper algorithm to use, to the function not accepting the equality constraint I supplied.
So, the questions I have are:
How do you change the nloptr to maximize instead of minimize?
How would one write an equality function to specify that the sum of the input vector (weights) must be equal to 1?
What is the proper algorithm to specify using opts = list() here? Excel uses something called "GRG Nonlinear".
Thank you in advance!
Hope it's still relevant...
You don't supply data, so I can't run it, but I'll try to help.
1) In order to maximize, just minimize -sharpe instead.
2) eval_g_eq needs to be in the form h(x) = 0, meaning you need to change fun.eq from sum(wts) == 1 to sum(wts) - 1.
3) There are a lot of decent algorithm options. I use NLOPT_LN_COBYLA; a sketch putting all three fixes together follows.
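A hedged sketch of the corrected call, reusing sharpe and wts from the question (note: depending on your nloptr version, equality constraints may require an algorithm such as NLOPT_GN_ISRES or an AUGLAG variant instead of COBYLA):
library(nloptr)
neg.sharpe <- function(wts) -sharpe(wts)  # minimize the negative to maximize
fun.eq     <- function(wts) sum(wts) - 1  # h(x) = 0 form: sum(wts) - 1 == 0
optim.p <- nloptr(
  x0        = wts,
  eval_f    = neg.sharpe,
  lb        = rep(0, length(wts)),
  ub        = rep(1, length(wts)),
  eval_g_eq = fun.eq,
  opts      = list(algorithm = "NLOPT_LN_COBYLA",
                   xtol_rel  = 1e-8,
                   maxeval   = 1e5)
)
sharpe(optim.p$solution)  # the maximized Sharpe ratio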

Translating code that carries out SOCP/SDP optimisation from MATLAB to R

I have the following MATLAB code, which was used in the linked paper (http://www.optimization-online.org/DB_FILE/2014/05/4366.pdf), and I would like to use the Rsocp package to carry out the same task in R. The Rsocp package can be installed with:
install.packages("Rsocp", repos="http://R-Forge.R-project.org")
Through its socp() function it plays a role similar to solvesdp(constraints, -wcvar, ops) in the MATLAB code below.
I do not have MATLAB, which makes this problem more difficult for me to solve.
The issue is that R's socp() function takes matrices as inputs that encode the data (the covariance matrix and mean returns) and the constraints all together, whereas the MATLAB code optimises a function. In this specific case it optimises -wcvar to get the optimal weights, so I am unsure how to set up my problem in R to get similar results.
The MATLAB code I would therefore like help in translating to R is as follows:
function [w] = rgop(T, mu, sigma, epsilon)
% This function determines the robust growth-optimal portfolio
% Input parameters:
%   T       - the number of time periods
%   mu      - the mean vector of asset returns
%   sigma   - the covariance matrix of asset returns
%   epsilon - violation probability
% Output parameters:
%   w - robust growth-optimal portfolio weights

% the number of assets
n = length(mu);
% portfolio weights
w = sdpvar(n,1);
% mean and standard deviation of portfolio
rp = w'*mu;
sigmap = sqrt(w'*sigma*w);
% preclude short selling
constraints = [w >= 0]; %#ok<NBRAK>
% budget constraint
constraints = [constraints, sum(w) == 1];
% worst-case value-at-risk (theorem 4.1)
wcvar = 1/2*(1 - (1 - rp + sqrt((1-epsilon)/epsilon/T)*sigmap)^2 - ((T-1)/epsilon/T)*sigmap^2);
% maximise wcvar (solvesdp minimises, hence the negated objective)
ops = sdpsettings('solver','sdpt3','verbose',0);
solvesdp(constraints, -wcvar, ops);
w = double(w);
end
For the square root function of the covariance matrix one can use:
Rsocp:::.SqrtMatrix()
Note this question is partially related to my previous question, but is more focused on getting the worst-case VaR weights:
SOCP Solver Error for fPortoflio using solveRsocp
Perhaps a good start would be to use this code where the Rsocp package has already been used...
https://r-forge.r-project.org/scm/viewvc.php/pkg/fPortfolio/R/solveRsocp.R?view=markup&root=rmetrics&pathrev=3507
EDIT
I think the MATLAB code for the solvesdp function is available from this link:
https://code.google.com/p/vroster/source/browse/trunk/matlab/yalmip/solvesdp.m?r=11
Also, a quick question about SOCP optimisation in general: would the result obtained via SOCP optimisation be the same as that achieved using other methods of optimisation, with the only differences being speed and efficiency?
EDIT2
Since it was requested...
rgop <- function(tp, mu, sigma, epsilon){
  # INPUTS
  #   tp      - the number of time periods
  #   mu      - the mean vector of asset returns
  #   sigma   - the covariance matrix of asset returns
  #   epsilon - violation probability
  # OUTPUT
  #   w - robust growth-optimal portfolios

  # n is the number of assets
  n <- length(mu)
  # portfolio weights (BUT THIS IS THE OUTPUT)
  # for now will assume equal weight
  w <- rep(1/n, n)
  # mean and standard deviation of portfolio
  rp <- sum(w*mu)
  sigmap <- as.numeric(sqrt(t(w) %*% sigma %*% w))
  # worst-case value-at-risk (theorem 4.1)
  wcvar <- 1/2*(1 - (1 - rp + sqrt((1-epsilon)/epsilon/tp)*sigmap)^2 - ((tp-1)/epsilon/tp)*sigmap^2)
  # optimise...not sure how to carry out this optimisation...
  # which is the main thrust of this question...
  # could use DEoptim...but would like to understand the SOCP method
}
SOCP is just a fast way of finding the minimum in cases where you know enough about the problem to constrain it in certain technical ways. As you're discovering, these constraints can be tricky to formulate, so it is worth asking whether you need the speed. Often the answer is yes, but for debugging/exploration purposes brute numerical optimisation using R's optim function can be fruitful, as in the sketch below.
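Along those lines, a minimal sketch (with hypothetical toy inputs; only the wcvar formula comes from the question) of brute-forcing the objective with optim, using a softmax reparameterisation so the weights stay non-negative and sum to 1:
# Toy inputs, purely for illustration
mu      <- c(0.05, 0.08, 0.12)        # mean returns
sigma   <- diag(c(0.02, 0.05, 0.09))  # covariance matrix
tp      <- 52                         # number of time periods
epsilon <- 0.10                       # violation probability
# Negative worst-case VaR (theorem 4.1); optim minimises, so we negate.
# Weights are parameterised as a softmax so w >= 0 and sum(w) == 1 hold
# automatically, avoiding explicit constraints.
neg.wcvar <- function(theta) {
  w <- exp(theta) / sum(exp(theta))
  rp <- sum(w * mu)
  sigmap <- sqrt(drop(t(w) %*% sigma %*% w))
  wcvar <- 1/2*(1 - (1 - rp + sqrt((1-epsilon)/epsilon/tp)*sigmap)^2 -
                ((tp-1)/epsilon/tp)*sigmap^2)
  -wcvar
}
fit <- optim(rep(0, length(mu)), neg.wcvar, method = "BFGS")
w <- exp(fit$par) / sum(exp(fit$par))
w  # approximate robust growth-optimal weights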
