How to solve a portfolio optimization with a generalised objective function? - r

I have a portfolio of 5 stocks for which I want to find an optimal mix of minimizing portfolio variance and maximizing expected future dividends. The latter comes from analysts' forecasts. My problem is that I know how to solve a minimum-variance problem, but I am not sure how to put the quadratic form into the right matrix form for the objective function of quadprog.
The standard minimum variance problem reads
Min! ( portfolio volatility )
where r holds the 252 daily returns of the five stocks and d holds the expected yearly dividend yields (firm_A pays 1%, firm_B pays 2%, etc.),
and I have programmed it as follows
dat = rnorm( 252*5, mean = 0, sd = 1 )
r = matrix( dat, nr = 252, nc = 5 )
d = matrix( c( 1, 2, 1, 2, 2 ) )
library(quadprog)
# Dmat (covariance) and dvec (penalized returns) are generated easily
risk.param = 0.5
Dmat = cov(r)
Dmat[is.na(Dmat)]=0
dvec = matrix(colMeans(r) * risk.param)
dvec[is.na(dvec)]=1e-5
# The weights sum up to 1
n = 5
A = matrix( rep( 1, n ), nr = n )
b = 1
meq = 1
res = solve.QP( Dmat, dvec, A, b, meq = 1 )
Obviously, the returns in r are standard normal, hence each stock gets a weight of about 20%.
Q1: How can I account for the fact that firm_A pays a dividend of 1, firm_B a dividend of 2, etc?
The new objective function reads:
Max! ( 0.5 * Portfolio_div - 0.5 * Portfolio_variance )
but I don't know how to code it. The portfolio variance was easy to put into Dmat, but the new objective function contains the term Portfolio_div = w' * d, where w holds the five weights.
Thanks a lot.
EDIT: Maybe it makes sense to add a higher-level description of the problem:
I am able to run a minimum-variance optimization with the code above. Minimizing the portfolio variance means optimizing the weights against the variance-covariance matrix Dmat (of dimension 5x5). However, I want to add another part to the objective: the dividends in d (of dimension 5x1) multiplied by the weights. The same weights are also used for Dmat.
Q2: How can I add the vector d to the code?
EDIT2: I guess the answer is to simply use
dvec = -1/d
as maximizing expected dividends should correspond to minimizing their negative reciprocal.
Q3: Could someone please tell me if that's right?
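For reference, a minimal sketch of how I currently think the dividend term would enter solve.QP (my assumption, based on quadprog minimising 0.5 * w' D w - d' w, so dvec would carry 0.5 * d rather than -1/d):
library(quadprog)
set.seed(1)
r <- matrix(rnorm(252 * 5), nrow = 252, ncol = 5)   # daily returns of the five stocks
d <- c(1, 2, 1, 2, 2)                               # expected yearly dividend yields
# quadprog minimises 0.5 * w' Dmat w - dvec' w, so maximising
# 0.5 * w'd - 0.5 * w' Sigma w is the same problem with Dmat = Sigma and dvec = 0.5 * d
Dmat <- cov(r)
dvec <- 0.5 * d
Amat <- matrix(1, nrow = 5)                         # single equality constraint: sum(w) = 1
res  <- solve.QP(Dmat, dvec, Amat, bvec = 1, meq = 1)
round(res$solution, 3)    # weights tilt towards the higher dividend payers (no long-only bound, as above)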

Opening a can of worms:
TLDR: While I respect the great work Harry MARKOWITZ (1990 Nobel prize) has performed, I appreciate his wonderful CACI Simulations spin-off deterministic simulation framework COMET III much more than the portfolio-theory assumption that variance per se is the ruling driver to be minimised in the portfolio optimisation process.
Setting that principal point of view aside (it may clash with the somewhat ill-formed motivation of big funds, which live happily from their 2-and-20 fees and whose skewed perception of "direct losses" means the hefty, risk-free management fees forgone through crowd-panic-driven AUM erosion, rather than the real profits & losses arising from their (in)ability to deliver any above-average AUM returns) and moving closer to your idea: the problem lies in the proper formulation of the { penalty | utility } function.
While variance is taken in classical efficient-frontier theory as a penalty factor, operated in a min! global search, it does not have much to do with real profit generation. You get penalised even for the positive-side variance components, which is nonsense per se.
On the contrary, the dividend is a direct benefit, an absolute utility, entering the max! optimisation process.
So the first step for Q3 & Q1 ought to be the design of a consistent utility function, isolated from relative, revenue-unrelated factors but containing all other absolute factors -- cost of entry, transaction costs, rebalancing costs -- as otherwise the utility model would mislead your portfolio wealth-management strategy.
A2: Without this a priori designed property, no one can claim a model is worth a single CPU-hour of global optimisation effort.
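As an illustration only (the function and all numbers below are placeholder assumptions, not part of the original answer), such an absolute-terms utility function might be sketched as:
# sketch of an absolute-terms utility: dividend income minus trading costs
# all parameter values are placeholders
utility <- function(w, prices, div_per_share, capital = 1e6,
                    fee_rate = 0.001, fixed_cost_per_trade = 50) {
  shares    <- capital * w / prices                  # shares bought with each weight
  dividends <- sum(shares * div_per_share)           # absolute dividend income
  costs     <- fee_rate * capital * sum(abs(w)) +    # proportional transaction costs
               fixed_cost_per_trade * sum(w != 0)    # fixed cost per position entered
  dividends - costs                                  # utility in currency units, to be max!-imised
}
utility(w = rep(0.2, 5), prices = c(50, 40, 60, 55, 45),
        div_per_share = c(0.5, 0.8, 0.6, 1.1, 0.9))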

Related

Finding minimum by optimising a vector in R

I need to find the minimum of an objective function by optimising a vector. The problem is finance-related, if that helps: the function RC (provided below) computes the sum of squared differences of the risk contributions of different assets, where the risk contribution is the product of an input risk measure (RM, given) and the weights.
The goal is to find such weights that the sum is zero, i.e. all assets have equal risk contributions.
RC = function (RM, w){
  w = w/sum(w) # normalizing weights so they sum up to 1
  nAssets = length(RM)
  rc_matrix = matrix(nrow = 1, ncol = nAssets)
  rc_matrix = RM*w # risk contributions: RM (risk measure) multiplied by the asset's weight in the portfolio
  rc_sum_squares = numeric(length = 1) # placeholder
  rc_sum_squares = sum(combn(
    seq_along(RM),
    2,
    FUN = function(x)
      (rc_matrix[, x[1]] - rc_matrix[, x[2]]) ** 2
  )) # this sums the squared differences of the risk contributions
  return(rc_sum_squares)
}
I searched and the solution seems to lie in the "optim" function, so I tried:
out <- optim(
par = rep(1 / length(RM), length(RM)), # initial guess
fn = RC,
RM = RM,
method = "L-BFGS-B",
lower = 0.00001,
upper = 1)
However, this returns an error message: "Error in rc_matrix[, x[1]] : incorrect number of dimensions"
I don't know how the optimization algorithm works, so I can't really wrap my head around it. The RC function itself works, though; here is a sample for replicability:
RM <- c(0.06006928, 0.06823795, 0.05716360, 0.08363529, 0.06491009, 0.06673174, 0.03103578, 0.05741140)
w <- matrix(0.125, nrow=1, ncol=8)
I also looked at the CVXR package, which crashes my RStudio for some reason, and at nlm(), which is a little more complicated and I can't write the function for it properly.
A solution might be not to do the funky summation of squared differences at all, but to find weights such that the risk contributions (RM*weight) are equal. I will be very glad for your help.
Note: the vector of the weights has to sum up to 1 and the values have to lie between 0 and 1.
Cheers
Daniel
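A minimal sketch of one possible fix (an assumption, not a posted answer): compute the risk contributions as a plain vector, so the same objective works whether optim passes w as a vector or as a matrix.
# sketch: same objective, but indexing the risk contributions as a plain vector
RC <- function(RM, w) {
  w  <- w / sum(w)                      # normalise weights so they sum to 1
  rc <- as.vector(RM * w)               # risk contribution of each asset
  sum(combn(seq_along(RM), 2,
            FUN = function(x) (rc[x[1]] - rc[x[2]])^2))
}
RM <- c(0.06006928, 0.06823795, 0.05716360, 0.08363529,
        0.06491009, 0.06673174, 0.03103578, 0.05741140)
out <- optim(par = rep(1 / length(RM), length(RM)),  # equal weights as initial guess
             fn = RC, RM = RM,
             method = "L-BFGS-B", lower = 1e-5, upper = 1)
round(out$par / sum(out$par), 4)        # candidate equal-risk-contribution weights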

R - Portfolio Analytics optimization not working

I'm currently working on creating a minimum-variance portfolio and decided to use the
function optimize.portfolio of the PortfolioAnalytics package.
Unfortunately, when extracting the weights, all of them are NA, even though none of my returns have any NA value, which (from my point of view) would be the only reason for the resulting weights to be NA.
My dataset consists of multiple assets (5000+), each with 60 monthly observations.
library(PortfolioAnalytics)
library(ROI)
# index_returns is an xts object consisting of 3800 stock IDs (columns) and 60 observations at
# monthly intervals. To exemplify my problem, I set all values in index_returns to 1, to make
# sure that no NA values exist.
index_returns
any(is.na(index_returns)) # --> evaluates to FALSE
port_spec <- portfolio.spec(assets =colnames(index_returns) )
# Add a full investment constraint such that the weights sum to 1
port_spec <- add.constraint(portfolio = port_spec, type = "full_investment")
# Add a long only constraint such that the weight of an asset is between 0 and 1
port_spec <- add.constraint(portfolio = port_spec, type = "long_only")
# Add an objective to min portfolio variance
port_spec <- add.objective(portfolio = port_spec, type = "risk", name = "var")
# Solve the optimization problem
opt <- optimize.portfolio(R = index_returns, trace=TRUE, portfolio = port_spec,optimize_method = "ROI")
extractWeights(opt) #evaluates to NA for all assets
Does anyone know why this occurs and have any suggestions on how to deal with this issue? I know that this optimisation problem very likely faces invertibility issues because there are far more columns than rows, but beyond that notion I'm struggling to make any progress with my problem.
I highly appreciate any help!! Thanks in advance
Your optimization most likely fails because you have way more assets than observations. Then, as you correctly assumed, you can't obtain an inverse of the estimated covariance matrix.
To quote from "A Portfolio Optimization Approach with a Large Number of Assets: Applications to the US and Korean Stock Markets", available at https://onlinelibrary.wiley.com/doi/epdf/10.1111/ajfs.12233:
"Many attempts have been made to find an invertible estimator of the covariance matrix when N is larger than T. The pseudoinverse estimators of the covariance matrix are used by Sengupta (1983) and Pappas et al. (2010), and the shrinkage estimators of the covariance matrix are suggested by Ledoit and Wolf (2003). Ledoit and Wolf (2003) propose estimating the covariance matrix by an optimally weighted average of two existing estimators: the sample covariance matrix and single-index covariance matrix."
So I'd suggest you take a look at the Ledoit-Wolf shrinkage method as a first step. The R package RiskPortfolios might also be useful, see https://joss.theoj.org/papers/10.21105/joss.00171
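A minimal sketch of that first step (assuming covEstimation() from RiskPortfolios accepts control = list(type = "lw") for the Ledoit-Wolf estimator; please check the package documentation), followed by a plain quadprog minimum-variance solve:
library(RiskPortfolios)
library(quadprog)
# toy data with far more assets than observations, as in the question
set.seed(42)
rets <- matrix(rnorm(60 * 200, sd = 0.05), nrow = 60, ncol = 200)
# Ledoit-Wolf shrinkage estimate of the covariance matrix
Sigma <- covEstimation(rets, control = list(type = "lw"))
# long-only, fully invested minimum-variance weights
n    <- ncol(rets)
Amat <- cbind(rep(1, n), diag(n))    # first column: sum(w) = 1; remaining columns: w >= 0
bvec <- c(1, rep(0, n))
w    <- solve.QP(Dmat = Sigma, dvec = rep(0, n), Amat = Amat, bvec = bvec, meq = 1)$solution
summary(w)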

optiSolve package in r

I'm trying to maximize the portfolio return subject to 5 constraints:
1.- a certain level of portfolio risk
2.- the same as above but with the opposite sign (I need the risk to be exactly that number)
3.- the sum of weights have to be 1
4.- all the weights must be greater than or equal to zero
5.- all the weights must be at most one
I'm using the optiSolve package because I didn't find any other package that allows me to write this problem (or at least none that I understood how to use).
I have three big problems here: the first is that the resulting weights sum to more than 1; the second is that I can't declare t(w) %*% varcov_matrix %*% w == 0 in the quadratic constraint because it only allows "<="; and finally, I don't know how to add a constraint to get only positive weights.
vector_de_retornos <- rnorm(5)
matriz_de_varcov <- matrix(rnorm(25), ncol = 5)
library(optiSolve)
restriccion1 <- quadcon(Q = matriz_de_varcov, dir = "<=", val = 0.04237972)
restriccion1_neg <- quadcon(Q = -matriz_de_varcov, dir = "<=",
val = -mean(limite_inf, limite_sup))
restriccion2 <- lincon(t(vector_de_retornos),
d=rep(0, nrow(t(vector_de_retornos))),
dir=rep("==",nrow(t(vector_de_retornos))),
val = rep(1, nrow(t(vector_de_retornos))),
id=1:ncol(t(vector_de_retornos)),
name = nrow(t(vector_de_retornos)))
restriccion_nonnegativa <- lbcon(rep(0,length(vector_de_retornos)))
restriccion_positiva <- ubcon(rep(1,length(vector_de_retornos)))
funcion_lineal <- linfun(vector_de_retornos, name = "lin.fun")
funcion_obj <- cop(funcion_lineal, max = T, ub = restriccion_positiva,
lc = restriccion2, lb = restriccion_nonnegativa, restriccion1,
restriccion1_neg)
porfavor_funciona <- solvecop(funcion_obj, solver = "alabama")
> porfavor_funciona$x
1 2 3 4 5
-3.243313e-09 -4.709673e-09 9.741379e-01 3.689040e-01 -1.685290e-09
> sum(porfavor_funciona$x)
[1] 1.343042
Does anyone know how to solve this maximization problem with all the constraints mentioned above, or can you tell me what I'm doing wrong? I'd really appreciate it, because the result does not seem to take the constraints into account. Thanks!
Your restriccion2 makes the weighted sum of x equal to 1. If you also want to ensure the regular sum of x is 1, you can modify the constraint as follows:
restriccion2 <- lincon(rbind(t(vector_de_retornos),
# make a second row of coefficients in the A matrix
t(rep(1,length(vector_de_retornos)))),
d=rep(0,2), # the scalar value for both constraints is 0
dir=rep('==',2), # the direction for both constraints is '=='
val=rep(1,2), # the rhs value for both constraints is 1
id=1:ncol(t(vector_de_retornos)), # the number of columns is the same as before
name= 1:2)
If you only want the regular sum to be 1 and not the weighted sum, you can replace the first parameter of your lincon call with t(rep(1, length(vector_de_retornos))), and that will just constrain the regular sum of x to be 1.
To make an equality constraint using only inequalities, you need the same constraint twice with opposite signs on the coefficients and the right-hand-side value (for example: 2x <= 4 and -2x <= -4 combine to make the constraint 2x == 4). In your edit above, you provide a different value to the val parameter, so these two constraints won't combine into the equality constraint unless they match except for the opposite signs, as below.
restriccion1_neg <- quadcon(Q = -matriz_de_varcov, dir = "<=", val = -0.04237972)
I'm not certain because I can't find precision information in the package documentation, but those "negative" values in the x vector are probably due to rounding. They are so small that they are effectively 0, so I think the non-negativity constraint is functioning properly.
restriccion_nonnegativa <- lbcon(rep(0,length(vector_de_retornos)))
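Putting the two fixes together, here is a minimal consolidated sketch (my own, with two assumptions: the covariance matrix is replaced by a proper PSD matrix via crossprod, and the risk target is set to the equal-weight portfolio variance so the constraint is attainable; the quadratic equality is still non-convex, as the next answer explains):
library(optiSolve)
set.seed(1)
vector_de_retornos <- rnorm(5)
matriz_de_varcov   <- crossprod(matrix(rnorm(25), ncol = 5))  # PSD stand-in for a covariance matrix
riesgo <- drop(t(rep(0.2, 5)) %*% matriz_de_varcov %*% rep(0.2, 5))  # attainable risk level
restriccion1     <- quadcon(Q =  matriz_de_varcov, dir = "<=", val =  riesgo)
restriccion1_neg <- quadcon(Q = -matriz_de_varcov, dir = "<=", val = -riesgo)  # same value, opposite sign
# plain (unweighted) sum of the weights equals 1
restriccion2 <- lincon(t(rep(1, 5)), d = 0, dir = "==", val = 1, id = 1:5, name = "suma_1")
restriccion_nonnegativa <- lbcon(rep(0, 5))
restriccion_positiva    <- ubcon(rep(1, 5))
funcion_obj <- cop(linfun(vector_de_retornos, name = "lin.fun"), max = TRUE,
                   lb = restriccion_nonnegativa, ub = restriccion_positiva,
                   lc = restriccion2, restriccion1, restriccion1_neg)
resultado <- solvecop(funcion_obj, solver = "alabama")
sum(resultado$x)   # the plain sum should now be (numerically) 1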
A constraint of the form
x'Qx = a
is non-convex. (More generally: any nonlinear equality constraint is non-convex.) Non-convex problems are much more difficult to solve than convex ones and require specialized global solvers. For convex problems there are quite a few solvers available; this is not the case for non-convex problems. Most portfolio models are formulated as convex QP problems (quadratic programming, i.e. risk -- the quadratic term -- is in the objective) or convex QCP/SOCP problems (quadratic terms in the constraints, but in a convex fashion). So the constraint
x'Qx <= a
is easy (convex), as long as Q is positive semi-definite. Rewriting x'Qx = a as
x'Qx <= a
-x'Qx <= -a
unfortunately does not make the non-convexity go away, as -Q is not PSD. If we are maximizing return, we usually only use x'Qx <= a to limit the risk and forget about the >= part. Even more popular is to put both the return and the risk in the objective (that is the standard mean-variance portfolio model).
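Continuing from the consolidated sketch above, the convex variant simply drops the mirrored ">=" part and keeps only the risk cap:
# convex variant: only the "<=" risk constraint, no quadratic equality
funcion_obj_convexa <- cop(linfun(vector_de_retornos, name = "lin.fun"), max = TRUE,
                           lb = restriccion_nonnegativa, ub = restriccion_positiva,
                           lc = restriccion2, restriccion1)
solvecop(funcion_obj_convexa, solver = "alabama")$x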
A possible solver for solving non-convex quadratic problems under R is Gurobi.

Data perturbation - How to perform it?

I am doing some projects related to statistical simulation in R, based on "Introduction to Scientific Programming and Simulation Using R", and in the student projects section (chapter 24) I am working on "The pipe spiders of Brunswick" problem. I am stuck on one part of an evolutionary algorithm, where you need to perturb the data according to the sentence below:
"With probability 0.5 each element of the vector is perturbed, independently
of the others, by an amount normally distributed with mean 0 and standard
deviation 0.1"
What does being "perturbed" really mean here? I don't really know which operation I should apply to my vector to make this perturbation happen, and I'm not finding any answers to this problem.
Thanks in advance!
# using the most important features, we create a ML model:
m1 <- lm(PREDICTED_VALUE ~ PREDICTER_1 + PREDICTER_2 + PREDICTER_N )
#summary(m1)
#anova(m1)
# after creating the model, we perturb as follows:
#install.packages("perturb") #install the package
library(perturb)
set.seed(1234) # for same results each time you run the code
p1_new <- perturb(m1, pvars=c("PREDICTER_1","PREDICTER_N"), prange = c(1,1), niter=200) # you can change the number of iterations to any value n; the total number of iterations will be n+1
p1_new # check the values of p1
summary(p1_new)
Perturbing just means adding a small, noisy shift to a number. Your code might look something like this.
x = sample(10, 10)
ind = rbinom(length(x), 1, 0.5) == 1
x[ind] = x[ind] + rnorm(sum(ind), 0, 0.1)
rbinom gets the elements to be modified with probability 0.5 and rnorm adds the perturbation.

How to leverage Convex Optimization for Portfolio Optimization in Julia

I'm trying to use Julia (0.5) and Convex.jl (with the ECOS solver) to figure out, given a portfolio of 2 stocks, how to distribute my allocations (in percent) across both stocks such that I maximize my portfolio return and minimize my risk (standard deviation of returns). I want to maximize what is known as the Sharpe ratio, which is computed from the percentages I hold in each of my 2 stocks. So I want to MAXIMIZE the Sharpe ratio and have the solver figure out the optimal allocation for the two stocks (I want it to tell me I need x% of stock 1 and 1-x% of stock 2). The only real constraint is that the sum of the percent allocations adds to 100%. I have code below that runs, but it does not give me the optimal weights/allocations I'm expecting (36.3% for Supertech and 63.7% for Slowpoke). The solver instead comes back with 50/50.
My intuition is that I either have the objective function modeled incorrectly for the solver, or I need to do more with constraints. I don't have a good grasp on convex optimization so I'm winging it. Also, my objective function uses the variable.value attribute to get the correct output and I suspect I need to be working with the Variable expression object instead.
The question is: is what I'm trying to achieve something the Convex.jl solver is designed for, so that I just have to model the objective function and constraints better, or do I have to iterate over the weights and brute-force it?
Code with comments:
using Convex, ECOS
Supertech = [-.2; .1; .3; .5];
Slowpoke = [.05; .2; -.12; .09];
A = reshape([Supertech; Slowpoke],4,2)
mlen = size(A)[1]
R = vec(mean(A,1))
n=rank(A)
w = Variable(n)
c1 = sum(w) == 1;
λ = .01
w.value = [λ; 1-λ]
sharpe_ratio = sqrt(mlen) * (w.value' * R) / sqrt(sum(vec(w.value' .* w.value) .* vec(cov(A,1,false))))
# sharpe_ratio will be maximized at 1.80519 when w.value = [λ, 1-λ] where λ = .363
p = maximize(sharpe_ratio,c1);
solve!(p, ECOSSolver(verbose = false)); # when verbose=true, it says the problem is 'degenerate' because I don't have enough constraints...
println(w.value) # expecting to get [.363; .637] here but I get [0.5; 0.5]
