Recursive optim() function in R causes errors

I am trying to use the optim() function in R to minimize a value with matrix operations. In this case, I am trying to minimize the volatility of a group of stocks whose individual returns covary with each other. The objective function being minimized is calculate_portfolio_variance.
library(quantmod)
filter_and_sort_symbols <- function(symbols)
{
  # Name: filter_and_sort_symbols
  # Purpose: Convert symbols to uppercase and remove any invalid ones
  # Input: symbols = vector of stock tickers
  # Output: filtered_symbols = filtered symbols

  # Convert symbols to uppercase
  symbols <- toupper(symbols)
  # Validate the symbol names
  valid <- regexpr("^[A-Z]{2,4}$", symbols)
  # Return only the valid ones
  return(sort(symbols[valid == 1]))
}
# Create the list of stock tickers and check that they are valid symbols
tickers <- filter_and_sort_symbols(c("AAPL", "NVDA", "MLM", "AA"))
benchmark <- "SPY"
# Set the start and end dates
start_date <- "2007-01-01"
end_date <- "2019-01-01"
# Gather the stock data using quantmod library
getSymbols(Symbols=tickers, from=start_date, to=end_date, auto.assign = TRUE)
getSymbols(benchmark, from=start_date, to=end_date, auto.assign = TRUE)
# Create a matrix of only the adj. prices
price_matrix <- NULL
for(ticker in tickers){price_matrix <- cbind(price_matrix, get(ticker)[,6])}
# Set the column names for the price matrix
colnames(price_matrix) <- tickers
benchmark_price_matrix <- NULL
benchmark_price_matrix <- cbind(benchmark_price_matrix, get(benchmark)[,6])
# Compute log returns
returns_matrix <- NULL
for(ticker in tickers){returns_matrix <- cbind(returns_matrix, annualReturn(get(ticker)))}
returns_covar <- cov(returns_matrix)
colnames(returns_covar) <- tickers
rownames(returns_covar) <- tickers
# get average returns for tickers and benchmark
ticker_avg <- NULL
for(ticker in tickers){ticker_avg <- cbind(ticker_avg, colMeans(annualReturn(get(ticker))))}
colnames(ticker_avg) <- tickers
benchmark_avg <- colMeans(annualReturn(get(benchmark)))
# create the objective function
calculate_portfolio_variance <- function(allocations, returns_covar, ticker_avg, benchmark_avg)
{
  # Name: calculate_portfolio_variance
  # Purpose: Computes expected portfolio variance, to be used as the minimization objective function
  # Input: allocations = vector of allocations to be adjusted for optimality;
  #        returns_covar = covariance matrix of stock returns;
  #        ticker_avg = vector of average returns for all tickers; benchmark_avg = benchmark avg. return
  # Output: Expected portfolio variance

  # get benchmark volatility
  benchmark_variance <- (sd(annualReturn(get(benchmark))))^2
  # scale allocations for 100% investment
  allocations <- as.matrix(allocations/sum(allocations))
  # get the naive allocations
  naive_allocations <- rep(c(1/ncol(ticker_avg)), times=ncol(ticker_avg))
  portfolio_return <- sum(t(allocations)*ticker_avg)
  portfolio_variance <- t(allocations)%*%returns_covar%*%allocations
  # constraints: portfolio expected return must be greater than benchmark avg. return and
  # portfolio variance must be less than benchmark variance (i.e. a better reward at less risk)
  if(portfolio_return < benchmark_avg | portfolio_variance > benchmark_variance)
  {
    allocations <- naive_allocations
  }
  portfolio_variance <- t(allocations)%*%returns_covar%*%allocations
  return(portfolio_variance)
}
# Specify lower and upper bounds for the allocation percentages
lower <- rep(0, ncol(returns_matrix))
upper <- rep(1, ncol(returns_matrix))
# Initialize the allocations by evenly distributing among all tickers
set.seed(1234)
allocations <- rep(1/length(tickers), times=length(tickers))
When I call the objective function manually, it returns a value as expected:
> calculate_portfolio_variance(allocations, returns_covar, ticker_avg, benchmark_avg)
[,1]
[1,] 0.1713439
However, when I use the optim() function it returns the error:
> optim_result <- optim(par=allocations, fn=calculate_portfolio_variance(allocations, ticker_avg, benchmark_avg), lower=lower, upper=upper, method="L-BFGS-B")
Error in t(allocations) %*% returns_covar : non-conformable arguments
I'm not sure of the reason, but it may have to do with the way optim() recursively uses the allocations variable. What can I do to fix this?
Edit: FWIW, other optimization strategies work (differential evolution, simulated annealing) but I would prefer to use gradient descent because it is considerably faster

No error occurs if the first argument is renamed par, the function itself (rather than a call to it) is passed as fn, and the remaining data arguments are supplied by name so that optim() forwards them to the objective through its ... argument:
cpv <- function(par, returns_covar, ticker_avg, benchmark_avg)
{
  # Name: calculate_portfolio_variance
  # Purpose: Computes expected portfolio variance, to be used as the minimization objective function
  # Input: par = vector of allocations to be adjusted for optimality;
  #        returns_covar = covariance matrix of stock returns;
  #        ticker_avg = vector of average returns for all tickers; benchmark_avg = benchmark avg. return
  # Output: Expected portfolio variance

  # get benchmark volatility
  benchmark_variance <- (sd(annualReturn(get(benchmark))))^2
  # scale allocations for 100% investment
  par <- as.matrix(par/sum(par))
  # get the naive allocations
  naive_allocations <- rep(c(1/ncol(ticker_avg)), times=ncol(ticker_avg))
  portfolio_return <- sum(t(par)*ticker_avg); print(par)  # debugging print
  portfolio_variance <- t(par)%*%returns_covar%*%par
  # constraints: portfolio expected return must be greater than benchmark avg. return and
  # portfolio variance must be less than benchmark variance (i.e. a better reward at less risk)
  if(portfolio_return < benchmark_avg | portfolio_variance > benchmark_variance)
  {
    par <- naive_allocations
  }
  portfolio_variance <- t(par)%*%returns_covar%*%par
  return(portfolio_variance)
}
I left the debugging print of par in the code; here is the top of the output from running it:
optim_result <- optim(par=allocations, fn=cpv, lower=lower, upper=upper, returns_covar=returns_covar, ticker_avg=ticker_avg, benchmark_avg=benchmark_avg, method="L-BFGS-B")
[,1]
[1,] 0.25
[2,] 0.25
[3,] 0.25
[4,] 0.25
[,1]
[1,] 0.2507493
[2,] 0.2497502
[3,] 0.2497502
[4,] 0.2497502
[,1]
[1,] 0.2492492
[2,] 0.2502503
[3,] 0.2502503
[4,] 0.2502503
#--- snipped output of six more iterations.
... and the result:
> optim_result
$par
[1] 0.25 0.25 0.25 0.25
$value
[1] 0.1713439
$counts
function gradient
1 1
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
As I said in a comment on an unrelated question, optim first tries raising and then lowering the first element of par, then does the same for the second, third and fourth elements. Finding no improvement at that point, it "decides" it has converged at a local minimum and declares convergence.
I should point out that the code behind optim is rather old, and the author of the original algorithm, Dr Nash, has placed an updated version on CRAN in the form of the optimx package. He says optim was good in its day, but he thinks other procedures should be tried if it is not successful.
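A minimal sketch of that suggestion, assuming the optimx package is installed; its optimx() function accepts the same par/fn/lower/upper/method arguments as optim() and likewise forwards extra named arguments to the objective:
library(optimx)
# reuses cpv, allocations, lower, upper and the data objects built above
optimx_result <- optimx(par=allocations, fn=cpv, lower=lower, upper=upper,
                        method="L-BFGS-B", returns_covar=returns_covar,
                        ticker_avg=ticker_avg, benchmark_avg=benchmark_avg)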

Related

Estimate a global load reduction using optimization

I am trying to find a global reduction (global_reduct) for a time series of monthly loads.
The goal is to make sure that only 50% (or any other target) of all loads exceed a specific reference load.
global_reduct <- c(50) ## initial value
load_ref <- 450.5 ## reference load
tobject <- 50 ## 50% above reference
Data example which is a subset of 20+ years of data.
df <- data.frame(
  Date = seq(as.Date('2010-01-01'), as.Date('2012-04-01'), by='months'),
  load = c(1.496169954, 1.29147009, 1.964195241, 1.14352707, 1.319144304,
           0.773288093, 0.65175612, 0.685340958, 0.416934849,
           0.769853258, 1.104639594, 0.92213209, 1.685588986,
           1.972510175, 2.6882446, 2.153314503, 1.324735759,
           1.027755411, 0.610207197, 0.674642831, 0.721971375,
           1.13233884, 0.739325423, 0.90031817, 1.366597449,
           1.928098735, 1.216538229, 1.514353244)
)
In this case the reduction would be around 62% at a target of 50% of the reference load.
I tried to set up a function that can be called by optim to estimate the new reduct value.
optfuc <- function(reduct, ttarget=50){
  reduct_eq <- df$load * (1 - (reduct/100))
  tt_exceed <- ifelse((reduct_eq * 1000) > load_ref, 1, 0)
  ave_ref <- sum(tt_exceed)/length(tt_exceed)*100 - ttarget
  # ave_ref in this case should be = ttarget
  # ave_ref
  reduct
}
optim(c(30), optfuc, method ="L-BFGS-B", lower=0, upper=100)
How can I get the correct new reduct value?
Is there a different package that I can use?
It might be better to use proportions, i.e. values within [0, 1], instead of percentages.
Then minimizing, over the interval [0, 1], the absolute difference between the exceedance proportion of the reduced load load - load*reduct and the target tobject gives the desired reduction factor.
I use optimize directly here.
load_ref <- mean(df$load) ## for example
tobject <- .25 ## 25%
optfuc <- \(reduct, ref=load_ref, tol=tobject, data=df) {
  load1 <- with(data, load - load*reduct)
  abs(tol - mean(load1 > ref))
}
(o <- optimize(optfuc, c(0, 1)))
# $minimum
# [1] 0.1935267
#
# $objective
# [1] 0
reduct <- o$minimum
cat(sprintf('reduction: %s%% at target of %s%%',
            formatC(reduct*100, digits=2),
            formatC(tobject*100, digits=2)))
# reduction: 19% at target of 25%
Check:
(with(df, load - load*reduct) > load_ref) |> table() |> proportions()
# FALSE TRUE
# 0.75 0.25
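For this particular objective there is also a direct route without an optimizer: since load*(1 - reduct) > ref is equivalent to load > ref/(1 - reduct), setting ref/(1 - reduct) equal to the (1 - tobject) empirical quantile of the loads gives the reduction in closed form. A sketch reusing load_ref, tobject and df from above (the exact figure depends on the quantile type, and because the empirical distribution is discrete, a whole range of reduct values can satisfy the target):
# load*(1 - reduct) > ref  <=>  load > ref/(1 - reduct)
q <- quantile(df$load, probs = 1 - tobject)
reduct_direct <- 1 - load_ref/q
reduct_direct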

How to construct a sequence with a pattern in R

I would like to construct a sequence of length 50 of the following type:
x[n+1] = 4 * x[n] * (1 - x[n])
For your information, this is the logistic map for r = 4. For the logistic map with parameter r = 4 and an initial state in (0, 1), the attractor is also the interval (0, 1), and the probability measure corresponds to the beta distribution with parameters a = 0.5 and b = 0.5. (The logistic map is a polynomial mapping, equivalently a recurrence relation, of degree 2, often cited as an archetypal example of how complex, chaotic behaviour can arise from very simple non-linear dynamical equations.) How can I do this in R?
There are some ready-to-use solutions on the net. I cite the general solution from mage's blog, where you can find a more detailed description.
logistic.map <- function(r, x, N, M){
  ## r: bifurcation parameter
  ## x: initial value
  ## N: number of iterations
  ## M: number of iteration points to be returned
  z <- 1:N
  z[1] <- x
  for(i in c(1:(N-1))){
    z[i+1] <- r * z[i] * (1 - z[i])
  }
  ## Return the last M iterations
  z[c((N-M):N)]
}
For the OP's example:
logistic.map(4,0.2,50,49)
This isn't really an R question, is it? It's more about basic programming. Anyway, you probably need an accumulator and a value to process.
values <- 0.2              ## this accumulates as a vector, starting with 0.2
xn <- values               ## xn gets the first value
for (it in 2:50) {         ## start the loop from the second iteration
  xn <- 4L*xn*(1L-xn)      ## apply the recurrence
  values <- c(values, xn)  ## append the new value to the vector
}
values
# [1] 0.2000000000 0.6400000000 0.9216000000 0.2890137600 0.8219392261 0.5854205387 0.9708133262 0.1133392473 0.4019738493 0.9615634951 0.1478365599 0.5039236459
# [13] 0.9999384200 0.0002463048 0.0009849765 0.0039360251 0.0156821314 0.0617448085 0.2317295484 0.7121238592 0.8200138734 0.5903644834 0.9673370405 0.1263843622
# [25] 0.4416454208 0.9863789723 0.0537419811 0.2034151221 0.6481496409 0.9122067356 0.3203424285 0.8708926280 0.4497546341 0.9899016128 0.0399856390 0.1535471506
# [37] 0.5198816927 0.9984188732 0.0063145074 0.0250985376 0.0978744041 0.3531800204 0.9137755744 0.3151590962 0.8633353611 0.4719496615 0.9968527140 0.0125495222
# [49] 0.0495681269 0.1884445109
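A more compact alternative, as a sketch assuming the same r = 4 and starting value 0.2: Reduce() with accumulate = TRUE applies the recurrence repeatedly and keeps every intermediate value (the second argument of the step function is just the ignored iteration index):
values <- Reduce(function(x, i) 4*x*(1 - x), x = 1:49, init = 0.2, accumulate = TRUE)
length(values)  # 50
head(values)
# [1] 0.2000000 0.6400000 0.9216000 0.2890138 0.8219392 0.5854205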

Generate multivariate normal r.v.'s with rank-deficient covariance via Pivoted Cholesky Factorization

I'm just beating my head against the wall trying to get a Cholesky decomposition to work in order to simulate correlated price movements.
I use the following code:
cormat <- as.matrix(read.csv("http://pastebin.com/raw/qGbkfiyA"))
cormat <- cormat[,2:ncol(cormat)]
rownames(cormat) <- colnames(cormat)
cormat <- apply(cormat,c(1,2),FUN = function(x) as.numeric(x))
chol(cormat)
#Error in chol.default(cormat) :
# the leading minor of order 8 is not positive definite
cholmat <- chol(cormat, pivot=TRUE)
#Warning message:
# In chol.default(cormat, pivot = TRUE) :
# the matrix is either rank-deficient or indefinite
rands <- array(rnorm(ncol(cholmat)), dim = c(10000,ncol(cholmat)))
V <- t(t(cholmat) %*% t(rands))
#Check for similarity
cor(V) - cormat ## Not all zeros!
#Check the standard deviations
apply(V,2,sd) ## Not all ones!
I'm not really sure how to properly use the pivot = TRUE statement to generate my correlated movements. The results look totally bogus.
Even if I take a simple matrix and try out "pivot", I get bogus results...
cormat <- matrix(c(1,.95,.90,.95,1,.93,.90,.93,1), ncol=3)
cholmat <- chol(cormat)
# No Error
cholmat2 <- chol(cormat, pivot=TRUE)
# No warning... pivot changes column order
rands <- array(rnorm(ncol(cholmat)), dim = c(10000,ncol(cholmat)))
V <- t(t(cholmat2) %*% t(rands))
#Check for similarity
cor(V) - cormat ## Not all zeros!
#Check the standard deviations
apply(V,2,sd) ## Not all ones!
There are two errors in your code:
You did not use the pivoting index to revert the pivoting applied to the Cholesky factor. Note that pivoted Cholesky factorization of a positive semi-definite matrix A computes:
P'AP = R'R
where P is a column pivoting matrix and R is an upper triangular matrix. To recover A from R, we need to apply the inverse of P (i.e., P'):
A = PR'RP' = (RP')'(RP')
A multivariate normal with covariance matrix A is then generated by:
XRP'
where X is multivariate normal with zero mean and identity covariance.
Your generation of X
X <- array(rnorm(ncol(R)), dim = c(10000,ncol(R)))
is wrong. First, it should not have ncol(R) columns but r of them, where r is the numerical rank of A. Second, you are recycling the same ncol(R) draws from rnorm along the whole array, so the resulting matrix is not random at all; consequently cor(X) is never close to an identity matrix. The correct code is:
X <- matrix(rnorm(10000 * r), 10000, r)
As a model implementation of the above theory, consider your toy example:
A <- matrix(c(1,.95,.90,.95,1,.93,.90,.93,1), ncol=3)
We compute the upper triangular factor (suppressing possible rank-deficient warnings) and extract inverse pivoting index and rank:
R <- suppressWarnings(chol(A, pivot = TRUE))
piv <- order(attr(R, "pivot")) ## reverse pivoting index
r <- attr(R, "rank") ## numerical rank
Then we generate X. For a better result, we centre X so that its column means are 0.
X <- matrix(rnorm(10000 * r), 10000, r)
## for best effect, we centre `X`
X <- sweep(X, 2L, colMeans(X), "-")
Then we generate target multivariate normal:
## compute `V = RP'`
V <- R[1:r, piv]
## compute `Y = X %*% V`
Y <- X %*% V
We can verify that Y has (approximately) the target correlation matrix A:
cor(Y)
# [,1] [,2] [,3]
#[1,] 1.0000000 0.9509181 0.9009645
#[2,] 0.9509181 1.0000000 0.9299037
#[3,] 0.9009645 0.9299037 1.0000000
A
# [,1] [,2] [,3]
#[1,] 1.00 0.95 0.90
#[2,] 0.95 1.00 0.93
#[3,] 0.90 0.93 1.00
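As a quick sanity check on the factorization itself, a sketch reusing R, piv and r from above: V = RP' should reproduce A up to numerical error.
V <- R[1:r, piv]  # V = RP'
all.equal(crossprod(V), A, check.attributes = FALSE)  # t(V) %*% V should equal A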

How to calculate returns from a vector of prices?

I have to calculate the return of a vector that gives a historical price series of a stock. The vector is of the form:
a <- c(10.25, 11.26, 14, 13.56)
I need to calculate daily gain/loss (%) - i.e. what is the gain it has from 10.25 to 11.26 then from 11.26 to 14 etc.
Is there a function to calculate this automatically?
Using your sample data, I think you mean the following:
a <- c(10.25, 11.26, 14, 13.56)
> diff(a)/a[-length(a)]
[1] 0.09853659 0.24333925 -0.03142857
diff returns the vector of lagged differences and a[-length(a)] drops the last element of a.
You may find the functions in quantmod relevant for your work:
> require(quantmod)
> Delt(a)
Delt.1.arithmetic
[1,] NA
[2,] 0.09853659
[3,] 0.24333925
[4,] -0.03142857
You can also use the exact relationship that simple returns are equal to the exponential of log returns minus one. Thus, if Prices contains your prices, the following will give you your returns:
Returns = exp(diff(log(Prices))) - 1
Note that this is an exact relationship, rather than the approximate relationship given in the answer by @PBS.
A more detailed example with multiple time series:
############ Vector ############
vPrice <- c(10.25, 11.26, 14, 13.56)
n <- length(vPrice)
# Log returns
log_ret <- diff(log(vPrice)) # or log(vPrice[-1]/vPrice[-n]), since "x[-i]" removes the i'th item of the vector
log_ret
# Simple returns
simple_ret <- diff(vPrice)/vPrice[1:(n-1)] # or diff(vPrice)/vPrice[-n]
simple_ret
############ Multiple time series ############
head(EuStockMarkets)
mPrice <- EuStockMarkets
n <- dim(mPrice)[1] # number of rows
log_ret <- diff(log(mPrice))
head(log_ret)
simple_ret <- diff(mPrice)/mPrice[1:(n-1),]
head(simple_ret)
# Total returns
total_log_ret <- colSums(log_ret) # just a sum for log returns
total_log_ret
total_Simple_ret <- apply(1+simple_ret, 2, prod)-1 # product of (1 + simple return), minus one
total_Simple_ret
##################
# From simple to log returns
all.equal(log(1+total_Simple_ret), total_log_ret) # should be TRUE
# From log to simple returns
all.equal(total_Simple_ret, exp(total_log_ret)-1) # should be TRUE
ret <- diff(log(a))
This will give you the geometric (log) returns. Returns are often modelled as lognormal: the simple return has a lower bound of -100%, since prices are always non-negative, while log(prices) follows a normal distribution, so log returns can fall below -1 (i.e. below -100%).
For the "normal" range of returns, the difference between (P(t+1) - P(t))/P(t) and ln(P(t+1)/P(t)) should be negligible. I hope this helps.
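To make that "negligible difference" concrete, a small sketch with the sample vector from the question (note the gap grows with the size of the move):
a <- c(10.25, 11.26, 14, 13.56)
simple <- diff(a)/a[-length(a)]
logret <- diff(log(a))
round(simple - logret, 4)
# [1] 0.0046 0.0255 0.0005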
Another possibility is the ROC function of the TTR package:
library(TTR)
a <- c(10.25, 11.26, 14, 13.56)
ROC(a, type = "discrete")
## [1] NA 0.09853659 0.24333925 -0.03142857
type = "continuous" (which is also the default) gives log returns:
ROC(a)
## [1] NA 0.09397892 0.21780071 -0.03193305
You can do this:
(PRICE/lag(PRICE) - 1) * 100
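One caveat, since lag() behaves differently across classes: the one-liner above works as intended for an xts series such as those returned by quantmod, where lag(x) gives the previous observation. For a plain numeric vector, dplyr::lag() (an assumption, not part of the original answer) provides the analogous one-period shift:
library(dplyr)  # base stats::lag() does not shift plain vectors this way
a <- c(10.25, 11.26, 14, 13.56)
(a/dplyr::lag(a) - 1) * 100
# [1]        NA  9.853659 24.333925 -3.142857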
