R PortfolioAnalytics chart.EfficientFrontier function

I am trying to use the chart.EfficientFrontier function in the PortfolioAnalytics package in R to chart an efficient frontier object that I have created, but it keeps failing. Basically, I am trying to find a frontier that will minimize annualized standard deviation. Eventually, once I get this working, I would also like to maximize annualized return.
First, I created an annualized standard deviation function using this code:
pasd <- function(R, weights){
  as.numeric(StdDev(R = R, weights = weights) * sqrt(12))   # hardcoded for monthly data
  # as.numeric(StdDev(R = R, weights = weights) * sqrt(4))  # hardcoded for quarterly data
}
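A more general variant would take the scale as an argument rather than hardcoding it (just a sketch; the scale argument is my own addition, not part of the original function):
pasd <- function(R, weights, scale = 12){
  # scale: assumed extra argument, 12 for monthly data, 4 for quarterly
  as.numeric(StdDev(R = R, weights = weights) * sqrt(scale))
}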
I imported a csv file with monthly returns and my portfolio object looks like this:
> prt
**************************************************
PortfolioAnalytics Portfolio Specification
**************************************************
Call:
portfolio.spec(assets = colnames(returns))
Number of assets: 3
Asset Names
[1] "Global REITs" "Au REITs" "Au Util and Infra"
Constraints
Enabled constraint types
- leverage
- long_only
Objectives:
Enabled objective names
- mean
- pasd
Now I successfully create an efficient frontier object using this line:
prt.ef <- create.EfficientFrontier(R = returns, portfolio = prt, type = "DEoptim", match.col = "pasd")
But when I try to plot it I am getting the following error messages.
> chart.EfficientFrontier(prt.ef, match.col="pasd")
Error in StdDev(R = R, weights = weights) :
argument "weights" is missing, with no default
In addition: There were 26 warnings (use warnings() to see them)
Error in StdDev(R = R, weights = weights) :
argument "weights" is missing, with no default
Error in StdDev(R = R, weights = weights) :
argument "weights" is missing, with no default
Error in xlim[2] * 1.15 : non-numeric argument to binary operator
Does anyone know why this is the case? When I use summary(prt.ef) I can see the weights, so why is the chart.EfficientFrontier function failing?

As @WaltS suggested, you need to be consistent in how your functions annualize the mean and risk returns.
But actually, to get annualized statistics you have two options, and you are currently using neither:
1) Run the optimization on monthly data, with the original risk and return functions in the specification. For plotting you can then annualize with:
Port.Anua.Returns=prt.ef$frontier[,1]*12
Port.Anua.StDev=prt.ef$frontier[,2]*12^.5
The weights will be the same for monthly or annualized portfolios.
prt.ef$frontier[,-(1:3)]
2) Transform your monthly returns into annualized returns by multiplying by 12, then run the optimization with the usual procedure; risk and return will already be annualized in prt.ef$frontier.
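A minimal sketch of option 2, assuming a prt specification with the plain mean and StdDev objectives (prt.ef2 and annual.returns are illustrative names):
annual.returns <- 12 * returns   # annualize the monthly returns first
prt.ef2 <- create.EfficientFrontier(R = annual.returns, portfolio = prt,
                                    type = "mean-StdDev", match.col = "StdDev")
head(prt.ef2$frontier)           # mean and StdDev columns are already on an annual scale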
Regarding the jagged line in the efficient frontier: using your portfolio specification I was able to recreate the same behavior. For the following plot I used the edhec data and your specification, with the original mean and StdDev in the objectives:
data(edhec)
returns <- edhec[,1:3]
That behavior must be influenced by the specification or the optimization algorithm you are using. I did the same optimization with solve.QP from the quadprog package; the result is the smooth frontier produced by the code below.
Update
The code is here:
require(quadprog)
# solve.QP solves: min_w (-d^T w + 1/2 w^T D w) s.t. A^T w >= b0
MV_QP <- function(nx, tarRet, Sig = NULL, long_only = FALSE){
  if (is.null(Sig)) Sig <- cov(nx)
  dvec <- rep(0, ncol(Sig))
  meq <- 2                              # both constraints below are equalities
  Amat <- rbind(rep(1, ncol(Sig)),      # weights sum to 1
                apply(nx, 2, mean))     # portfolio mean hits the target return
  bvec <- c(1, tarRet)
  if (long_only) {
    meq <- 1                            # only the target-return equality remains
    Amat <- Amat[-1, ]
    Amat <- rbind(Amat,
                  diag(1, ncol(Sig)),   # w_i >= 0 (long only)
                  rep(1, ncol(Sig)),    # sum(w) >= 0.98
                  rep(-1, ncol(Sig)))   # sum(w) <= 1.02
    bvec <- bvec[-1]
    bvec <- c(bvec,
              rep(0, ncol(Sig)), .98, -1.02)
  }
  sol <- solve.QP(Dmat = Sig, dvec, t(Amat), bvec, meq = meq)$solution
  sol
}
steps <- 50
x <- returns
µ.b <- apply(X = x, 2, FUN = mean)
long_only <- TRUE
range.bl <- seq(from = min(µ.b), to = max(µ.b) * ifelse(long_only, 1, 1.6), length.out = steps)
risk.bl <- t(sapply(range.bl, function(targetReturn) {
  w <- MV_QP(x, targetReturn, long_only = long_only)
  c(sd(x %*% w), w)
}))
weightsl <- round(risk.bl[, -1], 4)
colnames(weightsl) <- colnames(x)
weightsl
risk.bl <- risk.bl[, 1]
rets.bl <- weightsl %*% µ.b
fan <- 12   # annualization factor for monthly data
plot(x = risk.bl * fan^.5, y = rets.bl * fan, col = 2, pch = 21,
     xlab = "Annualized Risk",
     ylab = "Annualized Return", main = "long only EF with solve.QP")

Adding to Robert's comments, the optimization calculation with monthly returns is a quadratic programming problem with linear constraints. When mean is the return objective and StdDev or var is the risk objective, optimize.portfolio and create.EfficientFrontier select the ROI method as the solver, which uses solve.QP, an efficient solver for these sorts of problems. When the risk objective is changed to pasd, these functions don't recognize the problem as QP and so use DEoptim, a general nonlinear solver perhaps better suited to solving nonconvex problems rather than convex QP ones. See Differential Evolution with DEoptim. This seems to be the cause of the jagged efficient frontier.
In order to have create.EfficientFrontier use solve.QP, which is much more efficient and accurate for this type of problem, you can make a custom moment function to compute the mean and variance and then specify it with the argument momentFUN. However, create.EfficientFrontier at least in part uses means computed directly from the returns rather than using mu from momentFUN. To deal with that, multiply the returns by 12 and divide the variance by 12, as shown in the example below.
library(PortfolioAnalytics)
data(edhec)
returns <- edhec[,1:3]
# define moment function
annualized.moments <- function(R, scale = 12, portfolio = NULL){
  out <- list()
  out$mu <- matrix(colMeans(R), ncol = 1)
  out$sigma <- cov(R)/scale
  return(out)
}
# define portfolio
prt <- portfolio.spec(assets=colnames(returns))
prt <- add.constraint(portfolio=prt, type="long_only")
# leverage defaults to weight_sum = 1 so is equivalent to full_investment constraint
prt <- add.constraint(portfolio=prt, type="leverage")
prt <- add.objective(portfolio=prt, type="risk", name="StdDev")
# calculate and plot efficient frontier
prt_ef <- create.EfficientFrontier(R = 12*returns, portfolio = prt, type = "mean-StdDev",
                                   match.col = "StdDev", momentFUN = "annualized.moments", scale = 12)
xlim <- range(prt_ef$frontier[, 2]) * c(1, 1.5)
ylim <- range(prt_ef$frontier[, 1]) * c(.80, 1.05)
chart.EfficientFrontier(prt_ef, match.col = "StdDev", chart.assets = FALSE,
                        labels.assets = FALSE, xlim = xlim, ylim = ylim)
points(with(annualized.moments(12*returns, scale = 12), cbind(sqrt(diag(sigma)), mu)), pch = 19)
text(with(annualized.moments(12*returns, scale = 12), cbind(sqrt(diag(sigma)), mu)),
     labels = colnames(returns), cex = .8, pos = 4)
chart.EF.Weights(prt_ef, match.col="StdDev")
The means and standard deviations of the assets also need to be adjusted and so are plotted outside of chart.EfficientFrontier and shown on the chart below.
At the end of the day it would be simpler, as Robert suggests, to compute the weights for the efficient frontier using the monthly returns, and then compute the portfolio returns and standard deviations from annualized asset means and standard deviations together with the monthly weights, which are the same in both cases (a sketch of that route follows below). However, perhaps this example is useful to show the use of custom moment and objective functions.
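A minimal sketch of that simpler route, assuming prt.ef came from a monthly-returns optimization as in the question, with the frontier weights stored from column 4 onward (as used above); the variable names are illustrative:
w     <- prt.ef$frontier[, -(1:3)]          # monthly weights (same as annual)
mu.a  <- 12 * colMeans(returns)             # annualized asset means
sig.a <- 12 * cov(returns)                  # annualized covariance matrix
ret.a <- w %*% mu.a                         # annualized portfolio returns
sd.a  <- sqrt(diag(w %*% sig.a %*% t(w)))   # annualized portfolio standard deviations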

This doesn't find the reason for the error, but setting the axis limits makes it at least partially work!
prt.ef$frontier   # see the EF
xylims <- apply(prt.ef$frontier[, c(2, 1)], 2, range) * c(.98, 1.01)
chart.EfficientFrontier(prt.ef, match.col = "pasd",
                        main = "Portfolio Optimization",
                        xlim = xylims[, 1], ylim = xylims[, 2])
# or
plot(prt.ef$frontier[, c(2, 1)], col = 2)

OK, so I tried the pasd function that WaltS suggested, and chart.EfficientFrontier seemed to work, but it gave me a jagged line rather than a smooth one.
I have now created an annualized return function using this code:
pamean <- function(R, weights = NULL){
  Return.annualized(apply(as.xts(t(t(R) * weights)), 1, sum))
}
and added this as an objective to my portfolio prt.
> prt
**************************************************
PortfolioAnalytics Portfolio Specification
**************************************************
Call:
portfolio.spec(assets = colnames(returns))
Number of assets: 3
Asset Names
[1] "Global REITs" "Au REITs" "Au Util and Infra"
Constraints
Enabled constraint types
- long_only
- leverage
Objectives:
Enabled objective names
- pamean
- pasd
I then create the efficient frontier again using this line:
> prt.ef <- create.EfficientFrontier(R=returns, portfolio=prt, type="DEoptim", match.col="pasd")
but when I use the summary function I see that only one frontier point has been generated. What does the error message mean, and why was only one point generated?
> summary(prt.ef)
**************************************************
PortfolioAnalytics Efficient Frontier
**************************************************
Call:
create.EfficientFrontier(R = returns, portfolio = prt, type = "DEoptim",
match.col = "pasd")
Efficient Frontier Points: 1
Error in `colnames<-`(`*tmp*`, value = character(0)) :
attempt to set 'colnames' on an object with less than two dimensions

Related

Is it possible to extract the (Root) Mean Squared Error of predictions by crost() in tsintermittent package in R?

Currently, I am forecasting intermittent demand using Croston's method and the Syntetos-Boylan Approximation (SBA). I use the forecast package, specifically the function croston(), to make the predictions with Croston's method, and the tsintermittent package's function crost() with type = "sba" to make the predictions with SBA. To numerically find the optimal value of alpha I have used the accuracy() function from the forecast package, applied to the prediction results of croston(). I cannot, however, apply this function to the output of crost(). Hence, I was looking for a way to extract the (root) mean squared error from the crost() output so I can numerically find the optimal value of alpha. Can someone help me?
N.B. I am aware that the value of alpha can be optimized by crost() itself. However, I want to find one value of alpha for a lot of items, so that alpha does not have to be optimized many times.
N.B.2 An example of how I optimize alpha for croston() is given below. In this case, a corresponds to the alpha we want to optimise:
opt_croston_a <- function(a) {
  rmse <- c()
  for (i in 1:nrow(devices)) {
    df <- total_df %>%
      filter(device == devices[i, ])
    rmse[i] <- accuracy(croston(df$sales, h = 1, alpha = a))[2]
  }
  return(sum(rmse))
}
optimize(opt_croston_a, 0.1, lower = 0.01, upper = 0.99)
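For crost() itself, a minimal sketch, assuming (per the tsintermittent documentation) that its return value includes the in-sample forecasts in the frc.in component, so the RMSE can be computed by hand and fed into the same optimize() pattern; crost_rmse is a hypothetical helper name:
library(tsintermittent)
# Hypothetical helper: in-sample RMSE of an SBA fit with a fixed alpha.
# Assumes crost() returns a list with in-sample forecasts in frc.in.
crost_rmse <- function(y, a) {
  fit <- crost(y, h = 1, w = c(a, a), type = "sba")  # fix both smoothing parameters at a
  sqrt(mean((y - fit$frc.in)^2, na.rm = TRUE))       # drop start-up NAs
}
# Then, mirroring opt_croston_a over a list of item series:
# opt_crost_a <- function(a) sum(sapply(sales_list, crost_rmse, a = a))
# optimize(opt_crost_a, lower = 0.01, upper = 0.99)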

Specifying custom weights for the nonparametric estimate of spatially-varying relative risk in spatstat

Is there a way to specify weights in the relrisk.ppp function in spatstat (version 1.63-3)?
The relrisk.ppp function calls the density.ppp function, which does allow users to specify their own weights.
For example, let us build upon the provided spatstat.data::urkiola data where, instead of individual trees, the locations are tree stands and we have a second numeric mark for the frequency of trees at each point-location:
urkiola_new <- spatstat.data::urkiola
urkiola_new$marks <- data.frame("type" = urkiola_new$marks, "freq" = rpois(urkiola_new$n, 3))
f1 <- spatstat::relrisk(urkiola_new, weights = urkiola_new$marks$freq)
When using the urkiola_new in a call of relrisk, urkiola_new is caught by stopifnot(is.multitype(X)) in relrisk.ppp. I next tried specifying the weights separately as a vector while using the original urkiola data,
f2 <- spatstat::relrisk(urkiola, weights = urkiola_new$marks$freq)
but was caught by an error from the pixellate.ppp function within the internal density.ppp function:
Error in pixellate.ppp(x, ..., padzero = TRUE) : length(weights) == npoints(x) || length(weights) == 1 is not TRUE
The same error occurs when I convert the weights into a list
urkiola_weights <- split(urkiola_new$marks$freq, urkiola_new$marks$type)
f3 <- spatstat::relrisk(urkiola, weights = urkiola_weights)
I suspect there is a way to specify the weights cleverly, but it yet escapes me. Any suggestions or guidance would be helpful, thank you!
The function relrisk.ppp is not currently designed to handle weights. The help entry for relrisk.ppp does not mention weights.
The example above does not work because relrisk.ppp applies density.ppp separately to the sub-patterns of points of each type, and the extra argument weights is the wrong length for these sub-patterns.
I will take this question as a feature request, to add this capability to relrisk.ppp. It should be done soon.
Update: this is now implemented in the development version, spatstat 1.64-0.018, available at the spatstat GitHub repository.
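Under that assumption, the original call from the question should then work directly; a hedged sketch, untested against the CRAN release:
# Assumes the development spatstat (>= 1.64-0.018), where relrisk.ppp is
# expected to accept a weights vector of length npoints(X), like density.ppp.
f1 <- spatstat::relrisk(urkiola_new, weights = urkiola_new$marks$freq)
plot(f1)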

R non-linear model fitting using fitModel function

I want to fit a non-linear model to a real data.
The real data consist of two known numerical vectors: thickness as 'x' and fh as 'y'.
thickness=seq(0.15,2.00,by=0.05)
fh = c(5.17641, 4.20461, 3.31091, 2.60899, 2.23541, 1.97771, 1.88141, 1.62821, 1.50138, 1.51075, 1.40850, 1.26222, 1.09432, 1.13202, 1.12918, 1.10355, 1.11867, 1.09740, 1.08324, 1.05687, 1.19422, 1.22984, 1.34516, 1.19713, 1.25398, 1.29885, 1.33658, 1.31166, 1.40332, 1.39550, 1.37855, 1.41491, 1.59549, 1.56027, 1.63925, 1.72440, 1.74192, 1.82049)
plot(thickness,fh)
This is apparently non-linear, so I am trying to fit the model as the non-linear function
y = x*2/3 + (2 + 2*a)/(3*x)
The variable a is an unknown constant, and I am trying to find the constant a that minimizes the sum of squared errors between the fitted curve and the real data.
I first used a function, fitModel, that I found in a YouTube video, Fitting Functions to Data in R.
library(TIMP)
f=fitModel(fh~thickness^2/3+(2+2*A)/(3*thickness)) #it finds the coefficient 'A'
coef(f) # to represent just the coefficient
However, there's an error
Error in modelspec[[datasetind[i]]] : subscript out of bounds
So, as an alternative, I want to plot 'a' against the sum of squared errors. This time, I am having a hard time finding 'a' and plotting this graph. By manual work I figured out that 'a' is somewhere near 0.2, but this is not a precise value.
It would be helpful if someone could explain either:
Why the fitModel function didn't work or
How to find the value a and plot the graph.
You could try this instead:
yf <- function(a, xv) xv*(2/3) + (2 + 2*a)/(3*xv)   # model prediction
yf(2, thickness)
f <- function(a, y, xv) sum((y - yf(a, xv))^2)      # sum of squared errors
f(2, fh, thickness)
xmin <- optimize(f, c(0, 10), tol = 0.0001, y = fh, xv = thickness)
xmin
plot(thickness, fh)
lines(thickness, yf(xmin$minimum, thickness), col = 3)
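To get the plot of 'a' against the sum of squared errors that the question also asked for, the same f can be evaluated over a grid of candidate values (a.grid is an illustrative name):
a.grid <- seq(0, 1, by = 0.001)                    # candidate values for a
sse <- sapply(a.grid, f, y = fh, xv = thickness)   # SSE at each candidate
plot(a.grid, sse, type = "l", xlab = "a", ylab = "Sum of squared errors")
abline(v = xmin$minimum, col = 3)                  # minimum found by optimize()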

Parameter estimates using FME ODE model fitting in R

I have a system of ODE equations that I am trying to fit to generated data, synthetic or lab. The final product I am interested in is the parameter and its estimated error. We use the R package FME with modCost and modFit. As an example, a system of ODEs may be defined as such:
eqs <- function(time, y, parms, ...) {
  with(as.list(c(parms, y)), {
    dP <- k2*PA - k1*A*P   # concentration of nucleic acid
    dA <- dP               # concentration of free protein
    dPA <- -dP
    list(c(dA, dP, dPA))
  })
}
with parameters k1 and k2 and variables A, P, and PA. I import the data (not shown) and define the cost function used in modFit:
cost <- function(p, data, ...) {
  yy <- p[c("A", "P", "PA")]
  pp <- p[c("k1", "k2")]
  out <- ode(yy, time, eqs, pp)
  modCost(out, data, ...)
}
I set some initial conditions with a parms vector and then do the fitting with
fit <- modFit(f = cost, p = parms, data = dat, weight = "std",
              lower = rep(0, 8), upper = c(600, 100, 600, 0.01, 0.01), method = "Marq")
I then run a final ode to get the generated fits with the best parameters and, Bob's your uncle, estimated parameters. The exact input numbers don't matter; I hope my process outline is legible for those who use this package.
My issue and question is this: I'm a scientist, a physicist, and the error of the estimated parameters is important to report. Can I generate the estimated error from FME somehow, or is there a separate package for that kind of output?
I don't get your point. You can just use:
summary(fit)
to see the Std. Error.
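If the numbers are needed programmatically rather than just printed, they can be pulled out of the summary object; a sketch, assuming summary.modFit stores the coefficient table in its par component (as the FME documentation describes):
s <- summary(fit)
s$par                          # matrix: Estimate, Std. Error, t value, Pr(>|t|)
est <- s$par[, "Estimate"]
se  <- s$par[, "Std. Error"]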

Model fitting with nls.lm in R, "Error: unused argument"

I'm trying to use the nls.lm function in the minpack.lm package to fit a non-linear model to some data from a psychophysics experiment.
I've had a search around and can't find much information about the package, so I have essentially copied the format of the example given on the nls.lm help page. Unfortunately my script is still failing to run, and R is throwing out this error:
Error in fn(par, ...) :
unused argument (observed = c(0.1429, 0.2857, 0.375, 0.3846, 0.4667, 0.6154))
It appears that the script thinks the data I want to fit the model to is irrelevant, which is definitely wrong.
I'm expecting it to fit the model and produce a value of 0.5403 for the free parameter (w).
Any help is greatly appreciated.
I'm making the transfer from Matlab over to R so apologies if my code looks sloppy.
Here's the script.
install.packages("pracma")
require(pracma)
install.packages("minpack.lm")
require(minpack.lm)
# Residual function, uses parameter w (e.g. .23) to predict accuracy error at a given ratio [e.g. 2:1]
residFun = function(w, n) .5 * erfc(abs(n[,1] - n[,2]) / ((sqrt(2)*w) * sqrt((n[,1]^2) + (n[,2]^2))))
# example for residFun
# calculates an error rate of 2.59%
a=matrix(c(2,1),1,byrow=TRUE)
residFun(.23,a)
# Initial guess for parameter to be fitted (w)
parStart=list(w=0.2)
# Recorded accuracies in matrix, 1- gives errors to input into residFun
# i.e. the y-values I want to fit the model
Acc=1-(matrix(c(0.8571,0.7143,0.6250,0.6154,0.5333,0.3846),ncol=6))
# Ratios (converted to proportions) used in testing
# i.e. the points along the x-axis to fit the above data to
Ratios=matrix(c(0.3,0.7,0.4,0.6,0.42,0.58,0.45,0.55,0.47,0.53,0.49,0.51),nrow=6,byrow=TRUE)
# non-linear model fitting, attempting to calculate the value of w using the Levenberg-Marquardt nonlinear least-squares algorithm
output=nls.lm(par=parStart,fn=residFun,observed=Acc,n=Ratios)
# Error message shown after running
# Error in fn(par, ...) :
# unused argument (observed = c(0.1429, 0.2857, 0.375, 0.3846, 0.4667, 0.6154))
The error means you passed a function an argument that it did not expect. ?nls.lm shows that nls.lm has no argument named observed, so it is passed on (through ...) to the function supplied as fn, in your case residFun. However, residFun doesn't expect this argument either, hence the error. You need to redefine this function like this:
# Residual function, uses parameter w (e.g. .23) to predict accuracy error at a given ratio [e.g. 2:1]
residFun <- function(par, observed, n) {
  w <- par$w
  r <- observed - (.5 * erfc(abs(n[,1] - n[,2]) / ((sqrt(2)*w) * sqrt((n[,1]^2) + (n[,2]^2)))))
  return(r)
}
It gives the following result:
> output = nls.lm(par=parStart,fn=residFun,observed=Acc,n=Ratios)
> output
Nonlinear regression via the Levenberg-Marquardt algorithm
parameter estimates: 0.540285874836135
residual sum-of-squares: 0.02166
reason terminated: Relative error in the sum of squares is at most `ftol'.
Why that happened:
It seems that you were inspired by this example in the documentation:
## residual function
residFun <- function(p, observed, xx) observed - getPred(p, xx)
## starting values for parameters
parStart <- list(a = 3, b = -.001, c = 1)
## perform fit
nls.out <- nls.lm(par = parStart, fn = residFun, observed = simDNoisy,
                  xx = x, control = nls.lm.control(nprint = 1))
Note that observed is an argument of residFun here.
