R - Modelling Multivariate GARCH (rugarch and ccgarch)

First time asking a question here, so I'll do my best to be explicit, but let me know if I should provide more info! Also, this is a long question... hopefully simple to solve for someone ;)! Using R, I'm modelling multivariate GARCH models based on a paper (Manera et al. 2012).
I model the Constant Conditional Correlation (CCC) and Dynamic Conditional Correlation (DCC) models with external regressors in the mean equations, using R version 3.0.1 with package "rugarch" version 1.2-2 for the univariate GARCH with external regressors, and the "ccgarch" package (version 0.2.0-2) for the CCC/DCC models. (I'm currently looking into the "rmgarch" package, but it seems to cover only DCC and I need the CCC model too.)
My problem is in the mean equations of my models. In the paper mentioned above, the parameter estimates of the mean equation change between the CCC and DCC models, and I don't know how to reproduce that in R...
(I'm currently looking on Google and into Tsay's book "Analysis of Financial Time Series" and Engle's book "Anticipating Correlations" to find my mistake.)
What I mean by "my mean equations don't change between CCC and DCC models" is the following: I specify the univariate GARCH for my n=5 time series with the package rugarch. Then I take the estimated parameters of the GARCH (ARCH + GARCH terms) and use them for both the CCC and DCC functions eccc.sim() and dcc.sim(). Then, from the eccc.estimation() and dcc.estimation() functions, I can retrieve the estimates for the variance equations as well as the correlation matrices, but not for the mean equation.
I post the R code (a reproducible version and my original one) for the univariate models and the CCC model only. Thank you already for reading my post!
Note: in the code below, "data.repl" is a "zoo" object of dim 843x22 (9 daily commodities returns series plus explanatory-variable series). The multivariate GARCH is for 5 series only.
Reproducible code:
# libraries:
library(rugarch)
library(ccgarch)
library(quantmod)
library(psych) # for corr.test(), used below
# Creating fake data:
dataRegr <- matrix(rep(rnorm(3149, 11, 1),1), ncol=1, nrow=3149)
dataFuelsLag1 <- matrix(rep(rnorm(3149, 24, 8),2), ncol=2, nrow=3149)
#S&P 500 via quantmod and Yahoo Finance
T0 <- "2000-06-23"
T1 <- "2012-12-31"
getSymbols("^GSPC", src="yahoo", from=T0, to=T1)
sp500.close <- GSPC[,"GSPC.Close"]
getSymbols("UBS", src="yahoo", from=T0, to=T1)
ubs.close <- UBS[,"UBS.Close"]
dataReplic <- merge(sp500.close, ubs.close, all=TRUE)
dataReplic[which(is.na(dataReplic[,2])),2] <- 0 #replace NA
### (G)ARCH modelling ###
#########################
# External regressors: macrovariables and all fuels+biofuel Working's T index
ext.regr.ext <- dataRegr
regre.fuels <- cbind(dataFuelsLag1, dataRegr)
### spec of GARCH(1,1) spec with AR(1) ###
garch11.fuels <- as.list(1:2)
for(i in 1:2){
  garch11.fuels[[i]] <- ugarchspec(mean.model = list(armaOrder=c(1,0),
                                   external.regressors = as.matrix(regre.fuels[,-i])))
}
### fit of GARCH(1,1) AR(1) ###
garch11.fuels.fit <- as.list(1:2)
for(i in 1:2){
  garch11.fuels.fit[[i]] <- ugarchfit(garch11.fuels[[i]], dataReplic[,i])
}
##################################################################
#### CCC fuels: with external regressors in the mean equation ####
##################################################################
nObs <- dim(dataReplic)[1] # number of observations (dataReplic, not data.repl, in this reproducible version)
coef.unlist <- sapply(garch11.fuels.fit, coef)
cccFuels.a <- rep(0.1, 2)
cccFuels.A <- diag(coef.unlist[6,]) # alpha1 (ARCH) estimates on the diagonal
cccFuels.B <- diag(coef.unlist[7,]) # beta1 (GARCH) estimates on the diagonal
cccFuels.R <- corr.test(as.data.frame(dataReplic), as.data.frame(dataReplic))$r
# model=extended (Jeantheau (1998))
ccc.fuels.sim <- eccc.sim(nobs = nObs, a=cccFuels.a, A=cccFuels.A,
B=cccFuels.B, R=cccFuels.R, model="extended")
ccc.fuels.eps <- ccc.fuels.sim$eps
ccc.fuels.est <- eccc.estimation(a=cccFuels.a, A=cccFuels.A,
B=cccFuels.B, R=cccFuels.R,
dvar=ccc.fuels.eps, model="extended")
ccc.fuels.condCorr <- round(corr.test(ccc.fuels.est$std.resid,
ccc.fuels.est$std.resid)$r,digits=3)
My original code:
### (G)ARCH modelling ###
#########################
# External regressors: macrovariables and all fuels+biofuel Working's T index
ext.regr.ext <- as.matrix(data.repl[-1,c(10:13, 16, 19:22)])
regre.fuels <- cbind(fuel.lag1, ext.regr.ext) #fuel.lag1 is the pre-lagged series
### spec of GARCH(1,1) spec with AR(1) ###
garch11.fuels <- as.list(1:5)
for(i in 1:5){
  garch11.fuels[[i]] <- ugarchspec(mean.model = list(armaOrder=c(1,0),
                                   external.regressors = as.matrix(regre.fuels[,-i])))
} # regre.fuels[,-i] => "-i" because I model an AR(1) for each mean equation
### fit of GARCH(1,1) AR(1) ###
garch11.fuels.fit <- as.list(1:5)
for(i in 1:5){
  j <- i
  if(j==5){j <- 7} # because 5th "fuels" is actually column #7 in data.repl
  garch11.fuels.fit[[i]] <- ugarchfit(garch11.fuels[[i]], as.matrix(data.repl[-1,j]))
}
#fuelsLag1.names <- paste(cmdty.names[fuels.ind], "(-1)")
fuelsLag1.names <- cmdty.names[fuels.ind]
rowNames.ext <- c("Constant", fuelsLag1.names, "Working's T Gasoline", "Working's T Heating Oil",
"Working's T Natural Gas", "Working's T Crude Oil",
"Working's T Soybean Oil", "Junk Bond", "T-bill",
"SP500", "Exch.Rate")
ic.n <- c("Akaike", "Bayes")
garch11.ext.univSpec <- univ.spec(garch11.fuels.fit, ols.fit.ext, rowNames.ext,
rowNum=c(1:15), colNames=cmdty.names[fuels.ind],
ccc=TRUE)
##################################################################
#### CCC fuels: with external regressors in the mean equation ####
##################################################################
# From my GARCH(1,1)-AR(1) model, I extract ARCH and GARCH
# in order to model a CCC GARCH model:
nObs <- length(data.repl[-1,1])
coef.unlist <- sapply(garch11.fuels.fit, coef)
cccFuels.a <- rep(0.1, length(fuels.ind))
cccFuels.A <- diag(coef.unlist[17,])
cccFuels.B <- diag(coef.unlist[18, ])
#based on Engle(2009) book, page 31:
cccFuels.R <- corr.test(data.repl[,fuels.ind], data.repl[,fuels.ind])$r
# model="extended" (Jeantheau (1998)): "allow the squared errors and variances
# of the series to affect the dynamics of the individual conditional variances"
ccc.fuels.sim <- eccc.sim(nobs = nObs, a=cccFuels.a, A=cccFuels.A,
B=cccFuels.B, R=cccFuels.R, model="extended")
ccc.fuels.eps <- ccc.fuels.sim$eps
ccc.fuels.est <- eccc.estimation(a=cccFuels.a, A=cccFuels.A,
B=cccFuels.B, R=cccFuels.R,
dvar=ccc.fuels.eps, model="extended")
ccc.fuels.condCorr <- round(corr.test(ccc.fuels.est$std.resid,
ccc.fuels.est$std.resid)$r,digits=3)
colnames(ccc.fuels.condCorr) <- cmdty.names[fuels.ind]
rownames(ccc.fuels.condCorr) <- cmdty.names[fuels.ind]
lowerTri(ccc.fuels.condCorr, rep=NA)

Are you aware that there is a whole package, rmgarch, for multivariate GARCH models?
Per its DESCRIPTION, it covers
Feasible multivariate GARCH models including DCC, GO-GARCH and
Copula-GARCH.
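For the DCC side, a minimal sketch with rmgarch might look like the following (dccspec(), dccfit() and multispec() are the package's actual entry points; the two-series dataReplic object from the question is assumed, and the spec details are illustrative):
library(rugarch)
library(rmgarch)
# one univariate GARCH(1,1)-AR(1) spec, replicated for both series
uspec <- multispec(replicate(2, ugarchspec(mean.model = list(armaOrder = c(1, 0)))))
# DCC(1,1) layered on top of the univariate specs
dcc.spec <- dccspec(uspec = uspec, dccOrder = c(1, 1), distribution = "mvnorm")
dcc.fit <- dccfit(dcc.spec, data = dataReplic)
coef(dcc.fit) # includes the mean-equation parameters of each series
Note that coef(dcc.fit) returns the univariate mean-equation estimates alongside the correlation parameters, which is the part the ccgarch workflow above leaves out.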

Well, I hope this is not too late. Here is what I found in the rmgarch manual: "the CCC model is calculated using a static GARCH copula (Normal) model".
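Building on that, a sketch of the static (CCC-type) copula-GARCH fit in rmgarch could be (cgarchspec() and cgarchfit() are real rmgarch functions; setting time.varying = FALSE is what keeps the correlation constant, and the remaining settings are illustrative assumptions):
library(rugarch)
library(rmgarch)
# same univariate specs as in the DCC sketch above
uspec <- multispec(replicate(2, ugarchspec(mean.model = list(armaOrder = c(1, 0)))))
# static Normal copula over the univariate GARCH fits ~ CCC
cgarch.spec <- cgarchspec(uspec = uspec,
                          distribution.model = list(copula = "mvnorm",
                                                    method = "Kendall",
                                                    time.varying = FALSE,
                                                    transformation = "parametric"))
cgarch.fit <- cgarchfit(cgarch.spec, data = dataReplic)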

Related

Fit a copula model in R

I want to accomplish the task of creating an optimal portfolio of stocks whose returns are modelled using copulas.
I have data: returns of 4 stocks:
s1 <- read.csv('s1.csv',header=F)$V2
s2 <- read.csv('s2.csv',header=F)$V2
s3 <- read.csv('s3.csv',header=F)$V2
s4 <- read.csv('s4.csv',header=F)$V2
Then I tried to fit a t-copula and plot the density:
library(copula) # tCopula(), pobs(), fitCopula()
t.cop <- tCopula(dim=4)
set.seed(500)
m <- pobs(as.matrix(cbind(s1,s2,s3,s4)))
fit <- fitCopula(t.cop,m,method='ml')
coef(fit)
rho <- coef(fit)[1]
df <- coef(fit)[2]
persp(tCopula(dim=2, rho, df=df), dCopula) # bivariate copula with the fitted parameters, for plotting
But I can't understand how to build other types of copulas (vine copulas, for example). And how can I find an optimal portfolio?
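For the vine-copula part, a minimal sketch with the VineCopula package might be the following (RVineStructureSelect() and RVineSim() are that package's actual functions; reusing the pseudo-observations m from above is an assumption on my part):
library(VineCopula)
# select a regular vine structure and pair-copula families on the pseudo-observations
rvine.fit <- RVineStructureSelect(m, familyset = NA) # NA = consider all families
rvine.fit # inspect the selected structure and pair-copula families
# simulate from the fitted vine, e.g. as input for portfolio Monte Carlo
sim <- RVineSim(1000, rvine.fit)
Portfolio optimisation itself is a separate step: one common route is to simulate returns from the fitted copula and then optimise the weights over the simulated scenarios.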

Standard errors of impacts in a spatial regression lagsarlm

I am using spatial lag and Durbin regression models and I would like to estimate the standard errors of the impacts. Any ideas on how to do this?
Reproducible example below using a Durbin model:
# libraries (in recent versions, lagsarlm()/impacts() live in spatialreg rather than spdep)
library(spdep)
library(spatialreg)
# data
data(oldcol)
# neighbours lists
lw <- nb2listw(COL.nb, style="W")
# regression
fit_durb <- lagsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, type="Durbin",
listw=lw, method="eigen",
zero.policy=T, na.action="na.omit")
# power traces
W <- as(lw, "CsparseMatrix")
trMC <- trW(W, type="MC", listw = lw)
# Impacts
imp <- summary(impacts(fit_durb, tr=trMC, R=1000), zstats=TRUE, short=TRUE)
You should be able to use the MC samples stored in the imp object to get the standard errors, for instance:
test1 <- lapply(imp$sres, function(x){apply(x, 2, mean)})
test2 <- lapply(imp$sres, function(x){apply(x, 2, sd)})
test1$direct/test2$direct
gives the same z values as returned by imp.
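(Here test1 holds the means of the simulated impacts and test2 their standard deviations, so test2$direct, test2$indirect and test2$total are the standard errors of the direct, indirect and total impacts, respectively.)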

Cross Validation K-Fold with Forecast Package in R

I have created a model in R using the forecast package.
My source for learning this is here:
https://robjhyndman.com/hyndsight/dailydata/
I am using the last section, which includes Fourier series, as such:
y <- ts(x, frequency=7)
z <- fourier(ts(x, frequency=365.25), K=5)
zf <- fourier(ts(x, frequency=365.25), K=5, h=100)
fit <- auto.arima(y, xreg=cbind(z,holiday), seasonal=FALSE)
fc <- forecast(fit, xreg=cbind(zf,holidayf), h=100)
After I create this model, is there a way I can do k-fold cross-validation to determine the error and adjusted error?
I know how to do it with a generalized linear model as such:
library(boot)
lm1 <- glm(ValuePerSqFt ~ Units + SqFt + Boro, data = housing)
lm1cv <- cv.glm(housing, lm1, K=5)
lm1cv$delta
[1] 1870.31 1869.352
This shows the error and adjusted error.
Is there a function in the forecast package that can do this and help me compare the accuracy of this model with the glm model?
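One option may be tsCV() from the forecast package (tsCV() is a real function in recent forecast versions; the wrapper below is a minimal sketch that drops the Fourier/holiday regressors for brevity, so it is not a drop-in replacement for the model above):
library(forecast)
# forecast function that re-fits auto.arima on each training window
far <- function(x, h) forecast(auto.arima(x), h = h)
# rolling-origin cross-validation errors at horizon 1
e <- tsCV(y, far, h = 1)
sqrt(mean(e^2, na.rm = TRUE)) # cross-validated RMSE
This is rolling-origin evaluation rather than a literal k-fold split, which is generally the recommended way to cross-validate time-series models.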

glmmPQL in R, se.fit not providing SE values

When using predict() on an object returned by glmmPQL (MASS package in R), I appear not to be able to return the standard errors. Here's a representative example of my workflow with some dummy data:
#### simulate some representative data
set.seed(1986)
dep <- rbinom(200, 1, .5) # dependent binomial variable
set.seed(1987)
ind <- rnorm(200) # Gaussian independent variable
set.seed(1988)
ran <- rep(1:5, 40) # random factor
##### use PQL to run binomial GLMM
library(MASS) # glmmPQL() lives in MASS (it calls nlme internally)
anTest <- glmmPQL(dep~ind, random=~1|ran, family=binomial)
##### specify values of *ind* at which to predict. expand.grid() is overkill here...
newData <- expand.grid(ind=seq(min(ind), max(ind), length.out=100))
#### and generate predictions
pred <- predict(anTest, newdata=newData, type="response", level=0, se.fit=TRUE)
(newData <- data.frame(newData, fit=pred))
However, as you can see, even though se.fit is set to TRUE, the function returns only the predictions. What's going on here? I've tried this with simulated Poisson and Gaussian data and I still get no standard errors. Help please!
I'm running RStudio v0.98.490 on Apple OS X 10.9.1.
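For what it's worth, predict.lme (which glmmPQL objects dispatch to) silently ignores se.fit, so a common workaround is to build the standard errors yourself from the fixed-effects design matrix. A sketch, assuming the population-level predictions (level = 0) from the question:
library(nlme) # fixef()
# design matrix for the fixed effects at the new data points
X <- model.matrix(~ ind, data = newData)
# prediction and standard error on the link (logit) scale
fit.link <- as.vector(X %*% fixef(anTest))
se.link <- sqrt(diag(X %*% vcov(anTest) %*% t(X)))
# back-transform to the response scale with approximate 95% limits
newData$fit <- plogis(fit.link)
newData$lwr <- plogis(fit.link - 1.96 * se.link)
newData$upr <- plogis(fit.link + 1.96 * se.link)
These intervals reflect only the fixed-effects uncertainty, not the random-effect variance.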

Calculate confidence intervals for model averaged data using shrinkage in R

I'm trying to run a nest survival model using the logistic-exposure method based on Shaffer, 2004. I have a range of parameters and wish to compare all possible models and then estimate model-averaged parameters using shrinkage as in Burnham and Anderson, 2002. However, I am having trouble figuring out how to estimate the confidence intervals for the shrinkage adjusted parameters.
Is it possible to estimate confidence intervals for the model-averaged parameters estimated using shrinkage? I can easily extract the mean estimates for the model-averaged parameters with shrinkage using model.average$coef.shrinkage but am unclear how to get the corresponding confidence intervals.
Any help is gratefully appreciated. I'm currently working with the MuMIn package as I get errors with AICcmodavg regarding the link function.
Below is a simplified version of the code I'm using:
library(MuMIn)
# Logistical Exposure Link Function
# See Shaffer, T. 2004. A unifying approach to analyzing nest success.
# Auk 121(2): 526-540.
logexp <- function(days = 1)
{
  require(MASS)
  linkfun <- function(mu) qlogis(mu^(1/days))
  linkinv <- function(eta) plogis(eta)^days
  mu.eta <- function(eta) days * plogis(eta)^(days-1) *
    .Call("logit_mu_eta", eta, PACKAGE = "stats")
  valideta <- function(eta) TRUE
  link <- paste("logexp(", days, ")", sep="")
  structure(list(linkfun = linkfun, linkinv = linkinv,
                 mu.eta = mu.eta, valideta = valideta, name = link),
            class = "link-glm")
}
# randomly generate data
nest.data <- data.frame(egg=rep(1,100), chick=runif(100),
                        exposure=trunc(rnorm(100,113,10)),
                        density=rnorm(100,0,1), height=rnorm(100,0,1))
nest.data$chick[nest.data$chick<=0.5] <- 0
nest.data$chick[nest.data$chick!=0] <- 1
# run global logistic exposure model
glm.logexp <- glm(chick/egg ~ density * height, family=binomial(logexp(days=nest.data$exposure)), data=nest.data)
# evaluate all possible models
model.set <- dredge(glm.logexp)
# model average 95% confidence set and estimate parameters using shrinkage
mod.avg <- model.avg(model.set, beta=TRUE)
(mod.avg$coef.shrinkage)
Any ideas on how to extract/generate the corresponding confidence intervals?
Thanks
Amy
After pondering this for a while, I have come up with the following solution, based on equation 5 in Lukacs, P. M., Burnham, K. P., & Anderson, D. R. (2009). Model selection bias and Freedman's paradox. Annals of the Institute of Statistical Mathematics, 62(1), 117–125. Any comments on its validity would be appreciated.
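For reference, the estimator variance in that equation is a weighted sum over the R models (my transcription, so worth checking against the paper):

$$\widehat{\operatorname{var}}(\hat{\theta}) = \sum_{i=1}^{R} w_i \left[ \widehat{\operatorname{var}}(\hat{\theta}_i \mid g_i) + (\hat{\theta}_i - \hat{\bar{\theta}})^2 \right]$$

where $w_i$ is the Akaike weight of model $g_i$, $\hat{\theta}_i$ the estimate under model $g_i$, and $\hat{\bar{\theta}}$ the shrinkage (model-averaged) estimate. The double loop in the code computes this sum for each coefficient.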
The code follows on from that above:
# MuMIn generated shrinkage estimate
shrinkage.coef <- mod.avg$coef.shrinkage
# beta parameters for each variable/model combination
coef.array <- mod.avg$coefArray
coef.array <- replace(coef.array, is.na(coef.array), 0) # replace NAs with zeros
# generate empty dataframe for estimates
shrinkage.estimates <- data.frame(shrinkage.coef,variance=NA)
# calculate shrinkage-adjusted variance based on Lukacs et al, 2009
for(i in 1:dim(coef.array)[3]){
  input <- data.frame(coef.array[,,i], weight=model.set$weight)
  variance <- rep(NA, nrow(input))
  for (j in 1:nrow(input)){ # loop over models (rows), not columns
    variance[j] <- input$weight[j] * (input$Std..Err[j]^2 +
                     (input$Estimate[j] - shrinkage.estimates$shrinkage.coef[i])^2)
  }
  shrinkage.estimates$variance[i] <- sum(variance)
}
# calculate confidence intervals (note: use the standard error, i.e. sqrt(variance))
shrinkage.estimates$se <- sqrt(shrinkage.estimates$variance)
shrinkage.estimates$lci <- shrinkage.estimates$shrinkage.coef - 1.96*shrinkage.estimates$se
shrinkage.estimates$uci <- shrinkage.estimates$shrinkage.coef + 1.96*shrinkage.estimates$se
