R - predicting simple dyn model with one lag term

I'm trying to predict a simple lagged time series regression with the dyn library in R. This question was a helpful starting point, but I'm getting some weird behaviour that I'm hoping someone can explain.
Here's a minimum working example.
library(dyn)
# Initial data
y.orig <- arima.sim(model=list(ar=c(.9)),n=10)
x1.orig <- rnorm(10)
data <- cbind(y=y.orig, x1=x1.orig)
# This model, with a single lag term, predicts from t=2
mod1 <- dyn$lm(y ~ lag(y, -1), data)
y.new <- window(y.orig, end=end(y.orig) + c(5,0), extend=TRUE)
newdata1 <- cbind(y=y.new)
predict(mod1, newdata1)
# This one, with a lag plus another predictor, predicts from t=1 on
mod2 <- dyn$lm(y ~ lag(y, -1) + x1, data)
y.new <- window(y.orig, end=end(y.orig) + c(5,0), extend=TRUE)
x1.new <- c(x1.orig, rnorm(5))
newdata2 <- cbind(y=y.new, x1=x1.new)
predict(mod2, newdata2)
Why is there a difference between the two? Can anyone suggest how to predict my mod1 using dyn? Thanks in advance.

Both mod1 and mod2 start predicting at t=2. The prediction vector for mod2 starts at t=1, but its first value is NA. As to why one starts at 2 and the other at 1: predict merges the variables on the right-hand side of the formula. In the case of mod1, lag(y, -1) starts at t=2 since y starts at t=1. In the case of mod2, merging lag(y, -1) and x1 gives a series that starts at t=1 (since x1 starts at t=1). Try this, which does not involve dyn:
> start(with(as.list(newdata1), merge.zoo(lag(y, -1))))
[1] 2
> start(with(as.list(newdata2), merge.zoo(lag(y, -1), x1)))
[1] 1
If we wanted predict(mod1, newdata1) to start at t=1, we could add our own Intercept column and remove the default intercept to avoid duplication. That forces the prediction to start at 1, since the right-hand side now contains a series that starts at 1:
data.b <- cbind(y=y.orig, x1=x1.orig, Intercept = 1)
mod.b <- dyn$lm(y ~ Intercept + lag(y, -1) - 1, data.b)
newdata.b <- cbind(Intercept = 1, y = y.new)
predict(mod.b, newdata.b)
Regarding the second question, if you want to predict mod1 then use fitted(mod1).
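For example, a minimal illustration using mod1 from above:
fitted(mod1) # in-sample predictions for t = 2, ..., 10, the observations actually used in the fit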
There seems to be a lurking third question about how this all works, so maybe this clarifies it. All dyn does is align the time series in the formula; after that, lm and predict run as usual. For example, if we create an aligned model frame using dyn$model.frame, everything else can be done with ordinary lm and ordinary predict, and dyn is not involved from that point onwards. Below, mod1a is similar to mod1 from the question except that it runs an ordinary lm on the aligned model frame. If you understand mod1a and its predict, then mod1 and its predict work the same way.
## mod1 and mod1a are similar
# from code in the question
mod1 <- dyn$lm(y ~ lag(y, -1), data = data)
mod1
# redo it using a plain lm by applying dyn to model.frame
mf <- dyn$model.frame(y ~ lag(y, -1), data = data)
mod1a <- lm(y ~ `lag(y, -1)`, mf)
mod1a
## the two predicts below are similar
# the 1 ensures it's an mts rather than a ts but is otherwise not used
newdata1 <- cbind(y=y.new, 1)
predict(mod1, newdata1)
newdata1a <- cbind(1, `lag(y, -1)` = lag(y.new, -1))
predict(mod1a, newdata1a)

Related

Estimating multiple OLS with AR residuals

I am new to modeling in R, so I'm stumbling a bit...
I have a model in Eviews, which I have to translate to R and make further upgrades.
The model is a multiple OLS regression with AR(1) residuals.
I implemented it like this:
model1 <- lm(y ~ x1 + x2 + x3, data)
data$e <- dplyr::lag(residuals(model1), 1)
model2 <- lm(y ~ x1 + x2 + x3 + e, data)
My issue is the same as in this thread, and I expected it: while the parameter estimates are similar, they are different enough that I cannot use this approach.
I am planning on using arima from the stats package, but the problem is the implementation. How do I put an AR(1) on the residuals while keeping the other variables as they are?
Provided I understood you correctly, you can supply external regressors to your arima model through the xreg argument.
You don't provide sample data so I don't have anything to play with, but your model should translate to something like
model <- arima(data$y, xreg = as.matrix(data[, c("x1", "x2", "x3")]), order = c(1, 0, 0))
Explanation: The first argument data$y contains your time series data. xreg contains your external regressors as a matrix, with every column containing as many observations for that regressor as you have time points. order = c(1, 0, 0) defines an AR(1) model.
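For instance, a minimal sketch with simulated data (the column names x1, x2, x3 and y simply mirror the model above; substitute your own data frame):
set.seed(123)
n <- 200
sim <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
# AR(1) errors plus a linear signal in the regressors
e <- arima.sim(model = list(ar = 0.5), n = n)
sim$y <- 1 + 0.5*sim$x1 - 0.3*sim$x2 + 0.2*sim$x3 + e
# regression with AR(1) errors: the external regressors go into xreg
model <- arima(sim$y, xreg = as.matrix(sim[, c("x1", "x2", "x3")]), order = c(1, 0, 0))
model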

R: one regression model for 2 different data sets to prepare for waldtest

I have two different data sets. Each of them represents one portfolio of my two portfolios.
y(p) as dependent variable and x1(p), x2(p),x3(p),x4(p) as independent variables.
(p) indicates a portfolio-specific value. Column 1 of each variable represents portfolio 1 and column 2 represents portfolio 2.
The regression equation is:
y(p) = α(p) + β1(p)*x1(p) + β2(p)*x2(p) + β3(p)*x3(p) + β4(p)*x4(p)
What I did so far is implement a separate regression model for each portfolio in R:
lm1 <- lm(y[,1]~x1[,1]+x2[,1]+x3[,1]+x4[,1])
lm2 <- lm(y[,2]~x1[,2]+x2[,2]+x3[,2]+x4[,2])
My objective is to compare the intercepts of the two regression models. Within the scope of this comparison I need to test the joint significance of these intercepts. As far as I can tell, using the Wald test should be appropriate.
If I use the waldtest function from the lmtest package, it does not work.
Obviously, this is because the response variable is not the same in both models.
library(lmtest)
waldtest(lm1,lm2)
In waldtest.default(object, ..., test = match.arg(test)) :
models with response "y[, 2]" removed because response differs from model 1
None of the workarounds I have tried so far worked either, e.g. R: Waldtest: "Error in solve.default(vc[ovar, ovar]) : 'a' is 0-diml"
My guess is that the regression needs to be done in a different way to fix the problems regarding the waldtest.
So that leads to my question:
Is there a way to do the regression in one model that still generates portfolio-specific intercepts and coefficients? (I assume that this would fix the problems with the waldtest function.)
Any advice or suggestion will be appreciated.
The following data can be used for a reproducible example:
y=matrix(rnorm(10),ncol=2)
x1=matrix(rnorm(10),ncol=2)
x2=matrix(rnorm(10),ncol=2)
x3=matrix(rnorm(10),ncol=2)
x4=matrix(rnorm(10),ncol=2)
lm1 <- lm(y[,1]~x1[,1]+x2[,1]+x3[,1]+x4[,1])
lm2 <- lm(y[,2]~x1[,2]+x2[,2]+x3[,2]+x4[,2])
library(lmtest)
waldtest(lm1,lm2)
Best regards,
Simon
Here are three ways to test the equality of the intercepts. The second one is an implementation of the accepted answer to this question, while the other two are implementations of the second answer to the aforementioned question under different assumptions.
Let
n <- 5
y <- matrix(rnorm(10), ncol = 2)
x <- matrix(rnorm(10), ncol = 2)
First, we may indeed perform the test with only a single model. For that purpose we create a new vector Y that concatenates y[, 1] and y[, 2]. As for the independent variables, we create a block-diagonal matrix with the regressors of one model in the upper-left block and those of the other model in the lower-right block. Lastly, we create a group factor indicating which of the two underlying models each observation belongs to. Hence,
library(Matrix)
Y <- c(y)
X <- as.matrix(bdiag(x[, 1], x[, 2]))
G <- factor(rep(0:1, each = n))
Now the unrestricted model is
m1 <- lm(Y ~ G + X - 1)
while the restricted one is
m2 <- lm(Y ~ X)
Testing for equality of the intercepts gives
library(lmtest)
waldtest(m1, m2)
# Wald test
#
# Model 1: Y ~ G + X - 1
# Model 2: Y ~ X
# Res.Df Df F Pr(>F)
# 1 6
# 2 7 -1 0.5473 0.4873
so, as expected, we cannot reject their equality. A problem with this solution, however, is that it amounts to estimating the two models separately while assuming that the errors have the same variance in both. Also, we do not allow for any cross-correlation between the errors.
Second, we can relax the assumption of identical error variances by estimating two separate models and employing a Z-test, as follows.
M1 <- lm(y[, 1] ~ x[, 1])
M2 <- lm(y[, 2] ~ x[, 2])
Z <- unname((coef(M1)[1] - coef(M2)[1]) / sqrt(coef(summary(M1))[1, 2]^2 + coef(summary(M2))[1, 2]^2))
2 * pnorm(-abs(Z))
# [1] 0.5425736
leading to the same conclusion.
Lastly, we can employ SUR (seemingly unrelated regressions), which allows for model-dependent error variances as well as contemporaneous cross-dependence of the errors (that may not be necessary in your case; it depends on what kind of data you are using). For that we can use the systemfit package as follows:
library(systemfit)
eq1 <- y[, 1] ~ x[, 1]
eq2 <- y[, 2] ~ x[, 2]
m <- systemfit(list(eq1, eq2), method = "SUR")
In this case we are also able to perform the Wald test (the linearHypothesis generic used below comes from the car package):
library(car)
R <- matrix(c(1, 0, -1, 0), nrow = 1) # Restriction matrix
linearHypothesis(m, R, test = "Chisq")
# Linear hypothesis test (Chi^2 statistic of a Wald test)
#
# Hypothesis:
# eq1_(Intercept) - eq2_(Intercept) = 0
#
# Model 1: restricted model
# Model 2: m
#
# Res.Df Df Chisq Pr(>Chisq)
# 1 7
# 2 6 1 0.3037 0.5816

GLMNET prediction with intercept

I have two questions about prediction using GLMNET - specifically about the intercept.
I made a small example of train data creation, GLMNET estimation and prediction on the train data (which I will later change to Test data):
# Train data creation
Train <- data.frame('x1'=runif(10), 'x2'=runif(10))
Train$y <- Train$x1-Train$x2+runif(10)
# From Train data frame to x and y matrix
y <- Train$y
x <- as.matrix(Train[,c('x1','x2')])
# Glmnet model
library(glmnet)
Model_El <- glmnet(x,y)
Cv_El <- cv.glmnet(x,y)
# Prediction
Test_Matrix <- model.matrix(~.-y,data=Train)[,-1]
Test_Matrix_Df <- data.frame(Test_Matrix)
Pred_El <- predict(Model_El,newx=Test_Matrix,s=Cv_El$lambda.min,type='response')
I want to have an intercept in the estimated formula. This code gives an error concerning the dimensions of Test_Matrix unless I remove its (Intercept) column, as in
Test_Matrix <- model.matrix(~.-y,data=Train)[,-1]
My questions are:
Is this the right way to get the prediction when I want the prediction formula to include the intercept?
If it is the right way: why do I have to remove the intercept from the matrix?
Thanks in advance.
The matrix x you were feeding into the glmnet function doesn't contain an intercept column. Therefore, you should respect this format when constructing your test matrix: i.e. just do model.matrix(y ~ . - 1, data = Train).
By default, an intercept is fit in glmnet (see the intercept parameter of the glmnet function). Therefore, when you call glmnet(x, y), you are effectively doing glmnet(x, y, intercept = TRUE). Thus, even though your x matrix has no intercept column, one was fit for you.
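A quick way to see this (a sketch using Model_El and Cv_El from the question):
coef(Model_El, s = Cv_El$lambda.min) # the first row, (Intercept), is the intercept glmnet fitted internally even though x has no column of ones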
If you want to predict a model with an intercept, you have to fit a model with an intercept. Your code used the model matrix x <- as.matrix(Train[,c('x1','x2')]), which is intercept-free; therefore, if you supply an intercept column when using predict, you get an error.
You can do the following:
x <- model.matrix(y ~ ., Train) ## model matrix with intercept
Model_El <- glmnet(x,y)
Cv_El <- cv.glmnet(x,y)
Test_Matrix <- model.matrix(y ~ ., Train) ## prediction matrix with intercept
Pred_El <- predict(Model_El, newx = Test_Matrix, s = Cv_El$lambda.min, type='response')
Note that you don't have to do
model.matrix(~ . -y)
model.matrix will ignore the LHS of the formula, so it is legitimate to use
model.matrix(y ~ .)
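A quick check of that claim, using the Train data from above (just a sketch; both calls should produce the same matrix containing (Intercept), x1 and x2):
mm1 <- model.matrix(~ . - y, data = Train)
mm2 <- model.matrix(y ~ ., data = Train)
all.equal(mm1, mm2) # expected to be TRUE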

r loess: coefficients of global "parametric" terms

Is there a way to extract the coefficients of globally fitted terms in local regression modelling?
Maybe I misunderstand the role of globally fitted terms in the function loess, but what I would like is the following:
# baseline:
x <- sin(seq(0.2,0.6,length.out=100)*pi)
# noise:
x_noise <- rnorm(length(x),0,0.1)
# known structure:
x_1 <- sin(seq(5,20,length.out=100))
# signal:
y <- x + x_1*0.25 + x_noise
# fit loess model:
x_seq <- seq_along(x)
mod <- loess(y ~ x_seq + x_1,parametric="x_1")
The fit works fine; however, how can I extract the estimated coefficient of the globally fitted term x_1 (i.e. some value near 0.25 for the example above)?
Finally, I found a solution to my problem using the function gam from the package gam:
require(gam)
mod2 <- gam(y ~ lo(x_seq,span=0.75,degree=2) + x_1)
However, the fits from the two models are not exactly the same (which might be due to different control settings?)...
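To actually pull out the estimate for x_1 from the gam fit, something like this should work (a sketch; the parametric part of a gam object can be inspected like an ordinary linear model):
coef(mod2)["x_1"] # should be roughly 0.25 for the simulated example above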

Why is leave-one-out cross-validation of GLM model (package=boot) failing when data contains NaN's?

This is a fairly simple procedure: refitting a GLM with a subset of the data (the training set) and calculating the accuracy of the prediction on the remaining data. I am trying to run a "leave-one-out" strategy on a data set (i.e. the training subset has length n-1) using the cv.glm function of the boot package.
Am I doing something wrong, or is it really the case that the function doesn't handle NAs? I'm guessing that this is fairly easy to program on my own, but I would appreciate any advice in case there is some other mistake I am making. Cheers.
Example:
require(boot)
#create data
n <- 100
x <- runif(n)
e <- rnorm(n, sd=100)
a <- 5
b <- 3
y <- exp(a + b*x) + e
plot(y ~ x)
plot(y ~ x, log="y")
#make some y's NaN
set.seed(1)
y[sample(n, 0.1*n)] <- NaN
#fit glm model
df <- data.frame(y=y, x=x)
glm.fit <- glm(y ~ x, data=df, family=gaussian(link="log"))
summary(glm.fit)
#calculate mean error of prediction (leave-one-out cross-validation)
cv.res <- cv.glm(df, glm.fit)
cv.res$delta
[1] NA NA
You're right. The function is not set up to handle NAs. The various options for the na.action argument of the glm() function don't really help, either. The easiest way to deal with it is to remove the NAs from the data frame at the outset.
sub <- df[!is.na(df$y), ]
glm.fit <- glm(y ~ x, data=sub, family=gaussian(link="log"))
summary(glm.fit)
# calculate mean error of prediction (leave-one-out cross-validation)
cv.res <- cv.glm(sub, glm.fit)
cv.res$delta
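If you prefer to roll your own leave-one-out loop (for instance to handle the NAs explicitly), a minimal sketch on the complete cases could look like this; it is only an illustration, not part of the original answer:
sub <- df[!is.na(df$y), ]
errs <- sapply(seq_len(nrow(sub)), function(i) {
  # refit on all complete cases except row i, then predict row i
  fit <- glm(y ~ x, data = sub[-i, ], family = gaussian(link = "log"))
  sub$y[i] - predict(fit, newdata = sub[i, ], type = "response")
})
mean(errs^2) # comparable to the first component of cv.res$delta from the fix above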
