Cointegration: Testing for bounds in VAR and interpretation of results - r

I am currently working with different pairwise VAR models to analyze cointegration relationships.
There is the following pair of time series: X is I(0) and Y is I(1). Because X and Y are not integrated of the same order (i.e. they are not both I(1)), I can't carry out the Johansen and Juselius (ca.jo) test from the vars package. Instead, I have to consider the bounds test by Pesaran et al. (2001), which works for time series that are integrated of different orders.
Here is my reproducible code for a cointegration test of variables with different orders of integration, using a package named ardl:
install.packages("devtools")
library(devtools)
install_github("fcbarbi/ardl")
library(ardl)
data(br_month)
br_month
m1 <- ardl(mpr~cpi, data=br_month, ylag=1, case=3)
bounds.test(m1)
m2 <- ardl(cpi~mpr, data=br_month, ylag=1, case=3)
bounds.test(m2)
Question here:
Can I test the cointegration of a VAR (with 2 variables) with the ARDL test?
Interpretation of results (case 5 = constant+trend):
bounds.test(m1)
PSS case 5 ( unrestricted intercept, unrestricted trend )
Null hypothesis (H0): No long-run relation exist, ie H0:pi=0
I(0) I(1)
10% 5.59 6.26
5% 6.56 7.30
1% 8.74 9.63
F statistic 11.21852
Existence of a Long Term relation is not rejected at 5%.
bounds.test(m2)
PSS case 5 (unrestricted intercept, unrestricted trend )
Null hypothesis (H0): No long-run relation exist, ie H0:pi=0
I(0) I(1)
10% 5.59 6.26
5% 6.56 7.30
1% 8.74 9.63
F statistic 5.571511
Existence of a Long Term relation is rejected at 5% (even assumming all regressors I(0))
I would conclude that there is a cointegration relationship between cpi and mpr as the F statistic for m2 is smaller than the critical value for I(0) at the 5% level.
However, does it tell me anything that it can be concluded for m2 but not m1?

To me, you are confusing the definition of "cointegration": for time series to be cointegrated, they must have the same order of integration.
So your question really seems to be: "What can I do when I have time series with different orders of integration?"
In that case, I would advise you to take differences of the non-stationary variables to obtain stationarity, leave the stationary ones as they are, and then fit the VAR as usual.
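A minimal sketch of that workflow with simulated series (the variable names are placeholders, not your data):
library(vars)
set.seed(1)
x <- rnorm(200)                             # stands in for the I(0) series
y <- cumsum(rnorm(200))                     # stands in for the I(1) series
dat <- data.frame(x = x[-1], dy = diff(y))  # difference the I(1) series once, keep x in levels
VARselect(dat, lag.max = 8, type = "const") # choose a lag order
var.fit <- VAR(dat, p = 2, type = "const")  # fit the VAR on the stationary series
summary(var.fit)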
From the Pesaran et al. (2001) paper: "This paper proposes a new approach to testing for the existence of a relationship between variables in levels which is applicable irrespective of whether the underlying regressors are purely I(0), purely I(1) or mutually cointegrated."
So, strictly speaking, the ARDL bounds test is not a cointegration test; it tests for a level relationship between the variables.

Related

Hypothesis testing for three groups

Based on the data, is the average sale amount statistically the same for groups A, B, and C?
I performed t.test() on the pairs A-B, B-C, and C-A. For C-A the p-value was > 0.05, so I concluded that for C-A we cannot reject the null hypothesis and the averages may be the same.
The alternative hypothesis (H1) reported was: true difference in means between group 36-45 and group 46-50 is not equal to 0.
My question is: did I do this correctly, or is there another way to test the hypothesis for three groups?
If the population means of the groups are denoted mu_A, mu_B, and mu_C, then you are actually interested in the single joint null hypothesis H_0: mu_A = mu_B = mu_C. The problem with conducting three pairwise tests is that it is difficult to control the probability of a type I error. That is, how do you know that three tests, each at a significance level of 5%, will still falsely reject the H_0 above with only 5% probability when it is true?
The test you are looking for is called an Analysis of Variance (ANOVA) test. It will provide a single test statistic and a single p-value to test the hypothesis above. If you search for "ANOVA statistical test", then Google will suggest many explanations (and probably also appropriate commands to do the analysis in R). I hope this helps.
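For example, a minimal sketch assuming a data frame named sales with a numeric column amount and a factor column group taking values A, B, and C (these names are hypothetical):
# One-way ANOVA: a single F test of H0: mu_A = mu_B = mu_C
fit <- aov(amount ~ group, data = sales)   # 'sales', 'amount', 'group' are assumed names
summary(fit)
# If H0 is rejected, follow up with pairwise comparisons that control the
# family-wise error rate instead of three separate t-tests:
TukeyHSD(fit)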

predict.coxph() and survC1::Est.Cval -- type for predict() output

Given a coxph() model, I want to use predict() to predict hazards and then use survC1::Est.Cval( . . . nofit=TRUE) to get a c-value for the model.
The Est.Cval() documentation is rather terse, but says that "nofit=TRUE: If TRUE, the 3rd column of mydata is used as the risk score directly in calculation of C."
Say, for simplicity, that I want to predict on the same data I built the model on. For
coxModel a Cox regression model from coxph();
time a vector of times (positive reals), the same times that coxModel was built on; and
event a 0/1 vector, the same length, of event/censor indicators, the same events that coxModel was built on --
does this indicate that I want
predictions <- predict(coxModel, type="risk")
dd <- cbind(time, event, predictions)
Est.Cval(mydata=dd, tau=tau, nofit=TRUE)
or should that first line be
predictions <- predict(coxModel, type="lp")
?
Thanks for any help,
The answer is that it doesn't matter.
Basically, the concordance value tests, over all comparable pairs of times (taking events and censoring into account), how probable it is that the observation with the later time has the lower risk score (for a really good model, almost always). But since e^u is a monotonic function of real u, and the c-value only uses comparisons, it doesn't matter whether you provide the hazard ratio, e^(sum_i beta_i x_i), or the linear predictor, sum_i beta_i x_i.
Since #42 motivated me to come up with a minimal working example, we can test this. We'll compare the values that Est.Cval() provides using one input versus using the other; and we can compare both to the value we get from coxph().
(That last value won't match exactly, because Est.Cval() uses the method of Uno et al. 2011 (Uno, H., Cai, T., Pencina, M. J., D’Agostino, R. B. & Wei, L. J. On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data. Statist. Med. 30, 1105–1117 (2011), https://onlinelibrary.wiley.com/doi/full/10.1002/sim.4154) but it can serve as a sanity check, since the values should be close.)
The following is based on the example worked through in Survival Analysis with R, 2017-09-25, by Joseph Rickert, https://rviews.rstudio.com/2017/09/25/survival-analysis-with-r/.
library("survival")
library("survC1")
# Load dataset included with survival package
data("veteran")
# The variable `time` records survival time; `status` indicates whether the
# patient’s death was observed (status=1) or that survival time was censored
# (status = 0).
# The model they build in the example:
coxModel <- coxph(Surv(time, status) ~ trt + celltype + karno + diagtime +
age + prior, data=veteran)
# The results
summary(coxModel)
Note the c-score it gives us:
Concordance= 0.736 (se = 0.021 )
Now, we calculate the c-score given by Est.Cval() on the two types of values:
# The value from Est.Cval(), using a risk input
cvalByRisk <- Est.Cval(mydata=cbind(time=veteran$time, event=veteran$status,
predictions=predict(object=coxModel, newdata=veteran, type="risk")),
tau=2000, nofit=TRUE)
# The value from Est.Cval(), using a linear predictor input
cvalByLp <- Est.Cval(mydata=cbind(time=veteran$time, event=veteran$status,
predictions=predict(object=coxModel, newdata=veteran, type="lp")),
tau=2000, nofit=TRUE)
And we compare the results:
cvalByRisk$Dhat
[1] 0.7282348
cvalByLp$Dhat
[1] 0.7282348

Does the varIdent function, used in LME work fine?

I would be glad if somebody could help me solve this problem. I have data from a repeated-measures design, where we tested a reaction of birds (time.dep) before and after an infection (exper). We also have FL (fuel load, % of lean body mass), fat score, and group (Experimental vs. Control) as explanatory variables. I decided to use LME because the distribution of the residuals doesn't deviate from normality, but there is a problem with homogeneity of the residuals: the variances of the "before" and "after" groups, and also between fat levels, differ significantly (Fligner-Killeen test, p=0.038 and p=0.01 respectively).
ring group fat time.dep FL exper
1 XZ13125 E 4 0.36 16.295 before
2 XZ13125 E 3 0.32 12.547 after
3 XZ13126 E 3 0.28 7.721 before
4 XZ13127 C 3 0.32 9.157 before
5 XZ13127 C 3 0.40 -1.902 after
6 XZ13129 C 4 0.40 10.382 before
After selecting the random part of the model, which is a random intercept (~1|ring), I applied a weights argument for both "fat" and "exper": weights=varComb(varIdent(form=~1|fat), varIdent(form=~1|exper)). Now the plot of standardized residuals vs. fitted values looks better, but I still get the violation of homogeneity for these variables (the same values in the Fligner test). What am I doing wrong?
A common trap in lme is that the default is to give raw residuals, i.e. not adjusted for any of the heteroscedasticity (weights) or correlation (correlation) sub-models that may have been used. From ?residuals.lme:
type: an optional character string specifying the type of residuals
to be used. If ‘"response"’, as by default, the “raw”
residuals (observed - fitted) are used; else, if ‘"pearson"’,
the standardized residuals (raw residuals divided by the
corresponding standard errors) are used; else, if
‘"normalized"’, the normalized residuals (standardized
residuals pre-multiplied by the inverse square-root factor of
the estimated error correlation matrix) are used. Partial
matching of arguments is used, so only the first character
needs to be provided.
Thus if you want your residuals to be corrected for heteroscedasticity (as included in the model) you need type="pearson"; if you want them to be corrected for correlation, you need type="normalized".
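For example (a sketch only; the data frame name birds and the fixed-effects formula are assumptions, not necessarily the exact model fitted above):
library(nlme)
# Hypothetical refit of the model described in the question
m1 <- lme(time.dep ~ exper * group + FL, random = ~ 1 | ring,
          weights = varComb(varIdent(form = ~ 1 | fat),
                            varIdent(form = ~ 1 | exper)),
          data = birds)                            # 'birds' is an assumed data frame name
r.pearson <- residuals(m1, type = "pearson")       # residuals adjusted for the variance sub-model
plot(fitted(m1), r.pearson)                        # check homogeneity on the adjusted residuals
fligner.test(r.pearson ~ factor(birds$exper))      # re-test on these, not on the raw residuals
fligner.test(r.pearson ~ factor(birds$fat))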

R: Regression with a holdout of certain variables

I'm fitting a multiple linear regression model with lm(); Y is the response variable (e.g. return of interest) and the others are explanatory variables (100+ cases, 30+ variables).
Certain variables are considered key variables (concerning investment). When I run lm(), R returns a model with an adjusted R-squared of 97%, but some of the key variables are not significant predictors.
Is there a way to run the regression while keeping all of the key variables in the model (as significant predictors)? It doesn't matter if the adjusted R-squared decreases.
If regression can't do this, is there another methodology?
Thank you!
==========================
the data set is uploaded
https://www.dropbox.com/s/gh61obgn2jr043y/df.csv
==========================
additional questions:
What if some variables have an impact that carries over from a previous period into the current period?
Example: one takes a pill in the morning at breakfast, and the effect of the pill might last beyond lunch (when the second pill is taken).
I suppose I need to consider a data transformation.
* My first choice is to add a carry-over rate: obs.2_trans = obs.2 + c-o rate * obs.1
* Maybe I also need to consider the decay of the pill's effect itself, so an s-curve or an exponential transformation may also be necessary.
Taking the variable main1 as an example, I could use trial and error to find an ideal carry-over rate and s-curve parameter, starting from 0.5 and stepping by 0.05 up to 1 or down to 0, until I get the best model score - say, the lowest AIC or the highest R-squared.
This is already a huge number of combinations to test.
If I need to test more than 3 variables at the same time, how can I manage that in R?
Thank you!
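A brute-force version of the single-variable search described above might look like this (a sketch only, assuming the df.csv data from the link, with Y as the response, PeriodID in the first column, and Main1 as the variable being transformed):
rates <- seq(0, 1, by = 0.05)                        # candidate carry-over rates
aics <- sapply(rates, function(r) {
  d <- df                                            # 'df' as read from df.csv (assumed)
  d$Main1 <- d$Main1 + r * c(0, head(d$Main1, -1))   # add carry-over from the previous period
  AIC(lm(Y ~ ., data = d[, 2:ncol(d)]))              # refit (dropping PeriodID) and score by AIC
})
best.rate <- rates[which.min(aics)]
# For several variables at once, build the grid with expand.grid() and loop over its rows,
# but beware that the number of combinations grows very quickly.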
First, a note on "significance". For each variable included in a model, the linear modeling packages report a measure of the evidence that the coefficient of that variable is different from zero (loosely speaking, they report p = 1 - L). We say that if L is larger (p smaller), then the coefficient is "more significant". So, while it is quite reasonable to talk about one variable being "more significant" than another, there is no absolute standard for asserting "significant" vs. "not significant". In most scientific research the cutoff is L > 0.95 (p < 0.05), but this is completely arbitrary and there are many exceptions. Recall that CERN was unwilling to assert the existence of the Higgs boson until they had collected enough data to demonstrate its effect at 6 sigma, which corresponds roughly to p < 1 × 10^-9. At the other extreme, many social science studies assert significance at p < 0.2 (because of the higher inherent variability and the usually small number of samples). So excluding a variable from a model because it is "not significant" really has no absolute meaning. On the other hand, you would be hard pressed to justify including a variable with a high p-value while excluding another variable with a lower p-value.
Second, if your variables are highly correlated (which they are in your case), then it is quite common that removing one variable from a model changes all the p-values greatly. A retained variable that had a high p-value (less significant) might suddenly have a low p-value (more significant), just because you removed a completely different variable from the model. Consequently, trying to optimize a fit manually is usually a bad idea.
Fortunately, there are many algorithms that do this for you. One popular approach starts with a model that has all the variables. At each step, the least significant variable is removed and the resulting model is compared to the model at the previous step. If removing this variable significantly degrades the model, based on some metric, the process stops. A commonly used metric is the Akaike information criterion (AIC), and in R we can optimize a model based on the AIC criterion using stepAIC(...) in the MASS package.
Third, the validity of regression models depends on certain assumptions, especially these two: the error variance is constant (it does not depend on the fitted values), and the distribution of the errors is approximately normal. If these assumptions are not met, the p-values are meaningless. Once we have fitted a model, we can check these assumptions using a residual plot and a Q-Q plot. It is essential that you do this for any candidate model!
Finally, the presence of outliers frequently distorts the model significantly (almost by definition!). This problem is amplified if your variables are highly correlated. So in your case it is very important to look for outliers, and see what happens when you remove them.
The code below rolls this all up.
library(MASS)
url <- "https://dl.dropboxusercontent.com/s/gh61obgn2jr043y/df.csv?dl=1&token_hash=AAGy0mFtfBEnXwRctgPHsLIaqk5temyrVx_Kd97cjZjf8w&expiry=1399567161"
df <- read.csv(url)
initial.fit <- lm(Y~.,df[,2:ncol(df)]) # fit with all variables (excluding PeriodID)
final.fit <- stepAIC(initial.fit) # best fit based on AIC
par(mfrow=c(2,2))
plot(initial.fit) # diagnostic plots for base model
plot(final.fit) # same for best model
summary(final.fit)
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 11.38360 18.25028 0.624 0.53452
# Main1 911.38514 125.97018 7.235 2.24e-10 ***
# Main3 0.04424 0.02858 1.548 0.12547
# Main5 4.99797 1.94408 2.571 0.01195 *
# Main6 0.24500 0.10882 2.251 0.02703 *
# Sec1 150.21703 34.02206 4.415 3.05e-05 ***
# Third2 -0.11775 0.01700 -6.926 8.92e-10 ***
# Third3 -0.04718 0.01670 -2.826 0.00593 **
# ... (many other variables included)
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 22.76 on 82 degrees of freedom
# Multiple R-squared: 0.9824, Adjusted R-squared: 0.9779
# F-statistic: 218 on 21 and 82 DF, p-value: < 2.2e-16
par(mfrow=c(2,2))
plot(initial.fit)
title("Base Model",outer=T,line=-2)
plot(final.fit)
title("Best Model (AIC)",outer=T,line=-2)
So you can see from this that the "best model", based on the AIC metric, does in fact include Main 1, 3, 5, and 6, but not Main 2 and 4. The residuals plot shows no dependence on the fitted values (which is good), and the Q-Q plot demonstrates approximate normality of the residuals (also good). On the other hand, the Leverage plot shows a couple of points (rows 33 and 85) with exceptionally high leverage, and the Q-Q plot shows these same points and row 47 as having residuals not really consistent with a normal distribution. So we can re-run the fits excluding these rows as follows.
initial.fit <- lm(Y~.,df[c(-33,-47,-85),2:ncol(df)])
final.fit <- stepAIC(initial.fit,trace=0)
summary(final.fit)
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 27.11832 20.28556 1.337 0.185320
# Main1 1028.99836 125.25579 8.215 4.65e-12 ***
# Main2 2.04805 1.11804 1.832 0.070949 .
# Main3 0.03849 0.02615 1.472 0.145165
# Main4 -1.87427 0.94597 -1.981 0.051222 .
# Main5 3.54803 1.99372 1.780 0.079192 .
# Main6 0.20462 0.10360 1.975 0.051938 .
# Sec1 129.62384 35.11290 3.692 0.000420 ***
# Third2 -0.11289 0.01716 -6.579 5.66e-09 ***
# Third3 -0.02909 0.01623 -1.793 0.077060 .
# ... (many other variables included)
So excluding these rows results in a fit that has all the "Main" variables with p < 0.2, and all except Main 3 at p < 0.1 (90%). I'd want to look at these three rows and see if there is a legitimate reason to exclude them.
Finally, just because you have a model that fits your existing data well, does not mean that it will perform well as a predictive model. In particular, if you are trying to make predictions outside of the "model space" (equivalent to extrapolation), then your predictive power is likely to be poor.
Significance is determined by the relationships in your data, not by "I want them to be significant".
If the data say they are insignificant, then they are insignificant.
You are going to have a hard time getting any significance with 30 variables and only 100 observations. With only 100+ observations you should be using just a few variables; with 30 variables you'd need thousands of observations to get any significance.
Maybe start with the variables you think should be significant, and see what happens.
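If you go that route, a minimal sketch (using the Main1-Main6 names from the dataset above as stand-ins for the "key" variables):
# Refit with only the variables considered key a priori; adjust the formula to your actual list
key.fit <- lm(Y ~ Main1 + Main2 + Main3 + Main4 + Main5 + Main6, data = df)
summary(key.fit)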

How to perform random forest/cross validation in R

I'm unable to find a way of performing cross validation on a regression random forest model that I'm trying to produce.
So I have a dataset containing 1664 explanatory variables (different chemical properties), with one response variable (retention time). I'm trying to produce a regression random forest model in order to be able to predict the chemical properties of something given its retention time.
ID RT (seconds) 1_MW 2_AMW 3_Sv 4_Se
4281 38 145.29 5.01 14.76 28.37
4952 40 132.19 6.29 11 21.28
4823 41 176.21 7.34 12.9 24.92
3840 41 174.24 6.7 13.99 26.48
3665 42 240.34 9.24 15.2 27.08
3591 42 161.23 6.2 13.71 26.27
3659 42 146.22 6.09 12.6 24.16
This is an example of the table that I have. I want to basically plot RT against 1_MW, etc (up to 1664 variables), so I can find which of these variables are of importance and which aren't.
I do:-
r = randomForest(RT..seconds.~., data = cadets, importance =TRUE, do.trace = 100)
varImpPlot(r)
which tells me which variables are of importance and what not, which is great. However, I want to be able to partition my dataset so that I can perform cross validation on it. I found an online tutorial that explained how to do it, but for a classification model rather than regression.
I understand you do:-
k = 10
n = floor(nrow(cadets)/k)
i = 1
s1 = ((i-1) * n+1)
s2 = (i * n)
subset = s1:s2
to define how many cross-validation folds you want, the size of each fold, and the start and end indices of the subset. However, I don't know what to do from here on. I was told to loop through it, but I honestly have no idea how to do this. Nor do I know how to then plot the validation set and the test set on the same graph to depict the level of accuracy/error.
If you could please help me with this I'd be ever so grateful, thanks!
From the source:
The out-of-bag (oob) error estimate
In random forests, there is no need for cross-validation or a separate
test set to get an unbiased estimate of the test set error. It is
estimated internally, during the run...
In particular, predict.randomForest returns the out-of-bag prediction if newdata is not given.
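So, for a regression forest like the one in the question, the out-of-bag error can be read off directly; a sketch using the cadets data and formula from the question:
library(randomForest)
r <- randomForest(RT..seconds. ~ ., data = cadets, importance = TRUE)
oob.pred <- predict(r)                      # out-of-bag predictions (no newdata supplied)
mean((oob.pred - cadets$RT..seconds.)^2)    # honest estimate of test MSE
print(r)                                    # also reports the OOB mean squared residuals and % variance explained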
As topchef pointed out, cross-validation isn't necessary as a guard against over-fitting. This is a nice feature of the random forest algorithm.
It sounds like your goal is feature selection; cross-validation is still useful for that purpose. Take a look at the rfcv() function within the randomForest package. The documentation specifies input of a data frame and a vector, so I'll start by creating those with your data.
set.seed(42)
x <- cadets
x$RT..seconds. <- NULL
y <- cadets$RT..seconds.
rf.cv <- rfcv(x, y, cv.fold=10)
with(rf.cv, plot(n.var, error.cv))
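For completeness, the manual loop started in the question could be finished like this for regression (a sketch; it uses the fold boundaries defined there, ignores any leftover rows, and assumes the rows have been shuffled if they are ordered):
library(randomForest)
k <- 10
n <- floor(nrow(cadets) / k)
cv.mse <- numeric(k)
for (i in 1:k) {
  idx  <- ((i - 1) * n + 1):(i * n)                     # rows held out in fold i
  fit  <- randomForest(RT..seconds. ~ ., data = cadets[-idx, ])
  pred <- predict(fit, newdata = cadets[idx, ])
  cv.mse[i] <- mean((pred - cadets$RT..seconds.[idx])^2)
}
mean(cv.mse)                                            # cross-validated mean squared error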
