ARIMAX exogenous variables reverse causality - r

I am trying to fit an ARIMAX model to figure out whether containment measures (measured by the Government Response Stringency Index, which ranges from 0 to 100) have a significant effect on the daily new cases rate. I also want to add test rates.
I programmed everything in R (every time series is stationary, etc.) and ran a Granger causality test. Result: Pr(>F) is below 0.05, so the null hypothesis of no Granger causality is rejected; the new cases rate also Granger-causes the containment measures, i.e. there is reverse causality.
Is there any way to transform the variable "stringency index" and continue with an ARIMAX model? If so, how can this be done in R?

In R you can use the forecast package to build ARIMA models. Recall that there is a difference between true ARIMAX models and linear regressions with ARIMA errors. See this post by Rob Hyndman (the forecast package author) for more detail:
The ARIMAX model muddle
Here are Rob Hyndman's examples of fitting a linear regression with ARIMA errors:
library(forecast)
library(fpp2)  # provides the uschange data set

# Fit a linear regression with AR(1) errors
fit <- Arima(uschange[, "Consumption"], xreg = uschange[, "Income"], order = c(1, 0, 0))

# Forecast and plot predictions (future income held at its historical mean)
fcast <- forecast(fit, xreg = rep(mean(uschange[, "Income"]), 8))
autoplot(fcast) + xlab("Year") + ylab("Percentage change")

# Use auto.arima() to find the optimal ARIMA error structure
fit <- auto.arima(uschange[, "Consumption"], xreg = uschange[, "Income"])
fcast <- forecast(fit, xreg = rep(mean(uschange[, "Income"]), 8))
autoplot(fcast) + xlab("Year") + ylab("Percentage change")
Regarding your question about how to handle the reverse causality: it is clear that you have endogeneity bias. The response stringency index affects the daily new cases rate and vice versa. If this is a prediction problem rather than an estimation one, I wouldn't worry too much about it as long as the predictions are good. For an estimation/causation question, I would try to find different exogenous variables or use instrumental/control variables.
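For illustration only, here is a minimal two-stage least squares sketch with the AER package, assuming you can find a valid instrument z (something that shifts the stringency index but affects new cases only through it; finding such an instrument is the hard part and is not something this answer can supply). The data frame covid_df and its columns are hypothetical names.
library(AER)
# Hypothetical data: new_cases (response), stringency (endogenous regressor),
# test_rate (exogenous control), z (assumed instrument for stringency)
iv_fit <- ivreg(new_cases ~ stringency + test_rate | test_rate + z, data = covid_df)
summary(iv_fit, diagnostics = TRUE)  # weak-instrument and Wu-Hausman diagnostics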

Related

Optimizing a GAM for Smoothness

I am currently trying to fit a generalized additive model (GAM) in R using a response variable and three predictor variables. One of the predictors is linear, and the dataset consists of 298 observations.
I have run the following code to generate a basic GAM:
GAM <- gam(response ~ linearpredictor + s(predictor2) + s(predictor3), data = data[2:5])
This produces a model with 18 degrees of freedom and seems to substantially overfit the data. I'm wondering how I might generate a GAM that maximizes smoothness and predictive error. I realize that each of these features is going to come at the expense of the other, but is there a good way to find the optimal model that doesn't overfit?
Additionally, I need to perform leave-one-out cross-validation (LOOCV), and I am not sure how to make gam() in the mgcv package do this. Any help on either of these problems would be greatly appreciated. Thank you.
I have also generated 1,000,000 GAMs with varying combinations of smoothing parameters, ranging the maximum degrees of freedom allowed from 10 (as shown in the code below) to 19. The variable combinations2 holds all 1,000,000 combinations of smoothing parameters I selected. The code is designed to balance degrees of freedom against AIC. It runs, but I'm not sure it will actually find the optimal model, and I cannot tell how to make it use LOOCV.
BestGAM <- gam(response ~ linearpredictor + predictor2 + predictor3, data = data[2:5])
for (i in 1:nrow(combinations2)) {
  PotentialGAM <- gam(response ~ linearpredictor + s(predictor2) + s(predictor3),
                      data = data[2:5],
                      sp = c(combinations2[i, ]$Var1, combinations2[i, ]$Var2))
  aic <- AIC(PotentialGAM, BestGAM)
  if (aic$df[1] <= 10 && aic$AIC[1] < aic$AIC[2]) {
    BestGAM <- PotentialGAM
    listNumber <- i
  }
}
You are fitting your GAM using generalised cross-validation (GCV) smoothness selection. GCV is a way around the invariance problem of ordinary cross-validation (OCV; what you call LOOCV) when estimating GAMs. GCV is equivalent to OCV applied to a rotated version of the fitting problem (rotating y - Xβ by Q, any orthogonal matrix); when fitting with GCV, {mgcv} doesn't actually need to perform the rotation, and the expected GCV score is unaffected by it, so GCV is essentially just OCV on the rotated problem (Wood, 2017, p. 260).
It has been shown that GCV can undersmooth (resulting in wigglier models) because the objective function (the GCV profile) can become flat around the optimum. It is therefore preferable to estimate GAMs (with penalized smooths) using REML or ML smoothness selection; add method = "REML" (or "ML") to your gam() call.
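With the variable names from your question, that would be:
library(mgcv)
GAM_reml <- gam(response ~ linearpredictor + s(predictor2) + s(predictor3),
                data = data[2:5], method = "REML")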
If the REML or ML fit is as wiggly as the GCV one on your data, then I would be inclined to conclude that gam() is not overfitting, but that there is something about your response data that hasn't been explained here (are the data ordered in time, for example?).
As to your question
how I might generate a GAM that maximizes smoothness and [minimize?] predictive error,
you are already doing that with GCV smoothness selection, for a particular definition of "smoothness" (here, the squared second derivatives of the estimated smooths, integrated over the range of the covariates and summed over the smooths).
If you want to keep GCV but obtain smoother models, you can increase the gamma argument above 1; gamma = 1.4 is often used, for example, which makes each effective degree of freedom cost 40% more in the GCV criterion.
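For example, again with the names from your question:
GAM_gcv <- gam(response ~ linearpredictor + s(predictor2) + s(predictor3),
               data = data[2:5], gamma = 1.4)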
FWIW, you can get the LOOCV (OCV) score for your model without actually fitting 298 GAMs by using the diagonal of the influence (hat) matrix, A. Here's a reproducible example using my {gratia} package:
library("gratia")
library("mgcv")
df <- data_sim("eg1", seed = 1)
m <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = df, method = "REML")
A <- influence(m)
r <- residuals(m, type = "response")
ocv_score <- mean(r^2 / (1 - A))

Is there a way to include an autocorrelation structure in the gam function of mgcv?

I am building a model using the mgcv package in R. The data consist of serial measures (collected during scans 15 minutes apart in time, but discontinuously; e.g. there might be 5 consecutive scans on one day and then none until the next day). The model has a binomial response, a random effect of day, a fixed effect, and three smooth effects. My understanding is that REML is the best fitting method for binomial models, but that this method cannot be specified using the gamm function for a binomial model. Thus, I am using the gam function to allow for REML fitting. When I fit the model, I am left with residual autocorrelation at a lag of 2 (i.e. at 30 minutes), as assessed with ACF and PACF plots.
So, we wanted to include an autocorrelation structure in the model, but my understanding is that only the gamm function and not the gam function allows for the inclusion of such structures. I am wondering if there is anything I am missing and/or if there is a way to deal with autocorrelation with a binomial response variable in a GAMM built in mgcv.
My current model structure looks like:
gam(Response ~
      s(Day, bs = "re") +
      s(SmoothVar1, bs = "cs") +
      s(SmoothVar2, bs = "cs") +
      s(SmoothVar3, bs = "cs") +
      as.factor(FixedVar),
    family = binomial(link = "logit"), method = "REML",
    data = dat)
I tried thinning my data (using only every 3rd data point from consecutive scans), but found this left too few data points for effects to be detected, given my relatively small sample size (only 42 data points remained after thinning).
I also tried using the prior value of the binomial response variable as a factor in the model to account for the autocorrelation. This did appear to resolve the residual autocorrelation (based on the updated ACF/PACF plots), but it doesn't feel like the most elegant way to do so and I worry this added variable might be adjusting for more than just the autocorrelation (though it was not collinear with the other explanatory variables; VIF < 2).
I would use bam() for this. You don't need big data to fit a model with bam(); you just lose some of the guarantees about convergence that you get with gam(). bam() can fit a GEE-like model with an AR(1) working correlation matrix, but you need to specify the AR parameter via rho. For non-Gaussian families this only works if you also set discrete = TRUE when fitting the model.
You could use gamm() with family = binomial() but this uses PQL to estimate the GLMM version of the GAMM and if your binomial counts are low this method isn't very good.
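A minimal sketch of what that might look like for your model. The rho value and the AR.start indicator here are assumptions on my part: rho would typically be read off the lag-1 ACF of residuals from an initial fit, and AR.start should be a logical that is TRUE at the first scan of each uninterrupted run of scans (here a hypothetical column first_scan).
library(mgcv)
m_ar <- bam(Response ~
              s(Day, bs = "re") +
              s(SmoothVar1, bs = "cs") +
              s(SmoothVar2, bs = "cs") +
              s(SmoothVar3, bs = "cs") +
              as.factor(FixedVar),
            family = binomial(link = "logit"),
            data = dat,
            discrete = TRUE,            # needed for AR(1) with a non-Gaussian family
            rho = 0.4,                  # assumed AR(1) parameter
            AR.start = dat$first_scan)  # assumed marker for the start of each scan run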

Survival Analysis for Telecom Churn using R

I am working on a telecom churn problem, and here is my dataset.
http://www.sgi.com/tech/mlc/db/churn.data
Names - http://www.sgi.com/tech/mlc/db/churn.names
I'm new to survival analysis. Given the training data, my idea is to build a survival model to estimate survival time, along with predicting churn/non-churn on the test data based on the independent factors. Could anyone help me with code or pointers on how to go about this problem?
To be precise, say my training data has customer call usage details, plan details, account tenure, etc., and whether or not the customer churned.
Using general classification models, I can predict churn or no churn on the test data. Now, using survival analysis, I want to predict the survival tenure on the test data.
Thanks,
Maddy
If you're still interested (or for the benefit of those coming later), I've written a few guides specifically for conducting survival analysis on customer churn data using R. They cover a bunch of different analytical techniques, all with sample data and R code.
Basic survival analysis: http://daynebatten.com/2015/02/customer-churn-survival-analysis/
Basic Cox regression: http://daynebatten.com/2015/02/customer-churn-cox-regression/
Time-dependent covariates in Cox regression: http://daynebatten.com/2015/12/survival-analysis-customer-churn-time-varying-covariates/
Time-dependent coefficients in Cox regression: http://daynebatten.com/2016/01/customer-churn-time-dependent-coefficients/
Restricted mean survival time (quantify the impact of churn in dollar terms): http://daynebatten.com/2015/03/customer-churn-restricted-mean-survival-time/
Pseudo-observations (quantify dollar gain/loss associated with the churn effects of variables): http://daynebatten.com/2015/03/customer-churn-pseudo-observations/
Please forgive the goofy images.
Here is some code to get you started:
First, read the data:
nm <- read.csv("http://www.sgi.com/tech/mlc/db/churn.names",
               skip = 4, colClasses = c("character", "NULL"),
               header = FALSE, sep = ":")[[1]]
dat <- read.csv("http://www.sgi.com/tech/mlc/db/churn.data",
                header = FALSE, col.names = c(nm, "Churn"))
Use Surv() to set up a survival object for modeling
library(survival)
s <- with(dat, Surv(account.length, as.numeric(Churn)))
Fit a Cox proportional hazards model and plot the result:
model <- coxph(s ~ total.day.charge + number.customer.service.calls, data=dat[, -4])
summary(model)
plot(survfit(model))
Add a stratum:
model <- coxph(s ~ total.day.charge + strata(number.customer.service.calls <= 3), data=dat[, -4])
summary(model)
plot(survfit(model), col=c("blue", "red"))
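To address the prediction part of the question (expected tenure on the test data), one option is survfit() with newdata; testdat here is an assumed held-out data frame with the same covariates as dat:
# Predicted survival curves for new customers
sf_new <- survfit(model, newdata = testdat)
plot(sf_new)
# Median predicted tenure per customer (NA if a curve never drops below 0.5)
quantile(sf_new, probs = 0.5)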

How to build HoltWinters and Stepwise Autoregressive time series model with constant and linear trend in R package

I am trying to build four time series models in R:
1) HoltWinters with constant trend,
2) HoltWinters with linear trend,
3) Stepwise autoregressive with constant trend,
4) Stepwise autoregressive with linear trend.
In SAS I can do this using PROC FORECAST and specifying the method and trend options.
Could you please help me with doing this. Thanks.
For 1 and 2:
library(forecast)
# 1) Additive errors, no trend, additive seasonality
fit1 <- ets(x, model = "ANA", damped = FALSE)
# 2) Additive errors, additive trend, additive seasonality; beta = 0 fixes the slope
fit2 <- ets(x, model = "AAA", beta = 0, damped = FALSE, lower = rep(0, 4))
By default, ets() does not allow constant components (such as a fixed linear trend, obtained here via beta = 0), because the default parameter bounds exclude zero; setting the lower limits to 0 with lower = rep(0, 4) allows it.
For 3 and 4, I am not sure what you mean by "stepwise autoregressive". Perhaps you mean a subset autoregression where the terms are chosen using a stepwise procedure. For that, see the FitAR package (http://www.jstatsoft.org/v28/i02/paper). However, I don't think it allows a deterministic trend.
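If what you are after is something closer to SAS's STEPAR idea (a deterministic constant or linear trend with autoregressive errors), a rough workaround is a regression with AR errors; note this is only an approximation on my part, since auto.arima() selects the AR order by AICc rather than by stepwise testing:
library(forecast)
trend <- seq_along(x)
# 3) AR errors around a constant mean
fit3 <- auto.arima(x, d = 0, max.q = 0, seasonal = FALSE)
# 4) AR errors around a deterministic linear trend
fit4 <- auto.arima(x, d = 0, max.q = 0, seasonal = FALSE, xreg = trend)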

Jackknife in logistic regression

I'm interested in applying a jackknife analysis in order to quantify the uncertainty of the coefficients estimated by my logistic regression. I'm using glm(family = 'binomial') because my response variable is in 0-1 format.
My dataset has 76,000 observations, and I'm using 7 independent variables plus an offset. The idea is to split the data into, say, 5 random subsets and then obtain the 7 estimated parameters by dropping one subset at a time from the dataset. Then I can estimate the uncertainty of the parameters.
I understand the procedure but I'm unable to do it in R.
This is the model that I'm fitting:
glm(f_ocur ~ altitud + UTM_X + UTM_Y + j_sin + j_cos + temp_res + pp +
offset(log(1/off)), data = mydata, family = 'binomial')
Does anyone have an idea of how can I make this possible?
Jackknifing a logistic regression model is computationally very inefficient, but an easy (if time-intensive) approach would be something like this:
Formula <- f_ocur ~ altitud + UTM_X + UTM_Y + j_sin + j_cos + temp_res + pp +
  offset(log(1/off))
coefs <- sapply(1:nrow(mydata), function(i)
  coef(glm(Formula, data = mydata[-i, ], family = "binomial"))
)
This gives you the leave-one-out coefficient estimates (sapply() returns one column per left-out observation). Scaling the sample covariance of these estimates by (n - 1)^2 / n gives the jackknife estimate of the covariance matrix of the parameter estimates.
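For concreteness, a small sketch of that computation:
n <- nrow(mydata)
jack_cov <- (n - 1)^2 / n * cov(t(coefs))  # jackknife covariance of the coefficients
jack_se <- sqrt(diag(jack_cov))            # jackknife standard errors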
A significant speed-up could be had by using glm's workhorse function, glm.fit. You can go even further by linearizing the model: use one-step estimation, limiting the Newton-Raphson algorithm to a single iteration (control = list(maxit = 1)); jackknife SEs for the one-step estimators are still robust, unbiased, the whole bit...
