Bootstrapping residuals of a linear model in R

Suppose I want to assess the goodness of fit of a linear model before and after leaving out a covariate, and I want to implement some kind of bootstrapping.
I tried to bootstrap the sum of residuals of both models and then applied the Kolmogorov-Smirnov test to assess whether the two samples come from the same distribution.
The minimal working code:
library(boot)

lm.statistic.resid <- function(data, i) {
  d <- data[i, ]                       # bootstrap resample of the rows
  r.gressor  <- colnames(data)[1]      # response is the first column
  c.variates <- colnames(data)[-1]     # remaining columns are the covariates
  lm.boot <- lm(reformulate(c.variates, r.gressor), data = d)
  sum(resid(lm.boot))                  # bootstrapped statistic: sum of residuals
}

df.restricted <- mtcars[, names(mtcars) != "wt"]

classical.lm  <- lm(mtcars)          # full model: mpg on all other columns
restricted.lm <- lm(df.restricted)   # model without wt

boot.regression.full <- boot(mtcars,
                             statistic = lm.statistic.resid,
                             R = 1000)
boot.regression.restricted <- boot(df.restricted,
                                    statistic = lm.statistic.resid,
                                    R = 1000)

x <- boot.regression.restricted$t
y <- boot.regression.full$t
ks.test(x, y)
However, I get essentially the same result whether I remove wt (which is statistically significant) or am (which is not).
I would expect a smaller p-value when wt is removed.
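One property of least squares that may explain this, sketched below with the reformulate-based fit from the code above: when the model contains an intercept, the raw residuals sum to zero by construction, so the quantity being bootstrapped is essentially floating-point noise for both the full and the restricted model, regardless of which covariate is dropped.
full.lm  <- lm(reformulate(names(mtcars)[-1], "mpg"), data = mtcars)
sum(resid(full.lm))    # ~1e-15: residuals of an OLS fit with an intercept sum to zero

restr.lm <- lm(reformulate(setdiff(names(mtcars)[-1], "wt"), "mpg"), data = mtcars)
sum(resid(restr.lm))   # also ~0, so the two bootstrap distributions look alike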

Related

ARIMA giving forecasts with higher RMSE than AR

I am trying to argue that ARIMA models are better than AR models, i.e. since AR is a subset of ARIMA, the best ARIMA model will be no worse than the best AR model, and may be better. I fitted an AR(6) model, and auto.arima() in R told me that an ARIMA(1,0,2) model is optimal by AICc. I used both to do a rolling-window forecast, but I am getting an RMSE of 3.901 for AR(6) and 4.503 for ARIMA(1,0,2). My code for the forecasting is below (I know it is not very advanced, but I'm a beginner and this is the best way I could find; it matches my results by hand):
# find moving averages and residual errors
ma <- rep(NA, 14976)
for (i in 3:14976) {
  ma[i] <- mean(ds[(i - 2):(i - 1)])
}
frame <- ds - ma

# fit model
model <- arima(ds[1:14676], order = c(1, 0, 2), include.mean = TRUE, method = "ML")

`%+=%` <- function(e1, e2) { eval.parent(substitute(e1 <- e1 + e2)) }  # in-place add

training_data <- ds[1:14676]      # the question uses `data` here; assuming it is the same series as `ds`
test_data <- ds[14677:14976]
window <- 1    # AR order
window1 <- 2   # MA order
coef <- model$coef
history <- training_data[(length(training_data) - window + 1):14676]
predictions <- list()

for (i in (1:length(test_data))) {
  length <- length(history)
  lag <- array()
  for (d in ((length - window + 1):length)) {
    lag[d - i + 1] <- history[d]
  }
  yhat <- coef[length(coef)] - 1
  # AR part
  for (t in (1:window)) {
    yhat %+=% (coef[t] * lag[window - t + 1])
  }
  # MA part
  if (window1 != 0) {
    for (j in ((window + 1):(window + window1))) {
      yhat %+=% (coef[j] * frame[14676 + i - j + 1])
    }
  }
  obs <- test_data[i]
  predictions <- append(predictions, yhat)
  history <- append(history, obs)
  print(predictions)   # prints the growing list of forecasts each iteration
}
The graph of the ARIMA(1,0,2) forecast (compared to the actual values in the test set) looks better in shape, but it sits noticeably above the data. It seems the intercept should be lower, and lowering it does give a better RMSE, but arima() estimated the intercept it did, so I haven't changed it.
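As a cross-check on the hand-rolled loop, the forecast package offers a compact way to get rolling one-step forecasts: Arima() can re-apply an already estimated model to a longer series without re-estimating the coefficients, and fitted() then returns the one-step-ahead forecasts. A sketch, assuming ds holds the full series and the same 14676/300 train-test split as above:
library(forecast)

train <- ds[1:14676]
test  <- ds[14677:14976]

fit <- Arima(train, order = c(1, 0, 2), include.mean = TRUE, method = "ML")

# Re-apply the estimated coefficients to the full series (no re-estimation),
# then read off the one-step-ahead forecasts over the test period.
refit   <- Arima(ds[1:14976], model = fit)
onestep <- fitted(refit)[14677:14976]

sqrt(mean((onestep - test)^2))   # rolling one-step RMSE for ARIMA(1,0,2)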

How to convert one-fold cross-validation to K-fold cross-validation in R

I have a GAM model for which I would like to calculate AUC, TSS (True Skill Statistic) and RMSE through 5-fold cross-validation in R. Unfortunately, the caret package does not support GAM and therefore cannot be used. As I didn’t find any alternative, I tried to build the code for cross-validation myself, and it works well, with the only problem that it is only one-fold cross-validation. Could anybody help me to make this 5-fold? Sorry if this is an elementary question, I am new to R.
library(mgcv)   # gam()
library(pROC)   # roc(), auc()
library(caret)  # RMSE(), confusionMatrix()

# 80/20 train-test split
sample <- sample(c(TRUE, FALSE), nrow(DF), replace = TRUE, prob = c(0.8, 0.2))
train <- DF[sample, ]
test  <- DF[!sample, ]

# GAM is a model previously fitted on the training data (not shown)
predicted <- predict(GAM, test, type = "response")

# Calculating RMSE
RMSE(test$Y, predicted)

# Calculating AUC
auc(test$Y, predicted)

GAM_TSS <- gam(Y ~ X1 + X2 + X3 + X4 + s(X5, k = 3), data = train, family = "binomial")
test$pred <- predict(GAM_TSS, type = "response", newdata = test)

roc.curve <- roc(test$Y, test$pred, ci = TRUE)
plot(roc.curve)

threshold <- 0.1
CM <- confusionMatrix(factor(test$pred > threshold), factor(test$P_A == 1), positive = "TRUE")
CM <- CM$byClass
Sensitivity <- CM[['Sensitivity']]
Specificity <- CM[['Specificity']]

# Calculating TSS
TSS <- Sensitivity + Specificity - 1
TSS
I have come across precisely this problem with GAM in the past. My approach was to create a vector to split data randomly into parts as equally sized as possible, then loop through the fold ids as follows:
k <- 5
FoldID <- rep(1:k, ceiling(nrow(modelData) / k))
length(FoldID) <- nrow(modelData)          # trim to the number of rows
FoldID <- sample(FoldID, replace = FALSE)  # shuffle fold assignments

for (fold in 1:k) {
  train_data <- modelData[FoldID != fold, ]
  val_data   <- modelData[FoldID == fold, ]
  # Create training model and predictions
  # Calculate RMSE data etc.
  # Add a line with fold validation results to a dataframe
}
# Calculate column means of your validation results frame
# Calculate column means of your validation results frame
I will leave you to fill in the gaps to suit your own requirements. It would also be a good idea to add an outer loop (outside the FoldID creation) for repeats.
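For concreteness, here is one way the gaps might be filled in for the GAM above. It is a sketch, assuming DF contains the binary response Y and predictors X1-X5 as in the question, and it computes TSS directly from sensitivity and specificity at the fixed 0.1 threshold rather than via confusionMatrix():
library(mgcv)
library(pROC)

k <- 5
FoldID <- sample(rep(1:k, length.out = nrow(DF)))   # near-equal random folds

results <- data.frame(fold = 1:k, RMSE = NA, AUC = NA, TSS = NA)

for (fold in 1:k) {
  train_data <- DF[FoldID != fold, ]
  val_data   <- DF[FoldID == fold, ]

  fit  <- gam(Y ~ X1 + X2 + X3 + X4 + s(X5, k = 3), data = train_data, family = "binomial")
  pred <- predict(fit, newdata = val_data, type = "response")

  results$RMSE[fold] <- sqrt(mean((val_data$Y - pred)^2))
  results$AUC[fold]  <- as.numeric(auc(val_data$Y, pred))

  # TSS = sensitivity + specificity - 1 at the fixed 0.1 threshold
  predicted_class <- pred > 0.1
  sens <- mean(predicted_class[val_data$Y == 1])    # proportion of positives detected
  spec <- mean(!predicted_class[val_data$Y == 0])   # proportion of negatives rejected
  results$TSS[fold] <- sens + spec - 1
}

results
colMeans(results[, -1])   # average performance across the 5 folds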

Calculating RMSE for Simulated Linear Regression

I am trying to calculate the RMSE for simulated data, but the output gives NaN for the RMSE. Below is the code I am using.
library(caret)
RMSE <- function(x, y) sqrt(mean((x - y)^2))

sim.regression <- function(n.obs = 200, coefficients = c(3, 1.5, 0, 0, 2, 0, 0, 0), s.deviation = .1) {
  n.var <- length(coefficients)
  M <- matrix(0, ncol = n.var, nrow = n.obs)
  beta <- as.matrix(coefficients)
  for (i in 1:n.var) {
    M[, i] <- rnorm(n.obs, 0, 1)
  }
  y <- M %*% beta + rnorm(n.obs, 0, s.deviation)
  train.data <- y[1:150]
  train.data <- data.frame(train.data)
  test.data <- y[151:200]
  test.data <- data.frame(test.data)
  prediction <- predict(lm(y ~ M), test.data)
  RMSE.data <- RMSE(prediction, test.data$y)
  return(list(x = M, y = y, coeff = coefficients, RMSE = RMSE.data))
}
set.seed(2000)
sim.regression(100)
Welcome to SO. There were a few issues in the code:
Assuming that you are trying to learn/predict y based on M, you have to combine M and y into a data frame.
Only after that should you split off the first 150 rows for training and the remaining rows for testing.
Then you train on train.data and predict on test.data.
Also, since you have hardcoded [1:150] and [151:200] for the train-test split, you have to call sim.regression(200).
Corrected code below:
library(caret)
RMSE <- function(x, y) sqrt(mean((x - y)^2))

sim.regression <- function(n.obs = 200, coefficients = c(3, 1.5, 0, 0, 2, 0, 0, 0), s.deviation = .1) {
  n.var <- length(coefficients)
  M <- matrix(0, ncol = n.var, nrow = n.obs)
  beta <- as.matrix(coefficients)
  for (i in 1:n.var) {
    M[, i] <- rnorm(n.obs, 0, 1)
  }
  y <- M %*% beta + rnorm(n.obs, 0, s.deviation)
  data <- data.frame(M, y)          # combine predictors and response
  train.data <- data[1:150, ]
  test.data  <- data[151:200, ]
  prediction <- predict(lm(y ~ ., data = train.data), test.data)
  RMSE.data <- RMSE(prediction, test.data$y)
  return(list(x = M, y = y, coeff = coefficients, RMSE = RMSE.data))
}
set.seed(2000)
sim.regression(200)
Prints:
$RMSE
0.0755869850491716

Create model.matrix() from LASSO output

I wish to create a model matrix of the independent variables/specific levels of categorical variables selected by LASSO so that I can plug said model matrix into a glm() function to run a logistic regression.
I have included an example of what I'm trying to do. Any help would be greatly appreciated.
data("iris")
iris$Petal.Width <- factor(iris$Petal.Width)
iris$Sepal.Length2 <- ifelse(iris$Sepal.Length>=5.8,1,0)
f <- as.formula(Sepal.Length2~Sepal.Width+Petal.Length+Petal.Width+Species)
X <- model.matrix(f,iris)[,-1]
Y <- iris$Sepal.Length2
cvfit <- cv.glmnet(X,Y,alpha=1,family="binomial")
fit <- glmnet(X,Y,alpha=1,family = "binomial")
b <- coef(cvfit,s="lambda.1se")
print(b)
## This is the part I am unsure of: I want to create a model matrix of the non-zero coefficients contained within 'b'
## e.g.
lasso_x <- model.matrix(b,iris)
logistic_model <- glm.fit(lasso_x,Y,family = "binomial")
Edit:
I also tried the following:
model.matrix(~X)[which(b!=0)-1]
but it just gives me a single column of 1s, whose length equals the number of LASSO selections (minus the intercept).
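One possible way to get there, sketched under the assumption that the row names of b match the column names of X (glmnet keeps them): pull out the names of the non-zero coefficients, subset X to those columns, and refit with glm().
# Sketch: refit a plain logistic regression on the LASSO-selected columns.
bm   <- as.matrix(coef(cvfit, s = "lambda.1se"))       # dense copy of the coefficient vector
keep <- setdiff(rownames(bm)[bm[, 1] != 0], "(Intercept)")

lasso_x <- X[, keep, drop = FALSE]                     # reduced model matrix
logistic_model <- glm(Y ~ lasso_x, family = binomial)  # glm() adds its own intercept
summary(logistic_model)
Note that p-values from a glm refit on LASSO-selected columns do not account for the selection step, so treat them with care.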

Test significance between models with emmeans

Let's say I have these two models
library(emmeans)

dat1 <- data.frame(x = factor(c(1, 2, 1, 1, 2, 2)), y = c(2, 5, 2, 1, 7, 9))
dat2 <- data.frame(x = factor(c(1, 2, 1, 1, 2, 2)), y = c(3, 3, 4, 3, 4, 2))
mod1 <- lm(y ~ x, data = dat1)
mod2 <- lm(y ~ x, data = dat2)
and calculate a t test between the levels of x in each model
t1 <- pairs(emmeans(mod1, ~x))
t2 <- pairs(emmeans(mod2, ~x))
How can I assess whether the two models are significantly different for this contrast using emmeans?
dat1$dataset <- "dat1"
dat2$dataset <- "dat2"
alldat <- rbind(dat1, dat2)
modsame <- lm(y ~ x, data = alldat)
moddiff <- lm(y ~ x * dataset, data = alldat)
anova(modsame, moddiff)
Don't try to use emmeans() to do this; that isn't its purpose. The anova() call above compares the two models: modsame presumes that the x effects are the same in each dataset; moddiff adds two terms, dataset, which accounts for the change in overall mean, and x:dataset, which accounts for the change in x effects.
The comparison between the two models comprises a joint test of both the dataset and the x:dataset effects; it is an F test with 2 numerator d.f., not a t test.
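If only the change in the x contrast is of interest (and not the overall mean shift), the single-degree-of-freedom piece of that joint test can be isolated by comparing against a model that lets the means differ but keeps the x effect common; a small sketch using the objects defined above:
# Allow a dataset-specific mean but a common x effect, then test whether adding
# the x:dataset interaction (i.e. letting the contrast differ) improves the fit.
modmean <- lm(y ~ x + dataset, data = alldat)
anova(modmean, moddiff)   # 1-d.f. F test of the x:dataset term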
