Partial residual plot based on model-averaged coefficients in R

I'm using the R package MuMIn to do multimodel inference and the function model.avg to average the coefficients estimated by a set of models. To visually compare the data to the estimated relationships based on the averaged coefficients, I want to use partial residual plots, similar to the ones created by the crPlots function of the car package. I've tried three ways and I'm not sure whether any is appropriate. Here is a demonstration.
library(MuMIn)
# Loading the data
data(Cement)
# Creating a full model with all the covariates we are interested in
fullModel <- lm(y ~ ., data = Cement, na.action=na.fail)
# Getting all possible models based on the covariates of the full model
muModel <- dredge(fullModel)
# Averaging across all models
avgModel <- model.avg(muModel)
# Getting the averaged coefficients
coefMod <- coef(avgModel)
coefMod
# (Intercept) X1 X2 X4 X3
# 65.71487660 1.45607957 0.61085531 -0.49776089 -0.07148454
Option 1: Using crPlots
library(car) # For crPlots
# Creating a duplicate of the full model
hackModel <- fullModel
# Changing the coefficients to the averaged coefficients
hackModel$coefficients <- coefMod[names(coef(fullModel))]
# Changing the residuals
hackModel$residuals <- Cement$y - predict(hackModel)
# Plot the hacked model vs the full model
layout(matrix(1:8, nrow=2, byrow=TRUE))
crPlots(hackModel, layout=NA)
crPlots(fullModel, layout=NA)
Notice that the crPlots of the full model and of the hacked version with the averaged coefficients are different.
The question here is: Is this appropriate? The results rely on a hack that I found in this answer. Do I need to change parts of the model other than the residuals and the coefficients?
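One quick check I can add here (my own sketch, not part of the original hack): predict.lm() rebuilds the fitted values from model$coefficients and the model matrix, so the swapped-in coefficients really are the ones used when the residuals are recomputed above.
# predict.lm() recomputes X %*% beta from model$coefficients, so the hacked
# coefficients are really the ones being used; this should come out TRUE.
all.equal(unname(predict(hackModel)),
          unname(drop(model.matrix(fullModel) %*% hackModel$coefficients)))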
Option 2: Homemade plots
# Partial residuals: residuals(hacked model) + beta*x
# X1
# Get partial residuals
prX1 <- resid(hackModel) + coefMod["X1"]*Cement$X1
# Plot the partial residuals
plot(prX1 ~ Cement$X1)
# Add modeled relationship
abline(a=0,b=coefMod["X1"])
# X2 - X4
plot(resid(hackModel) + coefMod["X2"]*X2 ~ X2, data=Cement); abline(a=0,b=coefMod["X2"])
plot(resid(hackModel) + coefMod["X3"]*X3 ~ X3, data=Cement); abline(a=0,b=coefMod["X3"])
plot(resid(hackModel) + coefMod["X4"]*X4 ~ X4, data=Cement); abline(a=0,b=coefMod["X4"])
The plots look different from the ones produced by crPlots above.
The partial residuals have similar patterns, but their values and the modeled relationships are different. The difference in values appears to be due to the fact that crPlots uses centered partial residuals (see this answer for a discussion of partial residuals in R); a quick check of the centering is sketched below. This brings me to my third option.
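A small check of the centering (my own sketch, relying on residuals(..., type="partial") for lm objects adding the centered term contributions to the ordinary residuals):
# R's "partial" residuals are the ordinary residuals plus the centered term
# contributions, i.e. beta_j * (x_j - mean(x_j)); this should come out TRUE.
all.equal(unname(resid(hackModel, type="partial")[, "X1"]),
          unname(resid(hackModel) + coefMod["X1"]*(Cement$X1 - mean(Cement$X1))))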
Option 3: Homemade plots with centered partial residuals
# Get the centered partial residuals
pRes <- resid(hackModel, type='partial')
# X1
# Plot the partial residuals
plot(pRes[,"X1"] ~ Cement$X1)
# Add the modeled relationship (the component)
lines(coefMod["X1"]*(X1-mean(X1))~X1, data=Cement)
# X2 - X4
plot(pRes[,"X2"] ~ Cement$X2); lines(coefMod["X2"]*(X2-mean(X2))~X2, data=Cement)
plot(pRes[,"X3"] ~ Cement$X3); lines(coefMod["X3"]*(X3-mean(X3))~X3, data=Cement)
plot(pRes[,"X4"] ~ Cement$X4); lines(coefMod["X4"]*(X4-mean(X4))~X4, data=Cement)
Now the values are similar to those from the crPlots above, but the fitted relationships are still different. The difference may be related to the intercepts, but I'm not sure what I should use instead of 0.
Any suggestions of which method is more appropriate? Is there a more straightforward way to get the partial residual plots based on model averaged coefficients?
Many thanks!

From looking at the crPlot.lm source code, it appears that only residuals(model, type="partial"), predict(model, type="terms", term=var), and a few functions that retrieve the names of the variables are called on the model object. It also looks like the plotted relationship is regressed, as @BenBolker suggested: the code used in crPlot.lm is abline(lm(partial.res[,var]~.x), lty=2, lwd=lwd, col=col.lines[1]). I therefore think that changing the coefficients and residuals of the model is sufficient to be able to use crPlots on it. I can now also reproduce the results in a homemade way.
library(MuMIn)
# Loading the data
data(Cement)
# Creating a full model with all the covariates we are interested in
fullModel <- lm(y ~ ., data = Cement, na.action=na.fail)
# Getting all possible models based on the covariates of the full model
muModel <- dredge(fullModel)
# Averaging across all models
avgModel <- model.avg(muModel)
# Getting the averaged coefficients
coefMod <- coef(avgModel)
# Option 1 - crPlots
library(car) # For crPlots
# Creating a duplicate of the full model
hackModel <- fullModel
# Changing the coefficients to the averaged coefficients
hackModel$coefficients <- coefMod[names(coef(fullModel))]
# Changing the residuals
hackModel$residuals <- Cement$y - predict(hackModel)
# Plot the crPlots and the regressed homemade version
layout(matrix(1:8, nrow=2, byrow=TRUE))
par(mar=c(3.5,3.5,0.5,0.5), mgp=c(2,1,0))
crPlots(hackModel, layout=NA, ylab="Partial Res", smooth=FALSE)
# Option 4 - Homemade centered and regressed
# Get the centered partial residuals
pRes <- resid(hackModel, type='partial')
# X1 - X4: plot the partial residuals and use lm for the fitted relationship
plot(pRes[,"X1"] ~ Cement$X1); abline(lm(pRes[,"X1"]~Cement$X1))
plot(pRes[,"X2"] ~ Cement$X2); abline(lm(pRes[,"X2"]~Cement$X2))
plot(pRes[,"X3"] ~ Cement$X3); abline(lm(pRes[,"X3"]~Cement$X3))
plot(pRes[,"X4"] ~ Cement$X4); abline(lm(pRes[,"X4"]~Cement$X4))

Related

Is there a way to both include PCSE and Prais-Winsten correction in a fixed effects model in R (similar to the xtpcse function in Stata)?

I want to estimate a fixed effects model while using panel-corrected standard errors as well as a Prais-Winsten (AR1) transformation, in order to address panel heteroscedasticity, contemporaneous spatial correlation, and autocorrelation.
I have time-series cross-section data and want to perform regression analysis. I was able to estimate a fixed effects model, panel-corrected standard errors, and Prais-Winsten estimates individually, and I was able to include panel-corrected standard errors in a fixed effects model. But I want them all at once.
# Basic ols model
ols1 <- lm(y ~ x1 + x2, data = data)
summary(ols1)
# Fixed effects model
library('plm')
plm1 <- plm(y ~ x1 + x2, data = data, model = 'within')
summary(plm1)
# Panel Corrected Standard Errors
library(pcse)
lm.pcse1 <- pcse(ols1, groupN = Country, groupT = Time)
summary(lm.pcse1)
# Prais-Winsten estimates
library(prais)
prais1 <- prais_winsten(y ~ x1 + x2, data = data)
summary(prais1)
# Combination of Fixed effects and Panel Corrected Standard Errors
ols.fe <- lm(y ~ x1 + x2 + factor(Country) - 1, data = data)
pcse.fe <- pcse(ols.fe, groupN = Country, groupT = Time)
summary(pcse.fe)
In the Stata command xtpcse it is possible to include both panel-corrected standard errors and Prais-Winsten corrected estimates, with something along the lines of the following code:
xtpcse y x x x i.cc, c(ar1)
I would like to achieve this in R as well.
I am not sure that my answer will completely address your concern; these days I've been trying to deal with the same problem that you mention.
In my case, I ran the Prais-Winsten function from the package prais where I included my model with the fixed effects. Afterwards, I correct for heteroskedasticity using the function vcovHC.prais which is analogous to vcovHC function from the package sandwich.
This basically gives you White's/sandwich heteroskedasticity-consistent covariance matrix, which, if you then pass it to the function coeftest from the package lmtest, gives you the table output with the corrected standard errors. Taking your posted example, see below the code that I have used:
# Prais-Winsten estimates with Fixed Effects
library(prais)
prais.fe <- prais_winsten(y ~ x1 + x2 + factor(Country), data = data)
library(lmtest)
prais.fe.w <- coeftest(prais.fe, vcov = vcovHC.prais(prais.fe, "HC1"))
prais.fe.w # print the object to see the output with the corrected standard errors
Alas, I am aware that the sandwich heteroskedasticity-consistent standard errors are not exactly the same as Beck and Katz's PCSEs, because PCSE deals with panel heteroskedasticity while sandwich SEs address overall heteroskedasticity. I am not totally sure how much these two differ in practice, but something is something.
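As a hedged alternative sketch (not part of the answer above, and it drops the Prais-Winsten AR(1) transformation): plm provides the Beck and Katz covariance estimator vcovBK, which can be combined with a within (fixed effects) model; my understanding is that cluster = "time" corresponds to the panel-corrected flavour.
library(plm)
library(lmtest)
# Fixed effects (within) model on the panel; assumes Country/Time index columns in `data`
plm.fe <- plm(y ~ x1 + x2, data = data, index = c("Country", "Time"), model = "within")
# Beck & Katz covariance matrix, then a coefficient table using it
coeftest(plm.fe, vcov = vcovBK(plm.fe, cluster = "time"))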
I hope my answer was somehow helpful; this is actually my very first answer :D

Categorical Regression with Centered Levels

R's standard way of doing regression on categorical variables is to select one factor level as a reference level and constrain the effect of that level to be zero. Instead of constraining a single level's effect to be zero, I'd like to constrain the sum of the coefficients to be zero.
I can hack together coefficient estimates for this manually after fitting the model the standard way:
x <- lm(data = mtcars, mpg ~ factor(cyl))
z <- c(coef(x), "factor(cyl)4" = 0)
y <- mean(z[-1])
z[-1] <- z[-1] - y
z[1] <- z[1] + y
z
## (Intercept) factor(cyl)6 factor(cyl)8 factor(cyl)4
## 20.5021645 -0.7593074 -5.4021645 6.1614719
But that leaves me without standard error estimates for the former reference level that I just added as an explicit effect, and I need to have those as well.
I did some searching and found the contrasts functions, and tried
lm(data = mtcars, mpg ~ C(factor(cyl), contr = contr.sum))
but this still only produces two effect estimates. Is there a way to change which constraint R uses for linear regression on categorical variables properly?
Think I've figured it out. Using contrasts actually is the right way to go about it; you just need to do a little work to get the results into a convenient-looking form. Here's the fit:
fit <- lm(data = mtcars, mpg ~ C(factor(cyl), contr = contr.sum))
Then the contrast matrix cs <- contr.sum(levels(factor(mtcars$cyl))) is used to get the effect estimates and their standard errors.
The effect estimates just come from multiplying the contrast matrix by the effect estimates lm spits out, like so:
cs %*% coef(fit)[-1]
The standard errors can be calculated from the contrast matrix and the variance-covariance matrix of the coefficients (the diag() call gives the variances, so take the square root), like so:
sqrt(diag(cs %*% vcov(fit)[-1,-1] %*% t(cs)))
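Pulling the pieces together, a compact sketch of my own based on the steps above and the mtcars data:
fit <- lm(mpg ~ C(factor(cyl), contr = contr.sum), data = mtcars)
cs  <- contr.sum(levels(factor(mtcars$cyl)))            # 3 x 2 contrast matrix
est <- drop(cs %*% coef(fit)[-1])                       # sum-to-zero level effects
se  <- sqrt(diag(cs %*% vcov(fit)[-1, -1] %*% t(cs)))   # their standard errors
cbind(estimate = est, std.error = se)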

r loess: coefficients of global "parametric" terms

Is there a way to extract the coefficients of globally fitted ("parametric") terms in local regression modeling?
Maybe I misunderstand the role of globally fitted terms in the function loess, but what I would like to have is the following:
# baseline:
x <- sin(seq(0.2,0.6,length.out=100)*pi)
# noise:
x_noise <- rnorm(length(x),0,0.1)
# known structure:
x_1 <- sin(seq(5,20,length.out=100))
# signal:
y <- x + x_1*0.25 + x_noise
# fit loess model:
x_seq <- seq_along(x)
mod <- loess(y ~ x_seq + x_1, parametric="x_1")
The fit works fine; however, how can I extract the estimated value of the globally fitted term x_1 (i.e. some value near 0.25 for the example above)?
Finally, I found a solution to my problem using the function gam from the package gam:
require(gam)
mod2 <- gam(y ~ lo(x_seq,span=0.75,degree=2) + x_1)
However, the fits from the two models are not exactly the same (which might be due to different control settings?)...
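Assuming the gam fit above, the estimate of the parametric term can then be read off with coef(); it should be close to the true value of 0.25 used to generate the data.
# Coefficient of the globally (parametrically) fitted term in the gam model
coef(mod2)["x_1"]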

'predict' gives different results than using manually the coefficients from 'summary'

Let me state my confusion with the help of an example:
#making datasets
x1<-iris[,1]
x2<-iris[,2]
x3<-iris[,3]
x4<-iris[,4]
dat<-data.frame(x1,x2,x3)
dat2<-dat[1:120,]
dat3<-dat[121:150,]
#Using a linear model to fit x4 using x1, x2 and x3 where training set is first 120 obs.
model<-lm(x4[1:120]~x1[1:120]+x2[1:120]+x3[1:120])
#Using the coefficients' values from summary(model), prediction is done for the next 30 obs.
-.17947-.18538*x1[121:150]+.18243*x2[121:150]+.49998*x3[121:150]
#Same prediction is done using the function "predict"
predict(model,dat3)
My confusion is: the two outcomes of predicting the last 30 values differ, maybe only to a small extent, but they do differ. Why is that so? Shouldn't they be exactly the same?
The difference is really small, and I think it is just due to the precision of the coefficients you are using (e.g. the real value of the intercept is -0.17947075338464965610... not simply -.17947).
In fact, if you take the coefficients value and apply the formula, the result is equal to predict:
intercept <- model$coefficients[1]
x1Coeff <- model$coefficients[2]
x2Coeff <- model$coefficients[3]
x3Coeff <- model$coefficients[4]
intercept + x1Coeff*x1[121:150] + x2Coeff*x2[121:150] + x3Coeff*x3[121:150]
You can clean your code a bit. To create your training and test datasets you can use the following code:
# create training and test datasets
train.df <- iris[1:120, 1:4]
test.df <- iris[-(1:120), 1:4]
# fit a linear model to predict Petal.Width using all predictors
fit <- lm(Petal.Width ~ ., data = train.df)
summary(fit)
# predict Petal.Width in the test set using the linear model
predictions <- predict(fit, test.df)
# create a function mse() to calculate the Mean Squared Error
mse <- function(predictions, obs) {
sum((obs - predictions) ^ 2) / length(predictions)
}
# measure the quality of fit
mse(predictions, test.df$Petal.Width)
The reason your predictions differ is that the function predict() uses the coefficients at full precision, whereas in your "manual" calculation you are using only five decimal places. The summary() function doesn't display the complete value of your coefficients but rounds them to five decimal places to make the output more readable.
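A quick way to see (and use) the unrounded coefficients, sketched against the fit object from the answer above:
# Show the coefficients with more digits than summary() prints
format(coef(fit), digits = 15)
# Manual prediction with full-precision coefficients matches predict()
manual <- drop(cbind(1, as.matrix(test.df[, 1:3])) %*% coef(fit))
all.equal(manual, predict(fit, test.df))  # should be TRUE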

linear predictor - ordered probit (ordinal, clm)

I have a question regarding the ordinal package in R, specifically regarding the predict.clm() function. I would like to calculate the linear predictor of an ordered probit estimation. With the polr function of the MASS package, the linear predictor can be accessed via object$lp. It gives me one value per row, which matches what I understand the linear predictor to be, namely X_i'beta. If I instead use predict.clm(object, newdata, "linear.predictor") on an ordered probit estimation with clm(), I get a list with the elements eta1 and eta2,
with one column each, if the newdata contains the dependent variable,
or with as many columns per element as there are levels in the dependent variable, if the newdata doesn't contain the dependent variable.
Unfortunately I don't have a clue what that means. Also, in the documentation and papers of the author I don't find any information about it. Would one of you be so nice as to enlighten me? That would be great.
Cheers,
AK
UPDATE (after comment):
The basic clm model is defined as link(P(Y_i <= j)) = theta_j - x_i'beta, j = 1, ..., J-1 (see the clm tutorial for details).
Generating data:
library(ordinal)
set.seed(1)
test.data = data.frame(y=gl(4,5),
x=matrix(c(sample(1:4,20,T)+rnorm(20), rnorm(20)), ncol=2))
head(test.data) # two independent variables
test.data$y # four levels in y
Constructing models:
library(MASS) # polr comes from MASS
fm.polr <- polr(y ~ x.1 + x.2, data=test.data) # using polr (the matrix was split into columns x.1 and x.2)
fm.clm <- clm(y ~ x.1 + x.2, data=test.data)   # using clm
Now we can access thetas and betas (see formula above):
# Thetas
fm.polr$zeta # using polr
fm.clm$alpha # using clm
# Betas
fm.polr$coefficients # using polr
fm.clm$beta # using clm
Obtaining the linear predictors (only the x_i'beta part of the formula, without the thetas):
fm.polr$lp # using polr
apply(test.data[,2:3], 1, function(x) sum(fm.clm$beta*x)) # using clm
New data generation:
# Contains only independent variables
new.data <- data.frame(x=matrix(c(rnorm(10)+sample(1:4,10,T), rnorm(10)), ncol=2))
new.data[1,] <- c(0,0) # intentionally for demonstration purpose
new.data
There are four types of predictions available for a clm model. We are interested in type="linear.predictor", which returns a list with two matrices, eta1 and eta2. They contain the linear predictors for each observation in new.data:
lp.clm <- predict(fm.clm, new.data, type="linear.predictor")
lp.clm
Note 1: eta1 and eta2 contain essentially the same values; eta2 is just eta1 shifted by one position in the j (category) index. Thus they leave the left and right ends of the linear predictor scale open, respectively.
all.equal(lp.clm$eta1[,1:3], lp.clm$eta2[,2:4], check.attributes=FALSE)
# [1] TRUE
Note 2: The prediction for the first row of new.data is equal to the thetas (since we set that row to zeros).
all.equal(lp.clm$eta1[1,1:3], fm.clm$alpha, check.attributes=FALSE)
# [1] TRUE
Note 3: We can construct such predictions manually. For instance, the prediction for the second row of new.data:
second.line <- fm.clm$alpha - sum(fm.clm$beta*new.data[2,])
all.equal(lp.clm$eta1[2,1:3], second.line, check.attributes=FALSE)
# [1] TRUE
Note 4: If new.data contains the response variable, then predict returns only the linear predictor for the observed level of y. Again we can check it manually:
new.data$y <- gl(4,3,length=10)
lp.clm.y <- predict(fm.clm, new.data, type="linear.predictor")
lp.clm.y
lp.manual <- sapply(1:10, function(i) lp.clm$eta1[i,new.data$y[i]])
all.equal(lp.clm.y$eta1, lp.manual)
# [1] TRUE
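As a final sketch of my own (not part of the original answer): with the default logit link, plogis(eta1[i,j]) is the cumulative probability P(Y_i <= j), so differencing the cumulative probabilities recovers predict(..., type="prob"). For an actual probit link you would use pnorm() instead of plogis().
# Cumulative probabilities from the first three columns of eta1 (theta_1..theta_3)
cum3 <- plogis(lp.clm$eta1[, 1:3])
# Category probabilities: P(Y = j) = P(Y <= j) - P(Y <= j-1), with P(Y <= 4) = 1
probs.manual <- cbind(cum3, 1) - cbind(0, cum3)
probs.pred <- predict(fm.clm, new.data[, c("x.1", "x.2")], type="prob")$fit
all.equal(probs.manual, probs.pred, check.attributes=FALSE)
# should be TRUE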
