How to replicate Stata's "margins at" in R after lm()

From Stata:
margins, at(age=40)
To understand why that yields the desired result, let us tell you that if you were to type
. margins
margins would report the overall margin—the margin that holds nothing constant. Because our model
is logistic, the average value of the predicted probabilities would be reported. The at() option fixes
one or more covariates to the value(s) specified and can be used with both factor and continuous
variables. Thus, if you typed
margins, at(age=40)
then margins would average, over the data, the responses for everybody, setting age=40.
Could someone suggest which package would be useful? I already tried taking the mean of the predicted values for the subset data, but that doesn't work for sequences, for example margins, at(age=40(1)50).

There are many ways to get marginal effects in R.
You should understand that Stata's margins, at() simply gives marginal effects evaluated at means or at representative points (see the Stata documentation).
I think that you'll like this solution best as it's most similar to what you're used to:
library(devtools)
install_github("leeper/margins")
Source: https://github.com/leeper/margins
margins is an effort to port Stata's (closed source) margins command
to R as an S3 generic method for calculating the marginal effects (or
"partial effects") of covariates included in model objects (like those
of classes "lm" and "glm"). A plot method for the new "margins" class
additionally ports the marginsplot command.
library(margins)
x <- lm(mpg ~ cyl * hp + wt, data = mtcars)
(m <- margins(x))
#     cyl       hp       wt
# 0.03814 -0.04632 -3.11981
See also the prediction command (?prediction) in this package.
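If you specifically need Stata's at() behavior, note that margins() also accepts an at argument (e.g. margins(x, at = list(hp = c(100, 150)))). You can likewise reproduce it in a few lines of base R by fixing the covariate at each value for every observation, predicting, and averaging; a minimal sketch (the mtcars model and the choice of hp as the at variable are just for illustration):
# replicate margins, at(hp = 100(25)200) after lm()
fit <- lm(mpg ~ cyl + hp + wt, data = mtcars)
at_values <- seq(100, 200, by = 25)
sapply(at_values, function(v) {
  d <- mtcars
  d$hp <- v                        # fix the covariate for every row
  mean(predict(fit, newdata = d))  # average predicted response at hp = v
})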
Aside from that, here are some other solutions I've compiled:
I. erer (package)
maBina() command
http://cran.r-project.org/web/packages/erer/erer.pdf
II. mfxboot
mfxboot <- function(modform, dist, data, boot = 1000, digits = 3) {
  x <- glm(modform, family = binomial(link = dist), data)
  # get marginal effects: scale factor is the average density of the link
  pdf <- ifelse(dist == "probit",
                mean(dnorm(predict(x, type = "link"))),
                mean(dlogis(predict(x, type = "link"))))
  marginal.effects <- pdf * coef(x)
  # start bootstrap
  bootvals <- matrix(rep(NA, boot * length(coef(x))), nrow = boot)
  set.seed(1111)
  for (i in 1:boot) {
    samp1 <- data[sample(1:dim(data)[1], replace = T, dim(data)[1]), ]
    x1 <- glm(modform, family = binomial(link = dist), samp1)
    # compute the scale factor from the bootstrap refit x1
    pdf1 <- ifelse(dist == "probit",
                   mean(dnorm(predict(x1, type = "link"))),
                   mean(dlogis(predict(x1, type = "link"))))
    bootvals[i, ] <- pdf1 * coef(x1)
  }
  res <- cbind(marginal.effects, apply(bootvals, 2, sd),
               marginal.effects / apply(bootvals, 2, sd))
  if (names(x$coefficients[1]) == "(Intercept)") {
    res1 <- res[2:nrow(res), ]
    res2 <- matrix(as.numeric(sprintf(paste("%.", paste(digits, "f", sep = ""), sep = ""), res1)),
                   nrow = dim(res1)[1])
    rownames(res2) <- rownames(res1)
  } else {
    res2 <- matrix(as.numeric(sprintf(paste("%.", paste(digits, "f", sep = ""), sep = ""), res)),
                   nrow = dim(res)[1])
    rownames(res2) <- rownames(res)
  }
  colnames(res2) <- c("marginal.effect", "standard.error", "z.ratio")
  return(res2)
}
Source: http://www.r-bloggers.com/probitlogit-marginal-effects-in-r/
III. Source: R probit regression marginal effects
x1 = rbinom(100,1,.5)
x2 = rbinom(100,1,.3)
x3 = rbinom(100,1,.9)
ystar = -.5 + x1 + x2 - x3 + rnorm(100)
y = ifelse(ystar>0,1,0)
probit = glm(y~x1 + x2 + x3, family=binomial(link='probit'))
xbar <- colMeans(cbind(1, x1, x2, x3)) # regressor means, with a leading 1 for the intercept
Now the graphics, i.e., the marginal effects of x1, x2 and x3:
library(arm)
# scaling the probit index by 1.6 approximates the probit CDF with the inverse logit
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*x + probit$coef[3]*xbar[3] + probit$coef[4]*xbar[4]))) # x1
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*xbar[2] + probit$coef[3]*x + probit$coef[4]*xbar[4]))) # x2
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*xbar[2] + probit$coef[3]*xbar[3] + probit$coef[4]*x))) # x3
library(AER)
data(SwissLabor)
mfx1 <- mfxboot(participation ~ . + I(age^2),"probit",SwissLabor)
mfx2 <- mfxboot(participation ~ . + I(age^2),"logit",SwissLabor)
mfx3 <- mfxboot(participation ~ . + I(age^2),"probit",SwissLabor,boot=100,digits=4)
mfxdat <- data.frame(cbind(rownames(mfx1),mfx1))
mfxdat$me <- as.numeric(as.character(mfxdat$marginal.effect))
mfxdat$se <- as.numeric(as.character(mfxdat$standard.error))
# coefplot
library(ggplot2)
ggplot(mfxdat, aes(V1, marginal.effect, ymin = me - 2*se, ymax = me + 2*se)) +
  scale_x_discrete('Variable') +
  scale_y_continuous('Marginal Effect', limits = c(-0.5, 1)) +
  theme_bw() +
  geom_errorbar(aes(x = V1, y = me), size = .3, width = .2) +
  geom_point(aes(x = V1, y = me)) +
  geom_hline(yintercept = 0) +
  coord_flip() +
  ggtitle("Marginal Effects with 95% Confidence Intervals") # opts() is defunct in current ggplot2

Related

Multiple minimal models in R forward stepwise regression

In R stepwise forward regression, I would like to specify several minimal models. I am looking for the best model with choices among 12 variables (6 flow variables Q_ and 6 precipitation variables LE_).
The biggest model takes all the variables into account:
formule <- "Q ~ 0 + Q_minus_1h + Q_minus_2h + Q_minus_3h + Q_minus_4h + Q_minus_5h + Q_minus_6h + LE_6h + LE_12h + LE_18h + LE_24h + LE_30h + LE_36h"
biggest <- formula(lm(formule, Sub_fit))
where Sub_fit is my data set (a data frame with Q and my 12 variables).
I would like to have at least one variable "LE_XX" in my model. So my minimal model could be:
formule <- "Q ~ 0 + LE_6h"
smallest <- formula(lm(formule, Sub_fit))
OR
formule <- "Q ~ 0 + LE_12h"
smallest <- formula(lm(formule, Sub_fit))
OR...
formule <- "Q ~ 0 + LE_36h"
smallest <- formula(lm(formule, Sub_fit))
And finally:
modele.res <- step(lm(as.formula("Q ~ 0"),data=Sub_fit), direction='forward', scope=list(lower=smallest, upper=biggest))
"lower", into "scope", does not allow a list but should be one unique formula. Is it possible to do what I need ?
To specify several minimal models in stepwise forward regression, create the smallest formulas with, for instance, lapply and then loop through them.
In the example below, the built-in data set mtcars is used to fit several models with mpg as the response, one for each of the last 3 variables in the data set.
data(mtcars)
biggest <- mpg ~ .
sml <- names(mtcars)[9:11]
small_list <- lapply(sml, function(x) {
  fmla <- paste("mpg", x, sep = "~")
  as.formula(fmla)
})
names(small_list) <- sml
fit <- lm(mpg ~ ., mtcars)
fit_list <- lapply(small_list, function(smallest){
  step(fit, scope = list(lower = smallest, upper = biggest))
})
Now select the best model, using AIC as the criterion:
min_aic <- sapply(fit_list, AIC)
min_aic
# am gear carb
#154.1194 155.9852 154.5631
fit_list[[which.min(min_aic)]]
The stepwise function in the StepReg package can force the variables you want into all models during the stepwise regression.
library(StepReg)
f1 <- Q ~ 0 + Q_minus_1h + Q_minus_2h + Q_minus_3h + Q_minus_4h + Q_minus_5h + Q_minus_6h + LE_6h + LE_12h + LE_18h + LE_24h + LE_30h + LE_36h
## include LE_6h in the model
stepwise(formula = f1,
         data = yourdata,
         include = "LE_6h",
         selection = "forward",
         select = "AIC")
## include LE_6h and LE_12h in the model
stepwise(formula = f1,
         data = yourdata,
         include = c("LE_6h", "LE_12h"),
         selection = "forward",
         select = "AIC")

How to run different multiple linear regressions in R, Excel/VBA on a time series data for all different combinations of Explanatory Variables?

I am new to coding and R and would like your help. For my analysis, I am trying to run regressions on time series data with 1 dependent variable (Y) and 4 independent variables (X1, X2, X3, X4). Each of these variables (Y and the X's) has 4 different transformations (for example, for X1: X1, SQRT(X1), Square(X1) and Ln(X1)). I want to run regressions for all possible combinations of Y (Y, SQRT(Y), Square(Y), Ln(Y)) and all combinations of the X values, so that in the end I can decide from the R-squared values which variable to choose and in which transformation.
I am currently using R code for linear regression and changing the variables manually, which is taking a lot of time. Maybe there is a loop or something I can use for the regressions? Waiting for your kind help. Thanks
lm(Y ~ X1 + X2 + X3 + X4)
lm(SQRT(Y) ~ X1 + X2 + X3 + X4)
lm(Square(Y) ~ X1 + X2 + X3 + X4)
lm(Ln(Y) ~ X1 + X2 + X3 + X4)
lm(Y ~ SQRT(X1) + X2 + X3 + X4)
lm(Y ~ Square(X1) + X2 + X3 + X4)
....
lm(ln(Y)~ ln(X1) + ln(X2) + ln(X3) + ln(X4))
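Note that SQRT, Square and Ln above are placeholders rather than R functions; in R formula syntax these transformations are written with sqrt(), I(^2) and log(). For example, assuming the variables are columns of a data frame df:
lm(sqrt(Y) ~ X1 + X2 + X3 + X4, data = df)
lm(I(Y^2) ~ X1 + X2 + X3 + X4, data = df)
lm(log(Y) ~ log(X1) + log(X2) + log(X3) + log(X4), data = df)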
This is my original code.
Regression10 <- lm(Final_Data_v2$`10 KW Installations (MW)`~Final_Data_v2$`10 KW Prio Installations (MW)`+Final_Data_v2$`FiT 10 KW (Cent/kWh)`+Final_Data_v2$`Electricity Prices 10 kW Cent/kW`+Final_Data_v2$`PV System Price (Eur/W)`)
summary(Regression10)
Regressionsqrt10 <- lm(Final_Data_v2$`SQRT(10 KW Installations (MW))`~Final_Data_v2$`10 KW Prio Installations (MW)`+Final_Data_v2$`FiT 10 KW (Cent/kWh)`+Final_Data_v2$`Electricity Prices 10 kW Cent/kW`+Final_Data_v2$`PV System Price (Eur/W)`)
summary(Regressionsqrt10)
And so on...
Here is the link to my DATA: LINK
This picks the transformations of Y and the RHS variables such that adjusted R-squared is maximized. Be warned, though: this kind of specification search will almost certainly lead to spurious results.
# simulate some data
set.seed(0)
df <- data.frame(Y = runif(100),
                 X1 = runif(100),
                 X2 = runif(100),
                 X3 = runif(100),
                 X4 = runif(100))
# create new variables for log/sqrt transformations of every X and Y
for(x in names(df)){
  df[[paste0(x, "_log")]] <- log(df[[x]])
  df[[paste0(x, "_sqrt")]] <- sqrt(df[[x]])
}
# all combinations of Y and X's
yVars <- names(df)[substr(names(df), 1, 1) == 'Y']
xVars <- names(df)[substr(names(df), 1, 1) == 'X']
library(magrittr) # for the %>% pipe
df2 <- combn(c(yVars, xVars), 5) %>% data.frame()
# Keep only combinations that contain exactly one variant of Y and of each X
# (checking counts rather than positions, since combn keeps input order and
# a transformed X1 can appear after the raw X2-X4)
valid <- function(x){
  sum(grepl("Y", x)) == 1 &
    sum(grepl("X1", x)) == 1 &
    sum(grepl("X2", x)) == 1 &
    sum(grepl("X3", x)) == 1 &
    sum(grepl("X4", x)) == 1
}
df2 <- df2[, sapply(df2, valid)]
# Create the formulas
formulas <- sapply(names(df2), function(x){
  paste0(df2[[x]][1], " ~ ",
         df2[[x]][2], " + ",
         df2[[x]][3], " + ",
         df2[[x]][4], " + ",
         df2[[x]][5])
})
# Run linear model for each formula
models <- lapply(formulas, function(x) summary(lm(as.formula(x), data=df)))
# Return the formula that maximizes adjusted R-squared
formulas[which.max(sapply(models, function(x) x[['adj.r.squared']]))]
# "Y ~ X1 + X2 + X3 + X4_log"
Consider expand.grid to build all combinations of transformed variables, filtering the vector of candidate terms for each column name with grep. Then call a model function that takes a dynamic formula with Map (a wrapper to mapply) to build a list of lm objects, one per combination of transformations, N = 4^5 = 1,024 items.
Below runs the equivalent polynomial operations for square root and squared. Note: grep is the only adjustment required for your actual variable names; because the names contain regex metacharacters such as parentheses, match them with fixed = TRUE.
# backticks go around the raw column names so that transformed terms
# such as I(`name`^2) remain valid formula syntax
coeffs <- c(paste0("`", names(Final_Data_v2), "`"),
            paste0("I(`", names(Final_Data_v2), "`^(1/2))"),
            paste0("I(`", names(Final_Data_v2), "`^2)"),
            paste0("log(`", names(Final_Data_v2), "`)"))
# BUILD DATA FRAME OF ALL COMBNS OF VARIABLE AND TRANSFORMATION TYPES
all_combns <- expand.grid(y_var  = coeffs[grep("10 KW Installations (MW)", coeffs, fixed = TRUE)],
                          x_var1 = coeffs[grep("10 KW Prio Installations (MW)", coeffs, fixed = TRUE)],
                          x_var2 = coeffs[grep("FiT 10 KW (Cent/kWh)", coeffs, fixed = TRUE)],
                          x_var3 = coeffs[grep("Electricity Prices 10 kW Cent/kW", coeffs, fixed = TRUE)],
                          x_var4 = coeffs[grep("PV System Price (Eur/W)", coeffs, fixed = TRUE)],
                          stringsAsFactors = FALSE)
# FUNCTION WITH DYNAMIC FORMULA TO RECEIVE ALL POLYNOMIAL TYPES
# (terms in coeffs are already backtick-quoted, so they can be pasted directly)
proc_model <- function(y, x1, x2, x3, x4) {
  myformula <- paste0(y, " ~ ", x1, " + ", x2, " + ", x3, " + ", x4)
  summary(lm(as.formula(myformula), data = Final_Data_v2))
}
# MAP CALL PASSING COLUMN VALUES ELEMENTWISE AS FUNCTION PARAMS
lm_list <- with(all_combns, Map(proc_model, y_var, x_var1, x_var2, x_var3, x_var4))
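From there you can, for example, rank the specifications by adjusted R-squared; a short sketch (lm_list holds the summary objects built above):
# locate the best-fitting combination of transformations
adj_r2 <- sapply(lm_list, `[[`, "adj.r.squared")
best <- which.max(adj_r2)
lm_list[[best]]     # summary of the winning specification
all_combns[best, ]  # the transformations it used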

PLS regression in R: Testing alternative model specifications

In R, I would like to test the specification of a partial least squares (PLS) model m1 against a non-nested alternative m2, applying the Davidson-MacKinnon J test. For a simple linear outcome Y this works quite well using the plsr estimator followed by the jtest command:
# Libraries and data
library(pls)     # provides plsr()
library(plsRglm)
library(lmtest)
Z <- Cornell # illustration dataset that ships with the plsRglm package
# Simple linear model
m1 <- plsr(Z$Y ~ Z$X1 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2) # including X1
m2 <- plsr(Z$Y ~ Z$X6 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2) # including X6 as alternative
jtest(m1,m2)
However, if I use the generalized linear model estimator (plsRglm) to account for a possibly non-normal distribution of the outcome, e.g.:
# Generalized Model
m1 <- plsRglm(Z$Y ~ Z$X1 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2, modele = "pls-glm-family", family=Gamma(link = "log"), pvals.expli=TRUE)
m2 <- plsRglm(Z$Y ~ Z$X6 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2, modele = "pls-glm-family", family=Gamma(link = "log"), pvals.expli=TRUE)
I am running into an error when using jtest:
> jtest(m1,m2)
Error in terms.default(formula1) : no terms component nor attribute
>
It seems that plsRglm does not store a "formula" object that jtest can handle. Does anybody have a suggestion for how to edit my code to get this to work?
Thanks!
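For reference, the J test itself is easy to carry out by hand: augment each model with the fitted values of the competing model and test the significance of that added regressor. A minimal sketch with plain glm fits (not plsRglm; this only illustrates the mechanics that jtest automates, using the same Cornell data):
# manual Davidson-MacKinnon J test for two non-nested specifications
g1 <- glm(Y ~ X1 + X2 + X3 + X4 + X5, data = Z, family = Gamma(link = "log"))
g2 <- glm(Y ~ X6 + X2 + X3 + X4 + X5, data = Z, family = Gamma(link = "log"))
# test g1 against g2: add g2's fitted values to g1 and check their significance
j1 <- glm(Y ~ X1 + X2 + X3 + X4 + X5 + fitted(g2), data = Z,
          family = Gamma(link = "log"))
summary(j1)$coefficients["fitted(g2)", ] # significant => evidence against g1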

glmer logit - interaction effects on probability scale (replicating `effects` with `predict`)

I am running glmer logit models using the lme4 package. I am interested in various two- and three-way interaction effects and their interpretations. To simplify, I am only concerned with the fixed-effects coefficients.
I managed to come up with code to calculate and plot these effects on the logit scale, but I am having trouble transforming them to the predicted-probability scale. Eventually I would like to replicate the output of the effects package.
The example relies on UCLA's data on cancer patients.
library(lme4)
library(ggplot2)
library(plyr)
# mode of a vector
getmode <- function(v) {
  uniqv <- unique(v)
  uniqv[which.max(tabulate(match(v, uniqv)))]
}
# smallest and largest factor levels, as numeric
facmin <- function(n) {
  min(as.numeric(levels(n)))
}
facmax <- function(x) {
  max(as.numeric(levels(x)))
}
hdp <- read.csv("http://www.ats.ucla.edu/stat/data/hdp.csv")
head(hdp)
hdp <- hdp[complete.cases(hdp),]
hdp <- within(hdp, {
  Married <- factor(Married, levels = 0:1, labels = c("no", "yes"))
  DID <- factor(DID)
  HID <- factor(HID)
  CancerStage <- revalue(hdp$CancerStage, c("I" = "1", "II" = "2", "III" = "3", "IV" = "4"))
})
Up to here it is all data management, helper functions, and the packages I need.
m <- glmer(remission ~ CancerStage*LengthofStay + Experience +
(1 | DID), data = hdp, family = binomial(link="logit"))
summary(m)
This is the model. It takes a minute to fit and finishes with the following warning:
Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.0417259 (tol = 0.001, component 1)
Even though I am not quite sure whether I should worry about the warning, I use the estimates to plot the average marginal effects for the interaction of interest. First I prepare the dataset to be fed into the predict function, and then I calculate the marginal effects as well as the confidence intervals using the fixed-effects parameters.
newdat <- expand.grid(
  remission = getmode(hdp$remission),
  CancerStage = as.factor(seq(facmin(hdp$CancerStage), facmax(hdp$CancerStage), 1)),
  LengthofStay = seq(min(hdp$LengthofStay, na.rm = T), max(hdp$LengthofStay, na.rm = T), 1),
  Experience = mean(hdp$Experience, na.rm = T))
mm <- model.matrix(terms(m), newdat)
newdat$remission <- predict(m, newdat, re.form = NA)
pvar1 <- diag(mm %*% tcrossprod(vcov(m), mm))
cmult <- 1.96
## lower and upper CI
newdat <- data.frame(
  newdat,
  plo = newdat$remission - cmult * sqrt(pvar1),
  phi = newdat$remission + cmult * sqrt(pvar1))
I am fairly confident these are the correct estimates on the logit scale, but maybe I am wrong. Anyhow, this is the plot:
plot_remission <- ggplot(newdat, aes(LengthofStay,
                                     fill = factor(CancerStage), color = factor(CancerStage))) +
  geom_ribbon(aes(ymin = plo, ymax = phi), colour = NA, alpha = 0.2) +
  geom_line(aes(y = remission), size = 1.2) +
  xlab("Length of Stay") + xlim(c(2, 10)) +
  ylab("Probability of Remission") + ylim(c(0.0, 0.5)) +
  labs(colour = "Cancer Stage", fill = "Cancer Stage") +
  theme_minimal()
plot_remission
I think the y axis is now on the logit scale, but to make sense of it I would like to transform it to predicted probabilities. Based on Wikipedia, something like exp(value)/(exp(value)+1) should do the trick. While I could do newdat$remission <- exp(newdat$remission)/(exp(newdat$remission)+1), I am not sure how I should do this for the confidence intervals.
Eventually I would like to get to the same plot that the effects package generates. That is:
library(effects)
eff.m <- effect("CancerStage*LengthofStay", m, KR=T)
eff.m <- as.data.frame(eff.m)
plot_remission2 <- ggplot(eff.m, aes(LengthofStay,
                                     fill = factor(CancerStage), color = factor(CancerStage))) +
  geom_ribbon(aes(ymin = lower, ymax = upper), colour = NA, alpha = 0.2) +
  geom_line(aes(y = fit), size = 1.2) +
  xlab("Length of Stay") + xlim(c(2, 10)) +
  ylab("Probability of Remission") + ylim(c(0.0, 0.5)) +
  labs(colour = "Cancer Stage", fill = "Cancer Stage") +
  theme_minimal()
plot_remission2
Even though I could just use the effects package, it unfortunately fails on a lot of the models I had to run for my own work:
Error in model.matrix(mod2) %*% mod2$coefficients :
non-conformable arguments
In addition: Warning message:
In vcov.merMod(mod) :
variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Fixing that would require adjusting the estimation procedure, which at the moment I would like to avoid. Plus, I am also curious about what effects actually does here.
I would be grateful for any advice on how to tweak my initial syntax to get to predicted probabilities!
To obtain a result similar to that of the effect function in your question, you just have to back-transform both the predicted values and the boundaries of your confidence interval from the logit scale to the original scale with the transformation you provided: exp(x)/(1+exp(x)).
This transformation can be done in base R with the plogis function:
> a <- 1:5
> plogis(a)
[1] 0.7310586 0.8807971 0.9525741 0.9820138 0.9933071
> exp(a)/(1+exp(a))
[1] 0.7310586 0.8807971 0.9525741 0.9820138 0.9933071
So, using the proposal from @eipi10 to use ribbons for the confidence bands instead of the dotted lines (I also find this presentation more readable):
ggplot(newdat, aes(LengthofStay, fill = factor(CancerStage), color = factor(CancerStage))) +
  geom_ribbon(aes(ymin = plogis(plo), ymax = plogis(phi)), colour = NA, alpha = 0.2) +
  geom_line(aes(y = plogis(remission)), size = 1.2) +
  xlab("Length of Stay") + xlim(c(2, 10)) +
  ylab("Probability of Remission") + ylim(c(0.0, 0.5)) +
  labs(colour = "Cancer Stage", fill = "Cancer Stage") +
  theme_minimal()
The results are the same (with effects_3.1-2 and lme4_1.1-13):
> compare <- merge(newdat, eff.m)
> compare[, c("remission", "plo", "phi")] <-
+ sapply(compare[, c("remission", "plo", "phi")], plogis)
> head(compare)
CancerStage LengthofStay remission Experience plo phi fit se lower upper
1 1 10 0.20657613 17.64129 0.12473504 0.3223392 0.20657613 0.3074726 0.12473625 0.3223368
2 1 2 0.35920425 17.64129 0.27570456 0.4522040 0.35920425 0.1974744 0.27570598 0.4522022
3 1 4 0.31636299 17.64129 0.26572506 0.3717650 0.31636299 0.1254513 0.26572595 0.3717639
4 1 6 0.27642711 17.64129 0.22800277 0.3307300 0.27642711 0.1313108 0.22800360 0.3307290
5 1 8 0.23976445 17.64129 0.17324422 0.3218821 0.23976445 0.2085896 0.17324530 0.3218805
6 2 10 0.09957493 17.64129 0.06218598 0.1557113 0.09957493 0.2609519 0.06218653 0.1557101
> compare$remission-compare$fit
[1] 8.604228e-16 1.221245e-15 1.165734e-15 1.054712e-15 9.714451e-16 4.718448e-16 1.221245e-15 1.054712e-15 8.326673e-16
[10] 6.383782e-16 4.163336e-16 7.494005e-16 6.383782e-16 5.689893e-16 4.857226e-16 2.567391e-16 1.075529e-16 1.318390e-16
[19] 1.665335e-16 2.081668e-16
The differences between the confidence boundaries are larger but still very small:
> compare$plo-compare$lower
[1] -1.208997e-06 -1.420235e-06 -8.815678e-07 -8.324261e-07 -1.076016e-06 -5.481007e-07 -1.429258e-06 -8.133438e-07 -5.648821e-07
[10] -5.806940e-07 -5.364281e-07 -1.004792e-06 -6.314904e-07 -4.007381e-07 -4.847205e-07 -3.474783e-07 -1.398476e-07 -1.679746e-07
[19] -1.476577e-07 -2.332091e-07
But if I use the exact quantile of the normal distribution, cmult <- qnorm(0.975), instead of cmult <- 1.96, I obtain very small differences for these boundaries as well:
> compare$plo-compare$lower
[1] 5.828671e-16 9.992007e-16 9.992007e-16 9.436896e-16 7.771561e-16 3.053113e-16 9.992007e-16 8.604228e-16 6.938894e-16
[10] 5.134781e-16 2.289835e-16 4.718448e-16 4.857226e-16 4.440892e-16 3.469447e-16 1.006140e-16 3.382711e-17 6.765422e-17
[19] 1.214306e-16 1.283695e-16

Use all variables in a model with {plm} in R

Using different sources, I wrote a little function that, after a linear regression model, creates a table with standard errors, t statistics and p-values, clustered according to a group variable "cluster". The code is as follows:
cl1 <- function(modl, clust) {
  # modl is the regression model
  # clust is the cluster variable
  # id is a unique identifier within clusters
  library(plm)
  library(lmtest)
  # Get formula
  form <- formula(modl$call)
  # Get data frame
  dat <- eval(modl$call$data)
  dat$row <- rownames(dat)
  dat$id <- ave(dat$row, dat[[deparse(substitute(clust))]], FUN = seq_along)
  pdat <- pdata.frame(dat,
                      index = c("id", deparse(substitute(clust))),
                      drop.index = F, row.names = T)
  # Regression
  reg <- plm(form, data = pdat, model = "pooling")
  # Adjustments
  G <- length(unique(dat[, deparse(substitute(clust))]))
  N <- length(dat[, deparse(substitute(clust))])
  # Residual degrees of freedom, adjusted
  dfa <- (G/(G-1)) * (N-1)/reg$df.residual
  d.vcov <- dfa * vcovHC(reg, type = "HC0", cluster = "group", adjust = T)
  table <- coeftest(reg, vcov = d.vcov)
  # Output: se, t-stat and p-val
  cl1out <- data.frame(table[, 2:4])
  names(cl1out) <- c("se", "tstat", "pval")
  # Clustered VCE
  return(cl1out)
}
For a regression like reg1 <- lm(y ~ x1 + x2, data = df), calling the function as cl1(reg1, cluster) will work just fine.
However, if I use a model like reg2 <- lm(y ~ . , data=df), I will get the error message:
Error in terms.formula(object) : '.' in formula and no 'data' argument
After some tests, I am guessing that I can't use "." to signal "use all variables in the data frame" with {plm}. Is there a way I can do this with {plm}? Otherwise, any ideas on how I could improve my function so that it does not use {plm} and accepts all possible specifications of a linear model?
Indeed, you can't use the . notation in formulas with the plm package.
data("Produc", package = "plm")
plm(gsp ~ .,data=Produc)
Error in terms.formula(object) : '.' in formula and no 'data' argument
One idea is to expand the formula whenever it contains a dot. Here is a custom function that does the job (surely this is also implemented in other packages):
expand_formula <- function(form = "A ~ .", varNames = c("A", "B", "C")) {
  has_dot <- any(grepl('.', form, fixed = TRUE))
  if (has_dot) {
    ii <- intersect(as.character(as.formula(form)), varNames)
    varNames <- varNames[!grepl(paste0(ii, collapse = '|'), varNames)]
    exp <- paste0(varNames, collapse = '+')
    as.formula(gsub('.', exp, form, fixed = TRUE))
  } else {
    as.formula(form)
  }
}
Now test it:
(eform = expand_formula("gsp ~ .",names(Produc)))
# gsp ~ state + year + pcap + hwy + water + util + pc + emp + unemp
plm(eform,data=Produc)
# Model Formula: gsp ~ state + year + pcap + hwy + water + util + pc + emp + unemp
# <environment: 0x0000000014c3f3c0>
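As an aside, base R's reformulate can build the expanded formula directly, without a custom helper (every column other than the response becomes a regressor):
eform2 <- reformulate(setdiff(names(Produc), "gsp"), response = "gsp")
plm(eform2, data = Produc)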
