Stargazer: omit stars for constant only

Sometimes it's tacky to include statistical significance stars for the constant term when reporting the results of a regression. Is it possible to configure stargazer to keep stars for the regressors, but not for the constant term?
library(stargazer)
fit <- lm(rating ~ complaints, data = attitude)
stargazer(fit)

Basically, the answer turned out to be stargazer's p argument. From there, I just needed to write a (series of) function(s) that takes a list of regression fits and returns a list of vectors of p-values. I then manually set the p-value of each intercept to 1, and presto: no tacky stars on the intercept. Plus it's reproducible, with no manual LaTeX editing!
# compute HC2 heteroskedasticity-robust coefficient estimates
commarobust <- function(fit){
  require(sandwich)
  require(lmtest)
  coeftest(fit, vcovHC(fit, type = "HC2"))
}

# extract the robust p-values and force the intercept's p-value to 1
getrobustps <- function(fit){
  robustfit <- commarobust(fit)
  ps <- robustfit[, 4]
  ps["(Intercept)"] <- 1
  return(ps)
}

# apply to a list of fits, returning a list of p-value vectors
makerobustpslist <- function(fitlist){
  return(lapply(fitlist, FUN = getrobustps))
}
Then in the stargazer call:
stargazer(fit_1, fit_2, fit_3, fit_4, fit_5,
          p = makerobustpslist(list(fit_1, fit_2, fit_3, fit_4, fit_5)))
Works like a charm.

You could alternatively use the broom package to convert the fit results to a data frame, and then add stars to your heart's content:
library("broom")
mod <- lm(mpg ~ wt + qsec, data = mtcars)
DF <- tidy(mod)
DF$stars <- c("", "***", "***") # inspect and add manually, or automate
The xtable package can then be used to format the result for LaTeX (or another output format).
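If you would rather automate the stars than type them in, here is a sketch using base R's symnum before handing the data frame to xtable (the cutoffs follow the summary.lm convention; adjust to taste):
library(xtable)
# map p-values to stars instead of filling them in by hand
DF$stars <- as.character(symnum(DF$p.value, corr = FALSE, na = FALSE,
                                cutpoints = c(0, 0.001, 0.01, 0.05, 1),
                                symbols = c("***", "**", "*", "")))
# drop the intercept's stars, as in the stargazer approach above
DF$stars[DF$term == "(Intercept)"] <- ""
print(xtable(DF), include.rownames = FALSE)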

Related

Is there any way to export a feols model using stargazer in R?

I ran a bunch of models with feols (from the fixest package), but I am having trouble exporting them to a table using stargazer. Any suggestions on how I can do that?
It does seem like I can use the etable function, but I want to use stargazer because I want to add a couple of lines of notes to my table and format it the way I want (e.g. using the table.layout argument in stargazer).
I do not believe that stargazer supports this kind of model. However, it is supported out-of-the-box by the modelsummary package. This package allows you to add notes, and the tables it produces are extremely customizable, because modelsummary supports several backend packages to create and customize tables: kableExtra, gt, flextable, huxtable. Tables can also be exported to many formats, including HTML, Markdown, LaTeX, JPG, data.frame, or PDF.
(Disclaimer: I am the author of modelsummary.)
Here is an example with a simple fixed-effects model:
library(fixest)
library(modelsummary)
# create a toy dataset
base <- iris
names(base) <- c("y", "x1", "x_endo_1", "x_inst_1", "fe")
base$x_inst_2 <- 0.2 * base$y + 0.2 * base$x_endo_1 + rnorm(150, sd = 0.5)
base$x_endo_2 <- 0.2 * base$y - 0.2 * base$x_inst_1 + rnorm(150, sd = 0.5)
# estimate
mod <- feols(y ~ x1 | fe, data = base)
# table
modelsummary(mod)
You can also use the various formula tools that fixest offers, such as stepwise inclusion of covariates:
mod <- feols(y ~ sw(x1, x_endo_1, x_inst_1) | fe, data = base)
modelsummary(mod)
And modelsummary also supports instrumental variable estimation. This will show both stages side-by-side:
mod <- feols(y ~ x1 | fe | x_endo_1 + x_endo_2 ~ x_inst_1 + x_inst_2, data = base)
modelsummary(summary(mod, stage = 1:2))
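Since the question is specifically about adding notes, here is a minimal sketch using modelsummary's notes and output arguments (the note text and file name are just illustrative):
# add a couple of lines of notes and write the table to a LaTeX file
modelsummary(mod,
             notes = list("Note: toy example based on the iris data.",
                          "Standard errors in parentheses."),
             output = "msummary_table.tex")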
You may also use the etable function from fixest to export output tables:
library(fixest)
data("mtcars")
# models
model1 <- feols(mpg ~ cyl + disp, data=mtcars)
model2 <- feols(mpg ~ cyl + hp, data=mtcars)
# data.frame output
df <- etable(list(model1, model2), tex=FALSE)
# Latex output
etable(list(model1, model2), tex=TRUE)
You can also save the output locally with the file parameter.
etable(list(model1, model2), tex = FALSE, file = 'tt.txt')
As of fixest 0.10.2, table notes are now supported in etable.
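For example, a minimal sketch of adding a note and writing the LaTeX table to disk (the notes argument per recent fixest versions; note text and file name are illustrative):
etable(list(model1, model2), tex = TRUE,
       notes = "Standard errors in parentheses.",
       file = "etable_models.tex")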

How to extract and modify individual p.table components from GAM fits in R

I have used a for loop to fit a series of GAMs in R that regress several dependent variables on the same set of independent variables. I want to extract the p.table values from each model, but when I print the p.table objects from my list of model summaries, the p-values are absurdly long (~100 digits), and I cannot figure out how to apply a function to just that component while still printing the whole p.table.
Here is an example with mtcars. These model results are obviously meaningless; in this case, the p-values are printing fine, but in my data the p-values are way too long, and I want to truncate them in the printed output using, e.g., format.pval.
data(mtcars)
library(mgcv)
y_vars <- c("qsec", "wt", "hp")
models <- list()
for (i in y_vars){
  models[[i]] <- gam(as.formula(paste(i, "~ cyl + s(drat) + am + gear + carb")),
                     method = "REML", data = mtcars)
}
models_summ <- lapply(models, summary)
lapply(models_summ, '[[', 'p.table')
I ended up assigning the output to a data frame and operating with it that way:
df <- data.frame(lapply(models_summ, '[[', 'p.table'))
If anyone has more elegant solutions, I would love to see them.
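One arguably cleaner option is a sketch using base R's printCoefmat (the function R itself uses to display coefficient tables), which compacts the printed p-values without altering the stored ones:
# print each parametric coefficient table with formatted, truncated p-values
invisible(lapply(models_summ, function(s)
  printCoefmat(s$p.table, digits = 3, signif.stars = FALSE,
               P.values = TRUE, has.Pvalue = TRUE)))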

How to create a table of gravity models side by side, using the gravity package in R

I would like to create a table like the tables from the stargazer package.
But I am using the gravity package to estimate gravity models, and this package isn't supported by stargazer yet.
Do you have an idea how to create a similar table with 3-5 models side by side for better comparison?
The output should look like a standard stargazer-style regression table, just with gravity models from the gravity package in R.
Please provide an example of a model object created by the gravity package.
Alternatively, I will show one approach that can be used: stargazer is really nice, and you CAN create a table like the one above even with model objects that are not yet supported. For example, let's pretend that quantile regression models were not supported by stargazer (even though they are).
The trick is that you need to be able to obtain the coefficients and standard errors, e.g. as vectors. Then supply stargazer with a model object that IS supported (e.g. lm) as a template, and mechanically specify which coefficients and standard errors it should use:
library(stargazer)
library(quantreg)

df <- mtcars

model1 <- lm(hp ~ factor(gear) + qsec + disp, data = df)
quantreg <- rq(hp ~ factor(gear) + qsec + disp, data = df)
summary_qr <- summary(quantreg, se = "boot")

# coefficients and standard errors for the quantile regression, as vectors
coef_qr <- summary_qr$coefficients[, 1]
se_qr <- summary_qr$coefficients[, 2]

stargazer(model1, model1,
          coef = list(NULL, coef_qr),
          se = list(NULL, se_qr),
          type = "text")

Cluster-Robust Standard Errors in Stargazer

Does anyone know how to get stargazer to display clustered SEs for lm models? (And the corresponding F-test?) If possible, I'd like to follow an approach similar to computing heteroskedasticity-robust SEs with sandwich and popping them into stargazer as in http://jakeruss.com/cheatsheets/stargazer.html#robust-standard-errors-replicating-statas-robust-option.
I'm using lm to get my regression models, and I'm clustering by firm (a factor variable that I'm not including in the regression models). I also have a bunch of NA values, which makes me think multiwayvcov is going to be the best package (see the bottom of landroni's answer here - Double clustered standard errors for panel data - and also https://sites.google.com/site/npgraham1/research/code)? Note that I do not want to use plm.
Edit: I think I found a solution using the multiwayvcov package...
library(lmtest)        # load packages
library(multiwayvcov)
library(stargazer)
data(petersen)         # load data (shipped with multiwayvcov)
petersen$z <- petersen$y + 0.35                 # create new variable
ols1 <- lm(y ~ x, data = petersen)              # create models
ols2 <- lm(y ~ x + z, data = petersen)
cl.cov1 <- cluster.vcov(ols1, petersen$firmid)  # cluster-robust vcov for ols1
cl.robust.se.1 <- sqrt(diag(cl.cov1))
cl.wald1 <- waldtest(ols1, vcov = cl.cov1)
cl.cov2 <- cluster.vcov(ols2, petersen$firmid)  # cluster-robust vcov for ols2
cl.robust.se.2 <- sqrt(diag(cl.cov2))
cl.wald2 <- waldtest(ols2, vcov = cl.cov2)
stargazer(ols1, ols2, se = list(cl.robust.se.1, cl.robust.se.2), type = "text")  # create table in stargazer
The only downside of this approach is that you have to manually re-enter the F statistics from the waldtest() output for each model.
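One way to avoid re-typing them is a sketch along these lines: pull the F statistics out of the waldtest() objects computed above and pass them to stargazer via add.lines, suppressing the default (non-robust) F statistic with omit.stat (the row label and rounding are arbitrary):
f_stats <- c("Clustered Wald F",
             round(cl.wald1$F[2], 2),
             round(cl.wald2$F[2], 2))
stargazer(ols1, ols2,
          se = list(cl.robust.se.1, cl.robust.se.2),
          add.lines = list(f_stats),
          omit.stat = "f",
          type = "text")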
Using the packages lmtest and multiwayvcov causes a lot of unnecessary overhead. The easiest way to compute clustered standard errors in R is the modified summary() function, which adds an additional cluster parameter to the conventional summary(). The following post describes how to use it to compute clustered standard errors in R:
https://economictheoryblog.com/2016/12/13/clustered-standard-errors-in-r/
You can easily use this summary function to obtain clustered standard errors and add them to the stargazer output. Based on your example, you could simply use the following code:
# estimate models (assumes the modified summary() from the post above has been
# sourced, and that "cluster_id" is the name of your cluster variable)
ols1 <- lm(y ~ x)
# summary with cluster-robust SEs
summary(ols1, cluster = "cluster_id")
# create table in stargazer
stargazer(ols1, se = list(coef(summary(ols1, cluster = "cluster_id"))[, 2]), type = "text")
I would recommend the lfe package, which is much more powerful than lm. You can easily specify the cluster in the regression model:
library(lfe)
ols1 <- felm(y ~ x + z | 0 | 0 | firmid, data = petersen)
summary(ols1)
stargazer(ols1, type = "html")
The clustered standard errors will be produced automatically, and stargazer will report them accordingly.
By the way (allow me to do more marketing): for micro-econometric analysis, felm is highly recommended. You can specify fixed effects and IVs easily with felm. The syntax is:
ols1 <- felm(y ~ x + z | FixedEffect1 + FixedEffect2 | IV | Cluster, data = Data)
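For instance, a minimal sketch with the petersen data (and the z variable) created in the answer above, adding a firm fixed effect and clustering by firm; there is no IV here, so that slot stays 0:
library(lfe)
library(stargazer)
# firm fixed effects, no IV (hence the 0), and clustering by firm
fe_mod <- felm(y ~ x + z | factor(firmid) | 0 | firmid, data = petersen)
summary(fe_mod)                  # reports firm-clustered standard errors
stargazer(fe_mod, type = "text")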

How to update summary when using NeweyWest?

I am using NeweyWest standard errors to correct my lm() / dynlm() output. E.g.:
fit1 <- dynlm(depvar ~ covariate1 + covariate2)
coeftest(fit1, vcov = NeweyWest)
Coefficients are displayed the way I'd like, but unfortunately I lose all of the regression output information (R-squared, F-test, etc.) that summary() displays. So I wonder how I can display the robust standard errors and all the other information in the same summary output.
Is there a way to either get everything in one call or to overwrite the 'old' estimates?
I bet I just missed something badly, but that is really relevant when sweaving the output.
Test example, taken from ?dynlm.
require(dynlm)
require(sandwich)
data("UKDriverDeaths", package = "datasets")
uk <- log10(UKDriverDeaths)
dfm <- dynlm(uk ~ L(uk, 1) + L(uk, 12))
#shows R-squared, etc.
summary(dfm)
#no such information
coeftest(dfm, vcov = NeweyWest)
By the way: the same applies to vcovHC.
coefficients is just a matrix in the lm (or dynlm) summary object, so all you need to do is unclass the coeftest() output.
library(dynlm)
library(sandwich)
library(lmtest)
temp.lm <- dynlm(runif(100) ~ rnorm(100))
temp.summ <- summary(temp.lm)
temp.summ$coefficients <- unclass(coeftest(temp.lm, vcov. = NeweyWest))
If you specify the covariance matrix, the F statistic changes and you need to compute it again using waldtest(), right? Because
temp.summ$coefficients <- unclass(coeftest(temp.lm, vcov. = NeweyWest))
only overwrites the coefficients.
Yes: the F statistic changes, but R^2 remains the same, since it does not depend on the coefficient covariance matrix.
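For completeness, a minimal sketch of that recomputation (when only one model is passed, waldtest() compares it against an intercept-only model, giving the overall F test):
# overall F test of the regressors under the NeweyWest covariance estimator
waldtest(temp.lm, vcov = NeweyWest)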
