I use the margins library in R to calculate AMEs from a linear model. Normally I would use the stargazer library to create a table that I can use in an academic paper. Sadly, this is only possible for regression objects. Is there an efficient way, e.g. a library, to produce a similar table for the results returned by margins?
Thank you for your help!
Here is an example:
library(stargazer)
library(margins)
x <- lm(mpg ~ cyl * hp + wt, data = mtcars)
stargazer(x, out = 'foo.html', type = 'html') # This produces the desired outcome for the linear model
m <- margins(x)
summary(m) # I would like to create a similar table to the one above for these results
A rather hackish idea using texreg. Use the base model x and, via the override.* options of the texreg::*reg functions, put in the AMEs. If you first create tables for the base models without overriding anything, the AME tables will look similar.
The AMEs should be extended by a preceding NA for the intercept and a following NA for the interaction (just to get the AMEs congruent with the coefficients). To get rid of the GOFs, use readLines, identify the corresponding lines, and omit them. cat saves the code into a file in your working directory.
sm <- rbind(NA, summary(m), NA)
library(texreg)
ame <- list(l=x, custom.model.names="AME", override.coef=sm[, 2], digits=3,
override.se=sm[, 3], override.pvalues=sm[, 5], omit.coef="\\(|:",
caption="Average marginal effects")
## html version
ame.html <- do.call("htmlreg", ame)
tmp <- tempfile()
cat(ame.html, sep="\n", file=tmp)
ame.html <- readLines(tmp)
ame.html <- ame.html[-(grep("R<sup>2", ame.html)[1]:grep("<tfoot>", ame.html))]
cat(ame.html, sep="\n", file="ame.html")
## latex version
ame.latex <- do.call("texreg", ame)
tmp <- tempfile()
cat(ame.latex, sep="\n", file=tmp)
ame.latex <- readLines(tmp)
ame.latex <- ame.latex[-(grep("R\\$\\^2\\$", ame.latex)[1]:grep("multicolumn", ame.latex))]
cat(ame.latex, sep="\n", file="ame.tex")
## console version
ame.screen <- do.call("screenreg", ame)
tmp <- tempfile()
cat(ame.screen, sep="\n", file=tmp)
ame.screen <- readLines(tmp)
ame.screen <- ame.screen[-(grep("---", ame.screen)[2]:(grep("\\=", ame.screen)[2] - 1))]
cat(ame.screen, sep="\n")
Note: I tried to make the greps general, but you may need to adjust them according to your model.
Result
(showing console)
=====================
AME
---------------------
cyl 0.038
(0.600)
hp -0.046 **
(0.015)
wt -3.120 ***
(0.661)
=====================
*** p < 0.001; ** p < 0.01; * p < 0.05
Yes, stargazer can output your results if you first save them and then pass them to stargazer.
By default, if we pass a normal data frame, stargazer will produce a summary table, which is not what we want, so we set summary = FALSE.
df <- summary(m) # the margins summary is a plain data frame
stargazer(df, type = "text", summary = FALSE)
You can set the above to out = 'foo.html', type = 'html' for your output. The above returns:
==================================================
factor AME SE z p lower upper
--------------------------------------------------
1 cyl 0.038 0.600 0.064 0.949 -1.138 1.214
2 hp -0.046 0.015 -3.191 0.001 -0.075 -0.018
3 wt -3.120 0.661 -4.718 0.00000 -4.416 -1.824
--------------------------------------------------
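For completeness, the HTML variant mentioned above is the same call with the output options swapped in (foo.html is just an example file name):
stargazer(df, summary = FALSE, type = "html", out = "foo.html")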
I want to create a regression table with modelsummary (amazing package!!!) for multinomial logistic models run with nnet::multinom that includes clustered standard errors, as well as corresponding "significance" stars and summary statistics.
Unfortunately, I cannot do this automatically with the vcov parameter within modelsummary because the sandwich package that modelsummary uses does not support nnet objects.
I was able to calculate robust standard errors with a customized function originally developed by Daina Chiba and modified by Davenport, Soule, Armstrong (available from: https://journals.sagepub.com/doi/suppl/10.1177/0003122410395370/suppl_file/Davenport_online_supplement.pdf).
I was also able to include these standard errors in the modelsummary table instead of the original ones. Yet, neither the "significance" stars nor the model summary statistics adapt to these new standard errors. I think this is because they are calculated via broom::tidy automatically by modelsummary.
I would be thankful for any advice on how to include stars and summary statistics that correspond to the clustered standard errors and respective p-values.
Another, smaller question I have is whether there is any easy way of "spreading" the model statistics (e.g. number of observations or R2) such that they are centered below all response levels of the dependent variable and not just the first level. I am thinking about a multicolumn solution in LaTeX.
Here is some example code that includes how I calculate the standard errors. (Note, that the calculated clustered SEs are extremely small because they don't make sense with the example mtcars data. The only take-away is that the respective stars should correspond to the new SEs, and they don't).
# load data
dat_multinom <- mtcars
dat_multinom$cyl <- sprintf("Cyl: %s", dat_multinom$cyl)
# run multinomial logit model
mod <- nnet::multinom(cyl ~ mpg + wt + hp, data = dat_multinom, trace = FALSE)
# function to calculate clustered standard errors
mlogit.clust <- function(model, data, variable) {
  beta <- c(t(coef(model)))
  vcov <- vcov(model)
  k <- length(beta)
  n <- nrow(data)
  max_lev <- length(model$lev)
  xmat <- model.matrix(model)
  # u is the response residuals times the model matrix
  u <- lapply(2:max_lev, function(x)
    residuals(model, type = "response")[, x] * xmat)
  u <- do.call(cbind, u)
  m <- dim(table(data[, variable]))
  u.clust <- matrix(NA, nrow = m, ncol = k)
  fc <- factor(data[, variable])
  for (i in 1:k) {
    u.clust[, i] <- tapply(u[, i], fc, sum)
  }
  cl.vcov <- vcov %*% ((m / (m - 1)) * t(u.clust) %*% (u.clust)) %*% vcov
  return(cl.vcov)
}
# get coefficients, variance, clustered standard errors, and p values
b <- c(t(coef(mod)))
var <- mlogit.clust(mod,dat_multinom,"am")
se <- sqrt(diag(var))
p <- (1-pnorm(abs(b/se))) * 2
# modelsummary table with clustered standard errors and respective p-values
modelsummary(
mod,
statistic = "({round(se,3)}),[{round(p,3)}]",
shape = statistic ~ response,
stars = c('*' = .1, '**' = .05, '***' = .01)
)
# modelsummary table with original standard errors and respective p-values
modelsummary(
models = list(mod),
statistic = "({std.error}),[{p.value}]",
shape = statistic ~ response,
stars = c('*' = .1, '**' = .05, '***' = .01)
)
This code produces the following tables:
              Model 1 / Cyl: 6    Model 1 / Cyl: 8
(Intercept)    22.759*             -6.096***
               (0.286),[0]         (0.007),[0]
mpg           -38.699             -46.849
               (5.169),[0]         (6.101),[0]
wt             23.196              39.327
               (3.18),[0]          (4.434),[0]
hp              6.722               7.493
               (0.967),[0]         (1.039),[0]
Num.Obs.       32
R2              1.000
R2 Adj.         0.971
AIC            16.0
BIC            27.7
RMSE            0.00
Note: * p < 0.1, ** p < 0.05, *** p < 0.01
              Model 1 / Cyl: 6    Model 1 / Cyl: 8
(Intercept)    22.759*             -6.096***
               (11.652),[0.063]    (0.371),[0.000]
mpg           -38.699             -46.849
               (279.421),[0.891]   (448.578),[0.918]
wt             23.196              39.327
               (210.902),[0.913]   (521.865),[0.941]
hp              6.722               7.493
               (55.739),[0.905]    (72.367),[0.918]
Num.Obs.       32
R2              1.000
R2 Adj.         0.971
AIC            16.0
BIC            27.7
RMSE            0.00
Note: * p < 0.1, ** p < 0.05, *** p < 0.01
This is not super easy at the moment; I just opened a GitHub issue to track progress. This should be easy to improve, however, so I expect changes to be published in the next release of the package.
In the meantime, you can install the dev version of modelsummary:
library(remotes)
install_github("vincentarelbundock/modelsummary")
Then, you can use the tidy_custom mechanism described here to override standard errors and p-values manually:
library(modelsummary)
tidy_custom.multinom <- function(x, ...) {
  b <- coef(x)
  var <- mlogit.clust(x, dat_multinom, "am")
  out <- data.frame(
    term = rep(colnames(b), times = nrow(b)),
    response = rep(row.names(b), each = ncol(b)),
    estimate = c(t(b)),
    std.error = sqrt(diag(var))
  )
  out$p.value <- (1 - pnorm(abs(out$estimate / out$std.error))) * 2
  row.names(out) <- NULL
  return(out)
}
modelsummary(
mod,
output = "markdown",
shape = term ~ model + response,
stars = TRUE)
              Model 1 / Cyl: 6    Model 1 / Cyl: 8
(Intercept)    22.759***           -6.096***
               (0.286)             (0.007)
mpg           -38.699***          -46.849***
               (5.169)             (6.101)
wt             23.196***           39.327***
               (3.180)             (4.434)
hp              6.722***            7.493***
               (0.967)             (1.039)
Num.Obs.       32
R2              1.000
R2 Adj.         0.971
AIC            16.0
BIC            27.7
RMSE            0.00
I have been stuck with the following problem for some time now, and I am slowly getting desperate because I cannot find a solution. I am facing the following issue:
When producing HTML regression result tables with Stargazer, the notes section shows the significance cutoffs as follows:
*p**p***p<0.01
However, I would prefer a layout similar to the following:
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’
I want to get this by extracting the cutoffs dynamically and combining with fixed character vectors. The Stargazer manual says:
> a character vector containing notes to be included below the table.
> The character strings can include special substrings that will be
> replaced by the corresponding cutoffs for statistical significance
> 'stars': [*], [**], and [***] will be replaced by the cutoffs, in
> percentage terms, for one, two and three 'stars,' respectively (e.g.,
> 10, 5, and 1). Similarly, [0.*], [0.**] and [0.***] will be replaced
> by the numeric value of cutoffs for one, two and three 'stars' (e.g.,
> 0.1, 0.05, and 0.01). [.*], [.**] and [.***] will omit the leading zeros (e.g., .1, .05, .01).
I have now tried all possible combinations, but I always fail: I either end up with R outputting, e.g., [***] literally in the HTML file, or it throws an error.
Could you help me figure out the right code to combine fixed string values in the notes with dynamic cutoffs?
This is an interesting problem. After inspecting the code, I think the issue arises because the code that replaces [***], [**] etc. with the appropriate cutoffs comes before the custom notes vector is applied (when it should come after). So, the solution would be to re-arrange the code so that it comes in the correct order. This requires a bit of code surgery. The following works for me; I am running stargazer_5.2:
library(stargazer)
## Create new stargazer.wrap() to rearrange order of blocks
x <- capture.output(stargazer:::.stargazer.wrap)
idx <- c(grep("for \\(i in 1:length\\(\\.format\\.cutoffs\\)\\)", x)[2],
grep("if \\(!is\\.null\\(notes\\)\\)", x),
grep("if \\(!is\\.null\\(notes\\.align\\)\\)", x)[2])
eval(parse(text = paste0("stargazer.wrap <- ", paste(x[c(1:(idx[1] - 1),
(idx[2]):(idx[3] - 1),
idx[1]:(idx[2] - 1),
idx[3]:(length(x) - 1))], collapse = "\n"))))
## Create a new stargazer.() that uses our modified stargazer.wrap() function
x <- capture.output(stargazer)
x <- gsub(".stargazer.wrap", "stargazer.wrap", x)
eval(parse(text = paste0("stargazer. <- ", paste(x[-length(x)], collapse = "\n"))))
Results: First, before:
stargazer(lm(mpg ~ wt, mtcars),
type = "text",
notes.append = FALSE,
notes = c("Signif. codes: 0 $***$ [.***] $**$ [.**] $*$ [.*]"))
# ===============================================================
# Dependent variable:
# -------------------------------------------
# mpg
# ---------------------------------------------------------------
# wt -5.344***
# (0.559)
# Constant 37.285***
# (1.878)
# ---------------------------------------------------------------
# Observations 32
# R2 0.753
# Adjusted R2 0.745
# Residual Std. Error 3.046 (df = 30)
# F Statistic 91.375*** (df = 1; 30)
# ===============================================================
# Note: Signif. codes: 0 *** [.***] ** [.**] * [.*]
We have the problem, as you point out. Now, using our modified stargazer.() function (note the dot at the end):
stargazer.(lm(mpg ~ wt, mtcars),
type = "text",
notes.append = FALSE,
notes = c("Signif. codes: 0 $***$ [.***] $**$ [.**] $*$ [.*]"))
# ========================================================
# Dependent variable:
# ------------------------------------
# mpg
# --------------------------------------------------------
# wt -5.344***
# (0.559)
# Constant 37.285***
# (1.878)
# --------------------------------------------------------
# Observations 32
# R2 0.753
# Adjusted R2 0.745
# Residual Std. Error 3.046 (df = 30)
# F Statistic 91.375*** (df = 1; 30)
# ========================================================
# Note: Signif. codes: 0 *** .01 ** .05 * .1
I'm trying to reproduce this Stata example and move from stargazer to texreg. The data is available here.
To run the regression and get the SEs, I run this code:
library(readstata13)
library(sandwich)
cluster_se <- function(model_result, data, cluster){
  model_variables <- intersect(colnames(data), c(colnames(model_result$model), cluster))
  model_rows <- as.integer(rownames(model_result$model))
  data <- data[model_rows, model_variables]
  cl <- data[[cluster]]
  M <- length(unique(cl))
  N <- nrow(data)
  K <- model_result$rank
  dfc <- (M/(M-1))*((N-1)/(N-K))
  uj <- apply(estfun(model_result), 2, function(x) tapply(x, cl, sum))
  vcovCL <- dfc*sandwich(model_result, meat=crossprod(uj)/N)
  sqrt(diag(vcovCL))
}
elemapi2 <- read.dta13(file = 'elemapi2.dta')
lm1 <- lm(formula = api00 ~ acs_k3 + acs_46 + full + enroll, data = elemapi2)
se.lm1 <- cluster_se(model_result = lm1, data = elemapi2, cluster = "dnum")
stargazer::stargazer(lm1, type = "text", style = "aer", se = list(se.lm1))
==========================================================
api00
----------------------------------------------------------
acs_k3 6.954
(6.901)
acs_46 5.966**
(2.531)
full 4.668***
(0.703)
enroll -0.106**
(0.043)
Constant -5.200
(121.786)
Observations 395
R2 0.385
Adjusted R2 0.379
Residual Std. Error 112.198 (df = 390)
F Statistic 61.006*** (df = 4; 390)
----------------------------------------------------------
Notes: ***Significant at the 1 percent level.
**Significant at the 5 percent level.
*Significant at the 10 percent level.
texreg produces this:
texreg::screenreg(lm1, override.se=list(se.lm1))
========================
Model 1
------------------------
(Intercept) -5.20
(121.79)
acs_k3 6.95
(6.90)
acs_46 5.97 ***
(2.53)
full 4.67 ***
(0.70)
enroll -0.11 ***
(0.04)
------------------------
R^2 0.38
Adj. R^2 0.38
Num. obs. 395
RMSE 112.20
========================
How can I fix the p-values?
Robust Standard Errors with texreg are easy: just pass the coeftest directly!
This has become much easier since the question was last answered: it appears you can now just pass the coeftest with the desired variance-covariance matrix directly. Downside: you lose the goodness-of-fit statistics (such as R^2 and the number of observations), but depending on your needs, this may not be a big problem.
How to include robust standard errors with texreg
screenreg(list(reg1, coeftest(reg1, vcov = vcovHC(reg1, 'HC1'))),
          custom.model.names = c('Standard Standard Errors', 'Robust Standard Errors'))
=============================================================
Standard Standard Errors Robust Standard Errors
-------------------------------------------------------------
(Intercept) -192.89 *** -192.89 *
(55.59) (75.38)
x 2.84 ** 2.84 **
(0.96) (1.04)
-------------------------------------------------------------
R^2 0.08
Adj. R^2 0.07
Num. obs. 100
RMSE 275.88
=============================================================
*** p < 0.001, ** p < 0.01, * p < 0.05
To generate this example, I created a dataframe with heteroscedasticity, see below for full runnable sample code:
require(sandwich);
require(texreg);
set.seed(1234)
df <- data.frame(x = 1:100);
df$y <- 1 + 0.5*df$x + 5*100:1*rnorm(100)
reg1 <- lm(y ~ x, data = df)
First, notice that your usage of as.integer is dangerous and likely to cause problems once you use data with non-numeric rownames. For instance, with the built-in dataset mtcars, whose rownames are car names, it would coerce all rownames to NA and the function would no longer work.
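A quick illustration of that point (a small sketch using a throw-away model on mtcars):
lm0 <- lm(mpg ~ wt, data = mtcars)
head(as.integer(rownames(lm0$model))) # rownames are car names, so coercion yields NA with a warning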
To your actual question: you can provide custom p-values to texreg, which means that you need to compute the corresponding p-values. To achieve this, you could compute the variance-covariance matrix, compute the test statistics, and then compute the p-values manually, or you could just compute the variance-covariance matrix and supply it to e.g. coeftest, and then extract the standard errors and p-values from there. Since I am unwilling to download any data, I use the mtcars data for the following:
library(sandwich)
library(lmtest)
library(texreg)
cluster_se <- function(model_result, data, cluster){
  model_variables <- intersect(colnames(data), c(colnames(model_result$model), cluster))
  model_rows <- rownames(model_result$model) # changed to work with mtcars; not tested with other data
  data <- data[model_rows, model_variables]
  cl <- data[[cluster]]
  M <- length(unique(cl))
  N <- nrow(data)
  K <- model_result$rank
  dfc <- (M/(M-1))*((N-1)/(N-K))
  uj <- apply(estfun(model_result), 2, function(x) tapply(x, cl, sum))
  # return the clustered variance-covariance matrix
  vcovCL <- dfc*sandwich(model_result, meat=crossprod(uj)/N)
  vcovCL
}
lm1 <- lm(formula = mpg ~ cyl + disp, data = mtcars)
vcov.lm1 <- cluster_se(model_result = lm1, data = mtcars, cluster = "carb")
standard.errors <- coeftest(lm1, vcov. = vcov.lm1)[,2]
p.values <- coeftest(lm1, vcov. = vcov.lm1)[,4]
texreg::screenreg(lm1, override.se=standard.errors, override.p = p.values)
And just for completeness sake, let's do it manually:
t.stats <- abs(coefficients(lm1) / sqrt(diag(vcov.lm1)))
t.stats
(Intercept) cyl disp
38.681699 5.365107 3.745143
These are your t-statistics using the cluster-robust standard errors. The degrees of freedom are stored in lm1$df.residual, and using the built-in functions for the t-distribution (see e.g. ?pt), we get:
manual.p <- 2*pt(-t.stats, df=lm1$df.residual)
manual.p
(Intercept) cyl disp
1.648628e-26 9.197470e-06 7.954759e-04
Here, pt is the distribution function, and we want to compute the probability of observing a statistic at least as extreme as the one we observe. Since we are testing two-sided and the density is symmetric, we first take the left tail using the negative value and then double it. This is identical to using 2*(1-pt(t.stats, df=lm1$df.residual)). Now, just to check that this yields the same result as before:
all.equal(p.values, manual.p)
[1] TRUE
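And, to confirm the equivalent right-tail formulation mentioned above:
all.equal(manual.p, 2*(1 - pt(t.stats, df = lm1$df.residual)))
[1] TRUE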
I have multiple regression models in R, which I want to summarize in a nice table format that could be included in the publication. I have all the results ready, but couldn't find a way to export them, and it wouldn't be efficient to do this by hand as I need about 20 tables.
So, one of my models is:
felm1=felm(ROA~BC+size+sizesq+age | stateyeard+industryyeard, data=data)
And I'm getting desired summary in R.
However, what I want for my paper is to have only the following in the table: the estimates with the t-statistic in brackets, and also the significance codes (*, **, etc.).
Is there a way to create any type of table which will include the above? Lyx, excel, word, .rft, anything really.
Even better, another model that I have is (with some variables different):
felm2=felm(ROA~BC+BCHHI+size+sizesq+age | stateyeard+industryyeard, data=data)
could I have summary of the two regressions combined in one table (where same variables would be on the same row, and others would produce empty cells)?
Thank you in advance; I'll appreciate any attempt to help.
Here is a reproducible example:
x<-rnorm(1:20)
y<-(1:20)/10+x
summary(lm(y~x))
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.0539     0.1368   7.702 4.19e-07 ***
x             1.0257     0.1156   8.869 5.48e-08 ***
This is the result in R. I want the result in a table to look like
(Intercept) 1.0539*** (7.702)
x           1.0257*** (8.869)
Is this possible?
The broom package is very good for making regression tables nice for export. Results can then be exported to CSV for tarting up in Excel, or one can use R Markdown and the kable function from knitr to make Word documents (or LaTeX).
require(broom) # for tidy()
require(knitr) # for kable()
x<-rnorm(1:20)
y<-(1:20)/10+x
model <- lm(y~x)
out <- tidy(model)
out
term estimate std.error statistic p.value
1 (Intercept) 1.036583 0.1390777 7.453261 6.615701e-07
2 x 1.055189 0.1329951 7.934044 2.756835e-07
kable(out)
|term | estimate| std.error| statistic| p.value|
|:-----------|--------:|---------:|---------:|-------:|
|(Intercept) | 1.036583| 0.1390777| 7.453261| 7e-07|
|x | 1.055189| 0.1329951| 7.934044| 3e-07|
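Since the tidied output is a plain data frame, exporting it to CSV for further polishing in Excel (as mentioned above) is a one-liner; the file name is just an example:
write.csv(out, "model_results.csv", row.names = FALSE)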
I should mention that I now use the excellent pixiedust for exporting regression results as it allows much finer control of the output, allowing the user to do more in R and less in any other package.
See the vignette on CRAN.
library(dplyr) # for pipe (%>%) command
library(pixiedust)
dust(model) %>%
sprinkle(cols = c("estimate", "std.error", "statistic"), round = 2) %>%
sprinkle(cols = "p.value", fn = quote(pvalString(value))) %>%
sprinkle_colnames("Term", "Coefficient", "SE", "T-statistic",
"P-value")
Term Coefficient SE T-statistic P-value
1 (Intercept) 1.08 0.14 7.44 < 0.001
2 x 0.93 0.14 6.65 < 0.001
For a text table, try this:
x<-rnorm(1:20)
y<-(1:20)/10+x
result <- lm(y~x)
library(stargazer)
stargazer(result, type = "text")
results in...
===============================================
Dependent variable:
---------------------------
y
-----------------------------------------------
x 0.854***
(0.108)
Constant 1.041***
(0.130)
-----------------------------------------------
Observations 20
R2 0.777
Adjusted R2 0.765
Residual Std. Error 0.579 (df = 18)
F Statistic 62.680*** (df = 1; 18)
===============================================
Note: *p<0.1; **p<0.05; ***p<0.01
For multiple models in one table, just do
stargazer(result, result, type = "text")
And, just for the sake of producing exactly the output that was asked for:
addStars <- function(coeffs) {
  fb <- format(coeffs[, 1], digits = 4)
  s <- cut(coeffs[, 4],
           breaks = c(-1, 0.01, 0.05, 0.1, 1),
           labels = c("***", "**", "*", ""))
  paste0(fb, s)
}
addPar <- function(coeffs) {
  se <- format(coeffs[, 2], digits = 3)
  paste0("(", se, ")")
}
textTable <- function(result) {
  coeffs <- summary(result)$coefficients # the full coefficient matrix, not just the estimates
  lab <- rownames(coeffs)
  sb <- addStars(coeffs)
  pse <- addPar(coeffs)
  out <- cbind(lab, sb, pse)
  colnames(out) <- NULL
  out
}
print(textTable(result), quote = FALSE)
You can use xtable::xtable, Hmisc::latex, Gmisc::htmltable, etc. once you have a text table. Someone posted a link in the comments. :)
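For example, a minimal sketch with xtable, assuming the textTable() helper defined above (the column labels here are my own choice, not part of the original answer):
library(xtable)
tab <- textTable(result)
colnames(tab) <- c("Term", "Estimate", "SE") # hypothetical header labels for the LaTeX table
print(xtable(tab), include.rownames = FALSE)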
Looking to create an AIC selection table for a publication in LaTeX format, but I cannot seem to get the form I want. I have googled this to death and was VERY surprised I couldn't find an answer; I've found answers to much more obscure R questions.
Below is a bit of code, a few tables I made that I'm not overly keen on, and at the bottom is the general structure I'd like to make, but in a nice LaTeX table format such as the stargazer package produces.
I tried to use extra arguments for both packages to attain what I wanted but was unsuccessful.
##Create dummy variables
a<-1:10
b<-c(10:3,1,2)
c<-c(1,4,5,3,7,3,6,2,4,5)
##Create df
df<-data.frame(a,b,c)
##Build models
m1<-lm(a~b,data=df)
summary(m1)
m2<-lm(a~c,data=df)
m3<-lm(a~b+c,data=df)
m4<-lm(a~b*c,data=df)
##View list of AIC values
AIC(m1,m2,m3,m4)
########################CREATE AIC SELECTION TABLE
##Using MuMIn Package
library(MuMIn)
modelTABLE <- model.sel(m1,m2,m3,m4)
View(modelTABLE) ## No AIC values, just AICc, no R-squared, and model name (i.e., a~b) not present
##Using stargazer Package
library(stargazer)
test<-stargazer(m1,m2,m3,m4 ,
type = "text",
title="Regression Results",
align=TRUE,
style="default",
dep.var.labels.include=TRUE,
flip=FALSE
## ,out="models.htm"
)
View(test) ##More of a table depicting individual covariate attributes, bottom of table doesn't have AIC
###Would like a table similar to the following
Model ModelName df logLik AIC delta AICweight R2
m1 a ~ b 3 -6.111801 18.2 0 0.95 0.976
m3 a ~ b + c 4 -5.993613 20 1.8 0.05 0.976
m4 a ~ b * c 5 -5.784843 21.6 3.4 0.00 0.977
m2 a ~ c 3 -24.386821 54.8 36.6 0.00 0.068
The model.sel result is a data.frame, so you can modify it (add model names, round numbers, etc.) and export it to LaTeX using e.g. latex from the Hmisc package.
# include R^2:
R2 <- function(x) summary(x)$r.squared
ms <- model.sel(m1, m2, m3, m4, extra = "R2")
i <- 1:4 # indices of columns with model terms
response <- "a"
res <- as.data.frame(ms)
v <- names(ms)[i]
v[v == "(Intercept)"] <- 1
# create formula-like model names:
mnames <- apply(res[, i], 1, function(x)
deparse(simplify.formula(reformulate(v[!is.na(x)], response = response))))
## OR
# mnames <- sapply(attr(ms, "modelList"), function(x) deparse(formula(x)))
res <- cbind(model = mnames, res[, -i])
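# (optional, hedged sketch) round the numeric columns before export, as mentioned above:
res[] <- lapply(res, function(z) if (is.numeric(z)) round(z, 3) else z)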
Hmisc::latex(res, file = "")