R stargazer: separate columns with a vertical line

I want to separate two columns in a stargazer regression table with a vertical line. So far I have not found a suitable solution, so I am asking my question here.
Here is example code to create a stargazer table with two columns:
library(stargazer)
mod  <- lm(data = iris, Sepal.Length ~ Species)
mod1 <- lm(data = iris, Sepal.Length ~ Petal.Width + Species)
stargazer(mod, mod1, type = "latex")
RMarkdown gives me this output:
But I would like to have the two columns separated by a vertical line:
Can anyone help me with this issue?
I assume that you have to use LaTeX code to change the output; I have not found an option for this in stargazer.
Thanks in advance!

A suggestion:
mod  <- lm(data = iris, Sepal.Length ~ Species)
mod1 <- lm(data = iris, Sepal.Length ~ Petal.Width + Species)

# capture stargazer's text output as a character vector so it can be edited
mod1_sg <- capture.output(stargazer::stargazer(mod, mod1, type = "text"))

library(stringr)
# splice a "|" between the two model columns (after character position 44)
mod1_sg[6:25] <- paste(str_sub(mod1_sg[6:25], 1, 44),
                       str_sub(mod1_sg[6:25], 46, 68), sep = "|")
mod1_df <- setNames(as.data.frame(noquote(mod1_sg)[-1]), "")
print(mod1_df, row.names = FALSE)
#>
#> ====================================================================
#>                                       Dependent variable:
#>                             ----------------------------------------
#>                                           Sepal.Length
#>                                        (1)        |       (2)
#> --------------------------------------------|-----------------------
#> Petal.Width                                 |     0.917***
#>                                             |     (0.194)
#>                                             |
#> Speciesversicolor          0.930***         |     -0.060
#>                            (0.103)          |     (0.230)
#>                                             |
#> Speciesvirginica           1.582***         |     -0.050
#>                            (0.103)          |     (0.358)
#>                                             |
#> Constant                   5.006***         |     4.780***
#>                            (0.073)          |     (0.083)
#>                                             |
#> --------------------------------------------|-----------------------
#> Observations                 150            |       150
#> R2                          0.619           |      0.669
#> Adjusted R2                 0.614           |      0.663
#> Residual Std. Error   0.515 (df = 147)      |  0.481 (df = 146)
#> F Statistic        119.265*** (df = 2; 147) |98.525*** (df = 3; 146)
#> ====================================================================
#> Note:                               *p<0.1; **p<0.05; ***p<0.01
# Created on 2021-02-15 by the reprex package (v0.3.0.9001)
UPDATE
For LaTeX output:
mod  <- lm(data = iris, Sepal.Length ~ Species)
mod1 <- lm(data = iris, Sepal.Length ~ Petal.Width + Species)
mod1_sg <- capture.output(stargazer::stargazer(mod, mod1, type = "latex"))

# change the tabular column specification from "lcc" to "lc|c"
mod1_sg <- sub("lcc", "lc|c", mod1_sg)
writeLines(mod1_sg)
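Since the question mentions RMarkdown: a small usage sketch (assuming a chunk with results='asis') to render the patched LaTeX directly:
# inside an RMarkdown chunk with results='asis', emit the patched table as-is
cat(mod1_sg, sep = "\n")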

Related

Missing standard errors and confidence intervals for mixed model and ggeffects

I'm trying to use ggeffects::ggpredict to make some effects plots for my model. I find that the standard errors and confidence limits are missing for many of the results, and I can reproduce the problem with some simulated data. It seems to happen specifically for observations where the standard error puts the predicted probability close to 0 or 1.
I tried to get predictions on the link scale to diagnose whether it's a problem with the translation from link to response, but I don't believe this is supported by the package.
Any ideas how to address this? Many thanks.
library(tidyverse)
library(lme4)
library(ggeffects)

# number of simulated observations
n <- 1000

# simulated data with a numerical predictor x, factor predictor f, response y;
# the simulated effects of x and f are somewhat weak compared to the noise,
# so expect high standard errors
df <- tibble(
  x = seq(-0.1, 0.1, length.out = n),
  g = floor(runif(n) * 3),
  f = letters[1 + g] %>% as.factor(),
  y = pracma::sigmoid(x + (runif(n) - 0.5) + 0.1 * (g - mean(g))),
  z = if_else(y > 0.5, "high", "low") %>% as.factor()
)
# glmer model
model <- glmer(z ~ x + (1 | f), data = df, family = binomial)
print(summary(model))
#> Generalized linear mixed model fit by maximum likelihood (Laplace
#>   Approximation) [glmerMod]
#>  Family: binomial  ( logit )
#> Formula: z ~ x + (1 | f)
#>    Data: df
#>
#>      AIC      BIC   logLik deviance df.resid
#>   1373.0   1387.8   -683.5   1367.0      997
#>
#> Scaled residuals:
#>     Min      1Q  Median      3Q     Max
#> -1.3858 -0.9928  0.7317  0.9534  1.3600
#>
#> Random effects:
#>  Groups Name        Variance Std.Dev.
#>  f      (Intercept) 0.0337   0.1836
#> Number of obs: 1000, groups:  f, 3
#>
#> Fixed effects:
#>             Estimate Std. Error z value Pr(>|z|)
#> (Intercept)  0.02737    0.12380   0.221    0.825
#> x           -4.48012    1.12066  -3.998 6.39e-05 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Correlation of Fixed Effects:
#>   (Intr)
#> x -0.001
# missing standard errors
ggpredict(model, c("x", "f")) %>% print()
#> Data were 'prettified'. Consider using `terms="x [all]"` to get smooth plots.
#> # Predicted probabilities of z
#>
#> # f = a
#>
#>     x | Predicted |       95% CI
#> --------------------------------
#> -0.10 |      0.62 | [0.54, 0.69]
#>  0.00 |      0.51 |
#>  0.10 |      0.40 |
#>
#> # f = b
#>
#>     x | Predicted |       95% CI
#> --------------------------------
#> -0.10 |      0.62 | [0.56, 0.67]
#>  0.00 |      0.51 |
#>  0.10 |      0.40 |
#>
#> # f = c
#>
#>     x | Predicted |       95% CI
#> --------------------------------
#> -0.10 |      0.62 | [0.54, 0.69]
#>  0.00 |      0.51 |
#>  0.10 |      0.40 |
ggpredict(model, c("x", "f")) %>% as_tibble() %>% print(n = 20)
#> Data were 'prettified'. Consider using `terms="x [all]"` to get smooth plots.
#> # A tibble: 9 x 6
#>        x predicted std.error conf.low conf.high group
#>    <dbl>     <dbl>     <dbl>    <dbl>     <dbl> <fct>
#>  1  -0.1     0.617     0.167    0.537     0.691 a
#>  2  -0.1     0.617     0.124    0.558     0.672 b
#>  3  -0.1     0.617     0.167    0.537     0.691 c
#>  4   0       0.507    NA       NA        NA     a
#>  5   0       0.507    NA       NA        NA     b
#>  6   0       0.507    NA       NA        NA     c
#>  7   0.1     0.396    NA       NA        NA     a
#>  8   0.1     0.396    NA       NA        NA     b
#>  9   0.1     0.396    NA       NA        NA     c
Created on 2022-04-12 by the reprex package (v2.0.1)
I think this may be due to the singular model fit.
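A quick way to check that suspicion (a minimal sketch using lme4's isSingular(), not shown in the original thread):
# is the fit singular, i.e. is the random-effect variance at or near zero?
lme4::isSingular(model)
# inspect the estimated random-intercept SD for f directly
lme4::VarCorr(model)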
I dug down into the guts of the code as far as here, where there appears to be a mismatch between the dimensions of the covariance matrix of the predictions (3x3) and the number of predicted values (15).
I further suspect that the problem may happen here:
rows_to_keep <- as.numeric(rownames(unique(model_matrix_data[
  intersect(colnames(model_matrix_data), terms)])))
Perhaps the function is getting confused because the conditional modes/BLUPs for every group are the same (which will generically be true only when the random-effects variance is zero)?
This seems worth opening an issue on the ggeffects issue list.
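For the diagnostic the question asks about (predictions on the link scale), one can bypass ggeffects and compute fixed-effects-only predictions by hand; a sketch, assuming a 1.96 normal quantile for the 95% interval:
# manual link-scale predictions with SEs (fixed effects only, no BLUPs)
nd  <- data.frame(x = c(-0.1, 0, 0.1))
X   <- model.matrix(~ x, data = nd)
eta <- drop(X %*% lme4::fixef(model))
se  <- sqrt(diag(X %*% as.matrix(vcov(model)) %*% t(X)))
# back-transform fit and interval to the response scale
data.frame(nd, fit = plogis(eta),
           lo = plogis(eta - 1.96 * se), hi = plogis(eta + 1.96 * se))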

Extract plain model from tidymodels object

Is it possible to extract, say, a model of class glm from a tidymodels model built with recipe and logistic_reg() %>% set_engine("glm")?
I'd like to use packages from the easystats project, which require "normal", non-tidy models. The workflow extractor function (pull_workflow_fit()) returns an object of class c("_glm", "model_fit"), which doesn't seem to be compatible.
I understand I can generate a model using glm() and the same formula as in the recipe, but it seems to me that the fitted parameters differ.
Thanks!
The easystats package suite has supported tidymodels since its latest updates:
library(tidymodels)
data(two_class_dat)

glm_spec <- logistic_reg() %>%
  set_engine("glm")

norm_rec <- recipe(Class ~ A + B, data = two_class_dat) %>%
  step_normalize(all_predictors())

workflow() %>%
  add_recipe(norm_rec) %>%
  add_model(glm_spec) %>%
  fit(two_class_dat) %>%
  pull_workflow_fit() %>%
  parameters::model_parameters()
#> Parameter   | Log-Odds |   SE |         95% CI |     z |      p
#> ---------------------------------------------------------------
#> (Intercept) |    -0.35 | 0.10 | [-0.54, -0.16] | -3.55 | < .001
#> A           |    -1.11 | 0.17 | [-1.44, -0.79] | -6.64 | < .001
#> B           |     2.80 | 0.21 | [ 2.40,  3.22] | 13.33 | < .001
workflow() %>%
  add_recipe(norm_rec) %>%
  add_model(glm_spec) %>%
  fit(two_class_dat) %>%
  pull_workflow_fit() %>%
  parameters::model_parameters() %>%
  plot()

workflow() %>%
  add_recipe(norm_rec) %>%
  add_model(glm_spec) %>%
  fit(two_class_dat) %>%
  pull_workflow_fit() %>%
  parameters::model_parameters() %>%
  parameters::print_md()
| Parameter   | Log-Odds |   SE | 95% CI         |     z |      p |
|:------------|---------:|-----:|:---------------|------:|-------:|
| (Intercept) |    -0.35 | 0.10 | (-0.54, -0.16) | -3.55 | < .001 |
| A           |    -1.11 | 0.17 | (-1.44, -0.79) | -6.64 | < .001 |
| B           |     2.80 | 0.21 | (2.40, 3.22)   | 13.33 | < .001 |
workflow() %>%
  add_recipe(norm_rec) %>%
  add_model(glm_spec) %>%
  fit(two_class_dat) %>%
  pull_workflow_fit() %>%
  performance::model_performance()
#> # Indices of model performance
#>
#>     AIC |     BIC | Tjur's R2 |  RMSE | Sigma | Log_loss | Score_log | Score_spherical |   PCP
#> ----------------------------------------------------------------------------------------------
#> 679.950 | 693.970 |     0.460 | 0.362 | 0.925 |    0.426 |      -Inf |           0.003 | 0.733
Created on 2021-04-25 by the reprex package (v2.0.0)
You can extract out the underlying model object (whether that was created by glm or ranger or keras or anything) from a parsnip object using $fit.
library(tidymodels)
data(two_class_dat)

glm_spec <- logistic_reg() %>%
  set_engine("glm")

norm_rec <- recipe(Class ~ A + B, data = two_class_dat) %>%
  step_normalize(all_predictors())

glm_fit <- workflow() %>%
  add_recipe(norm_rec) %>%
  add_model(glm_spec) %>%
  fit(two_class_dat) %>%
  pull_workflow_fit()
What is in that fitted object?
## this is a parsnip object
glm_fit
#> parsnip model object
#>
#> Fit time: 5ms
#>
#> Call: stats::glm(formula = ..y ~ ., family = stats::binomial, data = data)
#>
#> Coefficients:
#> (Intercept)            A            B
#>     -0.3491      -1.1063       2.7966
#>
#> Degrees of Freedom: 790 Total (i.e. Null); 788 Residual
#> Null Deviance: 1088
#> Residual Deviance: 673.9 AIC: 679.9
## this is a glm object
glm_fit$fit
#>
#> Call: stats::glm(formula = ..y ~ ., family = stats::binomial, data = data)
#>
#> Coefficients:
#> (Intercept)            A            B
#>     -0.3491      -1.1063       2.7966
#>
#> Degrees of Freedom: 790 Total (i.e. Null); 788 Residual
#> Null Deviance: 1088
#> Residual Deviance: 673.9 AIC: 679.9
Created on 2021-02-04 by the reprex package (v1.0.0)
The fitted parameters will definitely not differ from calling the model directly. If you think you are finding different fitted parameters, then something may be going awry in how you are calling the model.
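One version note, added here as an assumption rather than part of the original answer: in more recent workflows releases, pull_workflow_fit() has been superseded, and the same objects can be pulled with the extract_* helpers (if your installed version has them):
fitted_wf <- workflow() %>%
  add_recipe(norm_rec) %>%
  add_model(glm_spec) %>%
  fit(two_class_dat)

extract_fit_parsnip(fitted_wf)  # the parsnip model object
extract_fit_engine(fitted_wf)   # the underlying glm object, same as $fit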

How to set a coefficient at a particular value, and retain the predictor in the model summary?

I am running a linear regression of the type below:
y <- lm(x ~ z, data)
I want the coefficient on z to be fixed at 0.8, and then I want to be able to extract the resulting estimate for z from the model output using the tidy function. I have had a look at offset(), but I am unable to see the z estimate in the model output, which I need for a summary table. Does it suffice to simply include I(z*0.8)? That would give the code below:
y <- lm(x ~ I(z*0.8), data)
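For reference, I(z*0.8) still estimates a free coefficient on the rescaled variable rather than fixing it; fixing a coefficient is what offset() does. A minimal sketch, with the caveat the question already raises that the fixed term then no longer appears as an estimated row:
# coefficient on z fixed at 0.8; only the remaining terms are estimated
y <- lm(x ~ 1 + offset(0.8 * z), data)
broom::tidy(y)  # the offset term does not show up as a coefficient row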
I would recommend ggeffects. For example:
library(ggeffects)
#> Warning: package 'ggeffects' was built under R version 3.6.2
library(ggplot2)
#> Registered S3 methods overwritten by 'ggplot2':
#>   method         from
#>   [.quosures     rlang
#>   c.quosures     rlang
#>   print.quosures rlang
data(efc)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)
mydf <- ggpredict(fit, terms = c("c12hour [30:80]", "c172code [1,3]"))
mydf
#> # Predicted values of Total score BARTHEL INDEX
#> # x = average number of hours of care per week
#>
#> # c172code = low level of education
#>
#>  x | Predicted |         95% CI
#> -------------------------------
#> 30 |     67.15 | [64.04, 70.26]
#> 38 |     65.12 | [62.06, 68.18]
#> 47 |     62.84 | [59.81, 65.88]
#> 55 |     60.81 | [57.78, 63.85]
#> 63 |     58.79 | [55.72, 61.85]
#> 80 |     54.48 | [51.28, 57.68]
#>
#> # c172code = high level of education
#>
#>  x | Predicted |         95% CI
#> -------------------------------
#> 30 |     68.58 | [65.42, 71.75]
#> 38 |     66.56 | [63.39, 69.73]
#> 47 |     64.28 | [61.08, 67.47]
#> 55 |     62.25 | [59.01, 65.50]
#> 63 |     60.23 | [56.91, 63.54]
#> 80 |     55.92 | [52.39, 59.45]
#>
#> Adjusted for:
#> * neg_c_7 = 11.84
#> * c161sex = 1.76
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line()
Created on 2020-12-04 by the reprex package (v0.3.0)

Extracting posterior modes and credible intervals from glmmTMB output

I normally work with the lme4 package, but the glmmTMB package is increasingly becoming better suited to work with highly complicated data (think overdispersion and/or zero-inflation).
Is there a way to extract posterior modes and credible intervals from glmmTMB models, similar to how it is done for lme4 models (example here)?
Details:
I am working with count data (available here) that are zero-inflated and overdispersed and have random effects. The package best suited to working with this sort of data is glmmTMB (details here). (Note two outliers: euc0==78 and np_other_grass==20.)
The data looks like this:
euc0 ea_grass ep_grass np_grass np_other_grass month year precip season prop_id quad
   3      5.7      0.0     16.7            4.0     7 2006    526 Winter  Barlow    1
   0      6.7      0.0     28.3            0.0     7 2006    525 Winter  Barlow    2
   0      2.3      0.0      3.3            0.0     7 2006    524 Winter  Barlow    3
   0      1.7      0.0     13.3            0.0     7 2006    845 Winter  Blaber    4
   0      5.7      0.0     45.0            0.0     7 2006    817 Winter  Blaber    5
   0     11.7      1.7     46.7            0.0     7 2006    607 Winter  DClark    3
The glmmTMB model:
model <- glmmTMB(euc0 ~ ea_grass + ep_grass + np_grass + np_other_grass +
                   (1 | prop_id),
                 data = euc, family = nbinom2) # nbinom2 lets the variance increase quadratically
summary(model)
confint(model) # this gives the confidence intervals
How I would normally extract the posterior mode and credible intervals for an lmer/glmer model:
# extracting model estimates and credible intervals
sm.model <- arm::sim(model, n.sim = 1000)
smfixef.model <- sm.model@fixef
smfixef.model <- coda::as.mcmc(smfixef.model)
MCMCglmm::posterior.mode(smfixef.model)  # mode of the distribution
coda::HPDinterval(smfixef.model)         # credible intervals

# among-brood variance
bid  <- sm.model@ranef$prop_id[,,1]
bvar <- as.vector(apply(bid, 1, var))  # between-brood variance posterior distribution
bvar <- coda::as.mcmc(bvar)
MCMCglmm::posterior.mode(bvar)  # mode of the distribution
coda::HPDinterval(bvar)         # credible intervals
Most of an answer:
Getting a multivariate Normal sample of the parameters of the conditional model is pretty easy (I think this is what arm::sim() is doing):
library(MASS)
pp <- fixef(model)$cond   # fixed effects of the conditional model
vv <- vcov(model)$cond    # their covariance matrix
samp <- MASS::mvrnorm(1000, mu = pp, Sigma = vv)
(then use the rest of your method above).
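Concretely, the rest of that workflow applied to samp would be (a sketch mirroring the question's own code):
samp_mcmc <- coda::as.mcmc(samp)
MCMCglmm::posterior.mode(samp_mcmc)  # posterior modes of the fixed effects
coda::HPDinterval(samp_mcmc)         # credible intervals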
I'm a little skeptical that your second example is doing what you want it to do. The variance of the conditional modes is not necessarily a good estimate of the between-group variance (e.g. see here). Furthermore, I'm nervous about the half-assed Bayesian approach (e.g., why no priors? why look at the posterior mode, which is rarely a meaningful value in a Bayesian context?), although I do sometimes use similar approaches myself. However, it's not too hard to use glmmTMB results to do a proper Markov chain Monte Carlo analysis:
library(tmbstan)
library(rstan)
library(coda)
library(emdbook) ## for lump.mcmc.list(), or use runjags::combine.mcmc()
t2 <- system.time(m2 <- tmbstan(model$obj))
m3 <- rstan::As.mcmc.list(m2)
lattice::xyplot(m3,layout=c(5,6))
m4 <- emdbook::lump.mcmc.list(m3)
coda::HPDinterval(m4)
It may be helpful to know that the theta column of m4 is the log of the among-group standard deviation ...
(See vignette("mcmc", package="glmmTMB") for a little bit more information ...)
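To put theta back on its natural scale, one can exponentiate the sampled column; a sketch, assuming the column is named theta (it may appear as theta[1] depending on versions):
# theta is the log among-group SD; exponentiate to get the SD itself
theta_col <- grep("^theta", colnames(m4), value = TRUE)[1]
sd_samp   <- coda::as.mcmc(exp(as.numeric(m4[, theta_col])))
coda::HPDinterval(sd_samp)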
I think Ben has already answered your question, so my answer does not add much to the discussion... maybe just one thing. You wrote in your comments that you're interested in the within- and between-group variances; you can get this information via parameters::random_parameters() (if I did not misunderstand what you were looking for). See the example below, which first generates simulated samples from a multivariate normal (just like in Ben's example) and later gives you a summary of the random-effect variances...
library(readr)
library(glmmTMB)
library(parameters)
library(bayestestR)
library(insight)
euc_data <- read_csv("D:/Downloads/euc_data.csv")
model <- glmmTMB(
  euc0 ~ ea_grass + ep_grass + np_grass + np_other_grass + (1 | prop_id),
  data = euc_data,
  family = nbinom2
) # nbinom2 lets the variance increase quadratically
# generate samples
samples <- parameters::simulate_model(model)
#> Model has no zero-inflation component. Simulating from conditional parameters.
# describe samples
bayestestR::describe_posterior(samples)
#> # Description of Posterior Distributions
#>
#> Parameter      | Median |           89% CI |    pd |        89% ROPE | % in ROPE
#> --------------------------------------------------------------------------------
#> (Intercept)    | -1.072 | [-2.183, -0.057] | 0.944 | [-0.100, 0.100] |     1.122
#> ea_grass       | -0.001 | [-0.033,  0.029] | 0.525 | [-0.100, 0.100] |   100.000
#> ep_grass       | -0.050 | [-0.130,  0.038] | 0.839 | [-0.100, 0.100] |    85.297
#> np_grass       | -0.020 | [-0.054,  0.012] | 0.836 | [-0.100, 0.100] |   100.000
#> np_other_grass | -0.002 | [-0.362,  0.320] | 0.501 | [-0.100, 0.100] |    38.945
# or directly get summary of sample description
sp <- parameters::simulate_parameters(model, ci = .95, ci_method = "hdi", test = c("pd", "p_map"))
sp
#> Model has no zero-inflation component. Simulating from conditional parameters.
#> # Description of Posterior Distributions
#>
#> Parameter      | Coefficient | p_MAP |    pd |              CI
#> ---------------------------------------------------------------
#> (Intercept)    |      -1.037 | 0.281 | 0.933 | [-2.305, 0.282]
#> ea_grass       |      -0.001 | 0.973 | 0.511 | [-0.042, 0.037]
#> ep_grass       |      -0.054 | 0.553 | 0.842 | [-0.160, 0.047]
#> np_grass       |      -0.019 | 0.621 | 0.802 | [-0.057, 0.023]
#> np_other_grass |       0.019 | 0.999 | 0.540 | [-0.386, 0.450]
plot(sp) + see::theme_modern()
#> Model has no zero-inflation component. Simulating from conditional parameters.
# random effect variances
parameters::random_parameters(model)
#> # Random Effects
#>
#> Within-Group Variance               2.92 (1.71)
#> Between-Group Variance
#>   Random Intercept (prop_id)        2.1  (1.45)
#> N (groups per factor)
#>   prop_id                             18
#> Observations                         346
insight::get_variance(model)
#> Warning: mu of 0.2 is too close to zero, estimate of random effect variances may be unreliable.
#> $var.fixed
#> [1] 0.3056285
#>
#> $var.random
#> [1] 2.104233
#>
#> $var.residual
#> [1] 2.91602
#>
#> $var.distribution
#> [1] 2.91602
#>
#> $var.dispersion
#> [1] 0
#>
#> $var.intercept
#> prop_id
#> 2.104233
Created on 2020-05-26 by the reprex package (v0.3.0)

Trying to reproduce xtreg in Stata with plm in R

I can't seem to match Stata's xtreg command in R except when I use the fe option in Stata.
The coefficients are the same in Stata and R when I run a standard regression or a panel model with fixed effects.
Sample data:
library("plm" )
z <- Cigar[ Cigar$year %in% c( 63, 73) , ]
#saving so I can use in Stata
foreign::write.dta( z , "C:/Users/matthewr/Desktop/temp.dta")
So I get the same coefficient with this in R:
coef(lm(sales ~ pop, data = z))
and this in Stata
use "C:/Users/matthewr/Desktop/temp.dta" , clear
reg sales pop
And it works when I set up a panel and use the fixed-effects option.
z2 <- pdata.frame(z, index = c("state", "year"))
coef(plm(sales ~ pop, data = z2, model = "within")) # matches xtreg, fe
Matches this in Stata
xtset state year
xtreg sales pop, fe
I can't figure out how to match Stata when I am not using the fixed-effects option. This is the result, from
xtreg sales pop
that I am trying to reproduce in R:
Coefficient: -.0006838
Stata xtreg y x is equivalent to xtreg y x, re, so what you want is to calculate random effects.
summary(plm(sales ~ pop, data = z, model = "random",
            index = c("state", "year")))$coefficients
#                  Estimate  Std. Error   z-value     Pr(>|z|)
# (Intercept)  1.311398e+02 6.499511330 20.176878 1.563130e-90
# pop         -6.837769e-04 0.001077432 -0.634636 5.256658e-01
Stata:
xtreg sales pop, re
       sales |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         pop |  -.0006838   .0010774    -0.63   0.526    -.0027955     .001428
       _cons |   131.1398   6.499511    20.18   0.000      118.401    143.8787
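A note on why the estimates match, with the explicit argument as an assumption rather than something shown above: plm's default random.method is "swar" (Swamy-Arora), which is also the variance-components estimator Stata's xtreg, re uses by default, so spelling it out should change nothing:
summary(plm(sales ~ pop, data = z, model = "random",
            random.method = "swar", index = c("state", "year")))$coefficients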
Your question has been answered by @jay.sf. I just add something else, although it may not directly answer your question. Both Stata's xtreg and R's plm have a few options; I feel the RStata package could be a convenient tool for trying different options and comparing results from both Stata and R directly in RStudio. I thought it could be helpful. The Stata path below is specific to my computer.
library("plm" )
library(RStata)
data("Cigar", package = "plm")
z <- Cigar[ Cigar$year %in% c( 63, 73) , ]
options("RStata.StataPath" = "\"C:\\Program Files (x86)\\Stata14\\StataSE-64\"")
options("RStata.StataVersion" = 14)
# Stata fe
stata_do1 <- '
xtset state year
xtreg sales pop, fe
'
stata(stata_do1, data.out = TRUE, data.in = z)
#> .
#> . xtset state year
#>        panel variable:  state (strongly balanced)
#>         time variable:  year, 63 to 73, but with gaps
#>                 delta:  1 unit
#>
#> . xtreg sales pop, fe
#>
#> Fixed-effects (within) regression               Number of obs      =        92
#> Group variable: state                           Number of groups   =        46
#>
#> R-sq:  within  = 0.0118                         Obs per group: min =         2
#>        between = 0.0049                                        avg =       2.0
#>        overall = 0.0048                                        max =         2
#>
#>                                                 F(1,45)            =      0.54
#> corr(u_i, Xb)  = -0.3405                        Prob > F           =    0.4676
#>
#> ------------------------------------------------------------------------------
#>        sales |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
#> -------------+----------------------------------------------------------------
#>          pop |  -.0032108   .0043826    -0.73   0.468    -.0120378    .0056162
#>        _cons |   141.5186   18.06909     7.83   0.000     105.1256    177.9116
#> -------------+----------------------------------------------------------------
#>      sigma_u |  34.093409
#>      sigma_e |  15.183908
#>          rho |  .83448264   (fraction of variance due to u_i)
#> ------------------------------------------------------------------------------
#> F test that all u_i=0:     F(45, 45) =     8.91             Prob > F = 0.0000
# R
z2 <- pdata.frame(z, index = c("state", "year"))
coef(plm(sales ~ pop, data = z2, model = "within"))
#> pop
#> -0.003210817
# Stata re
stata_do2 <- '
xtset state year
xtreg sales pop, re
'
stata(stata_do2, data.out = TRUE, data.in = z)
#> .
#> . xtset state year
#>        panel variable:  state (strongly balanced)
#>         time variable:  year, 63 to 73, but with gaps
#>                 delta:  1 unit
#>
#> . xtreg sales pop, re
#>
#> Random-effects GLS regression                   Number of obs      =        92
#> Group variable: state                           Number of groups   =        46
#>
#> R-sq:  within  = 0.0118                         Obs per group: min =         2
#>        between = 0.0049                                        avg =       2.0
#>        overall = 0.0048                                        max =         2
#>
#>                                                 Wald chi2(1)       =      0.40
#> corr(u_i, X)   = 0 (assumed)                    Prob > chi2        =    0.5257
#>
#> ------------------------------------------------------------------------------
#>        sales |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
#> -------------+----------------------------------------------------------------
#>          pop |  -.0006838   .0010774    -0.63   0.526    -.0027955     .001428
#>        _cons |   131.1398   6.499511    20.18   0.000      118.401    143.8787
#> -------------+----------------------------------------------------------------
#>      sigma_u |  30.573218
#>      sigma_e |  15.183908
#>          rho |  .80214841   (fraction of variance due to u_i)
#> ------------------------------------------------------------------------------
# R random
coef(plm(sales ~ pop,
         data = z,
         model = "random",
         index = c("state", "year")))
#> (Intercept)           pop
#> 1.311398e+02 -6.837769e-04
Created on 2020-01-27 by the reprex package (v0.3.0)
