I am trying to calculate the residual standard error of a linear regression model using the survey package. I am working with a complex design, and the sampling weight of the complex design is given by "weight" in the code below.
fitM1 <- lm(med~x1+x2,data=pop_sample,weights=weight)
fitM2 <- svyglm(med~x1+x2,data=pop_sample,design=design)
First, if I call "summary(fitM1)", I get the following:
Call: lm(formula=med~x1+x2,data=pop_sample,weights=weight)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.001787 0.042194 0.042 0.966
x1 0.382709 0.061574 6.215 1.92e-09 ***
x2 0.958675 0.048483 19.773 < 2e-16 ***
Residual standard error: 9.231 on 272 degrees of freedom
Multiple R-squared: 0.8958, Adjusted R-squared: 0.8931
F-statistic: 334.1 on 7 and 272 DF, p-value: < 2.2e-16
Next, if I call "summary(fitM2)", I get the following:
summary(fitM2)
Call: svyglm(formula=med~x1+x2,data=pop_sample,design=design)
Survey design: svydesign(id=~id_cluster,strat=~id_stratum,weight=weight,data=pop_sample)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.001787 0.043388 0.041 0.967878
x1 0.382709 0.074755 5.120 0.000334 ***
x2 0.958675 0.041803 22.933 1.23e-10 ***
When using "lm", I can extract the residual standard error by calling:
fitMvariance <- summary(fitM1)$sigma^2
However, I can't find an analogous function for "svyglm" anywhere in the survey package. The point estimates are the same when comparing the two approaches, but the standard errors of the coefficients (and, presumably, the residual standard error of the model) are different.
Survey Analysis
Use the survey library in R to perform survey analysis; it offers a wide range of functions to calculate statistics such as percentages, lower and upper confidence limits, population totals, and the relative standard error (RSE).
RSE
We can use the svyby function in the survey package to get all of these statistics, including the relative standard error.
library("survey")
design <- svydesign(id=~id_cluster,strata=~id_stratum,weights=~weight,data=pop_sample)
svyby(~med, ~x1+x2, design, svytotal, deff=TRUE, verbose=TRUE,vartype=c("se","cv","cvpct","var"))
The cvpct column gives the coefficient of variation as a percentage, i.e. the relative standard error (RSE).
See ?svyby for further information.
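Since pop_sample is not available, here is a runnable sketch on the api example data shipped with the survey package; vartype= requests the SE, CV, CV-percent, and variance columns:

```r
library(survey)

# Stratified example design shipped with the survey package
data(api)
dstrat <- svydesign(id = ~1, strata = ~stype, weights = ~pw,
                    data = apistrat, fpc = ~fpc)

# Mean of api00 by school type, with SE, CV, CV% (the relative
# standard error) and variance reported for each group
svyby(~api00, ~stype, dstrat, svymean,
      vartype = c("se", "cv", "cvpct", "var"))
```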
Because svyglm is built on glm, not lm, the variance estimate is called $dispersion rather than $sigma:
> data(api)
> dstrat<-svydesign(id = ~1, strata = ~stype, weights = ~pw, data = apistrat,
+ fpc = ~fpc)
> model<-svyglm(api00~ell+meals+mobility, design=dstrat)
> summary(model)$dispersion
variance SE
[1,] 5172 492.28
This is the estimate of $\sigma^2$, which is the population residual variance. In this example we actually have the whole population, so we can compare
> popmodel<-lm(api00~ell+meals+mobility, data=apipop)
> summary(popmodel)$sigma
[1] 70.58365
> sqrt(5172)
[1] 71.91662
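To pull this out programmatically (the analogue of summary(fitM1)$sigma^2 for lm), the $dispersion component can be coerced to a plain number. A sketch on the same api example; the as.numeric coercion is an assumption about how the object is stored, so compare the result against the printed value:

```r
library(survey)

data(api)
dstrat <- svydesign(id = ~1, strata = ~stype, weights = ~pw,
                    data = apistrat, fpc = ~fpc)
model <- svyglm(api00 ~ ell + meals + mobility, design = dstrat)

# Estimated population residual variance and its square root
sigma2_hat <- as.numeric(summary(model)$dispersion)
sqrt(sigma2_hat)
```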
Related
I use a fixed-effects model with time and group fixed effects. Further, I want to calculate cluster-robust standard errors, so I use coeftest(model, vcov = vcovDC(model)).
I do not understand how the degrees of freedom are calculated for the reported t-statistics. Does it use the same degrees of freedom as the plm fixed-effects model, or are they adjusted? My question is really: are the degrees of freedom adjusted when one uses clustered standard errors for a two-way fixed-effects model, or do they remain the same?
plm calculates an ordinary variance–covariance matrix (VCOV). When you call summary on your plm object (what you probably mean by "provided plm-fixed-effect model"), the plm:::summary.plm method is applied, which uses ordinary standard errors (SE) without a degrees-of-freedom correction, unless you change the vcov= argument (default NULL) to a VCOV calculated differently, e.g. with vcovCL or vcovDC.
You can do lmtest::coeftest(fit, vcov.=...), or directly summary(fit, vcov=...), as I show you below in an example.
Example
library(plm)
data(Cigar)
fit <- plm(sales ~ price, data=Cigar, effect="twoways", model="within",
index=c("state", "year"))
summary(fit)$coe
# same:
summary(fit, vcov=NULL)$coe ## default, ordinary SE
# Estimate Std. Error t-value Pr(>|t|)
# price -1.084712 0.07554847 -14.35782 1.640552e-43
Now, to get robust standard errors (without adjustment for clustering), we may use vcovCL and consider the type= argument. In ?sandwich::vcovCL we may read:
HC0 applies no small sample bias adjustment. HC1 applies a degrees of
freedom-based correction, (n-1)/(n-k) where n is the number of
observations and k is the number of explanatory or predictor variables
in the model.
summary(fit, vcov=vcovHC)$coe
# same:
summary(fit, vcov=vcovHC(fit, type="HC0"))$coe ## robust SE
# Estimate Std. Error t-value Pr(>|t|)
# price -1.084712 0.2406786 -4.506889 7.168418e-06
summary(fit, vcov=vcovHC(fit, type="HC1"))$coe ## robust SE, df-corrected
# Estimate Std. Error t-value Pr(>|t|)
# price -1.084712 0.2407658 -4.505256 7.22292e-06
The same applies to vcovDC and its type= argument for robust standard errors, doubly adjusted for clustering on group and time:
summary(fit, vcov=vcovDC(fit))$coe
# same:
summary(fit, vcov=vcovDC(fit, type="HC0"))$coe ## double-cluster-robust SE
# Estimate Std. Error t-value Pr(>|t|)
# price -1.084712 0.2923507 -3.71031 0.0002157146
summary(fit, vcov=vcovDC(fit, type="HC1"))$coe ## double-cluster-robust SE, df-corrected
# Estimate Std. Error t-value Pr(>|t|)
# price -1.084712 0.2924567 -3.708966 0.0002168511
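The lmtest::coeftest route mentioned above gives the same tables without going through summary; a minimal sketch:

```r
library(plm)
library(lmtest)

data("Cigar", package = "plm")
fit <- plm(sales ~ price, data = Cigar, effect = "twoways",
           model = "within", index = c("state", "year"))

# Same double-cluster-robust, df-corrected table as
# summary(fit, vcov = vcovDC(fit, type = "HC1"))$coe
coeftest(fit, vcov. = vcovDC(fit, type = "HC1"))
```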
We're trying to model a count variable with excessive zeros using a zero-inflated Poisson model (as implemented in the pscl package). Here is a (simplified) output showing both categorical and continuous explanatory variables:
library(pscl)
> m1 <- zeroinfl(y ~ treatment + some_covar, data = d, dist = "poisson")
> summary(m1)
Count model coefficients (poisson with log link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.189253 0.102256 31.189 < 2e-16 ***
treatmentB -0.282478 0.107965 -2.616 0.00889 **
treatmentC 0.227633 0.103605 2.197 0.02801 *
some_covar 0.002190 0.002329 0.940 0.34706
Zero-inflation model coefficients (binomial with logit link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.67251 0.74961 0.897 0.3696
treatmentB -1.72728 0.89931 -1.921 0.0548 .
treatmentC -0.31761 0.77668 -0.409 0.6826
some_covar -0.03736 0.02684 -1.392 0.1640
summary gave us some good answers, but we are looking for an ANOVA-like table. So, the question is: is it OK to use car::Anova to obtain such a table?
> Anova(m1)
Analysis of Deviance Table (Type II tests)
Response: y
Df Chisq Pr(>Chisq)
treatment 2 30.7830 2.068e-07 ***
some_covar 1 0.8842 0.3471
It seems to work fine, but I'm not really sure whether this is a valid approach, since documentation is missing (it seems to be considering only the 'count model' part?). Do you recommend following this approach, or is there a better way?
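One manual cross-check, sketched here on simulated data since d is not shown, is a likelihood-ratio test that drops the term from both the count and the zero-inflation part; the 2 df that Anova reports for treatment suggest only the count part is being tested, whereas the full LR test below has 4 df:

```r
library(pscl)
library(lmtest)

# Simulated zero-inflated Poisson data with a 3-level treatment
# (hypothetical stand-in for the real data set d)
set.seed(42)
n <- 300
treatment <- factor(sample(c("A", "B", "C"), n, replace = TRUE))
some_covar <- rnorm(n)
mu <- exp(1 + 0.3 * (treatment == "B"))      # count-part mean
p0 <- plogis(-1 + 0.5 * (treatment == "C"))  # zero-inflation probability
y  <- ifelse(runif(n) < p0, 0L, rpois(n, mu))
d  <- data.frame(y, treatment, some_covar)

m1 <- zeroinfl(y ~ treatment + some_covar, data = d, dist = "poisson")
m0 <- zeroinfl(y ~ some_covar, data = d, dist = "poisson")

# LR test for `treatment` in BOTH model parts
# (2 count coefficients + 2 zero-inflation coefficients = 4 df)
lrtest(m0, m1)
```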
I want to compute a logit regression for rare events. I decided to use the Zelig package (relogit function) to do so.
Usually, I use stargazer to extract and save regression results. However, there seem to be compatibility issues with these two packages (Using stargazer with Zelig).
I now want to extract the following information from the Zelig relogit output:
Coefficients, z values, p values, number of observations, log likelihood, AIC
I have managed to extract the p-values and coefficients, but I failed with the rest. I am sure these values must be accessible somehow, because they are reported in the summary() output (though I did not manage to store the summary output as an R object). The summary cannot be processed the same way as a regular glm summary (https://stats.stackexchange.com/questions/176821/relogit-model-from-zelig-package-in-r-how-to-get-the-estimated-coefficients).
A reproducible example:
##Initiate package, model and data
require(Zelig)
data(mid)
z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
data = mid, model = "relogit")
##Call summary on output (reports in console most of the needed information)
summary(z.out1)
##Storing the summary fails and only produces a useless object
summary(z.out1) -> z.out1.sum
##Some of the output I can access as follows
z.out1$get_coef() -> z.out1.coeff
z.out1$get_pvalue() -> z.out1.p
z.out1$get_se() -> z.out1.se
However, I did not find similar commands for other elements, such as z values, AIC, etc. As they are shown in the summary() call, they should be accessible somehow.
The summary call result:
Model:
Call:
z5$zelig(formula = conflict ~ major + contig + power + maxdem +
mindem + years, data = mid)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.0742 -0.4444 -0.2772 0.3295 3.1556
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.535496 0.179685 -14.111 < 2e-16
major 2.432525 0.157561 15.439 < 2e-16
contig 4.121869 0.157650 26.146 < 2e-16
power 1.053351 0.217243 4.849 1.24e-06
maxdem 0.048164 0.010065 4.785 1.71e-06
mindem -0.064825 0.012802 -5.064 4.11e-07
years -0.063197 0.005705 -11.078 < 2e-16
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 3979.5 on 3125 degrees of freedom
Residual deviance: 1868.5 on 3119 degrees of freedom
AIC: 1882.5
Number of Fisher Scoring iterations: 6
Next step: Use 'setx' method
Use from_zelig_model for deviance and AIC:
m <- from_zelig_model(z.out1)
m$aic
...
Z values are coefficient / SE:
z.out1$get_coef()[[1]]/z.out1$get_se()[[1]]
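The remaining quantities can then be read off the recovered fit with the standard extractors (a sketch, assuming from_zelig_model returns a glm-style object as in the lines above):

```r
library(Zelig)

data(mid)
z.out1 <- zelig(conflict ~ major + contig + power + maxdem + mindem + years,
                data = mid, model = "relogit")

# Recover the underlying glm-style fit, then use standard extractors
m <- from_zelig_model(z.out1)
logLik(m)  # log likelihood
AIC(m)     # AIC
nobs(m)    # number of observations
```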
I am attempting to get the lag-one autocorrelation estimate from the gls function (package nlme), along with its SE. This is being done on a non-stationary univariate time series. Here is the output:
Generalized least squares fit by REML
Model: y ~ year
Data: tempdata
AIC BIC logLik
51.28921 54.37957 -21.64461
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.9699799
Coefficients:
Value Std.Error t-value p-value
(Intercept) -1.1952639 3.318268 -0.3602072 0.7234
year -0.2055264 0.183759 -1.1184567 0.2799
Correlation:
(Intr)
year -0.36
Standardized residuals:
Min Q1 Med Q3 Max
-0.12504485 -0.06476076 0.13948378 0.51581993 0.66030397
Residual standard error: 3.473776
Degrees of freedom: 18 total; 16 residual
The Phi coefficient seemed promising, since it appears under the correlation structure in the output:
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.9699799
but it regularly goes above one, which is not possible for a correlation. Then there is the
Correlation:
(Intr)
Yearctr -0.36
but I was advised that this is likely not a correct estimate for the data (there were multiple test sites, so this is just one of the unexpected estimates). Is there a function that outputs an AR(1) estimate and its SE (other than arima)?
sample of autocorrelated data:
library(nlme)
set.seed(29)
y = diffinv(rnorm(500))
x = 1:length(y)
gls(y~x, correlation=corAR1(form=~1))
Note: I am comparing the function arima() to gls() (or another method) to compare AR(1) estimates and SEs. I am doing this at my adviser's request.
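On that sample, the AR(1) parameter can at least be extracted programmatically. A sketch: the coef() access into the corStruct is standard nlme, and intervals(fit) reports an approximate confidence interval for Phi (from which a rough SE could be backed out), though it can fail when the approximate variance-covariance of the estimates is not available:

```r
library(nlme)

set.seed(29)
y <- diffinv(rnorm(500))
x <- seq_along(y)
fit <- gls(y ~ x, correlation = corAR1(form = ~1))

# AR(1) parameter on the correlation scale
phi_hat <- coef(fit$modelStruct$corStruct, unconstrained = FALSE)
phi_hat

# intervals(fit) would give an approximate CI for Phi,
# but may error for near-unit-root series like this one
```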
I am trying to understand the difference between two different fitting methods for a data set with a bounded response variable. The response variable is a fraction and therefore has a range of [0,1]. I have uncovered through my Google searching that there are a lot of different methods out there as this is a common operation. I am currently interested in the difference between the stock R GLM fit and the Beta regression offered in the betareg package. I am using the GasolineYield data set from the "betareg" package as my sample data set. Before I post the code and the results my two questions are the following:
Am I performing the logistic regression fit in R using the built-in R GLM correctly?
Why are the standard errors reported in the Beta regression so much smaller than the standard errors for the R logistic regression?
R Setup Code
library(betareg)
data("GasolineYield", package = "betareg")
Beta Regression code from the "betareg" package
gy = betareg(yield ~ batch + temp, data = GasolineYield)
summary(gy)
Beta Regression summary output
Call:
betareg(formula = yield ~ batch + temp, data = GasolineYield)
Standardized weighted residuals 2:
Min 1Q Median 3Q Max
-2.8750 -0.8149 0.1601 0.8384 2.0483
Coefficients (mean model with logit link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.1595710 0.1823247 -33.784 < 2e-16 ***
batch1 1.7277289 0.1012294 17.067 < 2e-16 ***
batch2 1.3225969 0.1179020 11.218 < 2e-16 ***
batch3 1.5723099 0.1161045 13.542 < 2e-16 ***
batch4 1.0597141 0.1023598 10.353 < 2e-16 ***
batch5 1.1337518 0.1035232 10.952 < 2e-16 ***
batch6 1.0401618 0.1060365 9.809 < 2e-16 ***
batch7 0.5436922 0.1091275 4.982 6.29e-07 ***
batch8 0.4959007 0.1089257 4.553 5.30e-06 ***
batch9 0.3857930 0.1185933 3.253 0.00114 **
temp 0.0109669 0.0004126 26.577 < 2e-16 ***
Phi coefficients (precision model with identity link):
Estimate Std. Error z value Pr(>|z|)
(phi) 440.3 110.0 4.002 6.29e-05 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Type of estimator: ML (maximum likelihood)
Log-likelihood: 84.8 on 12 Df
Pseudo R-squared: 0.9617
Number of iterations: 51 (BFGS) + 3 (Fisher scoring)
R GLM Logistic Regression code from stock R
glmfit = glm(yield ~ batch + temp, data = GasolineYield, family = "binomial")
summary(glmfit)
R GLM Logistic Regression summary output
Call:
glm(formula = yield ~ batch + temp, family = "binomial", data = GasolineYield)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.100459 -0.025272 0.004217 0.032879 0.082113
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.130227 3.831798 -1.600 0.110
batch1 1.720311 2.127205 0.809 0.419
batch2 1.305746 2.481266 0.526 0.599
batch3 1.562343 2.440712 0.640 0.522
batch4 1.048928 2.152385 0.487 0.626
batch5 1.125075 2.176242 0.517 0.605
batch6 1.029601 2.229773 0.462 0.644
batch7 0.540401 2.294474 0.236 0.814
batch8 0.497355 2.288564 0.217 0.828
batch9 0.378315 2.494881 0.152 0.879
temp 0.010906 0.008676 1.257 0.209
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 2.34184 on 31 degrees of freedom
Residual deviance: 0.07046 on 21 degrees of freedom
AIC: 36.631
Number of Fisher Scoring iterations: 5
The standard errors are different because the variance assumptions in the two models are different.
Logistic regression assumes the response has a binomial distribution, while beta regression assumes it has a beta distribution.
The variance functions of the two are different. For the binomial, if you specify the mean (and $n$ is given), the variance is determined. For the beta there's another free parameter, so it isn't determined by the mean and would presumably be estimated from the data.
This suggests that if you fit a quasibinomial GLM (adding a variance parameter) you might get closer to the same standard errors, but they still won't be the same, since they would weight the observations differently.
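A quasibinomial sketch on the same data, to see how much the extra dispersion parameter moves the standard errors (the last line is just one way to pull out the SE column):

```r
library(betareg)  # only for the GasolineYield example data

data("GasolineYield", package = "betareg")

# Same mean model as the binomial fit, but with a free dispersion
# parameter estimated from the data
quasifit <- glm(yield ~ batch + temp, data = GasolineYield,
                family = quasibinomial)
summary(quasifit)$coefficients[, "Std. Error"]
```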
What you should actually do:
if your proportions are originally counts divided by some total count, then a binomial GLM would be an appropriate model to consider. (You would need the total counts, though.)
if your proportions are continuous fractions (the proportion of milk that's cream, for example), then beta regression is an appropriate model to consider.
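A minimal sketch of the counts case, with simulated (hypothetical) totals; the two-column response cbind(successes, failures) is what tells glm the binomial denominators:

```r
# Hypothetical data: 100 groups, each with 50 trials
set.seed(1)
n <- 100
total <- 50
x <- rnorm(n)
successes <- rbinom(n, total, plogis(-1 + 0.5 * x))

# Binomial GLM on counts: response is (successes, failures)
fit_counts <- glm(cbind(successes, total - successes) ~ x,
                  family = binomial)
coef(fit_counts)
```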