Coeftest function in R -- variable not reported in output

I am running a linear regression in R and calculating clustered standard errors. coeftest() produces output, but in some cases it reports nothing at all for a variable, and there is no error. Does this mean the coefficient could not be calculated, or does coeftest() not report variables that are insignificant? I can't find the answer anywhere in the R documentation.
Here is the output from R:
lm1 <- lm(PeaceA ~ Soc_Edu + Pol_Constitution + mediation + gdp + enrollratio + infantmortality , data=qsi.surv)
coeftest(lm1, vcov = vcovHC(lm1, type = "HC1"))
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.05780946 0.20574444 -5.1414 4.973e-06 ***
Soc_Edu -1.00735592 0.11756507 -8.5685 3.088e-11 ***
mediation 0.65682159 0.06291926 10.4391 6.087e-14 ***
gdp 0.00041894 0.00010205 4.1052 0.000156 ***
enrollratio 0.00852143 0.00177600 4.7981 1.598e-05 ***
infantmortality 0.00455383 0.00079536 5.7255 6.566e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Notice that there is nothing reported for the variable Pol_Constitution.

I assume you mean coeftest() from package lmtest and vcovHC() from package sandwich. In this combination, coefficients for linearly dependent columns are silently dropped from coeftest's output. Thus, I assume your variable/column Pol_Constitution suffers from linear dependence.
Below is an example which demonstrates the behaviour with a linearly dependent column. See how the estimated coefficient for I(2 * cyl) is NA in a plain summary() and in coeftest(), but silently dropped when the latter is combined with vcovHC().
library(lmtest)
library(sandwich)
data(mtcars)
summary(mod <- lm(mpg ~ cyl + I(2*cyl), data = mtcars))
#> [...]
#> Coefficients: (1 not defined because of singularities)
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 37.8846 2.0738 18.27 < 2e-16 ***
#> cyl -2.8758 0.3224 -8.92 6.11e-10 ***
#> I(2 * cyl) NA NA NA NA
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> [...]
coeftest(mod)
#>
#> t test of coefficients:
#>
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 37.88458 2.07384 18.2678 < 2.2e-16 ***
#> cyl -2.87579 0.32241 -8.9197 6.113e-10 ***
#> I(2 * cyl) NA NA NA NA
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
coeftest(mod, vcov. = vcovHC(mod))
#>
#> t test of coefficients:
#>
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 37.88458 2.74154 13.8187 1.519e-14 ***
#> cyl -2.87579 0.38869 -7.3987 3.040e-08 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
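
To confirm which term is linearly dependent before reaching for coeftest(), a couple of quick checks on the same fitted object mod from above (a minimal sketch):
# coef() keeps the NA entry for the dropped term, so the aliased
# column is easy to spot
coef(mod)
# alias() shows how the dropped term is a linear combination
# of the other regressors
alias(mod)
Applied to the original model, the same checks should single out Pol_Constitution.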


How to iterate over many dependent and independent variables to produce a list of regressions using fixest

DATA BELOW
library(fixest)
library(tibble)

analysis <- tibble(
  off_race = c("hispanic", "hispanic", "white", "white", "hispanic", "white",
               "hispanic", "white", "white", "white", "hispanic"),
  any_black_uof = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE,
                    FALSE, FALSE, FALSE),
  any_black_arrest = c(TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE,
                       FALSE, FALSE, FALSE),
  prop_white_scale = c(0.866619646027524, -1.14647499712298, 1.33793539994219,
                       0.593565300512359, -0.712819809606193, 0.3473585867755,
                       -1.37025501425243, 1.16596624239715, 0.104521426674564,
                       0.104521426674564, -1.53728347122581),
  prop_hisp_scale = c(-0.347382203637802, 1.54966785579018, -0.833021026477168,
                      -0.211470492567308, 1.48353691981021, 0.421968013870802,
                      2.63739845069911, -0.61002505397242, 0.66674880256898,
                      0.66674880256898, 2.93190487813111)
)
I would like to run a series of regressions that iterate over these vectors
officer_race <- c("black", "white", "hispanic")
primary_ind <- c("prop_white_scale", "prop_hisp_scale", "prop_black_scale")
outcome <- c("any_black_uof", "any_white_uof", "any_hisp_uof",
             "any_black_arrest", "any_white_arrest", "any_hisp_arrest",
             "any_black_stop", "any_white_stop", "any_hisp_stop")
Also of note, I would like to use the fixest package. The regressions would look like this:
feols(any_black_uof ~ prop_white_scale, data = analysis[analysis$off_race == "black", ])
feols(any_black_uof ~ prop_white_scale, data = analysis[analysis$off_race == "white", ])
feols(any_black_uof ~ prop_white_scale, data = analysis[analysis$off_race == "hispanic", ])
feols(any_black_uof ~ prop_hisp_scale, data = analysis[analysis$off_race == "black", ])
feols(any_black_uof ~ prop_hisp_scale, data = analysis[analysis$off_race == "white", ])
etc. iterating through all possible combinations and creating a list of fixest regression objects.
I tried this and it works with the lm() function. Yet, it breaks with feols.
result <- lapply(split(analysis, analysis$off_race), function(x) {
  sapply(primary_ind, function(y) {
    sapply(outcome, function(z) {
      feols(paste(y, z, sep = "~"), x)
    }, simplify = FALSE)
  }, simplify = FALSE)
})
The reason lm() works is that we can supply formulas as strings (probably because it's such a common mistake to make?), so it works with paste().
Usually we use reformulate() to create formulas from strings, and with this feols() will work.
I changed your model input data a little because we only have a fraction of your data and your model inputs contain column names that are not present in the example data.
library(fixest)
library(tibble)
officer_race <- c("black", "white", "hispanic")
primary_ind <-c("prop_white_scale","prop_hisp_scale")
outcome <- c("any_black_uof","any_black_arrest")
result <- lapply(split(analysis, analysis$off_race), function(x) {
  sapply(primary_ind, function(y) {
    sapply(outcome, function(z) {
      feols(reformulate(z, response = y), x)
    }, simplify = FALSE)
  }, simplify = FALSE)
})
#> The variable 'any_black_uofTRUE' has been removed because of collinearity (see $collin.var).
#> The variable 'any_black_uofTRUE' has been removed because of collinearity (see $collin.var).
#> The variable 'any_black_uofTRUE' has been removed because of collinearity (see $collin.var).
#> The variable 'any_black_arrestTRUE' has been removed because of collinearity (see $collin.var).
#> The variable 'any_black_uofTRUE' has been removed because of collinearity (see $collin.var).
#> The variable 'any_black_arrestTRUE' has been removed because of collinearity (see $collin.var).
result
#> $hispanic
#> $hispanic$prop_white_scale
#> $hispanic$prop_white_scale$any_black_uof
#> OLS estimation, Dep. Var.: prop_white_scale
#> Observations: 5
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -0.780043 0.434284 -1.79616 0.14689
#> ... 1 variable was removed because of collinearity (any_black_uofTRUE)
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 0.868568
#>
#> $hispanic$prop_white_scale$any_black_arrest
#> OLS estimation, Dep. Var.: prop_white_scale
#> Observations: 5
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -1.19171 0.178578 -6.67332 0.0068613 **
#> any_black_arrestTRUE 2.05833 0.399313 5.15468 0.0141561 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 0.276652 Adj. R2: 0.864731
#>
#>
#> $hispanic$prop_hisp_scale
#> $hispanic$prop_hisp_scale$any_black_uof
#> OLS estimation, Dep. Var.: prop_hisp_scale
#> Observations: 5
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 1.65103 0.576435 2.8642 0.045735 *
#> ... 1 variable was removed because of collinearity (any_black_uofTRUE)
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 1.15287
#>
#> $hispanic$prop_hisp_scale$any_black_arrest
#> OLS estimation, Dep. Var.: prop_hisp_scale
#> Observations: 5
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 2.15063 0.371203 5.79366 0.010230 *
#> any_black_arrestTRUE -2.49801 0.830036 -3.00952 0.057234 .
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 0.575066 Adj. R2: 0.668248
#>
#>
#>
#> $white
#> $white$prop_white_scale
#> $white$prop_white_scale$any_black_uof
#> OLS estimation, Dep. Var.: prop_white_scale
#> Observations: 6
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.608978 0.217505 2.79984 0.038001 *
#> ... 1 variable was removed because of collinearity (any_black_uofTRUE)
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 0.486355
#>
#> $white$prop_white_scale$any_black_arrest
#> OLS estimation, Dep. Var.: prop_white_scale
#> Observations: 6
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.608978 0.217505 2.79984 0.038001 *
#> ... 1 variable was removed because of collinearity (any_black_arrestTRUE)
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 0.486355
#>
#>
#> $white$prop_hisp_scale
#> $white$prop_hisp_scale$any_black_uof
#> OLS estimation, Dep. Var.: prop_hisp_scale
#> Observations: 6
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.016825 0.269335 0.062468 0.95261
#> ... 1 variable was removed because of collinearity (any_black_uofTRUE)
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 0.602251
#>
#> $white$prop_hisp_scale$any_black_arrest
#> OLS estimation, Dep. Var.: prop_hisp_scale
#> Observations: 6
#> Standard-errors: IID
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.016825 0.269335 0.062468 0.95261
#> ... 1 variable was removed because of collinearity (any_black_arrestTRUE)
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> RMSE: 0.602251
Created on 2022-10-19 by the reprex package (v2.0.1)
Here is an option
library(dplyr)
library(purrr)
library(fixest)
library(tidyr)
fmla_dat <- crossing(outcome, primary_ind) %>%
  transmute(fmla = map2(primary_ind, outcome, ~ reformulate(.x, response = .y)))

analysis %>%
  group_by(off_race) %>%
  summarise(out = map(fmla_dat$fmla,
                      ~ possibly(feols, otherwise = NA)(.x, data = cur_data())),
            .groups = 'drop')
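
As a side note, fixest also ships built-in multiple-estimation syntax that can replace the nested loops entirely: multiple left-hand sides via c(), stepwise right-hand sides via sw(), and subgroup estimation via the split argument. A minimal sketch on the example data, keeping the formula direction the loops above actually ran (scale variables on the left-hand side); this is an assumption about the intended setup, not part of the original answers:
# one estimation per LHS x RHS x subgroup combination;
# returns a fixest_multi object that can be indexed like a list
res <- feols(c(prop_white_scale, prop_hisp_scale) ~ sw(any_black_uof, any_black_arrest),
             data = analysis, split = ~ off_race)
etable(res)  # prints all the fits side by side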

msmFit: Fitting Markov Switching Models - Results differ almost every time

I am very new here and am writing my first post, so I hope you will bear with me. I am currently using the msmFit(object, k, sw, p, data, family, control) command in RStudio to set up a Markov regime-switching model.
I am not an econometrician, but the results of this command should be at least approximately similar across runs, even though the fitting involves a stochastic process. However, in my case both the transition probabilities and the coefficients change drastically when I run the command over and over again. Does anyone have any idea why this is the case? My command looks like this: msmFit(object, k, sw).
Do I need to change my command?
Thank you very much in advance!
installed.packages("reprex")
library(mSwM)
library(reprex)
stack_data <- read.csv("C:/Users/jm/Desktop/stackoverflow/stack_test_data.txt", sep=";")
head(stack_data)
#> Time ir_us r_us p_us cpi_us ip_us m0_us m2_us
#> 1 01.01.1990 0.02594126 9.040349 26.48077 64.36143 63.02983 36.57493 47.40648
#> 2 01.02.1990 0.02807363 8.487518 24.86121 64.61383 63.60197 36.65252 47.59210
#> 3 01.03.1990 0.03041778 8.444839 24.66913 64.91671 63.90794 36.94754 47.75527
#> 4 01.04.1990 0.03297896 8.513526 24.56466 65.06815 63.79125 37.41003 47.92743
#> 5 01.05.1990 0.03573270 8.218772 23.71389 65.16911 63.95692 37.56997 47.91246
#> 6 01.06.1990 0.03867350 8.494520 24.44051 65.57294 64.17380 37.85508 48.10856
#> m1_us m3_us gdp_us ei_us ui_us r_us_silver r_us_platinum
#> 1 57.87674 47.40648 61.98087 89.79908 72.73218 58.69324 49.62649
#> 2 58.07320 47.59210 62.01272 89.34765 73.38249 58.25028 50.67231
#> 3 58.32060 47.75527 62.07644 89.03414 73.98315 56.36766 50.46813
#> 4 58.65532 47.92743 62.17202 88.93136 74.24094 56.47841 47.44024
#> 5 58.51706 47.91246 62.25323 88.98279 74.31311 55.92470 47.70916
#> 6 58.85178 48.10856 62.32005 89.06012 74.70761 55.26024 48.28187
#> r_us_palladium r_us_gold r_us_stocks
#> 1 45.98305 78.42055 19.60036
#> 2 46.05085 79.06755 18.76965
#> 3 44.64407 75.98478 19.08397
#> 4 43.55932 71.26546 19.21868
#> 5 40.08475 70.12369 19.69017
#> 6 39.83051 67.50714 20.38617
linear_model <-
lm(
ir_us ~ r_us + p_us + cpi_us + ip_us + m3_us + gdp_us + ei_us +
ui_us + r_us_silver + r_us_palladium+r_us_gold+ r_us_stocks,
data = stack_data
)
summary(linear_model)
#>
#> Call:
#> lm(formula = ir_us ~ r_us + p_us + cpi_us + ip_us + m3_us + gdp_us +
#> ei_us + ui_us + r_us_silver + r_us_palladium + r_us_gold +
#> r_us_stocks, data = stack_data)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -132.040 -22.572 3.879 23.025 108.042
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -1.296e+03 3.218e+02 -4.028 6.82e-05 ***
#> r_us 2.982e-01 3.826e-01 0.779 0.43620
#> p_us -1.363e+00 4.978e-01 -2.739 0.00647 **
#> cpi_us 6.207e+00 3.606e+00 1.722 0.08598 .
#> ip_us -1.537e+01 1.021e+00 -15.057 < 2e-16 ***
#> m3_us -1.773e+00 4.059e-01 -4.367 1.64e-05 ***
#> gdp_us 7.302e+00 1.757e+00 4.155 4.04e-05 ***
#> ei_us 1.008e+01 3.537e+00 2.850 0.00462 **
#> ui_us 6.478e+00 3.229e+00 2.006 0.04559 *
#> r_us_silver -8.770e-01 1.094e-01 -8.019 1.41e-14 ***
#> r_us_palladium 4.054e-01 4.138e-02 9.799 < 2e-16 ***
#> r_us_gold 1.929e+00 1.999e-01 9.650 < 2e-16 ***
#> r_us_stocks 9.762e-01 1.867e-01 5.230 2.84e-07 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 39.52 on 371 degrees of freedom
#> Multiple R-squared: 0.9907, Adjusted R-squared: 0.9904
#> F-statistic: 3304 on 12 and 371 DF, p-value: < 2.2e-16
markov_stack <- msmFit(linear_model, k = 2, sw = rep(TRUE, 14))
summary(markov_stack)
#>Markov Switching Model
#>Call: msmFit(object = linear_model, k = 2, sw = rep(TRUE, 14))
#> AIC BIC logLik
#> 3100.603 3358.036 -1524.301
#>Coefficients:
#>Regime 1
#>---------
#> Estimate Std. Error t value Pr(>|t|)
#>(Intercept)(S) -1781.0160 193.3639 -9.2107 < 2.2e-16 ***
#>r_us(S) 7.0689 0.6361 11.1129 < 2.2e-16 ***
#>p_us(S) -9.3406 0.7354 -12.7014 < 2.2e-16 ***
#>cpi_us(S) 4.2407 2.0484 2.0702 0.038434 *
#>ip_us(S) -12.8299 1.0475 -12.2481 < 2.2e-16 ***
#>m3_us(S) -3.9402 0.3883 -10.1473 < 2.2e-16 ***
#>gdp_us(S) -6.7454 2.0428 -3.3020 0.000960 ***
#>ei_us(S) 22.5973 1.3841 16.3263 < 2.2e-16 ***
#>ui_us(S) 16.4904 1.2427 13.2698 < 2.2e-16 ***
#>r_us_silver(S) -0.4286 0.0806 -5.3176 1.051e-07 ***
#>r_us_palladium(S) 0.4086 0.0324 12.6111 < 2.2e-16 ***
#>r_us_gold(S) 0.9030 0.1607 5.6192 1.918e-08 ***
#>r_us_stocks(S) 0.5458 0.1784 3.0594 0.002218 **
#>---
#>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>Residual standard error: 21.1034
#>Multiple R-squared: 0.9958
#>Standardized Residuals:
#> Min Q1 Med Q3 Max
#>-4.936796e+01 -4.499687e-01 -9.253977e-05 -4.761520e-22 5.441522e+01
#>Regime 2
#>---------
#> Estimate Std. Error t value Pr(>|t|)
#>(Intercept)(S) 353.0536 36.0750 9.7867 < 2.2e-16 ***
#>r_us(S) -1.5104 0.2347 -6.4354 1.231e-10 ***
#>p_us(S) 1.2404 0.2016 6.1528 7.613e-10 ***
#>cpi_us(S) 7.7954 0.5055 15.4212 < 2.2e-16 ***
#>ip_us(S) 2.5948 1.1376 2.2809 0.022554 *
#>m3_us(S) 5.2752 0.3180 16.5887 < 2.2e-16 ***
#>gdp_us(S) -9.1814 2.1197 -4.3315 1.481e-05 ***
#>ei_us(S) -5.2377 0.4792 -10.9301 < 2.2e-16 ***
#>ui_us(S) -5.1023 0.3819 -13.3603 < 2.2e-16 ***
#>r_us_silver(S) -0.5832 0.0845 -6.9018 5.135e-12 ***
#>r_us_palladium(S) 0.1364 0.0181 7.5359 4.841e-14 ***
#>r_us_gold(S) 1.8174 0.1417 12.8257 < 2.2e-16 ***
#>r_us_stocks(S) 0.3491 0.1063 3.2841 0.001023 **
#>---
#>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>Residual standard error: 8.797613
#>Multiple R-squared: 0.9994
#>Standardized Residuals:
#> Min Q1 Med Q3 Max
#>-3.137047e+01 -3.511491e+00 1.954733e-28 3.090820e+00 3.767483e+01
#>Transition probabilities:
#> Regime 1 Regime 2
#>Regime 1 0.96021599 0.01554482
#>Regime 2 0.03978401 0.98445518
Data is available via https://syncandshare.lrz.de/getlink/fiGPrgkau1NdH8YKDfe9gBK9/
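
One thing worth trying, under the assumption that the run-to-run variation comes from random starting values in the EM algorithm behind msmFit(): fix the RNG seed immediately before the call and check whether the fit becomes reproducible. If it does, the model likely has several local optima and the differences are an initialisation artefact rather than a coding error.
# assumption: msmFit draws random starting values, so a fixed seed
# should reproduce the same transition probabilities and coefficients
set.seed(123)
markov_stack <- msmFit(linear_model, k = 2, sw = rep(TRUE, 14))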

Small sample (20-25 observations) - Robust standard errors (Newey-West) do not change coefficients/standard errors. Is this normal?

I am running a simple regression (OLS)
> lm_1 <- lm(Dependent_variable_1 ~ Independent_variable_1, data = data_1)
> summary(lm_1)
Call:
lm(formula = Dependent_variable_1 ~ Independent_variable_1,
data = data_1)
Residuals:
Min 1Q Median 3Q Max
-143187 -34084 -4990 37524 136293
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 330853 13016 25.418 < 2e-16 ***
`GDP YoY% - Base` 3164631 689599 4.589 0.000118 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 66160 on 24 degrees of freedom
(4 observations deleted due to missingness)
Multiple R-squared: 0.4674, Adjusted R-squared: 0.4452
F-statistic: 21.06 on 1 and 24 DF, p-value: 0.0001181
The autocorrelation and heteroskedasticity tests follow:
> dwtest(lm_1,alternative="two.sided")
Durbin-Watson test
data: lm_1
DW = 0.93914, p-value = 0.001591
alternative hypothesis: true autocorrelation is not 0
> bptest(lm_1)
studentized Breusch-Pagan test
data: lm_1
BP = 9.261, df = 1, p-value = 0.002341
then I run a robust regression for autocorrelation and heteroskedasticity (HAC - Newey-West):
> coeftest(lm_1, vocv=NeweyWest(lm_1,lag=2, prewhite=FALSE))
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 330853 13016 25.4185 < 2.2e-16 ***
Independent_variable_1 3164631 689599 4.5891 0.0001181 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
and I get the same results for coefficients / standard errors.
Is this normal? Is this due to the small sample size?
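
One detail worth checking before blaming the sample size: coeftest()'s covariance argument is spelled vcov. (with a trailing dot), while the call above passes vocv=. A misspelled argument is silently swallowed by coeftest's ... and ignored, so the default OLS covariance is used and the output matches summary(lm_1) exactly. A corrected call would look like this:
# with the argument name spelled correctly, the Newey-West
# covariance matrix is actually used
coeftest(lm_1, vcov. = NeweyWest(lm_1, lag = 2, prewhite = FALSE))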

Separate output when results='hold' in knitr

Is there a way to separate or section the output of a {r results='hold'} block in knitr without manually inserting a print('--------') command (or the like) in between? I want to avoid such extra print lines because they make the code much harder to read.
For example, you need a quite trained eye to see where the output from one statement stops and the next begins in the output from this:
```{r, results='hold'}
summary(lm(Sepal.Length ~ Species, iris))
#print('-------------') # Not a solution
summary(aov(Sepal.Length ~ Species, iris))
```
Output:
Call:
lm(formula = Sepal.Length ~ Species, data = iris)
Residuals:
Min 1Q Median 3Q Max
-1.6880 -0.3285 -0.0060 0.3120 1.3120
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.0060 0.0728 68.762 < 2e-16 ***
Speciesversicolor 0.9300 0.1030 9.033 8.77e-16 ***
Speciesvirginica 1.5820 0.1030 15.366 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5148 on 147 degrees of freedom
Multiple R-squared: 0.6187, Adjusted R-squared: 0.6135
F-statistic: 119.3 on 2 and 147 DF, p-value: < 2.2e-16
Df Sum Sq Mean Sq F value Pr(>F)
Species 2 63.21 31.606 119.3 <2e-16 ***
Residuals 147 38.96 0.265
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Maybe some knitr option or a more general R printing option?

Extract property from function

Using the code below I summarize a linear model:
x = c(1,2,3)
y = c(1,2,3)
m = lm(y ~ x)
summary(m)
This prints :
Call:
lm(formula = y ~ x)
Residuals:
1 2 3
0 0 0
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0 0 NA NA
x 1 0 Inf <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0 on 1 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: Inf on 1 and 1 DF, p-value: < 2.2e-16
Warning message:
In summary.lm(m) : essentially perfect fit: summary may be unreliable
How can I create a new summary function which returns just the 'Coefficients' part:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.874016 0.160143 -11.70 <2e-16 ***
waiting 0.075628 0.002219 34.09 <2e-16 ***
Here is my code:
tosum2 <- summary(m)
summary.myclass <- function(x) {
  return(x$Coefficients)
}
class(tosum2) <- c('myclass', 'summary')
summary(tosum2)
but NULL is returned.
Update:
How can I check the available methods for a summary object (coef is an example of a method available on a summary)? methods(class = "summary") returns nothing.
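
For reference, two observations that likely resolve both parts, based on how summary.lm objects are structured: the component is named coefficients in lower case (R is case-sensitive, so x$Coefficients is NULL), and the object's class is "summary.lm" rather than "summary", which is why methods(class = "summary") comes back empty. A short sketch:
s <- summary(m)
s$coefficients                 # the coefficient matrix (note the lower-case name)
coef(s)                        # equivalent extractor
methods(class = "summary.lm")  # lists the methods available for this class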
