msmFit: Fitting Markov Switching Models - Results differ almost every time - r

I am very new here and this is my first post, so I hope you will bear with me. I am using the msmFit(object, k, sw, p, data, family, control) command from the MSwM package in RStudio to fit a Markov regime-switching model.
I am not an econometrician, but I would expect the results of this command to be at least approximately similar across runs, even though the estimation is stochastic. In my case, however, both the transition probabilities and the coefficients change drastically every time I rerun the command. Does anyone have an idea why this happens? My call looks like this: msmFit(object, k, sw).
Do I need to change my command?
Thank you very much in advance!
install.packages("reprex")
library(MSwM)
library(reprex)
stack_data <- read.csv("C:/Users/jm/Desktop/stackoverflow/stack_test_data.txt", sep=";")
head(stack_data)
#> Time ir_us r_us p_us cpi_us ip_us m0_us m2_us
#> 1 01.01.1990 0.02594126 9.040349 26.48077 64.36143 63.02983 36.57493 47.40648
#> 2 01.02.1990 0.02807363 8.487518 24.86121 64.61383 63.60197 36.65252 47.59210
#> 3 01.03.1990 0.03041778 8.444839 24.66913 64.91671 63.90794 36.94754 47.75527
#> 4 01.04.1990 0.03297896 8.513526 24.56466 65.06815 63.79125 37.41003 47.92743
#> 5 01.05.1990 0.03573270 8.218772 23.71389 65.16911 63.95692 37.56997 47.91246
#> 6 01.06.1990 0.03867350 8.494520 24.44051 65.57294 64.17380 37.85508 48.10856
#> m1_us m3_us gdp_us ei_us ui_us r_us_silver r_us_platinum
#> 1 57.87674 47.40648 61.98087 89.79908 72.73218 58.69324 49.62649
#> 2 58.07320 47.59210 62.01272 89.34765 73.38249 58.25028 50.67231
#> 3 58.32060 47.75527 62.07644 89.03414 73.98315 56.36766 50.46813
#> 4 58.65532 47.92743 62.17202 88.93136 74.24094 56.47841 47.44024
#> 5 58.51706 47.91246 62.25323 88.98279 74.31311 55.92470 47.70916
#> 6 58.85178 48.10856 62.32005 89.06012 74.70761 55.26024 48.28187
#> r_us_palladium r_us_gold r_us_stocks
#> 1 45.98305 78.42055 19.60036
#> 2 46.05085 79.06755 18.76965
#> 3 44.64407 75.98478 19.08397
#> 4 43.55932 71.26546 19.21868
#> 5 40.08475 70.12369 19.69017
#> 6 39.83051 67.50714 20.38617
linear_model <-
  lm(
    ir_us ~ r_us + p_us + cpi_us + ip_us + m3_us + gdp_us + ei_us +
      ui_us + r_us_silver + r_us_palladium + r_us_gold + r_us_stocks,
    data = stack_data
  )
summary(linear_model)
#>
#> Call:
#> lm(formula = ir_us ~ r_us + p_us + cpi_us + ip_us + m3_us + gdp_us +
#> ei_us + ui_us + r_us_silver + r_us_palladium + r_us_gold +
#> r_us_stocks, data = stack_data)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -132.040 -22.572 3.879 23.025 108.042
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -1.296e+03 3.218e+02 -4.028 6.82e-05 ***
#> r_us 2.982e-01 3.826e-01 0.779 0.43620
#> p_us -1.363e+00 4.978e-01 -2.739 0.00647 **
#> cpi_us 6.207e+00 3.606e+00 1.722 0.08598 .
#> ip_us -1.537e+01 1.021e+00 -15.057 < 2e-16 ***
#> m3_us -1.773e+00 4.059e-01 -4.367 1.64e-05 ***
#> gdp_us 7.302e+00 1.757e+00 4.155 4.04e-05 ***
#> ei_us 1.008e+01 3.537e+00 2.850 0.00462 **
#> ui_us 6.478e+00 3.229e+00 2.006 0.04559 *
#> r_us_silver -8.770e-01 1.094e-01 -8.019 1.41e-14 ***
#> r_us_palladium 4.054e-01 4.138e-02 9.799 < 2e-16 ***
#> r_us_gold 1.929e+00 1.999e-01 9.650 < 2e-16 ***
#> r_us_stocks 9.762e-01 1.867e-01 5.230 2.84e-07 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 39.52 on 371 degrees of freedom
#> Multiple R-squared: 0.9907, Adjusted R-squared: 0.9904
#> F-statistic: 3304 on 12 and 371 DF, p-value: < 2.2e-16
markov_stack <- msmFit(linear_model, k = 2, sw = rep(TRUE, 14))
summary(markov_stack)
#>Markov Switching Model
#>Call: msmFit(object = linear_model, k = 2, sw = rep(TRUE, 14))
#> AIC BIC logLik
#> 3100.603 3358.036 -1524.301
#>Coefficients:
#>Regime 1
#>---------
#> Estimate Std. Error t value Pr(>|t|)
#>(Intercept)(S) -1781.0160 193.3639 -9.2107 < 2.2e-16 ***
#>r_us(S) 7.0689 0.6361 11.1129 < 2.2e-16 ***
#>p_us(S) -9.3406 0.7354 -12.7014 < 2.2e-16 ***
#>cpi_us(S) 4.2407 2.0484 2.0702 0.038434 *
#>ip_us(S) -12.8299 1.0475 -12.2481 < 2.2e-16 ***
#>m3_us(S) -3.9402 0.3883 -10.1473 < 2.2e-16 ***
#>gdp_us(S) -6.7454 2.0428 -3.3020 0.000960 ***
#>ei_us(S) 22.5973 1.3841 16.3263 < 2.2e-16 ***
#>ui_us(S) 16.4904 1.2427 13.2698 < 2.2e-16 ***
#>r_us_silver(S) -0.4286 0.0806 -5.3176 1.051e-07 ***
#>r_us_palladium(S) 0.4086 0.0324 12.6111 < 2.2e-16 ***
#>r_us_gold(S) 0.9030 0.1607 5.6192 1.918e-08 ***
#>r_us_stocks(S) 0.5458 0.1784 3.0594 0.002218 **
#>---
#>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>Residual standard error: 21.1034
#>Multiple R-squared: 0.9958
#>Standardized Residuals:
#> Min Q1 Med Q3 Max
#>-4.936796e+01 -4.499687e-01 -9.253977e-05 -4.761520e-22 5.441522e+01
#>Regime 2
#>---------
#> Estimate Std. Error t value Pr(>|t|)
#>(Intercept)(S) 353.0536 36.0750 9.7867 < 2.2e-16 ***
#>r_us(S) -1.5104 0.2347 -6.4354 1.231e-10 ***
#>p_us(S) 1.2404 0.2016 6.1528 7.613e-10 ***
#>cpi_us(S) 7.7954 0.5055 15.4212 < 2.2e-16 ***
#>ip_us(S) 2.5948 1.1376 2.2809 0.022554 *
#>m3_us(S) 5.2752 0.3180 16.5887 < 2.2e-16 ***
#>gdp_us(S) -9.1814 2.1197 -4.3315 1.481e-05 ***
#>ei_us(S) -5.2377 0.4792 -10.9301 < 2.2e-16 ***
#>ui_us(S) -5.1023 0.3819 -13.3603 < 2.2e-16 ***
#>r_us_silver(S) -0.5832 0.0845 -6.9018 5.135e-12 ***
#>r_us_palladium(S) 0.1364 0.0181 7.5359 4.841e-14 ***
#>r_us_gold(S) 1.8174 0.1417 12.8257 < 2.2e-16 ***
#>r_us_stocks(S) 0.3491 0.1063 3.2841 0.001023 **
#>---
#>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>Residual standard error: 8.797613
#>Multiple R-squared: 0.9994
#>Standardized Residuals:
#> Min Q1 Med Q3 Max
#>-3.137047e+01 -3.511491e+00 1.954733e-28 3.090820e+00 3.767483e+01
#>Transition probabilities:
#> Regime 1 Regime 2
#>Regime 1 0.96021599 0.01554482
#>Regime 2 0.03978401 0.98445518
Data is available via https://syncandshare.lrz.de/getlink/fiGPrgkau1NdH8YKDfe9gBK9/
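For what it's worth, the run-to-run variation most likely comes from msmFit's EM estimation starting from random initial values, so each call can converge to a different local optimum. If that is the cause, fixing the RNG seed immediately before each call should make the results reproducible. A minimal sketch on simulated data (the two-regime series, coefficients, and seed values here are illustrative assumptions, not your data):

```r
# Sketch: if msmFit's variability comes from random EM starting values,
# setting the seed right before the call should give reproducible fits.
library(MSwM)

set.seed(123)                         # simulate a simple two-regime series
n     <- 300
state <- rep(c(0, 1), each = n / 2)
x     <- rnorm(n)
y     <- ifelse(state == 0, 1 + 2 * x, -1 - 2 * x) + rnorm(n, sd = 0.5)
base_fit <- lm(y ~ x)

set.seed(42)                          # same seed before each call ...
fit1 <- msmFit(base_fit, k = 2, sw = rep(TRUE, 3))
set.seed(42)
fit2 <- msmFit(base_fit, k = 2, sw = rep(TRUE, 3))
all.equal(fit1@Coef, fit2@Coef)       # ... should give identical estimates
```

If the results still jump between very different coefficient sets across seeds, the likelihood probably has several local maxima, and it is worth comparing the log-likelihoods reported by summary() across runs rather than trusting a single fit.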

Related

How to automatically insert a default index in Prais-Winsten estimation?

I got an error message saying
Error in prais_winsten(agvprc.lm1, data = data) :
argument "index" is missing, with no default
How can I avoid inputting all the features?
agvprc.lm1=lm(log(avgprc) ~ mon+tues+wed+thurs+t+wave2+wave3)
summary(agvprc.lm1)
agvprc.lm.pw = prais_winsten(agvprc.lm1, data=data)
summary(agvprc.lm.pw)
I'm not sure I've understood the question correctly, but to avoid the "Error in prais_winsten(agvprc.lm1, data = data) : argument "index" is missing, with no default" error, you need to provide an index to the function: a character vector naming the "ID" and "time" variables. Using the built-in mtcars dataset as an example, with "cyl" as "time":
library(tidyverse)
#install.packages("prais")
library(prais)
#> Loading required package: sandwich
#> Loading required package: pcse
#>
#> Attaching package: 'pcse'
#> The following object is masked from 'package:sandwich':
#>
#> vcovPC
ggplot(mtcars, aes(x = cyl, y = mpg, group = cyl)) +
geom_boxplot() +
geom_jitter(aes(color = hp), width = 0.2)
agvprc.lm1 <- lm(log(mpg) ~ cyl + hp, data = mtcars)
summary(agvprc.lm1)
#>
#> Call:
#> lm(formula = log(mpg) ~ cyl + hp, data = mtcars)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -0.35699 -0.09882 0.01111 0.11948 0.24118
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.7829495 0.1062183 35.615 < 2e-16 ***
#> cyl -0.1072513 0.0279213 -3.841 0.000615 ***
#> hp -0.0011031 0.0007273 -1.517 0.140147
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.1538 on 29 degrees of freedom
#> Multiple R-squared: 0.7503, Adjusted R-squared: 0.7331
#> F-statistic: 43.57 on 2 and 29 DF, p-value: 1.83e-09
agvprc.lm.pw <- prais_winsten(formula = agvprc.lm1,
data = mtcars,
index = c("hp", "cyl"))
#> Iteration 0: rho = 0
#> Iteration 1: rho = 0.6985
#> Iteration 2: rho = 0.7309
#> Iteration 3: rho = 0.7285
#> Iteration 4: rho = 0.7287
#> Iteration 5: rho = 0.7287
#> Iteration 6: rho = 0.7287
#> Iteration 7: rho = 0.7287
summary(agvprc.lm.pw)
#>
#> Call:
#> prais_winsten(formula = agvprc.lm1, data = mtcars, index = c("hp",
#> "cyl"))
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -0.33844 -0.08166 0.03109 0.13612 0.25811
#>
#> AR(1) coefficient rho after 7 iterations: 0.7287
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.7643405 0.1116876 33.704 <2e-16 ***
#> cyl -0.1061198 0.0298161 -3.559 0.0013 **
#> hp -0.0011470 0.0007706 -1.489 0.1474
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.1077 on 29 degrees of freedom
#> Multiple R-squared: 0.973, Adjusted R-squared: 0.9712
#> F-statistic: 523.3 on 2 and 29 DF, p-value: < 2.2e-16
#>
#> Durbin-Watson statistic (original): 0.1278
#> Durbin-Watson statistic (transformed): 0.4019
Created on 2022-02-28 by the reprex package (v2.0.1)
# To present the index without having to write out all of the variables
# perhaps you could use:
agvprc.lm.pw <- prais_winsten(formula = agvprc.lm1,
data = mtcars,
index = names(agvprc.lm1$coefficients)[3:2])
summary(agvprc.lm.pw)
#>
#> Call:
#> prais_winsten(formula = agvprc.lm1, data = mtcars, index = names(agvprc.lm1$coefficients)[3:2])
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -0.33844 -0.08166 0.03109 0.13612 0.25811
#>
#> AR(1) coefficient rho after 7 iterations: 0.7287
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.7643405 0.1116876 33.704 <2e-16 ***
#> cyl -0.1061198 0.0298161 -3.559 0.0013 **
#> hp -0.0011470 0.0007706 -1.489 0.1474
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Residual standard error: 0.1077 on 29 degrees of freedom
#> Multiple R-squared: 0.973, Adjusted R-squared: 0.9712
#> F-statistic: 523.3 on 2 and 29 DF, p-value: < 2.2e-16
#>
#> Durbin-Watson statistic (original): 0.1278
#> Durbin-Watson statistic (transformed): 0.4019
NB: there are a number of assumptions here that may not apply to your actual data. With more information, such as a minimal reproducible example, you will likely get a better and more informed answer on Stack Overflow.
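As a further option (an assumption on my part, not tested against your real data), the variable names can be pulled straight from the fitted model's formula with all.vars(), which avoids indexing into the coefficient names:

```r
library(prais)

# Same mtcars stand-in as above; all.vars() extracts every variable
# appearing in the model formula, response first.
agvprc.lm1 <- lm(log(mpg) ~ cyl + hp, data = mtcars)
vars <- all.vars(formula(agvprc.lm1))        # "mpg" "cyl" "hp"
agvprc.lm.pw <- prais_winsten(agvprc.lm1,
                              data  = mtcars,
                              index = vars[c(3, 2)])  # "hp" as ID, "cyl" as time
```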

Incorrect number of dimensions in forecasting regression model

I am fitting a regression model and forecasting from it.
I have the following code:
y <- M3[[1909]]$x
data_ts <- window(y, start=1987, end = 1991-.1)
fit <- tslm(data_ts ~ trend + season)
summary(fit)
Everything works up to this point, but when forecasting,
plot(forecast(fit, h=18, level=c(80,90,95,99)))
It gives the following error:
Error in `[.default`(X, , piv, drop = FALSE) :
incorrect number of dimensions
Appreciate your help.
This works for me using the current CRAN version (8.15) of the forecast package:
library(forecast)
library(Mcomp)
y <- M3[[1909]]$x
data_ts <- window(y, start=1987, end = 1991-.1)
fit <- tslm(data_ts ~ trend + season)
summary(fit)
#>
#> Call:
#> tslm(formula = data_ts ~ trend + season)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -204.81 -73.66 -11.44 69.99 368.96
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 4438.403 65.006 68.277 < 2e-16 ***
#> trend 2.402 1.323 1.815 0.07828 .
#> season2 43.298 84.788 0.511 0.61289
#> season3 598.145 84.819 7.052 3.84e-08 ***
#> season4 499.993 84.870 5.891 1.19e-06 ***
#> season5 673.940 84.942 7.934 3.05e-09 ***
#> season6 604.988 85.035 7.115 3.20e-08 ***
#> season7 571.785 85.148 6.715 1.03e-07 ***
#> season8 695.533 85.282 8.156 1.64e-09 ***
#> season9 176.930 85.436 2.071 0.04603 *
#> season10 656.028 85.610 7.663 6.58e-09 ***
#> season11 -260.875 85.804 -3.040 0.00453 **
#> season12 -887.062 91.809 -9.662 2.79e-11 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 119.9 on 34 degrees of freedom
#> Multiple R-squared: 0.949, Adjusted R-squared: 0.931
#> F-statistic: 52.74 on 12 and 34 DF, p-value: < 2.2e-16
plot(forecast(fit, h=18, level=c(80,90,95,99)))
Created on 2022-01-02 by the reprex package (v2.0.1)
Perhaps you're loading some other package that masks forecast().
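If updating doesn't help, one way to rule out masking is to call the function with an explicit namespace. A sketch with simulated monthly data standing in for the M3 series (the series values here are made up):

```r
library(forecast)

# Simulated monthly series with a trend-free seasonal pattern,
# standing in for M3[[1909]]$x
set.seed(1)
y   <- ts(rnorm(48) + rep(c(0, 0, 6, 5, 7, 6, 6, 7, 2, 7, -3, -9), 4),
          frequency = 12)
fit <- tslm(y ~ trend + season)
# forecast:: guarantees the generic from the forecast package is used,
# even if another attached package defines its own forecast()
fc  <- forecast::forecast(fit, h = 18, level = c(80, 90, 95, 99))
plot(fc)
```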

Linear regression on dynamic groups in R

I have a data.table data_dt on which I want to run linear regression so that user can choose the number of columns in groups G1 and G2 using variable n_col. The following code works perfectly but it is slow due to extra time spent on creating matrices. To improve the performance of the code below, is there a way to remove Steps 1, 2, and 3 altogether by tweaking the formula of lm function and still get the same results?
library(timeSeries)
library(data.table)
data_dt = as.data.table(LPP2005REC[, -1])
n_col = 3 # Choose a number from 1 to 3
######### Step 1 ######### Create the dependent (response) variable
xx <- as.matrix(data_dt[, "SPI"])
######### Step 2 ######### Create Group 1 of independent variables
G1 <- as.matrix(data_dt[, .SD, .SDcols = c(1:n_col + 2)])
######### Step 3 ######### Create Group 2 of independent variables
G2 <- as.matrix(data_dt[, .SD, .SDcols = c(1:n_col + 2 + n_col)])
lm(xx ~ G1 + G2)
Results -
summary(lm(xx ~ G1 + G2))
Call:
lm(formula = xx ~ G1 + G2)
Residuals:
Min 1Q Median 3Q Max
-3.763e-07 -4.130e-09 3.000e-09 9.840e-09 4.401e-07
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -4.931e-09 3.038e-09 -1.623e+00 0.1054
G1LMI -5.000e-01 4.083e-06 -1.225e+05 <2e-16 ***
G1MPI -2.000e+00 4.014e-06 -4.982e+05 <2e-16 ***
G1ALT -1.500e+00 5.556e-06 -2.700e+05 <2e-16 ***
G2LPP25 3.071e-04 1.407e-04 2.184e+00 0.0296 *
G2LPP40 -5.001e+00 2.360e-04 -2.119e+04 <2e-16 ***
G2LPP60 1.000e+01 8.704e-05 1.149e+05 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 5.762e-08 on 370 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.104e+12 on 6 and 370 DF, p-value: < 2.2e-16
This is easier if you just create the formula with reformulate(), which avoids building the matrices at all:
out <- lm(reformulate(names(data_dt)[c(1:n_col + 2, 1:n_col + 2 + n_col)],
response = 'SPI'), data = data_dt)
Checking:
> summary(out)
Call:
lm(formula = reformulate(names(data_dt)[c(1:n_col + 2, 1:n_col +
2 + n_col)], response = "SPI"), data = data_dt)
Residuals:
Min 1Q Median 3Q Max
-3.763e-07 -4.130e-09 3.000e-09 9.840e-09 4.401e-07
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -4.931e-09 3.038e-09 -1.623e+00 0.1054
LMI -5.000e-01 4.083e-06 -1.225e+05 <2e-16 ***
MPI -2.000e+00 4.014e-06 -4.982e+05 <2e-16 ***
ALT -1.500e+00 5.556e-06 -2.700e+05 <2e-16 ***
LPP25 3.071e-04 1.407e-04 2.184e+00 0.0296 *
LPP40 -5.001e+00 2.360e-04 -2.119e+04 <2e-16 ***
LPP60 1.000e+01 8.704e-05 1.149e+05 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 5.762e-08 on 370 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.104e+12 on 6 and 370 DF, p-value: < 2.2e-16

How to generate data for significance testing?

I want to generate some data to do linear regression and model selection. Here is a simple example I used, but how can I generate independent variables whose p-values come out close to 0.05?
Actually, I'm not sure whether this question is well posed. Thanks for any recommendations!
a=rnorm(100,mean=5,sd=2)
b=rnorm(100)
c=rnorm(100,mean=3,sd=1)
d=rnorm(100,mean=40,sd=5)
e=rnorm(100,mean=80,sd=7)
g=rnorm(100,mean=7.9,sd=0.5)
f=sample(c(0,1),100,prob=c(0.6,0.4),replace=T)
yy=2*a+0.1*b+3*c-0.6*d+0.2*e+0.9*f-2*g+rnorm(100,mean=0,sd=1)
ll=lm(yy~a+b+c+d+e+factor(f)+g)
summary(ll)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) -3.05618 2.67722 -1.142 0.25660
# a 1.98623 0.05521 35.974 < 2e-16 ***
# b 0.05994 0.10657 0.562 0.57520
# c 2.98780 0.10386 28.767 < 2e-16 ***
# d -0.59633 0.01915 -31.134 < 2e-16 ***
# e 0.20678 0.01644 12.577 < 2e-16 ***
# factor(f)1 0.72422 0.24321 2.978 0.00371 **
# g -1.67970 0.25617 -6.557 3.15e-09 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 1.122 on 92 degrees of freedom
# Multiple R-squared: 0.972, Adjusted R-squared: 0.9699
# F-statistic: 456 on 7 and 92 DF, p-value: < 2.2e-16
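One way to aim for a p-value near 0.05 (a sketch under the usual OLS assumptions, not a guarantee for any single draw) is to choose the true coefficient so that the expected t statistic sits at the 5% critical value: se(beta_hat) is roughly sigma / (sd(x) * sqrt(n)), so set beta to the critical t times that standard error. Individual simulations will then scatter around p = 0.05:

```r
set.seed(1)
n     <- 100
sigma <- 1
x1    <- rnorm(n)        # the predictor we want to be "borderline"
x2    <- rnorm(n)        # a clearly significant predictor
# choose beta1 so that the expected t statistic is about the 5% critical value
beta1 <- qt(0.975, df = n - 3) * sigma / (sd(x1) * sqrt(n))
y     <- 2 * x2 + beta1 * x1 + rnorm(n, sd = sigma)
fit   <- lm(y ~ x1 + x2)
summary(fit)$coefficients["x1", "Pr(>|t|)"]  # scatters around 0.05 over seeds
```

Repeating this over many seeds and averaging the p-values is a quick check that the construction is calibrated the way you want.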

Coeftest function in R--Variable not reported in output

I have a linear regression I am running in R. I am calculating clustered standard errors. I get the output of coeftest() but in some cases it doesn't report anything for a variable. I don't get an error. Does this mean the coefficient couldn't be calculated or does the coeftest not report variables that are insignificant? I can't seem to find the answer in any of the R documentation.
Here is the output from R:
lm1 <- lm(PeaceA ~ Soc_Edu + Pol_Constitution + mediation + gdp + enrollratio + infantmortality , data=qsi.surv)
coeftest(lm1, vcov = vcovHC(lm1, type = "HC1"))
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.05780946 0.20574444 -5.1414 4.973e-06 ***
Soc_Edu -1.00735592 0.11756507 -8.5685 3.088e-11 ***
mediation 0.65682159 0.06291926 10.4391 6.087e-14 ***
gdp 0.00041894 0.00010205 4.1052 0.000156 ***
enrollratio 0.00852143 0.00177600 4.7981 1.598e-05 ***
infantmortality 0.00455383 0.00079536 5.7255 6.566e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Notice that there is nothing reported for the variable Pol_Constitution.
I assume you mean the functions coeftest() from package lmtest and vcovHC() from package sandwich. In this combination, coefficients for linearly dependent columns are silently dropped from coeftest's output, so I assume your variable/column Pol_Constitution suffers from linear dependence.
Below is an example which demonstrates the behaviour with a linearly dependent column. See how the estimated coefficient for I(2 * cyl) is NA in a plain summary() and in coeftest(), but silently dropped when the latter is combined with vcovHC().
library(lmtest)
library(sandwich)
data(mtcars)
summary(mod <- lm(mpg ~ cyl + I(2*cyl), data = mtcars))
#> [...]
#> Coefficients: (1 not defined because of singularities)
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 37.8846 2.0738 18.27 < 2e-16 ***
#> cyl -2.8758 0.3224 -8.92 6.11e-10 ***
#> I(2 * cyl) NA NA NA NA
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> [...]
coeftest(mod)
#>
#> t test of coefficients:
#>
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 37.88458 2.07384 18.2678 < 2.2e-16 ***
#> cyl -2.87579 0.32241 -8.9197 6.113e-10 ***
#> I(2 * cyl) NA NA NA NA
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
coeftest(mod, vcov. = vcovHC(mod))
#>
#> t test of coefficients:
#>
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 37.88458 2.74154 13.8187 1.519e-14 ***
#> cyl -2.87579 0.38869 -7.3987 3.040e-08 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
