Getting summary statistics from a model using permutation values in R

I am trying to obtain the summary statistics (summary()) for the linear model below, which uses 1000 permutations of the original dataset to create 1000 random datasets (a large matrix).
random_model <- rep(NA, 1000)
for (i in 1:1000) {
  random_data <- final_data
  random_data$weighted_degree <- rowSums(node.perm_1000[i, , ], na.rm = TRUE)
  random_model[i] <- coef(lm(weighted_degree ~ age + sex + age*sex, data = random_data))
}
I am not simply trying to compare the models to get an overall p-value; I also want a t-value for each of the variables in the model fitted to the random permutations.

Try tidy() from the broom package. It returns the values you want, like this (example):
# A tibble: 2 x 5
  term             estimate std.error statistic  p.value
  <chr>               <dbl>     <dbl>     <dbl>    <dbl>
1 (Intercept)         6.53      0.479     13.6  6.47e-28
2 iris$Sepal.Width   -0.223     0.155     -1.44 1.52e- 1
In your case, store that output as one list element per loop iteration (note the list initialization and the [[i]] indexing):
library(broom)
#Data: a list, since tidy() returns a data frame per fit
random_model <- vector("list", 1000)
#Loop
for (i in 1:1000) {
  random_data <- final_data
  random_data$weighted_degree <- rowSums(node.perm_1000[i, , ], na.rm = TRUE)
  random_model[[i]] <- broom::tidy(lm(weighted_degree ~ age + sex + age*sex, data = random_data))
}
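If you then want the permutation (null) distribution of the t-values, the stored tibbles can be stacked into one data frame. A minimal sketch, assuming the loop above has already run against your own final_data and node.perm_1000:
library(dplyr)
#Stack the 1000 tidy outputs, tagging each row with its permutation index
perm_results <- bind_rows(random_model, .id = "permutation")
#Summarise the null distribution of the t-statistic per term
perm_results %>%
  group_by(term) %>%
  summarise(mean_t = mean(statistic), sd_t = sd(statistic))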

You should store the results of interest (estimated coefficients and t-values) in a list.
Here is a reproducible example using 10 replications on the mtcars dataset, which is sampled at a 50% rate for each replication.
The results of interest are retrieved using the $coefficients attribute of the summary() output on the lm object.
# The data
data(mtcars)
# Define sample size of each replication
N <- nrow(mtcars)
sample_size <- floor(N/2)
# Number of replications (model fits) and initialization of the list to store the results
set.seed(1717)
replications <- 10
random_model <- vector( "list", length=replications )
for (i in seq_along(random_model)) {
  shuffle <- sample(N, sample_size)
  mtcars_shuffle <- mtcars[shuffle, ]
  random_model[[i]] <- summary(lm(mpg ~ cyl + disp + cyl*disp, data = mtcars_shuffle))$coefficients
}
For example, the models fitted for replications 1 and 10 are:
> random_model[[1]]
Estimate Std. Error t value Pr(>|t|)
(Intercept) 48.26285335 8.219065181 5.872061 7.573836e-05
cyl -3.33999161 1.366231326 -2.444675 3.089262e-02
disp -0.12941685 0.063269362 -2.045490 6.337414e-02
cyl:disp 0.01394436 0.007877833 1.770076 1.020931e-01
> random_model[[10]]
Estimate Std. Error t value Pr(>|t|)
(Intercept) 54.27312267 7.662593317 7.082866 1.277746e-05
cyl -4.40545653 1.586392001 -2.777029 1.674235e-02
disp -0.15330770 0.047932153 -3.198431 7.654790e-03
cyl:disp 0.01792561 0.006707396 2.672514 2.031615e-02
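Since each list element is a plain coefficient matrix, one term's t-values can be pulled across all replications by row and column name, for example (a sketch reusing random_model from above):
#Distribution of the t value for 'cyl' over the 10 replications
t_cyl <- sapply(random_model, function(cf) cf["cyl", "t value"])
summary(t_cyl)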

Related

R: How to modify my code to group by then loop over all columns at once

I have a data frame with many columns. The first column contains categories such as "S1" and "S2", and the remaining columns contain 0/1 values. For example:
SYSTEM  Q1  Q2
S1       0   1
S1       1   0
S2       1   1
S2       0   0
S2       1   1
I have this R code to run a bootstrap 95% CI, with a function that obtains the statistic from the data (with indexing).
Here is my code:
m <- 1e4
n <- 5
set.seed(42)
df2 <- data.frame(SYSTEM=rep(c('S1', 'S2'), each=n/2), matrix(sample(0:1, m*n, replace=TRUE), m, n))
names(df2)[-1] <- paste0('Q', 1:n)
set.seed(0)
library(boot)
#define function to calculate fitted regression coefficients
coef_function <- function(formula, data, indices) {
  d <- data[indices,] #allows boot to select sample
  fit <- lm(formula, data=d) #fit regression model
  return(coef(fit)) #return coefficient estimates of model
}
#perform bootstrapping with 2000 replications
reps <- boot(data=df2, statistic=coef_function, R=2000, formula=Q1~Q2)
#view results of bootstrapping
reps
#calculate adjusted bootstrap percentile (BCa) intervals
boot.ci(reps, type="bca", index=1) #intercept of model
boot.ci(reps, type="bca", index=2) #disp predictor variable
The result should be:
ORDINARY NONPARAMETRIC BOOTSTRAP
Call:
boot(data = df2, statistic = coef_function, R = 2000, formula = Q1 ~
Q2)
Bootstrap Statistics :
original bias std. error
t1* 0.600 0.00082 0.074
t2* -0.073 -0.00182 0.099
> boot.ci(reps, type="bca", index=1) #intercept of model
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 2000 bootstrap replicates
CALL :
boot.ci(boot.out = reps, type = "bca", index = 1)
Intervals :
Level BCa
95% ( 0.45, 0.74 )
Calculations and Intervals on Original Scale
> boot.ci(reps, type="bca", index=2) #disp predictor variable
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 2000 bootstrap replicates
CALL :
boot.ci(boot.out = reps, type = "bca", index = 2)
Intervals :
Level BCa
95% (-0.26, 0.13 )
Calculations and Intervals on Original Scale
Here I'm only using Q1 and Q2, and I didn't group by SYSTEM. I don't know whether it's possible to do this for all groups and columns at once.
Thank you in advance.
If 'Q1' is the response variable, we may group by 'SYSTEM', then loop across the columns 'Q2' to 'Q5', create the formula from the column name (cur_column()) with 'Q1' in reformulate, and pass it on to boot:
library(boot)
library(dplyr)
out <- df2 %>%
  group_by(SYSTEM) %>%
  summarise(across(Q2:Q5,
      ~ list(boot(cur_data(), statistic = coef_function, R = 2000,
                  formula = reformulate(cur_column(), response = 'Q1')))),
    .groups = 'drop')
-output
> out
# A tibble: 2 × 5
  SYSTEM Q2     Q3     Q4     Q5
  <chr>  <list> <list> <list> <list>
1 S1     <boot> <boot> <boot> <boot>
2 S2     <boot> <boot> <boot> <boot>
If we extract the column, the output will be
> out$Q2
[[1]]
ORDINARY NONPARAMETRIC BOOTSTRAP
Call:
boot(data = cur_data(), statistic = coef_function, R = 2000,
formula = reformulate(cur_column(), response = "Q1"))
Bootstrap Statistics :
original bias std. error
t1* 0.48025529 -0.0001032709 0.01019634
t2* 0.02355538 0.0003813531 0.01412119
[[2]]
ORDINARY NONPARAMETRIC BOOTSTRAP
Call:
boot(data = cur_data(), statistic = coef_function, R = 2000,
formula = reformulate(cur_column(), response = "Q1"))
Bootstrap Statistics :
original bias std. error
t1* 0.49564873 -0.0002947112 0.009942382
t2* 0.01850984 0.0003610360 0.013914520
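Each cell holds an ordinary boot object, so boot.ci works on any of them, for example the BCa interval for the Q2 slope within S1 (a sketch reusing out from above):
#index = 2 picks the slope; index = 1 would be the intercept
boot.ci(out$Q2[[1]], type = "bca", index = 2)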

Aggregate coefficients from linear regression with interactions in R

I am trying to write a function that takes a linear regression model and a design matrix X and returns the marginal effects of the coefficients.
Marginal effects appear in the regression's summary output; however, if interactions are present in the model, we have to aggregate the coefficients.
I have prepared a small example in R:
library(tidyverse)
# Design matrix for regression with interaction ---------------------------
X <- model.matrix(hp ~ factor(gear) + disp + mpg + wt + wt:factor(gear), data = mtcars)
y <- mtcars$hp
lm(y ~ X - 1, data = mtcars) %>%
summary()
# Model Interpretation ----------------------------------------------------
# Our reference category is 'Gear3',
# meaning that X(Intercept) is the intercept for Gear3.
# If I want intercepts for Gear4 and Gear5, the coefs Gear4 and Gear5 are deviations from the reference category.
# So the intercept for Gear4 is: X(Intercept) + Xfactor(gear)4 = 166.0265 + 30.2619 = 196.2884
# The same idea holds for Gear5: X(Intercept) + Xfactor(gear)5 = 166.0265 - 58.2724 = 107.7541
# Now if we want to interpret wt as a marginal effect, again we need to take into account the interactions we created.
# Xwt is the marginal effect of wt for the reference category (Gear3).
# If we want the marginal effect for Gear4 cars we need: Xwt + Xfactor(gear)4:wt = -6.4985 - 9.8580 = -16.3565
# The same idea holds for Gear5: Xwt + Xfactor(gear)5:wt = -6.4985 + 49.6390 = 43.1405
I would like to write a function that takes a model object and a design matrix and returns a data frame with the aggregated marginal effects.
In the case given, I want the function to return:
# X(Intercept)gear3: 166.0265
# X(Intercept)gear4: 196.2884
# X(Intercept)gear5: 107.7541
#
# Xwtgear3:-6.4985
# Xwtgear4:-16.3565
# Xwtgear5:43.1405
where the marginal effects for the interactions are already precalculated/aggregated for the user.
So far I have only come up with an approach that does not generalize well enough.
Therefore, I would like to ask for ideas on how to write such a function using either dplyr or data.table.
I am not sure if you are just interested in the output or the way of doing it yourself.
Regarding the former, you can use the {interactions} package, which will give you your desired output. Regarding the latter, you could have a look at how the {interactions} package calculates this output under the hood.
library(dplyr)
library(interactions)
mtcars2 <- mtcars %>% mutate(gear = factor(gear))
fit <- lm(hp ~ wt * gear + disp + mpg, data = mtcars2)
slopes <- sim_slopes(fit, pred = wt, modx = gear, centered = "none")
#> Warning: Johnson-Neyman intervals are not available for factor moderators.
slopes$ints
#> Value of gear Est. S.E. 2.5% 97.5% t val. p
#> 1 5 107.7540 103.74667 -106.36856 321.8766 1.038626 0.30932987
#> 2 4 196.2884 100.91523 -11.99042 404.5672 1.945082 0.06357154
#> 3 3 166.0265 71.71678 18.01029 314.0426 2.315029 0.02947996
slopes$slopes
#> Value of gear Est. S.E. 2.5% 97.5% t val. p
#> 1 5 43.140490 26.69021 -11.94539 98.22637 1.6163415 0.1190910
#> 2 4 -16.356572 20.22409 -58.09705 25.38390 -0.8087667 0.4265945
#> 3 3 -6.498535 15.37599 -38.23302 25.23595 -0.4226417 0.6763194
Created on 2021-01-03 by the reprex package (v0.3.0)
If you want both results in one data.frame you could easily bind them together:
bind_rows(mutate(slopes$ints, term = "(Intercept)"),
          mutate(slopes$slopes, term = "(Interaction)")) %>%
  select(term, "Value of gear", Est., p)
#> term Value of gear Est. p
#> 1 (Intercept) 5 107.754033 0.30932987
#> 2 (Intercept) 4 196.288380 0.06357154
#> 3 (Intercept) 3 166.026451 0.02947996
#> 4 (Interaction) 5 43.140490 0.11909097
#> 5 (Interaction) 4 -16.356572 0.42659451
#> 6 (Interaction) 3 -6.498535 0.67631941
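If you prefer the "doing it yourself" route, the same aggregation can be done by hand from coef(fit), since each non-reference level just adds its deviation to the reference coefficient. A minimal sketch, hard-coded to the model above rather than a general function:
cf <- coef(fit)
#Intercepts: reference level (gear 3) plus the level deviations
intercepts <- c(gear3 = unname(cf["(Intercept)"]),
                gear4 = unname(cf["(Intercept)"] + cf["gear4"]),
                gear5 = unname(cf["(Intercept)"] + cf["gear5"]))
#wt slopes: reference slope plus the interaction deviations
wt_slopes <- c(gear3 = unname(cf["wt"]),
               gear4 = unname(cf["wt"] + cf["wt:gear4"]),
               gear5 = unname(cf["wt"] + cf["wt:gear5"]))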

R function returning a data.frame using a for loop

I would like to create a function that uses a for loop to create multiple datasets and combines them into a single dataset, which will be the output of my function.
I wrote the following code. It works when the for loop is outside a function, but not when the loop is inside one: my function only gives me back the first (i = 1) dataset.
library(broom)
library(dplyr)
# My function
validation <- function(x, y) {
  df <- NULL
  for (i in 1:ncol(x)) {
    coln <- colnames(x)[i]
    covariate <- as.vector(x[,i])
    models <- tidy(glm(y ~ covariate, data = x, family = binomial))
    df <- rbind(df, cbind(models, coln)) %>% filter(term != "(Intercept)")
    return(df)
  }
}
# Test function
validation(mtcars, mtcars$am)
term estimate std.error statistic p.value coln
covariate 0.3070282 0.1148416 2.673493 0.007506579 mpg
This function should give me the following output:
term estimate std.error statistic p.value coln
1 covariate 0.307028190 1.148416e-01 2.6734932353 0.007506579 mpg
2 covariate -0.691175096 2.536145e-01 -2.7252982408 0.006424343 cyl
3 covariate -0.014604292 5.167837e-03 -2.8259972293 0.004713367 disp
4 covariate -0.008117121 6.074337e-03 -1.3362973916 0.181452089 hp
5 covariate 5.577358500 2.062575e+00 2.7040753425 0.006849476 drat
6 covariate -4.023969940 1.436416e+00 -2.8013963535 0.005088198 wt
7 covariate -0.288189820 2.278968e-01 -1.2645629995 0.206028024 qsec
8 covariate 0.693147181 7.319250e-01 0.9470194188 0.343628884 vs
9 covariate 51.132135568 7.774641e+04 0.0006576784 0.999475249 am
10 covariate 21.006490452 3.876257e+03 0.0054192724 0.995676067 gear
11 covariate 0.073173343 2.254018e-01 0.3246350695 0.745457282 carb
If we move return(df) from inside the loop to after it, it works. The return(df) inside the loop exits the function during the first iteration, so only the first run's output is returned.
validation <- function(x, y) {
  df <- NULL
  for (i in 1:ncol(x)) {
    coln <- colnames(x)[i]
    covariate <- as.vector(x[,i])
    models <- tidy(glm(y ~ covariate, data = x, family = binomial))
    df <- rbind(df, cbind(models, coln)) %>% filter(term != "(Intercept)")
    # to understand it better, create some print statements
    print(sprintf("column index : %d", i))
    print('-----------------')
    print('df in each loop')
    print(df)
    print(sprintf("%dth loop ends", i))
  }
  df
}
-checking
validation(mtcars, mtcars$am)
# term estimate std.error statistic p.value coln
#1 covariate 0.307028190 1.148416e-01 2.6734932353 0.007506579 mpg
#2 covariate -0.691175096 2.536145e-01 -2.7252982408 0.006424343 cyl
#3 covariate -0.014604292 5.167837e-03 -2.8259972293 0.004713367 disp
#4 covariate -0.008117121 6.074337e-03 -1.3362973916 0.181452089 hp
#5 covariate 5.577358500 2.062575e+00 2.7040753425 0.006849476 drat
#6 covariate -4.023969940 1.436416e+00 -2.8013963535 0.005088198 wt
#7 covariate -0.288189820 2.278968e-01 -1.2645629995 0.206028024 qsec
#8 covariate 0.693147181 7.319250e-01 0.9470194188 0.343628884 vs
#9 covariate 51.132135568 7.774641e+04 0.0006576784 0.999475249 am
#10 covariate 21.006490452 3.876257e+03 0.0054192724 0.995676067 gear
#11 covariate 0.073173343 2.254018e-01 0.3246350695 0.745457282 carb
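For completeness, the same result can be obtained without an explicit counter by looping over the column names with lapply and binding once at the end; a sketch under the same assumptions as above:
library(broom)
library(dplyr)
validation2 <- function(x, y) {
  lapply(colnames(x), function(coln) {
    covariate <- x[[coln]]
    cbind(tidy(glm(y ~ covariate, data = x, family = binomial)), coln)
  }) %>%
    bind_rows() %>%
    filter(term != "(Intercept)")
}
validation2(mtcars, mtcars$am)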

Clustered standard errors with texreg?

I'm trying to reproduce this Stata example and move from stargazer to texreg. The data is available here.
To run the regression and get the clustered standard errors I run this code:
library(readstata13)
library(sandwich)
cluster_se <- function(model_result, data, cluster){
  model_variables <- intersect(colnames(data), c(colnames(model_result$model), cluster))
  model_rows <- as.integer(rownames(model_result$model))
  data <- data[model_rows, model_variables]
  cl <- data[[cluster]]
  M <- length(unique(cl))
  N <- nrow(data)
  K <- model_result$rank
  dfc <- (M/(M-1))*((N-1)/(N-K))
  uj <- apply(estfun(model_result), 2, function(x) tapply(x, cl, sum))
  vcovCL <- dfc*sandwich(model_result, meat=crossprod(uj)/N)
  sqrt(diag(vcovCL))
}
elemapi2 <- read.dta13(file = 'elemapi2.dta')
lm1 <- lm(formula = api00 ~ acs_k3 + acs_46 + full + enroll, data = elemapi2)
se.lm1 <- cluster_se(model_result = lm1, data = elemapi2, cluster = "dnum")
stargazer::stargazer(lm1, type = "text", style = "aer", se = list(se.lm1))
==========================================================
api00
----------------------------------------------------------
acs_k3 6.954
(6.901)
acs_46 5.966**
(2.531)
full 4.668***
(0.703)
enroll -0.106**
(0.043)
Constant -5.200
(121.786)
Observations 395
R2 0.385
Adjusted R2 0.379
Residual Std. Error 112.198 (df = 390)
F Statistic 61.006*** (df = 4; 390)
----------------------------------------------------------
Notes: ***Significant at the 1 percent level.
**Significant at the 5 percent level.
*Significant at the 10 percent level.
texreg produces this:
texreg::screenreg(lm1, override.se=list(se.lm1))
========================
Model 1
------------------------
(Intercept) -5.20
(121.79)
acs_k3 6.95
(6.90)
acs_46 5.97 ***
(2.53)
full 4.67 ***
(0.70)
enroll -0.11 ***
(0.04)
------------------------
R^2 0.38
Adj. R^2 0.38
Num. obs. 395
RMSE 112.20
========================
How can I fix the p-values?
Robust Standard Errors with texreg are easy: just pass the coeftest directly!
This has become much easier since the question was last answered: it appears you can now just pass the coeftest object with the desired variance-covariance matrix directly. Downside: you lose the goodness-of-fit statistics (such as R^2 and number of observations), but depending on your needs, this may not be a big problem.
How to include robust standard errors with texreg
> screenreg(list(reg1, coeftest(reg1,vcov = vcovHC(reg1, 'HC1'))),
custom.model.names = c('Standard Standard Errors', 'Robust Standard Errors'))
=============================================================
Standard Standard Errors Robust Standard Errors
-------------------------------------------------------------
(Intercept) -192.89 *** -192.89 *
(55.59) (75.38)
x 2.84 ** 2.84 **
(0.96) (1.04)
-------------------------------------------------------------
R^2 0.08
Adj. R^2 0.07
Num. obs. 100
RMSE 275.88
=============================================================
*** p < 0.001, ** p < 0.01, * p < 0.05
To generate this example, I created a dataframe with heteroscedasticity; see below for the full runnable sample code:
require(sandwich)
require(lmtest) # for coeftest()
require(texreg)
set.seed(1234)
df <- data.frame(x = 1:100)
df$y <- 1 + 0.5*df$x + 5*100:1*rnorm(100)
reg1 <- lm(y ~ x, data = df)
First, notice that your usage of as.integer is dangerous and likely to cause problems once you use data with non-numeric rownames. For instance, with the built-in dataset mtcars, whose rownames consist of car names, as.integer will coerce all rownames to NA and the function will not work.
To your actual question: you can provide custom p-values to texreg, which means that you need to compute the corresponding p-values. To achieve this, you could compute the variance-covariance matrix, compute the test statistics, and then compute the p-values manually, or you can just compute the variance-covariance matrix and supply it to e.g. coeftest, then extract the standard errors and p-values from there. Since I am unwilling to download any data, I use the mtcars data for the following:
library(sandwich)
library(lmtest)
library(texreg)
cluster_se <- function(model_result, data, cluster){
  model_variables <- intersect(colnames(data), c(colnames(model_result$model), cluster))
  model_rows <- rownames(model_result$model) # changed to be able to work with mtcars, not tested with other data
  data <- data[model_rows, model_variables]
  cl <- data[[cluster]]
  M <- length(unique(cl))
  N <- nrow(data)
  K <- model_result$rank
  dfc <- (M/(M-1))*((N-1)/(N-K))
  uj <- apply(estfun(model_result), 2, function(x) tapply(x, cl, sum))
  dfc*sandwich(model_result, meat=crossprod(uj)/N) # return the vcov matrix itself
}
lm1 <- lm(formula = mpg ~ cyl + disp, data = mtcars)
vcov.lm1 <- cluster_se(model_result = lm1, data = mtcars, cluster = "carb")
standard.errors <- coeftest(lm1, vcov. = vcov.lm1)[,2]
p.values <- coeftest(lm1, vcov. = vcov.lm1)[,4]
texreg::screenreg(lm1, override.se = standard.errors, override.pvalues = p.values)
And just for completeness' sake, let's do it manually:
t.stats <- abs(coefficients(lm1) / sqrt(diag(vcov.lm1)))
t.stats
(Intercept) cyl disp
38.681699 5.365107 3.745143
These are your t-statistics using the cluster-robust standard errors. The degree of freedom is stored in lm1$df.residual, and using the built in functions for the t-distribution (see e.g. ?pt), we get:
manual.p <- 2*pt(-t.stats, df=lm1$df.residual)
manual.p
(Intercept) cyl disp
1.648628e-26 9.197470e-06 7.954759e-04
Here, pt is the distribution function, and we want to compute the probability of observing a statistic at least as extreme as the one we observe. Since we are testing two-sided and it is a symmetric density, we first take the left tail probability using the negative value and then double it. This is identical to using 2*(1-pt(t.stats, df=lm1$df.residual)). Now, just to check that this yields the same result as before:
all.equal(p.values, manual.p)
[1] TRUE
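As a side note, newer versions of the sandwich package ship a ready-made clustered estimator, vcovCL, which can stand in for the hand-rolled function; its small-sample correction differs slightly, so expect minor numeric differences. A sketch, reusing lm1 from above:
vc <- vcovCL(lm1, cluster = ~ carb) #clustered vcov in one call
ct <- coeftest(lm1, vcov. = vc)
texreg::screenreg(lm1, override.se = ct[, 2], override.pvalues = ct[, 4])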

Obtain standard errors of regression coefficients for an "mlm" object returned by `lm()`

I'd like to run 10 regressions against the same regressor, then pull all the standard errors without using a loop.
depVars <- as.matrix(data[,1:10]) # multiple dependent variables
regressor <- as.matrix(data[,11]) # independent variable
allModels <- lm(depVars ~ regressor) # multiple single-variable regressions
summary(allModels)[1] # can "view" the standard errors for the 1st regression, but can't extract them
allModels is stored as an "mlm" object, which is really tough to work with. It'd be great if I could store a list of lm objects or a matrix with statistics of interest.
Again, the objective is to NOT use a loop. Here is a loop equivalent:
regressor <- as.matrix(data[,11]) # independent variable
for(i in 1:10) {
  tempObject <- lm(data[,i] ~ regressor) # single regressions
  table1Data[i,1] <- summary(tempObject)$coefficients[2,2] # assign std error
  rm(tempObject)
}
If you put your data in long format it's very easy to get a bunch of regression results using lmList from the nlme or lme4 packages. The output is a list of regression results and the summary can give you a matrix of coefficients, just like you wanted.
library(lme4)
m <- lmList( y ~ x | group, data = dat)
summary(m)$coefficients
Those coefficients are in a simple 3 dimensional array so the standard errors are at [,2,2].
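For instance, with the sleepstudy data that ships with lme4 (standing in for your data; y, x, and group above are placeholders):
library(lme4)
m <- lmList(Reaction ~ Days | Subject, data = sleepstudy)
#coefficients is a 3-D array (group x statistic x term),
#so the per-group slope standard errors sit at [, 2, 2]
se_slope <- summary(m)$coefficients[, 2, 2]
head(se_slope)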
Given an "mlm" model object model, you can use the below function written by me to get standard errors of coefficients. This is very efficient: no loop, and no access to summary.mlm().
std_mlm <- function (model) {
  Rinv <- with(model$qr, backsolve(qr, diag(rank)))
  ## unscaled standard error
  std_unscaled <- sqrt(rowSums(Rinv ^ 2)[order(model$qr$pivot)])
  ## residual standard error
  sigma <- sqrt(colSums(model$residuals ^ 2) / model$df.residual)
  ## return final standard error
  ## each column corresponds to a model
  "dimnames<-"(outer(std_unscaled, sigma), dimnames(model$coefficients))
}
A simple, reproducible example
set.seed(0)
Y <- matrix(rnorm(50 * 5), 50) ## assume there are 5 responses
X <- rnorm(50) ## covariate
fit <- lm(Y ~ X)
We all know that it is simple to extract estimated coefficients via:
fit$coefficients ## or `coef(fit)`
# [,1] [,2] [,3] [,4] [,5]
#(Intercept) -0.21013925 0.1162145 0.04470235 0.08785647 0.02146662
#X 0.04110489 -0.1954611 -0.07979964 -0.02325163 -0.17854525
Now let's apply our std_mlm:
std_mlm(fit)
# [,1] [,2] [,3] [,4] [,5]
#(Intercept) 0.1297150 0.1400600 0.1558927 0.1456127 0.1186233
#X 0.1259283 0.1359712 0.1513418 0.1413618 0.1151603
We can, of course, call summary.mlm just to check that our result is correct:
coef(summary(fit))
#Response Y1 :
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) -0.21013925 0.1297150 -1.6200072 0.1117830
#X 0.04110489 0.1259283 0.3264151 0.7455293
#
#Response Y2 :
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 0.1162145 0.1400600 0.8297485 0.4107887
#X -0.1954611 0.1359712 -1.4375183 0.1570583
#
#Response Y3 :
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 0.04470235 0.1558927 0.2867508 0.7755373
#X -0.07979964 0.1513418 -0.5272811 0.6004272
#
#Response Y4 :
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 0.08785647 0.1456127 0.6033574 0.5491116
#X -0.02325163 0.1413618 -0.1644831 0.8700415
#
#Response Y5 :
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 0.02146662 0.1186233 0.1809646 0.8571573
#X -0.17854525 0.1151603 -1.5504057 0.1276132
Yes, all correct!
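If you also need t-statistics or p-values, they follow elementwise from the two matrices (a sketch reusing fit from above):
#t-statistics: elementwise ratio of coefficients to standard errors
t_stats <- coef(fit) / std_mlm(fit)
#two-sided p-values from the t distribution
p_vals <- 2 * pt(-abs(t_stats), df = fit$df.residual)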
Here is an option:
put your data in long format, using the regressor as an id key.
do your regression against value by group of variable.
For example, using the mtcars data set:
library(reshape2)
dat.m <- melt(mtcars,id.vars='mpg') ## mpg is my regressor
library(plyr)
ddply(dat.m, .(variable), function(x) coef(lm(value ~ mpg, data = x)))
This returns one row per variable, giving the intercept and the mpg slope from regressing that column's values on mpg.
