Setting list element names based on arguments to `pmap`

I am trying to figure out if I can use the list of arguments provided to purrr::pmap() to also name the elements of the output list from this function, using purrr::set_names().
For example, here I am using pmap() to create summaries of some variables from different dataframes across grouping variables.
# setup
library(tidyverse)
library(groupedstats)
set.seed(123)
# creating the dataframes
data_1 <- tibble::as_tibble(iris)
data_2 <- tibble::as_tibble(mtcars)
data_3 <- tibble::as_tibble(airquality)
# creating a list
purrr::pmap(
  .l = list(
    data = list(data_1, data_2, data_3),
    grouping.vars = alist(Species, c(am, cyl), Month),
    measures = alist(c(Sepal.Length, Sepal.Width), wt, c(Ozone, Solar.R, Wind))
  ),
  .f = groupedstats::grouped_summary
) %>% # assigning names to each element of the list
  purrr::set_names(x = ., nm = alist(data_1, data_2, data_3))
# output
#> $data_1
#> # A tibble: 6 x 16
#> Species type variable missing complete n mean sd min p25
#> <fct> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 setosa nume~ Sepal.L~ 0 50 50 5.01 0.35 4.3 4.8
#> 2 setosa nume~ Sepal.W~ 0 50 50 3.43 0.38 2.3 3.2
#> 3 versic~ nume~ Sepal.L~ 0 50 50 5.94 0.52 4.9 5.6
#> 4 versic~ nume~ Sepal.W~ 0 50 50 2.77 0.31 2 2.52
#> 5 virgin~ nume~ Sepal.L~ 0 50 50 6.59 0.64 4.9 6.23
#> 6 virgin~ nume~ Sepal.W~ 0 50 50 2.97 0.32 2.2 2.8
#> # ... with 6 more variables: median <dbl>, p75 <dbl>, max <dbl>,
#> # std.error <dbl>, mean.low.conf <dbl>, mean.high.conf <dbl>
#>
#> $data_2
#> # A tibble: 6 x 17
#> am cyl type variable missing complete n mean sd min p25
#> <dbl> <dbl> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 6 nume~ wt 0 3 3 2.75 0.13 2.62 2.7
#> 2 1 4 nume~ wt 0 8 8 2.04 0.41 1.51 1.78
#> 3 0 6 nume~ wt 0 4 4 3.39 0.12 3.21 3.38
#> 4 0 8 nume~ wt 0 12 12 4.1 0.77 3.44 3.56
#> 5 0 4 nume~ wt 0 3 3 2.94 0.41 2.46 2.81
#> 6 1 8 nume~ wt 0 2 2 3.37 0.28 3.17 3.27
#> # ... with 6 more variables: median <dbl>, p75 <dbl>, max <dbl>,
#> # std.error <dbl>, mean.low.conf <dbl>, mean.high.conf <dbl>
#>
#> $data_3
#> # A tibble: 15 x 16
#> Month type variable missing complete n mean sd min p25
#> <int> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 5 inte~ Ozone 5 26 31 23.6 22.2 1 11
#> 2 5 inte~ Solar.R 4 27 31 181. 115. 8 72
#> 3 5 nume~ Wind 0 31 31 11.6 3.53 5.7 8.9
#> 4 6 inte~ Ozone 21 9 30 29.4 18.2 12 20
#> 5 6 inte~ Solar.R 0 30 30 190. 92.9 31 127
#> 6 6 nume~ Wind 0 30 30 10.3 3.77 1.7 8
#> 7 7 inte~ Ozone 5 26 31 59.1 31.6 7 36.2
#> 8 7 inte~ Solar.R 0 31 31 216. 80.6 7 175
#> 9 7 nume~ Wind 0 31 31 8.94 3.04 4.1 6.9
#> 10 8 inte~ Ozone 5 26 31 60.0 39.7 9 28.8
#> 11 8 inte~ Solar.R 3 28 31 172. 76.8 24 107
#> 12 8 nume~ Wind 0 31 31 8.79 3.23 2.3 6.6
#> 13 9 inte~ Ozone 1 29 30 31.4 24.1 7 16
#> 14 9 inte~ Solar.R 0 30 30 167. 79.1 14 117.
#> 15 9 nume~ Wind 0 30 30 10.2 3.46 2.8 7.55
#> # ... with 6 more variables: median <dbl>, p75 <dbl>, max <dbl>,
#> # std.error <dbl>, mean.low.conf <dbl>, mean.high.conf <dbl>
Created on 2018-10-31 by the reprex package (v0.2.1)
As can be seen here, the contents of the data argument to purrr::pmap() and the nm argument to purrr::set_names() are exactly identical: (data_1, data_2, data_3). I want to avoid this repetition (it seems harmless here with 3 elements, but I have a much bigger list of arguments). I can't assign this list to a separate object because in one case it is a list, while in the other it is entered as an alist.
How can I do this?

From the tidyverse, you can also use the lst() function. lst() creates a list much as tibble() creates a tibble; one difference from base list() is that it automatically names the elements after its inputs.
It lives in the tibble package and is re-exported by dplyr.
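A minimal illustration of the auto-naming (a quick sketch with throwaway objects a and b, not part of the original example):
library(tibble)
a <- 1
b <- 2
names(list(a, b)) # NULL -- base list() does not capture element names
names(lst(a, b))  # "a" "b" -- lst() names elements after the inputs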
For the example, I also replace base alist() with rlang::exprs(), which is equivalent; either works here.
library(tidyverse)
library(groupedstats)
set.seed(123)
# creating the dataframes
data_1 <- tibble::as_tibble(iris)
data_2 <- tibble::as_tibble(mtcars)
data_3 <- tibble::as_tibble(airquality)
# creating a list
purrr::pmap(
  .l = list(
    data = lst(data_1, data_2, data_3),
    grouping.vars = rlang::exprs(Species, c(am, cyl), Month),
    measures = rlang::exprs(c(Sepal.Length, Sepal.Width), wt, c(Ozone, Solar.R, Wind))
  ),
  .f = groupedstats::grouped_summary
) %>%
  str(1)
#> List of 3
#> $ data_1:Classes 'tbl_df', 'tbl' and 'data.frame': 6 obs. of 16 variables:
#> $ data_2:Classes 'tbl_df', 'tbl' and 'data.frame': 6 obs. of 17 variables:
#> $ data_3:Classes 'tbl_df', 'tbl' and 'data.frame': 15 obs. of 16 variables:
Created on 2018-11-02 by the reprex package (v0.2.1)

Related

Error: No glance method recognized for this list

I'm trying to write a function that can flexibly group by a variable number of arguments and fit a linear model to each subset. The output should be a table with each row showing the grouping variable(s) and corresponding lm call results that broom::glance provides. But I can't figure out how to structure the output. Code that produces the same error is as follows:
library(dplyr)
library(broom)
test_fcn <- function(var1, ...) {
  x <- unlist(list(...))
  mtcars %>%
    group_by(across(all_of(c('gear', x)))) %>%
    mutate(mod = list(lm(hp ~ !!sym(var1), data = .))) %>%
    summarize(broom::glance(mod))
}
test_fcn('qsec', 'cyl', 'carb')
I'm pushing my R/dplyr comfort zone by mixing static and dynamic variable arguments, so I've left them here in case that's a contributing factor. Thanks for any input!
You were nearly there.
test_fcn <- function(var1, ...) {
  x <- unlist(list(...))
  mtcars %>%
    group_by(across(all_of(c('gear', x)))) %>%
    summarise(
      mod = list(lm(hp ~ !!sym(var1), data = .)),
      mod = map(mod, broom::glance),
      .groups = "drop")
}
test_fcn('qsec', 'cyl', 'carb') %>% unnest(mod)
## A tibble: 12 × 15
# gear cyl carb r.squared adj.r.sq…¹ sigma stati…² p.value df logLik AIC BIC devia…³ df.re…⁴
# <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
# 1 3 4 1 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 2 3 6 1 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 3 3 8 2 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 4 3 8 3 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 5 3 8 4 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 6 4 4 1 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 7 4 4 2 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 8 4 6 4 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
# 9 5 4 2 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
#10 5 6 6 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
#11 5 8 4 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
#12 5 8 8 0.502 0.485 49.2 30.2 5.77e-6 1 -169. 344. 348. 72633. 30
## … with 1 more variable: nobs <int>, and abbreviated variable names ¹​adj.r.squared, ²​statistic,
## ³​deviance, ⁴​df.residual
## ℹ Use `colnames()` to see all variable names
Because you are storing the lm fit objects in a list, you need to loop over the entries using purrr::map.
You might want to put the unnest into the test_fcn: a slightly more compact version would be
test_fcn <- function(var1, ...) {
  x <- unlist(list(...))
  mtcars %>%
    group_by(across(all_of(c('gear', x)))) %>%
    summarise(
      mod = map(list(lm(hp ~ !!sym(var1), data = .)), broom::glance),
      .groups = "drop") %>%
    unnest(mod)
}
Update
Until your comment, I hadn't realised that the grouping was ignored. Here is a nest-unnest-type solution.
test_fcn <- function(var1, ...) {
  x <- unlist(list(...))
  mtcars %>%
    group_by(across(all_of(c('gear', x)))) %>%
    nest() %>%
    ungroup() %>%
    mutate(mod = map(
      data,
      ~ lm(hp ~ !!sym(var1), data = .x) %>% broom::glance())) %>%
    unnest(mod)
}
test_fcn('qsec', 'cyl', 'carb')
## A tibble: 12 × 16
# cyl gear carb data r.squared adj.r.s…¹ sigma statis…² p.value df logLik
# <dbl> <dbl> <dbl> <list> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 6 4 4 <tibble> 0.911 0.867 2.74e+ 0 20.5 0.0454 1 -8.32
# 2 4 4 1 <tibble> 0.525 0.287 1.15e+ 1 2.21 0.276 1 -14.1
# 3 6 3 1 <tibble> 1 NaN NaN NaN NaN 1 Inf
# 4 8 3 2 <tibble> 0.0262 -0.461 1.74e+ 1 0.0538 0.838 1 -15.7
# 5 8 3 4 <tibble> 0.869 0.825 7.48e+ 0 19.9 0.0210 1 -15.9
# 6 4 4 2 <tibble> 0.0721 -0.392 3.18e+ 1 0.155 0.732 1 -18.1
# 7 8 3 3 <tibble> 0.538 0.0769 2.63e-14 1.17 0.475 1 91.2
# 8 4 3 1 <tibble> 0 0 NaN NA NA NA Inf
# 9 4 5 2 <tibble> 1 NaN NaN NaN NaN 1 Inf
#10 8 5 4 <tibble> 0 0 NaN NA NA NA Inf
#11 6 5 6 <tibble> 0 0 NaN NA NA NA Inf
#12 8 5 8 <tibble> 0 0 NaN NA NA NA Inf
## … with 5 more variables: AIC <dbl>, BIC <dbl>, deviance <dbl>, df.residual <int>,
## nobs <int>, and abbreviated variable names ¹​adj.r.squared, ²​statistic
## ℹ Use `colnames()` to see all variable names
Explanation: tidyr::nest() nests each group's data in a list column (named data by default); we can then loop over the data entries, fit the model, and store the broom::glance() summaries in a new column mod; unnesting mod then gives the desired structure. If not needed, you can drop the data column with select(-data).
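For reference, a minimal sketch (using mtcars, as above) of what the nest step produces before any model is fitted:
library(dplyr)
library(tidyr)
mtcars %>%
  group_by(gear, cyl, carb) %>%
  nest() %>%
  ungroup()
# one row per group, plus a list-column `data` holding each group's
# remaining columns; map() then iterates over the `data` entries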
PS. The example produces some warnings (leading to NAs in the model summaries) from those groups where you have only a single observation.

How can I keep old columns and rename new columns when using `mutate` with `across`

When I mutate across data, the columns selected by .cols are replaced by the results of the mutation. How can I perform this operation whilst:
Keeping the columns selected by .cols in the output
Appropriately & automatically renaming the columns created by mutate?
For example:
require(dplyr)
#> Loading required package: dplyr
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
require(magrittr)
#> Loading required package: magrittr
set.seed(7337)
## Create arbitrary tibble
myTibble <- tibble(x = 1:10,
                   y = runif(10),
                   z = y * pi)
## I can mutate across these columns
mutate(myTibble, across(everything(), multiply_by, 2))
#> # A tibble: 10 x 3
#> x y z
#> <dbl> <dbl> <dbl>
#> 1 2 1.78 5.58
#> 2 4 0.658 2.07
#> 3 6 0.105 0.331
#> 4 8 1.75 5.50
#> 5 10 1.33 4.19
#> 6 12 1.02 3.20
#> 7 14 1.20 3.75
#> 8 16 0.00794 0.0250
#> 9 18 0.108 0.340
#> 10 20 1.74 5.45
## I can subsequently rename these columns
mutate(myTibble, across(everything(), multiply_by, 2)) %>%
rename_with(paste0, everything(), "_double")
#> # A tibble: 10 x 3
#> x_double y_double z_double
#> <dbl> <dbl> <dbl>
#> 1 2 1.78 5.58
#> 2 4 0.658 2.07
#> 3 6 0.105 0.331
#> 4 8 1.75 5.50
#> 5 10 1.33 4.19
#> 6 12 1.02 3.20
#> 7 14 1.20 3.75
#> 8 16 0.00794 0.0250
#> 9 18 0.108 0.340
#> 10 20 1.74 5.45
## But how can I achieve this (without the fuss of creating & joining an additional table):
# A tibble: 10 x 6
# x y z x_double y_double z_double
# <int> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 1 0.313 0.982 2 0.625 1.96
# 2 2 0.759 2.39 4 1.52 4.77
# 3 3 0.705 2.22 6 1.41 4.43
# 4 4 0.573 1.80 8 1.15 3.60
# 5 5 0.599 1.88 10 1.20 3.77
# 6 6 0.0548 0.172 12 0.110 0.344
# 7 7 0.571 1.80 14 1.14 3.59
# 8 8 0.621 1.95 16 1.24 3.90
# 9 9 0.709 2.23 18 1.42 4.46
# 10 10 0.954 3.00 20 1.91 5.99
Created on 2021-09-16 by the reprex package (v2.0.1)
Use the .names argument of across
across() names its outputs using the .names argument, which is a glue::glue() specification: a string in which "{.col}" and "{.fn}" are replaced by the names of your columns (specified by .cols) and functions (specified by .fns).
The default value of .names is NULL, which is equivalent to "{.col}". This means every mutated column is assigned the same name as its counterpart in .cols, which effectively 'overwrites' those columns in the output.
To produce your desired table you would need to do:
require(dplyr)
#> Loading required package: dplyr
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
require(magrittr)
#> Loading required package: magrittr
set.seed(7337)
## Create arbitrary tibble
myTibble <- tibble(x = 1:10,
                   y = runif(10),
                   z = y * pi)
mutate(myTibble, across(everything(), multiply_by, 2, .names = "{.col}_double"))
#> # A tibble: 10 x 6
#> x y z x_double y_double z_double
#> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0.889 2.79 2 1.78 5.58
#> 2 2 0.329 1.03 4 0.658 2.07
#> 3 3 0.0527 0.165 6 0.105 0.331
#> 4 4 0.875 2.75 8 1.75 5.50
#> 5 5 0.666 2.09 10 1.33 4.19
#> 6 6 0.509 1.60 12 1.02 3.20
#> 7 7 0.598 1.88 14 1.20 3.75
#> 8 8 0.00397 0.0125 16 0.00794 0.0250
#> 9 9 0.0541 0.170 18 0.108 0.340
#> 10 10 0.868 2.73 20 1.74 5.45
Created on 2021-09-16 by the reprex package (v2.0.1)
In this way, you can use across with .fns and .names to do quite a lot:
mutate(myTibble, across(everything(),
                        .fns = list(double = multiply_by, half = divide_by),
                        2,
                        .names = "{.col}_{.fn}"))
#> # A tibble: 10 x 9
#> x y z x_double x_half y_double y_half z_double z_half
#> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0.889 2.79 2 0.5 1.78 0.444 5.58 1.40
#> 2 2 0.329 1.03 4 1 0.658 0.165 2.07 0.517
#> 3 3 0.0527 0.165 6 1.5 0.105 0.0263 0.331 0.0827
#> 4 4 0.875 2.75 8 2 1.75 0.437 5.50 1.37
#> 5 5 0.666 2.09 10 2.5 1.33 0.333 4.19 1.05
#> 6 6 0.509 1.60 12 3 1.02 0.255 3.20 0.800
#> 7 7 0.598 1.88 14 3.5 1.20 0.299 3.75 0.939
#> 8 8 0.00397 0.0125 16 4 0.00794 0.00199 0.0250 0.00624
#> 9 9 0.0541 0.170 18 4.5 0.108 0.0271 0.340 0.0850
#> 10 10 0.868 2.73 20 5 1.74 0.434 5.45 1.36
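Note that in more recent dplyr releases (1.1.0 and later), passing extra arguments such as the 2 above through ... of across() is deprecated; the same result can be written with anonymous functions (a sketch, not part of the original answer):
mutate(myTibble, across(everything(),
                        .fns = list(double = ~ .x * 2, half = ~ .x / 2),
                        .names = "{.col}_{.fn}"))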

How do I apply a function to rows in columns with names containing a particular string in R

I have a dataframe with the column names X, time, "indoors_c" and "outdoors_c". Where time is >= 13, I want to replace the values of the rows in columns whose names contain the string "_c" with 0. How do I do this without referring to the entire column names? I am using RStudio.
Using functions from the tidyverse, use across() together with contains() to select the columns over which to apply the function. Then use if_else() for the function, setting the value to 0 when the time column is at or above the cutoff.
Small reproducible example:
library(tidyverse)
data <- tibble(X=rnorm(20), time=1:20, indoors_c=rnorm(20), outdoors_c=rnorm(20))
data %>%
mutate(across(contains("_c"), ~ if_else(time>=13, 0, .)))
#> # A tibble: 20 x 4
#> X time indoors_c outdoors_c
#> <dbl> <int> <dbl> <dbl>
#> 1 -1.67 1 -2.41 -1.44
#> 2 1.00 2 0.113 0.701
#> 3 0.386 3 0.0248 -0.425
#> 4 0.266 4 -0.431 -0.0722
#> 5 -0.206 5 0.255 -1.34
#> 6 -0.617 6 0.441 0.761
#> 7 1.42 7 0.481 -0.892
#> 8 0.207 8 -0.112 -0.906
#> 9 1.42 9 -0.465 -0.527
#> 10 -0.934 10 -2.21 2.95
#> 11 -0.419 11 -0.639 -0.113
#> 12 0.812 12 -0.180 0.440
#> 13 0.331 13 0 0
#> 14 -0.454 14 0 0
#> 15 -0.0290 15 0 0
#> 16 0.167 16 0 0
#> 17 -0.150 17 0 0
#> 18 0.922 18 0 0
#> 19 1.77 19 0 0
#> 20 1.62 20 0 0
Created on 2021-05-23 by the reprex package (v2.0.0)
We can use case_when():
library(dplyr)
data %>%
  mutate(across(ends_with('_c'), ~ case_when(time >= 13 ~ 0, TRUE ~ .)))
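A base R equivalent (a sketch, not taken from the answers above, using the data tibble from the reproducible example) that modifies the matching columns in place:
cols_c <- grepl("_c$", names(data))       # columns whose names end in "_c"
data[data$time >= 13, cols_c] <- 0        # zero those columns where time >= 13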

How do I combine many tibbles with simple code?

I have pop_1910, ..., pop_2000. Each tibble has the structure shown below. I want to combine these tibbles into one tibble. I know I can do that with bind_rows: pop_1910 %>% bind_rows(pop_1920) %>% bind_rows(pop_1930). But that is a little tedious. Are there more efficient ways to combine many dataframes?
> pop_2000
# A tibble: 3,143 x 3
fips year pop
<chr> <dbl> <dbl>
1 01001 2000 33364
2 01003 2000 112162
3 01005 2000 23042
4 01007 2000 15432
5 01009 2000 40165
6 01011 2000 9142
7 01013 2000 16798
8 01015 2000 90175
9 01017 2000 29086
10 01019 2000 19470
If you have them inside a list, you can use reduce() to bind all in one move.
library(tidyverse)
my_df_list <- map(1:4, ~tibble(x = rnorm(5), y = rnorm(5)))
my_df_list
#> [[1]]
#> # A tibble: 5 x 2
#> x y
#> <dbl> <dbl>
#> 1 1.99 1.19
#> 2 0.273 0.208
#> 3 1.12 1.18
#> 4 0.00855 -0.593
#> 5 0.502 -0.926
#>
#> [[2]]
#> # A tibble: 5 x 2
#> x y
#> <dbl> <dbl>
#> 1 0.570 -0.709
#> 2 0.599 -0.408
#> 3 -0.687 1.38
#> 4 0.375 1.53
#> 5 0.0394 1.90
#>
#> [[3]]
#> # A tibble: 5 x 2
#> x y
#> <dbl> <dbl>
#> 1 -0.576 1.64
#> 2 0.147 -0.0384
#> 3 0.904 0.164
#> 4 -1.16 -1.02
#> 5 -0.678 1.32
#>
#> [[4]]
#> # A tibble: 5 x 2
#> x y
#> <dbl> <dbl>
#> 1 -0.849 -0.445
#> 2 -0.786 -0.991
#> 3 1.17 -1.00
#> 4 0.222 1.65
#> 5 -0.656 -0.808
reduce(my_df_list, bind_rows)
#> # A tibble: 20 x 2
#> x y
#> <dbl> <dbl>
#> 1 1.99 1.19
#> 2 0.273 0.208
#> 3 1.12 1.18
#> 4 0.00855 -0.593
#> 5 0.502 -0.926
#> 6 0.570 -0.709
#> 7 0.599 -0.408
#> 8 -0.687 1.38
#> 9 0.375 1.53
#> 10 0.0394 1.90
#> 11 -0.576 1.64
#> 12 0.147 -0.0384
#> 13 0.904 0.164
#> 14 -1.16 -1.02
#> 15 -0.678 1.32
#> 16 -0.849 -0.445
#> 17 -0.786 -0.991
#> 18 1.17 -1.00
#> 19 0.222 1.65
#> 20 -0.656 -0.808
Created on 2021-06-07 by the reprex package (v2.0.0)
You may also simply use map_dfr:
purrr::map_dfr(my_df_list, ~ .x)
This will give you a single data frame bound by rows.
Or in base R:
do.call(rbind, my_df_list)
Even easier is piping your list to dplyr::bind_rows(), e.g.
library(dplyr)
my_df_list %>% bind_rows()
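If the tibbles only exist as separate objects named pop_1910, pop_1920, ..., pop_2000 in your environment, one way to collect them into a list first (a sketch assuming that naming pattern with decade steps) is mget():
library(dplyr)
pop_list <- mget(paste0("pop_", seq(1910, 2000, by = 10)))  # named list of the tibbles
pop_all  <- bind_rows(pop_list)                             # one combined tibble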

Performing a linear model in R of a single response with a single predictor from a large dataframe, repeated for each column

It might not be very clear from the title, but what I wish to do is:
I have a dataframe df with, say, 200 columns, where the first 80 columns are response variables (y1, y2, y3, ...) and the remaining 120 are predictors (x1, x2, x3, ...).
I wish to compute a linear model for each pair: lm(yi ~ xi, data = df).
Many of the problems and solutions I have looked through online have either a fixed response vs many predictors or the other way around, using lapply() and its related functions.
Could anyone familiar with this point me in the right direction?
Use the tidyverse:
library(tidyverse)
library(broom)
df <- mtcars
y <- names(df)[1:3]
x <- names(df)[4:7]
result <- expand_grid(x, y) %>%
  rowwise() %>%
  mutate(frm = list(reformulate(x, y)),
         model = list(lm(frm, data = df)))
result$model <- purrr::set_names(result$model, nm = paste0(result$y, " ~ ", result$x))
result$model[1:2]
#> $`mpg ~ hp`
#>
#> Call:
#> lm(formula = frm, data = df)
#>
#> Coefficients:
#> (Intercept) hp
#> 30.09886 -0.06823
#>
#>
#> $`cyl ~ hp`
#>
#> Call:
#> lm(formula = frm, data = df)
#>
#> Coefficients:
#> (Intercept) hp
#> 3.00680 0.02168
map_df(result$model, tidy)
#> # A tibble: 24 x 5
#> term estimate std.error statistic p.value
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 (Intercept) 30.1 1.63 18.4 6.64e-18
#> 2 hp -0.0682 0.0101 -6.74 1.79e- 7
#> 3 (Intercept) 3.01 0.425 7.07 7.41e- 8
#> 4 hp 0.0217 0.00264 8.23 3.48e- 9
#> 5 (Intercept) 21.0 32.6 0.644 5.25e- 1
#> 6 hp 1.43 0.202 7.08 7.14e- 8
#> 7 (Intercept) -7.52 5.48 -1.37 1.80e- 1
#> 8 drat 7.68 1.51 5.10 1.78e- 5
#> 9 (Intercept) 14.6 1.58 9.22 2.93e-10
#> 10 drat -2.34 0.436 -5.37 8.24e- 6
#> # ... with 14 more rows
map_df(result$model, glance)
#> # A tibble: 12 x 12
#> r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 0.602 0.589 3.86 45.5 1.79e- 7 1 -87.6 181. 186.
#> 2 0.693 0.683 1.01 67.7 3.48e- 9 1 -44.6 95.1 99.5
#> 3 0.626 0.613 77.1 50.1 7.14e- 8 1 -183. 373. 377.
#> 4 0.464 0.446 4.49 26.0 1.78e- 5 1 -92.4 191. 195.
#> 5 0.490 0.473 1.30 28.8 8.24e- 6 1 -52.7 111. 116.
#> 6 0.504 0.488 88.7 30.5 5.28e- 6 1 -188. 382. 386.
#> 7 0.753 0.745 3.05 91.4 1.29e-10 1 -80.0 166. 170.
#> 8 0.612 0.599 1.13 47.4 1.22e- 7 1 -48.3 103. 107.
#> 9 0.789 0.781 57.9 112. 1.22e-11 1 -174. 355. 359.
#> 10 0.175 0.148 5.56 6.38 1.71e- 2 1 -99.3 205. 209.
#> 11 0.350 0.328 1.46 16.1 3.66e- 4 1 -56.6 119. 124.
#> 12 0.188 0.161 114. 6.95 1.31e- 2 1 -196. 398. 402.
#> # ... with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
Created on 2020-12-11 by the reprex package (v0.3.0)
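Note that expand_grid() above fits every response-predictor combination. If you instead want strictly element-wise pairs (y1 ~ x1, y2 ~ x2, ...), a map2() sketch would be (assuming equal-length name vectors ys and xs, which are hypothetical and not part of the answer above):
library(purrr)
library(broom)
# build one formula per (response, predictor) pair and fit it
pair_models <- map2(ys, xs, ~ lm(reformulate(.y, response = .x), data = df)) %>%
  set_names(paste(ys, "~", xs))
map_df(pair_models, glance)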

Resources