Extract a single element with unnest from broom::tidy - r

I want to get a single element from the broom::tidy results into an unnested data frame.
The table structure is:
> zz
# A tibble: 1,923 x 5
sys_loc_code data model tidy glance
<chr> <list> <list> <list> <list>
1 S000-001 <tibble [493 x 18]> <S3: survreg> <tibble [4 x 7]> <tibble [1 x 8]>
2 S000-002 <tibble [32 x 18]> <S3: survreg> <tibble [4 x 7]> <tibble [1 x 8]>
And when I unnest the broom::tidy results I get the output:
> unnest(zz, tidy)
# A tibble: 7,692 x 8
id term estimate std.error statistic p.value conf.low conf.high
<chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 S000-001 (Intercept) 4226. 881. 4.80 1.61e- 6 2499. 5952.
2 S000-001 y -2.08 0.438 -4.76 1.93e- 6 -2.94 -1.23
3 S000-001 m 2.46 0.645 3.82 1.36e- 4 1.20 3.72
4 S000-001 Log(scale) 3.47 0.0383 90.7 0. NA NA
5 S000-002 (Intercept) 4610. 2880. 1.60 1.09e- 1 -1035. 10255.
6 S000-002 y -2.29 1.44 -1.60 1.11e- 1 -5.10 0.523
7 S000-002 m 1.69 1.33 1.27 2.05e- 1 -0.922 4.30
8 S000-002 Log(scale) 2.62 0.132 19.9 5.57e-88 NA NA
However, I need to grab only one element from this output. In this example, I want only the slopes of the y term for each id (-2.08 and -2.29), with the resulting table looking like:
> unnest(zz, tidy)
# A tibble: 1,923 x 2
id estimate
<chr> <dbl>
1 S000-001 -2.08
2 S000-002 -2.29
The syntax tidy(x)[2,2] works as expected when x is a single object of class survreg, but fails when applied to a nested table of lists of the same class.
Any suggestion would be appreciated. Thanks in advance.

Given that the output of unnest is a tibble, you should be able to feed it directly into a dplyr pipeline to grab what you want. Something like this:
library(dplyr)
unnest(zz, tidy) %>%
  filter(term == "y") %>%
  select(id, estimate)
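If you prefer not to unnest the whole tidy column, an alternative sketch (assuming each tidy tibble contains exactly one "y" row, as in your printed output) pulls the slope straight out of the list column:
library(purrr)
zz %>%
  # map_dbl() extracts one number per row: the estimate where term == "y"
  mutate(estimate = map_dbl(tidy, ~ .x$estimate[.x$term == "y"])) %>%
  select(sys_loc_code, estimate)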

Adjusting the p-values on a subset of regression coefficients

Edited for Clarity
I frequently do stratified analyses. However, to avoid spending Type I error on hypothesis tests that aren't of interest, I would like to remove certain values before using p.adjust().
library(purrr)
library(dplyr, warn.conflicts = FALSE)
library(broom)
library(tidyr)
mtcars_fit <- mtcars %>%
  group_by(cyl) %>%   # you can use any grouping variable here, very flexible
  nest() %>%
  mutate(
    model = map(data, ~ lm(mpg ~ wt, data = .)),
    coeff = map(model, tidy, conf.int = FALSE)
  ) %>%
  unnest(coeff) %>%
  select(-statistic)
mtcars_fit
#> # A tibble: 6 × 7
#> # Groups: cyl [3]
#> cyl data model term estimate std.error p.value
#> <dbl> <list> <list> <chr> <dbl> <dbl> <dbl>
#> 1 6 <tibble [7 × 10]> <lm> (Intercept) 28.4 4.18 0.00105
#> 2 6 <tibble [7 × 10]> <lm> wt -2.78 1.33 0.0918
#> 3 4 <tibble [11 × 10]> <lm> (Intercept) 39.6 4.35 0.00000777
#> 4 4 <tibble [11 × 10]> <lm> wt -5.65 1.85 0.0137
#> 5 8 <tibble [14 × 10]> <lm> (Intercept) 23.9 3.01 0.00000405
#> 6 8 <tibble [14 × 10]> <lm> wt -2.19 0.739 0.0118
# If I want to adjust the p-values for multiple comparisons for the weight term only,
# saving Type I error by not testing the intercept, I would do something like this:
mtcars_adjusted <- mtcars_fit %>%
  mutate(
    p.value2 = if_else(term != "(Intercept)", p.value, NA_real_),
    p.value_adj = if_else(term != "(Intercept)", p.adjust(p.value2, method = "fdr"), NA_real_),
    .after = "p.value"
  ) %>%
  select(-p.value2)
mtcars_adjusted
#> # A tibble: 6 × 8
#> # Groups: cyl [3]
#> cyl data model term estimate std.error p.value p.val…¹
#> <dbl> <list> <list> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 6 <tibble [7 × 10]> <lm> (Intercept) 28.4 4.18 1.05e-3 NA
#> 2 6 <tibble [7 × 10]> <lm> wt -2.78 1.33 9.18e-2 0.0918
#> 3 4 <tibble [11 × 10]> <lm> (Intercept) 39.6 4.35 7.77e-6 NA
#> 4 4 <tibble [11 × 10]> <lm> wt -5.65 1.85 1.37e-2 0.0137
#> 5 8 <tibble [14 × 10]> <lm> (Intercept) 23.9 3.01 4.05e-6 NA
#> 6 8 <tibble [14 × 10]> <lm> wt -2.19 0.739 1.18e-2 0.0118
#> # … with abbreviated variable name ¹​p.value_adj
Because this discussion on Stack Overflow indicates that dplyr and p.adjust() often don't work well together, I applied the function outside the pipe, as suggested.
# To check, I will filter the dataset and make sure the adjusted p-values are the same
p.adj <- mtcars_fit %>%
  filter(term != "(Intercept)") %>%
  mutate(p.value_adj = NA_real_)
p.adj$p.value_adj <- p.adjust(p.adj$p.value, method = "fdr")
p.adj
#> # A tibble: 3 × 8
#> # Groups: cyl [3]
#> cyl data model term estimate std.error p.value p.value_adj
#> <dbl> <list> <list> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 6 <tibble [7 × 10]> <lm> wt -2.78 1.33 0.0918 0.0918
#> 2 4 <tibble [11 × 10]> <lm> wt -5.65 1.85 0.0137 0.0206
#> 3 8 <tibble [14 × 10]> <lm> wt -2.19 0.739 0.0118 0.0206
Created on 2022-08-18 by the reprex package (v2.0.1)
The adjusted p-values differ between the two approaches, so I am unsure which is correct. The fact that I adjusted the p-values in two different ways (the objects mtcars_adjusted and p.adj) and got different adjusted p-values is concerning. The adjusted p-values for each object:
mtcars_adjusted: 0.0918, 0.0137, 0.0118
p.adj: 0.0918, 0.0206, 0.0206.
In the resulting dataset I want to keep the intercept estimates but exclude them from the p-value adjustment. It would look something like mtcars_adjusted, but I want to make sure the p-values are adjusted correctly. How would I go about doing this?
Implementing your adjustment within the pipe chain
You don't need to adjust your p-values outside of mutate() in your example. Below, I show that the identical result can be produced within the piping chain.
# Adjust p-values for the "wt" parameter estimates using your approach
p.adj <- mtcars_fit %>%
  filter(term != "(Intercept)") %>%
  mutate(p.value_adj = NA_real_)
p.adj$p.value_adj <- p.adjust(p.adj$p.value, method = "fdr")

# Alternative approach, entirely within the pipe
p.adj_alt <- mtcars_fit %>%
  ungroup() %>%
  filter(term != "(Intercept)") %>%
  mutate(p.value_adj = p.adjust(p.value, method = "fdr"))
# Show they are identical once ungrouped (which you should do once you are
# done with all by-group operations)
identical(ungroup(p.adj), p.adj_alt)
#> [1] TRUE
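As an aside, a likely reason your two approaches disagreed is the grouping: mtcars_fit is grouped by cyl, so your mutate() call runs p.adjust() separately within each cyl group, and each group contributes only a single wt p-value, which FDR leaves unchanged (m = 1). Adjusting after ungroup() (or outside the pipe) pools all three tests. A minimal sketch of the difference:
# FDR across all three wt p-values vs. one at a time
# (the latter mimics a grouped mutate with one p-value per group)
p <- c(0.0918, 0.0137, 0.0118)
p.adjust(p, method = "fdr")          # joint adjustment: 0.0918 0.0206 0.0206
sapply(p, p.adjust, method = "fdr")  # per group of one: values unchanged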
Whether your "outside of the pipe" approach accomplishes what you intended is a different question from the one you asked in your post, but I encourage you to make sure that it does.
Adding the intercepts
Once you have your adjusted estimates, you can add the intercept rows back in by filter()ing them from the original object and passing them together with your adjusted data to bind_rows(). You can also combine the two p-value columns into a single column using coalesce() if you'd like.
# Get the intercepts, bind everything into a single data frame, and create a
# coalesced column that combines the (un)adjusted p-values
mtcars_fit %>%
  filter(term == "(Intercept)") %>%
  bind_rows(p.adj) %>%
  ungroup() %>%
  mutate(p.value_combined = coalesce(p.value, p.value_adj))
#> # A tibble: 6 × 9
#> cyl data model term estim…¹ std.e…² p.value p.val…³ p.val…⁴
#> <dbl> <list> <list> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 6 <tibble [7 × 10]> <lm> (Inte… 28.4 4.18 1.05e-3 NA 1.05e-3
#> 2 4 <tibble [11 × 10]> <lm> (Inte… 39.6 4.35 7.77e-6 NA 7.77e-6
#> 3 8 <tibble [14 × 10]> <lm> (Inte… 23.9 3.01 4.05e-6 NA 4.05e-6
#> 4 6 <tibble [7 × 10]> <lm> wt -2.78 1.33 9.18e-2 0.0918 9.18e-2
#> 5 4 <tibble [11 × 10]> <lm> wt -5.65 1.85 1.37e-2 0.0206 1.37e-2
#> 6 8 <tibble [14 × 10]> <lm> wt -2.19 0.739 1.18e-2 0.0206 1.18e-2
#> # … with abbreviated variable names ¹​estimate, ²​std.error, ³​p.value_adj,
#> # ⁴​p.value_combined

group-wise linear models with the function nest_by

I have a dataframe of 4 columns: Dataset, X, Y, Group.
The task is to fit a linear model to each of the five groups (the Group column contains 5 groups: a, b, c, d, e) in the dataframe and then compare the slopes with those from the dataframe test_2. For test_2 I have already fitted a model, since there is no group separation like in test_1. For test_1 we have been advised to use the function nest_by to compute group-wise linear models.
I have tried to fit a model with the function nest_by
Input:
model <- test_1 %>%
  nest_by(Group) %>%
  mutate(model = list(lm(Y ~ X, data = data)))  # fit on each group's nested data, not the full test_1
model
Output:
A tibble: 5 x 3
# Rowwise: Group
Group data model
<fct> <list<tibble[,3]>> <list>
1 a [58 x 3] <lm>
2 b [35 x 3] <lm>
3 c [47 x 3] <lm>
4 d [44 x 3] <lm>
5 e [38 x 3] <lm>
I do not know how to proceed from here. I thought I could ungroup them and call summary(), but that would be similar to just fitting a separate model for each group with filter() and creating 5 separate models.
Yes, you can proceed by using tidy() from the broom package, which is a better option than summary(), and then unnesting the result.
For example, for mtcars we can do the following for each cyl group:
library(tidyr)
library(dplyr)
library(purrr)
library(broom)
mtcars_model <- mtcars %>%
  nest(data = -cyl) %>%
  mutate(
    model = map(data, ~ lm(mpg ~ wt, data = .))
  )

# now simply for each cyl, tidy the model output and unnest it
mtcars_model %>%
  mutate(
    tidy_summary = map(model, tidy)
  ) %>%
  unnest(tidy_summary)
#> # A tibble: 6 × 8
#> cyl data model term estimate std.error statistic p.value
#> <dbl> <list> <list> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 6 <tibble [7 × 10]> <lm> (Interce… 28.4 4.18 6.79 1.05e-3
#> 2 6 <tibble [7 × 10]> <lm> wt -2.78 1.33 -2.08 9.18e-2
#> 3 4 <tibble [11 × 10]> <lm> (Interce… 39.6 4.35 9.10 7.77e-6
#> 4 4 <tibble [11 × 10]> <lm> wt -5.65 1.85 -3.05 1.37e-2
#> 5 8 <tibble [14 × 10]> <lm> (Interce… 23.9 3.01 7.94 4.05e-6
#> 6 8 <tibble [14 × 10]> <lm> wt -2.19 0.739 -2.97 1.18e-2
Created on 2022-07-09 by the reprex package (v2.0.1)
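Since your goal is to compare slopes with test_2, you can also filter the unnested output down to just the slope term. A sketch with the mtcars example above (in your data the term would be X rather than wt):
mtcars_model %>%
  mutate(tidy_summary = map(model, tidy)) %>%
  unnest(tidy_summary) %>%
  filter(term == "wt") %>%   # keep only the slope for each group
  select(cyl, term, estimate, std.error)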
For additional information and examples, check here.

Lost column name when applying lm with summarise/across

I want to use summarise/across with lm to fit regressions using different columns in a tibble. Like this:
library(tidyverse)
library(broom)
fits <- tibble(mtcars) %>%
  summarise(across(c(vs, am), ~ list(tidy(lm(wt ~ .x + mpg)))))
But the columns that get passed into lm() as .x end up labeled .x in the regression output.
fits %>% unnest(vs)
# A tibble: 3 x 6
term estimate std.error statistic p.value am
<chr> <dbl> <dbl> <dbl> <dbl> <list>
1 (Intercept) 6.10 0.353 17.3 8.36e-17 <tibble [3 × 5]>
2 .x 0.0738 0.239 0.308 7.60e- 1 <tibble [3 × 5]>
3 mpg -0.145 0.0200 -7.24 5.63e- 8 <tibble [3 × 5]>
I can preserve the name if I build the lm formula on the fly using cur_column(), but this feels kludgy:
tibble(mtcars) %>%
  summarise(across(c(vs, am),
                   ~ list(tidy(lm(formula(paste0("wt ~ ", cur_column(), " + mpg"))))))) %>%
  unnest(vs)
# A tibble: 3 x 6
term estimate std.error statistic p.value am
<chr> <dbl> <dbl> <dbl> <dbl> <list>
1 (Intercept) 6.10 0.353 17.3 8.36e-17 <tibble [3 × 5]>
2 vs 0.0738 0.239 0.308 7.60e- 1 <tibble [3 × 5]>
3 mpg -0.145 0.0200 -7.24 5.63e- 8 <tibble [3 × 5]>
I want the output to use the true column name instead of .x, without this workaround, while still using the summarise/across motif and without incorporating map.
Seems like this should be possible. Any suggestions?
*Copying my comment from @akrun's answer to clarify what I'm looking for:
What I really want to know is: is the column name preserved in the summarise/across operation in a way that I can reference directly in lm? Something like {{.x}} or rlang::as_name(.x). I know those don't work, but it seems like the name information should be preserved beyond just the string version in cur_column().
You can make it shorter with reformulate():
library(dplyr)
library(broom)
library(tidyr)
tibble(mtcars) %>%
  summarise(across(c(vs, am), ~
    list(tidy(lm(reformulate(c(cur_column(), "mpg"), "wt")))))) %>%
  unnest(vs)
-output
# A tibble: 3 x 6
# term estimate std.error statistic p.value am
# <chr> <dbl> <dbl> <dbl> <dbl> <list>
#1 (Intercept) 6.10 0.353 17.3 8.36e-17 <tibble [3 × 5]>
#2 vs 0.0738 0.239 0.308 7.60e- 1 <tibble [3 × 5]>
#3 mpg -0.145 0.0200 -7.24 5.63e- 8 <tibble [3 × 5]>
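If you would rather keep the plain wt ~ .x + mpg fit, another option (a sketch, shown with the same data) is to capture the column name with cur_column() and relabel the term after tidying:
tibble(mtcars) %>%
  summarise(across(c(vs, am), ~ {
    nm <- cur_column()                 # capture the real column name
    list(tidy(lm(wt ~ .x + mpg)) %>%
           mutate(term = ifelse(term == ".x", nm, term)))
  })) %>%
  unnest(vs)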

FDR correction - extracting p-values from lmer() and creating vectors for use in p.adjust in R

I am trying to do FDR correction for some region-of-interest neuroimaging data. I have run 18 linear mixed effects models overall, and I have made sure that the order of the coefficients in the output is the same in each model.
I have saved the output from each model in the following:
tidy_model1 <-tidy(model1)
tidy_model2 <-tidy(model2)
....
tidy_model18 <-tidy(model18)
I am now trying to make my life easier by creating a loop that goes over a list of the model objects above and builds a vector of p-values for each coefficient, which I will then pass to the p.adjust function to retrieve the adjusted p-values.
So I create a list:
model_list <- list(tidy_model1,
                   tidy_model2, ... tidy_model18)
I have tried the following loops:
for (i in 1:18) {
  model_list[i] %>%
    variable1_pval <- p.value[1]
}
and
for (i in 1:18) {
  variable1_pval <- model_list[i]$p.value[1]
}
So the above should give me a vector of p-values for coefficient 1 of the model.
However, I get a null vector in both cases.
I know I am not providing my data, but any suggestions as to why these loops might not be working are welcome!
Thank you
I made up a list of models:
library(nlme)
library(broom)
models <- lapply(1:5, function(i) {
  idx <- sample(nrow(Orthodont), replace = TRUE)
  lme(distance ~ age, random = ~Sex, data = Orthodont[idx, ])
})
model_list <- lapply(models, tidy, effects = "fixed")
In these models, the useful coefficient is the second:
model_list[[1]]
# A tibble: 2 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 15.9 1.03 15.5 7.77e-26
2 age 0.739 0.0871 8.48 9.13e-13
You can obtain the p-values in a vector like this (for your example, index the first coefficient, i.e. x$p.value[1]):
sapply(model_list, function(x) x$p.value[2])
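For instance (a sketch using the made-up models above), that vector can then be fed straight into p.adjust():
age_p <- sapply(model_list, function(x) x$p.value[2])
p.adjust(age_p, method = "fdr")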
A better way to keep track of your models, without populating the environment with lots of variables, is to use purrr and dplyr (see more here):
library(purrr)
library(dplyr)
library(tidyr)   # for unnest() below
models <- tibble(name = 1:5, models = models) %>%
  mutate(tidy_res = map(models, tidy, effects = "fixed"))
models
# A tibble: 5 x 3
name models tidy_res
<int> <list> <list>
1 1 <lme> <tibble [2 × 5]>
2 2 <lme> <tibble [2 × 5]>
3 3 <lme> <tibble [2 × 5]>
4 4 <lme> <tibble [2 × 5]>
5 5 <lme> <tibble [2 × 5]>
models %>% unnest(tidy_res) %>% filter(term == "age")
# A tibble: 5 x 7
name models term estimate std.error statistic p.value
<int> <list> <chr> <dbl> <dbl> <dbl> <dbl>
1 1 <lme> age 0.587 0.0601 9.77 2.44e-15
2 2 <lme> age 0.677 0.0663 10.2 3.91e-16
3 3 <lme> age 0.588 0.0603 9.74 3.05e-15
4 4 <lme> age 0.653 0.0529 12.3 2.74e-20
5 5 <lme> age 0.638 0.0623 10.2 3.34e-16
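To finish the FDR correction in the same workflow (a sketch building on the tibble above), you can adjust the age p-values across all models at once:
models %>%
  unnest(tidy_res) %>%
  filter(term == "age") %>%
  mutate(p.value_adj = p.adjust(p.value, method = "fdr")) %>%
  select(name, estimate, p.value, p.value_adj)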

Running n linear regressions with a few lines of code and storing results in a matrix

I have a tibble db of 25 dependent variables (db[,2:26]) and a vector of a single explanatory variable rmrf. All I want to do is to run a regression for each of the 25 dependent variables on the same common explanatory variable.
I want to obtain a table of alphas, betas, t.stat for alphas and R2, hence a matrix of 25 rows (one for each dependent variable) and 4 columns.
Nevertheless, although it seems to be a pretty simple issue (I am a newbie in R), I do not understand:
how to run all 25 regressions smartly in a few lines of code (loop, apply?)
how to extract the 4 required quantities.
While for the first issue I may have a solution (not sure though!):
varlist <- names(db)[2:26]  # the 25 dependent variables
models <- lapply(varlist, function(x) {
  lm(substitute(i ~ rmrf, list(i = as.name(x))), data = db)
})
for the second one I still have no idea (except using coef() on the lm objects, but I still cannot get the other two quantities).
Could you please help me figuring this out?
lm is vectorised across the dependent variables:
Just do
lm(as.matrix(db[,-1]) ~ rmrf, data = db)
E.g., let's take the iris dataset: if we treat Petal.Width as the independent variable and the first 3 variables as the dependent variables, then we could do:
dat <- iris[-5]
library(tidyverse)
library(broom)
lm(as.matrix(dat[-4]) ~ Petal.Width, dat) %>%
  {cbind.data.frame(tidy(.) %>%
                      pivot_wider(id_cols = response, names_from = term,
                                  values_from = c(estimate, statistic)),
                    R.sq = map_dbl(summary(.), ~ .x$r.squared))} %>%
  `rownames<-`(NULL)
response estimate_(Intercept) estimate_Petal.Width statistic_(Intercept) statistic_Petal.Width R.sq
1 Sepal.Length 4.777629 0.8885803 65.50552 17.296454 0.6690277
2 Sepal.Width 3.308426 -0.2093598 53.27795 -4.786461 0.1340482
3 Petal.Length 1.083558 2.2299405 14.84998 43.387237 0.9271098
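Applied to your setup, a sketch (assuming db holds the 25 dependent variables in columns 2:26 and that rmrf is either a column of db or a vector of the same length) would be:
# Hypothetical adaptation to your data: one multivariate lm, then wide table
fit <- lm(as.matrix(db[, 2:26]) ~ rmrf, data = db)
cbind.data.frame(
  tidy(fit) %>%
    pivot_wider(id_cols = response, names_from = term,
                values_from = c(estimate, statistic)),
  R.sq = map_dbl(summary(fit), ~ .x$r.squared)
)
This gives, per response, the intercept (alpha), the rmrf slope (beta), their t-statistics, and R2; drop the slope's t-statistic if you only want the 4 columns you listed.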
If I got it right, you want to apply lm to each independent ~ dependent pair in the dataset. You can use a pivot/nest/broom strategy like this:
library(tidyverse)
library(broom)
# creating some dataset
db <- tibble(
  y = rnorm(5),
  x1 = rnorm(5),
  x2 = rnorm(5),
  x3 = rnorm(5)
)
# lets see
head(db)
# A tibble: 5 x 4
y x1 x2 x3
<dbl> <dbl> <dbl> <dbl>
1 -0.994 0.139 -0.935 0.0134
2 1.09 0.960 1.23 1.45
3 1.03 0.374 1.06 -0.900
4 1.63 -0.162 -0.498 -0.740
5 -0.0941 1.47 0.312 0.933
# pivot to long format by "independend var"
db_pivot <- db %>%
  gather(key = "var_name", value = "value", -y)
head(db_pivot)
# A tibble: 6 x 3
y var_name value
<dbl> <chr> <dbl>
1 -0.368 x1 -1.29
2 -1.48 x1 -0.0813
3 -2.61 x1 0.477
4 0.602 x1 -0.525
5 -0.264 x1 0.0598
6 -0.368 x2 -0.573
# pipeline
resp <- db_pivot %>%
  group_by(var_name) %>%                      # for each var group
  nest() %>%                                  # nest the dataset
  mutate(lm_model = map(data, function(.x) {  # apply lm to each dataset
    lm(y ~ ., data = .x)
  })) %>%
  mutate(                                     # for each fitted lm model
    coef_stats = map(lm_model, tidy),         # use broom to extract coef statistics from lm model
    model_stats = map(lm_model, glance)       # use broom to extract regression stats from lm model
  )
head(resp)
# A tibble: 3 x 5
# Groups: var_name [3]
var_name data lm_model coef_stats model_stats
<chr> <list> <list> <list> <list>
1 x1 <tibble [5 x 2]> <lm> <tibble [2 x 5]> <tibble [1 x 11]>
2 x2 <tibble [5 x 2]> <lm> <tibble [2 x 5]> <tibble [1 x 11]>
3 x3 <tibble [5 x 2]> <lm> <tibble [2 x 5]> <tibble [1 x 11]>
# coefs
resp %>%
  unnest(coef_stats) %>%
  select(-data, -lm_model, -model_stats)
# A tibble: 6 x 6
# Groups: var_name [3]
var_name term estimate std.error statistic p.value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 x1 (Intercept) -1.14 0.548 -2.08 0.129
2 x1 value -1.16 0.829 -1.40 0.257
3 x2 (Intercept) -0.404 0.372 -1.09 0.356
4 x2 value -0.985 0.355 -2.77 0.0694
5 x3 (Intercept) -0.707 0.755 -0.936 0.418
6 x3 value -0.206 0.725 -0.284 0.795
# R2
resp %>%
  unnest(model_stats) %>%
  select(-data, -lm_model, -coef_stats)
# A tibble: 3 x 12
# Groups: var_name [3]
var_name r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC deviance df.residual
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <dbl> <int>
1 x1 0.394 0.192 1.12 1.95 0.257 2 -6.37 18.7 17.6 3.74 3
2 x2 0.719 0.626 0.760 7.69 0.0694 2 -4.44 14.9 13.7 1.73 3
3 x3 0.0261 -0.298 1.42 0.0805 0.795 2 -7.55 21.1 19.9 6.01 3
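To get something closer to the single table you asked for (one row per variable with the coefficient statistics and R-squared together), a sketch joining the two unnested pieces:
resp %>%
  unnest(coef_stats) %>%
  select(var_name, term, estimate, statistic) %>%
  left_join(resp %>% unnest(model_stats) %>% select(var_name, r.squared),
            by = "var_name")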
