Splitting a list of output containing multiple factors in R

Let's say I have these three vectors:
time <- c(306,455,1010,210,883,1022,310,361,218,166)
status <- c(0,1,0,1,0,0,1,0,1,1)
gender <- c("Male","Male","Female","Male","Male","Male","Female","Female","Female","Female")
and I want to run a survival analysis and get the summary.
library(survival)
A <- survfit(Surv(time, status) ~ gender)
summary(A, censored = TRUE)
The output would be like this:
> summary(A, censored = TRUE)
Call: survfit(formula = Surv(time, status) ~ gender)
gender=Female
time n.risk n.event survival std.err lower 95% CI upper 95% CI
166 5 1 0.8 0.179 0.516 1
218 4 1 0.6 0.219 0.293 1
310 3 1 0.4 0.219 0.137 1
361 2 0 0.4 0.219 0.137 1
1010 1 0 0.4 0.219 0.137 1
gender=Male
time n.risk n.event survival std.err lower 95% CI upper 95% CI
210 5 1 0.800 0.179 0.516 1
306 4 0 0.800 0.179 0.516 1
455 3 1 0.533 0.248 0.214 1
883 2 0 0.533 0.248 0.214 1
1022 1 0 0.533 0.248 0.214 1
My question is: is there any way I can split the output into Male and Female parts? For example:
output_Female <- ?????
output_Female
time n.risk n.event survival std.err lower 95% CI upper 95% CI
166 5 1 0.8 0.179 0.516 1
218 4 1 0.6 0.219 0.293 1
310 3 1 0.4 0.219 0.137 1
361 2 0 0.4 0.219 0.137 1
1010 1 0 0.4 0.219 0.137 1
output_Male <- ?????
output_Male
time n.risk n.event survival std.err lower 95% CI upper 95% CI
210 5 1 0.800 0.179 0.516 1
306 4 0 0.800 0.179 0.516 1
455 3 1 0.533 0.248 0.214 1
883 2 0 0.533 0.248 0.214 1
1022 1 0 0.533 0.248 0.214 1

Here is an option using tidy from broom:
library(broom)
library(dplyr)
tidy(A, censored = TRUE) %>%
  split(.$strata)
Or with base R
# capture the printed summary and split it on the "gender=" strata headers
txt <- capture.output(summary(A, censored = TRUE))
ind <- cumsum(grepl("gender=", txt))
lst <- lapply(split(txt[ind > 0], ind[ind > 0]), function(x)
  read.table(text = x[-(1:2)], header = FALSE))
# rebuild the column names from the printed header line
nm1 <- scan(text = gsub("\\s+[0-9]|%\\s+", ".", txt[4]), quiet = TRUE, what = "")
lst <- lapply(lst, setNames, nm1)
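Either way you get a list with one element per stratum, which you can assign to the names from the question. A sketch: with the tidy approach the elements are named by strata, while the base R lst above is indexed in the order the strata appear.
out <- tidy(A, censored = TRUE) %>% split(.$strata)
output_Female <- out[["gender=Female"]]
output_Male <- out[["gender=Male"]]
# or, with the base R version: output_Female <- lst[[1]]; output_Male <- lst[[2]]
output_Female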

Related

How do I subtract each element from the column average and divide it by the column standard deviation?

I am quite new to R, so I need some help working out this problem. I have a data frame of daily rainfall values for different regions (AEZ).
The output needs to be another table that contains (individual rainfall - column average) / column standard deviation.
For example, in the table below, for 01-Jan and AEZ 3 it should take (0.0402 - Average(01-Jan)) / SD(01-Jan). This calculation needs to be run for each AEZ, and the output will be another table with the results of these calculations.
AEZ `01-Jan` `02-Jan` `03-Jan` `04-Jan` `05-Jan` `06-Jan` `07-Jan`
1 3 0.0402 0.0044 0.0998 0.142 0.0061 0.0267 0.0351
2 12 0.0143 0.0027 0.0027 0.0029 0.0317 0.0012 0.0012
3 48 0 0 0.0026 0.0015 0.0019 0 0
4 77 0 0 0.0059 0.0124 0.0048 0.0009 0
5 160 0.0261 0.0173 0.057 0.0221 0.0892 0 0.0003
6 162 0.167 0.0037 0.0041 0.0683 0.102 0.199 0.0308
7 178 0.0062 0.0033 0.0808 0.101 0.0033 0.0023 0.0315
This will standardise (center and scale) the original data frame:
df[,-1] <- scale(df[,-1], center = TRUE, scale = TRUE)
To scale a copy do:
foo <- df
foo[,-1] <- scale(foo[,-1], center = TRUE, scale = TRUE)
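For reference, scale() is doing the same arithmetic as this minimal base-R sketch (assuming df is a plain data frame with the AEZ column first and no missing values):
foo <- df
foo[, -1] <- lapply(foo[, -1], function(x) (x - mean(x)) / sd(x))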
We could use dplyr:
library(dplyr)
data %>%
  mutate(across(-AEZ, ~ (.x - mean(.x)) / sd(.x)))
which returns
# A tibble: 7 x 8
AEZ `01-Jan` `02-Jan` `03-Jan` `04-Jan` `05-Jan` `06-Jan` `07-Jan`
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 3 0.0663 -0.0145 1.51 1.67 -0.647 -0.0835 1.22
2 12 -0.369 -0.302 -0.793 -0.857 -0.0563 -0.429 -0.751
3 48 -0.610 -0.759 -0.795 -0.882 -0.744 -0.445 -0.821
4 77 -0.610 -0.759 -0.717 -0.684 -0.677 -0.433 -0.821
5 160 -0.171 2.17 0.495 -0.508 1.27 -0.445 -0.804
6 162 2.20 -0.133 -0.760 0.332 1.56 2.25 0.969
7 178 -0.505 -0.201 1.06 0.926 -0.711 -0.414 1.01
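As a quick sanity check (my addition, not part of the original answer), every standardised column should now have mean 0 and standard deviation 1:
data %>%
  mutate(across(-AEZ, ~ (.x - mean(.x)) / sd(.x))) %>%
  summarise(across(-AEZ, list(mean = mean, sd = sd)))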

ggpubr's compare_means and base R's pairwise.t.test give different results

First time posting to stackoverflow, I hope somebody can help me. Thanks in advance!
I wanted to use the R package ggpubr to create a bar graph showing the expression of a gene in different treatment groups. However, I noticed that its compare_means (or stat_compare_means) function returns very different p-values for the all-groups comparison than the base R function pairwise.t.test: some values are much higher and some much lower. Does the ggpubr function use a more conservative assumption? Here are my data and a code sample:
Target.Name Group CT dCT f.change
81 Gen1 300 23.911 1.900 0.26794337
82 Gen1 300 24.990 3.190 0.10957572
83 Gen1 300 24.504 2.646 0.15965172
84 Gen1 30 26.379 4.486 0.04462512
85 Gen1 30 26.576 4.366 0.04852930
86 Gen1 30 27.154 4.912 0.03321549
87 Gen1 3 27.317 4.923 0.03298605
88 Gen1 3 27.119 5.288 0.02559490
89 Gen1 3 27.313 5.691 0.01935701
90 Gen1 0.3 27.388 5.857 0.01725311
91 Gen1 0.3 26.911 5.104 0.02909671
92 Gen1 0.3 26.872 5.816 0.01773816
93 Gen1 0 26.371 5.502 0.02206648
94 Gen1 0 27.283 5.778 0.01822421
95 Gen1 0 27.168 5.618 0.02034757
#-----------------------------------------
compare_means(dat_subset, formula = f.change ~ Group, method = "t.test")
pairwise.t.test(dat_subset$f.change, dat_subset$Group)
And the output is
> compare_means(dat_subset, formula = f.change ~ Group, method = "t.test")
# A tibble: 10 x 8
.y. group1 group2 p p.adj p.format p.signif method
<chr> <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
1 f.change 0 0.3 0.799 0.9 0.799 ns T-test
2 f.change 0 3 0.278 0.83 0.278 ns T-test
3 f.change 0 30 0.0351 0.32 0.035 * T-test
4 f.change 0 300 0.0767 0.54 0.077 ns T-test
5 f.change 0.3 3 0.450 0.9 0.450 ns T-test
6 f.change 0.3 30 0.0271 0.27 0.027 * T-test
7 f.change 0.3 300 0.0767 0.54 0.077 ns T-test
8 f.change 3 30 0.0573 0.46 0.057 ns T-test
9 f.change 3 300 0.0809 0.54 0.081 ns T-test
10 f.change 30 300 0.0980 0.54 0.098 ns T-test
> pairwise.t.test(dat_subset$f.change, dat_subset$Group)
Pairwise comparisons using t tests with pooled SD
data: dat_subset$f.change and dat_subset$Group
0 0.3 3 30
0.3 1.0000 - - -
3 1.0000 1.0000 - -
30 1.0000 1.0000 1.0000 -
300 0.0034 0.0034 0.0036 0.0071
P value adjustment method: holm
To obtain the same results, you'll have to specify that you don't want the variances to be pooled (pool.sd = FALSE): the default for pairwise.t.test is pool.sd = TRUE, while compare_means calls t.test, which defaults to the Welch test (var.equal = FALSE).
pairwise.t.test(x=dat_subset$f.change, g=dat_subset$Group, pool.sd = FALSE)
data: dat_subset$f.change and dat_subset$Group
0 0.3 3 30
0.3 0.90 - - -
3 0.83 0.90 - -
30 0.32 0.27 0.46 -
300 0.54 0.54 0.54 0.54
compare_means(dat_subset, formula = f.change ~ Group, method = "t.test")
# A tibble: 10 x 8
.y. group1 group2 p p.adj p.format p.signif method
<chr> <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
1 f.change 300 30 0.0980 0.54 0.098 ns T-test
2 f.change 300 3 0.0809 0.54 0.081 ns T-test
3 f.change 300 0.3 0.0767 0.54 0.077 ns T-test
4 f.change 300 0 0.0767 0.54 0.077 ns T-test
5 f.change 30 3 0.0573 0.46 0.057 ns T-test
6 f.change 30 0.3 0.0271 0.27 0.027 * T-test
7 f.change 30 0 0.0351 0.32 0.035 * T-test
8 f.change 3 0.3 0.450 0.9 0.450 ns T-test
9 f.change 3 0 0.278 0.83 0.278 ns T-test
10 f.change 0.3 0 0.799 0.9 0.799 ns T-test
Both claim to use "holm" as the default p.adjust method, but they seem to differ in whether they assume equal variances. I don't have enough of your data to truly test this hypothesis, but they do yield different results in this example, taken more or less from the help file:
data("ToothGrowth")
df <- ToothGrowth
ggpubr::compare_means(len ~ supp, df, method = "t.test")
#> # A tibble: 1 x 8
#> .y. group1 group2 p p.adj p.format p.signif method
#> <chr> <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
#> 1 len OJ VC 0.0606 0.061 0.061 ns T-test
ggpubr::compare_means(len ~ supp, df, method = "t.test", var.equal = TRUE)
#> # A tibble: 1 x 8
#> .y. group1 group2 p p.adj p.format p.signif method
#> <chr> <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
#> 1 len OJ VC 0.0604 0.06 0.06 ns T-test
pairwise.t.test(df$len, df$supp)
#>
#> Pairwise comparisons using t tests with pooled SD
#>
#> data: df$len and df$supp
#>
#> OJ
#> VC 0.06
#>
#> P value adjustment method: holm
pairwise.t.test(df$len, df$supp, pool.sd = FALSE)
#>
#> Pairwise comparisons using t tests with non-pooled SD
#>
#> data: df$len and df$supp
#>
#> OJ
#> VC 0.061
#>
#> P value adjustment method: holm
Created on 2020-05-08 by the reprex package (v0.3.0)
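The same difference shows up if you call t.test directly; a minimal check (my addition) that reproduces the two p-values above:
t.test(len ~ supp, data = ToothGrowth)$p.value                    # Welch (unpooled): ~0.0606
t.test(len ~ supp, data = ToothGrowth, var.equal = TRUE)$p.value  # pooled variance: ~0.0604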

Dynamic portfolio re-balancing if PF weights deviate by more than a threshold

It's not so hard to backtest a portfolio with given weights and a set rebalancing frequency (e.g. daily/weekly/...). There are R packages that do this, for example PerformanceAnalytics, or tidyquant's tq_portfolio, which builds on it.
I would like to backtest a portfolio that is re-balanced when the weights deviate by a certain threshold given in percentage points.
Say I have two equally weighted stocks and a threshold of +/-15 percentage points: I would rebalance back to the initial weights as soon as one of the weights exceeds 65% (or, equivalently, the other falls below 35%).
For example, take 3 stocks with equal weights (it should also be possible to set other weights).
library(dplyr)
set.seed(3)
n <- 6
rets <- tibble(period = rep(1:n, 3),
               stock = c(rep("A", n), rep("B", n), rep("C", n)),
               ret = c(rnorm(n, 0, 0.3), rnorm(n, 0, 0.2), rnorm(n, 0, 0.1)))
target_weights <- tibble(stock = c("A", "B", "C"), target_weight = 1/3)
rets_weights <- rets %>%
  left_join(target_weights, by = "stock")
rets_weights
# # A tibble: 18 x 4
# period stock ret target_weight
# <int> <chr> <dbl> <dbl>
# 1 1 A -0.289 0.333
# 2 2 A -0.0878 0.333
# 3 3 A 0.0776 0.333
# 4 4 A -0.346 0.333
# 5 5 A 0.0587 0.333
# 6 6 A 0.00904 0.333
# 7 1 B 0.0171 0.333
# 8 2 B 0.223 0.333
# 9 3 B -0.244 0.333
# 10 4 B 0.253 0.333
# 11 5 B -0.149 0.333
# 12 6 B -0.226 0.333
# 13 1 C -0.0716 0.333
# 14 2 C 0.0253 0.333
# 15 3 C 0.0152 0.333
# 16 4 C -0.0308 0.333
# 17 5 C -0.0953 0.333
# 18 6 C -0.0648 0.333
Here are the actual weights without rebalancing:
rets_weights_actual <- rets_weights %>%
  group_by(stock) %>%
  mutate(value = cumprod(1 + ret) * target_weight[1]) %>%
  group_by(period) %>%
  mutate(actual_weight = value / sum(value))
rets_weights_actual
# # A tibble: 18 x 6
# # Groups: period [6]
# period stock ret target_weight value actual_weight
# <int> <chr> <dbl> <dbl> <dbl> <dbl>
# 1 1 A -0.289 0.333 0.237 0.268
# 2 2 A -0.0878 0.333 0.216 0.228
# 3 3 A 0.0776 0.333 0.233 0.268
# 4 4 A -0.346 0.333 0.153 0.178
# 5 5 A 0.0587 0.333 0.162 0.207
# 6 6 A 0.00904 0.333 0.163 0.238
# 7 1 B 0.0171 0.333 0.339 0.383
# 8 2 B 0.223 0.333 0.415 0.437
# 9 3 B -0.244 0.333 0.314 0.361
# 10 4 B 0.253 0.333 0.393 0.458
# 11 5 B -0.149 0.333 0.335 0.430
# 12 6 B -0.226 0.333 0.259 0.377
# 13 1 C -0.0716 0.333 0.309 0.349
# 14 2 C 0.0253 0.333 0.317 0.335
# 15 3 C 0.0152 0.333 0.322 0.371
# 16 4 C -0.0308 0.333 0.312 0.364
# 17 5 C -0.0953 0.333 0.282 0.363
# 18 6 C -0.0648 0.333 0.264 0.385
So, if in any period any stock's weight moves above or below the threshold (for example 0.33 +/- 0.1), the portfolio weights should be reset to the initial weights.
This has to be done dynamically: there could be many periods and many stocks, and rebalancing may be necessary several times.
What I tried: I worked with lag and set the initial weights whenever the actual weights exceeded the threshold, but I was unable to do this dynamically, because the weights depend on the returns computed from the rebalanced weights.
Rebalancing upon deviation by more than a certain threshold is called percentage-of-portfolio rebalancing.
My solution iterates period by period and checks whether the upper or lower band was crossed; if so, we reset to the initial weights.
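The trigger boils down to a simple band check; as a hypothetical standalone helper (the name and example values are mine, not part of the code below):
# TRUE if any weight has drifted more than `threshold` away from its target
needs_rebalance <- function(current_weights, target_weights, threshold) {
  any(abs(current_weights - target_weights) > threshold)
}
needs_rebalance(c(0.31, 0.25, 0.19, 0.25), rep(0.25, 4), 0.05)  # TRUE: 0.31 and 0.19 are outside the band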
library(tidyverse)
library(tidyquant)

rets <- FANG %>%
  group_by(symbol) %>%
  mutate(ret = adjusted / lag(adjusted) - 1) %>%
  select(symbol, date, ret) %>%
  pivot_wider(names_from = "symbol", values_from = ret)

weights <- rep(0.25, 4)
threshold <- 0.05

r_out <- tibble()
i0 <- 1
trade_rebalance <- 1
pf_value <- 1

for (i in 1:nrow(rets)) {
  r <- rets[i0:i, ]
  j <- 0
  r_i <- r %>%
    mutate_if(is.numeric, replace_na, 0) %>%
    mutate_if(is.numeric, list(v = ~ pf_value * weights[j <<- j + 1] * cumprod(1 + .))) %>%
    mutate(pf = rowSums(select(., contains("_v")))) %>%
    mutate_at(vars(ends_with("_v")), list(w = ~ . / pf))
  touch_upper_band <- any(r_i[nrow(r_i), ] %>% select(ends_with("_w")) %>% unlist() > weights + threshold)
  touch_lower_band <- any(r_i[nrow(r_i), ] %>% select(ends_with("_w")) %>% unlist() < weights - threshold)
  if (touch_upper_band | touch_lower_band | i == nrow(rets)) {
    i0 <- i + 1
    r_out <- bind_rows(r_out, r_i %>% mutate(trade_rebalance = trade_rebalance))
    pf_value <- r_i[[nrow(r_i), "pf"]]
    trade_rebalance <- trade_rebalance + 1
  }
}
r_out %>% head()
# # A tibble: 6 x 15
# date FB AMZN NFLX GOOG FB_v AMZN_v NFLX_v GOOG_v pf FB_v_w AMZN_v_w NFLX_v_w GOOG_v_w trade_rebalance
# <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 2013-01-02 0 0 0 0 0.25 0.25 0.25 0.25 1 0.25 0.25 0.25 0.25 1
# 2 2013-01-03 -0.00821 0.00455 0.0498 0.000581 0.248 0.251 0.262 0.250 1.01 0.245 0.248 0.259 0.247 1
# 3 2013-01-04 0.0356 0.00259 -0.00632 0.0198 0.257 0.252 0.261 0.255 1.02 0.251 0.246 0.255 0.249 1
# 4 2013-01-07 0.0229 0.0359 0.0335 -0.00436 0.263 0.261 0.270 0.254 1.05 0.251 0.249 0.257 0.243 1
# 5 2013-01-08 -0.0122 -0.00775 -0.0206 -0.00197 0.259 0.259 0.264 0.253 1.04 0.251 0.250 0.255 0.245 1
# 6 2013-01-09 0.0526 -0.000113 -0.0129 0.00657 0.273 0.259 0.261 0.255 1.05 0.261 0.247 0.249 0.244 1
r_out %>% tail()
# # A tibble: 6 x 15
# date FB AMZN NFLX GOOG FB_v AMZN_v NFLX_v GOOG_v pf FB_v_w AMZN_v_w NFLX_v_w GOOG_v_w trade_rebalance
# <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 2016-12-22 -0.0138 -0.00553 -0.00727 -0.00415 0.945 1.10 1.32 1.08 4.45 0.213 0.247 0.297 0.243 10
# 2 2016-12-23 -0.00111 -0.00750 0.0000796 -0.00171 0.944 1.09 1.32 1.08 4.43 0.213 0.246 0.298 0.243 10
# 3 2016-12-27 0.00631 0.0142 0.0220 0.00208 0.950 1.11 1.35 1.08 4.49 0.212 0.247 0.301 0.241 10
# 4 2016-12-28 -0.00924 0.000946 -0.0192 -0.00821 1.11 1.12 1.10 1.11 4.45 0.250 0.252 0.247 0.250 11
# 5 2016-12-29 -0.00488 -0.00904 -0.00445 -0.00288 1.11 1.11 1.10 1.11 4.42 0.250 0.252 0.248 0.251 11
# 6 2016-12-30 -0.0112 -0.0200 -0.0122 -0.0140 1.09 1.09 1.08 1.09 4.36 0.251 0.250 0.248 0.251 11
Here we would have rebalanced 11 times.
r_out %>%
  mutate(performance = pf - 1) %>%
  ggplot(aes(x = date, y = performance)) +
  geom_line(data = FANG %>%
              group_by(symbol) %>%
              mutate(performance = adjusted / adjusted[1L] - 1),
            aes(color = symbol)) +
  geom_line(size = 1)
The approach is slow and using a loop is far from elegant. If anyone has a better solution, I would happily upvote and accept.

ggplot2: bubbles representing proportions by category?

I have this data:
# A tibble: 19 x 8
country Prop_A Prop_B Prop_C Prop_D Prop_E Prop_F Prop_G
<fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Austria 1 1 0.912 0.912 0.518 0.999 0.567
2 Belgium 1 1 0.821 1 0.687 0.0990 0.925
3 Denmark NA NA NA NA NA NA NA
4 France 0.750 1 0.361 0.345 0 0.0658 0.563
5 Germany 0.928 1 0.674 0.783 0.128 0.635 0.0828
6 Greece 0 1 0 0 0 1 0
7 Hungary 0.812 1 0.812 0.812 0 0.375 0.188
8 Israel 1 1 1 0.755 0.450 0.241 0.292
9 Italy 0.962 1 0.881 0.516 0.533 0 0.0230
10 Latvia 0 1 1 0 0 0 0
11 Lithuania 0.507 1 1 0.507 0 0 0
12 Malta 1 1 1 1 0 1 0
13 Netherlands 0.818 1 1 0.682 0.5 0.182 0.682
14 Portugal 0.829 1 1 0.829 0 0.610 0.509
15 Romania 1 1 1 1 0 0.273 1
16 Spain 1 1 1 0.787 0.215 0.191 0.653
17 Sweden 0.792 1 0.792 0.167 0.375 0 0
18 Switzerland 0.697 1 1 0.547 0.126 0.724 0.210
19 Turkey 1 1 0.842 0.775 0.585 0.810 0.117
0.812 represents 81% for proposal A in Hungary (row 7).
What I want is this kind of graphic:
https://zupimages.net/viewer.php?id=20/13/ob6z.png
I want to have "81%" in the bubble, countries in rows and the different "Props" in columns.
I've tried geom_tile, but it doesn't work. I don't know whether my data are badly structured or I just haven't found the right command.
Thanks for your help!
Here is one approach to making a bubble plot.
library(tidyverse)
df %>%
  mutate_at(vars(starts_with("Prop")), list(~ . * 100)) %>%
  pivot_longer(cols = starts_with("Prop"), names_to = c("Prop", "Type"), names_sep = "_") %>%
  ggplot(aes(x = Type, y = country, size = value, label = value)) +
  geom_point(shape = 21, fill = "white") +
  geom_text(size = 3) +
  scale_size(range = c(5, 15), guide = F)
Plot
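To get "81%" (with the percent sign) inside the bubbles as asked, one small tweak of the pipeline above (my addition, assuming the same df; the NA rows for Denmark are dropped first so they don't print as "NA%"):
df %>%
  mutate_at(vars(starts_with("Prop")), list(~ . * 100)) %>%
  pivot_longer(cols = starts_with("Prop"), names_to = c("Prop", "Type"), names_sep = "_") %>%
  filter(!is.na(value)) %>%
  ggplot(aes(x = Type, y = country, size = value, label = paste0(round(value), "%"))) +
  geom_point(shape = 21, fill = "white") +
  geom_text(size = 3) +
  scale_size(range = c(5, 15), guide = FALSE)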

Force model.matrix to follow the order of the terms in the formula in R

Let's create a data frame with fake data:
data_ex <- data.frame(y = runif(5, 0, 1), a1 = runif(5, 0, 1), b2 = runif(5, 0, 1),
                      c3 = runif(5, 0, 1), d4 = runif(5, 0, 1))
> data_ex
y a1 b2 c3 d4
1 0.162 0.221 0.483 0.989 0.558
2 0.445 0.854 0.732 0.723 0.259
3 0.884 0.041 0.893 0.985 0.947
4 0.944 0.718 0.338 0.238 0.592
5 0.094 0.867 0.026 0.334 0.314
The model's formula is as follows:
forml <- as.formula("y ~ a1 + b2 + a1:c3:d4 + a1:c3 + a1:b2 + a1:b2:c3")
> forml
y ~ a1 + b2 + a1:c3:d4 + a1:c3 + a1:b2 + a1:b2:c3
The resulting model.matrix is:
> as.matrix(model.matrix(forml, data_ex))
(Intercept) a1 b2 a1:c3 a1:b2 a1:c3:d4 a1:b2:c3
1 1 0.221 0.483 0.218 0.107 0.122 0.105
2 1 0.854 0.732 0.617 0.625 0.160 0.452
3 1 0.041 0.893 0.040 0.036 0.038 0.036
4 1 0.718 0.338 0.171 0.243 0.101 0.058
5 1 0.867 0.026 0.290 0.022 0.091 0.008
As you can see, the columns are reordered from the lowest interaction order to the highest.
I'm looking for a method that forces the model.matrix function to follow the EXACT order of the terms in the formula.
The resulting matrix should be like the following:
> Correct_matrix
(Intercept) a1 b2 a1:c3:d4 a1:c3 a1:b2 a1:b2:c3
1 1 0.221 0.483 0.122 0.218 0.107 0.105
2 1 0.854 0.732 0.160 0.617 0.625 0.452
3 1 0.041 0.893 0.038 0.040 0.036 0.036
4 1 0.718 0.338 0.101 0.171 0.243 0.058
5 1 0.867 0.026 0.091 0.290 0.022 0.008
You can create the terms object with keep.order = TRUE to preserve the order of the terms. The resulting object can be used with model.matrix.
model.matrix(terms(forml, keep.order = TRUE), data_ex)
The result (your numbers will differ, since data_ex was created with runif without a seed):
(Intercept) a1 b2 a1:c3:d4 a1:c3 a1:b2 a1:b2:c3
1 1 0.4604044 0.10968326 0.198301034 0.3015807 0.05049866 0.03307836
2 1 0.4795555 0.61339588 0.018934135 0.2205621 0.29415737 0.13529189
3 1 0.7560366 0.67036486 0.001418541 0.4465991 0.50682035 0.29938436
4 1 0.4490247 0.69179890 0.135388984 0.1376586 0.31063480 0.09523209
5 1 0.7198557 0.08595737 0.131564438 0.2918157 0.06187690 0.02508371
attr(,"assign")
[1] 0 1 2 3 4 5 6
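To confirm the ordering without eyeballing the full matrix, you can check the column names (my addition; the names are deterministic even though the runif values differ between runs):
X <- model.matrix(terms(forml, keep.order = TRUE), data_ex)
colnames(X)
# [1] "(Intercept)" "a1" "b2" "a1:c3:d4" "a1:c3" "a1:b2" "a1:b2:c3"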
