Unlisting multiple columns in an R data frame

I have a dataframe with values for multiple macro variables. When I compute the log of the values and then the log differences, the variables are turned into lists, which causes problems with my script later on.
Example code:
#Compute log of relevant macrovariables
macro[,c("hp", "unem", "m1", "inc")] <- log(macro[,c("hp", "unem", "m1", "inc")])
colnames(macro)[2:5] <- paste(colnames(macro)[2:5], "log", sep = "_")
#Computing log differences
macro$ldiff_hp <- c(-diff(macro$hp_log), na.omit)
I'm trying to unlist the columns and convert them to numeric with either of the following:
#Alternative 1
macro[,15:19]<- unlist(as.numeric(macro[,15:19]))
#Alternative 2
macro[,15:19] <- sapply(macro[,15:19],as.numeric)
It gives me the following error output:
> macro[,15:19]<- unlist(as.numeric(macro[,15:19]))
Error in unlist(as.numeric(macro[, 15:19])) :
(list) object cannot be coerced to type 'double'
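For context, a minimal sketch (not the original data) of why this happens: c() promotes the whole result to a list as soon as a non-vector such as the function na.omit is appended, and as.numeric() on a multi-column data-frame subset, which is itself a list, then fails with exactly this message.
x <- c(diff(log(1:5)), na.omit)  # appends the function na.omit itself, not a call to it
class(x)
#> [1] "list"
d <- data.frame(a = 1:3, b = 4:6)
# as.numeric(d[, 1:2])           # Error: (list) object cannot be coerced to type 'double'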

Using the economics dataset from ggplot2 as example data and dplyr's lag() function, the log-differenced variables can be computed like so:
library(ggplot2)
library(dplyr)
macro <- ggplot2::economics
vars <- c("uempmed", "psavert")
vars_log <- paste(vars, "log", sep = "_")
vars_ldiff <- paste(vars, "ldiff", sep = "_")
#Compute log of relevant macrovariables
macro[, vars_log] <- sapply(macro[, vars], log)
# Lag values
macro[, vars_ldiff] <- sapply(macro[, vars_log], dplyr::lag)
# First Difference of logs
macro[, vars_ldiff] <- macro[, vars_log] - macro[, vars_ldiff]
macro
#> # A tibble: 574 x 10
#> date pce pop psavert uempmed unemploy uempmed_log psavert_log
#> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1967-07-01 507. 198712 12.6 4.5 2944 1.50 2.53
#> 2 1967-08-01 510. 198911 12.6 4.7 2945 1.55 2.53
#> 3 1967-09-01 516. 199113 11.9 4.6 2958 1.53 2.48
#> 4 1967-10-01 512. 199311 12.9 4.9 3143 1.59 2.56
#> 5 1967-11-01 517. 199498 12.8 4.7 3066 1.55 2.55
#> 6 1967-12-01 525. 199657 11.8 4.8 3018 1.57 2.47
#> 7 1968-01-01 531. 199808 11.7 5.1 2878 1.63 2.46
#> 8 1968-02-01 534. 199920 12.3 4.5 3001 1.50 2.51
#> 9 1968-03-01 544. 200056 11.7 4.1 2877 1.41 2.46
#> 10 1968-04-01 544 200208 12.3 4.6 2709 1.53 2.51
#> # ... with 564 more rows, and 2 more variables: uempmed_ldiff <dbl>,
#> # psavert_ldiff <dbl>
Created on 2020-03-23 by the reprex package (v0.3.0)
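A further hedged sketch: if you would rather keep base diff() instead of dplyr::lag(), padding the difference with a leading NA avoids the list problem entirely, because nothing non-numeric is ever concatenated (uempmed_ldiff2 is just an illustrative name).
# pad with NA so the length matches and the column stays numeric
macro$uempmed_ldiff2 <- c(NA, diff(macro$uempmed_log))
all.equal(macro$uempmed_ldiff2, macro$uempmed_ldiff)
#> [1] TRUE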

Related

R: paired t-test on multiple columns

I am trying to run a t-test on multiple columns. I am basically trying to find the change from baseline to year 1 for a number of joint angles, and I only want to conduct this on the study side. Below is an image with the first few rows and columns of the data (Sample Data).
I have tried using both of these functions without success:
Code 1:
res <- FAI_SLS %>%
filter(study_side == "Study")%>%
select(-id,-subject,-activity,-side,-study_side,-year) %>%
map_df(~ broom::tidy(t.test(. ~ year)), .id = 'var')
I get the following error:
Error in eval(predvars, data, env) : object 'year' not found
I tried taking out -year but I still have the same issue.
Code 2:
t(sapply(FAI_SLS%>%filter(study_side == "Study")%>%select(-id,-subject,-activity,-side,-study_side,-year), function(x)
unlist(t.test(x~FAI_SLS$year)[c("estimate","p.value","statistic","conf.int")])))
I get the following error:
Error in h(simpleError(msg, call)) :
error in evaluating the argument 'x' in selecting a method for function 't': variable lengths differ (found for 'FAI_SLS$year')
Again I tried taking -year out without success.
Any suggestions on how I can fix this? Thanks
In Code 1, year has already been dropped by select() and map_df() passes each column on its own, so t.test(. ~ year) cannot find year; in Code 2, FAI_SLS$year is the unfiltered column, so its length no longer matches the filtered data. Instead, try fitting the t-test within summarise() on all the columns you want to test (selected in across()), where year remains available. Here's an example with a different dataset:
library(dplyr)
library(tidyr)
data("storms")
storms %>%
filter(year %in% c(2019, 2020)) %>%
summarise(across(-c(name, year, status, category),
~broom::tidy(t.test(. ~ year)))) %>%
pivot_longer(everything(), names_to = "variable") %>%
unnest(value)
#> # A tibble: 9 × 11
#> variable estimate estimate1 estimate2 statistic p.value parameter conf.low
#> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 month 0.0917 8.93 8.84 1.15 2.52e- 1 892. -0.0654
#> 2 day 4.29 18.2 13.9 7.49 2.34e-13 641. 3.17
#> 3 hour -0.0596 9.13 9.19 -0.128 8.99e- 1 687. -0.978
#> 4 lat 2.14 25.9 23.7 3.75 1.94e- 4 668. 1.02
#> 5 long 6.06 -60.7 -66.8 4.27 2.25e- 5 736. 3.27
#> 6 wind 8.42 58.8 50.4 4.42 1.18e- 5 529. 4.68
#> 7 pressure -4.46 989. 993. -3.03 2.59e- 3 537. -7.35
#> 8 tropicalst… 7.39 153. 145. 0.810 4.18e- 1 701. -10.5
#> 9 hurricane_… 10.9 24.1 13.2 3.92 1.02e- 4 508. 5.45
#> # … with 3 more variables: conf.high <dbl>, method <chr>, alternative <chr>
Created on 2022-06-02 by the reprex package (v2.0.1)
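Applied to the question's own data, this would look roughly as follows; a hedged sketch only, assuming the column names from the original select() call, that the remaining columns are numeric, and that year has exactly two levels after filtering.
library(dplyr)
library(tidyr)
FAI_SLS %>%
  filter(study_side == "Study") %>%
  summarise(across(-c(id, subject, activity, side, study_side, year),
                   ~ broom::tidy(t.test(. ~ year)))) %>%
  pivot_longer(everything(), names_to = "variable") %>%
  unnest(value)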

Summarise multiple columns using weighted t-test

I have the following data and want to calculate weighted p-values. I reviewed "dplyr summarise multiple columns using t.test", but my version should use weights. I can use Code 2 to do this for one column at a time, but there are over 30 columns. How do I calculate weighted p-values efficiently?
Code 1
# A tibble: 877 x 5
cat population farms farmland weight
<chr> <dbl> <dbl> <dbl> <dbl>
1 Treated 9.89 8.00 12.3 1
2 Control 10.3 7.81 12.1 0.714
3 Control 10.2 8.04 12.4 0.156
4 Control 10.3 7.97 12.1 0.340
5 Control 10.9 8.87 12.7 2.85
6 Control 10.4 8.35 12.5 0.934
7 Control 10.5 8.58 12.9 0.193
8 Control 10.6 8.57 12.6 0.276
9 Control 10.2 8.54 12.5 0.344
10 Control 10.5 8.76 12.6 0.625
# … with 867 more rows
Code 2
wtd.t.test(
x = df$population[df$cat == "Treated"],
y = df$population[df$cat == "Control"],
weight = df$weight[df$cat == "Treated"],
weighty = df$weight[df$cat == "Control"])$coefficients[3]
We can use summarise with across
library(dplyr)
df %>%
summarise(across(c(population:farmland),
~ weights::wtd.t.test(x = .[cat == 'Treated'],
y = .[cat == 'Control'],
weight = weight[cat == 'Treated'],
weighty= weight[cat == 'Control'])$coefficients[3]))
Or using lapply/sapply
sapply(df[2:4], function(v)
weights::wtd.t.test(x = v[df$cat == "Treated"],
y = v[df$cat == "Control"],
weight = df$weight[df$cat == "Treated"],
weighty = df$weight[df$cat == "Control"])$coefficients[3])
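With 30+ outcome columns you may not want to spell out a range like population:farmland at all; any tidyselect helper works inside across(). A hedged sketch, assuming every numeric column other than weight is an outcome:
library(dplyr)
df %>%
  summarise(across(where(is.numeric) & !weight,
                   ~ weights::wtd.t.test(
                       x = .[cat == 'Treated'],
                       y = .[cat == 'Control'],
                       weight  = weight[cat == 'Treated'],
                       weighty = weight[cat == 'Control'])$coefficients[3]))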

Create new column based on regular expression match

Problem
I would like to create a new column for the relative standard deviation using the following formula: stdev * 100 / abs(mean). I have over 40 variables, each with their own stdev and mean (so 80 columns). What I would like to do is use regular expressions to calculate the relative standard deviation from the two columns (stdev and mean) based on their shared name prefix. For example, for the columns AceticAcid.stdev and AceticAcid.mean, calculate the relative standard deviation to automatically create a new column AceticAcid.rsd. The equation being: AceticAcid.stdev * 100 / abs(AceticAcid.mean).
Example Dataframe
print(df)
AceticAcid.mean AceticAcid.stdev Glucose.mean Glucose.stdev Propanol.mean Propanol.stdev
1 28.75775 0.911130 48.27333 4.4991249 144.4770 38.34122
2 78.83051 10.562110 28.13337 1.2304387 134.6402 31.76264
3 40.89769 17.848381 37.10283 0.2102977 132.0253 33.76568
4 88.30174 11.028700 32.90534 1.6396036 149.7135 21.56639
5 94.04673 9.132295 14.11699 4.7725182 132.7853 15.88455
Desired Output (Don't care about the order of the new columns)
print(df_rsd)
AceticAcid.mean AceticAcid.stdev Glucose.mean Glucose.stdev Propanol.mean Propanol.stdev AceticAcid.rsd Glucose.rsd Propanol.rsd
1 28.75775 0.911130 48.27333 4.4991249 144.4770 38.34122 3.168294 9.3201039 26.53795
2 78.83051 10.562110 28.13337 1.2304387 134.6402 31.76264 13.398504 4.3735921 23.59076
3 40.89769 17.848381 37.10283 0.2102977 132.0253 33.76568 43.641536 0.5667969 25.57515
4 88.30174 11.028700 32.90534 1.6396036 149.7135 21.56639 12.489788 4.9827894 14.40511
5 94.04673 9.132295 14.11699 4.7725182 132.7853 15.88455 9.710380 33.8069175 11.96258
Repetitive Attempt...
I do not want to write these out 40 times (there has to be a nice regex way to achieve this):
df_rsd <- df %>% mutate(AceticAcid.rsd = AceticAcid.stdev * 100 / abs(AceticAcid.mean),
Glucose.rsd = Glucose.stdev * 100 / abs(Glucose.mean),
Propanol.rsd = Propanol.stdev * 100 / abs(Propanol.mean))
Reproducible Data
structure(list(AceticAcid.mean = c(28.7577520124614, 78.8305135443807,
40.89769218117, 88.3017404004931, 94.0467284293845), AceticAcid.stdev = c(0.911129987798631,
10.5621097609401, 17.8483808878809, 11.0287002893165, 9.13229470606893
), Glucose.mean = c(48.2733338139951, 28.1333662476391, 37.1028254181147,
32.9053360782564, 14.1169873066247), Glucose.stdev = c(4.49912485200912,
1.2304386717733, 0.210297667654231, 1.63960359641351, 4.77251824573614
), Propanol.mean = c(144.476965803187, 134.64017030783, 132.025340688415,
149.713488831185, 132.785289955791), Propanol.stdev = c(38.3412187267095,
31.7626409884542, 33.7656808178872, 21.5663894917816, 15.884545892477
)), class = "data.frame", row.names = c(NA, -5L))
We can use split.default to split the dataset into a list of two-column data.frames, grouping columns by their name prefix (i.e., after removing the suffix that follows the dot). We then loop over the list with lapply, do the calculation, and assign the results as new columns in 'df':
out <- lapply(split.default(df, sub("\\..*", "", names(df))),
function(x) x[[2]]* 100/abs(x[[1]]))
df[paste0(names(out), ".rsd")] <- out
df
# AceticAcid.mean AceticAcid.stdev Glucose.mean Glucose.stdev Propanol.mean Propanol.stdev AceticAcid.rsd Glucose.rsd Propanol.rsd
#1 28.75775 0.911130 48.27333 4.4991249 144.4770 38.34122 3.168294 9.3201039 26.53795
#2 78.83051 10.562110 28.13337 1.2304387 134.6402 31.76264 13.398504 4.3735921 23.59076
#3 40.89769 17.848381 37.10283 0.2102977 132.0253 33.76568 43.641536 0.5667969 25.57515
#4 88.30174 11.028700 32.90534 1.6396036 149.7135 21.56639 12.489788 4.9827894 14.40511
#5 94.04673 9.132295 14.11699 4.7725182 132.7853 15.88455 9.710380 33.8069175 11.96258
Or with tidyverse
library(purrr)
library(dplyr)
library(stringr)
df %>%
split.default(str_remove(names(.), "\\..*")) %>%
map_dfc(~ .x[[2]] * 100/abs(.x[[1]])) %>%
rename_all(~ str_c(., '.rsd')) %>%
bind_cols(df, .)
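Or, if neither split.default() nor the purrr route appeals, a plain base-R loop over the name prefixes does the same job (a sketch that assumes, as in the posted data, that every prefix has exactly one .mean and one .stdev column):
prefixes <- unique(sub("\\..*", "", names(df)))
for (p in prefixes) {
  df[[paste0(p, ".rsd")]] <- df[[paste0(p, ".stdev")]] * 100 / abs(df[[paste0(p, ".mean")]])
}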
An alternative, also with the tidyverse:
library(tidyverse)
df_long <- df %>%
mutate(measurement_number=row_number(), .before=1) %>%
pivot_longer(cols=-measurement_number, names_to="var", values_to="value") %>%
separate(var, into=c("var", "indicator")) %>%
pivot_wider(id_cols=c("measurement_number", "var"), names_from = indicator, values_from=value) %>%
mutate(rsd=stdev * 100 / abs(mean)) %>%
arrange(var, measurement_number)
df_long
#> # A tibble: 15 x 5
#> measurement_number var mean stdev rsd
#> <int> <chr> <dbl> <dbl> <dbl>
#> 1 1 AceticAcid 28.8 0.911 3.17
#> 2 2 AceticAcid 78.8 10.6 13.4
#> 3 3 AceticAcid 40.9 17.8 43.6
#> 4 4 AceticAcid 88.3 11.0 12.5
#> 5 5 AceticAcid 94.0 9.13 9.71
#> 6 1 Glucose 48.3 4.50 9.32
#> 7 2 Glucose 28.1 1.23 4.37
#> 8 3 Glucose 37.1 0.210 0.567
#> 9 4 Glucose 32.9 1.64 4.98
#> 10 5 Glucose 14.1 4.77 33.8
#> 11 1 Propanol 144. 38.3 26.5
#> 12 2 Propanol 135. 31.8 23.6
#> 13 3 Propanol 132. 33.8 25.6
#> 14 4 Propanol 150. 21.6 14.4
#> 15 5 Propanol 133. 15.9 12.0
df_wide <- df_long %>%
pivot_wider(id_cols=c("measurement_number"),
names_from = c(var),
values_from = c(mean, stdev, rsd),
names_sep = ".")
df_wide
#> # A tibble: 5 x 10
#> measurement_num~ mean.AceticAcid mean.Glucose mean.Propanol stdev.AceticAcid
#> <int> <dbl> <dbl> <dbl> <dbl>
#> 1 1 28.8 48.3 144. 0.911
#> 2 2 78.8 28.1 135. 10.6
#> 3 3 40.9 37.1 132. 17.8
#> 4 4 88.3 32.9 150. 11.0
#> 5 5 94.0 14.1 133. 9.13
#> # ... with 5 more variables: stdev.Glucose <dbl>, stdev.Propanol <dbl>,
#> # rsd.AceticAcid <dbl>, rsd.Glucose <dbl>, rsd.Propanol <dbl>
Created on 2020-05-26 by the reprex package (v0.3.0)

Looking for function or formula to create table with means and standard deviations for many groups and many variables using tidyverse

I need to prepare a table with the means and standard deviations of many variables, broken down by each level of several demographic variables.
Consider the following data:
df <- tibble(place=c("London","Paris","London","Rome","Rome","Madrid","Madrid"),gender=c("m","f","f","f","m","m","f"), education = c(1,1,2,3,5,5,3), var1 = c(2.2,3.1,4.5,1,5,1.4,2.3),var2 = c(4.2,2.1,2.5,4,5,4.4,1.3),var3 = c(0.2,0.1,3.5,3,5,2.4,4.3))
I would like to get a dataframe that contains the grouping variables (place, gender, education) and their levels (e.g., London, Paris, etc.) in the first column and their means and standard deviations for each variable starting with var (var1, var2, var3) in additional columns.
I know how to do this for one group and several variables at a time. However, since I need to repeat this dozens of times I am looking for a way to automate this process. It would be great to have a function to which I simply need to pass (a) the names of the grouping variables (e.g., gender, education) and (b) the variables from which to get the M / SD (e.g. var1, var2).
The solution I look for should look like this (the stats are not correct in the example below):
my_results <- tibble(grouping_vars = c("place_London","place_Paris","place_Rome","place_Madrid","gender_m","gender_f","last_element"),mean_var1=c(1.3,2.5,4.5,1.7,2.5,3.6,4.0),sd_var1=c(0.01,0.41,0.21,0.12,0.02,0.38,0.28),mean_var2=c(4.3,4.5,4.0,1.2,2.5,1.6,2.3),sd_var2=c(0.21,0.1,0.1,0.32,0.22,0.18,0.08),mean_var3=c(2.3,2.5,2.0,3.2,3.5,0.6,5),sd_var3=c(0.51,0.15,0.51,0.52,0.52,0.15,0.48))
grouping_vars mean_var1 sd_var1 mean_var2 sd_var2 mean_var3 sd_var3
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 place_London 1.3 0.01 4.3 0.21 2.3 0.51
2 place_Paris 2.5 0.41 4.5 0.1 2.5 0.15
3 place_Rome 4.5 0.21 4 0.1 2 0.51
4 place_Madrid 1.7 0.12 1.2 0.32 3.2 0.52
5 gender_m 2.5 0.02 2.5 0.22 3.5 0.52
6 gender_f 3.6 0.38 1.6 0.18 0.6 0.15
7 last_element 4 0.28 2.3 0.08 5 0.48
Since I typically work with tidyverse, I would particularly appreciate solutions that use these packages (probably dplyr or purrr?).
EDIT:
I thought there would be an elegant way to do this using map(). Maybe there is, but I haven't found it yet. In the meantime, I figured out a way that simply restructures the data into an appropriate long format and then computes the statistics.
# the grouping and measurement variables referenced below
grouping_vars <- c("place", "gender", "education")
test_vars <- c("var1", "var2", "var3")
df %>%
# all grouping vars need to be of the same type, here "factor" is most appropriate
mutate_at(grouping_vars, list(factor)) %>%
# pivot longer, so that each row is a unique combination of grouping variable and grouping level
pivot_longer(
cols = one_of(grouping_vars),
names_to = "group_var",
values_to = "group_level"
) %>%
# merge grouping variable and group level into a single column
unite(var_level,group_var,group_level, sep="_") %>%
# group by group level
group_by(var_level) %>%
# compute means and sd for each test variable
summarise_at(test_vars, list(~mean(., na.rm = TRUE), ~sd(., na.rm = TRUE)))
The result seems fine; e.g., the mean of var1 for the two people who live in London is (2.2 + 4.5) / 2 = 3.35.
# A tibble: 10 x 7
var_level var1_mean var2_mean var3_mean var1_sd var2_sd var3_sd
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 education_1 2.65 3.15 0.15 0.636 1.48 0.0707
2 education_2 4.5 2.5 3.5 NA NA NA
3 education_3 1.65 2.65 3.65 0.919 1.91 0.919
4 education_5 3.2 4.7 3.7 2.55 0.424 1.84
5 gender_f 2.72 2.48 2.72 1.47 1.13 1.83
6 gender_m 2.87 4.53 2.53 1.89 0.416 2.40
7 place_London 3.35 3.35 1.85 1.63 1.20 2.33
8 place_Madrid 1.85 2.85 3.35 0.636 2.19 1.34
9 place_Paris 3.1 2.1 0.1 NA NA NA
10 place_Rome 3 4.5 4 2.83 0.707 1.41
Any thoughts on possible risks of this approach or how this could be improved?
One option is the describeBy function from psych:
library(psych)
describeBy(df,group = c("gender","education"), mat= TRUE)
Then subset what you want from there.
Another, surprisingly simple option with dplyr:
library(dplyr)
group.vars <- c("gender","education")
measure.vars <- c("var1","var2")
df %>%
group_by_at(group.vars) %>%
summarize_at(measure.vars,
             list(mean = ~ mean(.), sd = ~ sd(.)))
# A tibble: 5 x 6
# Groups: gender [2]
gender education var1_mean var2_mean var1_sd var2_sd
<chr> <dbl> <dbl> <dbl> <dbl> <dbl>
1 f 1 3.1 2.1 NA NA
2 f 2 4.5 2.5 NA NA
3 f 3 1.65 2.65 0.919 1.91
4 m 1 2.2 4.2 NA NA
5 m 5 3.2 4.7 2.55 0.424
You can continue adding additional functions to that list. For every element, its name will be appended to the variable name and its result will become the column values. Recall that ~ is shorthand for an anonymous function, with . standing in for the argument.
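To get exactly the interface the question asks for (a function that takes the grouping variables and the measurement variables), the long-format approach from the EDIT can be wrapped up as below. This is a hedged sketch using the current across()/all_of() verbs rather than the superseded *_at() ones, and group_level_summary is a hypothetical name.
library(dplyr)
library(tidyr)
group_level_summary <- function(data, grouping_vars, test_vars) {
  data %>%
    # coerce grouping columns to character so mixed types can share one column
    mutate(across(all_of(grouping_vars), as.character)) %>%
    pivot_longer(all_of(grouping_vars),
                 names_to = "group_var", values_to = "group_level") %>%
    unite(var_level, group_var, group_level, sep = "_") %>%
    group_by(var_level) %>%
    summarise(across(all_of(test_vars),
                     list(mean = ~ mean(.x, na.rm = TRUE),
                          sd   = ~ sd(.x, na.rm = TRUE))),
              .groups = "drop")
}
group_level_summary(df, c("place", "gender", "education"), c("var1", "var2", "var3"))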

calculating qchisq in on a sparklyr tbl

I need to use the qchisq function on a column of a sparklyr data frame.
The problem is that the qchisq function does not seem to be implemented in Spark. If I am reading the error message below correctly, sparklyr tried to execute a function called "QCHISQ", which exists neither in Hive SQL nor in Spark.
In general, is there a way to run arbitrary functions that are not implemented in Hive or Spark, with sparklyr? I know about spark_apply, but haven't figured out how to configure it yet.
> mydf = data.frame(beta=runif(100, -5, 5), pval = runif(100, 0.001, 0.1))
> mydf_tbl = copy_to(con, mydf)
> mydf_tbl
# Source: table<mydf> [?? x 2]
# Database: spark_connection
beta pval
<dbl> <dbl>
1 3.42 0.0913
2 -1.72 0.0629
3 0.515 0.0335
4 -3.12 0.0717
5 -2.12 0.0253
6 1.36 0.00640
7 -3.33 0.0896
8 1.36 0.0235
9 0.619 0.0414
10 4.73 0.0416
> mydf_tbl %>% mutate(se = sqrt(beta^2/qchisq(pval)))
Error: org.apache.spark.sql.AnalysisException: Undefined function: 'QCHISQ'.
This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 49
As you noted you can use spark_apply:
mydf_tbl %>%
spark_apply(function(df)
dplyr::mutate(df, se = sqrt(beta^2/qchisq(pval, df = 12))))
# # Source: table<sparklyr_tmp_14bd5feacf5> [?? x 3]
# # Database: spark_connection
# beta pval X3
# <dbl> <dbl> <dbl>
# 1 1.66 0.0763 0.686
# 2 0.153 0.0872 0.0623
# 3 2.96 0.0485 1.30
# 4 4.86 0.0349 2.22
# 5 -1.82 0.0712 0.760
# 6 2.34 0.0295 1.10
# 7 3.54 0.0297 1.65
# 8 4.57 0.0784 1.88
# 9 4.94 0.0394 2.23
# 10 -0.610 0.0906 0.246
# # ... with more rows
but fair warning - it is embarrassingly slow. Unfortunately you don't have an alternative here, short of writing your own Scala / Java extensions.
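One small refinement, hedged: if the unnamed X3 column in the output above bothers you, spark_apply() also accepts a columns argument to name the result (assuming a sparklyr version that supports it); df = 12 is simply carried over from the example above.
mydf_tbl %>%
  spark_apply(
    function(df) dplyr::mutate(df, se = sqrt(beta^2 / qchisq(pval, df = 12))),
    columns = c("beta", "pval", "se")
  )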
In the end I've used a horrible hack, which works fine for this case.
Another solution would have been to write a User Defined Function (UDF), but sparklyr doesn't support it yet: https://github.com/rstudio/sparklyr/issues/1052
This is the hack I've used. In short, I precompute a qchisq table, upload it as a sparklyr object, then join. If I compare this with results calculated on a local data frame, I get a correlation of r=0.99999990902236146617.
#' @param n number of significant digits to use
> check_precomputed_strategy = function(n) {
chisq = data.frame(pval=seq(0, 1, 1/(10**(n)))) %>%
mutate(qval=qchisq(pval, df=1, lower.tail = FALSE)) %>%
mutate(pval_s = as.character(round(as.integer(pval*10**n),0)))
chisq %>% head %>% print
chisq_tbl = copy_to(con, chisq, overwrite=T)
mydf = data.frame(beta=runif(100, -5, 5), pval = runif(100, 0.001, 0.1)) %>%
mutate(se1 = sqrt(beta^2/qchisq(pval, df=1, lower.tail = FALSE)))
mydf_tbl = copy_to(con, mydf)
mydf_tbl.up = mydf_tbl %>%
mutate(pval_s=as.character(round(as.integer(pval*10**n),0))) %>%
left_join(chisq_tbl, by="pval_s") %>%
mutate(se=sqrt(beta^2 / qval)) %>%
collect %>%
filter(!duplicated(beta))
mydf_tbl.up %>% head %>% print
mydf_tbl.up %>% filter(complete.cases(.)) %>% nrow %>% print
mydf_tbl.up %>% filter(complete.cases(.)) %>% select(se, se1) %>% cor
}
> check_precomputed_strategy(4)
pval qval pval_s
1 0.00000000000000000000000 Inf 0
2 0.00010000000000000000479 15.136705226623396570 1
3 0.00020000000000000000958 13.831083619091122827 2
4 0.00030000000000000002793 13.070394140069462097 3
5 0.00040000000000000001917 12.532193305401813532 4
6 0.00050000000000000001041 12.115665146397173402 5
# A tibble: 6 x 8
beta pval.x se1 myvar pval_s pval.y qval se
<dbl> <dbl> <dbl> <dbl> <chr> <dbl> <dbl> <dbl>
1 3.42 0.0913 2.03 1. 912 0.0912 2.85 2.03
2 -1.72 0.0629 0.927 1. 628 0.0628 3.46 0.927
3 0.515 0.0335 0.242 1. 335 0.0335 4.52 0.242
4 -3.12 0.0717 1.73 1. 716 0.0716 3.25 1.73
5 -2.12 0.0253 0.947 1. 253 0.0253 5.00 0.946
6 1.36 0.00640 0.498 1. 63 0.00630 7.46 0.497
[1] 100
se se1
se 1.00000000000000000000 0.99999990902236146617
se1 0.99999990902236146617 1.00000000000000000000
