I am working on a problem set and absolutely cannot figure this one out. I think I've fried my brain to the point where it doesn't even make sense anymore.
Here is a look at the data ...
sex age chol tg ht wt sbp dbp vldl hdl ldl bmi
<chr> <int> <int> <int> <dbl> <dbl> <int> <int> <int> <int> <int> <dbl>
1 M 60 137 50 68.2 112. 110 70 10 53 74 2.40
2 M 26 154 202 82.8 185. 88 64 34 31 92 2.70
3 M 33 198 108 64.2 147 120 80 22 34 132 3.56
4 F 27 154 47 63.2 129 110 76 9 57 88 3.22
5 M 36 212 79 67.5 176. 130 100 16 37 159 3.87
6 F 31 197 90 64.5 121 122 78 18 58 111 2.91
7 M 28 178 163 66.5 167 118 68 19 30 135 3.78
8 F 28 146 60 63 105. 120 80 12 46 88 2.64
9 F 25 231 165 64 126 130 72 23 70 137 3.08
10 M 22 163 30 68.8 173 112 70 6 50 107 3.66
# … with 182 more rows
I must write a function, myTtest, to perform the following task:
Perform two-sample t-tests to compare a series of numeric variables between the two levels of a classification variable
The first argument, dat, is a data frame
The second argument, classVar, is a character vector of length 1. It is the name of the classification variable, such as 'sex.'
The third argument, numVar, is a character vector that contains the names of the numeric variables, such as c("age", "chol", "tg"). This means I need to perform three t-tests to compare those variables between males and females.
The function should return a data frame with the following variables: Varname, F.mean, M.mean, t (for t-statistics), df (for degrees of freedom), and p (for p-value).
I should be able to run this ...
myTtest(dat = chol, classVar = "sex", numVar = c("age", "chol", "tg"))
... and then get the data frame to appear.
Any help is greatly appreciated. I am pulling my hair out over this one! As well, as noted in my comment below, this has to be done without Tidyverse ... which is why I'm having so much trouble to begin with.
The intuition for this solution is that you can loop over your dependent variables, and call t.test() in each loop. Then save the results from each DV and stack them together in one big data frame.
I'll leave out some bits for you to fill in, but here's the gist:
First, some example data:
set.seed(123)
n <- 20
grp <- sample(c("m", "f"), n, replace = TRUE)
df <- data.frame(grp = grp, age = rnorm(n), chol = rnorm(n), tg = rnorm(n))
df
grp age chol tg
1 m 1.2240818 0.42646422 0.25331851
2 m 0.3598138 -0.29507148 -0.02854676
3 m 0.4007715 0.89512566 -0.04287046
4 f 0.1106827 0.87813349 1.36860228
5 m -0.5558411 0.82158108 -0.22577099
6 f 1.7869131 0.68864025 1.51647060
7 f 0.4978505 0.55391765 -1.54875280
8 f -1.9666172 -0.06191171 0.58461375
9 m 0.7013559 -0.30596266 0.12385424
10 m -0.4727914 -0.38047100 0.21594157
Now make a container that each of the model outputs will go into:
fits_df <- data.frame()
Loop over each DV and append the model output to fits_df each time with rbind:
for (dv in c("age", "chol", "tg")) {
  frml <- as.formula(paste0(dv, " ~ grp"))  # make a model formula: dv ~ grp
  fit <- t.test(frml, alternative = "two.sided", data = df)  # perform the t-test
  # hint: use str(fit) to figure out how to pull out each value you care about
  fit_df <- data.frame(
    dv = dv,
    f_mean = xxx,
    m_mean = xxx,
    t = xxx,
    df = xxx,
    p = xxx
  )
  fits_df <- rbind(fits_df, fit_df)
}
Your output will look like this:
fits_df
dv f_mean m_mean t df p
1 age -0.18558068 -0.04446755 -0.297 15.679 0.7704954
2 chol 0.07731514 0.22158672 -0.375 17.828 0.7119400
3 tg 0.09349567 0.23693052 -0.345 14.284 0.7352112
One note: When you're pulling out values from fit, you may get odd row names in your output data frame. This is due to the names property of the various fit attributes. You can get rid of these by using as.numeric() or as.character() wrappers around the values you pull from fit (for example, fit$statistic can be cleaned up with as.character(round(fit$statistic, 3))).
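For reference, here is one minimal way the filled-in extraction could look (a sketch, not the only valid answer; with the example data the factor levels sort as f then m, so fit$estimate[1] is the f mean; check your own level ordering):
fit_df <- data.frame(
  dv = dv,
  f_mean = as.numeric(fit$estimate[1]),     # "mean in group f"
  m_mean = as.numeric(fit$estimate[2]),     # "mean in group m"
  t = as.numeric(round(fit$statistic, 3)),  # t-statistic
  df = as.numeric(round(fit$parameter, 3)), # Welch degrees of freedom
  p = fit$p.value
)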
I have a dataset of successes, probabilities, and sample sizes that I am running binomial tests on.
Here is a sample of the data (note that the actual dataset has me run >100 binomial tests):
km n_1 prey_pred p0_prey_pred
<fct> <dbl> <int> <dbl>
80 93 12 0.119
81 1541 103 0.0793
83 316 5 0.0364
84 721 44 0.0796
89 866 58 0.131
I normally run this (example for first row):
n <- 93
p0 <- 0.119
successes <- 12
binom.test(successes, n, p0, "two.sided")
Exact binomial test
data: 12 and 93
number of successes = 12, number of trials = 93, p-value = 0.74822
alternative hypothesis: true probability of success is not equal to 0.119
95 percent confidence interval:
0.068487201 0.214548325
sample estimates:
probability of success
0.12903226
Is there a way to systematically run a binomial test on each row of the data, and then store all the output (p-value, confidence interval, probability of success) as separate columns?
I've tried the solution proposed here, but I am clearly missing something.
Using apply.
# apply() runs the test on each row; `colnames<-` labels the intermediate
# result by km so that, after t(), the km values become row names
res <- t(`colnames<-`(apply(dat, 1, FUN = function(x) {
  rr <- binom.test(x[3], x[2], x[4], "two.sided")  # successes, trials, p0
  with(rr, c(x, "2.5%" = conf.int[1], estimate = unname(estimate),
             "97.5%" = conf.int[2], p.value = unname(p.value)))
}), dat$km))
res
# km n_1 prey_pred p0_prey_pred 2.5% estimate 97.5% p.value
# 80 80 93 12 0.1190 0.068487201 0.12903226 0.21454832 7.482160e-01
# 81 81 1541 103 0.0793 0.054881013 0.06683971 0.08047927 7.307921e-02
# 83 83 316 5 0.0364 0.005157062 0.01582278 0.03653685 4.960168e-02
# 84 84 721 44 0.0796 0.044688325 0.06102635 0.08106220 7.311463e-02
# 89 89 866 58 0.1310 0.051245893 0.06697460 0.08572304 1.656621e-09
Edit
If you have multiple column sets in wide format (and for some reason want to stay there):
dat2 <- `colnames<-`(cbind(dat, dat[-1]), c("km", "n_1.1", "prey_pred.1", "p0_prey_pred.1",
"n_1.2", "prey_pred.2", "p0_prey_pred.2"))
dat2[1:3,]
# km n_1.1 prey_pred.1 p0_prey_pred.1 n_1.2 prey_pred.2 p0_prey_pred.2
# 1 80 93 12 0.1190 93 12 0.1190
# 2 81 1541 103 0.0793 1541 103 0.0793
# 3 83 316 5 0.0364 316 5 0.0364
you may do:
res2 <- t(`colnames<-`(apply(dat2, 1, FUN = function(x) {
  rr1 <- binom.test(x[3], x[2], x[4], "two.sided")
  rr2 <- binom.test(x[6], x[5], x[7], "two.sided")
  rrr1 <- with(rr1, c("2.5%.1" = conf.int[1], estimate.1 = unname(estimate),
                      "97.5%.1" = conf.int[2], p.value.1 = unname(p.value)))
  rrr2 <- with(rr2, c("2.5%.2" = conf.int[1], estimate.2 = unname(estimate),
                      "97.5%.2" = conf.int[2], p.value.2 = unname(p.value)))
  c(x, rrr1, rrr2)
}), dat2$km))
res2
# km n_1.1 prey_pred.1 p0_prey_pred.1 n_1.2 prey_pred.2 p0_prey_pred.2 2.5%.1
# 80 80 93 12 0.1190 93 12 0.1190 0.068487201
# 81 81 1541 103 0.0793 1541 103 0.0793 0.054881013
# 83 83 316 5 0.0364 316 5 0.0364 0.005157062
# 84 84 721 44 0.0796 721 44 0.0796 0.044688325
# 89 89 866 58 0.1310 866 58 0.1310 0.051245893
# estimate.1 97.5%.1 p.value.1 2.5%.2 estimate.2 97.5%.2 p.value.2
# 80 0.12903226 0.21454832 7.482160e-01 0.068487201 0.12903226 0.21454832 7.482160e-01
# 81 0.06683971 0.08047927 7.307921e-02 0.054881013 0.06683971 0.08047927 7.307921e-02
# 83 0.01582278 0.03653685 4.960168e-02 0.005157062 0.01582278 0.03653685 4.960168e-02
# 84 0.06102635 0.08106220 7.311463e-02 0.044688325 0.06102635 0.08106220 7.311463e-02
# 89 0.06697460 0.08572304 1.656621e-09 0.051245893 0.06697460 0.08572304 1.656621e-09
One could write this in a more nested way, but I recommend keeping things simple, so that others (and probably your future self) can more easily understand what's going on.
Data:
dat <- read.table(text="km n_1 prey_pred p0_prey_pred
80 93 12 0.119
81 1541 103 0.0793
83 316 5 0.0364
84 721 44 0.0796
89 866 58 0.131 ", header=TRUE)
You can define a function for this as suggested in the comments:
my_binom <- function(x, n, p){
res <- binom.test(x, n, p)
out <- data.frame(res$p.value, res$conf.int[1], res$conf.int[2], res$estimate)
names(out) <- c("p", "lower_ci", "upper_ci", "p_success")
rownames(out) <- NULL
return(out)
}
Then you can apply it to each row:
do.call("rbind.data.frame", apply(dat, 1, function(row_i) {
  my_binom(x = row_i["prey_pred"], n = row_i["n_1"], p = row_i["p0_prey_pred"])
}))
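As a variation (a sketch, using the same my_binom and dat as above), Map() iterates over the columns directly, which avoids apply() coercing the whole data frame to a matrix (a real concern if km is a factor or character column):
res <- do.call(rbind, Map(my_binom, dat$prey_pred, dat$n_1, dat$p0_prey_pred))
res$km <- dat$km  # keep track of which row each test came from
res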
Here's my for-loop version of resampling and refitting the model:
B <- 999
n <- nrow(butterfly)
estMat <- matrix(NA, B + 1, 2)
estMat[B + 1, ] <- model$coef  # model is the original fit: lm(Hk ~ inv.alt, butterfly)
for (i in 1:B) {
  resample <- butterfly[sample(1:n, n, replace = TRUE), ]
  re.model <- lm(Hk ~ inv.alt, resample)
  estMat[i, ] <- re.model$coef
}
I tried to avoid the for loop:
B <- 999
n <- nrow(butterfly)
resample <- replicate(B, butterfly[sample(1:n, replace = TRUE),], simplify = FALSE)
re.model <- lapply(resample, lm, formula = Hk ~ inv.alt)
re.model.coef <- sapply(re.model,coef)
estMat <- cbind(re.model.coef, model$coef)
It worked but didn't improve efficiency. Is there any way to vectorize this?
Sorry, not quite familiar with StackOverflow. Here's the dataset butterfly.
colony alt precip max.temp min.temp Hk
pd+ss 0.5 58 97 16 98
sb 0.8 20 92 32 36
wsb 0.57 28 98 26 72
jrc+jrh 0.55 28 98 26 67
sj 0.38 15 99 28 82
cr 0.93 21 99 28 72
mi 0.48 24 101 27 65
uo+lo 0.63 10 101 27 1
dp 1.5 19 99 23 40
pz 1.75 22 101 27 39
mc 2 58 100 18 9
hh 4.2 36 95 13 19
if 2.5 34 102 16 42
af 2 21 105 20 37
sl 6.5 40 83 0 16
gh 7.85 42 84 5 4
ep 8.95 57 79 -7 1
gl 10.5 50 81 -12 4
(Assuming butterfly$inv.alt <- 1/butterfly$alt)
You get the error because resample is not a list of resampled data.frames, which you can obtain with:
resample <- replicate(B, butterfly[sample(1:n, replace = TRUE),], simplify = FALSE)
Then the following should work:
re.model <- lapply(resample, lm, formula = Hk ~ inv.alt)
To extract coefficients from a list of models, re.model$coef does not work. The correct paths to the coefficients are re.model[[1]]$coef, re.model[[2]]$coef, and so on. You can get all of them with the following code:
re.model.coef <- sapply(re.model, coef)
Then you can combine it with the observed coefficients:
estMat <- cbind(re.model.coef, model$coef)
In fact, you can put all of them into replicate:
re.model.coef <- replicate(B, {
bf.rs <- butterfly[sample(1:n, replace = TRUE),]
coef(lm(formula = Hk ~ inv.alt, data = bf.rs))
})
estMat <- cbind(re.model.coef, model$coef)
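For completeness, the boot package wraps exactly this resample-and-refit pattern. A minimal sketch, assuming (as above) that butterfly already has the inv.alt column and model is the original fit:
library(boot)
coef_fun <- function(data, idx) coef(lm(Hk ~ inv.alt, data = data[idx, ]))
bt <- boot(butterfly, coef_fun, R = 999)
head(bt$t)                            # one row of (intercept, slope) per resample
estMat <- cbind(t(bt$t), model$coef)  # same orientation as the sapply version: 2 x (B + 1)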
I'm new to R and just starting with the outliers package. This is probably very easy, but could anybody tell me how to run several Grubbs tests at the same time? I have 20 columns and I want to test all of them simultaneously.
Thanks in advance
Edit: Sorry for not explaining well; I'll try. I started using R today, and I learned how to run a Grubbs test using grubbs.test(data$S1, type = 10, 11 or 20), and it works well. But I have a table with 20 columns, and I want to run the Grubbs test on each of them simultaneously. I can do it one by one, but I think there must be a way to do it faster.
I ran the code at How to repeat the Grubbs test and flag the outliers as well, and it works perfectly, but again, I would like to do it with my 20 samples.
As an example of my data:
S1 S2 S3 S4 S5 S6 S7
96 40 99 45 12 16 48
52 49 11 49 59 77 64
18 43 11 67 6 97 91
79 19 39 28 45 44 99
9 78 88 6 25 43 78
60 12 29 32 2 68 25
18 61 60 30 26 51 70
96 98 55 74 83 17 69
19 0 17 24 0 75 45
42 70 71 7 61 82 100
39 80 71 58 6 100 94
100 5 41 18 33 98 97
Hope this helps.
You can use lapply:
library(outliers)
df <- data.frame(a = runif(20), b = runif(20), c = runif(20))
tests <- lapply(df, grubbs.test)
# or with parameters:
tests <- lapply(df, grubbs.test, opposite = TRUE)
Results:
> tests
$a
Grubbs test for one outlier
data: X[[i]]
G = 1.80680, U = 0.81914, p-value = 0.6158
alternative hypothesis: highest value 0.963759744539857 is an outlier
$b
Grubbs test for one outlier
data: X[[i]]
G = 1.53140, U = 0.87008, p-value = 1
alternative hypothesis: highest value 0.975481075001881 is an outlier
$c
Grubbs test for one outlier
data: X[[i]]
G = 1.57910, U = 0.86186, p-value = 1
alternative hypothesis: lowest value 0.0136249314527959 is an outlier
You can access the results as follows:
> tests$a$statistic
G U
1.8067906 0.8191417
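If you'd rather have one summary table than a list, you could collect the key numbers like this (a sketch; grubbs.test() returns an htest object whose statistic holds G and U):
do.call(rbind, lapply(names(tests), function(nm) {
  tt <- tests[[nm]]
  data.frame(column = nm,
             G = unname(tt$statistic["G"]),
             U = unname(tt$statistic["U"]),
             p = tt$p.value,
             alternative = tt$alternative)
}))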
Hope this helps.
@Florian's answer can be extended a bit. For example, a fancier, easier-to-read result can be achieved with the purrr package and the tidyverse. This can be useful if you are comparing lots of groups:
Load necessary packages:
library(dplyr)
library(purrr)
library(tidyr)
library(outliers)
Create some data - we're going to use the same data as in Florian's answer, but transformed to a modern tibble in long format:
df <- tibble(a = runif(20),
b = runif(20),
c = runif(20)) %>%
# transform to long format
tidyr::gather(letter, value)
Then instead of apply functions we can use map and map_dbl from purrr:
df %>%
group_by(letter) %>%
nest() %>%
mutate(n = map_dbl(data, ~ nrow(.x)), # number of entries
G = map(data, ~ grubbs.test(.x$value)$statistic[[1]]), # G statistic
U = map(data, ~ grubbs.test(.x$value)$statistic[[2]]), # U statistic
grubbs = map(data, ~ grubbs.test(.x$value)$alternative), # alternative hypothesis
p_grubbs = map_dbl(data, ~ grubbs.test(.x$value)$p.value)) %>% # p-value
# Let's make the output more fancy
mutate(G = signif(unlist(G), 3),
U = signif(unlist(U), 3),
grubbs = unlist(grubbs),
p_grubbs = signif(p_grubbs, 3)) %>%
select(-data) %>% # remove temporary column
arrange(p_grubbs)
And the desired output would be this:
# A tibble: 3 x 6
letter n G U grubbs p_grubbs
<chr> <dbl> <dbl> <dbl> <chr> <dbl>
1 c 20 1.68 0.843 lowest value 0.0489965472370386 is an outlier 0.84
2 a 20 1.58 0.862 lowest value 0.0174888013862073 is an outlier 1
3 b 20 1.57 0.863 lowest value 0.0656482006888837 is an outlier 1
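One refinement you might consider (a sketch of the same pipeline): the version above calls grubbs.test() four times per group; you can run it once per group, keep the htest object in a list-column, and extract the pieces from that:
df %>%
  group_by(letter) %>%
  nest() %>%
  mutate(fit      = map(data, ~ grubbs.test(.x$value)),  # one test per group
         n        = map_dbl(data, nrow),
         G        = map_dbl(fit, ~ .x$statistic[["G"]]),
         U        = map_dbl(fit, ~ .x$statistic[["U"]]),
         grubbs   = map_chr(fit, "alternative"),
         p_grubbs = map_dbl(fit, "p.value")) %>%
  select(-data, -fit) %>%
  arrange(p_grubbs)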
I want to add many new columns simultaneously to a data.table based on by-group computations. A working example of my data would look something like this:
Time Stock x1 x2 x3
1: 2014-08-22 A 15 27 34
2: 2014-08-23 A 39 44 29
3: 2014-08-24 A 20 50 5
4: 2014-08-22 B 42 22 43
5: 2014-08-23 B 44 45 12
6: 2014-08-24 B 3 21 2
Now I want to scale and sum many of the variables to get an output like:
Time Stock x1 x2 x3 x2_scale x3_scale x2_sum x3_sum
1: 2014-08-22 A 15 27 34 -1.1175975 0.7310560 121 68
2: 2014-08-23 A 39 44 29 0.3073393 0.4085313 121 68
3: 2014-08-24 A 20 50 5 0.8102582 -1.1395873 121 68
4: 2014-08-22 B 42 22 43 -0.5401315 1.1226726 88 57
5: 2014-08-23 B 44 45 12 1.1539172 -0.3274462 88 57
6: 2014-08-24 B 3 21 2 -0.6137858 -0.7952265 88 57
A brute force implementation of my problem would be:
library(data.table)
set.seed(123)
d <- data.table(Time = rep(seq.Date(Sys.Date(), length = 3, by = "day")),
                Stock = rep(LETTERS[1:2], each = 3),
                x1 = sample(1:50, 6),
                x2 = sample(1:50, 6),
                x3 = sample(1:50, 6))
d[,x2_scale:=scale(x2),by=Stock]
d[,x3_scale:=scale(x3),by=Stock]
d[,x2_sum:=sum(x2),by=Stock]
d[,x3_sum:=sum(x3),by=Stock]
Other posts describing a similar issue (Add multiple columns to R data.table in one function call? and Assign multiple columns using := in data.table, by group) suggest the following solution:
d[, c("x2_scale","x3_scale"):=list(scale(x2),scale(x3)), by=Stock]
d[, c("x2_sum","x3_sum"):=list(sum(x2),sum(x3)), by=Stock]
But again, this would get very messy with a lot of variables, and it also brings up an error message with scale (but not with sum, since scale() returns a matrix rather than a plain vector).
Is there a more efficient way to achieve the required result (keeping in mind that my actual data set is quite large)?
I think with a small modification to your last code you can easily do both for as many variables as you want:
vars <- c("x2", "x3") # <- Choose the variable you want to operate on
d[, paste0(vars, "_", "scale") := lapply(.SD, function(x) scale(x)[, 1]), .SDcols = vars, by = Stock]
d[, paste0(vars, "_", "sum") := lapply(.SD, sum), .SDcols = vars, by = Stock]
## Time Stock x1 x2 x3 x2_scale x3_scale x2_sum x3_sum
## 1: 2014-08-22 A 13 14 32 -1.1338934 1.1323092 87 44
## 2: 2014-08-23 A 25 39 9 0.7559289 -0.3701780 87 44
## 3: 2014-08-24 A 18 34 3 0.3779645 -0.7621312 87 44
## 4: 2014-08-22 B 44 8 6 -0.4730162 -0.7258662 59 32
## 5: 2014-08-23 B 49 3 18 -0.6757374 1.1406469 59 32
## 6: 2014-08-24 B 15 48 8 1.1487535 -0.4147807 59 32
For simple functions (that don't need special treatment like scale) you could easily do something like
vars <- c("x2", "x3") # <- Define the variable you want to operate on
funs <- c("min", "max", "mean", "sum") # <- define your function
for(i in funs){
d[, paste0(vars, "_", i) := lapply(.SD, eval(i)), .SDcols = vars, by = Stock]
}
Another variation using data.table
vars <- c("x2", "x3")
d[, paste0(rep(vars, each = 2), "_", c("scale", "sum")) := do.call(cbind,
     lapply(.SD, function(x) list(scale(x)[, 1], sum(x)))), .SDcols = vars, by = Stock]
d
# Time Stock x1 x2 x3 x2_scale x2_sum x3_scale x3_sum
#1: 2014-08-22 A 15 27 34 -1.1175975 121 0.7310560 68
#2: 2014-08-23 A 39 44 29 0.3073393 121 0.4085313 68
#3: 2014-08-24 A 20 50 5 0.8102582 121 -1.1395873 68
#4: 2014-08-22 B 42 22 43 -0.5401315 88 1.1226726 57
#5: 2014-08-23 B 44 45 12 1.1539172 88 -0.3274462 57
#6: 2014-08-24 B 3 21 2 -0.6137858 88 -0.7952265 57
Based on comments from #Arun, you could also do:
cols <- paste0(rep(vars, each=2), "_", c("scale", "sum"))
d[, (cols) := unlist(lapply(.SD, function(x) list(scale(x)[, 1L], sum(x))),
     recursive = FALSE), by = Stock, .SDcols = vars]
You're probably looking for a pure data.table solution, but you could also consider using dplyr here, since it works with data.tables as well (no need for conversion). From dplyr you can then use mutate_at, as in this example (with the first data set you showed in your question):
library(dplyr)
dt %>%
  group_by(Stock) %>%
  mutate_at(vars(x2, x3), funs(sum, scale))
#Source: local data table [6 x 9]
#Groups: Stock
#
# Time Stock x1 x2 x3 x2_sum x3_sum x2_scale x3_scale
#1 2014-08-22 A 15 27 34 121 68 -1.1175975 0.7310560
#2 2014-08-23 A 39 44 29 121 68 0.3073393 0.4085313
#3 2014-08-24 A 20 50 5 121 68 0.8102582 -1.1395873
#4 2014-08-22 B 42 22 43 88 57 -0.5401315 1.1226726
#5 2014-08-23 B 44 45 12 88 57 1.1539172 -0.3274462
#6 2014-08-24 B 3 21 2 88 57 -0.6137858 -0.7952265
You can easily add more functions to be calculated, which will create more columns for you. Note that mutate_at applies the functions to each of the columns you select inside vars() (the grouping variable Stock is never touched). You can either name the columns you want the functions applied to (which I did in this example) or name the columns you want excluded (that would be, e.g., vars(-c(x2, x3)) instead of where I wrote vars(x2, x3)).
EDIT: replaced mutate_each above with mutate_at, as mutate_each will be deprecated in the near future.
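For readers on current dplyr (1.0 or later), the same idea is now written with across(); here is a minimal sketch, under the assumption that dt is an ordinary data frame or tibble (scale() is wrapped so each new column is a plain vector rather than a one-column matrix):
library(dplyr)
dt %>%
  group_by(Stock) %>%
  mutate(across(c(x2, x3),
                list(sum = sum, scale = ~ as.numeric(scale(.x))),
                .names = "{.col}_{.fn}"))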
EDIT: cleaner version using functional. I think this is the closest to the dplyr answer.
library(functional)
funs <- list(scale=Compose(scale, c), sum=sum) # See data.table issue #783 on github for the need for this
cols <- paste0("x", 2:3)
cols.all <- outer(cols, names(funs), paste, sep="_")
d[,
c(cols.all) := unlist(lapply(funs, Curry(lapply, X=.SD)), rec=F),
.SDcols=cols,
by=Stock
]
Produces:
Time Stock x1 x2 x3 x2_scale x3_scale x2_sum x3_sum
1: 2014-08-22 A 15 27 34 -1.1175975 0.7310560 121 68
2: 2014-08-23 A 39 44 29 0.3073393 0.4085313 121 68
3: 2014-08-24 A 20 50 5 0.8102582 -1.1395873 121 68
4: 2014-08-22 B 42 22 43 -0.5401315 1.1226726 88 57
5: 2014-08-23 B 44 45 12 1.1539172 -0.3274462 88 57
6: 2014-08-24 B 3 21 2 -0.6137858 -0.7952265 88 57
I'm looking at some ecological data (diet) and trying to work out how to group by Predator. I would like to be able to extract the data so that I can look at the weights of each individual prey item, by species, for each predator, i.e. work out the mean weight of each species eaten by, e.g., Predator 117. I've put a sample of my data below.
Predator PreySpecies PreyWeight
1 114 10 4.2035496
2 114 10 1.6307026
3 115 1 407.7279775
4 115 1 255.5430495
5 117 10 4.2503708
6 117 10 3.6268814
7 117 10 6.4342073
8 117 10 1.8590861
9 117 10 2.3181421
10 117 10 0.9749844
11 117 10 0.7424772
12 117 15 4.2803743
13 118 1 126.8559155
14 118 1 276.0256158
15 118 1 123.0529734
16 118 1 427.1129793
17 118 3 237.0437606
18 120 1 345.1957190
19 121 1 160.6688815
You can use the aggregate function as follows:
aggregate(formula = PreyWeight ~ Predator + PreySpecies, data = diet, FUN = mean)
# Predator PreySpecies PreyWeight
# 1 115 1 331.635514
# 2 118 1 238.261871
# 3 120 1 345.195719
# 4 121 1 160.668881
# 5 118 3 237.043761
# 6 114 10 2.917126
# 7 117 10 2.886593
# 8 117 15 4.280374
There are a few different ways of getting what you want:
The aggregate function. Probably what you are after.
aggregate(PreyWeight ~ Predator + PreySpecies, data=dd, FUN=mean)
tapply: Very useful, but it only splits the variable by a single factor; hence, we need to create a joint factor with the paste command:
tapply(dd$PreyWeight, paste(dd$Predator, dd$PreySpecies), mean)
ddply: Part of the plyr package. Very useful. Worth learning.
require(plyr)
ddply(dd, .(Predator, PreySpecies), summarise, mean(PreyWeight))
dcast: The output is in more of a table format. Part of the reshape2 package.
require(reshape2)
dcast(dd, Predator ~ PreySpecies, value.var = "PreyWeight", fun.aggregate = mean, fill = 0)
Or, for a single predator at a time, plain subsetting gives the mean directly:
mean(data$PreyWeight[data$Predator == 117])
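For comparison, the same per-group summary in dplyr (a sketch; dd is the sample data above):
library(dplyr)
dd %>%
  group_by(Predator, PreySpecies) %>%
  summarise(mean_weight = mean(PreyWeight))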