How to calculate the standard deviation of values in 10 intervals in R?

I want to calculate standard deviations in steps of 10 in R. For a large number of values, I want the SD of the values in each of the intervals 0-10, 10-20, 20-30, ...
For example, I have a vector:
example <- seq(0, 100, 10)
If I do sd(example), I get the standard deviation of all the values in example at once. Instead, I want to calculate it between 0 and 10, between 10 and 20, between 20 and 30, etc.
To clarify: every interval (0-10, 10-20, ...) contains values. For instance, the interval 0 to 10 might contain the values 0.2, 0.3, 0.5, 0.7, 0.6, 0.7, 0.03, 0.09, 0.1, 0.05.
Can someone help me please?

You may use cut (or findInterval) to divide the data into groups, and take the sd of each group.
set.seed(123)
vec <- runif(100, max = 100)
# SD of vec within each 10-wide bin
tapply(vec, cut(vec, seq(0, 100, 10)), sd)
#   (0,10]  (10,20]  (20,30]  (30,40]  (40,50]  (50,60]  (60,70]  (70,80]  (80,90] (90,100]
# 3.438162 2.653866 2.876299 2.593230 2.353325 2.755474 2.454519 3.282779 3.658064 3.021508
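findInterval works the same way; a small variant (my addition, not in the original answer), where bins are labelled by integer index instead of (0,10]-style intervals:
# same grouping via findInterval; rightmost.closed = TRUE keeps 100 in bin 10
tapply(vec, findInterval(vec, seq(0, 100, 10), rightmost.closed = TRUE), sd)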

Here is a solution using dplyr:
library(dplyr)
## Create a data frame with a random variable of 1000 values between 1 and 100
df <- data.frame(x = runif(1000, 1, 100))
## Create a grouping variable, binning by 10
df$group <- findInterval(df$x, seq(10, 100, by = 10))
## Calculate SD by group
df %>%
  group_by(group) %>%
  summarise(Std.dev = sd(x))
# A tibble: 10 x 2
   group Std.dev
 * <int>   <dbl>
 1     0    2.58
 2     1    2.88
 3     2    2.90
 4     3    2.71
 5     4    2.84
 6     5    2.90
 7     6    2.88
 8     7    2.68
 9     8    2.98
10     9    2.89
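A variant of the same pipeline (my addition) that keeps readable interval labels instead of integer group codes, by grouping on cut() directly:
df %>%
  group_by(group = cut(x, breaks = seq(0, 100, 10))) %>%
  summarise(Std.dev = sd(x))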

Related

Compute stats for several columns at the same time using sapply

I have a dataframe as follows:
# A tibble: 6 x 4
Placebo High Medium Low
<dbl> <dbl> <dbl> <dbl>
1 0.0400 -0.04 0.0100 0.0100
2 0.04 0 -0.0100 0.04
3 0.0200 -0.1 -0.05 -0.0200
4 0.03 -0.0200 0.03 -0.00700
5 -0.00500 -0.0100 0.0200 0.0100
6 0.0300 -0.0100 NA NA
You could get Cohen's d for two of the columns using the cohen.d() function from the effsize package:
df <- data.frame(Placebo = c(0.0400, 0.04, 0.0200, 0.03, -0.00500, 0.0300),
Low = c(-0.04, 0, -0.1, -0.0200, -0.0100, -0.0100),
Medium = c(0.0100, -0.0100, -0.05, 0.03, 0.0200, NA ),
High = c(0.0100, 0.04, -0.0200, -0.00700, 0.0100, NA))
library(effsize)
cohen.d(as.vector(na.omit(df$Placebo)), as.vector(na.omit(df$High)))
Interestingly enough, I'm getting the following error with this code:
Error in data[, group] : incorrect number of dimensions
However, I would like to create a function that obtains all the Cohen's d values between one of the columns and the rest of them.
In order to get the Cohen's d of all columns against Placebo, we would use something like:
sapply(df, function(i) cohen.d(pull(df, as.vector(na.omit(!!Placebo))), as.vector(na.omit(i))))
But I'm not sure this would work anyway.
Edit: I don't want to drop the full row, as Cohen's d can be computed for vectors of different lengths. Ideally, I would like to get the statistic with the NAs removed for each column independently.
It may be better to remove the NAs from each of the columns separately, by creating a logical index together with 'Placebo':
library(dplyr)
library(effsize)
df %>%
  summarise(across(Low:High, ~ list({
    i1 <- complete.cases(Placebo) & complete.cases(.x)
    cohen.d(Placebo[i1], .x[i1])
  })))
Or, if we want to use lapply/sapply, loop over the columns other than Placebo:
lapply(df[-1], function(x) {
  x1 <- na.omit(cbind(df$Placebo, x))
  cohen.d(x1[, 1], x1[, 2])
})
Output:
$Low
Cohen's d
d estimate: 1.947312 (large)
95 percent confidence interval:
lower upper
0.3854929 3.5091319
$Medium
Cohen's d
d estimate: 0.9622504 (large)
95 percent confidence interval:
lower upper
-0.5782851 2.5027860
$High
Cohen's d
d estimate: 0.8884639 (large)
95 percent confidence interval:
lower upper
-0.6402419 2.4171697
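If you want the results as one table rather than a list of printouts, the pieces can be pulled out of each cohen.d object. A sketch, assuming the effsize result exposes $estimate and a named $conf.int (which the printed output above suggests):
res <- lapply(df[-1], function(x) {
  x1 <- na.omit(cbind(df$Placebo, x))
  cohen.d(x1[, 1], x1[, 2])
})
data.frame(column   = names(res),
           estimate = sapply(res, function(r) r$estimate),
           lower    = sapply(res, function(r) r$conf.int[["lower"]]),
           upper    = sapply(res, function(r) r$conf.int[["upper"]]))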

Producing anova from already summarized data

I have a table of group counts, means, and standard deviations for five education levels (the table itself was posted as an image).
I'm trying to run aov() on that table, but I'm only able to create a partial output. I'm not sure how to include the standard deviation in the calculation.
Right now I'm concatenating and repeating each group like so:
groups <- c(rep('LHS', 121), rep('HS', 546), rep('Jr', 97), rep('Bachelors', 253), rep('Graduate', 155))
And then doing the same for the means (since I don't have access to the original data sheet):
means <- c(rep(38.67, 121), rep(39.6, 546), rep(41.39, 97), rep(42.55, 253), rep(40.85, 155))
At this point I can create a data frame and then run aov on it:
df <- data.frame(groups, means)
groups.aov <- aov(means ~ groups, data = df)
Unfortunately summary(groups.aov) only gives me a partial result.
Df Sum Sq Mean Sq F value Pr(>F)
groups 4 2004 501 4.247e+27 <2e-16 ***
Residuals 1167 0 0
Any other way I can go, where I can factor in the SD?
We simulate some data so that we know the calculations are correct:
set.seed(100)
df = data.frame(
  groups = rep(letters[1:4], times = seq(20, 35, by = 5)),
  value = rnorm(110, rep(1:4, times = seq(20, 35, by = 5)), 1))
We get back something like the table you see above:
library(dplyr)
res <- df %>% group_by(groups) %>%
  summarize_all(c(mean = mean, sd = sd, n = length))
total <- data.frame(groups = "total", mean = mean(df$value),
                    sd = sd(df$value), n = nrow(df))
rbind(res, total)
# A tibble: 5 x 4
  groups  mean    sd     n
  <fct>  <dbl> <dbl> <int>
1 a      0.937 1.14     20
2 b      1.91  0.851    25
3 c      3.01  0.780    30
4 d      4.01  0.741    35
5 total  2.70  1.42    110
In ANOVA we always work with sums of squares. To get from a standard deviation back to a sum of squares, multiply the variance by n - 1 (SS = sd^2 * (n - 1)), and from there you can derive the F value. The detailed calculations:
ngroups = nrow(res)                  # number of groups
SST = (total$sd^2) * (total$n - 1)   # total sum of squares
SSE = sum((res$sd^2) * (res$n - 1))  # error (within-group) sum of squares
aovtable = data.frame(
  Df    = c(ngroups - 1, total$n - ngroups),  # between df = k - 1, residual df = N - k
  SumSq = c(SST - SSE, SSE)
)
aovtable$MeanSq = aovtable$SumSq / aovtable$Df
aovtable$F = c(aovtable$MeanSq[1] / aovtable$MeanSq[2], NA)
aovtable$p = c(pf(aovtable$F[1], aovtable$Df[1], aovtable$Df[2], lower.tail = FALSE), NA)
And we can compare the two results. With the residual df taken as N - k = 106 (the original post used N - k - 1 = 105, an off-by-one), the manual table agrees with the aov() summary up to rounding; the p value, pf(63.23, 3, 106, lower.tail = FALSE), is on the order of 1e-23, which aov prints as < 2e-16:
aovtable
#   Df    SumSq    MeanSq       F
# 1  3 140.5597 46.853233 63.2253
# 2 106  78.5515  0.741052      NA
summary(aov(value~groups,data=df))
Df Sum Sq Mean Sq F value Pr(>F)
groups 3 140.56 46.85 63.23 <2e-16 ***
Residuals 106 78.55 0.74
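If you would rather hand aov() raw data, you can also reconstruct per-group vectors whose sample mean and sd match the summary table exactly, by rescaling random draws. A sketch (my addition; the sds vector is a placeholder, since the original table's SDs are not shown here):
# rescale a random vector so its sample mean and sd are exact
make_group <- function(n, m, s) {
  x <- rnorm(n)
  m + s * (x - mean(x)) / sd(x)
}
groups <- c("LHS", "HS", "Jr", "Bachelors", "Graduate")
ns     <- c(121, 546, 97, 253, 155)
means  <- c(38.67, 39.6, 41.39, 42.55, 40.85)
sds    <- rep(9.5, 5)  # placeholder: use the SDs from your table
sim <- data.frame(groups = factor(rep(groups, ns)),
                  value  = unlist(Map(make_group, ns, means, sds)))
summary(aov(value ~ groups, data = sim))  # now reflects the real within-group spread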

Wrong degrees of freedom in lsmeans and SE calculation in R

I have this sample data:
Sample Replication Days
1 1 10
1 1 14
1 1 13
1 1 14
2 1 NA
2 1 5
2 1 18
2 1 20
1 2 16
1 2 NA
1 2 18
1 2 21
2 2 15
2 2 7
2 2 12
2 2 14
I have four observations for each sample, with a total of 64 samples in each of the two replications; in total, 512 values across both replications. I also have some missing values, designated NA. I performed an ANOVA on the mean values for each Sample in each Rep, which I generated using:
library(tidyverse)
df <- Data %>% group_by(Sample, Rep) %>% summarise(Mean = mean(Days, na.rm = TRUE))
curve.anova <- aov(Mean~Rep+Sample, data=df)
The result of the ANOVA is:
> summary(curve.anova)
Df Sum Sq Mean Sq F value Pr(>F)
Rep 1 6.1 6.071 2.951 0.0915 .
Sample 63 1760.5 27.945 13.585 <2e-16 ***
Residuals 54 111.1 2.057
I created a table of mean and SE values:
ANOVA<-lsmeans(curve.anova, ~Sample)
ANOVA<-summary(ANOVA)
write.csv(ANOVA, file="Desktop/ANOVA.csv")
A few lines from the file are:
Sample lsmean SE df lower.CL upper.CL
1 24.875 1.014145417 54 22.84176086 26.90823914
2 25.5 1.014145417 54 23.46676086 27.53323914
3 31.32575758 1.440722628 54 28.43728262 34.21423253
4 26.375 1.014145417 54 24.34176086 28.40823914
5 26.42424242 1.440722628 54 23.53576747 29.31271738
6 25.5 1.014145417 54 23.46676086 27.53323914
7 28.375 1.014145417 54 26.34176086 30.40823914
8 24.875 1.014145417 54 22.84176086 26.90823914
9 21.16666667 1.014145417 54 19.13342752 23.19990581
10 23.875 1.014145417 54 21.84176086 25.90823914
The df for all 64 samples is 54, and the error bars in the ggplot are mostly equal across Samples. The SE values are larger than my manually calculated values. Based on the ANOVA results, df = 54 is the residual df.
I want to double-check that the ANOVA results are correct, and that I am correctly generating the lsmeans and SEs to plot a bar graph with confidence-interval error bars in ggplot.
I will appreciate any help. Thank you!
After reading your comments, I think your workflow has an issue. Basically, you are applying your ANOVA test to the means of the different samples.
So, in your example, when you do:
curve.anova <- aov(Mean~Rep+Sample, data=df)
You are comparing these values:
> df
# A tibble: 4 x 3
# Groups: Sample [2]
Sample Replication Mean
<dbl> <dbl> <dbl>
1 1 1 12.8
2 1 2 18.3
3 2 1 14.3
4 2 2 12
So, basically, you are comparing two groups with two values per group.
So, when you tried to remove the Replication grouping, you got an error, because the output of:
df <- Data %>% group_by(Sample) %>% summarise(Mean = mean(Days, na.rm = TRUE))
is now:
# A tibble: 2 x 2
Sample Mean
<dbl> <dbl>
1 1 15.1
2 2 13
So, applying an ANOVA to that dataset means that you are comparing two groups with one value each, and you can't compute residuals or SEs.
Instead, you should do it on the full dataset without trying to calculate the mean first:
anova_data <- aov(Days~Sample+Replication, data=Data)
anova_data2 <- aov(Days~Sample, data=Data)
And their outputs are:
> summary(anova_data)
Df Sum Sq Mean Sq F value Pr(>F)
Sample 1 16.07 16.071 0.713 0.416
Replication 1 9.05 9.054 0.402 0.539
Residuals 11 247.80 22.528
2 observations deleted due to missingness
> summary(anova_data2)
Df Sum Sq Mean Sq F value Pr(>F)
Sample 1 16.07 16.07 0.751 0.403
Residuals 12 256.86 21.41
2 observations deleted due to missingness
Now, you can apply lsmeans:
library(lsmeans)
A_d = summary(lsmeans(anova_data, ~Sample))
A_d2 = summary(lsmeans(anova_data2, ~Sample))
> A_d
Sample lsmean SE df lower.CL upper.CL
1 15.3 1.8 11 11.29 19.2
2 12.9 1.8 11 8.91 16.9
Results are averaged over the levels of: Replication
Confidence level used: 0.95
> A_d2
Sample lsmean SE df lower.CL upper.CL
1 15.1 1.75 12 11.33 19.0
2 13.0 1.75 12 9.19 16.8
Confidence level used: 0.95
It does not change the mean or the SE much (which is good: it means your replicates are consistent and there is not much variability between them), but it does narrow the confidence interval.
So, to plot it, you can:
library(ggplot2)
ggplot(A_d, aes(x = as.factor(Sample), y = lsmean)) +
  geom_bar(stat = "identity", colour = "black") +
  geom_errorbar(aes(ymin = lsmean - SE, ymax = lsmean + SE), width = .5)
Based on your initial question, if you want to check that the output of the ANOVA is correct, you can mimic it with fake data like this:
d2 <- data.frame(Sample = c(rep(1, 10), rep(2, 10)),
                 Days = c(rnorm(10, mean = 3), rnorm(10, mean = 8)))
Then,
curve.d2 <- aov(Days ~ Sample, data = d2)
ANOVA2 <- lsmeans(curve.d2, ~Sample)
ANOVA2 <- summary(ANOVA2)
And you get the following output:
> summary(curve.d2)
Df Sum Sq Mean Sq F value Pr(>F)
Sample 1 139.32 139.32 167.7 1.47e-10 ***
Residuals 18 14.96 0.83
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> ANOVA2
Sample lsmean SE df lower.CL upper.CL
1 2.62 0.288 18 2.02 3.23
2 7.90 0.288 18 7.29 8.51
Confidence level used: 0.95
And for the plot:
ggplot(ANOVA2, aes(x = as.factor(Sample), y = lsmean)) +
  geom_bar(stat = "identity", colour = "black") +
  geom_errorbar(aes(ymin = lsmean - SE, ymax = lsmean + SE), width = .5)
As you can see, the lsmeans for d2 are close to the 3 and 8 we set in the first place, so I think your output is correct. Maybe your data simply do not show significant differences, and the SEs come out the same because the distributions of your data are similar. It is what it is.
I hope this answer helps you.
Data
df = data.frame(Sample = c(rep(1, 4), rep(2, 4), rep(1, 4), rep(2, 4)),
                Replication = c(rep(1, 8), rep(2, 8)),
                Days = c(10, 14, 13, 14, NA, 5, 18, 20, 16, NA, 18, 21, 15, 7, 12, 14))
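A footnote on the original df = 54 and the near-identical SEs (my addition, not part of the answer): lsmeans pools the residual variance from the fitted model, so each sample's SE is approximately sqrt(MSE / n_i), which is the same for every sample with the same number of replicate means:
# residual Mean Sq from the asker's aov table was 2.057 on 54 df
sqrt(2.057 / 2)  # ~1.014, the SE reported for samples with both replicate means
sqrt(2.057 / 1)  # ~1.434, close to the 1.441 shown for samples with one mean missing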

R: not enough observations; arguments are treated as the container rather than the content itself

So, I am trying to run a Bartlett test (or any similar test) in R. It works fine with imported data:
data(foster, package = "HSAUR")
bartlett.test(weight ~ litgen,data = foster)
But not with my data:
mdat <- matrix(c(2.3,2.2,2.25, 2.2,2.1,2.2, 2.15, 2.15, 2.2, 2.25, 2.15, 2.25), nrow = 3, ncol = 4)
working_df = data.frame(mdat)
bartlett.test(X1 ~ X2, data = working_df)
Error in bartlett.test.default(c(2.3, 2.2, 2.25), c(2.2, 2.1, 2.2)) :
there must be at least 2 observations in each group
I have tried various functions and assignments, but the problem is that each argument is treated as a single object rather than as its contents.
How can I run a Bartlett test with my data frames? How do I make the arguments refer to the contents rather than the container?
I don't know what you mean when you talk about "contents" and "container". The documentation at ?bartlett.test is pretty straightforward. You're trying to use a formula, so we'll look at the description of the formula argument:
formula a formula of the form lhs ~ rhs where lhs gives the data values and rhs the corresponding groups.
This matches with the structure of the foster data, where weight is numeric, and litgen is a categorical grouper.
head(foster)
litgen motgen weight
1 A A 61.5
2 A A 68.2
3 A A 64.0
4 A A 65.0
5 A A 59.7
6 A B 55.0
So, you need to put your data in that format.
your_data = data.frame(x = c(mdat), group = c(col(mdat)))
your_data
# x group
# 1 2.30 1
# 2 2.20 1
# 3 2.25 1
# 4 2.20 2
# 5 2.10 2
# 6 2.20 2
# 7 2.15 3
# 8 2.15 3
# 9 2.20 3
# 10 2.25 4
# 11 2.15 4
# 12 2.25 4
bartlett.test(x ~ group, data = your_data)
# Bartlett test of homogeneity of variances
#
# data: x by group
# Bartlett's K-squared = 0.86607, df = 3, p-value = 0.8336
That's all your groups at once. If you want to do pairwise comparisons, give subsets of your data to bartlett.test.
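For example, a sketch of such pairwise comparisons (my addition; it simply loops bartlett.test over every pair of groups):
pairs <- combn(unique(your_data$group), 2, simplify = FALSE)
lapply(pairs, function(g) {
  bartlett.test(x ~ group, data = subset(your_data, group %in% g))
})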

Generate summary table from bins of a plot

I have a dataset of the form:
d = data.frame(seq(0.01, 1, by = 0.01),
               c(seq(0.27, 0.1, -0.01), seq(0.1, 0.5, 0.01), seq(0.5, 0.1, -0.01)))
names(d) = c("X", "Y")
library(ggplot2)
ggplot(d, aes(x = X, y = Y)) + geom_line()
I am trying to generate a summary table that bins the Y variable into equal groups of 10% and gives the summary statistics of X for each bin. This is how I would like my result to look:
Y Group X Group
0-10% {Range1: 10-30%, mean1, median1, sd1} {Range2: 85-100%, mean2, median2, sd2}
10-20% ...
20-30% ...
30-40% ...
40-50% ...
The number of X ranges is not always two: the 20-30% bin of Y has three ranges of X, and 40-50% has one.
I have many large datasets on which this has to be implemented; the data above is just for reproducing the problem. My actual data could have many inflection points, as this code has to run on many combinations of X and Y.
The output is not formatted like yours, but here is a close solution that you can easily reformat to your liking. It seems you are binning Y into 10 groups; I am not sure about X, so I am using 10 groups for X too.
d = data.frame(seq(0.01, 1, by = 0.01),
               c(seq(0.27, 0.1, -0.01), seq(0.1, 0.5, 0.01), seq(0.5, 0.1, -0.01)))
names(d) = c("X", "Y")
library(dplyr)
d$x.decile <- ntile(d$X, 10)
d$y.decile <- ntile(d$Y, 10)
summary <- data.frame(d %>% group_by(y.decile, x.decile) %>%
                        summarise(mean = mean(X), median = median(X),
                                  min = min(X), max = max(X), sd = sd(X)))
> summary
y.decile x.decile mean median min max sd
1 1 2 0.175 0.175 0.15 0.20 0.018708287
2 1 3 0.210 0.210 0.21 0.21 NaN
3 1 10 0.990 0.990 0.98 1.00 0.010000000
4 2 2 0.135 0.135 0.13 0.14 0.007071068
5 2 3 0.235 0.235 0.22 0.25 0.012909944
6 2 10 0.955 0.955 0.94 0.97 0.012909944
7 3 1 0.095 0.095 0.09 0.10 0.007071068
You can get the format you want with melt and dcast from the reshape2 package.
In the code below, I've cut the data into 10 Y groups and 2 X groups, just to keep the width of the output reasonable; change 2 to 10 in the ntile() call to get actual deciles for X. Also, I haven't included every summary item, but hopefully the code below will guide you in adding more.
library(dplyr)
library(reshape2)
sm = d %>%
  group_by(`Y decile` = ntile(Y, 10), X.decile = ntile(X, 2)) %>%
  summarise(`X decile` = paste0("{Count: ", n(), ", Range: ", min(X), "-", max(X),
                                ", Median: ", median(X), "}"))
sm %>% melt(id.var = c("Y decile", "X.decile")) %>%
  dcast(`Y decile` ~ variable + X.decile, value.var = "value", fill = "")
Y decile X decile_1 X decile_2
1 1 {Count: 7, Range: 0.15-0.21, Median: 0.18} {Count: 3, Range: 0.98-1, Median: 0.99}
2 2 {Count: 6, Range: 0.13-0.25, Median: 0.225} {Count: 4, Range: 0.94-0.97, Median: 0.955}
3 3 {Count: 7, Range: 0.09-0.28, Median: 0.12} {Count: 3, Range: 0.91-0.93, Median: 0.92}
4 4 {Count: 6, Range: 0.06-0.31, Median: 0.185} {Count: 4, Range: 0.87-0.9, Median: 0.885}
5 5 {Count: 8, Range: 0.02-0.35, Median: 0.185} {Count: 2, Range: 0.85-0.86, Median: 0.855}
6 6 {Count: 5, Range: 0.01-0.39, Median: 0.37} {Count: 5, Range: 0.8-0.84, Median: 0.82}
7 7 {Count: 5, Range: 0.4-0.44, Median: 0.42} {Count: 5, Range: 0.75-0.79, Median: 0.77}
8 8 {Count: 5, Range: 0.45-0.49, Median: 0.47} {Count: 5, Range: 0.7-0.74, Median: 0.72}
9 9 {Count: 1, Range: 0.5-0.5, Median: 0.5} {Count: 9, Range: 0.51-0.69, Median: 0.65}
10 10 {Count: 10, Range: 0.55-0.64, Median: 0.595}
melt isn't actually necessary here. You could do the following, where the extra line at the end gives more explanatory names:
sm = d %>%
  group_by(`Y decile` = ntile(Y, 10), X.decile = ntile(X, 2)) %>%
  summarise(`X decile` = paste0("{N: ", n(), ", Range: ", min(X), "-", max(X),
                                ", Median: ", median(X), "}")) %>%
  dcast(`Y decile` ~ X.decile, value.var = "X decile", fill = "") %>%
  setNames(., c(names(.)[1], paste0("X decile ", names(.)[-1])))
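An aside (my addition): with the current tidyverse you could also widen sm using tidyr instead of dcast; this assumes tidyr >= 1.1 for the scalar values_fill:
library(tidyr)
sm %>%
  pivot_wider(names_from = X.decile, values_from = `X decile`,
              names_prefix = "X decile ", values_fill = "")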
The quantile and aggregate functions can help you.
# Create the data frame
d <- data.frame(seq(0.01, 1, by = 0.01),
                c(seq(0.27, 0.1, -0.01), seq(0.1, 0.5, 0.01), seq(0.5, 0.1, -0.01)))
names(d) <- c("X", "Y")
# Define bins: the upper edges of the Y deciles
bins <- quantile(d$Y, seq(0.1, 1, length.out = 10))
# Create an indicator variable for which bin each Y belongs in
# (note: a Y exactly equal to the top quantile falls out as NA here)
ag <- c()
for (i in 1:nrow(d)) { ag[i] <- which(d$Y[i] < bins)[1] }
# Compute summary statistics
means <- aggregate(d$X, by=list(ag), mean)
medians <- aggregate(d$X, by=list(ag), median)
variances <- aggregate(d$X, by=list(ag), var)
# Put them all into a new data frame
data.frame(group=(1:10),mean=means[,2], median=medians[,2], variance=variances[,2])
## group mean median variance
##1 1 0.4533333 0.200 0.162250000
##2 2 0.4709091 0.240 0.148969091
##3 3 0.3990000 0.265 0.134543333
##4 4 0.4650000 0.305 0.139583333
##5 5 0.3525000 0.325 0.114278571
##6 6 0.4983333 0.385 0.097178788
##7 7 0.5950000 0.595 0.034250000
##8 8 0.5950000 0.595 0.017583333
##9 9 0.5950000 0.595 0.006472222
##10 10 0.5950000 0.595 0.001171429
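Equivalently (a sketch, my addition), the bin indicator can come from cut(), and all three statistics can come from a single aggregate() call:
grp <- cut(d$Y, breaks = quantile(d$Y, seq(0, 1, 0.1)), include.lowest = TRUE)
aggregate(d$X, by = list(group = grp),
          FUN = function(x) c(mean = mean(x), median = median(x), var = var(x)))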
