How to calculate the mean in R with several conditions

I am trying to calculate the means of this data frame and group them in a table! I know how to do that using AVERAGEIFS in Excel, but because I eventually want the standard deviation and coefficient of variation (CV) as well, I need to learn how to do it in R.
For now I just need the mean. Here are my conditions:
I need a table where "stim_ending_t", my time intervals, are arranged from 1.0 to 3.5 in rows. For each time interval, the mean of "key_resp_2.rt" must be calculated under each of these three conditions:
Only image (V = 1 & s = 0)
Only sound (V = 0 & s = 1)
Blank (V = 0 & s = 0)
(Screenshots of the data frame and the expected outcome were attached to the original post.)

This will compute stim_ending_t (6) x modality (3) = 18 group means.
First I generate some data like your analysis_v or analysis_a data frames:
library(dplyr)
library(tidyr)
analysis_v <- data.frame(stim_ending_t = rep(seq(1, 3.5, 0.5), each = 30),
                         visbility = rep(c(1, 0, 0), 60),
                         soundvolume = rep(c(0, 1, 0), 60),
                         key_resp_2.rt = runif(180, 1, 5))
Then I pipe the object into the code block:
analysis_v %>%
  group_by(stim_ending_t, visbility, soundvolume) %>%
  summarize(average = mean(key_resp_2.rt)) %>%
  ungroup() %>%
  mutate(key = case_when(visbility == 0 & soundvolume == 0 ~ "blank",
                         visbility == 0 & soundvolume == 1 ~ "only_sound",
                         visbility == 1 & soundvolume == 0 ~ "only_images")) %>%
  select(-visbility, -soundvolume) %>%
  spread(key, average)
Which results in the requested output format:
# A tibble: 6 x 4
  stim_ending_t blank only_images only_sound
          <dbl> <dbl>       <dbl>      <dbl>
1           1    3.28        3.55       2.84
2           1.5  2.64        3.11       2.32
3           2    3.27        3.72       2.42
4           2.5  2.14        3.01       2.30
5           3    2.47        3.03       3.02
6           3.5  2.93        2.92       2.78
You would need to repeat the code block using analysis_a to get those means.
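Note that the sample data above is drawn with runif(), so the exact means change on every run; fixing the RNG seed before generating the data makes the example reproducible:

set.seed(42)  # any fixed integer works; later runif() calls then repeat exactly

As an aside, spread() has since been superseded in tidyr; the final step could equivalently be pivot_wider(names_from = key, values_from = average).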

Thank you for your help @Matthew Schuelke, but with your code I was getting different results every time I ran it.
Here is how I solved the problem with this code:
new_data <- name_of_your_data_frame %>%   # i.e. the data frame object, without parentheses
  group_by(stim_ending_t, visbility, soundvolume, Opening_text) %>%
  summarize(m = mean(key_resp_2.rt),
            sd = sd(key_resp_2.rt),
            coefVar = cv(key_resp_2.rt))
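A caveat: cv() is not a base R function, so it must come from an add-on package loaded in the session. Judging by the output below, it returns the ratio sd/mean, which you can also compute without any extra package:

coefVar = sd(key_resp_2.rt) / mean(key_resp_2.rt)  # coefficient of variation = sd / mean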
The result as I wanted:
  stim_ending_t visbility soundvolume Opening_text               m    sd coefVar
          <dbl>     <dbl>       <dbl> <chr>                  <dbl> <dbl>   <dbl>
1             1         0           0 Now focus on the Image  1.70 1.14    0.670
2             1         0           0 Now focus on the Sound  1.57 0.794   0.504
3             1         0           1 Now focus on the Image  1.62 1.25    0.772
4             1         0           1 Now focus on the Sound  1.84 1.17    0.637
5             1         1           0 Now focus on the Image  3.19 17.2    5.38
6             1         1           0 Now focus on the Sound  1.59 0.706   0.444

Related

How to estimate means from same column in large number of dataframes, based upon a grouping variable in R

I have a huge number of data frames in R (>50), which correspond to different filterings I've performed. Here's an example of 7 of them:
Steps_Day1 <- filter(PD2, Gait_Day == 1)
Steps_Day2 <- filter(PD2, Gait_Day == 2)
Steps_Day3 <- filter(PD2, Gait_Day == 3)
Steps_Day4 <- filter(PD2, Gait_Day == 4)
Steps_Day5 <- filter(PD2, Gait_Day == 5)
Steps_Day6 <- filter(PD2, Gait_Day == 6)
Steps_Day7 <- filter(PD2, Gait_Day == 7)
Each of the dataframes contains 19 variables, however I'm only interested in their speed (to calculate mean) and their subjectID, as each subject has multiple observations of speed in the same DF.
An example of the data we're interested in, from data frame Steps_Day1:
Speed SubjectID
0.6 1
0.7 1
0.7 2
0.8 2
0.1 2
1.1 3
1.2 3
1.5 4
1.7 4
0.8 4
The data goes up to 61 participants, and each participant's number of observations is much larger than this.
Now what I want to do is write code that automatically cycles through each of the 50 data frames (taking the 7 above as an example), calculates the mean speed for each participant, and saves the results in a new data frame, alongside the corresponding means from the other DFs.
An example of the means for Steps_Day1 (values not accurate):
Speed SubjectID
0.6 1
0.7 2
1.2 3
1.7 4
and so on... until I end up with a final DF containing, as column vectors, the means for each participant from each of the other data frames, which may look something like:
Steps_Day1 StepsDay2 StepsDay3 StepsDay4 SubjectID
0.6 0.8 0.5 0.4 1
0.7 0.9 0.6 0.6 2
1.2 1.1 0.4 0.7 3
1.7 1.3 0.3 0.8 4
I could do this through some horrible, messy long code - but looking to see if anyone has more intuitive ideas please!
:)
To add to the previous answer, I agree that it is much easier to do this without creating a new data frame for each day. Using some generated data, you can achieve your desired results as follows:
library(dplyr)
library(tidyr)

# Generate some data
df <- data.frame(
  day = rep(1:5, 1, 100),       # length 100; recycled by data.frame() to match the 500 speeds
  subject = rep(5:10, 1, 100),
  speed = runif(500)
)

df %>%
  group_by(day, subject) %>%
  summarise(avg_speed = mean(speed)) %>%
  pivot_wider(names_from = day,
              names_prefix = "Steps_Day",
              values_from = avg_speed)
# A tibble: 6 × 6
subject Steps_Day1 Steps_Day2 Steps_Day3 Steps_Day4 Steps_Day5
<int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 5 0.605 0.416 0.502 0.516 0.517
2 6 0.592 0.458 0.625 0.531 0.460
3 7 0.475 0.396 0.586 0.517 0.449
4 8 0.430 0.435 0.489 0.512 0.548
5 9 0.512 0.645 0.509 0.484 0.566
6 10 0.530 0.453 0.545 0.497 0.460
You don't include a MCVE of your dataset, so I can't test a solution, but this seems like a pretty simple problem for tidyverse tools.
First, why do you split PD2 into separate data frames at all? If you skip that, you can just use group_by() and summarize() to get the average for each group:
PD2 %>%
  group_by(Gait_Day, SubjectID) %>%
  summarize(Steps = mean(Speed))
This will give you a "long-form" data.frame with 3 variables: Gait_Day, SubjectID, and Steps, which holds the mean speed for that subject and day. If you want the format you show at the end, just pivot into "wide-form" using pivot_wider, as sketched below. You can see this question for further explanation on that: How to reshape data from long to wide format
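For illustration, a minimal sketch of that reshape (assuming PD2 has the Gait_Day, SubjectID, and Speed columns shown above):

library(dplyr)
library(tidyr)

PD2 %>%
  group_by(Gait_Day, SubjectID) %>%
  summarize(Steps = mean(Speed), .groups = "drop") %>%
  pivot_wider(names_from = Gait_Day,
              names_prefix = "Steps_Day",
              values_from = Steps)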

Sum total distance by groups

I have a df tracking the movement of points each hour. I want to find the total distance traveled by each group/trial by adding up the distances between the hourly coordinates, but I'm confusing myself with apply functions.
I want to say "in each group/trial, sum [distance(hour1-hour2), distance(hour2-hour3), distance(hour3-hour4), ...]" up to the current hour, so that each line has a cumulative distance travelled value.
I've created a fake df below.
paths <- data.frame(matrix(nrow=80,ncol=5))
colnames(paths) <- c("trt","trial","hour","X","Y")
paths$trt <- rep(c("A","B","C","D"),each=20)
paths$trial <- rep(c(rep(1,times=10),rep(2,times=10)),times=4)
paths$hour <- rep(1:10,times=8)
paths[,4:5] <- runif(160,0,50)
# this shows the paths that I want to measure
library(ggplot2)
ggplot(data = paths, aes(x = X, y = Y, group = interaction(trt, trial), color = trt)) +
  geom_path()
I probably want to add a column paths$dist.traveled to keep track of it each hour.
I think I could use apply or maybe even aggregate but I've been using PointDistance to find the distances, so I'm a bit confused. I also would rather not do a loop inside a loop, because the real dataset is large.
Here's an answer that uses {dplyr}:
library(dplyr)
paths %>%
  arrange(trt, trial, hour) %>%
  group_by(trt, trial) %>%
  mutate(dist_travelled = sqrt((X - lag(X))^2 + (Y - lag(Y))^2)) %>%
  mutate(total_dist = sum(dist_travelled, na.rm = TRUE)) %>%
  ungroup()
If you wanted the total distance but grouped only by trt and not trial you would just remove that from the call to group_by().
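And since the question also asks for a running total on each row, a cumulative variant is a small change (a sketch; cum_dist is just an illustrative column name):

paths %>%
  arrange(trt, trial, hour) %>%
  group_by(trt, trial) %>%
  mutate(dist_travelled = sqrt((X - lag(X))^2 + (Y - lag(Y))^2),
         cum_dist = cumsum(coalesce(dist_travelled, 0))) %>%  # running distance so far
  ungroup()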
Is this what you are trying to achieve?
# Assign the result back to paths so the new column exists for the steps below
paths <- paths %>%
  mutate(dist.traveled = sqrt((X - lag(X))^2 + (Y - lag(Y))^2))
paths
trt trial hour X Y dist.traveled
<chr> <dbl> <int> <dbl> <dbl> <dbl>
1 A 1 1 11.2 26.9 NA
2 A 1 2 20.1 1.48 27.0
3 A 1 3 30.4 0.601 10.4
4 A 1 4 31.1 26.6 26.0
5 A 1 5 38.1 30.4 7.88
6 A 1 6 27.9 47.9 20.2
7 A 1 7 16.5 35.3 16.9
8 A 1 8 0.328 13.0 27.6
9 A 1 9 14.0 41.7 31.8
10 A 1 10 29.7 7.27 37.8
# ... with 70 more rows
# lag() also ran across path boundaries above, so blank out each path's first hour
paths$dist.traveled[which(paths$hour == 1)] <- NA

paths %>%
  group_by(trt) %>%
  summarise(total_distance = sum(dist.traveled, na.rm = TRUE))
trt total_distance
<chr> <dbl>
1 A 492.
2 B 508.
3 C 479.
4 D 462.
I add the new column to calculate the distance travelled each hour, and then sum those distances up by group.

Reshaping database using reshape package-part 2 [duplicate]

This question already has answers here:
Reshape multiple values at once
(2 answers)
Closed 3 years ago.
Following the interaction in the previous post Reshaping database using reshape package, I am creating this one to ask another question.
Briefly: I have a database in which some rows are replicated for the Id column, and I would like to transpose it. The following code shows a small example of my database.
test <- data.frame(Id = c(1, 1, 2, 3),
                   St = c(20, 80, 80, 20),
                   gap = seq(0.02, 0.08, by = 0.02),
                   gip = c(0.23, 0.60, 0.86, 2.09),
                   gat = c(0.0107, 0.989, 0.337, 0.663))
I would like a final database like the figure I attached: one row for each Id value, with the different columns spread out side by side.
Can you give me any suggestions?
You can use dcast from data.table. This function allows spreading multiple value variables at once.
library(data.table)
setDT(test)   # convert test to a data.table

test1 <- dcast(test, Id ~ rowid(Id),
               value.var = c('St', 'gap', 'gip', 'gat'), fill = 0)
test1
# Id St_1 St_2 gap_1 gap_2 gip_1 gip_2 gat_1 gat_2
#1: 1 20 80 0.02 0.04 0.23 0.6 0.0107 0.989
#2: 2 80 0 0.06 0.00 0.86 0.0 0.3370 0.000
#3: 3 20 0 0.08 0.00 2.09 0.0 0.6630 0.000
If you want to continue with a data.frame call setDF(test1) at the end.
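For reference, rowid(Id) numbers the duplicated rows within each Id, and those within-group counts become the _1/_2 suffixes in the cast; for example, rowid(c(1, 1, 2, 3)) returns 1 2 1 1.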
A dplyr/tidyr alternative is to first gather to long format, group_by Id and key, create a sequential row identifier for each group (new_key), and finally spread it back to wide form.
library(dplyr)
library(tidyr)
test %>%
  gather(key, value, -Id) %>%
  group_by(Id, key) %>%
  mutate(new_key = paste0(key, row_number())) %>%
  ungroup() %>%
  select(-key) %>%
  spread(new_key, value, fill = 0)
# Id gap1 gap2 gat1 gat2 gip1 gip2 St1 St2
# <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#1 1 0.02 0.04 0.0107 0.989 0.23 0.6 20 80
#2 2 0.06 0 0.337 0 0.86 0 80 0
#3 3 0.08 0 0.663 0 2.09 0 20 0
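As an aside, gather()/spread() have since been superseded by pivot_longer()/pivot_wider(); an equivalent sketch with the newer verbs (same test data):

test %>%
  pivot_longer(-Id, names_to = "key") %>%   # values land in a "value" column by default
  group_by(Id, key) %>%
  mutate(new_key = paste0(key, row_number())) %>%
  ungroup() %>%
  select(-key) %>%
  pivot_wider(names_from = new_key, values_from = value, values_fill = 0)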

For loop warning: "number of items to replace is not a multiple of replacement length" with two dataframes

I am trying to create a new vector by applying a transformation to a variable in one of my data frames, based on data from another data frame.
I have two data frames, df1 and df2, with different dimensions: over 20,000 rows in df1 and 76 rows in df2.
df1 is my original dataset. I created df2 for Ag_ppm as follows:
df2 <- df1 %>%
  filter(!is.na(Ag_ppm)) %>%
  group_by(Year, Zone, SubZone) %>%
  summarise(
    n = sum(!is.na(Ag_ppm)),
    min = min(Ag_ppm),
    max = max(Ag_ppm),
    mean = mean(Ag_ppm),
    sd = sd(Ag_ppm),
    iqr = IQR(Ag_ppm),
    Q1 = quantile(Ag_ppm, 0.25),
    median = median(Ag_ppm),
    Q3 = quantile(Ag_ppm, 0.75),
    LW = min(Ag_ppm > (quantile(Ag_ppm, .25) - 1.5 * IQR(Ag_ppm))),
    UF = quantile(Ag_ppm, .75) + 1.5 * IQR(Ag_ppm))
Here is what the first rows of each data frame look like:
head(df1, n=5)
# A tibble: 5 x 12
Year Zone SubZone Au_ppm Ag_ppm Cu_ppm Pb_ppm Zn_ppm As_ppm Sb_ppm Bi_ppm Mo_ppm
<chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1990 BugLake BugLake 0.007 3.7 17 27 23 1 1 NA 1
2 1983 Johnny Mountain Johnny Mountain 0.01 1.6 71 63 550 4 NA NA NA
3 1983 Khyber Pass Khyber Pass 0.12 11.5 275 204 8230 178 7 60 NA
4 1987 Chebry Ridge Line Grid 0.05 2.2 35 21 105 16 6 NA NA
5 1987 Chebry Handel Grid 0.004 1.3 29 27 663 45 2 NA NA
head(df2, n=5)
# A tibble: 5 x 14
# Groups: Year, Zone [3]
Year Zone SubZone n min max mean sd iqr Q1 median Q3 LW UF
<chr> <chr> <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <dbl>
1 1981 Chebry Handel 52 0.6 5.1 1.83 0.947 0.925 1.2 1.6 2.12 1 3.51
2 1981 Imperial Metals Handel 24 0.9 6.9 2.81 1.43 1.35 1.95 2.65 3.3 1 5.33
3 1983 Chebry Chebry 5 0.7 3.7 1.78 1.19 0.9 1.2 1.2 2.1 1 3.45
4 1983 Chebry Handel 17 0.1 0.7 0.318 0.163 0.2 0.2 0.3 0.4 1 0.7
5 1983 Chebry Handel Grid 225 0.1 16 0.892 1.33 0.7 0.3 0.6 1 1 2.05
I want to apply the following equation to my column Ag_ppm in df1 using the median and IQR calculated for each subgroup in df2:
Z = (X - median)/IQR
For that purpose, I wrote:
# Initialize Ag_std vector with NA values
Ag_std <- rep(NA, times = nrow(df1))

# Populate Ag_std vector with standardized Ag values
Ag_std <-
  for (i in 1:nrow(df1)) {
    if (!is.na(df1$Ag_ppm[i])) {
      filter(df2, Zone == df1$Zone[i], Year == df1$Year[i],
             SubZone == df1$SubZone[i])
      Ag_std[i] <- (df1$Ag_ppm[i] - df2$median) / df2$iqr
    }
  }
But the loop does not work (it returns a NULL vector), and I get this warning:
1: In Ag_std[i] <- (df1$Ag_ppm[i] - df2$median)/df2$iqr :
number of items to replace is not a multiple of replacement length
I've looked at similar questions, and I did not find an answer that works for me. Any help would be much appreciated!
If there are better ways of achieving the same thing without a loop (I'm sure there are, e.g. apply()), I would appreciate such comments as well. Unfortunately I'm not familiar enough with the alternatives to implement them quickly.
This can be done relatively easily in data.table:
library(data.table)
DT <- data.table(df1)

# function to apply
fun <- function(x) (x - median(x)) / diff(quantile(x, c(.25, .75)))

# create a new column with desired result
DT[, Ag_std := fun(Ag_ppm), by = list(Year, Zone, SubZone)]
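One caveat: Ag_ppm contains NAs in the question's data (which is why the loop guards with !is.na()), so an NA-tolerant version of the function may be needed (a sketch):

# Same transformation, ignoring missing values within each group
fun <- function(x) {
  (x - median(x, na.rm = TRUE)) /
    diff(quantile(x, c(.25, .75), na.rm = TRUE))
}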
Also, I think your loop can be fixed by assigning the result of filter() to a temporary object and indexing into that (and by not assigning the loop itself to Ag_std: a for loop returns NULL, which is why your vector came back empty):
for (i in 1:nrow(df1)) {
  if (!is.na(df1$Ag_ppm[i])) {
    temp.var <- filter(df2, Zone == df1$Zone[i], Year == df1$Year[i],
                       SubZone == df1$SubZone[i])
    Ag_std[i] <- (df1$Ag_ppm[i] - temp.var$median) / temp.var$iqr
  }
}
Since you have df2 as a separate data frame, you can join and mutate:
df1 %>%
  left_join(df2, by = c("Year", "Zone", "SubZone")) %>%
  mutate(Z = (Ag_ppm - median) / iqr)
In fact, you could have generated the info in df2 within df1 itself, using a grouped mutate rather than summarise, as sketched below.
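A sketch of that idea, computing the group statistics in place so no join is needed:

df1 %>%
  group_by(Year, Zone, SubZone) %>%
  mutate(Z = (Ag_ppm - median(Ag_ppm, na.rm = TRUE)) /
             IQR(Ag_ppm, na.rm = TRUE)) %>%
  ungroup()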

How to divide dataset into balanced sets based on multiple variables

I have a large dataset I need to divide into multiple balanced sets.
The set looks something like the following:
> data<-matrix(runif(4000, min=0, max=10), nrow=500, ncol=8 )
> colnames(data)<-c("A","B","C","D","E","F","G","H")
The sets, each containing for example 20 rows, need to be balanced across multiple variables, so that each subset ends up with a similar mean of B, C, and D compared to all the other subsets.
Is there a way to do that with R? Any advice would be much appreciated. Thank you in advance!
library(tidyverse)

# Reproducible data
set.seed(2)
data <- matrix(runif(4000, min = 0, max = 10), nrow = 500, ncol = 8)
colnames(data) <- c("A", "B", "C", "D", "E", "F", "G", "H")
data <- as.data.frame(data)
Updated Answer
It's probably not possible to get similar means across sets within each column if you want to keep the observations from a given row together. With 8 columns (as in your sample data), you'd need 25 twenty-row sets where each column-A set has the same mean, each column-B set has the same mean, and so on; that's a lot of constraints. There probably are algorithms, however, that could find the set-membership assignment that minimizes the differences in set means.
However, if you can take 20 observations from each column separately, without regard to which row they came from, then here's one option:
# Group into sets with same means: sort all values, then deal them out
# to sets 1..25 and back 25..1 so low and high values are shared evenly
same_means = data %>%
  gather(key, value) %>%
  arrange(value) %>%
  group_by(key) %>%
  mutate(set = c(rep(1:25, 10), rep(25:1, 10)))
# Check means by set for each column
same_means %>%
  group_by(key, set) %>%
  summarise(mean = mean(value)) %>%
  spread(key, mean) %>%
  as.data.frame
set A B C D E F G H
1 1 4.940018 5.018584 5.117592 4.931069 5.016401 5.171896 4.886093 5.047926
2 2 4.946496 5.018578 5.124084 4.936461 5.017041 5.172817 4.887383 5.048850
3 3 4.947443 5.021511 5.125649 4.929010 5.015181 5.173983 4.880492 5.044192
4 4 4.948340 5.014958 5.126480 4.922940 5.007478 5.175898 4.878876 5.042789
5 5 4.943010 5.018506 5.123188 4.924283 5.019847 5.174981 4.869466 5.046532
6 6 4.942808 5.019945 5.123633 4.924036 5.019279 5.186053 4.870271 5.044757
7 7 4.945312 5.022991 5.120904 4.919835 5.019173 5.187910 4.869666 5.041317
8 8 4.947457 5.024992 5.125821 4.915033 5.016782 5.187996 4.867533 5.043262
9 9 4.936680 5.020040 5.128815 4.917770 5.022527 5.180950 4.864416 5.043587
10 10 4.943435 5.022840 5.122607 4.921102 5.018274 5.183719 4.872688 5.036263
11 11 4.942015 5.024077 5.121594 4.921965 5.015766 5.185075 4.880304 5.045362
12 12 4.944416 5.024906 5.119663 4.925396 5.023136 5.183449 4.887840 5.044733
13 13 4.946751 5.020960 5.127302 4.923513 5.014100 5.186527 4.889140 5.048425
14 14 4.949517 5.011549 5.127794 4.925720 5.006624 5.188227 4.882128 5.055608
15 15 4.943008 5.013135 5.130486 4.930377 5.002825 5.194421 4.884593 5.051968
16 16 4.939554 5.021875 5.129392 4.930384 5.005527 5.197746 4.883358 5.052474
17 17 4.935909 5.019139 5.131258 4.922536 5.003273 5.204442 4.884018 5.059162
18 18 4.935830 5.022633 5.129389 4.927106 5.008391 5.210277 4.877859 5.054829
19 19 4.936171 5.025452 5.127276 4.927904 5.007995 5.206972 4.873620 5.054192
20 20 4.942925 5.018719 5.127394 4.929643 5.005699 5.202787 4.869454 5.055665
21 21 4.941351 5.014454 5.125727 4.932884 5.008633 5.205170 4.870352 5.047728
22 22 4.933846 5.019311 5.130156 4.923804 5.012874 5.213346 4.874263 5.056290
23 23 4.928815 5.021575 5.139077 4.923665 5.017180 5.211699 4.876333 5.056836
24 24 4.928739 5.024419 5.140386 4.925559 5.012995 5.214019 4.880025 5.055182
25 25 4.929357 5.025198 5.134391 4.930061 5.008571 5.217005 4.885442 5.062630
Original Answer
# Randomly group data into 20-row groups
set.seed(104)
data = data %>%
  mutate(set = sample(rep(1:(500/20), each = 20)))
head(data)
A B C D E F G H set
1 1.848823 6.920055 3.2283369 6.633721 6.794640 2.0288792 1.984295 2.09812642 10
2 7.023740 5.599569 0.4468325 5.198884 6.572196 0.9269249 9.700118 4.58840437 20
3 5.733263 3.426912 7.3168797 3.317611 8.301268 1.4466065 5.280740 0.09172101 19
4 1.680519 2.344975 4.9242313 6.163171 4.651894 2.2253335 1.175535 2.51299726 25
5 9.438393 4.296028 2.3563249 5.814513 1.717668 0.8130327 9.430833 0.68269106 19
6 9.434750 7.367007 1.2603451 5.952936 3.337172 5.2892300 5.139007 6.52763327 5
# Mean by set for each column
data %>%
  group_by(set) %>%
  summarise_all(mean)
set A B C D E F G H
1 1 5.240236 6.143941 4.638874 5.367626 4.982008 4.200123 5.521844 5.083868
2 2 5.520983 5.257147 5.209941 4.504766 4.231175 3.642897 5.578811 6.439491
3 3 5.943011 3.556500 5.366094 4.583440 4.932206 4.725007 5.579103 5.420547
4 4 4.729387 4.755320 5.582982 4.763171 5.217154 5.224971 4.972047 3.892672
5 5 4.824812 4.527623 5.055745 4.556010 4.816255 4.426381 3.520427 6.398151
6 6 4.957994 7.517130 6.727288 4.757732 4.575019 6.220071 5.219651 5.130648
7 7 5.344701 4.650095 5.736826 5.161822 5.208502 5.645190 4.266679 4.243660
8 8 4.003065 4.578335 5.797876 4.968013 5.130712 6.192811 4.282839 5.669198
9 9 4.766465 4.395451 5.485031 4.577186 5.366829 5.653012 4.550389 4.367806
10 10 4.695404 5.295599 5.123817 5.358232 5.439788 5.643931 5.127332 5.089670
# ... with 15 more rows
If the total number of rows in the data frame is not divisible by the number of rows you want in each set, then you can do the following when you create the sets:
data = data %>%
  mutate(set = sample(rep(1:ceiling(500/20), each = 20))[1:n()])
In this case, the set sizes will vary a bit when the number of data rows is not divisible by the desired number of rows per set.
The following approach could be worth trying for someone in a similar position.
It is based on the numerical balancing in groupdata2's fold() function, which allows creating groups with balanced means for a single column. By standardizing each of the columns and numerically balancing their rowwise sum, we might increase the chance of getting balanced means in the individual columns.
I compared this approach to creating groups randomly a few times and selecting the split with the least variance in means. It seems to be a bit better, but I'm not too convinced that this will hold in all contexts.
# Attach dplyr and groupdata2
library(dplyr)
library(groupdata2)

set.seed(1)

# Create the dataset
data <- matrix(runif(4000, min = 0, max = 10), nrow = 500, ncol = 8)
colnames(data) <- c("A", "B", "C", "D", "E", "F", "G", "H")
data <- dplyr::as_tibble(data)

# Standardize all columns and calculate row sums
data_std <- data %>%
  dplyr::mutate_all(.funs = function(x){(x - mean(x)) / sd(x)}) %>%
  dplyr::mutate(total = rowSums(across(where(is.numeric))))

# Create groups (new column called ".folds")
# We numerically balance the "total" column
data_std <- data_std %>%
  groupdata2::fold(k = 25, num_col = "total")  # k = 500/20 = 25

# Transfer the groups to the original (non-standardized) data frame
data$group <- data_std$.folds

# Check the means
data %>%
  dplyr::group_by(group) %>%
  dplyr::summarise_all(.funs = mean)
> # A tibble: 25 x 9
> group A B C D E F G H
> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
> 1 1 4.48 5.05 4.80 5.65 5.04 4.60 5.12 4.85
> 2 2 5.57 5.17 3.21 5.46 4.46 5.89 5.06 4.79
> 3 3 4.33 6.02 4.57 6.18 4.76 3.79 5.94 3.71
> 4 4 4.51 4.62 4.62 5.27 4.65 5.41 5.26 5.23
> 5 5 4.55 5.10 4.19 5.41 5.28 5.39 5.57 4.23
> 6 6 4.82 4.74 6.10 4.34 4.82 5.08 4.89 4.81
> 7 7 5.88 4.49 4.13 3.91 5.62 4.75 5.46 5.26
> 8 8 4.11 5.50 5.61 4.23 5.30 4.60 4.96 5.35
> 9 9 4.30 3.74 6.45 5.60 3.56 4.92 5.57 5.32
> 10 10 5.26 5.50 4.35 5.29 4.53 4.75 4.49 5.45
> # … with 15 more rows
# Check the standard deviations of the means
# Could be used to compare methods
data %>%
  dplyr::group_by(group) %>%
  dplyr::summarise_all(.funs = mean) %>%
  dplyr::summarise(across(where(is.numeric), sd))
> # A tibble: 1 x 8
> A B C D E F G H
> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
> 1 0.496 0.546 0.764 0.669 0.591 0.611 0.690 0.475
It might be best to compare the means and mean variances (or standard deviations as above) of different methods on the standardized data though. In that case, one could calculate the sum of the variances and minimize it.
data_std %>%
  dplyr::select(-total) %>%
  dplyr::group_by(.folds) %>%
  dplyr::summarise_all(.funs = mean) %>%
  dplyr::summarise(across(where(is.numeric), sd)) %>%
  sum()
> 1.643989
Comparing multiple balanced splits
The fold() function allows creating multiple unique grouping factors (splits) at once. So here, I will perform the numerically balanced split 20 times and find the grouping with the lowest sum of the standard deviations of the means. I'll further convert it to a function.
create_multi_balanced_groups <- function(data, cols, k, num_tries){

  # Extract the variables of interest
  # We assume these are numeric but we could add a check
  data_to_balance <- data[, cols]

  # Standardize all columns
  # And calculate rowwise sums
  data_std <- data_to_balance %>%
    dplyr::mutate_all(.funs = function(x){(x - mean(x)) / sd(x)}) %>%
    dplyr::mutate(total = rowSums(across(where(is.numeric))))

  # Create `num_tries` unique numerically balanced splits
  data_std <- data_std %>%
    groupdata2::fold(
      k = k,
      num_fold_cols = num_tries,
      num_col = "total"
    )

  # The new fold column names: ".folds_1", ".folds_2", etc.
  fold_col_names <- paste0(".folds_", seq_len(num_tries))

  # Remove total column
  data_std <- data_std %>%
    dplyr::select(-total)

  # Calculate score for each split
  # This could probably be done more efficiently without a for loop
  variance_scores <- c()
  for (fcol in fold_col_names){
    score <- data_std %>%
      dplyr::group_by(!!as.name(fcol)) %>%
      dplyr::summarise(across(where(is.numeric), mean)) %>%
      dplyr::summarise(across(where(is.numeric), sd)) %>%
      sum()
    variance_scores <- append(variance_scores, score)
  }

  # Get the fold column with the lowest score
  lowest_fcol_index <- which.min(variance_scores)
  best_fcol <- fold_col_names[[lowest_fcol_index]]

  # Add the best fold column / grouping factor to the original data
  data[["group"]] <- data_std[[best_fcol]]

  # Return the original data and the score of the best fold column
  list(data, min(variance_scores))
}
# Run with 20 splits
set.seed(1)
data_grouped_and_score <- create_multi_balanced_groups(
  data = data,
  cols = c("A", "B", "C", "D", "E", "F", "G", "H"),
  k = 25,
  num_tries = 20
)
# Check data
data_grouped_and_score[[1]]
> # A tibble: 500 x 9
> A B C D E F G H group
> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <fct>
> 1 5.86 6.54 0.500 2.88 5.70 9.67 2.29 3.01 2
> 2 0.0895 4.69 5.71 0.343 8.95 7.73 5.76 9.58 1
> 3 2.94 1.78 2.06 6.66 9.54 0.600 4.26 0.771 16
> 4 2.77 1.52 0.723 8.11 8.95 1.37 6.32 6.24 7
> 5 8.14 2.49 0.467 8.51 0.889 6.28 4.47 8.63 13
> 6 2.60 8.23 9.17 5.14 2.85 8.54 8.94 0.619 23
> 7 7.24 0.260 6.64 8.35 8.59 0.0862 1.73 8.10 5
> 8 9.06 1.11 6.01 5.35 2.01 9.37 7.47 1.01 1
> 9 9.49 5.48 3.64 1.94 3.24 2.49 3.63 5.52 7
> 10 0.731 0.230 5.29 8.43 5.40 8.50 3.46 1.23 10
> # … with 490 more rows
# Check score
data_grouped_and_score[[2]]
> 1.552656
By commenting out the num_col = "total" line, we can run this without the numerical balancing. For me, this gave a score of 1.615257.
Disclaimer: I am the author of the groupdata2 package. The fold() function can also balance a categorical column (cat_col) and keep all data points with the same ID in the same fold (id_col) (e.g. to avoid leakage in cross-validation). There's a very similar partition() function as well.
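For completeness, a minimal sketch of partition() under the same setup (here p = 0.2 requests a 20%/80% split, numerically balanced on the standardized "total" column):

parts <- groupdata2::partition(data_std, p = 0.2, num_col = "total")
# parts is a list of two data frames by default (list_out = TRUE)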
