Testing/training data sets stratified on two crossed variables in R

I have a data set which is crossed with respect to two categorical variables, and only 1 rep per combination:
> examp <- data.frame(group=rep(LETTERS[1:4], each=6), class=rep(LETTERS[16:21], times=4))
> table(examp$group, examp$class)
    P Q R S T U
  A 1 1 1 1 1 1
  B 1 1 1 1 1 1
  C 1 1 1 1 1 1
  D 1 1 1 1 1 1
I need to create a testing/training data set (50/50 split) which balances both group and class.
I know I can use createDataPartition from the caret package to balance the split in one of the two factors, but this leaves an imbalance in the other factor:
> library(caret)
> examp$valid <- "test"
> examp$valid[createDataPartition(examp$group, p=0.5, list=FALSE)] <- "train"
> table(examp$group, examp$valid)
    test train
  A    3     3
  B    3     3
  C    3     3
  D    3     3
> table(examp$class, examp$valid)
    test train
  P    1     3
  Q    2     2
  R    2     2
  S    2     2
  T    2     2
  U    3     1
> examp$valid <- "test"
> examp$valid[createDataPartition(examp$class, p=0.5, list=FALSE)] <- "train"
> table(examp$group, examp$valid)
    test train
  A    3     3
  B    3     3
  C    5     1
  D    1     5
> table(examp$class, examp$valid)
    test train
  P    2     2
  Q    2     2
  R    2     2
  S    2     2
  T    2     2
  U    2     2
How can I create a partition which is balanced in both factors? If I had multiple reps per group/class combination, I would stratify by interaction(group,class), but I cannot in this case since there is only one observation in each combo.
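For reference, the multi-rep version would look like this (a sketch only; with a single observation per cell, as here, every stratum has size 1, so createDataPartition cannot split it):

examp$valid <- "test"
examp$valid[createDataPartition(interaction(examp$group, examp$class),
                                p=0.5, list=FALSE)] <- "train"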

I propose this algorithm (a sketch implementing it follows below):
1. Randomly sort the unique group values (e.g., DBAC).
2. Iterate over adjacent pairs of the randomly sorted group values (e.g., first DB, then AC):
   a. Randomly pick half of the class values.
   b. Assign the rows in the first group whose class is in the selected half to TRAIN, and likewise the rows in the second group whose class is not in the selected half.
   c. Assign the remaining rows of both groups to TEST.
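A minimal base-R sketch of this algorithm (assuming, as in examp above, an even number of groups and an even number of classes; set.seed is only for reproducibility):

set.seed(42)
examp$valid <- "test"
groups  <- sample(as.character(unique(examp$group)))  # random order, e.g., D B A C
classes <- as.character(unique(examp$class))
for (k in seq(1, length(groups), by = 2)) {
  # randomly pick half of the classes for this pair of groups
  picked <- sample(classes, length(classes) / 2)
  train  <- (examp$group == groups[k]     &  examp$class %in% picked) |
            (examp$group == groups[k + 1] & !examp$class %in% picked)
  examp$valid[train] <- "train"
}
table(examp$group, examp$valid)  # 3/3 within every group
table(examp$class, examp$valid)  # 2/2 within every class

Each pair of groups contributes exactly one TRAIN and one TEST row per class, and each group is split over half of the classes, so both margins come out balanced.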

Pool items in database until minimum sample size reached and find all permutations in R

This is an example.
df <- data.frame(item=letters[1:5], n=c(3,2,2,1,1))
df
  item n
1    a 3
2    b 2
3    c 2
4    d 1
5    e 1
The items need to be grouped so that each group has a sample size of at least 4.
This would be one solution, following the row order of df:
  item n cluster
1    a 3       1
2    b 2       1
3    c 2       2
4    d 1       2
5    e 1       2
How to get all possible unique solutions?
Further, the code should also not allow any clusters to have a sample size less than 4.
Below, we have a brute force approach using the package partitions. The idea is that we find every partition of the rows of df. We then sum each group and check to see that the requirement has been met.
df <- data.frame(item=letters[1:5], n=c(3,2,2,1,1))
minSize <- 4
funGetClusters <- function(df, minSize) {
  allParts <- partitions::listParts(nrow(df))
  goodInd <- which(sapply(allParts, function(p) {
    all(sapply(p, function(x) sum(df$n[x])) >= minSize)
  }))
  allParts[goodInd]
}
clusterBreakdown <- funGetClusters(df, minSize)
allDfs <- lapply(clusterBreakdown, function(p) {
  copyDf <- df
  clustInd <- 1L
  for (i in p) {    # label each block of the partition in turn
    copyDf$cluster[i] <- clustInd
    clustInd <- clustInd + 1L
  }
  copyDf
})
Here is the output:
allDfs
[[1]]
  item n cluster
1    a 3       1
2    b 2       1
3    c 2       1
4    d 1       1
5    e 1       1

[[2]]
  item n cluster
1    a 3       1
2    b 2       2
3    c 2       2
4    d 1       1
5    e 1       1

[[3]]
  item n cluster
1    a 3       2
2    b 2       1
3    c 2       1
4    d 1       2
5    e 1       1

[[4]]
  item n cluster
1    a 3       2
2    b 2       1
3    c 2       1
4    d 1       1
5    e 1       2

[[5]]
  item n cluster
1    a 3       2
2    b 2       1
3    c 2       2
4    d 1       1
5    e 1       1

[[6]]
  item n cluster
1    a 3       2
2    b 2       2
3    c 2       1
4    d 1       1
5    e 1       1
It should be noted that there is a combinatorial explosion as the number of rows increases. For example, with just 10 rows we would have to test 115975 different partitions.
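Just to illustrate the growth (a quick check; listParts enumerates every set partition, so this gets slow and memory-hungry fast):

length(partitions::listParts(10))
# [1] 115975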
As @chinsoon comments, RcppAlgos could be a good choice for an acceptable solution for larger cases. Disclaimer: I am the author. I have answered similar questions with much larger inputs and have had good success:
Allocating tasks to parallel workers so that expected cost is roughly equal
Split a set into n unequal subsets with the key deciding factor being that the elements in the subset aggregate and equal a predetermined amount?
@AllanCameron also has a great answer with a nice methodology for attacking this problem. You should give that a read as well.
Lastly, the following vignette by Robin K. S. Hankin (author of the partitions package) and Luke J. West is not only a great read, but very applicable to problems like the one presented here:
Set Partitions in R

Unpaired, but not paired, t-test loop working in R

I have a loop that goes through a data frame, runs t-tests, and stores the resulting p-value of each t-test in another data frame.
Here is the loop, where 'mydata' is the data frame the t-tests are being run on; 'mydata' has 4 columns:
df <- mydata
mydf <- data.frame(c(1:4))
# this is the new data frame being initialized to store my p-values
row.names(mydf) <- names(df)
for (i in names(df)) {
  if (sd(df[[i]]) == 0) {
    # this prevents the loop from terminating and returning an error when
    # t-tests are run on columns with constant (e.g., binary) values
  } else {
    # 'Pre' is the column that groups my data into distinct cohorts.
    # I am comparing the Pre cohort versus the Post cohort in these t-tests.
    ttest <- t.test(df[df$Pre==1, ][[i]], df[df$Pre==2, ][[i]], paired=FALSE)
    mydf[i, 1] <- ttest$p.value
  }
}
mydf
mydf
Here is my output of mydf for an unpaired (paired=FALSE) t-test:
           c.1.4.
density 0.3569670
clust   0.9715987
Pre     3.0000000
HC      4.0000000
However, when I change paired=FALSE to paired=TRUE (to run a paired t-test), here is mydf:
        c.1.4.
density      1
clust        2
Pre          3
HC           4
I checked this line of my loop in isolation, using the first column of my data frame ('1' in double brackets) with paired=TRUE, and it does appear to output a p-value:
ttest <- t.test(df[df$Pre==1,][[1]], df[df$Pre==2,][[1]], paired=TRUE)
ttest$p.value
[1] 0.356967
Below is a sample dataset that you can use to reproduce the error:
density clust Pre HC
RDHC008A_13 0.47991 0.676825 1 1
RDHC009A_13 0.49955 0.696441 1 1
RDHC010A_16 0.491454 0.706507 1 1
RDHC013A_13 0.442879 0.689118 1 1
RDHC014A_13 0.453823 0.691603 1 1
RDHC016A_16 0.481259 0.706978 1 1
RDHC019A_06 0.515442 0.699514 1 1
RDHC021A_15 0.449925 0.685202 1 1
RDHC022A_12 0.461319 0.705446 1 1
RDHC023A_11 0.468816 0.667698 1 1
RDHC024A_12 0.515142 0.719474 1 1
RDHC025A_13 0.496702 0.710877 1 1
RDHC026A_12 0.477061 0.695061 1 1
RDHC027A_12 0.515442 0.722269 1 1
RDHC029A_12 0.406747 0.669998 1 1
RDHC030A_12 0.476162 0.69219 1 1
RDHC032B_13 0.50075 0.685474 1 1
RDHC034B_07 0.525487 0.725558 1 1
RDHC036B_07 0.468816 0.698904 1 1
RDHC038B_07 0.470015 0.706668 1 1
RDHC039B_07 0.511544 0.712818 1 1
RDHC041A_14 0.551574 0.732983 1 1
RDHC004C_12 0.486207 0.695121 2 1
RDHC005C_12 0.505997 0.695598 2 1
RDHC006C_13 0.487406 0.697044 2 1
RDHC013C_12 0.41979 0.685518 2 1
RDHC015C_13 0.297751 0.69632 2 1
RDHC016C_16 0.463718 0.700011 2 1
RDHC019C_14 0.508096 0.690071 2 1
RDHC021C_12 0.448426 0.688265 2 1
RDHC022C_12 0.468816 0.700968 2 1
RDHC024C_12 0.515292 0.70664 2 1
RDHC025C_13 0.473163 0.704231 2 1
RDHC027C_12 0.518741 0.732939 2 1
RDHC030C_11 0.489205 0.708174 2 1
You can import it by copying the data and pasting it within the quotation marks of the code below in R:
zz <- ""
Now, assign the data to a data frame:
mydata <- read.table(text=zz, header=TRUE)
I have no idea why changing the 'paired' parameter to TRUE would cause this to happen. Any help/advice would be much appreciated. Thanks - Paul
You initialize the mydf data.frame with the values 1:4 here:
mydf <- data.frame(c(1:4))
With paired=TRUE the loop effectively does nothing, because t.test throws an error: your two sets of values aren't the same length (and they need to be for a paired t-test). You have 22 values where Pre==1 and 13 values where Pre==2, and you can't do a paired test with an imbalance like that, so mydf keeps its initial values.
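A more defensive version of the loop (a sketch, assuming 'mydata' as posted): initialize the storage with NA and wrap t.test in tryCatch, so a failing test records NA instead of halting the loop or silently leaving the initial values behind:

df <- mydata
mydf <- data.frame(p.value = rep(NA_real_, ncol(df)), row.names = names(df))
for (i in names(df)) {
  mydf[i, "p.value"] <- tryCatch(
    t.test(df[df$Pre == 1, ][[i]], df[df$Pre == 2, ][[i]], paired = FALSE)$p.value,
    error = function(e) NA_real_  # e.g., constant columns such as HC
  )
}
mydf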

How to get the remaining data frame in R after randomly subsetting the data

I took a random sample from a data frame, but I don't know how to get the remaining data frame.
df <- data.frame(x=rep(1:3,each=2),y=6:1,z=letters[1:6])
#select 3 random rows
df[sample(nrow(df),3)]
What I want is to get the remaining data frame with the other 3 rows.
sample draws fresh random numbers each time you run it, so if you want to reproduce its results you will either need to call set.seed or save its output in a variable.
Addressing your question, you simply need to put - before your index in order to get the rest of the data set.
Also, don't forget to add a comma after indx if you want to select rows (your question is missing it):
set.seed(1)
indx <- sample(nrow(df), 3)
Your subset
df[indx, ]
# x y z
# 2 1 5 b
# 6 3 1 f
# 3 2 4 c
Remaining data set
df[-indx, ]
# x y z
# 1 1 6 a
# 4 2 3 d
# 5 3 2 e
Try:
> df
x y z
1 1 6 a
2 1 5 b
3 2 4 c
4 2 3 d
5 3 2 e
6 3 1 f
>
> df2 = df[sample(nrow(df),3),]
> df2
x y z
5 3 2 e
3 2 4 c
1 1 6 a
> df[!rownames(df) %in% rownames(df2),]
  x y z
2 1 5 b
4 2 3 d
6 3 1 f

Data simulation according to specific rules in R

I need help simulating a dataset.
It is supposed to simulate all possible outcomes on a signal detection theory task (participants are presented with trials and have to decide whether or not they detected a given signal). Now, I need a dataset of all possible values for a varying number of trials.
Say there are 10 trials, 5 with the signal present and 5 with the signal absent. I am only interested in correct detections (hits) and false alarms (Type I errors). A participant can correctly detect between 1 (I don't need 0's) and 5 signals, and can likewise make between 1 and 5 false alarms. With all possible combinations, that would be a dataset of two variables with 5^2 cases. To make things more complicated, even the number of trials is variable: the number of both signal and non-signal trials can vary between 1 and 20, but the total number of trials cannot be less than 3 (either 1 S trial and 2 Non-S trials, or the other way around). And for each possible combination of trials, there is a set of possible combinations of hits and false alarms.
What I need is a dataset with 5 variables (total N, N of S trials, N of Non-S trials, N of Hits, and N of False Alarms) with all the possible values.
EXAMPLE
Here are all possible data for a total N of 4. Note that Signal + Noise = N_total, and that N_Hit takes values in 1:Signal and N_FA in 1:Noise.
N_total Signal Noise N_Hit N_FA
4 1 3 1 1
4 1 3 1 2
4 1 3 1 3
4 2 2 1 1
4 2 2 1 2
4 2 2 2 1
4 2 2 2 2
4 3 1 1 1
4 3 1 2 1
4 3 1 3 1
I'm an R novice so any help at all would be much appreciated!
Hope the description is clear.
I created a function which takes the total number of trials as its parameter.
myfunc <- function(n) {
  # create a data frame of all combinations
  grid <- expand.grid(rep(list(seq_len(n - 1)), 4))
  # keep only the valid combinations
  grid <- grid[grid[[3]] <= grid[[1]] &      # number of hits <= number of signal trials
               grid[[4]] <= grid[[2]] &      # number of false alarms <= number of noise trials
               grid[[1]] + grid[[2]] == n, ] # signal and noise trials sum to the total n
  # remove rows where signal or noise exceeds 20
  grid <- grid[grid[[1]] <= 20 & grid[[2]] <= 20, ]
  # sort rows
  grid <- grid[order(grid[[1]], grid[[3]], grid[[4]]), ]
  # add the total number of trials
  res <- cbind(n, grid)
  # remove row names, add column names and return the object
  rownames(res) <- NULL
  setNames(res, c("N_total", "Signal", "Noise", "N_Hit", "N_FA"))
}
Use the function:
> myfunc(4)
N_total Signal Noise N_Hit N_FA
1 4 1 3 1 1
2 4 1 3 1 2
3 4 1 3 1 3
4 4 2 2 1 1
5 4 2 2 1 2
6 4 2 2 2 1
7 4 2 2 2 2
8 4 3 1 1 1
9 4 3 1 2 1
10 4 3 1 3 1
To apply this function to the values 3 to 40:
lapply(3:40, myfunc)
This will return a list of data frames.
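If you would rather have a single data frame than a list, stack the pieces afterwards, e.g.:

allData <- do.call(rbind, lapply(3:40, myfunc))
head(allData)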

Calculating the occurrences of numbers in the subsets of a data.frame

I have a data frame in R similar to the one below. My real 'df' data frame is much bigger, but I want to keep the example as simple as possible.
So here's the data frame.
id <-c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3)
a <-c(3,1,3,3,1,3,3,3,3,1,3,2,1,2,1,3,3,2,1,1,1,3,1,3,3,3,2,1,1,3)
b <-c(3,2,1,1,1,1,1,1,1,1,1,2,1,3,2,1,1,1,2,1,3,1,2,2,1,3,3,2,3,2)
c <-c(1,3,2,3,2,1,2,3,3,2,2,3,1,2,3,3,3,1,1,2,3,3,1,2,2,3,2,2,3,2)
d <-c(3,3,3,1,3,2,2,1,2,3,2,2,2,1,3,1,2,2,3,2,3,2,3,2,1,1,1,1,1,2)
e <-c(2,3,1,2,1,2,3,3,1,1,2,1,1,3,3,2,1,1,3,3,2,2,3,3,3,2,3,2,1,3)
df <-data.frame(id,a,b,c,d,e)
df
Basically what I would like to do is to get the occurrences of numbers for each column (a,b,c,d,e) and for each id group (1,2,3) (for the latter grouping, see my column 'id').
So, for column 'a' and for id number '1' (see column 'id'), the code would be something like this:
as.numeric(table(df[1:10,2]))
##The results are:
[1] 3 7
Just to briefly explain my results: in column 'a' (and regarding only those records which have number '1' in column 'id'), number '1' occurred 3 times and number '3' occurred 7 times.
Again, just to show another example. For column 'a' and for id number '2' (for the latter grouping, see again column 'id'):
as.numeric(table(df[11:20,2]))
##After running the code the results are:
[1] 4 3 3
Let me explain again: in column 'a' (regarding only those observations which have number '2' in column 'id'), number '1' occurred 4 times, number '2' occurred 3 times and number '3' occurred 3 times.
So this is what I would like to do: calculate the occurrences of numbers for each custom-defined subset (and then collect these values into a data frame). I know it is not a difficult task, but the PROBLEM is that I'm going to have to change the input 'df' data frame on a regular basis, and hence both the overall number of rows and columns might change over time.
What I have done so far is to separate the 'df' data frame by columns, like this:
for (z in (2:ncol(df))) assign(paste("df",z,sep="."),df[,z])
So df.2 will refer to df$a, df.3 will equal df$b, df.4 will equal df$c, etc. But I'm really stuck now and I don't know how to move forward.
Is there a proper, "automatic" way to solve this problem?
How about this:
> library(reshape)
> dftab <- table(melt(df,'id'))
> dftab
, , value = 1
variable
id a b c d e
1 3 8 2 2 4
2 4 6 3 2 4
3 4 2 1 5 1
, , value = 2
variable
id a b c d e
1 0 1 4 3 3
2 3 3 3 6 2
3 1 4 5 3 4
, , value = 3
variable
id a b c d e
1 7 1 4 5 3
2 3 1 4 2 4
3 5 4 4 2 5
So to get the number of '3's in column 'a' for id group '1',
you could just do
> dftab['1','a','3']
[1] 7
A combination of tapply and apply can create the data you want:
tapply(df$id, df$id, function(x) apply(df[df$id == x[1], -1], 2, table))
However, when a group doesn't contain all the values in some column, as with column 'a' in group 1 (no '2' occurs there), the result for that id group will be a list rather than a nice table (matrix).
$`1`
$`1`$a
1 3
3 7
$`1`$b
1 2 3
8 1 1
$`1`$c
1 2 3
2 4 4
$`1`$d
1 2 3
2 3 5
$`1`$e
1 2 3
4 3 3
$`2`
a b c d e
1 4 6 3 2 4
2 3 3 3 6 2
3 3 1 4 2 4
$`3`
a b c d e
1 4 2 1 5 1
2 1 4 5 3 4
3 5 4 4 2 5
I'm sure someone will have a more elegant solution than this, but you can cobble it together with a simple function and dlply from the plyr package.
ColTables <- function(df) {
  counts <- list()
  for (a in names(df)[names(df) != "id"]) {
    counts[[a]] <- table(df[a])
  }
  return(counts)
}
library(plyr)
results <- dlply(df, "id", ColTables)
This gets you back a list - the first "layer" of the list is the id variable; the second holds the table results for each column for that id. For example:
> results[['2']]['a']
$a
1 2 3
4 3 3
For id variable = 2, column = a, per your above example.
One way to do it is using the aggregate function, but you have to add a column to your data frame first:
> df$freq <- 0
> aggregate(freq~a+id,df,length)
a id freq
1 1 1 3
2 3 1 7
3 1 2 4
4 2 2 3
5 3 2 3
6 1 3 4
7 2 3 1
8 3 3 5
Of course you can write a function to do it, so it is easier to use repeatedly, and you don't have to add a column to your actual data frame:
> frequency <- function(df,groups) {
+ relevant <- df[,groups]
+ relevant$freq <- 0
+ aggregate(freq~.,relevant,length)
+ }
> frequency(df,c("b","id"))
b id freq
1 1 1 8
2 2 1 1
3 3 1 1
4 1 2 6
5 2 2 3
6 3 2 1
7 1 3 2
8 2 3 4
9 3 3 4
You didn't say how you'd like the data. The by function might give you the output you like.
by(df, df$id, function(x) lapply(x[,-1], table))
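For completeness, a package-free sketch of the same counts: build the long format by hand and tabulate once; ftable merely flattens the resulting three-way table for printing (the names 'variable' and 'value' are arbitrary here):

long <- data.frame(id       = rep(df$id, times = ncol(df) - 1),
                   variable = rep(names(df)[-1], each = nrow(df)),
                   value    = unlist(df[-1], use.names = FALSE))
ftable(table(long))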
