Relabel samples in kmeans results considering the order of centers - r

I am using kmeans to cluster my data, and I have a plan for relabelling the produced result: I want to relabel the samples based on the ordered centres. Consider the following example:
a = c("a","b","c","d","e","F","i","j","k","l","m","n")
b = c(1,2,3,20,21,21,40,41,42,4,23,50)
mydata = data.frame(id=a,amount=b)
result = kmeans(mydata$amount,3,nstart=10)
Here is the result:
result$cluster
2 2 2 3 3 3 1 1 1 2 3 1
result$centers
1 43.25
2 2.50
3 21.25
mydata = data.frame(mydata, label = result$cluster)
mydata
id amount label
1 a 1 2
2 b 2 2
3 c 3 2
4 d 20 3
5 e 21 3
6 F 21 3
7 i 40 1
8 j 41 1
9 k 42 1
10 l 4 2
11 m 23 3
12 n 50 1
What I am looking for is sorting the centres and producing the labels accordingly:
1 2.50
2 21.25
3 43.25
so that the sample labels become:
1 1 1 2 2 2 3 3 3 1 2 3
and the result should be:
id amount label
1 a 1 1
2 b 2 1
3 c 3 1
4 d 20 2
5 e 21 2
6 F 21 2
7 i 40 3
8 j 41 3
9 k 42 3
10 l 4 1
11 m 23 2
12 n 50 3
I think it is possible to do this by ordering the centres and, for each sample, taking the index of the nearest centre as that sample's label.
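In code, that idea would look something like this (a sketch added for illustration, using the objects above):
sorted_centres <- sort(result$centers)
apply(abs(outer(mydata$amount, sorted_centres, "-")), 1, which.min)
# [1] 1 1 1 2 2 2 3 3 3 1 2 3 (given the centres shown above)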
Is there another way that R can do it automatically?

One idea is to create a named vector by matching your centers with the sorted centers. Then match the vector with mydata$label and replace with the names of the vector, i.e.
i1 <- setNames(match(sort(result$centers), result$centers), rownames(result$centers))
as.numeric(names(i1)[match(mydata$label, i1)])
# [1] 1 1 1 2 2 2 3 3 3 1 2 3

You can use a for loop, if you don't mind loops (note it runs over the centres, not the samples):
cls <- result$cluster
for (i in 1:nrow(result$centers))
  result$cluster[cls == order(result$centers)[i]] <- i
result$cluster
#[1] 1 1 1 2 2 2 3 3 3 1 2 3
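A compact alternative (a sketch of my own, not from the answers above): rank(result$centers) gives each centre's position in the sorted order, so indexing it by the cluster vector does the relabelling in one step.
rank(result$centers)[result$cluster]
# [1] 1 1 1 2 2 2 3 3 3 1 2 3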

Related

How to add a column with repeating but changing sequence?

I'm trying to add a column with a repeating sequence, but one that changes for each group. In the example data, the group is the id column.
data <- tibble::expand_grid(id = 1:12, condition = c("a", "b", "c"))
data
id condition
1 a
1 b
1 c
2 a
2 b
2 c
3 a
3 b
3 c
... and so on
I'd like to add a column called order that cycles through permutations of 1:3, like 1 2 3, 2 3 1, 3 1 2, 1 3 2, 2 1 3, 3 2 1 — one permutation per id.
In the end, the desired output will look like this
id condition order
1 a 1
1 b 2
1 c 3
2 a 2
2 b 3
2 c 1
3 a 3
3 b 1
3 c 2
... and so on
I'm looking for a simple mutate solution or base R solution. I tried generating a list of combinations but I'm not sure how to create a variable from that.
You can use perms from package pracma to generate all permutations (the 6 permutations flatten to 18 values, which cbind() recycles over the 36 rows), e.g.,
data %>%
  cbind(order = c(t(pracma::perms(1:3))))
which gives
id condition order
1 1 a 3
2 1 b 2
3 1 c 1
4 2 a 3
5 2 b 1
6 2 c 2
7 3 a 2
8 3 b 3
9 3 c 1
10 4 a 2
11 4 b 1
12 4 c 3
13 5 a 1
14 5 b 2
15 5 c 3
16 6 a 1
17 6 b 3
18 6 c 2
19 7 a 3
20 7 b 2
21 7 c 1
22 8 a 3
23 8 b 1
24 8 c 2
25 9 a 2
26 9 b 3
27 9 c 1
28 10 a 2
29 10 b 1
30 10 c 3
31 11 a 1
32 11 b 2
33 11 c 3
34 12 a 1
35 12 b 3
36 12 c 2
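If you want to stay in base R (a sketch of my own, not from the answer above; it builds the 6 permutations with expand.grid instead of pracma):
# all 3! = 6 permutations of 1:3, one per row
p <- expand.grid(a = 1:3, b = 1:3, c = 1:3)
p <- as.matrix(p[apply(p, 1, function(r) length(unique(r)) == 3), ])
# flatten row-wise and recycle over the rows of data
data$order <- rep_len(c(t(p)), nrow(data))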

Conditional difference between two data.frame columns

I have a tidy data.frame of experimental data with subject IDs who were measured three times (Trial) at a varying(!) number of time points (Session) in two different conditions (Direction) on a dependent continuous variable, say LC:
set.seed(5)
nSubjects <- 4
nDirections <- 2
nTrials <- 3
# Between 1 and 3 sessions per subject:
nSessions <- round(runif(nSubjects, min = 1, max = 3))
mydat <- data.frame(ID = do.call(rep, args = list(1:nSubjects,
                                                  times = nSessions * nDirections * nTrials)),
                    Session = rep(sequence(nSessions), each = nDirections * nTrials),
                    Trial = rep(rep(1:nTrials, each = nDirections), times = sum(nSessions)),
                    Direction = rep(c("up", "down"), times = nTrials * sum(nSessions)),
                    LC = 1:(nDirections * nTrials * sum(nSessions)))
What I would like to calculate is a vector of length nrow(mydat) that contains, for each row, the difference in LC between the current session and the first session for the same ID, Trial and Direction. In other words, from each (absolute) LC score, the (absolute) LC from Session == 1 of the same ID, Trial and Direction gets subtracted, like this (for the sake of simplicity I chose LC to be monotonically increasing):
# ID Session Trial Direction LC LC_diff
# 7 2 1 1 up 7 0
# 8 2 1 2 down 8 0
# 9 2 1 3 up 9 0
# 10 2 1 1 down 10 0
# 11 2 1 2 up 11 0
# 12 2 1 3 down 12 0
# 13 2 2 1 up 13 6
# 14 2 2 2 down 14 6
# 15 2 2 3 up 15 6
# 16 2 2 1 down 16 6
# 17 2 2 2 up 17 6
# 18 2 2 3 down 18 6
I thought the following code would yield the desired result:
library(dplyr)
ordered <- group_by(mydat, ID, Session, Trial, Direction)
mydat$LC_diff <- summarise(ordered,
                           Diff = sum(abs(LC[Trial != 1]),
                                      -abs(LC[Trial == 1])))$Diff
But, alas:
mydat[7:18, ]
# ID Session Trial Direction LC LC_diff
# 7 2 1 1 up 7 -8
# 8 2 1 2 down 8 -7
# 9 2 1 3 up 9 10
# 10 2 1 1 down 10 9
# 11 2 1 2 up 11 12
# 12 2 1 3 down 12 11
# 13 2 2 1 up 13 -14
# 14 2 2 2 down 14 -13
# 15 2 2 3 up 15 16
# 16 2 2 1 down 16 15
# 17 2 2 2 up 17 18
# 18 2 2 3 down 18 17
I am at a complete loss here and would appreciate any pointers to where my code is wrong.
I'm not sure this is what you meant, but with data.table it would be like this:
library(data.table)
setDT(mydat)[, new := abs(LC) - abs(LC[1]), by = .(ID, Trial, Direction)]
mydat[ID==2,]
ID Session Trial Direction LC new
1: 2 1 1 up 7 0
2: 2 1 1 down 8 0
3: 2 1 2 up 9 0
4: 2 1 2 down 10 0
5: 2 1 3 up 11 0
6: 2 1 3 down 12 0
7: 2 2 1 up 13 6
8: 2 2 1 down 14 6
9: 2 2 2 up 15 6
10: 2 2 2 down 16 6
11: 2 2 3 up 17 6
12: 2 2 3 down 18 6
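For what it's worth, the original dplyr attempt fails for two reasons: summarise() collapses each group to a single row (and returns the groups in sorted order, so the values no longer line up with mydat's rows), and it conditions on Trial == 1 instead of Session == 1. A grouped mutate() keeps one value per row; here is a sketch of my own, not from the answer:
library(dplyr)
mydat %>%
  group_by(ID, Trial, Direction) %>%
  mutate(LC_diff = abs(LC) - abs(LC[Session == 1])) %>%
  ungroup()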

Randomly Assign Integers in R within groups without replacement

I am running a study with two experiments: experiment_1 and experiment_2. Each experiment has 5 different treatments (i.e. 1, 2, 3, 4, 5). We are trying to randomly assign the treatments within groups.
We would like to do this via sampling without replacement, iteratively within each group, to ensure the treatment assignment is as balanced as possible (e.g. we don't want to end up with 4 subjects in group 1 assigned to treatment 2 and no one assigned to treatment 1). So if a group has 23 subjects, we want to split the respondents into 4 subgroups of 5 and 1 subgroup of 3. We then want to sample without replacement within the first subgroup of 5, so everyone gets assigned one of the treatments; do the same for the second, third, and fourth subgroups of 5; and sample without replacement within the final subgroup of 3. That way every treatment is guaranteed to be assigned to at least 4 subjects, and 3 treatments are assigned to 5 subjects, within this group. We would like to do this for all the groups and for both experiments. The resultant output would look something like this...
group experiment_1 experiment_2
[1,] 1 5 3
[2,] 1 3 2
[3,] 1 4 4
[4,] 1 1 5
[5,] 1 2 1
[6,] 1 2 3
[7,] 1 4 1
[8,] 1 3 2
[9,] 2 5 5
[10,] 2 1 4
[11,] 2 3 4
[12,] 2 1 5
[13,] 2 2 1
. . . .
. . . .
. . . .
I know how to use the sample function, but am unsure how to sample without replacement within each group, so that our output corresponds to above described procedure. Any help would be appreciated.
I think we just need to shuffle the sample IDs; see this example:
set.seed(124)
# prepare groups and samples (shuffled)
df <- data.frame(group = sort(rep(1:3, 9)),
                 sampleID = sample(1:27, 27))
# treatments repeated to the number of rows of df
df$ex1 <- rep(c(1, 2, 3, 4, 5), ceiling(nrow(df) / 5))[1:nrow(df)]
df$ex2 <- rep(c(2, 3, 4, 5, 1), ceiling(nrow(df) / 5))[1:nrow(df)]
df <- df[order(df$group, df$sampleID), ]
# check treatment distribution
with(df,table(group,ex1))
# ex1
# group 1 2 3 4 5
# 1 2 2 2 2 1
# 2 2 2 2 1 2
# 3 2 2 1 2 2
with(df,table(group,ex2))
# ex2
# group 1 2 3 4 5
# 1 1 2 2 2 2
# 2 2 2 2 2 1
# 3 2 2 2 1 2
How about this function:
f <- function(n, m) sample(c(rep(1:m, n %/% m), sample(1:m, n %% m)), n)
Here "n" is the group size and "m" the number of treatments. Each treatment must be contained at least n %/% m times in the group; the treatment numbers of the remaining n %% m group members are assigned arbitrarily without repetition. The vector c(rep(1:m, n %/% m), sample(1:m, n %% m)) contains these treatment numbers. Finally, the outer "sample" call shuffles them.
> f(8,5)
[1] 5 3 1 5 4 2 2 1
> f(8,5)
[1] 4 5 3 4 2 2 1 1
> f(8,5)
[1] 4 2 1 5 3 5 2 3
Here is a function that creates a dataframe, using the above function:
Plan <- function(groupSizes, numExp = 2, numTreatment = 5)
{
  numGroups <- length(groupSizes)
  df <- data.frame(group = rep(1:numGroups, groupSizes))
  for (e in 1:numExp)
  {
    df <- cbind(df, unlist(lapply(groupSizes, function(n) { f(n, numTreatment) })))
    colnames(df)[e + 1] <- sprintf("Exp_%i", e)
  }
  return(df)
}
Example:
> P <- Plan(c(8,23,13,19))
> P
group Exp_1 Exp_2
1 1 4 1
2 1 1 4
3 1 2 2
4 1 2 1
5 1 3 5
6 1 5 5
7 1 1 2
8 1 3 3
9 2 5 1
10 2 2 1
11 2 5 2
12 2 1 2
13 2 2 1
14 2 1 4
15 2 3 5
16 2 5 3
17 2 2 4
18 2 5 4
19 2 2 5
20 2 1 1
21 2 4 2
22 2 3 3
23 2 4 3
24 2 2 5
25 2 3 3
26 2 5 2
27 2 1 5
28 2 3 4
29 2 4 4
30 2 4 2
31 2 4 3
32 3 2 5
33 3 5 3
34 3 5 1
35 3 5 1
36 3 2 5
37 3 4 4
38 3 1 4
39 3 3 2
40 3 3 2
41 3 3 3
42 3 1 1
43 3 4 2
44 3 4 4
45 4 5 1
46 4 3 1
47 4 1 2
48 4 1 5
49 4 3 3
50 4 3 1
51 4 4 5
52 4 2 4
53 4 5 3
54 4 2 1
55 4 4 2
56 4 2 5
57 4 4 4
58 4 5 3
59 4 5 4
60 4 1 2
61 4 2 5
62 4 3 2
63 4 4 4
Check the distribution:
> with(P,table(group,Exp_1))
Exp_1
group 1 2 3 4 5
1 2 2 2 1 1
2 4 5 4 5 5
3 2 2 3 3 3
4 3 4 4 4 4
> with(P,table(group,Exp_2))
Exp_2
group 1 2 3 4 5
1 2 2 1 1 2
2 4 5 5 5 4
3 3 3 2 3 2
4 4 4 3 4 4
The design of efficient experiments is a science of its own, and there are a few R packages dealing with this issue:
https://cran.r-project.org/web/views/ExperimentalDesign.html
I am afraid your approach is not optimal in terms of resources, no matter how you create the samples. However, this might help:
n <- 23
group <- sort(rep(1:5, ceiling(n/5)))[1:n]
exp1 <- rep(NA, length(group))
for (i in 1:max(group)) {
  exp1[which(group == i)] <- sample(1:5)[1:sum(group == i)]
}
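A quick check of the resulting balance, in the style of the other answers (my addition; the exact counts depend on the random draw):
table(group, exp1)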
Not exactly sure if this meets all your constraints, but you could use the randomizr package:
library(randomizr)
experiment_1 <- complete_ra(N = 23, num_arms = 5)
experiment_2 <- block_ra(experiment_1, num_arms = 5)
table(experiment_1)
table(experiment_2)
table(experiment_1, experiment_2)
Produces output like this:
> table(experiment_1)
experiment_1
T1 T2 T3 T4 T5
4 5 5 4 5
> table(experiment_2)
experiment_2
T1 T2 T3 T4 T5
6 3 6 4 4
> table(experiment_1, experiment_2)
experiment_2
experiment_1 T1 T2 T3 T4 T5
T1 2 0 1 1 0
T2 1 1 1 1 1
T3 1 1 1 1 1
T4 1 0 2 0 1
T5 1 1 1 1 1
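Here block_ra() uses the experiment_1 assignments as blocks, so the experiment_2 arms are balanced within each experiment_1 arm as far as the block sizes allow, which is why the cross-table above is close to uniform.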

Finding Area Under the Curve (AUC) in R by Trapezoidal Rule

I have the sample list of data frames shown below (each has ID, yobs, and x, the independent variable), and I want to find the AUC (trapezoidal rule) for each case (ID),
so that my output (master data frame) looks like the one shown at the end.
Can anybody suggest an efficient way of doing this? (I have a high number of rows for each ID.)
Thank you
# Some made-up code for only one data frame (dataA)
Y1=c(0,2,5,7,9)
Y2=c(0,1,3,8,11)
Y3=c(0,4,8,9,12,14,18)
t1=c(0:4)
t2=c(0:4)
t3=c(0:6)
a1=data.frame(ID=1,y=Y1,x=t1)
a2=data.frame(ID=2,y=Y2,x=t2)
a3=data.frame(ID=3,y=Y3,x=t3)
data=rbind(a1,a2,a3)
# dataA (just to show)
ID obs time
1 1 0 0
2 1 2 1
3 1 5 2
4 1 7 3
5 1 9 4
6 2 0 0
7 2 1 1
8 2 3 2
9 2 8 3
10 2 11 4
11 3 0 0
12 3 4 1
13 3 8 2
14 3 9 3
15 3 12 4
16 3 14 5
17 3 18 6
# dataB (just to show)
ID obs time
1 1 0 0
2 1 2 1
3 1 5 2
4 1 7 3
5 1 9 4
6 2 0 0
7 2 1 1
8 2 3 2
# dataC (just to show)
ID obs time
1 1 0 0
2 1 2 1
3 1 5 2
4 1 7 3
5 1 9 4
6 2 0 0
7 2 1 1
8 2 3 2
##Desired output
ID AUC
dataA 1 XX
dataA 2 XX
dataA 3 XX
dataB 1 XX
dataB 2 XX
dataC 1 XX
dataC 2 XX
Here are two other ways. The first uses integrate(...) on a function defined by the linear interpolation between the points. The second uses the trapz(...) function described in the comment from #nrussel.
f <- function(x, df) approxfun(df)(x)
sapply(split(data, data$ID),
       function(df) integrate(f, min(df$x), max(df$x), df[3:2])$value)
# 1 2 3
# 18.5 17.5 56.0
library(caTools)
sapply(split(data,data$ID),function(df) trapz(df$x,df$y))
# 1 2 3
# 18.5 17.5 56.0
I'm guessing something like this would work:
calcauc <- function(data) {
  psum <- function(x) rowSums(embed(x, 2))
  stack(lapply(split(data, data$ID),
               function(z) with(z, sum(psum(y) * diff(x) / 2))))
}
calcauc(data)
# values ind
# 1 18.5 1
# 2 17.5 2
# 3 56.0 3
Of course, normally x and y values are between 0 and 1 for ROC curves, which is why we seem to have such large "AUC" values here; really this is just the area of the polygon underneath the line defined by the points in the data set.
The psum function is just a helper to calculate pair-wise sums, which appear in the trapezoid formula AUC = sum of (y[i] + y[i+1]) * (x[i+1] - x[i]) / 2 over consecutive points.
Basically we use split() to look at one ID at a time, calculate the area for each ID, and then use stack() to bring everything back into one data.frame.
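As a quick sanity check (my own arithmetic, using the made-up data): for ID 1 the trapezoids are (0+2)/2 + (2+5)/2 + (5+7)/2 + (7+9)/2 = 1 + 3.5 + 6 + 8 = 18.5, which matches the output above.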

Calculations for grouped data in R

I'm stuck on the following problem in R and was hoping someone had a quick solution.
I have two sets of data, A and B, where A contains data for a control group and B a case group. I have measures for the same variables for each group.
Within A and B are subgroups - and they are in some instances paired between A and B - let's say they are siblings where one or more can be a case and one or more a control.
The data look something like this:
SET A:
Source Area group pch pch2 col col2 group2
R1-1 1983447 1 0 16 1 1 1
R1-3 1400362 1 0 16 1 1 1
R3-4 2834393 2 1 16 2 2 1
R4-2 2232820 3 2 16 3 3 1
R4-5 1713796 3 2 16 3 3 1
R4-6 1525740 3 2 16 3 3 1
R4-7 1182300 3 2 16 3 3 1
SET B:
Source Area group pch pch2 col col2 group2
R1-2 1246124 1 0 16 1 1 2
R3-1 1627610 2 1 16 2 2 2
R3-2 1401600 2 1 16 2 2 2
R4-1 1367146 3 2 16 3 3 2
R4-3 1764125 3 2 16 3 3 2
R4-4 1299864 3 2 16 3 3 2
Source is ID, Area is the variable of interest, group is group, and the rest are additional variables that are not of interest here.
What I'd like to do is calculate the relative Area for each of the individuals in set B - i.e., relative to the mean Area of their siblings in set A. I'd like this value to appear as a separate column in set B (under relArea in the sample below). The output would therefore look like this:
Output (Set B):
Source Area group relArea pch pch2 col col2 group2
R1-2 1246124 1 0.736521476 0 16 1 1 2
R3-1 1627610 2 0.574235824 1 16 2 2 2
R3-2 1401600 2 0.494497411 1 16 2 2 2
R4-1 1367146 3 0.821768097 2 16 3 3 2
R4-3 1764125 3 1.06038539 2 16 3 3 2
R4-4 1299864 3 0.781326037 2 16 3 3 2
Finally, if an individual in set B does not have a sibling in set A, then his relArea value would be the Area relative to the average Area of all the controls (i.e., all measurements in set A).
Any help with this would be much appreciated.
thanks,
Bjorn
You could compute the average area per group in Set A with aggregate and then add your new column:
seta = read.table(text="Source Area group pch pch2 col col2 group2
R1-1 1983447 1 0 16 1 1 1
R1-3 1400362 1 0 16 1 1 1
R3-4 2834393 2 1 16 2 2 1
R4-2 2232820 3 2 16 3 3 1
R4-5 1713796 3 2 16 3 3 1
R4-6 1525740 3 2 16 3 3 1
R4-7 1182300 3 2 16 3 3 1 ", header=T)
setb = read.table(text="Source Area group pch pch2 col col2 group2
R1-2 1246124 1 0 16 1 1 2
R3-1 1627610 2 1 16 2 2 2
R3-2 1401600 2 1 16 2 2 2
R4-1 1367146 3 2 16 3 3 2
R4-3 1764125 3 2 16 3 3 2
R4-4 1299864 3 2 16 3 3 2", header=T)
grouped.area = aggregate(seta$Area, by=list(group=seta$group), mean)
setb$relArea = setb$Area / grouped.area$x[match(setb$group, grouped.area$group)]
setb$relArea
# [1] 0.7365215 0.5742358 0.4944974 0.8217681 1.0603854 0.7813260
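For the last requirement (a set B individual whose group has no siblings in set A), one option is to fall back to the overall set A mean. A sketch of my own, building on the objects above:
idx <- match(setb$group, grouped.area$group)
baseline <- ifelse(is.na(idx), mean(seta$Area), grouped.area$x[idx])
setb$relArea <- setb$Area / baseline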
