Make duplicate rows as replicates in an R data frame

I have a data frame with duplicated rows: one continuous-variable column and two factor columns (0/1). The goal is to find the duplicated rows and identify them as replicates in a new column.
Here is the structure of the data frame
cont.var fact1 fact2
1 1.0 1 0
2 1.0 0 1
3 1.5 1 0
4 1.5 1 0
5 1.5 0 1
6 1.5 0 1
The rules are:
If cont.var has the value 1.0 in two rows but the rows differ in fact1 and fact2, the two rows are assigned different replicate identifiers.
If cont.var has the value 1.5 and fact1/fact2 are also the same for successive rows, those rows are given the same replicate identifier.
Expected Output
cont.var fact1 fact2 rep
1 1.0 1 0 1
2 1.0 0 1 2
3 1.5 1 0 3
4 1.5 1 0 3
5 1.5 0 1 4
6 1.5 0 1 4
What I have tried
library(dplyr)

sample.df <- data.frame(
  cont.var = c(1, 1, 1.5, 1.5, 1.5, 1.5, 2, 2, 2, 3),
  fact1 = c(1, 0, 1, 1, 0, 0, 1, 1, 0, 1),
  fact2 = c(0, 1, 0, 0, 1, 1, 0, 0, 1, 0)
)

sample.df %>%
  group_by(cont.var, fact1, fact2) %>%
  mutate(replicate = make.unique(as.character(cont.var), "_"))
Incorrect Output
I would expect row 1 and row 2 to have different replicate counts.
I would expect the replicate count for row 3 == row 4 and row 5 == row 6, but row 5 != row 3.
cont.var fact1 fact2 replicate
1 1.0 1 0 1
2 1.0 0 1 1
3 1.5 1 0 1.5
4 1.5 1 0 1.5_1
5 1.5 0 1 1.5
6 1.5 0 1 1.5_1
I couldn't find a straightforward solution to this; I would really appreciate any help.
Thanks in advance.

You can use data.table::rleid:
library(dplyr)
df %>%
  mutate(rleid = data.table::rleid(cont.var, fact1, fact2))
cont.var fact1 fact2 rleid
1 1.0 1 0 1
2 1.0 0 1 2
3 1.5 1 0 3
4 1.5 1 0 3
5 1.5 0 1 4
6 1.5 0 1 4
If you have dplyr's development version (consecutive_id was released in dplyr 1.1.0), you can also use consecutive_id, the dplyr equivalent of data.table::rleid:
# devtools::install_github("tidyverse/dplyr")
library(dplyr)

df %>%
  mutate(rleid2 = consecutive_id(cont.var, fact1, fact2))
Finally, a base R option would be to match the rows by unique values:
df$rleid <- match(do.call(paste, df), do.call(paste, unique(df)))
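The match trick works by pasting each row into a single key string and looking up each key's position among the unique keys, so ids follow order of first appearance. A quick illustration, assuming df holds the six example rows:

keys <- do.call(paste, df) # "1 1 0" "1 0 1" "1.5 1 0" "1.5 1 0" "1.5 0 1" "1.5 0 1"
match(keys, unique(keys))  # 1 2 3 3 4 4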

Another dplyr method, in case you're already grouped:
quux %>%
  group_by(cont.var, fact1, fact2) %>%
  mutate(rep = group_indices()) %>%
  ungroup()
# # A tibble: 6 x 4
# cont.var fact1 fact2 rep
# <dbl> <int> <int> <int>
# 1 1 1 0 2
# 2 1 0 1 1
# 3 1.5 1 0 4
# 4 1.5 1 0 4
# 5 1.5 0 1 3
# 6 1.5 0 1 3
While the actual values are not the same, the spirit of your request is retained.
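Note that calling group_indices() inside mutate() is deprecated as of dplyr 1.0.0; cur_group_id() is the drop-in replacement and yields the same ids here:

quux %>%
  group_by(cont.var, fact1, fact2) %>%
  mutate(rep = cur_group_id()) %>% # non-deprecated equivalent of group_indices()
  ungroup()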

Here is another base R solution:
sample.df <- data.frame(
  cont.var = c(1, 1, 1.5, 1.5, 1.5, 1.5, 2, 2, 2, 3),
  fact1 = c(1, 0, 1, 1, 0, 0, 1, 1, 0, 1),
  fact2 = c(0, 1, 0, 0, 1, 1, 0, 0, 1, 0)
)
sample.df$replicate <- cumsum(!duplicated(sample.df))
sample.df
#> cont.var fact1 fact2 replicate
#> 1 1.0 1 0 1
#> 2 1.0 0 1 2
#> 3 1.5 1 0 3
#> 4 1.5 1 0 3
#> 5 1.5 0 1 4
#> 6 1.5 0 1 4
#> 7 2.0 1 0 5
#> 8 2.0 1 0 5
#> 9 2.0 0 1 6
#> 10 3.0 1 0 7
EDIT
Note that cumsum(!duplicated(...)) only gives correct ids when identical rows are adjacent; a later, non-adjacent duplicate would get the running unique-row count at that point rather than the id of its first occurrence. Sort first to make the duplicates contiguous:
sample.df <- sample.df[with(sample.df, order(fact2,fact1,cont.var)),]
sample.df$replicate <- cumsum(!duplicated(sample.df))
sample.df
#> cont.var fact1 fact2 replicate
#> 1 1.0 1 0 1
#> 3 1.5 1 0 2
#> 4 1.5 1 0 2
#> 7 2.0 1 0 3
#> 8 2.0 1 0 3
#> 10 3.0 1 0 4
#> 2 1.0 0 1 5
#> 5 1.5 0 1 6
#> 6 1.5 0 1 6
#> 9 2.0 0 1 7
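If you would rather not reorder the data, the match()/paste() trick from the first answer assigns first-occurrence ids without sorting; a sketch (indexing only the three key columns, in case replicate was already added):

sample.df$replicate <- match(
  do.call(paste, sample.df[1:3]),
  unique(do.call(paste, sample.df[1:3]))
)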

Related

Counting Frequencies of Sequences

Suppose there are two students. Each student takes an exam multiple times (e.g. result_id = 1 is the first exam, result_id = 2 is the second exam, etc.). The student can either "pass" (1) or "fail" (0).
The data looks something like this:
library(data.table)

my_data = data.frame(
  id = c(1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2),
  results = c(0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0),
  result_id = c(1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 7, 8, 9)
)
my_data = setDT(my_data)
id results result_id
1: 1 0 1
2: 1 1 2
3: 1 0 3
4: 1 1 4
5: 1 0 5
6: 1 0 6
7: 2 1 1
8: 2 1 2
9: 2 1 3
10: 2 0 4
11: 2 1 5
12: 2 1 6
13: 2 0 7
14: 2 1 8
15: 2 0 9
I am interested in counting the number of times that a student passes an exam, given that the student passed the previous two exams.
I tried to do this with the following code:
my_data$current_exam = shift(my_data$results, 0)
my_data$prev_exam = shift(my_data$results, 1)
my_data$prev_2_exam = shift(my_data$results, 2)
# Count the number of exam results for each record
out <- my_data[!is.na(prev_exam), .(tally = .N), by = .(id, current_exam, prev_exam, prev_2_exam)]
out = na.omit(out)
My code produces the following results:
> out
id current_exam prev_exam prev_2_exam tally
1: 1 0 1 0 2
2: 1 1 0 1 1
3: 1 0 0 1 1
4: 2 1 0 0 1
5: 2 1 1 0 2
6: 2 1 1 1 1
7: 2 0 1 1 2
8: 2 1 0 1 2
9: 2 0 1 0 1
However, I do not think that my code is correct.
For example, with Student_ID = 2:
my code says that "Current_Exam = 1, Prev_Exam = 1, Prev_2_Exam = 0" happens 1 time, but looking at the actual data, this does not happen at all.
Can someone please show me what I am doing wrong and how I can correct this?
Note: I think that this should be the expected output:
> expected_output
id current_exam prev_exam prev_2_exam tally
1: 1 0 1 0 2
2: 1 1 0 1 1
3: 1 0 0 1 1
4: 2 1 0 0 1
5: 2 1 1 0 1
6: 2 1 1 1 1
7: 2 0 1 1 2
8: 2 1 0 1 2
9: 2 0 1 0 0
You did not account for the fact that you cannot shift the results across id boundaries without introducing NAs.
. <- my_data[order(my_data$id, my_data$result_id), ] # sort if needed
.$p1 <- ave(.$results, .$id, FUN = \(x) c(NA, x[-length(x)])) # lag 1 within id
.$p2 <- ave(.$p1, .$id, FUN = \(x) c(NA, x[-length(x)])) # lag 2 within id
aggregate(list(tally = .$p1), .[c("id", "results", "p1", "p2")], length)
# id results p1 p2 tally
#1 1 0 1 0 2
#2 2 0 1 0 1
#3 2 1 1 0 1
#4 1 0 0 1 1
#5 1 1 0 1 1
#6 2 1 0 1 2
#7 2 0 1 1 2
#8 2 1 1 1 1
. # the augmented data, for reference
# id results result_id p1 p2
#1 1 0 1 NA NA
#2 1 1 2 0 NA
#3 1 0 3 1 0
#4 1 1 4 0 1
#5 1 0 5 1 0
#6 1 0 6 0 1
#7 2 1 1 NA NA
#8 2 1 2 1 NA
#9 2 1 3 1 1
#10 2 0 4 1 1
#11 2 1 5 0 1
#12 2 1 6 1 0
#13 2 0 7 1 1
#14 2 1 8 0 1
#15 2 0 9 1 0
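Since the question's attempt already uses data.table, a minimal per-id fix of it is to shift within each id. A sketch, assuming the unmodified my_data from the question; note that combinations which never occur are simply absent, so zero tallies will not appear:

my_data[, c("prev_exam", "prev_2_exam") := shift(results, 1:2), by = id] # lags computed within id
out <- na.omit(my_data[, .(tally = .N), by = .(id, current_exam = results, prev_exam, prev_2_exam)])
out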
An option would be to use stats::filter to flag those who passed 3 times in a row:
cbind(., n=ave(.$results, .$id, FUN = \(x) filter(x, c(1,1,1), sides=1)))
# id results result_id n
#1 1 0 1 NA
#2 1 1 2 NA
#3 1 0 3 1
#4 1 1 4 2
#5 1 0 5 1
#6 1 0 6 1
#7 2 1 1 NA
#8 2 1 2 NA
#9 2 1 3 3
#10 2 0 4 2
#11 2 1 5 2
#12 2 1 6 2
#13 2 0 7 2
#14 2 1 8 2
#15 2 0 9 1
If only the number of times a student passes an exam after passing the previous two is needed:
sum(ave(.$results, .$id, FUN = \(x) filter(x, c(1,1,1))==3), na.rm=TRUE)
#[1] 1
sum(ave(.$results, .$id, FUN = \(x)
x==1 & c(x[-1], 0) == 1 & c(x[-1:-2], 0, 0) == 1))
#[1] 1
When trying to count events that happen in series, cumsum() comes in quite handy. As opposed to creating multiple lagged variables, this scales well to counts across a larger number of events:
library(tidyverse)
d <- my_data |>
  group_by(id) |> # group to cumulate within student only
  mutate(
    csum = cumsum(results), # cumulative sum of results
    i = csum - lag(csum, 3, 0) # subtract the cumulative sum from 3 observations before; this gives the number of exams passed in the current and previous 2 observations
  )
# Ungroup to get global count
d |>
  ungroup() |>
  count(i == 3) # count the number of cases where the number of exams passed within 3 observations equals 3
#> # A tibble: 2 × 2
#> `i == 3` n
#> <lgl> <int>
#> 1 FALSE 14
#> 2 TRUE 1
# Retaining the group gives counts by student
d |>
  count(i == 3) # same count as above, now per student
#> # A tibble: 3 × 3
#> # Groups: id [2]
#> id `i == 3` n
#> <dbl> <lgl> <int>
#> 1 1 FALSE 6
#> 2 2 FALSE 8
#> 3 2 TRUE 1
Since you provided the data as data.table, here is how to do the same in that ecosystem:
my_data[, csum := cumsum(results), by = .(id)]
my_data[, i := csum - shift(csum, 3, fill = 0), by = .(id)] # data.table's shift(), so this works even if dplyr is not attached
my_data[, .(n_cases = sum(i == 3)), by = id]
#> id n_cases
#> 1: 1 0
#> 2: 2 1
Here's an approach using dplyr. It uses the lag function to look back 1 and 2 results; if their sum together with the current result is 3, the condition is met. In the example you provided, the condition is met only once.
my_data %>%
  group_by(id) %>%
  mutate(threex = ifelse(results + lag(results, 1) + lag(results, 2) == 3, 1, 0)) %>%
  filter(!is.na(threex))
id results result_id threex
<dbl> <dbl> <dbl> <dbl>
1 1 0 3 0
2 1 1 4 0
3 1 0 5 0
4 1 0 6 0
5 2 1 3 1
6 2 0 4 0
7 2 1 5 0
8 2 1 6 0
9 2 0 7 0
10 2 1 8 0
11 2 0 9 0
If you then just want to capture the cases when the condition is met, add a filter.
my_data %>%
  group_by(id) %>%
  mutate(threex = ifelse(results + lag(results, 1) + lag(results, 2) == 3, 1, 0)) %>%
  filter(threex == 1)
id results result_id threex
<dbl> <dbl> <dbl> <dbl>
1 2 1 3 1
If you are looking to understand how many times the condition is met per id, you can do this.
my_data %>%
  group_by(id) %>%
  mutate(threex = ifelse(results + lag(results, 1) + lag(results, 2) == 3, 1, 0)) %>%
  filter(threex == 1) %>%
  select(id) %>%
  summarize(count = n())
id count
<dbl> <int>
1 2 1
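If all you need is the per-id count (including zeros for students who never meet the condition), the lags can be folded into a single summarize; a minimal sketch:

my_data %>%
  group_by(id) %>%
  summarize(count = sum(results & lag(results) & lag(results, 2), na.rm = TRUE))
# should give: id 1 -> 0, id 2 -> 1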

Use R to find values for which a condition is first met

Consider the following sample dataset. Id is an individual identifier.
rm(list = ls()); set.seed(1)
n <- 100
X <- rbinom(n, 1, 0.5) # binary covariate
j <- rep(1:n)
dat <- data.frame(id = 1:n, X)
ntp <- rep(4, n)
mat <- matrix(ncol = 3, nrow = 1)
m <- 0; w <- mat
for (l in ntp) {
  m <- m + 1
  ft <- seq(from = 2, to = 8, length.out = l)
  # ft <- seq(from = 1, to = 9, length.out = l)
  ft <- sort(ft)
  seq <- rep(ft, each = 2)
  seq <- c(0, seq, 10)
  matid <- cbind(matrix(seq, ncol = 2, nrow = l + 1, byrow = TRUE), m)
  w <- rbind(w, matid)
}
d <- data.frame(w[-1, ])
colnames(d) <- c("time1", "time2", "id")
D <- round(merge(d, dat, by = "id"), 2) # merge datasets
nr <- nrow(D)
D$Survival_time <- round(rexp(nr, 0.1) + 1, 3)
head(D,15)
id time1 time2 X Survival_time
1 1 0 2 0 21.341
2 1 2 4 0 18.987
3 1 4 6 0 4.740
4 1 6 8 0 13.296
5 1 8 10 0 6.397
6 2 0 2 0 10.566
7 2 2 4 0 2.470
8 2 4 6 0 14.907
9 2 6 8 0 8.620
10 2 8 10 0 13.376
11 3 0 2 1 45.239
12 3 2 4 1 11.545
13 3 4 6 1 11.352
14 3 6 8 1 19.760
15 3 8 10 1 7.547
How can I obtain the value at which Survival_time is less than time2 for the very first time per individual? I should end up with the following values:
id Survival_time
1 4.740
2 2.470
3 7.547
Also, how can I subset the data so that it stops, per individual, when this condition first occurs, i.e. obtain:
id time1 time2 X Survival_time
1 1 0 2 0 21.341
2 1 2 4 0 18.987
3 1 4 6 0 4.740
6 2 0 2 0 10.566
7 2 2 4 0 2.470
11 3 0 2 1 45.239
12 3 2 4 1 11.545
13 3 4 6 1 11.352
14 3 6 8 1 19.760
15 3 8 10 1 7.547
Using data.table
library(data.table)
setDT(D)[, .SD[seq_len(.N) <= which(Survival_time < time2)[1]], id]
Output:
id time1 time2 X Survival_time
1: 1 0 2 0 21.341
2: 1 2 4 0 18.987
3: 1 4 6 0 4.740
4: 2 0 2 0 10.566
5: 2 2 4 0 2.470
6: 3 0 2 1 45.239
7: 3 2 4 1 11.545
8: 3 4 6 1 11.352
9: 3 6 8 1 19.760
10: 3 8 10 1 7.547
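A variant of the same idea that avoids materializing .SD is the common .I idiom, computing the row numbers first and then subsetting once:

idx <- D[, .I[seq_len(.N) <= which(Survival_time < time2)[1]], by = id]$V1
D[idx]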
Slight variation:
library(dplyr)
D %>%                               # take D, and then
  group_by(id) %>%                  # group by id, and then
  filter(Survival_time < time2) %>% # keep Survival_time < time2, and then
  slice(1) %>%                      # keep the first row per id, and then
  ungroup()                         # ungroup
You can use:
library(dplyr)

D %>%
  group_by(id) %>%
  summarise(Survival_time = Survival_time[match(TRUE, Survival_time < time2)])

# Also possible with which.max:
# summarise(Survival_time = Survival_time[which.max(Survival_time < time2)])
# id Survival_time
# <int> <dbl>
#1 1 4.74
#2 2 2.47
#3 3 7.55
To select all rows up to that point, you may use:
D %>%
  group_by(id) %>%
  filter(row_number() <= match(TRUE, Survival_time < time2)) %>%
  ungroup()
# id time1 time2 X Survival_time
# <int> <int> <int> <int> <dbl>
# 1 1 0 2 0 21.3
# 2 1 2 4 0 19.0
# 3 1 4 6 0 4.74
# 4 2 0 2 0 10.6
# 5 2 2 4 0 2.47
# 6 3 0 2 1 45.2
# 7 3 2 4 1 11.5
# 8 3 4 6 1 11.4
# 9 3 6 8 1 19.8
#10 3 8 10 1 7.55
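For comparison, a base R sketch of the same "keep rows up to the first hit" idea; take_until is just an illustrative helper name, and the sketch assumes (as in this example) that every id eventually meets the condition:

take_until <- function(d) d[seq_len(match(TRUE, d$Survival_time < d$time2)), ]
do.call(rbind, lapply(split(D, D$id), take_until))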

Split up grouped binomial data in r

I have data that looks like this
samplesize <- 6
group <- c(1,2,3)
total <- rep(samplesize,length(group))
outcomeTrue <- c(2,1,3)
df <- data.frame(group,total,outcomeTrue)
and would like my data to look like this
group2 <- c(rep(1,6),rep(2,6),rep(3,6))
outcomeTrue2 <- c(rep(1,2),rep(0,6-2),rep(1,1),rep(0,6-1),rep(1,3),rep(0,6-3))
df2 <- data.frame(group2,outcomeTrue2)
That is to say, I have binary data where I am told the total number of observations and the number of successful observations, but I would prefer it organised as individual observations with their explicit outcome as 0 or 1.
Is there an easy way to do this in r, or will I need to write a loop to automate this myself?
Here is one option with the tidyverse. We uncount to expand the rows using the 'total' column, then, grouped by 'group', create a binary column from a logical condition based on row_number() and the value of 'outcomeTrue':
library(tidyverse)
library(tidyverse)

df %>%
  uncount(total) %>%
  group_by(group) %>%
  mutate(outcomeTrue = as.integer(row_number() <= outcomeTrue[1]))
# A tibble: 18 x 2
# Groups: group [3]
# group outcomeTrue
# <dbl> <int>
# 1 1 1
# 2 1 1
# 3 1 0
# 4 1 0
# 5 1 0
# 6 1 0
# 7 2 1
# 8 2 0
# 9 2 0
#10 2 0
#11 2 0
#12 2 0
#13 3 1
#14 3 1
#15 3 1
#16 3 0
#17 3 0
#18 3 0
You are almost there: just use the group2 variable with the "[" function in the row (i) position:
df[group2, ]
group total outcomeTrue
1 1 6 2
1.1 1 6 2
1.2 1 6 2
1.3 1 6 2
1.4 1 6 2
1.5 1 6 2
2 2 6 1
2.1 2 6 1
2.2 2 6 1
2.3 2 6 1
2.4 2 6 1
2.5 2 6 1
3 3 6 3
3.1 3 6 3
3.2 3 6 3
3.3 3 6 3
3.4 3 6 3
3.5 3 6 3
When a number or character value that matches a row name is put in the row position of "[", the entire row is replicated.
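If you also want the explicit 0/1 outcome column in the same base R spirit, rep() over the success/failure counts builds it without a loop; a sketch using the question's df:

group2 <- rep(df$group, df$total)                  # replicate each group's id 'total' times
outcome2 <- unlist(Map(function(k, n) rep(c(1, 0), c(k, n - k)), df$outcomeTrue, df$total))
df2 <- data.frame(group2, outcome2)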
Here is a base R solution.
do.call(rbind, lapply(split(df, df$group), function(x) {
  data.frame(group2 = x$group,
             outcome2 = rep(c(1, 0), times = c(x$outcomeTrue, x$total - x$outcomeTrue)))
}))
# group2 outcome2
# 1.1 1 1
# 1.2 1 1
# 1.3 1 0
# 1.4 1 0
# 1.5 1 0
# 1.6 1 0
# 2.1 2 1
# 2.2 2 0
# 2.3 2 0
# 2.4 2 0
# 2.5 2 0
# 2.6 2 0
# 3.1 3 1
# 3.2 3 1
# 3.3 3 1
# 3.4 3 0
# 3.5 3 0
# 3.6 3 0

lagging variables by day and creating new row in the process

I'm trying to lag variables by day, but many days don't have an observation on the previous day, so I need to add extra rows in the process. dplyr gets me close, but I need a way to add the missing rows, and I have many thousands of cases. Any thoughts would be much appreciated.
ID<-c(1,1,1,1,2,2)
day<-c(0,1,2,5,1,3)
v<-c(2.2,3.4,1.2,.8,6.4,2)
dat1<-as.data.frame(cbind(ID,day,v))
dat1
ID day v
1 1 0 2.2
2 1 1 3.4
3 1 2 1.2
4 1 5 0.8
5 2 1 6.4
6 2 3 2.0
Using dplyr gets me here:
dat2 <-
  dat1 %>%
  group_by(ID) %>%
  mutate(v.L = dplyr::lead(v, n = 1, default = NA))
dat2
ID day v v.L
1 1 0 2.2 3.4
2 1 1 3.4 1.2
3 1 2 1.2 0.8
4 1 5 0.8 NA
5 2 1 6.4 2.0
6 2 3 2.0 NA
But I need to get here:
ID2<-c(1,1,1,1,1,2,2,2)
day2<-c(0,1,2,4,5,1,2,3)
v2<-c(2.2,3.4,1.2,NA,.8,6.4,NA,2)
v2.L<-c(3.4,1.2,NA,.8,NA,NA,2,NA)
dat3<-as.data.frame(cbind(ID2,day2,v2,v2.L))
dat3
ID2 day2 v2 v2.L
1 1 0 2.2 3.4
2 1 1 3.4 1.2
3 1 2 1.2 NA
4 1 4 NA 0.8
5 1 5 0.8 NA
6 2 1 6.4 NA
7 2 2 NA 2.0
8 2 3 2.0 NA
You could use complete and full_seq from the tidyr package to complete the sequence of days. You'd then need to remove the rows that have NA in both v and v.L:
library(dplyr)
library(tidyr)
dat2 = dat1 %>%
  group_by(ID) %>%
  complete(day = full_seq(day, 1)) %>%
  mutate(v.L = lead(v)) %>%
  filter(!(is.na(v) & is.na(v.L)))
ID day v v.L
<dbl> <dbl> <dbl> <dbl>
1 0 2.2 3.4
1 1 3.4 1.2
1 2 1.2 NA
1 4 NA 0.8
1 5 0.8 NA
2 1 6.4 NA
2 2 NA 2.0
2 3 2.0 NA
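For completeness, a data.table sketch of the same idea: build the full per-ID day grid, join it back, then shift with type = "lead":

library(data.table)
setDT(dat1)
grid <- dat1[, .(day = seq(min(day), max(day))), by = ID] # all days per ID
out <- dat1[grid, on = .(ID, day)]                        # days with no observation get NA in v
out[, v.L := shift(v, type = "lead"), by = ID]
out <- out[!(is.na(v) & is.na(v.L))]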

Reshaping a data frame and setting flag variables

I want to reshape my data frame from the df1 to df2 as appears below:
df1 <- data.frame(
  ID = c(1, 1, 2, 2),
  TIME = c(0, 1, 0, 5),
  RATEALL = c(0, 2, 0, 3),
  CL = c(2.4, 0.6, 3.0, 3.0),
  V1 = c(10, 10, 15, 16),
  Q = c(6, 6, 7, 8),
  V2 = c(20, 25, 30, 15)
)
into a long format like this:
df2 <-
ID var TIME value
1 1 0 0
1 1 1 2
1 2 0 2.4
1 2 1 10
1 3 0 6
1 3 1 6
1 4 0 20
1 4 1 20
2 1 0 3.0
2 1 1 3.0
AND so on ...
Basically I want to assign flag variables (1 for RATEALL, 2 for CL, 3 for V1, 4 for Q, and 5 for V2) and then melt the values for each subject ID. Is there an easy way to do this in R?
You can try:
df2 <- reshape2::melt(df1, id.vars = c("ID", "TIME"))
flags <- c(RATEALL = 1, CL = 2, V1 = 3, Q = 4, V2 = 5)
df2$variable <- flags[as.character(df2$variable)] # index by name; melt's variable column is a factor
You could use tidyr/dplyr
library(tidyr)
library(dplyr)
res <- gather(df1, var, value, RATEALL:V2) %>%
  mutate(var = as.numeric(factor(var, levels = c("RATEALL", "CL", "V1", "Q", "V2")))) # explicit levels: factor() alone sorts alphabetically, which would not match the requested 1-5 flags
head(res)
# ID TIME var value
#1 1 0 1 0.0
#2 1 1 1 2.0
#3 2 0 1 0.0
#4 2 5 1 3.0
#5 1 0 2 2.4
#6 1 1 2 0.6
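gather() is superseded in current tidyr; a pivot_longer() sketch of the same reshape (again with explicit levels so the flags follow the requested order):

library(tidyr)
library(dplyr)

res <- df1 %>%
  pivot_longer(RATEALL:V2, names_to = "var", values_to = "value") %>%
  mutate(var = as.integer(factor(var, levels = c("RATEALL", "CL", "V1", "Q", "V2"))))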
