Counting across subsets in dataframes the R-way

In R I have a dataframe df of this form:
a b id
1 2 1234758
1 1 1234758
3 5 1234759
5 5 1234759
5 5 1234759
2 2 1234760
I want to count, for each id value, how many rows have df$a different from df$b.
Since I am also interested in the counts per id, I did
summary <- as.data.frame(table(df$id))
names(summary) <- c("id", "n_id")  # rename the default Var1/Freq columns
       id n_id
1 1234758    2
2 1234759    3
3 1234760    1
then I wrote this to compute the number of times a and b differ for each id (both df and summary are sorted by id):
summary$jumps <- 0
k <- 1
for (i in 1:nrow(summary)) {
  n <- summary$n_id[i]
  # note the parentheses: ':' binds tighter than '+', so k:(k+n-1) is needed
  for (j in k:(k + n - 1))
    if (df$a[j] != df$b[j]) summary$jumps[i] <- summary$jumps[i] + 1
  k <- k + n
}
which, for each id, walks through the corresponding block of rows in df, compares a and b, and updates the matching entry in summary. This works as intended, but it looks slow to me.
I am fairly new to R; what would be a more efficient way of doing this? How would you vectorize this code? The data frame has about 30 million such lines.

If I understand your problem correctly, you could use data.table to solve it in a fairly straightforward way:
library(data.table)
dt = setDT(df) # Create a data table
dt[,list(jumps=sum(a!=b)),by=id] # Sum rows where a!=b grouped by id
# id jumps
# 1: 1234758 1
# 2: 1234759 1
# 3: 1234760 0

We can use aggregate from base R:
aggregate(cbind(jumps = a != b) ~ id, df, FUN = sum)
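For readers who prefer dplyr, the same grouped count can be expressed there as well (my own sketch, not from the original answers; assumes the dplyr package is installed):
library(dplyr)
# Count the rows where a != b within each id -- same logic as above
df %>%
  group_by(id) %>%
  summarise(jumps = sum(a != b))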

Related

How to aggregate duplicated rows in data.table by applying a specific function

I have a data.table (r1) that contains duplicated values for site and time. Here I create this exemplary data table, already showing why these duplicates appear: I initially have two data tables that I merge.
Perhaps my problem can be solved already at the stage of merging dt1 and dt2.
dt1 <- data.table(site=c(1,1,2,2), site_type="type1", time=c(1,2,1,2), temp=c(10,12,13,NA), prec=c(10,101,1,1))
dt2 <- data.table(site=c(3,3,2,2), site_type="type2", time=c(1,2,1,2), temp=c(10,12,100,140), prec=c(10,101,1000,NA), snow=c(1,1,1,1))
r1 <- rbindlist(list(dt1, dt2), fill=TRUE)
Now, I would like to aggregate all duplicated rows (4 and 8, as well as 3 and 7) so that for the columns 'temp', 'prec', and 'snow' the values are taken from site_type='type1' unless that value is NA, in which case the type2 value is used.
The dirty solution that I found is to create a subset of r1 containing duplicated rows, and a subset with no duplicates.
duplicates <- r1[duplicated(r1,by=c("site","time")) | duplicated(r1,by=c("site","time"), fromLast=TRUE)]
no_duplicates <- r1[!(duplicated(r1,by=c("site","time")) | duplicated(r1,by=c("site","time"), fromLast=TRUE))]
Then to aggregate the duplicates with lapply.
aggregated_duplicates <- duplicates[, lapply(.SD, function(x) ifelse(!is.na(x[site_type=="type1"]),x[site_type=="type1"],x[site_type=="type2"])), by=c("site","time")]
And then to merge the aggregated_duplicates and no_duplicates data tables.
r1_without_duplicates <- rbindlist(list(no_duplicates, aggregated_duplicates), use.names = T)
The approach seems to work, but I have the feeling that in data table this could be done in one line either while merging dt1 and dt2, or by a smart use of 'duplicated' or 'unique'.
Any ideas?
Thanks!
Since there are two distinct actions in your requirements, namely (i) using values from type2 where type1 has an NA and (ii) row-binding two different data.tables, I do not think there is a one-liner that performs both simultaneously.
Another possible approach is to update the NA values in dt1 first, before performing the rbindlist and removing dupes:
cols <- c("temp", "prec")
dt <- copy(dt1)
for (j in cols) {
  # update join: where column j is NA in dt, pull the matching value from dt2
  dt[is.na(get(j)), (j) := dt2[.SD, on=c("site","time"), j, with=FALSE]]
}
r2 <- rbindlist(list(dt, dt2), use.names=TRUE, fill=TRUE)
r2[!duplicated(r2, by=c("site","time"))]
output:
site site_type time temp prec snow
1: 1 type1 1 10 10 NA
2: 1 type1 2 12 101 NA
3: 2 type1 1 13 1 NA
4: 2 type1 2 140 1 NA
5: 3 type2 1 10 10 1
6: 3 type2 2 12 101 1
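A more compact variant of the same idea (a sketch of mine, assuming the type1-preferred rule above and that "type1" sorts before "type2"): order r1 so the type1 rows come first within each (site, time) pair, then keep the first non-NA value of every column per group.
# Order type1 ahead of type2 within each (site, time) pair, then take
# the first non-NA entry of every column per group (sketch, only
# checked against the example data)
setorder(r1, site, time, site_type)
r1[, lapply(.SD, function(x) x[which(!is.na(x))[1]]), by = .(site, time)]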

R loops: conditioning a loop in R

Thanks for the feedback; below is a reproducible example with my desired output:
# Example data frame that will hold the desired output
N=24
school.assignment = matrix(NA, ncol = 3, nrow = N)
school.assignment = as.data.frame(school.assignment)
colnames(school.assignment) <- c("ID","Group","Assignment")
# Number of groups and assignments per group
groups = 6
Assignment = 4
school.assignment$Group<-rep(1:groups,Assignment)
school.assignment$Group<- sort(school.assignment$Group)
school.assignment$Assignment<-rep(1:Assignment)
# IDs with number of repeats (i.e repeated students)
Data = matrix(0, ncol = 2, nrow = N/2) # ~half with repeated samples
Data = as.data.frame(Data)
colnames(Data) <- c("ID","Repeats")
Data$ID <-1:(N/2)
length(unique(Data$ID)) # unique IDS
ID=rep(seq(1:8),3)
# Generate random repeats for each ID
Data$Repeats<-{set.seed(55)
sapply(1:(N/2),
function(x) sample(1:5,1))
}
Data=Data[-1,] #take out first row to match N=24
sum(Data$Repeats) # 24 total IDs for all assignments
# List of IDs at random to use
IDs <- vector("list",dim(Data)[1]) #
for (i in 1:dim(Data)[1])
{
IDs[[i]]<-rep(Data$ID[i], times=Data$Repeats[i])
}
head(IDs)
# Object with number of repeated IDs
sample.per.ID <- vector("list",length(IDs)[1])
for (i in 1:length(IDs))
{
sample.per.ID[[i]]<-sum(length((IDs)[[i]]))
}
sum=sum(as.data.frame(sample.per.ID)); sum # 24 total IDs (including repeats)
## Unlist vector with random sequence of samples
SRS.ID.order = unlist(IDs) #order of IDs with repeats
for (i in 1:length(SRS.ID.order ))
{
school.assignment$ID[i] <- SRS.ID.order[i]
}
My last loop is where I attempt to assign IDs to school.assignment$ID. However, as you can see, some IDs cross different groups, and I want to condition the assignment from SRS.ID.order so that each ID stays within a single group (i.e. constant school.assignment$Group). Below you can see that this is not the case; for example, ID 4 is in both group 1 and group 2:
> head(school.assignment)
ID Group Assignment
1 2 1 1
2 2 1 2
3 3 1 3
4 4 1 4
5 4 2 1
6 4 2 2
I would like the loop to assign no ID (i.e. leave NA) in a group whenever the next block of IDs in SRS.ID.order is longer than the space still available in that group:
ID Group Assignment
1 2 1 1
2 2 1 2
3 3 1 3
4 NA 1 4
5 4 2 1
6 4 2 2
I was thinking that I need some type of indicator for the j-th group, like the code below:
########################################
for (i in 1:length(school.assignment$ID))
{
for (j in 1:length(unique(school.assignment$Group)))
{
school.assignment$ID[i]<-ifelse(sum(is.na(school.assignment$ID[i,j]))>=sample.per.ID[i],SRS.ID.order[i],NA)
}
}
Error in school.assignment$ID[i, j] : incorrect number of dimensions
Any help is very much appreciated!
Thanks
OLD POST
I'm currently trying to do a loop in R with a condition. My data structure is below:
> head(school.assignment)
ID Group Assignment
1 NA 1 1
2 NA 1 2
3 NA 1 3
4 NA 1 4
5 NA 2 1
6 NA 2 2
I would like to fill the ID variable with a vector of IDs of the same length as school.assignment, shown below:
head(IDs)
[1] 519 519 519 343 251 251...
Not all IDs repeat the same number of times: some appear once, twice, or even three times, as shown above. I have an object with the number of repeats per ID, for example:
> head(repeats)
[1] 3 1 2...
Indicating that ID=519 repeats 3 times, ID=343 only once, and ID=251 twice, etc.
There is one condition that I would like to meet:
1) I would like every single ID to stay in the same group whenever possible, i.e. if there is only one empty slot (NA) left for an ID in the "school.assignment" object for group 1, assign the ID to the next group with enough space (i.e. where the number of NAs in school.assignment$ID is >= the number of repeats for that ID).
My idea was to do a loop but the code below is not working:
########################################
for (i in 1:length(school.assignment$ID))
{
for (j in 1:length(unique(school.assignment$Group)))
{
school.assignment$ID[i]<-ifelse(sum(is.na(school.assignment$ID[i,j]))>=repeats[i],ID[i],NA)
}
}
Is there a way to do this loop while respecting my condition to assign IDs to only one group?
Thank you!
Consider using merge() to assign random group IDs to the data frame; no need for nested for loops. The code below creates a unique group data frame, assigns random numbers there, and then merges with school.assignment:
# CREATE UNIQUE GROUP DATA FRAME
Group <- unique(school.assignment$Group)
grp.ids <- as.data.frame(Group)
# CREATE RANDOM ID FIELD (THREE DIGITS BETWEEN 100 AND 999)
grp.ids$RandomID <- sample(100:999, size = nrow(grp.ids), replace = TRUE)
# MERGE DATA FRAMES
school.assignment <- merge(school.assignment, grp.ids, by="Group", all=TRUE)
# ASSIGN ID COLUMN
school.assignment$ID <- school.assignment$RandomID
# RESTRUCTURE FINAL DATA FRAME
school.assignment <- school.assignment[c("ID", "Group", "Assignment")]
OUTPUT
ID Group Assignment
977 1 1
977 1 2
977 1 3
977 1 4
368 2 1
368 2 2
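Note that merge() fills every slot of a group with one random ID; it does not enforce the capacity condition from the question. A minimal greedy sketch of that condition (my own illustration, assuming IDs is the list of repeated-ID blocks built in the question and school.assignment$ID starts out as all NA):
# Greedy placement sketch (illustrative, not from the original answer):
# put each block of repeated IDs into the first group that still has
# enough empty (NA) slots; blocks that fit nowhere are skipped, so
# those slots stay NA.
place_blocks <- function(school.assignment, IDs) {
  for (block in IDs) {
    for (g in unique(school.assignment$Group)) {
      free <- which(school.assignment$Group == g & is.na(school.assignment$ID))
      if (length(free) >= length(block)) {
        school.assignment$ID[free[seq_along(block)]] <- block
        break
      }
    }
  }
  school.assignment
}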

Filtering a R DataFrame with repeated values in columns

I have an R DataFrame and I want to make another one from it, but only with the values that appear more than X times in a given column.
>DataFrame
Value Column
1 a
4 a
2 b
6 c
3 c
4 c
9 a
1 d
For example, I want a new DataFrame only with the values in Column that appear more than 2 times, to get something like this:
>NewDataFrame
Value Column
1 a
4 a
6 c
3 c
4 c
9 a
Thank you very much for your time.
We can use table to get the count of values in 'Column' and subset the dataset ('DataFrame') based on the names in 'tbl' that have a count greater than 'n':
n <- 2
tbl <- table(DataFrame$Column) > n
NewDataFrame <- subset(DataFrame, Column %in% names(tbl)[tbl])
# Value Column
#1 1 a
#2 4 a
#4 6 c
#5 3 c
#6 4 c
#7 9 a
Or using ave from base R
NewDataFrame <- DataFrame[with(DataFrame, ave(seq_along(Column), Column, FUN=length) > n), ]
Or using data.table
library(data.table)
NewDataFrame <- setDT(DataFrame)[, .SD[.N>n] , by = Column]
Or
NewDataFrame <- setDT(DataFrame)[, if(.N > n) .SD, by = Column]
Or using dplyr
library(dplyr)
NewDataFrame <- DataFrame %>%
  group_by(Column) %>%
  filter(n() > 2)

identify groups of linked episodes which chain together

Take this simple data frame of linked ids:
test <- data.frame(id1=c(10,10,1,1,24,8),id2=c(1,36,24,45,300,11))
> test
id1 id2
1 10 1
2 10 36
3 1 24
4 1 45
5 24 300
6 8 11
I now want to group together all the ids which link.
By 'link', I mean follow through the chain of links so that all ids in one group are labelled together; a kind of branching structure, i.e.:
Group 1
10 --> 1, 1 --> (24,45)
24 --> 300
300 --> NULL
45 --> NULL
10 --> 36, 36 --> NULL,
Final group members: 10,1,24,36,45,300
Group 2
8 --> 11
11 --> NULL
Final group members: 8,11
Now I roughly know the logic I would want, but don't know how I would implement it elegantly. I am thinking of a recursive use of match or %in% to go down each branch, but am truly stumped this time.
The final result I would be chasing is:
result <- data.frame(group=c(1,1,1,1,1,1,2,2),id=c(10,1,24,36,45,300,8,11))
> result
group id
1 1 10
2 1 1
3 1 24
4 1 36
5 1 45
6 1 300
7 2 8
8 2 11
The Bioconductor package RBGL (an R interface to the BOOST graph library) contains
a function, connectedComp(), which identifies the connected components in a graph --
just what you are wanting.
(To use the function, you will first need to install the graph and RBGL packages, both available from Bioconductor.)
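On a current R installation, both packages are installed through BiocManager (a sketch; older Bioconductor releases used biocLite() instead):
# Install the graph and RBGL packages from Bioconductor
if (!requireNamespace("BiocManager", quietly = TRUE))
  install.packages("BiocManager")
BiocManager::install(c("graph", "RBGL"))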
library(RBGL)
test <- data.frame(id1=c(10,10,1,1,24,8),id2=c(1,36,24,45,300,11))
## Convert your 'from-to' data to a 'node and edge-list' representation
## used by the 'graph' & 'RBGL' packages
g <- ftM2graphNEL(as.matrix(test))
## Extract the connected components
cc <- connectedComp(g)
## Massage results into the format you're after
ld <- lapply(seq_along(cc),
             function(i) data.frame(group = names(cc)[i], id = cc[[i]]))
do.call(rbind, ld)
# group id
# 1 1 10
# 2 1 1
# 3 1 24
# 4 1 36
# 5 1 45
# 6 1 300
# 7 2 8
# 8 2 11
Here's an alternative answer that I discovered myself after the nudge in the right direction from Josh. This answer uses the igraph package.
For those that are searching and come across this answer, my test dataset is referred to as an "edge list" or "adjacency list" in graph theory (http://en.wikipedia.org/wiki/Graph_theory)
library(igraph)
test <- data.frame(id1=c(10,10,1,1,24,8 ),id2=c(1,36,24,45,300,11))
gr.test <- graph_from_data_frame(test)
links <- data.frame(id=unique(unlist(test)),group=components(gr.test)$membership)
links[order(links$group),]
# id group
#1 10 1
#2 1 1
#3 24 1
#5 36 1
#6 45 1
#7 300 1
#4 8 2
#8 11 2
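A slightly more defensive variant of the last two lines (my sketch): take the ids straight from the names of the membership vector, so the id and group columns cannot drift out of sync.
memb <- components(gr.test)$membership
# names(memb) are the vertex names igraph recorded from the data frame
links2 <- data.frame(id = as.numeric(names(memb)), group = memb)
links2[order(links2$group), ]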
Without using packages:
# 2 sets of test data
mytest <- data.frame(id1=c(10,10,3,1,1,24,8,11,32,11,45),id2=c(1,36,50,24,45,300,11,8,32,12,49))
test <- data.frame(id1=c(10,10,1,1,24,8),id2=c(1,36,24,45,300,11))
grouppairs <- function(df){
# from wide to long format; assumes df is 2 columns of related id's
test <- data.frame(group = 1:nrow(df),val = unlist(df))
# keep moving to next pair until all same values have same group
i <- 0
while(any(duplicated(unique(test)$val))){
i <- i+1
# get group of matching values
matches <- test[test$val == test$val[i],'group']
# change all groups with matching values to same group
test[test$group %in% matches,'group'] <- test$group[i]
}
# renumber starting from 1 and show only unique values in group order
test$group <- match(test$group, sort(unique(test$group)))
unique(test)[order(unique(test)$group), ]
}
# test
grouppairs(test)
grouppairs(mytest)
You said recursive... and I thought I'd be super terse while I'm at it.
Test data
mytest <- data.frame(id1=c(10,10,3,1,1,24,8,11,32,11,45),id2=c(1,36,50,24,45,300,11,8,32,12,49))
test <- data.frame(id1=c(10,10,1,1,24,8),id2=c(1,36,24,45,300,11))
Recursive function to get the groupings
aveminrec <- function(v1, v2){
  # within each group defined by v2, replace every value with the group minimum
  v2 <- ave(v1, v2, FUN = min)
  if(identical(v1, v2)){
    # converged: nothing changed, so renumber the groups 1, 2, ...
    as.numeric(as.factor(v2))
  }else{
    # not converged yet: swap roles and recurse
    aveminrec(v2, v1)
  }
}
Prep data and simplify after
groupvalues <- function(valuepairs){
  # stack both id columns into a single vector
  val <- unlist(valuepairs)
  # seed groups with the row index (recycled across the two columns),
  # then let aveminrec() merge groups that share values
  grp <- aveminrec(val, 1:nrow(valuepairs))
  unique(data.frame(grp, val)[order(grp, val), ])
}
Get results
groupvalues(test)
groupvalues(mytest)
aveminrec() is probably along the lines of what you were thinking, though I bet there's a way to be more direct about going down each branch instead of repeatedly calling ave(), which is essentially split() plus lapply(). Maybe recursively split() and lapply()? As it is, it's like repeated partial branching, or alternately, simplifying the two vectors step by step without losing group information.
Maybe parts of this would be useful on a real problem, but groupvalues() is too dense to read without at least some comments. I also haven't checked how its performance compares to a for loop with ave() and flipping the groups that way.

How do I take subsets of a data frame according to a grouping in R?

I have an aggregation problem which I cannot figure out how to perform efficiently in R.
Say I have the following data:
group1 <- c("a","b","a","a","b","c","c","c","c",
"c","a","a","a","b","b","b","b")
group2 <- c(1,2,3,4,1,3,5,6,5,4,1,2,3,4,3,2,1)
value <- c("apple","pear","orange","apple",
"banana","durian","lemon","lime",
"raspberry","durian","peach","nectarine",
"banana","lemon","guava","blackberry","grape")
df <- data.frame(group1,group2,value)
I am interested in sampling from the data frame df such that I randomly pick only a single row from each combination of factors group1 and group2.
As you can see, the result of table(df$group1, df$group2)
1 2 3 4 5 6
a 2 1 2 1 0 0
b 2 2 1 1 0 0
c 0 0 1 1 2 1
shows that some combinations are seen more than once, while others are never seen. For those that are seen more than once (e.g., group1="a" and group2=3), I want to randomly pick only one of the corresponding rows and return a new data frame that has only that subset of rows. That way, each possible combination of the grouping factors is represented by only a single row in the data frame.
One important aspect here is that my actual data sets can contain anywhere from 500,000 rows to >2,000,000 rows, so it is important to be mindful of performance.
I am relatively new at R, so I have been having trouble figuring out how to generate this structure correctly. One attempt looked like this (using the plyr package):
library(plyr)
choice <- function(x, label) {
  cbind(x[sample(1:nrow(x), 1), ], data.frame(state = label))
}
df <- ddply(df[, c("group1", "group2", "value")],
            .(group1, group2),
            choice,
            label = "test")
Note that in this case I am also adding an extra column (called "state") to the data frame, whose value is passed via the extra label argument to the ddply function. However, I killed this after about 20 min.
In other cases, I have tried using aggregate or by or tapply, but I never know exactly what the specified function is getting, what it should return, or what to do with the result (especially for by).
I am trying to switch from Python to R for exploratory data analysis, but this type of aggregation is crucial for me. In Python I can perform these operations very rapidly, but it is inconvenient, as I have to write a separate script/data structure for each different type of aggregation I want to perform.
I want to love R, so please help! Thanks!
Uri
Here is the plyr solution:
library(plyr)
set.seed(1234)
ddply(df, .(group1, group2), summarize,
      value = value[sample(length(value), 1)])
This gives us
group1 group2 value
1 a 1 apple
2 a 2 nectarine
3 a 3 banana
4 a 4 apple
5 b 1 grape
6 b 2 blackberry
7 b 3 guava
8 b 4 lemon
9 c 3 durian
10 c 4 durian
11 c 5 raspberry
12 c 6 lime
EDIT. With a data frame that big, you are better off using data.table
library(data.table)
dt = data.table(df)
dt[,list(value = value[sample(length(value), 1)]),'group1, group2']
EDIT 2: Performance comparison: data.table is ~15x faster
group1 = sample(letters, 1000000, replace = T)
group2 = sample(LETTERS, 1000000, replace = T)
value = runif(1000000, 0, 1)
df = data.frame(group1, group2, value)
dt = data.table(df)
f1_dtab = function() {
dt[,list(value = value[sample(length(value), 1)]),'group1, group2']
}
f2_plyr = function() {
  ddply(df, .(group1, group2), summarize,
        value = value[sample(length(value), 1)])
}
f3_by = function() {
  do.call(rbind, by(df, list(grp1 = df$group1, grp2 = df$group2),
                    FUN = function(x) x[sample(nrow(x), 1), ]))
}
library(rbenchmark)
benchmark(f1_dtab(), f2_plyr(), f3_by(), replications = 10)
test replications elapsed relative
f1_dtab() 10 4.764 1.00000
f2_plyr() 10 68.261 14.32851
f3_by() 10 67.369 14.14127
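On modern dplyr (>= 1.0), slice_sample() expresses the same per-group draw; a sketch of mine, not part of the benchmark above:
library(dplyr)
# Draw one random row per (group1, group2) combination
df %>%
  group_by(group1, group2) %>%
  slice_sample(n = 1) %>%
  ungroup()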
One more way:
with(df, tapply(value, list( group1, group2), length))
1 2 3 4 5 6
a 2 1 2 1 NA NA
b 2 2 1 1 NA NA
c NA NA 1 1 2 1
# Now use tapply to sample within groups
# `resample` fn is from the sample help page:
# Avoids an error with sample when only one value in a group.
resample <- function(x, ...) x[sample.int(length(x), ...)]
#Create a row index
df$idx <- 1:NROW(df)
rowidxs <- with(df, unique( c( # the `c` function will make a matrix into a vector
tapply(idx, list( group1, group2),
function (x) resample(x, 1) ))))
rowidxs
# [1] 1 5 NA 12 16 NA 3 15 6 4 14 10 NA NA 7 NA NA 8
df[rowidxs[!is.na(rowidxs)] , ]
