R: Split a data.frame using a column that represents an on/off switch

I have data that looks like the following:
a <- data.frame(x = seq(50),
                y = rnorm(50),
                z = c(rep(0, 5), rep(1, 8), rep(0, 3),
                      rep(1, 2), rep(0, 12), rep(1, 12), rep(0, 8)))
I would like to split the data.frame a on column z, but with each run as a separate data.frame in a list: in my example the first 5 rows would be the first list element, the next 8 rows the second, the next 3 rows the one after that, and so on.
Simple factors combine all the 1s together and all the 0s together...
I'm sure there is a simple way to do this, but it has eluded me so far.
Thanks

Try the rleid function in data.table (version > 1.9.5):
library(data.table)
split(a, rleid(a$z))
# $`1`
#   x           y z
# 1 1 -0.03737561 0
# 2 2 -0.48663043 0
# 3 3 -0.98518106 0
# 4 4  0.09014355 0
# 5 5 -0.07703517 0
#
# $`2`
#     x          y z
# 6   6  0.3884339 1
# 7   7  1.5962833 1
# 8   8 -1.3750668 1
# 9   9  0.7987056 1
# 10 10  0.3483114 1
# 11 11 -0.1777759 1
# 12 12  1.1239553 1
# 13 13  0.4841117 1
# ...
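For intuition, rleid() on its own just numbers the runs. A minimal sketch on a toy vector:
# rleid() assigns a run index that increments whenever the value changes
rleid(c(0, 0, 1, 1, 0))
# [1] 1 1 2 2 3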

Or, also with cumsum:
split(a, c(0, cumsum(diff(a$z) != 0)))
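To see why this works, here is the grouping vector built from a toy switch column (a minimal sketch):
z <- c(0, 0, 1, 1, 0)
diff(z) != 0                 # TRUE wherever the switch flips
# [1] FALSE  TRUE FALSE  TRUE
c(0, cumsum(diff(z) != 0))   # one id per row; the id increments at each flip
# [1] 0 0 1 1 2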

Here are some base R options.
Using rle, a variant of the rleid approach suggested in the comments by @Spacedman:
split(a, inverse.rle(within.list(rle(a$z), values <- seq_along(values))))
Using cumsum on a logical index that flags where adjacent elements differ:
split(a, cumsum(c(TRUE, a$z[-1] != a$z[-nrow(a)])))

Related

Using two grouping designations to create one 'combined' grouping variable

Given a data.frame:
df <- data.frame(grp1 = c(1,1,1,2,2,2,3,3,3,4,4,4),
                 grp2 = c(1,2,3,3,4,5,6,7,8,6,9,10))
df
#    grp1 grp2
# 1     1    1
# 2     1    2
# 3     1    3
# 4     2    3
# 5     2    4
# 6     2    5
# 7     3    6
# 8     3    7
# 9     3    8
# 10    4    6
# 11    4    9
# 12    4   10
Both columns are grouping variables, such that all 1s in column grp1 are known to be grouped together, all 2s together, and so on. The same goes for grp2: rows sharing a grp2 value belong to the same group.
Thus, if we look at the 3rd and 4th row, based on column 1 we know that the first 3 rows can be grouped together and the second 3 rows can be grouped together. Then since rows 3 and 4 share the same grp2 value, we know that all 6 rows, in fact, can be grouped together.
Based on the same logic, we can see that the last six rows can also be grouped together (since rows 7 and 10 share the same grp2).
Aside from writing a fairly involved set of for() loops, is there a more straightforward approach? I haven't been able to think of one yet.
The final output that I'm hoping to obtain would look something like:
# > df
#    grp1 grp2 combinedGrp
# 1     1    1           1
# 2     1    2           1
# 3     1    3           1
# 4     2    3           1
# 5     2    4           1
# 6     2    5           1
# 7     3    6           2
# 8     3    7           2
# 9     3    8           2
# 10    4    6           2
# 11    4    9           2
# 12    4   10           2
Thank you for any direction on this topic!
I would define a graph and label nodes according to connected components:
gmap = unique(stack(df))
gmap$node = seq_len(nrow(gmap))
oldcols = unique(gmap$ind)
newcols = paste0("node_", oldcols)
df[ newcols ] = lapply(oldcols, function(i)
  with(gmap[gmap$ind == i, ], node[ match(df[[i]], values) ]))
library(igraph)
g = graph_from_edgelist(cbind(df$node_grp1, df$node_grp2), directed = FALSE)
gmap$group = components(g)$membership
df$group = gmap$group[ match(df$node_grp1, gmap$node) ]
   grp1 grp2 node_grp1 node_grp2 group
1     1    1         1         5     1
2     1    2         1         6     1
3     1    3         1         7     1
4     2    3         2         7     1
5     2    4         2         8     1
6     2    5         2         9     1
7     3    6         3        10     2
8     3    7         3        11     2
9     3    8         3        12     2
10    4    6         4        10     2
11    4    9         4        13     2
12    4   10         4        14     2
Each unique element of grp1 or grp2 is a node and each row of df is an edge.
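For comparison, here is a more compact sketch of the same idea built directly from the two columns with graph_from_data_frame; the "a"/"b" prefixes are made up here purely so a value appearing in both grp1 and grp2 maps to two distinct node names:
library(igraph)
# prefix each column so grp1 and grp2 values cannot collide as node labels
edges <- data.frame(from = paste0("a", df$grp1),
                    to   = paste0("b", df$grp2))
g2 <- graph_from_data_frame(edges, directed = FALSE)
memb <- components(g2)$membership          # named by node label
df$combinedGrp <- unname(memb[paste0("a", df$grp1)])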
One way to do this is via a matrix that defines links between rows based on group membership.
This approach is related to @Frank's graph answer but uses an adjacency matrix rather than an edge list to define the graph. An advantage of this approach is that the same code immediately handles more than two grouping columns (so long as you write the function that determines links flexibly). A disadvantage is that you need to make all pairwise comparisons between rows to construct the matrix, so for very long vectors it could be slow. As is, @Frank's answer works better for very long data, or if you only ever have two columns.
The steps are:
1. compare rows based on groups and define such rows as linked (i.e., create a graph)
2. determine the connected components of the graph defined by the links in 1.
You could do step 2 a few ways. Below I show a brute-force way where you (2a) collapse links until reaching a stable link structure using matrix multiplication, and (2b) convert the link structure to a factor using hclust and cutree. You could also use igraph::clusters on a graph created from the matrix.
1. construct an adjacency matrix (a matrix of pairwise links) between rows (i.e., if two rows are in the same group, the matrix entry is 1; otherwise it is 0). Start with a helper function that determines whether two rows are linked:
linked_rows <- function(data){
  ## helper function
  ## returns a _function_ to compare two rows of data
  ## based on group membership.
  ## Use Vectorize so it works even on vectors of indices
  Vectorize(function(i, j) {
    ## numeric: 1 = i and j have overlapping group membership
    common <- vapply(names(data), function(name)
      data[i, name] == data[j, name],
      FUN.VALUE = FALSE)
    as.numeric(any(common))
  })
}
which I use in outer to construct a matrix,
rows <- 1:nrow(df)
A <- outer(rows, rows, linked_rows(df))
2a. collapse 2-degree links to 1-degree links. That is, if rows are linked by an intermediate node but not directly linked, lump them in the same group by defining a link between them.
One iteration involves: i) matrix multiply to get the square of A, and
ii) set any non-zero entry in the squared matrix to 1 (as if it were a first degree, pairwise link)
## define as a function to use below
lump_links <- function(A) {
  A <- A %*% A
  A[A > 0] <- 1
  A
}
Repeat this until the links are stable:
oldA <- 0
while (any(oldA != A)) {
  oldA <- A
  A <- lump_links(A)
}
2b. Use the stable link structure in A to define groups (connected components of the graph). You could do this a variety of ways.
One way, is to first define a distance object, then use hclust and cutree. If you think about it, we want to define linked (A[i,j] == 1) as distance 0. So the steps are a) define linked as distance 0 in a dist object, b) construct a tree from the dist object, c) cut the tree at zero height (i.e., zero distance):
df$combinedGrp <- cutree(hclust(as.dist(1 - A)), h = 0)
df
In practice you can encode steps 1 and 2 in a single function that uses the helpers lump_links and linked_rows:
lump <- function(df) {
  rows <- 1:nrow(df)
  A <- outer(rows, rows, linked_rows(df))
  oldA <- 0
  while (any(oldA != A)) {
    oldA <- A
    A <- lump_links(A)
  }
  df$combinedGrp <- cutree(hclust(as.dist(1 - A)), h = 0)
  df
}
This works for the original df and also for the structure in @rawr's answer:
df <- data.frame(grp1 = c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,6,7,8,9),
                 grp2 = c(1,2,3,3,4,5,6,7,8,6,9,10,11,3,12,3,6,12))
lump(df)
   grp1 grp2 combinedGrp
1     1    1           1
2     1    2           1
3     1    3           1
4     2    3           1
5     2    4           1
6     2    5           1
7     3    6           2
8     3    7           2
9     3    8           2
10    4    6           2
11    4    9           2
12    4   10           2
13    5   11           1
14    5    3           1
15    6   12           3
16    7    3           1
17    8    6           2
18    9   12           3
PS: Here's a version using igraph, which makes the connection with @Frank's answer clearer:
lump2 <- function(df) {
  rows <- 1:nrow(df)
  A <- outer(rows, rows, linked_rows(df))
  cluster_A <- igraph::clusters(igraph::graph.adjacency(A))
  df$combinedGrp <- cluster_A$membership
  df
}
Hope this solution helps you a bit:
Assumption: df is ordered on the basis of grp1.
## split dataset using values of grp1
split_df <- split.default(df$grp2, df$grp1)
parent <- vector('integer', length(split_df))
## find out which combinations have values of grp2 in common
for (i in seq(1, length(split_df) - 1)) {
  for (j in seq(i + 1, length(split_df))) {
    inter <- intersect(split_df[[i]], split_df[[j]])
    if (length(inter) > 0) {
      parent[j] <- i
    }
  }
}
ans <- vector('list', length(split_df))
index <- which(parent == 0)
## index contains the indices of groups that share no grp2 values with earlier groups
for (i in seq_along(index)) {
  ans[[index[i]]] <- rep(i, length(split_df[[index[i]]]))
}
rest_index <- seq(1, length(split_df))[-index]
for (i in rest_index) {
  val <- ans[[parent[i]]][1]
  ans[[i]] <- rep(val, length(split_df[[i]]))
}
df$combinedGrp <- unlist(ans)
df
   grp1 grp2 combinedGrp
1     1    1           1
2     1    2           1
3     1    3           1
4     2    3           1
5     2    4           1
6     2    5           1
7     3    6           2
8     3    7           2
9     3    8           2
10    4    6           2
11    4    9           2
12    4   10           2
Based on https://stackoverflow.com/a/35773701/2152245, I used a different implementation of igraph because I already had an adjacency matrix of sf polygons from st_intersects():
library(igraph)
library(sf)
library(dplyr)  # needed for group_by()/summarize() below

# Use example data
nc <- st_read(system.file("shape/nc.shp", package = "sf"))
nc <- nc[-sample(1:nrow(nc), nrow(nc) * .75), ]  # drop some polygons

# Find intersections
b <- st_intersects(nc, sparse = FALSE)
g <- graph.adjacency(b)
clu <- components(g)
gr <- groups(clu)

# Quick loop to assign the groups
for (i in 1:nrow(nc)) {
  for (j in 1:length(gr)) {
    if (i %in% gr[[j]]) {
      nc[i, 'group'] <- j
    }
  }
}

# Make a new sfc object
nc_un <- group_by(nc, group) %>%
  summarize(BIR74 = mean(BIR74), do_union = TRUE)
plot(nc_un['BIR74'])

Mutate with dplyr using multiple conditions

I have a data frame (df) below and I want to add an additional column, result, using dplyr that will take on the value 1 if z == "gone" and where x is the maximum value for group y.
   y  x    z
1  a  3 gone
2  a  5 gone
3  a  8 gone
4  a  9 gone
5  a 10 gone
6  b  1
7  b  2
8  b  4
9  b  6
10 b  7
If I were to simply select the maximum for each group it would be:
df %>%
group_by(y) %>%
slice(which.max(x))
which will return:
  y  x    z
1 a 10 gone
2 b  7
This is not what I want. I need to use the max value of x for each group in y while also checking that z == "gone", setting result to 1 if both conditions hold and 0 otherwise. This would look like:
   y  x    z result
1  a  3 gone      0
2  a  5 gone      0
3  a  8 gone      0
4  a  9 gone      0
5  a 10 gone      1
6  b  1           0
7  b  2           0
8  b  4           0
9  b  6           0
10 b  7           0
I'm assuming I would use a conditional statement within mutate() but I cannot seem to find an example. Please advise.
With dplyr you can use:
df %>% group_by(y) %>% mutate(result = +(x == max(x) & z == 'gone'))
The +(..) notation is shorthand for as.integer, coercing the logical output to 1s and 0s. Some don't like it, so it's a matter of shorter code versus readability. Any efficiency gain depends on the circumstance.
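A quick illustration of that coercion:
+(c(TRUE, FALSE, TRUE))
# [1] 1 0 1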
Also to appreciate what data.table and dplyr have done for data manipulation with R, let's do the same thing in the old-fashioned "split-apply-combine" way:
# split data.frame by group
split.df <- split(df, df$y)
# apply required function to each group
lst <- lapply(split.df, function(dfx) {
  dfx$result <- +(dfx$x == max(dfx$x) & dfx$z == "gone")
  dfx
})
# combine results in a new data.frame
newdf <- do.call(rbind, lst)
We can do this with data.table. We convert the 'data.frame' to a 'data.table' (setDT(df)); grouped by 'y', we create the logical condition for the maximum value of 'x' combined with the 'gone' element in 'z', coerce it to integer (as.integer), and assign (:=) the output to the new column 'result'.
library(data.table)
setDT(df)[, result := as.integer(x==max(x) & z=='gone') , by = y]
df
#     y  x    z result
#  1: a  3 gone      0
#  2: a  5 gone      0
#  3: a  8 gone      0
#  4: a  9 gone      0
#  5: a 10 gone      1
#  6: b  1           0
#  7: b  2           0
#  8: b  4           0
#  9: b  6           0
# 10: b  7           0
Or we can use ave from base R
df$result <- with(df, +(ave(x, y, FUN=max)==x & z=='gone' ))
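To see what ave contributes here: it returns each group's maximum recycled to the full length of x, so the comparison with x is element-wise (shown for the example df):
with(df, ave(x, y, FUN = max))
# [1] 10 10 10 10 10  7  7  7  7  7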

Subset dataframe in a list by a dataframe column criteria

I have a list of data.frames. I need to pick one data.frame from this list according to a criterion in one of its columns.
(all dataframes of the list have the same number and names of columns, and the same number of rows)
For example, I have:
l <- list(data.frame(x = c(2,3,4,5), y = c(4,4,4,4), z = c(2,3,4,5)),
          data.frame(x = c(1,4,7,3), y = c(7,7,7,7), z = c(2,5,7,8)),
          data.frame(x = c(2,3,1,8), y = c(1,1,1,1), z = c(6,4,1,3)))
names(l) <- c("MH1", "MH2", "MH3")
Output:
$MH1
  x y z
1 2 4 2
2 3 4 3
3 4 4 4
4 5 4 5

$MH2
  x y z
1 1 7 2
2 4 7 5
3 7 7 7
4 3 7 8

$MH3
  x y z
1 2 1 6
2 3 1 4
3 1 1 1
4 8 1 3
So I want to select the data.frame whose column "y" is closest to a given number. For example, if a = 3 the chosen data.frame should be "MH1" (where column y = 4).
If l were a data.frame I would do something like:
closestDF <- subset(l, abs(l$y - a) == min(abs(l$y - a)))
How can I do this with the list of dataframes?
Following the answers and comments of @David Arenburg, @akrun and @shadow, here are three possible solutions to the problem I posted:
Option 1)
library(data.table)
rbindlist(l)[abs(y - a) == min(abs(y - a))]
Option 2) (needs an R version > 3.1.2)
library(dplyr)
bind_rows(l) %>% filter(abs(y - a) == min(abs(y - a)))
Option 3) (also works, but computationally slower than the first two options when used within a big loop or an iterative process)
l[[which.min(sapply(l, function(df) sum(abs(df$y - a))))]]

Removing rows after a certain value in R

I have a data frame in R,
df <- data.frame(a=c(1,1,1,2,2,5,5,5,5,5,6,6), b=c(0,1,0,0,0,0,0,1,0,0,0,1))
I want to remove the rows where the variable b equals 0 whenever they occur after a row with b equal to 1 within the same value of a.
So the output I am looking for is,
df.out <- data.frame(a=c(1,1,2,2,5,5,5,6,6), b=c(0,1,0,0,0,0,1,0,1))
Is there a way to do this in R?
This should do the trick:
ind <- which(df$b == 0 & ave(df$b, df$a, FUN = cumsum) > 0)
df.out <- df[-ind, ]
ave(df$b, df$a, FUN = cumsum) counts how many 1s have been seen so far within each level of a; any row where that count is positive and b == 0 comes after a 1, so its index lands in ind and the row is dropped.
How about
df[ ave(df$b, df$a, FUN=function(x) x>=cummax(x))==1, ]
#    a b
# 1  1 0
# 2  1 1
# 4  2 0
# 5  2 0
# 6  5 0
# 7  5 0
# 8  5 1
# 11 6 0
# 12 6 1
Here we use ave to look within each level of a and we test to see if we've seen a 1 yet with cummax.
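A minimal sketch of that test on one group's b values:
x <- c(0, 0, 1, 0, 0)   # b values for a == 5
cummax(x)               # becomes 1 from the first 1 onwards
# [1] 0 0 1 1 1
x >= cummax(x)          # FALSE exactly for the 0s that follow a 1
# [1]  TRUE  TRUE  TRUE FALSE FALSE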

Create a vector listing run lengths of the original vector, with the same length as the original vector

This problem seems trivial, but I'm at my wits' end after hours of reading.
I need to generate a vector of the same length as the input vector that lists, for each value of the input vector, the total count of that value. So, by way of example, I would want to generate the last column of this data frame:
> df
   customer.id transaction.count total.transactions
1            1                 1                  4
2            1                 2                  4
3            1                 3                  4
4            1                 4                  4
5            2                 1                  2
6            2                 2                  2
7            3                 1                  3
8            3                 2                  3
9            3                 3                  3
10           4                 1                  1
I realise this could be done two ways, either by using run lengths of the first column, or grouping the second column using the first and applying a maximum.
I've tried both tapply:
> tapply(df$transaction.count, df$customer.id, max)
And rle:
> rle(df$customer.id)
But both return a vector of shorter length than the original:
[1] 4 2 3 1
Any help gratefully accepted!
You can do it without creating a transaction counter with:
df$total.transactions <- with(df,
  ave(transaction.count, customer.id, FUN = length))
You can use rle with rep to get what you want:
x <- rep(1:4, 4:1)
x
# [1] 1 1 1 1 2 2 2 3 3 4
rep(rle(x)$lengths, rle(x)$lengths)
# [1] 4 4 4 4 3 3 3 2 2 1
For performance purposes, you could store the rle object separately so it is only called once.
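For example, a sketch that calls rle() only once:
r <- rle(x)                 # compute the run lengths once
rep(r$lengths, r$lengths)   # reuse the stored lengths for both arguments
# [1] 4 4 4 4 3 3 3 2 2 1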
Or, as Karsten suggested, with ddply from plyr:
require(plyr)
# expects a data.frame
dat <- data.frame(x = rep(1:4, 4:1))
ddply(dat, "x", transform, total = length(x))
You are probably looking for split-apply-combine approach; have a look at ddply in the plyr package or the split function in base R.
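As a sketch of that pattern with base split(), using the question's df:
pieces <- split(df$transaction.count, df$customer.id)             # split
totals <- lapply(pieces, function(g) rep(length(g), length(g)))   # apply
df$total.transactions <- unsplit(totals, df$customer.id)          # combine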
