Tidying table with multiple groups of wide columns, using tidyverse - r

I often find myself in a situation where I have a table that contains multiple groups of wide columns, like so:
replicate groupA VA1 VA2 groupB VB1 VB2
1 1 a 0.3429166 -2.30336406 f 0.05363582 1.6454078
2 2 b -1.3183732 -0.13516849 g -0.42586417 0.1541541
3 3 c -0.7908358 -0.10746447 h 1.05134242 1.4297350
4 4 d -0.9963677 -1.82557058 i -1.14532536 1.0815733
5 5 e -1.3634609 0.04385812 j -0.65643595 -0.1452877
And I'd like to turn the columns into one long table, like so:
replicate group key value
1 1 a V1 0.34291665
2 2 b V1 -1.31837322
3 3 c V1 -0.79083580
4 4 d V1 -0.99636772
5 5 e V1 -1.36346088
6 1 a V2 -2.30336406
7 2 b V2 -0.13516849
8 3 c V2 -0.10746447
9 4 d V2 -1.82557058
10 5 e V2 0.04385812
11 1 f V1 0.05363582
12 2 g V1 -0.42586417
13 3 h V1 1.05134242
14 4 i V1 -1.14532536
15 5 j V1 -0.65643595
16 1 f V2 1.64540784
17 2 g V2 0.15415408
18 3 h V2 1.42973499
19 4 i V2 1.08157329
20 5 j V2 -0.14528774
I can do this by selecting the two groups of columns individually, tidying, and then rbinding together (code below). However, this approach doesn't seem particularly elegant, and it becomes cumbersome if there are more than two groups of columns. I'm wondering whether there's a more elegant approach, using a single pipe chain of data transformations.
The fundamental question here is: How do we automate the process of breaking the table into groups of columns, tidying those, and then combining back together.
My current code:
library(dplyr)
library(tidyr)
# generate example data
df_wide <- data.frame(replicate = 1:5,
groupA = letters[1:5],
VA1 = rnorm(5),
VA2 = rnorm(5),
groupB = letters[6:10],
VB1 = rnorm(5),
VB2 = rnorm(5))
# tidy columns with A in the name
dfA <- select(df_wide, replicate, groupA, VA1, VA2) %>%
gather(key, value, VA1, VA2) %>%
mutate(key = case_when(key == "VA1" ~ "V1",
key == "VA2" ~ "V2")) %>%
select(replicate, group = groupA, key, value)
# tidy columns with B in the name
dfB <- select(df_wide, replicate, groupB, VB1, VB2) %>%
gather(key, value, VB1, VB2) %>%
mutate(key = case_when(key == "VB1" ~ "V1",
key == "VB2" ~ "V2")) %>%
select(replicate, group = groupB, key, value)
# combine
df_long <- rbind(dfA, dfB)
Note: Similar questions have been asked here and here, but I think the accepted answer shows that this here is a subtly different problem.
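For reference, on tidyr 1.0 or later the same reshaping can be sketched as a single pipe with pivot_longer, which supersedes gather. This is only a sketch, assuming the df_wide defined above and that every value column follows the V<letter><digit> naming pattern:
library(dplyr)
library(tidyr)
df_long <- df_wide %>%
  # normalise names so the set letter comes last: VA1 -> V1A, VB2 -> V2B
  rename_with(~ sub("^V([AB])(\\d)$", "V\\2\\1", .x)) %>%
  # one row per replicate and set, with group, V1, V2 columns
  pivot_longer(-replicate,
               names_to = c(".value", "set"),
               names_pattern = "(group|V\\d)([AB])") %>%
  # stack V1/V2 into key/value
  pivot_longer(c(V1, V2), names_to = "key", values_to = "value") %>%
  select(replicate, group, key, value)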

1.
Although the question asked for a tidyverse solution, there is a convenient option with melt from data.table, which can also take multiple patterns in the measure argument.
library(data.table)
setnames(melt(melt(setDT(df1), measure = patterns('group', 'VA', 'VB')),
id.var = 1:3)[, -4, with = FALSE], 2:3, c('key', 'group'))[]
2.a
With tidyverse, we can subset the dataset into a list of column groups, then loop through the list with map_df, converting each element to 'long' format with gather, to get a single data.frame:
library(tidyverse)
list(df1[1:4], df1[c(1,5:7)]) %>%
map_df(~gather(., key, value, 3:4) %>%
{names(.)[2] <- 'group';.}) %>%
mutate(key = sub('(.).(.)', '\\1\\2', key))
# replicate group key value
#1 1 a V1 0.34291660
#2 2 b V1 -1.31837320
#3 3 c V1 -0.79083580
#4 4 d V1 -0.99636770
#5 5 e V1 -1.36346090
#6 1 a V2 -2.30336406
#7 2 b V2 -0.13516849
#8 3 c V2 -0.10746447
#9 4 d V2 -1.82557058
#10 5 e V2 0.04385812
#11 1 f V1 0.05363582
#12 2 g V1 -0.42586417
#13 3 h V1 1.05134242
#14 4 i V1 -1.14532536
#15 5 j V1 -0.65643595
#16 1 f V2 1.64540780
#17 2 g V2 0.15415410
#18 3 h V2 1.42973500
#19 4 i V2 1.08157330
#20 5 j V2 -0.14528770
2.b
If we need to split based on the occurrence of 'group':
split.default(df1[-1], cumsum(grepl('group', names(df1)[-1]))) %>%
map(~bind_cols(df1[1], .)) %>%
map_df(~gather(., key, value, 3:4) %>%
{names(.)[2] <- 'group';.}) %>%
mutate(key = sub('(.).(.)', '\\1\\2', key))
2.c
This version uses rename_at instead of a names assignment, in the spirit of tidyverse options:
df1[-1] %>%
split.default(cumsum(grepl('group', names(df1)[-1]))) %>%
map_df(~bind_cols(df1[1], .) %>%
gather(., key, value, 3:4) %>%
rename_at(2, funs(substring(.,1, 5))))
NOTE:
1) Options 2.a, 2.b, and 2.c use only tidyverse functions.
2) They don't depend on the substring 'A' or 'B' in the column names.
3) They assume the pattern in the OP's dataset is a 'group' column followed by its value columns.

1) This solution consists of:
a gather which generates the desired number of rows,
a mutate which combines the groupA and groupB columns and renames the key column as requested, and
a select which picks out the columns wanted.
First gather the columns whose names start with V and then create a new group column from groupA and groupB, choosing groupA if the key has an A in it and groupB if the key has a B in it. (We used mapply(switch, ...) here for easy extension to the 3+ group case, but we could have used an ifelse, viz. ifelse(grepl("A", key), as.character(groupA), as.character(groupB)), given that we have only two groups.) The mutate also reduces the key names from VA1 to V1, etc., and finally the select picks out the desired columns.
DF %>%
gather(key, value, starts_with("V")) %>%
mutate(group = mapply(switch, gsub("[^AB]", "", key), A = groupA, B = groupB),
key = sub("[AB]", "", key)) %>%
select(replicate, group, key, value)
giving:
replicate group key value
1 1 a V1 0.34291660
2 2 b V1 -1.31837320
3 3 c V1 -0.79083580
4 4 d V1 -0.99636770
5 5 e V1 -1.36346090
6 1 a V2 -2.30336406
7 2 b V2 -0.13516849
8 3 c V2 -0.10746447
9 4 d V2 -1.82557058
10 5 e V2 0.04385812
11 1 f V1 0.05363582
12 2 g V1 -0.42586417
13 3 h V1 1.05134242
14 4 i V1 -1.14532536
15 5 j V1 -0.65643595
16 1 f V2 1.64540780
17 2 g V2 0.15415410
18 3 h V2 1.42973500
19 4 i V2 1.08157330
20 5 j V2 -0.14528770
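For completeness, the ifelse variant mentioned above would look like this (a sketch that only covers the two-group case):
DF %>%
  gather(key, value, starts_with("V")) %>%
  mutate(group = ifelse(grepl("A", key), as.character(groupA), as.character(groupB)),
         key = sub("[AB]", "", key)) %>%
  select(replicate, group, key, value)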
2) Another approach would be to split the columns into groups such that all columns in a group have the same name after removing A and B from their names. Perform unlist on each such group to reduce the list to a list of plain vectors and convert that list to a data.frame. Finally gather the V columns and rearrange. Note that rownames_to_column is from the tibble package.
DF %>%
as.list %>%
split(sub("[AB]", "", names(.))) %>%
lapply(unlist) %>%
as.data.frame %>%
rownames_to_column %>%
gather(key, value, starts_with("V")) %>%
arrange(gsub("[^AB]", "", rowname), key) %>%
select(replicate, group, key, value)
2a) If the row order is not important then the rownames_to_column, arrange and select lines could be omitted shortening it to this:
DF %>%
as.list %>%
split(sub("[AB]", "", names(.))) %>%
lapply(unlist) %>%
as.data.frame %>%
gather(key, value, starts_with("V"))
Solutions (2) and (2a) could easily be converted to base-only solutions by replacing the gather with the appropriate reshape from base as in the second reshape, i.e. the one producing d2, in (3).
3) Although the question asked for a tidyverse solution, there is a fairly convenient base solution consisting of two reshape calls. The varying produced by the split is: list(group = c("groupA", "groupB"), V1 = c("VA1", "VB1"), V2 = c("VA2", "VB2")) -- that is, it matches up the ith column in each set of columns.
varying <- split(names(DF)[-1], gsub("[AB]", "", names(DF))[-1])
d <- reshape(DF, dir = "long", varying = varying, v.names = names(varying))
d <- subset(d, select = -c(time, id))
d2 <- reshape(d, dir = "long", varying = list(grep("V", names(d))), v.names = "value",
timevar = "key")
d2 <- subset(d2, select = c(replicate, group, key, value))
d2
Note: The input in reproducible form is:
DF <- structure(list(replicate = 1:5, groupA = structure(1:5, .Label = c("a",
"b", "c", "d", "e"), class = "factor"), VA1 = c(0.3429166, -1.3183732,
-0.7908358, -0.9963677, -1.3634609), VA2 = c(-2.30336406, -0.13516849,
-0.10746447, -1.82557058, 0.04385812), groupB = structure(1:5, .Label = c("f",
"g", "h", "i", "j"), class = "factor"), VB1 = c(0.05363582, -0.42586417,
1.05134242, -1.14532536, -0.65643595), VB2 = c(1.6454078, 0.1541541,
1.429735, 1.0815733, -0.1452877)), .Names = c("replicate", "groupA",
"VA1", "VA2", "groupB", "VB1", "VB2"), class = "data.frame", row.names = c("1",
"2", "3", "4", "5"))

Related

Collapsing Columns in R using tidyverse with mutate, replace, and unite. Writing a function to reuse?

Data:
ID  B   C
1   NA  x
2   x   NA
3   x   x
Results:
ID  Unified
1   C
2   B
3   B_C
I'm trying to combine columns B and C using mutate and unite, but how would I scale this up so I can reuse it for multiple columns (think 100+), instead of having to write out the variables each time? Or is there a function that's already built in to do this?
My current solution is this:
library(tidyverse)
Data %>%
mutate(B = replace(B, B == 'x', 'B'), C = replace(C, C == 'x', 'C')) %>%
unite("Unified", B:C, na.rm = TRUE, remove= TRUE)
We may use across to loop over the columns and replace the values that correspond to 'x' with the column name (cur_column()).
library(dplyr)
library(tidyr)
Data %>%
mutate(across(B:C, ~ replace(., .== 'x', cur_column()))) %>%
unite(Unified, B:C, na.rm = TRUE, remove = TRUE)
-output
ID Unified
1 1 C
2 2 B
3 3 B_C
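To scale this beyond two columns (the 100+ case in the question), the explicit B:C selection can be replaced by a negative selection. A sketch, assuming every column other than ID uses the same x/NA coding:
Data %>%
  mutate(across(-ID, ~ replace(., . == 'x', cur_column()))) %>%
  unite(Unified, -ID, na.rm = TRUE, remove = TRUE)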
data
Data <- structure(list(ID = 1:3, B = c(NA, "x", "x"), C = c("x", NA,
"x")), class = "data.frame", row.names = c(NA, -3L))
Here are a couple of options.
Using dplyr -
library(dplyr)
cols <- names(Data)[-1]
Data %>%
rowwise() %>%
mutate(Unified = paste0(cols[!is.na(c_across(B:C))], collapse = '_')) %>%
ungroup -> Data
Data
# ID B C Unified
# <int> <chr> <chr> <chr>
#1 1 NA x C
#2 2 x NA B
#3 3 x x B_C
Base R
Data$Unified <- apply(Data[cols], 1, function(x)
paste0(cols[!is.na(x)], collapse = '_'))

Group data by factor level, then transform to data frame with colname being levels?

Here is my problem, which I can't solve:
Data:
df <- data.frame(f1=c("a", "a", "b", "b", "c", "c", "c"),
v1=c(10, 11, 4, 5, 0, 1, 2))
In the data frame, f1 is a factor:
f1 v1
a 10
a 11
b 4
b 5
c 0
c 1
c 2
# What I want is (for example): fetch the levels whose number of elements == 2, then turn them into a data.frame
a b
10 4
11 5
Thanks in advance!
I might be missing something simple here, but the below approach using dplyr works.
library(dplyr)
nlevels = 2
df1 <- df %>%
add_count(f1) %>%
filter(n == nlevels) %>%
select(-n) %>%
mutate(rn = row_number()) %>%
spread(f1, v1) %>%
select(-rn)
This gives
# a b
# <int> <int>
#1 10 NA
#2 11 NA
#3 NA 4
#4 NA 5
Now, if you want to remove NA's we can do
do.call("cbind.data.frame", lapply(df1, function(x) x[!is.na(x)]))
# a b
#1 10 4
#2 11 5
As we have filtered the data frame to levels with exactly nlevels observations, every column in the final data frame has the same number of rows.
split might be useful here to split df$v1 into parts corresponding to df$f1. Since you are always extracting equal-length chunks, they can then simply be combined back into a data.frame:
spl <- split(df$v1, df$f1)
data.frame(spl[lengths(spl)==2])
# a b
#1 10 4
#2 11 5
Or do it all in one call by combining this with Filter:
data.frame(Filter(function(x) length(x)==2, split(df$v1, df$f1)))
# a b
#1 10 4
#2 11 5
Here is a solution using unstack :
unstack(
droplevels(df[ave(df$v1, df$f1, FUN = function(x) length(x) == 2)==1,]),
v1 ~ f1)
# a b
# 1 10 4
# 2 11 5
A variant, similar to #thelatemail's solution:
data.frame(Filter(function(x) length(x) == 2, unstack(df,v1 ~ f1)))
My tidyverse solution would be:
library(tidyverse)
df %>%
group_by(f1) %>%
filter(n() == 2) %>%
mutate(i = row_number()) %>%
spread(f1, v1) %>%
select(-i)
# # A tibble: 2 x 2
# a b
# * <dbl> <dbl>
# 1 10 4
# 2 11 5
or mixing approaches :
as_tibble(keep(unstack(df,v1 ~ f1), ~length(.x) == 2))
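On current tidyr, the spread() step in the pipe above can also be written with pivot_wider(); a sketch of the same idea:
library(dplyr)
library(tidyr)
df %>%
  group_by(f1) %>%
  filter(n() == 2) %>%
  mutate(i = row_number()) %>%
  ungroup() %>%
  pivot_wider(names_from = f1, values_from = v1) %>%
  select(-i)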
Using all base functions (but you should use tidyverse)
# Add count of instances
df$len <- ave(df$v1, df$f1, FUN = length)
# Filter, drop the count
df <- df[df$len == 2, c('f1', 'v1')]
# Hacky pivot
result <- data.frame(
  lapply(unique(df$f1), FUN = function(y) df$v1[df$f1 == y])
)
colnames(result) <- unique(df$f1)
> result
a b
1 10 4
2 11 5
I'd code it like this; maybe it helps you.
library(reshape2)
library(dplyr)
aa = data.frame(v1=c('a','a','b','b','c','c','c'),f1=c(10,11,4,5,0,1,2))
cc = aa %>% group_by(v1) %>% summarise(id = length((v1)))
dd= merge(aa,cc) #get the level
ee = dd[dd$id==2,] # select levels whose count equals 2
ee$id = rep(c(1,2),nrow(ee)/2) # reset index like (1,2,1,2)
dcast(ee, id~v1,value.var = 'f1')
all done!

Append dataFrame columns to other columns with different names and order?

I am struggling with reordering a dataFrame in R.
My dataFrame has data coming from two different sensors. So in the beginning every column has a name with the syntax "sensor number.sample number". The rowname is a coordinate of each sample.
Sadly the columns are not ordered with an ascending sample number.
How can I make an automatic ordering where after number 1 comes 2 and not 10?
With correctly ordered columns, I would like to cut all columns of the second sensor and append them under the rows of the first sensor. This is also tricky as the number of columns of each sensor varies in reality.
To distinguish between both sensors I would add a postfix "a" or "b" for the new rownames.
My problem here is that I know rbind, but it requires identical column names, which I cannot provide. I would also need to select the columns manually, as I have no clue how to automatically select all columns of the second sensor.
My idea for the moment is to make subsets for each sensor, rename the columns and then use rbind with both subsets. Is this a good idea?
The rownames I then could modify with paste().
I now present simplified frames as the original is quite big. So the numbers (c(1:3)) are just exemplary.
This is how my dataFrame looks at the beginning:
myDf = data.frame(a.10= c(1:3),a.11= c(1:3),a.12= c(1:3),a.13= c(1:3),a.2= c(1:3),a.3= c(1:3),a.4= c(1:3),a.5= c(1:3),a.6= c(1:3),a.7= c(1:3),a.8= c(1:3),a.9= c(1:3),
b.1= c(1:3),b.10= c(1:3),b.11= c(1:3),b.2= c(1:3),b.3= c(1:3),b.4= c(1:3),b.5= c(1:3),b.6= c(1:3),b.7= c(1:3),b.8= c(1:3),b.9= c(1:3))
My goal is to transform the dataFrame that is looks like that:
desiredDf =data.frame(n9=rep(c(1:3),2), n10=rep(c(1:3),2), n11=rep(c(1:3),2), n12=c(c(1:3),NA, NA, NA), n13=c(c(1:3), NA, NA, NA))
rownames(desiredDf)<-(c("1a","2a","3a","1b","2b","3b"))
Thank you very much!
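As an aside on the ordering sub-question: one way to sketch it in base R is to sort the column names by sensor letter and then by the numeric sample number (this assumes every name follows the letter.number pattern used in myDf):
sensor <- sub("\\..*$", "", names(myDf))                      # "a", "a", ..., "b"
sample_no <- as.numeric(sub("^[^.]+\\.", "", names(myDf)))    # 10, 11, ..., 9
myDf_ordered <- myDf[order(sensor, sample_no)]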
Here is an option.
library(tidyverse)
myDF2 <- myDf %>% gather(measure, result, a.10:b.9) %>%
separate(measure, into = c("letter", "number"), sep = "\\.") %>%
group_by(letter, number)%>%
mutate(n = row_number()) %>%
unite(col, n, letter, sep = "") %>%
ungroup() %>%
arrange(as.numeric(number))%>%
mutate(number = paste0("n", number))%>%
mutate(number = factor(number, levels = unique(number)))%>%
spread(number, result)%>%
arrange(col)
row.names(myDF2) <- myDF2$col
myDF2$col <- NULL
Convert the row names to a column, reshape into long form and separate the key, i.e. the original column names, into columns group and no, converting the latter to numeric. Sort, reshape back to wide form, sort again, combine the rowname and group, and preface each column name with n.
library(dplyr)
library(tibble)
library(tidyr)
myDf %>%
rownames_to_column %>%
gather(key, value, -rowname) %>%
separate(key, c("group", "no"), convert = TRUE) %>%
arrange(group, no) %>%
spread(no, value) %>%
arrange(group, rowname) %>%
unite(rowname, rowname, group, sep = "") %>%
column_to_rownames %>%
rename_all(~ paste0("n", .))
giving:
n1 n2 n3 n4 n5 n6 n7 n8 n9 n10 n11 n12 n13
1a NA 1 1 1 1 1 1 1 1 1 1 1 1
2a NA 2 2 2 2 2 2 2 2 2 2 2 2
3a NA 3 3 3 3 3 3 3 3 3 3 3 3
1b 1 1 1 1 1 1 1 1 1 1 1 NA NA
2b 2 2 2 2 2 2 2 2 2 2 2 NA NA
3b 3 3 3 3 3 3 3 3 3 3 3 NA NA
Note
Above we used this for myDf, the input.
myDf <-
structure(list(a.10 = 1:3, a.11 = 1:3, a.12 = 1:3, a.13 = 1:3,
a.2 = 1:3, a.3 = 1:3, a.4 = 1:3, a.5 = 1:3, a.6 = 1:3, a.7 = 1:3,
a.8 = 1:3, a.9 = 1:3, b.1 = 1:3, b.10 = 1:3, b.11 = 1:3,
b.2 = 1:3, b.3 = 1:3, b.4 = 1:3, b.5 = 1:3, b.6 = 1:3, b.7 = 1:3,
b.8 = 1:3, b.9 = 1:3), class = "data.frame", row.names = c(NA,
-3L))

Applying tidyr to separate only specific rows by specifying which rows to exclude

I would like to separate a column by a condition that excludes certain rows. This is a minor variation on this question: Applying tidyr separate only to specific rows. But instead of specifying which rows to separate, I'd like to specify which rows to exclude from separating.
For example, let's say we want to split all rows of the 'text' column, except for the ones that have here_do in them:
#creating DF for the example
df <- data.frame(var_a = letters[1:5],
var_b = c(sample(1:100, 5)),
text = c("foo_bla",
"here_do",
"oh_yes",
"ba_a",
"lan_d"))
I guess there would be some way of using extract as we see in the related question, but I can't seem to figure out how to modify the "(here)_(do)" part to make it work:
library(tidyr)
extract(df, text, into = c("first", "sec"), "(here)_(do)", remove = FALSE)
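For what it's worth, the extract approach can be made to work by anchoring the pattern and adding a negative lookahead, so that rows equal to here_do fail to match and come back as NA. A sketch, relying on extract() returning NA for non-matching rows:
library(tidyr)
extract(df, text, into = c("first", "sec"),
        regex = "^(?!here_do$)([^_]+)_(.*)$", remove = FALSE)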
If you don't mind using "data.table" instead, you can try:
library(data.table)
setDT(df)[!text %in% "here_do", c("first", "second") := tstrsplit(text, "_")][]
# var_a var_b text first second
# 1: a 40 foo_bla foo bla
# 2: b 4 here_do NA NA
# 3: c 12 oh_yes oh yes
# 4: d 35 ba_a ba a
# 5: e 11 lan_d lan d
One way is to separate everything, then "unseparate" the rows you wanted to exclude.
library('tidyverse')
df <- data.frame(var_a = letters[1:5],
var_b = c(sample(1:100, 5)),
text = c("foo_bla",
"here_do",
"oh_yes",
"ba_a",
"lan_d"),
stringsAsFactors = F)
df %>%
separate(text, c('first_val', 'second_val'), remove = F) %>%
mutate(
first_val = ifelse(text == 'here_do', text, first_val),
second_val = ifelse(text == 'here_do', NA, second_val))
#> var_a var_b text first_val second_val
#> 1 a 45 foo_bla foo bla
#> 2 b 43 here_do here_do <NA>
#> 3 c 81 oh_yes oh yes
#> 4 d 33 ba_a ba a
#> 5 e 15 lan_d lan d
We can filter out the row that you do not want to separate, separate the rest of the rows, and then join the result back to the original data frame.
library(dplyr)
library(tidyr)
df2 <- df %>%
filter(!(text %in% "here_do")) %>%
separate(text, into = c("First", "Second"), remove = FALSE) %>%
right_join(df, by = c("var_a", "var_b", "text"))
df2
# var_a var_b text First Second
# 1 a 19 foo_bla foo bla
# 2 b 90 here_do <NA> <NA>
# 3 c 21 oh_yes oh yes
# 4 d 6 ba_a ba a
# 5 e 15 lan_d lan d
DATA
set.seed(244)
df <- data.frame(var_a = letters[1:5],
var_b = c(sample(1:100, 5)),
text = c("foo_bla",
"here_do",
"oh_yes",
"ba_a",
"lan_d"))

Select rows based on non-directed combinations of columns

I am trying to select the maximum value in a dataframe's third column based on the combinations of the values in the first two columns.
My problem is similar to this one but I can't find a way to implement what I need.
EDIT: Sample data changed to make the column names more obvious.
Here is some sample data:
library(tidyr)
set.seed(1234)
df <- data.frame(group1 = letters[1:4], group2 = letters[1:4])
df <- df %>% expand(group1, group2)
df <- subset(df, subset = group1!=group2)
df$score <- runif(n = 12,min = 0,max = 1)
df
# A tibble: 12 × 3
group1 group2 score
<fctr> <fctr> <dbl>
1 a b 0.113703411
2 a c 0.622299405
3 a d 0.609274733
4 b a 0.623379442
5 b c 0.860915384
6 b d 0.640310605
7 c a 0.009495756
8 c b 0.232550506
9 c d 0.666083758
10 d a 0.514251141
11 d b 0.693591292
12 d c 0.544974836
In this example rows 1 and 4 are 'duplicates'. I would like to select row 4 as the value in the score column is larger than in row 1. Ultimately I would like a dataframe to be returned with the group1 and group2 columns and the maximum value in the score column. So in this example, I expect there to be 6 rows returned.
How can I do this in R?
I'd prefer dealing with this problem in two steps:
library(dplyr)
# Create function for computing group IDs from data frame of groups (per column)
get_group_id <- function(groups) {
apply(groups, 1, function(row) {
paste0(sort(row), collapse = "_")
})
}
group_id <- get_group_id(select(df, -score))
# Perform the computation
df %>%
mutate(groupId = group_id) %>%
group_by(groupId) %>%
slice(which.max(score)) %>%
ungroup() %>%
select(-groupId)
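An alternative sketch that avoids the helper function: build an order-independent pair key inline by comparing the two group columns (this assumes the groups can be compared as character strings):
library(dplyr)
df %>%
  group_by(pair = ifelse(as.character(group1) < as.character(group2),
                         paste(group1, group2, sep = "_"),
                         paste(group2, group1, sep = "_"))) %>%
  slice(which.max(score)) %>%
  ungroup() %>%
  select(-pair)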
