Stack 10 columns in R into two columns [duplicate] - r

I'm having trouble stacking 10 columns in R into two columns, where the 10 columns form 5 related Name/ID pairs. Basically I have something like:
Name1, ID1, Name2, ID2, Name3, ID3, Name4, ID4, Name5, ID5
And I need to stack them into a Name and ID table where the values in each Name column stay matched to their ID counterparts. What would be the best way to approach this?
Thanks!

I would recommend melt from the "data.table" package.
Here's some sample data. (This is something you should share.)
mydf <- data.frame(
  matrix(1:20, ncol = 10,
         dimnames = list(NULL, paste0(c("Name", "ID"), rep(1:5, each = 2))))
)
mydf
## Name1 ID1 Name2 ID2 Name3 ID3 Name4 ID4 Name5 ID5
## 1 1 3 5 7 9 11 13 15 17 19
## 2 2 4 6 8 10 12 14 16 18 20
Here's the reshaping:
library(data.table)
melt(as.data.table(mydf), measure = patterns("Name", "ID"),
value.name = c("Name", "ID"))
## variable Name ID
## 1: 1 1 3
## 2: 1 2 4
## 3: 2 5 7
## 4: 2 6 8
## 5: 3 9 11
## 6: 3 10 12
## 7: 4 13 15
## 8: 4 14 16
## 9: 5 17 19
## 10: 5 18 20

You can do this by reshaping:
library(dplyr)
library(tidyr)
library(rex)
variable_regex =
  rex(capture("Name" %>% or("ID")),
      capture(digits))

mydf %>%
  mutate(row_ID = 1:n()) %>%
  gather(variable, value, -row_ID) %>%
  extract(variable,
          c("new_variable", "column_ID"),
          variable_regex) %>%
  spread(new_variable, value)
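For reference, the same reshape can also be sketched with tidyr's pivot_longer (assuming tidyr >= 1.0 and the mydf sample data from above); the ".value" sentinel keeps each Name paired with its ID:
library(tidyr)
# ".value" sends the first captured group (Name/ID) to its own column,
# while the digit captured by the second group identifies the pair
pivot_longer(mydf,
             cols = everything(),
             names_to = c(".value", "pair"),
             names_pattern = "([A-Za-z]+)(\\d+)")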

Related

Recode multiple columns to numbers increasingly in R

I have 50 columns of names, but here I have presented only 4 columns for convenience.
Name1 Name2 Name3 Name4
Rose,Ali Van,Hall Ghol,Dam Murr,kate
Camp,Laura Ka,Klo Dan,Dan Ali,Hoss
Rose,Ali Van,Hall Ghol,Dam Kol,Kan
Murr,Kate Ismal, Ismal Sian,Rozi Nas,Ami
Ghol,Dam Ka,Klo Rose,Ali Nor,Ko
Murr,Kate Ismal, Ismal Dan,Dan Nas,Ami
I want to assign numbers to each person based on the columns, a sequence of numbers.
For example, in Name1 we get the numbers 1-4, and repeated names get the same number.
In Name2 the numbering should start from 5, and so on. This will give me the following table:
Assign1 Assign2 Assign3 Assign4
1 5 8 12
2 6 9 13
1 5 8 14
3 7 10 15
4 6 11 17
3 7 9 15
I would like to have it without a loop, e.g. with sapply: sapply(dat, function(x) match(x, unique(x))).
Using dplyr or tidyverse would be great.
A tidyverse solution with purrr::accumulate():
library(tidyverse)
df %>%
mutate(as_tibble(
accumulate(across(Name1:Name4, ~ match(.x, unique(.x))), ~ .y + max(.x))
))
# Name1 Name2 Name3 Name4
# 1 1 5 8 12
# 2 2 6 9 13
# 3 1 5 8 14
# 4 3 7 10 15
# 5 4 6 11 16
# 6 3 7 9 15
Because the values in each column depend on the values in the previous column, the calculations have to be done sequentially. This is probably most succinctly achieved by a loop. Remember that lapply and sapply are simply loops-in-disguise, and won't be quicker than an explicit loop.
Note that your expected output has a mistake in it (there is a number 17 which should be 16)
output <- setNames(df, paste0('Assign', seq_along(df)))
for(i in seq_along(output)) {
  output[[i]] <- match(output[[i]], unique(output[[i]]))
  if(i > 1) output[[i]] <- output[[i]] + max(output[[i - 1]])
}
output
#> Assign1 Assign2 Assign3 Assign4
#> 1 1 5 8 12
#> 2 2 6 9 13
#> 3 1 5 8 14
#> 4 3 7 10 15
#> 5 4 6 11 16
#> 6 3 7 9 15
Edit
If you really want it without an explicit loop, you can do:
res <- sapply(seq_along(df), \(i) match(df[[i]], unique(df[[i]])))
res + t(replicate(nrow(df), head(c(0, cumsum(apply(res, 2, max))), -1))) |>
as.data.frame() |>
setNames(paste0('Assign', seq_along(df)))
#> Assign1 Assign2 Assign3 Assign4
#> 1 1 5 8 12
#> 2 2 6 9 13
#> 3 1 5 8 14
#> 4 3 7 10 15
#> 5 4 6 11 16
#> 6 3 7 9 15
Created on 2023-01-13 with reprex v2.0.2
Data taken from question in reproducible format
df <- structure(list(Name1 = c("Rose,Ali", "Camp,Laura", "Rose,Ali",
"Murr,Kate", "Ghol,Dam", "Murr,Kate"), Name2 = c("Van,Hall",
"Ka,Klo", "Van,Hall", "Ismal, Ismal", "Ka,Klo", "Ismal, Ismal"
), Name3 = c("Ghol,Dam", "Dan,Dan", "Ghol,Dam", "Sian,Rozi",
"Rose,Ali", "Dan,Dan"), Name4 = c("Murr,kate", "Ali,Hoss", "Kol,Kan",
"Nas,Ami", "Nor,Ko", "Nas,Ami")), row.names = c(NA, -6L),
class = "data.frame")
Here is a tidyverse approach:
First, paste the column name after each of the strings in all your columns, for sorting purposes later. Then pivot the data into a two-column data frame so that we can assign IDs to the values with match. Finally, pivot it back to a wide format and unnest the list columns.
library(tidyverse)
df %>%
mutate(across(everything(), ~ paste0(.x, "_", cur_column()))) %>%
pivot_longer(everything(), names_to = "ab", values_to = "a") %>%
arrange(ab) %>%
mutate(b = match(a, unique(a)), .keep = "unused") %>%
pivot_wider(names_from = "ab", values_from = "b") %>%
unnest(everything())
# A tibble: 6 × 4
Name1 Name2 Name3 Name4
<int> <int> <int> <int>
1 1 5 8 12
2 2 6 9 13
3 1 5 8 14
4 3 7 10 15
5 4 6 11 16
6 3 7 9 15
Data
Taken from #Allan Cameron's answer above.
Update: the approach below is not ideal because the IDs are not unique. Sorry.
Using a lookup table with tidyverse:
library(dplyr)
library(tidyr)
library(tibble)  # deframe() comes from tibble
lookup <-
df |>
pivot_longer(everything()) |>
distinct() |>
arrange(name) |>
transmute(name = value, value = row_number()) |>
deframe()
df |>
mutate(across(everything(), ~ recode(., !!!lookup)))
Output:
Name1 Name2 Name3 Name4
1 1 5 4 12
2 2 6 9 13
3 1 5 4 14
4 3 7 10 15
5 4 6 1 16
6 3 7 9 15
Data from #Allan Cameron, thanks.
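If the lookup idea is still wanted, one possible workaround (a sketch only, not part of the original answer) is to key the lookup by column name as well as value with match, so that the same name appearing in different columns gets a different ID:
library(dplyr)
library(tidyr)
lookup <-
  df |>
  pivot_longer(everything()) |>
  distinct() |>
  arrange(name) |>
  mutate(id = row_number())
# look each cell up by (column name, value) so the IDs stay column-specific
df |>
  mutate(across(everything(),
                ~ lookup$id[match(paste(cur_column(), .x),
                                  paste(lookup$name, lookup$value))]))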

How to add one value to all values in a column with the same id

I am quite new to R and have never worked with bigger data. For the examples I reduced the two data frames:
df1

id   val1   val2
11      1      2
11      2      5
22      2      2
22      4      6
...   ...    ...

df2

id   val1   val2
11      5      3
22      6      5
...   ...    ...
I am looking for a way to add the values of df2 to each value in df1 with the same id.
So the result should be something like this:
id   val1   val2
11      6      5
11      7      8
22      8      7
22     10     11
...   ...    ...
Because the original data has over 3000 observations of 47 variables with 8 different ids, I am looking for a solution where the values are not added one by one.
#reproducible data
df1 <- read.table(text = "id val1 val2
11 1 2
11 2 5
22 2 2
22 4 6", header = TRUE)
df2 <- read.table(text = "id val1 val2
11 5 3
22 6 5", header = TRUE)
You could use powerjoin to handle conflicting columns when joining.
library(powerjoin)
power_left_join(df1, df2, by = "id", conflict = `+`)
# id val1 val2
# 1 11 6 5
# 2 11 7 8
# 3 22 8 7
# 4 22 10 11
Merge the datasets then add columns:
# merge
res <- merge(df1, df2, by = "id")
# then add
cbind(res[ 1 ], res[, 2:3] + res[, 4:5])
# id val1.x val2.x
# 1 11 6 5
# 2 11 7 8
# 3 22 8 7
# 4 22 10 11
One approach is to merge both datasets by the id variable, then add the corresponding columns to create the new val1 and val2 variables, as suggested in the comments by #zx8754. Using dplyr you can obtain the output with:
library(dplyr)
merge(df1, df2, by = "id") %>%
  mutate(val1 = val1.x + val1.y, val2 = val2.x + val2.y) %>%
  select(id, val1, val2)
Another approach - row bind then index by the table when summing
library(tidyverse)
imap(list(df1, df2), ~ mutate(.x, table = .y)) %>%
  bind_rows() %>%
  group_by(id) %>%
  summarise(across(matches("val"), ~ .[table == 1] + .[table == 2]),
            .groups = "drop")
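A plain dplyr join is another option; this is only a sketch, assuming every id in df1 also appears in df2 (the ".add" suffix is chosen here just for illustration):
library(dplyr)
df1 %>%
  left_join(df2, by = "id", suffix = c("", ".add")) %>%
  # add each value column to the ".add" counterpart brought in from df2
  mutate(across(c(val1, val2), ~ .x + get(paste0(cur_column(), ".add")))) %>%
  select(id, val1, val2)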

Merging multiple connected columns

For each sample I have two different columns, which are connected. I want to merge all columns of the first type into one column and all columns of the second type into another column, but the rows should stay connected.
Example:
a1 <- c(1, 2, 3, 4, 5)
b1 <- c(1, 4, 9, 16, 25)
a2 <- c(2, 4, 6, 8, 10)
b2 <- c(4, 8, 12, 16, 20)
df1 <- data.frame(a1, b1, a2, b2)
a1 b1 a2 b2
1 1 1 2 4
2 2 4 4 8
3 3 9 6 12
4 4 16 8 16
5 5 25 10 20
I want to have it like this:
a b
1 1 1
2 2 4
3 2 4
4 3 9
5 4 8
6 4 16
7 5 25
8 6 12
9 8 16
10 10 20
My case
This is the example in my case. I have a lot of columns with different names and I want to extract abs_dist_1, ..., abs_dist_5 and mean_vel_1, ..., mean_vel_5 into a new data frame, with all abs_dist in one column and all mean_vel in one column, but still connected.
I tried with unlist, but then of course the connection gets lost.
Thanks in advance.
A base R option using reshape
subset(
  reshape(
    setNames(df1, gsub("(\\d)", ".\\1", names(df1))),
    direction = "long",
    varying = 1:ncol(df1)
  ),
  select = -c(time, id)
)
gives
a b
1.1 1 1
2.1 2 4
3.1 3 9
4.1 4 16
5.1 5 25
1.2 2 4
2.2 4 8
3.2 6 12
4.2 8 16
5.2 10 20
An option with pivot_longer from tidyr, specifying names_sep as a regex lookaround that matches the position between a lower-case letter ([a-z]) and a digit in the column names:
library(dplyr)
library(tidyr)
df1 %>%
pivot_longer(cols = everything(), names_to = c( '.value', 'grp'),
names_sep = "(?<=[a-z])(?=[0-9])") %>%
select(-grp)
-output
# A tibble: 10 x 2
# a b
# <dbl> <dbl>
# 1 1 1
# 2 2 4
# 3 2 4
# 4 4 8
# 5 3 9
# 6 6 12
# 7 4 16
# 8 8 16
# 9 5 25
#10 10 20
With the edited post, we need to change the names_sep, i.e. the delimiter is now _ between a lower-case letter and a digit:
df1 %>%
pivot_longer(cols = everything(), names_to = c( '.value', 'grp'),
names_sep = "(?<=[a-z])_(?=[0-9])") %>%
select(-grp)
Or, with base R, use split.default on the substring of the column names to get a list of data.frames, then unlist each list element by looping over the list and convert back to a data.frame:
data.frame(lapply(split.default(df1, sub("\\d+", "", names(df1))),
unlist, use.names = FALSE))
For the sake of completeness, here is a solution which uses data.table::melt() and the patterns() function to specify columns which belong together:
library(data.table)
melt(setDT(df1), measure.vars = patterns(a = "a", b = "b"))[
order(a,b), !"variable"]
a b
1: 1 1
2: 2 4
3: 2 4
4: 3 9
5: 4 8
6: 4 16
7: 5 25
8: 6 12
9: 8 16
10: 10 20
This reproduces the expected result for OP's sample dataset.
A more realistic example: reshape only selected columns
With the edit of the question, the OP has clarified that the production data contains many more columns than those which need to be reshaped:
I have a lot of columns with different names and I want to extract
abs_dist_1, ... abs_dist_5 and mean_vel_1, ... mean_vel_5 in a new
data frame, with all abs_dist in one column and all mean_vel in one
column, but still connected.
So, the OP wants to extract and reshape the columns of interest in one go while ignoring all other data in the dataset.
To simulate this situation, we need a more elaborate dataset which includes other columns as well:
df2 <- cbind(df1, c1 = 11:15, c2 = 21:25)
df2
a1 b1 a2 b2 c1 c2
1 1 1 2 4 11 21
2 2 4 4 8 12 22
3 3 9 6 12 13 23
4 4 16 8 16 14 24
5 5 25 10 20 15 25
With a modified version of the code above
library(data.table)
cols <- c("a", "b")
result <- melt(setDT(df2), measure.vars = patterns(cols), value.name = cols)[, ..cols]
setorderv(result, cols)
result
we get
a b
1: 1 1
2: 2 4
3: 3 9
4: 4 16
5: 5 25
6: 2 4
7: 4 8
8: 6 12
9: 8 16
10: 10 20
For the production dataset as pictured in the edit, the OP needs to set
cols <- c("abs_dist", "mean_vel")
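For comparison, a tidyr sketch of the same selected-columns reshape (df_prod is a hypothetical stand-in for the production data frame containing abs_dist_1, ..., mean_vel_5 among other columns):
library(dplyr)
library(tidyr)
df_prod %>%
  select(starts_with("abs_dist_"), starts_with("mean_vel_")) %>%
  pivot_longer(everything(),
               names_to = c(".value", "set"),
               names_pattern = "(abs_dist|mean_vel)_(\\d+)") %>%
  select(-set)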

Grouped pivot_longer dplyr

This is an example dataframe. My real dataframe is larger. I highly prefer a tidyverse solution.
#my data
age <- c(18,18,19)
A1 <- c(3,5,3)
A2 <- c(4,4,3)
B1 <- c(1,5,2)
B2 <- c(2,2,5)
df <- data.frame(age, A1, A2, B1, B2)
I want my data to look like this:
#what i want
new_age <- c(18,18,18,18,19,19)
A <- c(3,5,4,4,3,3)
B <- c(1,5,2,2,2,5)
new_df <- data.frame(new_age, A, B)
I want to pivot longer and stack columns A1:A2 into column A, and B1:B2 into B. I also want the responses to match the correct age. For example, the 19-year-old person in this example has only responded with 3's in columns A1:A2.
tidyr::pivot_longer(df, cols = -age, names_to = c(".value",'groupid'),
#1+ non digits followed by 1+ digits
names_pattern = "(\\D+)(\\d+)")
# A tibble: 6 x 4
age groupid A B
<dbl> <chr> <dbl> <dbl>
1 18 1 3 1
2 18 2 4 2
3 18 1 5 5
4 18 2 4 2
5 19 1 3 2
6 19 2 3 5
In base R you can use reshape, then select the columns you want. You can also change the row names.
reshape(df, 2:ncol(df), dir = "long", sep = "")[, -c(2, 5)]
age A B
1.1 18 3 1
2.1 18 5 5
3.1 19 3 2
1.2 18 4 2
2.2 18 4 2
3.2 19 3 5
As you have a larger data frame, a solution with data.table may be faster. Here, you can use the melt function from the data.table package as follows:
library(data.table)
colA <- grep("A", colnames(df), value = TRUE)
colB <- grep("B", colnames(df), value = TRUE)
setDT(df)
df <- melt(df, measure = list(colA, colB), value.name = c("A", "B"))
df[, variable := NULL]
df <- df[order(age)]
df
age A B
1: 18 3 1
2: 18 5 5
3: 18 4 2
4: 18 4 2
5: 19 3 2
6: 19 3 5
Does it answer your question?
EDIT: Using patterns - suggestion from #Wimpel
As #Wimpel suggested it in comments, you can get the same result using patterns:
melt( setDT(df), measure.vars = patterns( A="^A[0-9]", B="^B[0-9]") )[, variable:=NULL][]
age A B
1: 18 3 1
2: 18 5 5
3: 19 3 2
4: 18 4 2
5: 18 4 2
6: 19 3 5

Creating a new data frame using existing data

I would like to create a new data frame from my existing data frame "ab". The new data frame should look like "Newdf".
a<- c(1:5)
b<-c(11:15)
ab<-data.frame(C1=a,c2=b)
ab
df<-c(1,11,2,12,3,13,4,14,5,15)
CMT<-c(1:2)
CMT1<-rep.int(CMT,times=5)
Newdf<-data.frame(DV=df,Comp=CMT1)
Newdf
Can we use the dplyr package? If yes, how?
More importantly than dplyr, you'd need tidyr:
library(tidyr)
library(dplyr)
ab %>%
gather(Comp, DV) %>%
mutate(Comp = recode(Comp, "C1" = 1, "c2" = 2))
# Comp DV
# 1 1 1
# 2 1 2
# 3 1 3
# 4 1 4
# 5 1 5
# 6 2 11
# 7 2 12
# 8 2 13
# 9 2 14
# 10 2 15
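Since gather() is superseded, the same result can be sketched with pivot_longer (assuming tidyr >= 1.0):
library(dplyr)
library(tidyr)
ab %>%
  pivot_longer(everything(), names_to = "Comp", values_to = "DV") %>%
  mutate(Comp = recode(Comp, "C1" = 1, "c2" = 2)) %>%
  arrange(Comp)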
Using dplyr and tidyr gives you something close...
library(tidyr)
library(dplyr)
df2 <- ab %>%
mutate(Order=1:n()) %>%
gather(key=Comp,value=DV,C1,c2) %>%
arrange(Order) %>%
mutate(Comp=recode(Comp,"C1"=1,"c2"=2)) %>%
select(DV,Comp)
df2
DV Comp
1 1 1
2 11 2
3 2 1
4 12 2
5 3 1
6 13 2
7 4 1
8 14 2
9 5 1
10 15 2
Although the OP has asked for a dplyr solution, I felt challenged to look for a data.table solution. So, FWIW, here is an alternative approach using melt().
Note that this solution does not depend on specific column names in ab as the two other dplyr solutions do. In addition, it should also work for more than two columns in ab (untested).
library(data.table)
melt(setDT(ab, keep.rownames = TRUE), id.vars = "rn", value.name = "DV"
)[, Comp := rleid(variable)
][order(rn)][, c("rn", "variable") := NULL][]
# DV Comp
# 1: 1 1
# 2: 11 2
# 3: 2 1
# 4: 12 2
# 5: 3 1
# 6: 13 2
# 7: 4 1
# 8: 14 2
# 9: 5 1
#10: 15 2
Data
ab <- structure(list(C1 = 1:5, c2 = 11:15), .Names = c("C1", "c2"),
row.names = c(NA, -5L), class = "data.frame")
ab
# C1 c2
#1 1 11
#2 2 12
#3 3 13
#4 4 14
#5 5 15
