Would anyone mind lending some knowledge? What I am trying to do is make a new data frame based on the values in the data frame below.
id value
ant 10
cat 4
cat 6
dog 5
dog 3
dog 2
fly 9
What I want to do next is build, in sequential order, a data frame that looks like the following.
Every time we see a new id, we create a column. The maximum value is 10, so there should be 10 rows.
Our first id is ant, so every row of the ant column should be 0.
Our next column is cat. We have two values: the first value gives 4 rows of 0, followed by 6 rows of 1.
Same logic for dog: the first five rows are 0, the next three rows are 1, and the last two are 0.
fly covers only 9 rows of 0, so the last row should contain NA.
It should look like this
ant cat dog fly
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
0 1 0 0
0 1 1 0
0 1 1 0
0 1 1 0
0 1 0 0
0 1 0 NA
I know how to do this the long way by
newdf <- data.frame(matrix(2, ncol = length(unique(df[,"id"])) , nrow = 10))
newdf$X1[1:10] <- 0
newdf$X2[1:4] <- 0
newdf$X2[5:10] <- 1
...
However, is there any way to do this more efficiently? Note that my actual data will have roughly 50 rows, which is why I am looking for a more efficient way to do this!
Here's a tidyverse answer -
library(dplyr)
library(tidyr)
df %>%
group_by(id) %>%
mutate(val = rep(c(0, 1), length.out = n())) %>%
uncount(value) %>%
mutate(row = row_number()) %>%
complete(row = 1:10) %>%
pivot_wider(names_from = id, values_from = val) %>%
select(-row)
# ant cat dog fly
# <dbl> <dbl> <dbl> <dbl>
# 1 0 0 0 0
# 2 0 0 0 0
# 3 0 0 0 0
# 4 0 0 0 0
# 5 0 1 0 0
# 6 0 1 1 0
# 7 0 1 1 0
# 8 0 1 1 0
# 9 0 1 0 0
#10 0 1 0 NA
For each id we assign alternating 0/1 values and use uncount to repeat the rows according to value. complete() then pads every id to 10 rows, and pivot_wider reshapes the data to wide format so that each id becomes a separate column.
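If uncount() is unfamiliar, here is a tiny standalone sketch (toy data, not the question's) of what it does - each row is repeated according to the weights column, which is then dropped:
library(tidyr)
uncount(tibble::tibble(val = c(0, 1), n = c(2, 3)), n)
# returns 5 rows with val = 0, 0, 1, 1, 1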
data
df <- structure(list(id = c("ant", "cat", "cat", "dog", "dog", "dog",
"fly"), value = c(10, 4, 6, 5, 3, 2, 9)), row.names = c(NA, -7L
), class = "data.frame")
You can try the following base R code
maxlen <- with(df, max(tapply(value, id, sum)))
list2DF(
lapply(
with(df, split(value, id)),
function(x) {
`length<-`(
rep(rep(c(0, 1), length.out = length(x)), x),
maxlen
)
}
)
)
which gives
ant cat dog fly
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
4 0 0 0 0
5 0 1 0 0
6 0 1 1 0
7 0 1 1 0
8 0 1 1 0
9 0 1 0 0
10 0 1 0 NA
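The padding to 10 rows is done by the `length<-` replacement function, which extends (or truncates) a vector to the requested length, filling with NA - a quick illustration on a toy vector, not the answer's data:
`length<-`(c(0, 0, 0), 5)
# [1]  0  0  0 NA NA
which is why the shorter fly column ends in NA.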
I have a dataframe with multiple columns that code an exposure (as 1/0) over multiple time points and follow a naming pattern, e.g. exposure1_pre2, exposure1_pre1, exposure1_post ... exposuren_pre2, ...
Working example:
library(dplyr)
df <- tibble(exposure1_pre2 = sample(c(0, 1), size = 20, replace = T),
exposure1_pre1 = sample(c(0, 1), size = 20, replace = T),
exposure1_post = sample(c(0, 1), size = 20, replace = T),
exposure2_pre2 = sample(c(0, 1), size = 20, replace = T),
exposure2_pre1 = sample(c(0, 1), size = 20, replace = T),
exposure2_post = sample(c(0, 1), size = 20, replace = T)
)
I would like to code dummy variables that are set to 1/0 if there is a directional change from one time point to another, i.e. when exposure1_pre2 is 0 and exposure1_pre1 is 1, the new column exposure1_pre2_to_pre1 should be 1.
I am trying to do this with dplyr's if_else - or ideally case_when for all possible combinations - and am thinking along the lines of
df %>%
mutate(
across(contains("pre2"),
~if_else(.x == 0 & ??? == 1, 1, 0), .names = "{???}_pre2_to_pre1")
)
As is probably obvious, I am lost as to how to structure the condition so that it looks up the similarly named *_pre1 variable to assess the difference, and I would also need to take only the exposure part of the input column to name the new column - I suppose a grep could do here?
Thank you very much and have a good day!
Loop across the 'pre2' columns, get the current column name with cur_column(), replace the 'pre2' substring with 'pre1', get() the value of that column, evaluate the compound logical expression, and coerce the logical output to binary with + (or as.integer).
library(dplyr)
library(stringr)
df %>%
mutate(
across(contains("pre2"),
~ +(. == 0 & get(str_replace(cur_column(), 'pre2', 'pre1')) == 1),
.names = '{.col}_to_pre1'))
Output:
# A tibble: 20 x 8
exposure1_pre2 exposure1_pre1 exposure1_post exposure2_pre2 exposure2_pre1 exposure2_post exposure1_pre2_to_pre1 exposure2_pre2_to_pre1
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <int>
1 1 0 1 0 1 1 0 1
2 1 0 1 1 0 0 0 0
3 1 0 1 1 0 0 0 0
4 1 1 1 0 1 0 0 1
5 0 1 1 0 0 0 1 0
6 1 1 1 0 1 1 0 1
7 0 1 0 1 0 0 1 0
8 0 1 0 1 1 1 1 0
9 0 1 0 0 1 0 1 1
10 1 0 1 1 1 0 0 0
11 0 0 1 0 0 1 0 0
12 0 1 1 1 1 1 1 0
13 0 1 1 1 1 0 1 0
14 1 1 1 0 0 1 0 0
15 1 1 1 1 0 0 0 0
16 1 1 1 0 0 1 0 0
17 0 0 0 1 0 0 0 0
18 1 0 0 0 0 0 0 0
19 1 0 0 0 0 1 0 0
20 1 0 1 1 0 1 0 0
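For comparison, a base R sketch of the same idea (assuming the same *_pre2/*_pre1 naming; not part of the answer above):
pre2_cols <- grep("pre2$", names(df), value = TRUE)
pre1_cols <- sub("pre2$", "pre1", pre2_cols)
df[paste0(pre2_cols, "_to_pre1")] <- Map(
  function(p2, p1) +(df[[p2]] == 0 & df[[p1]] == 1),
  pre2_cols, pre1_cols
)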
I have data that look like this
df <- data.frame(ID = c(1,2,3,4,5,6),
var1_unmod = c (1,0,0,1,0,1),
var1_me1 = c(0,1,0,0,0,0),
var1_me2 = c(1,1,1,0,1,0),
var1_me3 = c(0,0,1,0,0,0),
var1_ac1 = c(1,0,1,1,0,1),
var2_unmod = c(1,0,1,1,0,0),
var2_me1 = c(0,0,0,0,1,0),
var2_me2 = c(1,1,0,1,1,1),
var2_ac1 = c(1,1,0,1,0,0),
var2_me1ac1 = c(1,0,0,0,0,0),
var2_me2ac1 = c(1,0,0,1,1,1))
ID var1_unmod var1_me1 var1_me2 var1_me3 var1_ac1 var2_unmod var2_me1 var2_me2 var2_ac1 var2_me1ac1 var2_me2ac1
1 1 1 0 1 0 1 1 0 1 1 1 1
2 2 0 1 1 0 0 0 0 1 1 0 0
3 3 0 0 1 1 1 1 0 0 0 0 0
4 4 1 0 0 0 1 1 0 1 1 0 1
5 5 0 0 1 0 0 0 1 1 0 0 1
6 6 1 0 0 0 1 0 0 1 0 0 1
except that in the actual dataset, the prefixes aren't sequential like var1 and var2, they are basically random combinations of letters and numbers, and there are about 30 different ones.
For each of these prefixes (var1, var2, ...), I need to create a single variable that indicates whether any of the columns with that prefix that also contain me1, me2, or me3 (so for var2 this would be var2_me1, var2_me2, var2_me1ac1, var2_me2ac1) are nonzero. The output dataset would have additional columns like this:
ID var1_unmod var1_me1 var1_me2 var1_me3 var1_ac1 var1_meX var2_unmod var2_me1 var2_me2 var2_ac1 var2_me1ac1 var2_me2ac1 var2_meX
1 1 1 0 1 0 1 1 1 0 1 1 1 1 1
2 2 0 1 1 0 0 1 0 0 1 1 0 0 1
3 3 0 0 1 1 1 1 1 0 0 0 0 0 0
4 4 1 0 0 0 1 0 1 0 1 1 0 1 1
5 5 0 0 1 0 0 1 0 1 1 0 0 1 1
6 6 1 0 0 0 1 0 0 0 1 0 0 1 1
First I need to identify the applicable columns for each prefix (because there is no pattern to the prefixes, I'm thinking I will have to hard code at least this part), and then maybe somehow write a loop that iterates through the columns (stored in a vector?) for each prefix. I tend to have trouble referencing varying column names within loops. Any help is appreciated!
Here is a basic approach:
cols <- colnames(df)
varnames <- c("var1", "var2")
df2 <- df
for (i in varnames) {
newname <- paste(i, "meX", sep="_")
df2[, newname] <- apply(df2[, grepl(i, cols) & grepl("me", cols)], 1, sum)
df2[, newname] <- ifelse(df2[, newname] >= 1, 1, 0)
}
This will probably need to be modified based on the specific details of your data.
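One such modification (a sketch, not part of the answer above): anchoring the pattern avoids accidentally matching a prefix that itself contains "me", and the two steps inside the loop can be collapsed into one:
df2 <- df
cols <- colnames(df2)
for (i in c("var1", "var2")) {
  newname <- paste(i, "meX", sep = "_")
  me_cols <- grep(paste0("^", i, "_me"), cols, value = TRUE)
  df2[, newname] <- +(rowSums(df2[, me_cols, drop = FALSE]) >= 1)
}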
Define the unique group prefixes in cols, use lapply to iterate over each prefix, and return 1 if there is at least one 1 in the row across that prefix's '_me' columns.
all_cols <- names(df)
cols <- c('var1', 'var2')
df[paste0(cols, '_meX')] <- lapply(cols, function(x)
as.integer(rowSums(df[grep(paste0(x, '_me'), all_cols, value = TRUE)]) > 0))
The new columns look like:
df[13:14]
# var1_meX var2_meX
#1 1 1
#2 1 1
#3 1 0
#4 0 1
#5 1 1
#6 0 1
I have data (a column in a dataframe) of type character. I want to separate these characters and, depending on the content, fill separate variables with 0s and 1s.
The column can be recreated with:
df <- data.frame(var = c("1;2", NA, "1;2;3;4;5", "3;5", "1", "1;4", "3", NA, "4", "1;5"))
The characters can range from 1 to 5. I want to create six variables:
var_1, var_2, var_3, var_4, var_5, and var_NA. I want var_1 to contain a 1 if that row's string contains a 1, and 0 if it does not; var_NA should flag the rows where the string is NA.
Thank you!
Perhaps using cSplit_e would be an option:
library(splitstackshape)
library(dplyr)
cSplit_e(df, 'var', sep = ";", type = 'character', fill = 0, drop = TRUE) %>%
mutate(var_NA = +(is.na(df$var)))
# var_1 var_2 var_3 var_4 var_5 var_NA
#1 1 1 0 0 0 0
#2 0 0 0 0 0 1
#3 1 1 1 1 1 0
#4 0 0 1 0 1 0
#5 1 0 0 0 0 0
#6 1 0 0 1 0 0
#7 0 0 1 0 0 0
#8 0 0 0 0 0 1
#9 0 0 0 1 0 0
#10 1 0 0 0 1 0
Or using base R
t(sapply(strsplit(df$var, "[:;]"), function(x) +(1:5 %in% x)))
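That returns an unnamed 0/1 matrix; if the named columns and the var_NA indicator from the question are wanted as well, one way to dress it up (the column names here are an assumption) is:
m <- t(sapply(strsplit(df$var, "[:;]"), function(x) +(1:5 %in% x)))
out <- setNames(as.data.frame(m), paste0("var_", 1:5))
out$var_NA <- +(is.na(df$var))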
In the tidyverse, we can get the data into long format by splitting on ";", build the new column names by prefixing with "var_", set all values to 1, and reshape back to wide format.
library(dplyr)
library(tidyr)
df %>%
mutate(row = row_number()) %>%
separate_rows(var, sep = ";") %>%
mutate(col = paste0('var_', var),
var = 1) %>%
pivot_wider(names_from = col, values_from = var, values_fill = 0) %>%
ungroup %>%
select(-row)
# A tibble: 10 x 6
# var_1 var_2 var_NA var_3 var_4 var_5
# <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 1 1 0 0 0 0
# 2 0 0 1 0 0 0
# 3 1 1 0 1 1 1
# 4 0 0 0 1 0 1
# 5 1 0 0 0 0 0
# 6 1 0 0 0 1 0
# 7 0 0 0 1 0 0
# 8 0 0 1 0 0 0
# 9 0 0 0 0 1 0
#10 1 0 0 0 0 1
I'm looking for a better way to achieve what the code below does with a for loop. The goal is to create a dataframe (or matrix) where each row is a possible n-length sequence of 1s and 0s, followed by an n+1th column which contains a number corresponding to one of the previous columns that contains a 0.
So in the n == 3 case for example, we want to include a row like this:
1 0 0 2
but not this:
1 0 0 1
Here's the code I have now (assuming n == 3 for simplicity):
library(tidyverse)
df <- expand.grid(x = 0:1, y = 0:1, z = 0:1, target = 1:3, keep = FALSE)
for (row in 1:nrow(df)) {
df$keep[row] <- df[row, df$target[row]] == 0
}
df <- df %>%
filter(keep == TRUE) %>%
select(-keep)
df
# x y z target
# 1 0 0 0 1
# 2 0 1 0 1
# 3 0 0 1 1
# 4 0 1 1 1
# 5 0 0 0 2
# 6 1 0 0 2
# 7 0 0 1 2
# 8 1 0 1 2
# 9 0 0 0 3
# 10 1 0 0 3
# 11 0 1 0 3
# 12 1 1 0 3
Seems like there has to be a better way to do this, especially with dplyr. But I can't figure out how to use the value of target to specify the column to filter on.
Using base R, we can create a row/column index to filter values from the dataframe and keep rows where the extracted value is 0.
df[df[cbind(seq_len(nrow(df)), df$target)] == 0, ]
# x y z target
#1 0 0 0 1
#3 0 1 0 1
#5 0 0 1 1
#7 0 1 1 1
#9 0 0 0 2
#10 1 0 0 2
#13 0 0 1 2
#14 1 0 1 2
#17 0 0 0 3
#18 1 0 0 3
#19 0 1 0 3
#20 1 1 0 3
data
df <- expand.grid(x = 0:1, y = 0:1, z = 0:1, target = 1:3)
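Since the question asks about dplyr specifically, a rowwise sketch is also possible (slower than the matrix indexing above, shown only as an alternative): c_across() collects the row's x, y, z values and target picks the one to test.
library(dplyr)
df %>%
  rowwise() %>%
  filter(c_across(x:z)[target] == 0) %>%
  ungroup()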
I'm trying to add several sets of columns together.
Example df:
df <- data.frame(
key = 1:5,
ab0 = c(1,0,0,0,1),
ab1 = c(0,2,1,0,0),
ab5 = c(1,0,0,0,1),
bc0 = c(0,1,0,2,0),
bc1 = c(2,0,0,0,0),
bc5 = c(0,2,1,0,1),
df0 = c(0,0,0,1,0),
df1 = c(1,0,3,0,0),
df5 = c(1,0,0,0,6)
)
Giving me:
key ab0 ab1 ab5 bc0 bc1 bc5 df0 df1 df5
1 1 1 0 1 0 2 0 0 1 1
2 2 0 2 0 1 0 2 0 0 0
3 3 0 1 0 0 0 1 0 3 0
4 4 0 0 0 2 0 0 1 0 0
5 5 1 0 1 0 0 1 0 0 6
I want to add each matching pair of 0 and 5 columns together (ab0 + ab5, bc0 + bc5, df0 + df5) and place the result in the 0 column.
So the end result would be:
key ab0 ab1 ab5 bc0 bc1 bc5 df0 df1 df5
1 1 2 0 1 0 2 0 1 1 1
2 2 0 2 0 3 0 2 0 0 0
3 3 0 1 0 1 0 1 0 3 0
4 4 0 0 0 2 0 0 1 0 0
5 5 2 0 1 1 0 1 6 0 6
I could add the columns together using 3 lines:
df$ab0 <- df$ab0 + df$ab5
df$bc0 <- df$bc0 + df$bc5
df$df0 <- df$df0 + df$df5
But my real example has over a hundred columns so I'd like to iterate over them and use apply.
The column names of the first set are contained in col0 and the names of the second set are in col5.
col0 <- c("ab0","bc0","df0")
col5 <- c("ab5","bc5","df5")
I created a function to add the columns together using mapply:
fun1 <- function(df,x,y) {
df[,x] <- df[,x] + df[,y]
}
mapply(fun1,df,col0,col5)
But I get an error: Error in df[, x] : incorrect number of dimensions
Thoughts?
Simply add the two subsets of columns together; assuming they line up, no loops are needed - it is all a vectorized operation.
final_df <- df[grep("0", names(df))] + df[grep("5", names(df))]
final_df <- cbind(final_df, df[grep("0", names(df), invert=TRUE)])
final_df <- final_df[order(names(final_df))]
final_df
# ab0 ab1 ab5 bc0 bc1 bc5 df0 df1 df5 key
# 1 2 0 1 0 2 0 1 1 1 1
# 2 0 2 0 3 0 2 0 0 0 2
# 3 0 1 0 1 0 1 0 3 0 3
# 4 0 0 0 2 0 0 1 0 0 4
# 5 2 0 1 1 0 1 6 0 6 5
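As an aside, the mapply attempt in the question errors because mapply(fun1, df, col0, col5) iterates over the columns of df as its first vectorized argument, so fun1 receives a plain vector and df[, x] fails. Iterating over the two name vectors instead also works, e.g. this sketch:
df[col0] <- Map(function(x, y) df[[x]] + df[[y]], col0, col5)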
You could use map2 from the purrr package to iterate over the two vectors at once:
df <- data.frame(
key = 1:5,
ab0 = c(1,0,0,0,1),
ab1 = c(0,2,1,0,0),
ab5 = c(1,0,0,0,1),
bc0 = c(0,1,0,2,0),
bc1 = c(2,0,0,0,0),
bc5 = c(0,2,1,0,1),
df0 = c(0,0,0,1,0),
df1 = c(1,0,3,0,0),
df5 = c(1,0,0,0,6)
)
col0 <- c("ab0","bc0","df0")
col5 <- c("ab5","bc5","df5")
purrr::map2(col0, col5, function(x, y) {
df[[x]] <<- df[[x]] + df[[y]]
})
> df
key ab0 ab1 ab5 bc0 bc1 bc5 df0 df1 df5
1 1 2 0 1 0 2 0 1 1 1
2 2 0 2 0 3 0 2 0 0 0
3 3 0 1 0 1 0 1 0 3 0
4 4 0 0 0 2 0 0 1 0 0
5 5 2 0 1 1 0 1 6 0 6
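A small variant that avoids the <<- side effect lets map2 return the new columns, which are then assigned back in one step (a sketch):
df[col0] <- purrr::map2(col0, col5, ~ df[[.x]] + df[[.y]])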
Here's an approach using tidyr and dplyr from the tidyverse meta-package.
First, I bring the table into long ("tidy") format, split the column name into two components, and spread by the number part of those components.
Then I do the calculation you describe.
Finally, I bring it back into the original format using the inverse of step 1.
library(tidyverse)
df_tidy <- df %>%
# Step 1
gather(col, value, -key) %>%
separate(col, into = c("grp", "num"), 2) %>%
spread(num, value) %>%
# Step 2
mutate(`0` = `0` + `5`) %>%
# Step 3, which is just the inverse of Step 1.
gather(num, value, -key, - grp) %>%
unite(col, c("grp", "num")) %>%
spread(col, value)
df_tidy
key ab_0 ab_1 ab_5 bc_0 bc_1 bc_5 df_0 df_1 df_5
1 1 2 0 1 0 2 0 1 1 1
2 2 0 2 0 3 0 2 0 0 0
3 3 0 1 0 1 0 1 0 3 0
4 4 0 0 0 2 0 0 1 0 0
5 5 2 0 1 1 0 1 6 0 6
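gather/spread have since been superseded by pivot_longer/pivot_wider; a rough sketch of the same three steps with the newer verbs (not the original answer's code) would be:
library(dplyr)
library(tidyr)
df %>%
  pivot_longer(-key, names_to = c("grp", "num"), names_sep = 2) %>%
  pivot_wider(names_from = num, values_from = value) %>%
  mutate(`0` = `0` + `5`) %>%
  pivot_longer(-c(key, grp), names_to = "num") %>%
  pivot_wider(names_from = c(grp, num), values_from = value, names_sep = "")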