I have an R data frame with 3 columns containing values 0 or 1. I need to create a new column that concatenates the names of the columns whose value is 1, separated by '&'. The following code works with the empty string '' as the separator but fails when I change it to '&'.
Code:
A = c(1,0,1,0,0,1)
B = c(1,1,1,0,1,0)
C = c(0,0,0,1,1,1)
data = data.frame(A, B, C)
data$New = paste(ifelse(data$A == 1, "A", ""),
ifelse(data$B == 1, "B", ""),
ifelse(data$C == 1, "C", ""), sep = '')
data
Output:
A B C New
1 1 1 0 AB
2 0 1 0 B
3 1 1 0 AB
4 0 0 1 C
5 0 1 1 BC
6 1 0 1 AC
Code & Output with '&' Separator:
A = c(1,0,1,0,0,1)
B = c(1,1,1,0,1,0)
C = c(0,0,0,1,1,1)
data = data.frame(A, B, C)
data$New = paste(ifelse(data$A == 1, "A", ""),
ifelse(data$B == 1, "B", ""),
ifelse(data$C == 1, "C", ""), sep = '&')
data
A B C New
1 1 1 0 A&B&
2 0 1 0 &B&
3 1 1 0 A&B&
4 0 0 1 &&C
5 0 1 1 &B&C
6 1 0 1 A&&C
Expected Output:
A B C New
1 1 1 0 A&B
2 0 1 0 B
3 1 1 0 A&B
4 0 0 1 C
5 0 1 1 B&C
6 1 0 1 A&C
Is there a way to do this in R?
In the case of a large number of columns, is there a way to do the same without writing an explicit ifelse condition for each column?
We can subset the names by looping through the rows
data$New <- apply(data[1:3], 1, function(x) paste(names(x[x!=0]), collapse="&"))
data$New
#[1] "A&B" "B" "A&B" "C" "B&C" "A&C"
It can also be done column-wise:
library(tidyverse)
data[1:3] %>%
  na_if(0) %>%
  `*`(col(.)) %>%
  imap(~ rep(.y, length(.x))[.x]) %>%
  reduce(paste, sep = "&") %>%
  str_remove("(NA&)+|(&NA)+") %>%
  str_remove("&NA")
#[1] "A&B" "B" "A&B" "C" "B&C" "A&C"
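A more direct row-wise alternative (a sketch, assuming the indicator columns are the first three columns of data) should give the same result with purrr::pmap_chr():
data$New <- purrr::pmap_chr(data[1:3],
                            ~ paste(names(data)[1:3][c(...) == 1], collapse = "&"))
data$New
#[1] "A&B" "B" "A&B" "C" "B&C" "A&C"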
You can use apply with paste to do it.
nms <- names(data)
data$New <- apply(data, 1, function(x) {
  paste(nms[as.logical(x)], collapse = "&")
})
data
# A B C New
#1 1 1 0 A&B
#2 0 1 0 B
#3 1 1 0 A&B
#4 0 0 1 C
#5 0 1 1 B&C
#6 1 0 1 A&C
Using which with arr.ind = TRUE, and then aggregate:
cbind(data,
new = aggregate(col ~ row, data = which(data == 1, arr.ind = TRUE),
function(x) paste(names(data)[x], collapse = "&"))[ , "col"])
# A B C new
# 1 1 1 0 A&B
# 2 0 1 0 B
# 3 1 1 0 A&B
# 4 0 0 1 C
# 5 0 1 1 B&C
# 6 1 0 1 A&C
Similar, using tapply:
ix <- which(data == 1, arr.ind = TRUE)
cbind(data,
new = tapply(ix[ , "col"], ix[ , "row"],
function(x) paste(names(data)[x], collapse = "&")))
Related
I have a dataframe containing a long list of binary variables. Each row represents a participant, and columns represent whether a participant made a certain choice (1) or not (0). For the sakes of simplicity, let's say there's only four binary variables and 6 participants.
df <- data.frame(a = c(0,1,0,1,0,1),
b = c(1,1,1,1,0,1),
c = c(0,0,0,1,1,1),
d = c(1,1,0,0,0,0))
>df
# a b c d
# 1 0 1 0 1
# 2 1 1 0 1
# 3 0 1 0 0
# 4 1 1 1 0
# 5 0 0 1 0
# 6 1 1 1 0
In the dataframe, I want to create a list of columns that reflect each unique combination of variables in df (i.e., abc, abd, bcd, cda). Then, for each row, I want to add value "1" if the row contains the particular combination corresponding to the column. So, if the participant scored 1 on "a", "b", and "c", and 0 on "d" he would have a score 1 in the newly created column "abc", but 0 in the other columns. Ideally, it would look something like this.
>df_updated
# a b c d abc abd bcd cda
# 1 0 1 0 1 0 0 0 0
# 2 1 1 0 1 0 1 0 0
# 3 0 1 0 0 0 0 0 0
# 4 1 1 1 0 1 0 0 0
# 5 0 0 1 0 0 0 0 0
# 6 1 1 1 0 1 0 0 0
The ultimate goal is to have an idea of the frequency of each of the combinations, so I can order them from the most frequently chosen to the least frequently chosen. I've been thinking about this issue for days now, but couldn't find an appropriate answer. I would very much appreciate the help.
Something like this?
funCombn <- function(data){
  f <- function(x, data){
    data <- data[x]
    list(
      name = paste(x, collapse = ""),
      vec = apply(data, 1, function(x) +all(as.logical(x)))
    )
  }
  res <- combn(names(data), 3, f, simplify = FALSE, data = data)
  out <- do.call(cbind.data.frame, lapply(res, '[[', 'vec'))
  names(out) <- sapply(res, '[[', 'name')
  cbind(data, out)
}
funCombn(df)
# a b c d abc abd acd bcd
#1 0 1 0 1 0 0 0 0
#2 1 1 0 1 0 1 0 0
#3 0 1 0 0 0 0 0 0
#4 1 1 1 0 1 0 0 0
#5 0 0 1 0 0 0 0 0
#6 1 1 1 0 1 0 0 0
Base R option using combn:
n <- 3
cbind(df, do.call(cbind, combn(names(df), n, function(x) {
setNames(data.frame(as.integer(rowSums(df[x] == 1) == n)),
paste0(x, collapse = ''))
}, simplify = FALSE))) -> result
result
# a b c d abc abd acd bcd
#1 0 1 0 1 0 0 0 0
#2 1 1 0 1 0 1 0 0
#3 0 1 0 0 0 0 0 0
#4 1 1 1 0 1 0 0 0
#5 0 0 1 0 0 0 0 0
#6 1 1 1 0 1 0 0 0
Using combn, create all combinations of column names, taking n columns at a time. For each of those combinations, assign 1 to the rows where all n columns are 1, and 0 otherwise.
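For illustration, this is what combn() generates here; each column of the matrix becomes one new indicator column:
combn(names(df), n)
#     [,1] [,2] [,3] [,4]
#[1,] "a"  "a"  "a"  "b"
#[2,] "b"  "b"  "c"  "c"
#[3,] "c"  "d"  "d"  "d"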
If you are just looking for the frequency of the combinations (and they don't need to be added back to the original data), then you could use something like this:
df <- data.frame(a = c(0,1,0,1,0,1),
b = c(1,1,1,1,0,1),
c = c(0,0,0,1,1,1),
d = c(1,1,0,0,0,0))
n <- names(df)
out <- sapply(n, function(x)ifelse(df[[x]] == 1, x, ""))
combs <- apply(out, 1, paste, collapse="")
sort(table(combs))
# combs
# abd b bd c abc
# 1 1 1 1 2
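Since the stated goal is to order the combinations from the most frequently chosen to the least, the same table can simply be sorted the other way around:
sort(table(combs), decreasing = TRUE)         # most frequent first
names(sort(table(combs), decreasing = TRUE))  # just the combinations, in order of popularity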
Ok, so let's use your data, including one row without any 1's:
df <- data.frame(
a = c(0,1,0,1,0,1,0),
b = c(1,1,1,1,0,1,0),
c = c(0,0,0,1,1,1,0),
d = c(1,1,0,0,0,0,0)
)
Now I want to paste all column names together if they have a 1, and then make that a wide table (so that each combination gets its own column). Of course, I fill all resulting NAs with 0's.
df2 <- df %>%
dplyr::mutate(
combination = paste0(
ifelse(a == 1, "a", ""), # There is possibly a way to automate this as well using across() (see the sketch after this block)
ifelse(b == 1, "b", ""),
ifelse(c == 1, "c", ""),
ifelse(d == 1, "d", "")
),
combination = ifelse(
combination == "",
"nothing",
paste0("comb_", combination)
),
value = ifelse(
is.na(combination),
0,
1
),
i = dplyr::row_number()
) %>%
tidyr::pivot_wider(
names_from = combination,
values_from = value,
names_repair = "unique"
) %>%
replace(., is.na(.), 0) %>%
dplyr::select(-i)
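As hinted in the comment inside the mutate() call above, the manual ifelse() chain could probably be automated with across(); here is a minimal sketch of that idea (assuming the binary columns are a through d):
df %>%
  dplyr::mutate(
    combination = apply(
      dplyr::across(a:d), 1,
      function(x) paste0(names(x)[x == 1], collapse = "")
    )
  )
# Rows with no 1's come back as "", so the "nothing" relabelling step above is still needed.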
Since you want to order the original df by frequency, you can create a summary of all combinations (excluding those without anything filled in). Then you make it a long table and pull the combination names from it, arranged by frequency.
comb_in_order <- df2 %>%
dplyr::select(
-tidyselect::any_of(
c(
names(df),
"nothing" # I think you want these last.
)
)
) %>%
dplyr::summarise(
dplyr::across(
.cols = tidyselect::everything(),
.fns = sum
)
) %>%
tidyr::pivot_longer(
cols = tidyselect::everything(),
names_to = "combination",
values_to = "frequency"
) %>%
dplyr::arrange(
dplyr::desc(frequency)
) %>%
dplyr::pull(combination)
The only thing left to do is to arrange by those combination columns and then reconstruct the original df by selecting its columns.
df2 %>%
dplyr::arrange(
across(
tidyselect::any_of(comb_in_order),
desc
)
) %>%
dplyr::select(
tidyselect::any_of(names(df))
)
This should work for all possible combinations.
My variable is as follows
variable
D
D
B
C
B
D
C
C
D
I want to turn the column above into the one-hot encoded format below:
variable B C D
D        0 0 1
D        0 0 1
B        1 0 0
C        0 1 0
B        1 0 0
D        0 0 1
C        0 1 0
C        0 1 0
D        0 0 1
But I don't want code like the one below, because the variable column has too many factor levels:
data = data %>% mutate(B=ifelse(variable=="B", 1,0),
C=ifelse(variable=="C", 1,0),
D=ifelse(variable=="D", 1,0))
Here is a base R approach. We can first find all unique variable values from the data frame. Then, sapply over that vector and generate a new 0/1 column for each value. Finally, we can cbind this new data frame of 0/1 valued columns to the original data frame.
cols <- sort(unique(df$variable))
df2 <- sapply(cols, function(x) ifelse(df$variable == x, 1, 0))
df <- cbind(df, df2)
df
variable B C D
1 D 0 0 1
2 D 0 0 1
3 B 1 0 0
4 C 0 1 0
5 B 1 0 0
6 D 0 0 1
7 C 0 1 0
8 C 0 1 0
9 D 0 0 1
Data:
df <- data.frame(variable=c("D", "D", "B", "C", "B",
"D", "C", "C", "D"),
stringsAsFactors=FALSE)
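As an aside (not part of the answer above), the same 0/1 indicator matrix can also be built in a single call with table(), using the row index as one dimension:
cbind(df, as.data.frame.matrix(table(seq_len(nrow(df)), df$variable)))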
Try reshaping: duplicate the original variable so there is a reference column for the values, then pivot wider to obtain the expected output:
library(dplyr)
library(tidyr)
#Code
new <- df %>% mutate(Var=variable,Val=1,id=row_number()) %>%
pivot_wider(names_from = Var,values_from=Val,values_fill = 0) %>%
select(-id)
Output:
# A tibble: 9 x 4
variable D B C
<chr> <dbl> <dbl> <dbl>
1 D 1 0 0
2 D 1 0 0
3 B 0 1 0
4 C 0 0 1
5 B 0 1 0
6 D 1 0 0
7 C 0 0 1
8 C 0 0 1
9 D 1 0 0
Some data used:
#Data
df <- structure(list(variable = c("D", "D", "B", "C", "B", "D", "C",
"C", "D")), class = "data.frame", row.names = c(NA, -9L))
1) model.matrix
model.matrix will generate column names like variableB so the last line removes the variable part to ensure that the column names are exactly the same as in the question. Omit the last line if it is not important that the column names be exactly as shown there.
dat2 <- cbind(dat, model.matrix(~ variable - 1, dat))
names(dat2) <- sub("variable(.)", "\\1", names(dat2))
giving:
> dat2
variable B C D
1 D 0 0 1
2 D 0 0 1
3 B 1 0 0
4 C 0 1 0
5 B 1 0 0
6 D 0 0 1
7 C 0 1 0
8 C 0 1 0
9 D 0 0 1
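For reference, these are the raw column names model.matrix() produces before the sub() cleanup:
colnames(model.matrix(~ variable - 1, dat))
#[1] "variableB" "variableC" "variableD"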
2) outer
This can also be done using outer as shown. Each component of variable is compared to each level. We name the levels so that outer uses them as column names. The output is the same.
levs <- sort(unique(dat$variable))
names(levs) <- levs
cbind(dat, +outer(dat$variable, levs, `==`))
Note
The input in reproducible form:
Lines <- "
variable
D
D
B
C
B
D
C
C
D"
dat <- read.table(text = Lines, header = TRUE)
I have the following data frame in R:
Row number  A  B C D E F G H I J
         1 NA  1 1 0 0 1 0 0 1 1
         2 NA NA 1 0 0 0 1 0 0 1
         3 NA  1 0 0 0 1 0 0 1 1
I am trying to calculate the number of times the value changes between 1 and 0, excluding the nulls.
The result I am expecting is this
Row Number No of changes
---------- --------------
1 4
2 4
3 4
An explanation for row 1
In row 1, A has a null so we exclude that.
B and C have 1 which is our first set of values.
D and E have 0 which is our second set of values. Now Change = 1
F has our third set of values which is 1. Now Change = 1+1
G and H have 0 which is our fourth set of values. Now Change = 1+1+1
I and J have 1 which is our fifth set of values. Now Change = 1+1+1+1 = 4
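This count can be double-checked with rle(): after dropping the NA in column A, the number of runs minus one gives the number of changes, e.g. for row 1:
x <- c(1, 1, 0, 0, 1, 0, 0, 1, 1)  # row 1 with the NA removed
length(rle(x)$lengths) - 1
#[1] 4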
Here's a tidyverse approach.
I reshape into longer format (with tidyr::pivot_longer), then add a helper column noting when we have a change from 0 to 1 or from 1 to 0, and then sum those by row.
library(tidyverse)
df %>%
# before tidyr 1.0, this would be gather(col, value, -1)
pivot_longer(-1, "col") %>%
group_by(Row.number) %>%
mutate(chg = value == 1 & lag(value) == 0 |
value == 0 & lag(value) == 1) %>%
summarize(no_chgs = sum(chg, na.rm = T))
# A tibble: 3 x 2
Row.number no_chgs
<int> <int>
1 1 4
2 2 4
3 3 4
Sample data:
df <- read.table(
header = T,
stringsAsFactors = F,
text = "'Row number' A B C D E F G H I J
1 NA 1 1 0 0 1 0 0 1 1
2 NA NA 1 0 0 0 1 0 0 1
3 NA 1 0 0 0 1 0 0 1 1")
Here's a data.table solution:
library(data.table)
dt <- as.data.table(df)
dt[,
   no_change := max(rleid(na.omit(t(.SD)))) - 1,
   by = Row.number
]
dt
Alternatively, here's a base version:
apply(df[, -1],
1,
function(x) {
complete_case = complete.cases(x)
if (sum(complete_case) > 0) {
return(length(rle(x[complete_case])$lengths) - 1)
} else {
return (0)
}
}
)
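Applied to the sample data above, both versions should return 4 changes for every row; a small usage sketch that keeps the row ids next to the counts:
data.frame(Row.number = df$Row.number,
           no_of_changes = apply(df[, -1], 1, function(x) {
             x <- x[!is.na(x)]
             if (length(x)) length(rle(x)$lengths) - 1 else 0
           }))
#  Row.number no_of_changes
#1          1             4
#2          2             4
#3          3             4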
I want to one-hot encode my variables, but only for the top categories plus NA and 'others'.
So in this simplified example, one-hot encoding b where its frequency > 1, plus NA:
id <- c(1, 2, 3, 4, 5, 6)
b <- c(NA, "A", "C", "A", "B", "C")
c <- c(2, 3, 6, NA, 4, 7)
df <- data.frame(id, b, c)
id b c
1 1 <NA> 2
2 2 A 3
3 3 C 6
4 4 A NA
5 5 B 4
6 6 C 7
table <- as.data.frame(table(df$b))
Var1 Freq
1 A 2
2 B 1
3 C 2
table_top <- table[table$Freq > 1,]
Var1 Freq
1 A 2
3 C 2
Now, I would like to have something like this
id b_NA c b_A b_C b_Others
1 1 2 0 0 0
2 0 3 1 0 0
3 0 6 0 1 0
4 0 NA 1 0 0
5 0 4 0 0 1
6 0 7 0 1 0
I have tried subsetting df:
table_top <- as.vector(table_top$Var1)
table_only_top <- subset(df, b %in% table_top)
table_only_top
  id b  c
2  2 A  3
3  3 C  6
4  4 A NA
6  6 C  7
However, now I am stuck on how to get to the output. In my real data I have many more categories than here, so hard-coding the names from the output is not an option. Also, the 'others' category in my real output consists of many categories.
Any hint is highly appreciated :)
Fast and sexy with data.table and mltools:
> one_hot(dt, naCols = TRUE, sparsifyNAs = TRUE)
id cat_NA cat_A cat_C cat_Others freq
1: 1 1 0 0 0 2
2: 2 0 1 0 0 3
3: 3 0 0 1 0 6
4: 4 0 1 0 0 NA
5: 5 0 0 0 1 4
6: 6 0 0 1 0 7
Code
Load libraries
library(dplyr)
library(data.table)
library(mltools)
Transform data
# Kick out all with freq == 1 and below
df <- df %>%
# Group by variables that will be onehotted
group_by(cat) %>%
# Add a count per group item column
mutate(count = n()) %>%
# Ungroup for next steps
ungroup() %>%
# Change all that have a count of 1 or below to "Others".
# If cat was a factor, we would get numeric results at this step.
mutate(cat = ifelse(!is.na(cat) & count <= 1, "Others", cat),
# Only now we turn it into a factor for the one_hot function
cat = as.factor(cat)) %>%
# Drop the count column
select(id, cat, freq)
# Turn into data.table
dt <- as.data.table(df)
Check intermediate result
> dt
id cat freq
1: 1 <NA> 2
2: 2 A 3
3: 3 C 6
4: 4 A NA
5: 5 Others 4
6: 6 C 7
Data
id <- c(1, 2, 3, 4, 5, 6)
cat <- c(NA, "A", "C", "A", "B", "C")
freq <- c(2, 3, 6, NA, 4, 7)
# It is important to have no other factor variables other
# than the variable(s) you one want to one hot. For that reason
# the automatic factoring is turned off.
df <- data.frame(id, cat, freq,
stringsAsFactors = FALSE)
> df
id cat freq
1 1 <NA> 2
2 2 A 3
3 3 C 6
4 4 A NA
5 5 B 4
6 6 C 7
Definitely not an elegant solution but it should work:
library(tidyverse)
library(reshape2)
df %>%
gather(var, val, -id) %>%
add_count(var, val) %>%
mutate(res = ifelse(var == "b" & n > 1, 1, 0),
val = paste("b_", val, sep = "")) %>%
filter(var == "b" & n != 1) %>%
dcast(id ~ val, value.var = "res") %>%
full_join(df, by = c("id" = "id")) %>%
mutate(b_NA = ifelse(is.na(b), 1, 0)) %>%
mutate_at(vars(contains("b_")), funs(replace(., is.na(.), 0))) %>%
mutate(b_OTHERS = ifelse(rowSums(.[grep("b_", names(.))]) != 0, 0, 1))
id b_A b_C b c b_NA b_OTHERS
1 2 1 0 A 3 0 0
2 3 0 1 C 6 0 0
3 4 1 0 A NA 0 0
4 6 0 1 C 7 0 0
5 1 0 0 <NA> 2 1 0
6 5 0 0 B 4 0 1
You could cbind data.frames based on your different criteria.
# simple conditions -------------------------------------------------------
df_orig <- df        # keep the original (id, b, c) data frame
df <- df_orig[, -1]  # drop the id column
df_na <- is.na(df)
colnames(df_na) <- paste0(colnames(df),"_NA")
df_A <- df=="A"
colnames(df_A) <- paste0(colnames(df),"_A")
df_C <- df=="C"
colnames(df_C) <- paste0(colnames(df),"_C")
# for counts you can use sapply with one loop -----------------------------
df_counts <- df
for(j in 1:ncol(df)) {
counts <- sapply(1:nrow(df), function(x) sum(df[x,j]==df[,j], na.rm=T) )
df_counts[,j] <- counts
}
# or avoid explicit loops altogether --------------------------------------
df_counts2 <- sapply(1:ncol(df), function(y) sapply(1:nrow(df), function(x) sum(df[x,y]==df[,y], na.rm=T) ) )
colnames(df_counts2 ) <- paste0(colnames(df),"_counts")
# cbind df's -------------------------------------------------------------
df_full <- cbind(df_orig, df_na, df_A, df_C, df_counts2)
# check if frequency greater then 1 or NA ---------------------------------
df_full$result <- df_full[,10:11] >=2 | df_full[,4:5]
df_full
The harder part, I suppose, is to compute the frequencies; here I included two ways. The result is:
id b c b_NA c_NA b_A c_A b_C c_C b_counts c_counts result.b_NA result.c_NA
1 1 <NA> 2 FALSE FALSE FALSE FALSE FALSE FALSE 1 1 FALSE FALSE
2 2 A 3 FALSE FALSE TRUE FALSE FALSE FALSE 2 1 TRUE FALSE
3 3 C 6 FALSE FALSE FALSE FALSE TRUE FALSE 2 1 TRUE FALSE
4 4 A NA FALSE TRUE TRUE NA FALSE NA 2 0 TRUE TRUE
5 5 B 4 FALSE FALSE FALSE FALSE FALSE FALSE 1 1 FALSE FALSE
6 6 C 7 FALSE FALSE FALSE FALSE TRUE FALSE 2 1 TRUE FALSE
You can modify the columns based on your conditions. Hope that helps
I have a data frame with a column containing a space-delimited list of character codes:
"Ab B C"
""
"X C"
"N Ab F S"
:
I want to convert this into multiple columns, one for each distinct value, indicating (with 1 or 0) that the value was found in the list. Desired result given the above example:
df$Ab = 1,0,0,1
df$B = 1,0,0,0
df$C = 1,0,1,0
df$F = 0,0,0,1
df$N = 0,0,0,1
What is the best way to do this?
Assuming you are starting with:
df <- data.frame(v1 = c("Ab B C", "", "X C", "N Ab F S"))
You can try cSplit_e from my "splitstackshape" package:
library(splitstackshape)
cSplit_e(df, "v1", sep = " ", type = "character", fill = 0)
# v1 v1_Ab v1_B v1_C v1_F v1_N v1_S v1_X
# 1 Ab B C 1 1 1 0 0 0 0
# 2 0 0 0 0 0 0 0
# 3 X C 0 0 1 0 0 0 1
# 4 N Ab F S 1 0 0 1 1 1 0
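If the v1_ prefix on the new columns is unwanted, the names can be stripped afterwards (a small follow-up sketch):
out <- cSplit_e(df, "v1", sep = " ", type = "character", fill = 0)
names(out) <- sub("^v1_", "", names(out))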
You can try
library(qdapTools)
lst <- strsplit(df1$Col1, ' ')
cbind(df1, mtabulate(lst))
# Col1 Ab B C F N S X
#1 Ab B C 1 1 1 0 0 0 0
#2 0 0 0 0 0 0 0
#3 X C 0 0 1 0 0 0 1
#4 N Ab F S 1 0 0 1 1 1 0
Or using base R
lvls <- sort(unique(unlist(lst)))
cbind(df1, t(vapply(lst, function(x) table(factor(x, levels=lvls)),
numeric(length(lvls)))))
data
df1 <- structure(list(Col1 = c("Ab B C", "", "X C", "N Ab F S")),
.Names = "Col1", row.names = c(NA, -4L), class = "data.frame")
In base R, another approach:
lst = strsplit(df$Col1, ' ')
cols = unique(unlist(lst))
m = do.call(rbind, lapply(lst, function(u) (cols %in% u) + 0))
colnames(m) = cols
#> m
# Ab B C X N F S
#[1,] 1 1 1 0 0 0 0
#[2,] 0 0 0 0 0 0 0
#[3,] 0 0 1 1 0 0 0
#[4,] 1 0 0 0 1 1 1
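To attach these indicator columns back onto the data frame, as in the desired df$Ab, df$B, ... layout, a final cbind is enough (assuming df here is the data frame holding Col1):
df_updated <- cbind(df, m)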