Suppose I have a data frame df with two columns:
id category
A 1
B 4
C 3
D 1
I want to replace the numbers in category with the following: 1 = "A", 2 = "B", 3 = "C", 4 = "D".
I.e. the output should be
id category
A A
B D
C C
D A
Does anyone know how to do this?
Here I propose three methods to achieve your goal.
Base R
If you have a named vector mapping old values to new ones, you can use match to find each entry of the category column among the vector's names and then index into the vector to get the replacement.
vec <- c("1" = "A", "2" = "B", "3" = "C", "4" = "D")
df$category <- vec[match(df$category, names(vec))]
dplyr
Use a case_when statement to match the values in category, and assign new strings to it.
library(dplyr)
df %>% mutate(category = case_when(category == 1 ~ "A",
category == 2 ~ "B",
category == 3 ~ "C",
category == 4 ~ "D",
TRUE ~ NA_character_))
left_join from dplyr
Or, if you have a data frame with two columns specifying the values for conversion, you can left_join it. Here, the conversion table is created with tibble::enframe; category is converted to character so the join keys have the same type, and the joined value column becomes the new category.
left_join(mutate(df, category = as.character(category)), tibble::enframe(vec), by = c("category" = "name")) %>%
  select(id, category = value)
Output
id category
1 A A
2 B D
3 C C
4 D A
Data
df <- structure(list(id = c("A", "B", "C", "D"), category = c(1, 4, 3, 1)),
                row.names = c(NA, -4L), class = "data.frame")
A possible solution:
library(tidyverse)
df %>%
mutate(category = LETTERS[category])
#> id category
#> 1 A A
#> 2 B D
#> 3 C C
#> 4 D A
I think this problem can be solved in many different ways, but basically I want a function that gives me a data frame with every combination of values from a list in its columns, including the incomplete sets and excluding some, but not all, redundant combinations (order isn't important for now).
So I might start out with a list like this:
List = c("A","B","C")
and I want to get a dataframe that looks like
C1 = c("A","B","C","A","A","B","A")
C2 = c("","","","B","C","C","B")
C3 = c("","","","","","","C")
df <- cbind(C1, C2, C3)
row.names(df) <- c("A", "B", "C", "AB", "AC", "BC", "ABC")
colnames(df) <- c("First_Item", "Second_Item","Third_Item")
And then it fills in each cell with the corresponding letter.
E.g. in row "A" the first cell would be "A", and the second and third cells would be empty.
Any idea how to do this?
I tried with tidyr:
library(tidyr)
list_1 = c("A", "B", "C", "NA")
list_2 = c("A", "B", "C", "NA")
list_3 = c("A", "B", "C", "NA")
list_4 = c("A", "B", "C", "NA")
test <- crossing(list_1, list_2,list_3,list_4)
test <- test[apply(test, MARGIN = 1, FUN = function(x) !any(duplicated(x))), ]
But I want to keep all the values with multiple NAs in them, so this doesn't quite work.
expand.grid has the same problem
expand.grid(list_1 = c("A", "B", "C", "NA"),list_2 = c("A", "B", "C", "NA"),list_3 = c("A", "B", "C", "NA"),list_4 = c("A", "B", "C", "NA"))
That's basically Roland's answer:
library(magrittr) # just for the pipe-operator
List %>%
seq_along() %>%
lapply(combn, x = List, simplify = FALSE) %>%
unlist(recursive = FALSE) %>%
sapply(`length<-`, length(List)) %>%
t() %>%
data.frame()
returns
X1 X2 X3
1 A <NA> <NA>
2 B <NA> <NA>
3 C <NA> <NA>
4 A B <NA>
5 A C <NA>
6 B C <NA>
7 A B C
Furthermore, you could use the dplyr and tidyr packages to replace the NAs. Just add one more step to the pipe:
mutate(across(everything(), ~ replace_na(.x, "")))
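Putting the pieces together, the whole pipe might look roughly like this (a sketch, assuming R >= 4.0 so that data.frame() keeps character columns, which replace_na() needs):
library(dplyr)
library(tidyr)
List <- c("A", "B", "C")
List %>%
  seq_along() %>%
  lapply(combn, x = List, simplify = FALSE) %>%       # all combinations of each size
  unlist(recursive = FALSE) %>%
  sapply(`length<-`, length(List)) %>%                # pad each combination with NA to length 3
  t() %>%
  data.frame() %>%
  mutate(across(everything(), ~ replace_na(.x, "")))  # turn the padding NAs into empty strings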
Here is my approach:
library(purrr)
List <- c("xA","xB","xC") # arbitrary as per request in comments
seq_along(List) %>% # h/t #MartinGal
map(~ combn(List, m = .x) %>%
apply(2, paste, collapse = "<!>")) %>%
unlist() %>%
tibble::tibble() %>%
tidyr::separate(1, into = c("First_Item", "Second_Item", "Third_Item"),
sep = "<!>")
Returns:
# A tibble: 7 x 3
First_Item Second_Item Third_Item
<chr> <chr> <chr>
1 xA NA NA
2 xB NA NA
3 xC NA NA
4 xA xB NA
5 xA xC NA
6 xB xC NA
7 xA xB xC
I have a data frame like this.
df
Languages Order Machine Company
[1] W,X,Y,Z,H,I D D B
[2] W,X B A G
[3] W,I E B A
[4] H,I B C B
[5] W G G C
I want to get the number of rows where Languages has at least 2 of the 3 values W, H, I.
The result should be 3, because rows 1, 3 and 4 contain at least 2 of the 3 values W, H, I.
You can use strsplit on df$Languages and take the intersect with W, H, I. Then get the lengths of this result and sum those that are greater than 1.
sum(lengths(sapply(strsplit(df$Languages, ",", TRUE), intersect, c("W","H","I"))) > 1)
#[1] 3
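If it helps to see what the one-liner does, it can be unrolled into steps like this (the intermediate names langs and hits are just for illustration):
langs <- strsplit(df$Languages, ",", fixed = TRUE)   # split each row into its language codes
hits  <- sapply(langs, intersect, c("W", "H", "I"))  # keep only W/H/I from each row
lengths(hits)                                        # how many of W/H/I each row contains
#[1] 3 1 2 2 1
sum(lengths(hits) > 1)                               # rows containing at least 2 of them
#[1] 3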
You can use:
sum(sapply(strsplit(df$Languages, ','), function(x)
sum(c("W","H","I") %in% x) >= 2))
#[1] 3
data
df<- structure(list(Languages = c("W,X,Y,Z,H,I", "W,X", "W,I", "H,I",
"W"), Order = c("D", "B", "E", "B", "G"), Machine = c("D", "A",
"B", "C", "G"), Company = c("B", "G", "A", "B", "C")),
class = "data.frame", row.names = c(NA, -5L))
A tidyverse approach:
library(tidyverse)
df %>% filter(map_int(str_split(Languages, ','), ~ sum(.x %in% c('W', 'H', 'I'))) >= 2)
Languages Order Machine Company
1 W,X,Y,Z,H,I D D B
2 W,I E B A
3 H,I B C B
I have a data frame that contains the monetary transactions among individuals. The transactions can be two-way, i.e. A can transfer money to B and B can also transfer money to A. The structure of the data frame looks like below:
From To Amount
A B $100
A C $40
A D $30
B A $25
B C $70
C A $190
C D $110
I want to summarize the total amount of transactions among each pair of individuals who have transactions with each other and the results should be something like:
Individual_1 Individual_2 Sum
A B $125
A C $230
A D $30
B C $70
C D $110
I tried to utilize the grouping feature of the package dplyr but I think it does not apply to my case.
You can use pmin/pmax to sort the From and To columns pairwise, group by them and sum the Amount values.
library(dplyr)
df %>%
group_by(col1 = pmin(From, To),
col2 = pmax(From, To)) %>%
summarise(Amount = sum(readr::parse_number(Amount)))
# col1 col2 Amount
# <chr> <chr> <dbl>
#1 A B 125
#2 A C 230
#3 A D 30
#4 B C 70
#5 C D 110
Using the same logic in base R, you can do:
aggregate(Amount ~ col1 + col2,
          transform(df, col1 = pmin(From, To), col2 = pmax(From, To),
                    Amount = as.numeric(sub('$', '', Amount, fixed = TRUE))), sum)
data
df <- structure(list(From = c("A", "A", "A", "B", "B", "C", "C"), To = c("B",
"C", "D", "A", "C", "A", "D"), Amount = c("$100", "$40", "$30",
"$25", "$70", "$190", "$110")), class = "data.frame", row.names = c(NA, -7L))
A solution using the tidyverse package. You need a common grouping column in which the two individuals always appear in the same order; dat2 is the final output.
library(tidyverse)
dat2 <- dat %>%
mutate(Amount = as.numeric(str_remove(Amount, "\\$"))) %>%
mutate(Group = map2_chr(From, To, ~str_c(sort(c(.x, .y)), collapse = "_"))) %>%
group_by(Group) %>%
summarize(Sum = sum(Amount, na.rm = TRUE)) %>%
separate(Group, into = c("Individual_1", "Individual_2"), sep = "_") %>%
mutate(Sum = str_c("$", Sum))
print(dat2)
# # A tibble: 5 x 3
# Individual_1 Individual_2 Sum
# <chr> <chr> <chr>
# 1 A B $125
# 2 A C $230
# 3 A D $30
# 4 B C $70
# 5 C D $110
Data
dat <- read.table(text = "From To Amount
A B $100
A C $40
A D $30
B A $25
B C $70
C A $190
C D $110",
header = TRUE)
A complete solution without packages, based on @RonakShah's great pmin/pmax approach, using list notation in aggregate (in contrast to formula notation), which allows name assignment.
with(
transform(d, a=as.numeric(gsub("\\D", "", Amount)), b=pmin(From, To), c=pmax(From, To)),
aggregate(list(Sum=a), list(Individual_1=b, Individual_2=c), function(x)
paste0("$", sum(x))))
# Individual_1 Individual_2 Sum
# 1 A B $125
# 2 A C $230
# 3 B C $70
# 4 A D $30
# 5 C D $110
Data:
d <- structure(list(From = c("A", "A", "A", "B", "B", "C", "C"), To = c("B",
"C", "D", "A", "C", "A", "D"), Amount = c("$100", "$40", "$30",
"$25", "$70", "$190", "$110")), class = "data.frame", row.names = c(NA,
-7L))
I have a data frame that consists of characters "a", "b", "x", "y".
df <- data.frame(v1 = c("a", "b", "x", "y"),
v2 = c("a", "b", "a", "y"))
Now I want to replace all values with the following scheme and also convert the whole data frame to numeric.
"a" -> 0
"b" -> 1
"x" -> 1
"y" -> 2
I know this must somehow be possible with mutate_all, but I cannot figure out how:
df %>% mutate_all(replace("a", 1)) %>%
mutate_all(is.character, as.numeric)
One solution could be with case_when:
df %>%
  mutate_all(~ case_when(. == "a" ~ 0,
                         . %in% c("b", "x") ~ 1,
                         . == "y" ~ 2,
                         TRUE ~ NA_real_))
# v1 v2
# 1 0 0
# 2 1 1
# 3 1 0
# 4 2 2
Create a named vector with the mappings and then subset it inside mutate_all:
vec <- c(a = 0, b = 1, x = 1, y = 2)
library(dplyr)
df %>% mutate_all(~vec[.])
# v1 v2
#1 0 0
#2 1 1
#3 1 0
#4 2 2
In base R that would be just
df[] <- vec[unlist(df)]
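To see why this works, the one-liner can be read in pieces (same df and vec as above):
unlist(df)               # flatten df into one character vector: "a" "b" "x" "y" "a" "b" "a" "y"
vec[unlist(df)]          # look up each value in the named vector: 0 1 1 2 0 1 0 2
df[] <- vec[unlist(df)]  # df[] keeps the data.frame shape while replacing all values column-wise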
data
df <- data.frame(v1 = c("a", "b", "x", "y"),
v2 = c("a", "b", "a", "y"), stringsAsFactors = FALSE)
I am trying to modify a data.frame filtered by dplyr but I don't quite seem to grasp what I need to do. In the following example, I am trying to filter the data frame z and then assign a new value to the third column -- I give two examples, one with "9" and one with "NA".
require(dplyr)
z <- data.frame(w = c("a", "a", "a", "b", "c"), x = 1:5, y = c("a", "b", "c", "d", "e"))
z %>% filter(w == "a" & x == 2) %>% select(y)
z %>% filter(w == "a" & x == 2) %>% select(y) <- 9 # Should be similar to z[z$w == "a" & z$ x == 2, 3] <- 9
z %>% filter(w == "a" & x == 3) %>% select(y) <- NA # Should be similar to z[z$w == "a" & z$ x == 3, 3] <- NA
Yet, it doesn't work: I get the following error message:
"Error in z %>% filter(w == "a" & x == 3) %>% select(y) <- NA : impossible de trouver la fonction "%>%<-"
I know that I can use the old data.frame notation, but what would be the solution for dplyr?
Thanks!
Filtering will subset the data frame. If you want to keep the whole data frame but modify part of it, you can, for example, use mutate with ifelse. I've added stringsAsFactors = FALSE to your sample data so that y will be a character column.
z <- data.frame(w = c("a", "a", "a", "b", "c"), x = 1:5, y = c("a", "b", "c", "d", "e"),
stringsAsFactors=FALSE)
z %>% mutate(y = ifelse(w=="a" & x==2, 9, y))
w x y
1 a 1 a
2 a 2 9
3 a 3 c
4 b 4 d
5 c 5 e
Or with replace:
z %>% mutate(y = replace(y, w=="a" & x==2, 9),
y = replace(y, w=="a" & x==3, NA))
w x y
1 a 1 a
2 a 2 9
3 a 3 <NA>
4 b 4 d
5 c 5 e
It is my impression that the dplyr package is philosophically opposed to modifying your underlying data. You might find the data.table package friendlier for this operation:
library(data.table)
z <- data.table(w = c("a", "a", "a", "b", "c"), x = 1:5, y = c("a", "b", "c", "d", "e"))
m <- data.table(w = c("a","a"), x = c(2,3), new_y = c("9", NA))
z[m, y := new_y, on=c("w","x")]
w x y
1: a 1 a
2: a 2 9
3: a 3 NA
4: b 4 d
5: c 5 e
I'm sure there's a way in base R as well, but I don't know it. In particular, I can't get merge or match to do the job.
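For reference, here is a rough sketch of how match could be pressed into service (not from the original answer; it builds a lookup key by pasting the grouping columns together, using the same z and m as plain data frames):
z <- data.frame(w = c("a", "a", "a", "b", "c"), x = 1:5, y = c("a", "b", "c", "d", "e"),
                stringsAsFactors = FALSE)
m <- data.frame(w = c("a", "a"), x = c(2, 3), new_y = c("9", NA), stringsAsFactors = FALSE)
idx <- match(paste(z$w, z$x), paste(m$w, m$x))   # row of m matching each row of z, NA if none
z$y[!is.na(idx)] <- m$new_y[idx[!is.na(idx)]]    # overwrite y only where a match exists
z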