Paste together results within case_when (if-else) statements in R

I want to paste together results within the same case_when statement (i.e., if multiple statements are true for a given row). I know that I could do something like below to create additional columns and then unite them together. But what is the best way to make the code more efficient (and less verbose) without having to explicitly create var.m and var.o?
Data
df <- structure(list(
  ind = 1:10,
  var = c(-21, -60, 7, 8,
          9, 10, NA, 14, 101, 160)
),
class = "data.frame",
row.names = c(NA, -10L))
Code
library(tidyverse)
df %>%
  mutate(
    var.m = ifelse(row.names(df) %in% which(abs(diff(var, lag = 1)) > 10), "derivative", NA),
    var.o = ifelse((var + 50) > 90, "add", NA),
    results = case_when(is.na(var) ~ "Missing Data",
                        var > 100 ~ "High",
                        var < -20 ~ "Low")) %>%
  unite(
    "message",
    c(var.m, results, var.o),
    sep = "_",
    remove = TRUE,
    na.rm = TRUE
  )
Output/Expected Output
ind var message
1 1 -21 derivative_Low
2 2 -60 derivative_Low
3 3 7
4 4 8
5 5 9
6 6 10
7 7 NA Missing Data
8 8 14 derivative
9 9 101 derivative_High_add
10 10 160 High_add
So, in other words, is it possible to forgo creating var.m and var.o and do everything within case_when? Or is there an alternative to creating multiple variables (i.e., var.m and var.o) before concatenating the messages together?

df %>%
  mutate(
    message = paste(
      coalesce(if_else(c(abs(diff(var, lag = 1)) > 10, FALSE), "derivative", ""), ""),
      coalesce(if_else((var + 50) > 90, "add", ""), ""),
      coalesce(case_when(is.na(var) ~ "Missing Data",
                         var > 100 ~ "High",
                         var < -20 ~ "Low"), ""),
      sep = "_"),
    message = gsub("^_|_$", "", gsub("__", "_", message))
  )
# ind var message
# 1 1 -21 derivative_Low
# 2 2 -60 derivative_Low
# 3 3 7
# 4 4 8
# 5 5 9
# 6 6 10
# 7 7 NA Missing Data
# 8 8 14 derivative
# 9 9 101 derivative_add_High
# 10 10 160 add_High
The coalesce calls are there because any one of the pieces can be NA when a value slips through the conditions. I could add !is.na(var) before some of the conditionals, but diff introduces a challenge, since we'd also need !lead(is.na(var), default = FALSE) or similar; for that reason, coalesce just seemed simpler.
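An alternative sketch (my own, not from the question) that avoids the coalesce juggling entirely: precompute one logical flag per label, using %in% TRUE to collapse NA to FALSE, then paste the flagged label names per row. The flags matrix and its column names are illustrative choices; their order controls the order of the labels in message.
library(dplyr)

# One logical vector per message piece; `%in% TRUE` turns NA into FALSE
flags <- with(df, cbind(
  derivative     = c(abs(diff(var, lag = 1)) > 10, FALSE) %in% TRUE,
  `Missing Data` = is.na(var),
  High           = (var > 100) %in% TRUE,
  Low            = (var < -20) %in% TRUE,
  add            = ((var + 50) > 90) %in% TRUE
))

df %>%
  mutate(message = apply(flags, 1, function(f)
    paste(colnames(flags)[f], collapse = "_")))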

Related

How can I divide one variable into two variables in R?

I have a variable x which can take five values (0, 1, 2, 3, 4). I want to divide the variable into two variables. Variable 1 is supposed to contain the value 0, and variable 2 is supposed to contain the values 1, 2, 3 and 4.
I'm sure this is easy, but I can't find out what I need to do.
what my data looks like:
|variable x|
|-----------|
|0|
|1|
|0|
|4|
|3|
|0|
|0|
|2|
so I get the table:

  0   1   2   3   4
125  34  14  15  15
But I want my data to look like this:

variable 1  variable 2
       125          78
So variable 1 is supposed to contain how often 0 is in my data
and variable 2 is supposed to contain the sum of how often 1,2,3 and 4 are in my data
You can convert the variable to logical by testing whether x == 0
x <- c(0, 1, 0, 4, 3, 0, 0, 2)
table(x)
#> x
#> 0 1 2 3 4
#> 4 1 1 1 1
table(x == 0)
#> FALSE TRUE
#> 4 4
If you want the exact headings, you can do:
setNames(rev(table(x == 0)), c(0, paste(unique(sort(x[x != 0])), collapse = ",")))
#> 0 1,2,3,4
#> 4 4
And if you want labels instead of logicals, you can index into a character vector (wrap the result in factor() if you need an actual factor):
c("zero", "not zero")[1 + (x != 0)]
#> [1] "zero"     "not zero" "zero"     "not zero" "not zero" "zero"     "zero"
#> [8] "not zero"
Created on 2022-04-02 by the reprex package (v2.0.1)
base R
You can use cbind:
x = sample(0:5, 200, replace = T)
table(x)
# x
# 0 1 2 3 4 5
# 29 38 41 35 27 30
cbind(`0` = table(x)[1], `1,2,3,4` = sum(table(x)[2:5]))
# 0 1,2,3,4
# 0 29 141
tidyverse
library(tidyverse)
ta = as.data.frame(t(as.data.frame.array(table(x))))
ta %>%
  mutate(!!paste(names(.[-1]), collapse = ",") := sum(c_across(`1`:`5`)), .keep = "unused")
# 0 1,2,3,4,5
# 1 29 171
Beginning with the vector, we can get the frequency from table then put it into a dataframe. Then, we can create a new column with the names collapsed (i.e., 1,2,3,4) and get the row sum for all columns except the first one.
library(tidyverse)
tab <- data.frame(value = c(0, 1, 2, 3, 4),
                  freq = c(125, 34, 14, 15, 15))
x <- rep(tab$value, tab$freq)
output <- data.frame(rbind(table(x))) %>%
  rename_with(~ str_remove(., 'X')) %>%
  mutate(!!paste0(names(.)[-1], collapse = ",") := rowSums(select(., -1))) %>%
  select(1, last_col())
Output
0 1,2,3,4
1 125 78
Then, to create the 2 variables in 2 dataframes, you can split the columns into a list, change the names, then put into the global environment.
list2env(setNames(
split.default(output, seq_along(output)),
c("variable 1", "variable 2")
), envir = .GlobalEnv)
Or you could just subset:
variable1 <- data.frame(`variable 1` = output$`0`, check.names = FALSE)
variable2 <- data.frame(`variable 2` = output$`1,2,3,4`, check.names = FALSE)
Update: deleted first answer:
df[paste(names(df[2:5]), collapse = ",")] <- rowSums(df[2:5])
df[, c(1,6)]
# A tibble: 1 × 2
`0` `1,2,3,4`
<dbl> <dbl>
1 125 78
data:
df <- structure(list(`0` = 125, `1` = 34, `2` = 14, `3` = 15, `4` = 15), class = c("tbl_df", "tbl", "data.frame"), row.names = c(NA, -1L))
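For completeness, a compact hedged sketch (my own addition, not from the answers above) that produces both counts in one step with dplyr::count(), using the x vector from the first answer:
library(dplyr)
x <- c(0, 1, 0, 4, 3, 0, 0, 2)
count(tibble(x), group = if_else(x == 0, "variable 1", "variable 2"))
#> # A tibble: 2 × 2
#>   group          n
#>   <chr>      <int>
#> 1 variable 1     4
#> 2 variable 2     4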

How can I find how these bullets are coded in my text and remove them?

I am doing topic modeling but need to remove certain characters. Specifically bullet points remain in my terms list.
library(tm)           # SimpleCorpus, VectorSource, DocumentTermMatrix
library(topicmodels)  # LDA
library(tidytext)     # tidy(), stop_words
library(dplyr)

USAID_stops <- c("performance", "final", "usaidgov", "kaves", "evaluation",
                 "*", "[[:punct:]]", "U\2022")
# for (i in 1:length(chapters_1)) {
a <- SimpleCorpus(VectorSource(chapters_1[1]))
dtm_questions <- DocumentTermMatrix(a)
report_topics <- LDA(dtm_questions, k = 4)
topic_weights <- tidy(report_topics, matrix = "beta")
top_terms <- topic_weights %>%
  group_by(topic) %>%
  slice_max(beta, n = 10) %>%
  ungroup() %>%
  arrange(topic, -beta) %>%
  filter(!term %in% stop_words$word) %>%
  filter(!term %in% USAID_stops)
topic term beta
<int> <chr> <dbl>
1 chain 0.009267748
2 • 0.009766040
2 chain 0.009593995
2 change 0.008294549
3 nutrition 0.017117040
3 related 0.009621772
3 strategy 0.008523203
4 • 0.021312755
4 chain 0.010974153
4 ftf 0.008146484
These remain. How and where can I remove them?
You can use mutate and str_remove to remove the bullets.
library(tidyverse)
df %>%
  mutate(across(everything(), ~ str_remove(., "•")))
Output
topic term beta
1 1 chain 0.009267748
2 2 0.009766040
3 2 chain 0.009593995
4 2 change 0.008294549
5 3 nutrition 0.017117040
6 3 related 0.009621772
7 3 strategy 0.008523203
8 4 0.021312755
9 4 chain 0.010974153
10 4 ftf 0.008146484
Or you can use gsub from base R.
df$term <- gsub("•","",as.character(df$term))
You could also replace earlier before running LDA.
dtm_questions[["dimnames"]][["Terms"]] <-
  gsub("•", "NA", dtm_questions[["dimnames"]][["Terms"]])
If you want to replace the bullets with something else, then you can do this:
df %>%
  mutate(across(term, ~ str_replace(., "•", "NA")))
# Or in base R
df$term <- gsub("•","NA",as.character(df$term))
Output
topic term beta
1 1 chain 0.009267748
2 2 NA 0.009766040
3 2 chain 0.009593995
4 2 change 0.008294549
5 3 nutrition 0.017117040
6 3 related 0.009621772
7 3 strategy 0.008523203
8 4 NA 0.021312755
9 4 chain 0.010974153
10 4 ftf 0.008146484
Data
df <-
structure(list(
topic = c(1, 2, 2, 2, 3, 3, 3, 4, 4, 4),
term = c(
"chain", "•", "chain", "change", "nutrition",
"related", "strategy", "•", "chain", "ftf"
),
beta = c(
0.009267748, 0.00976604, 0.009593995, 0.008294549,
0.01711704, 0.009621772, 0.008523203, 0.021312755,
0.010974153, 0.008146484
)
),
class = "data.frame",
row.names = c(NA, -10L))

R: Count number of times B follows A using dplyr

I have a data.frame of monthly averages of radon measured over a few months. I have labeled each value either "below" or "above" a threshold and would like to count the number of times the average value does: "below to above", "above to below", "above to above" or "below to below".
df <- data.frame(value = c(130, 200, 240, 230, 130),
level = c("below", "above","above","above", "below"))
A bit of digging into a Matlab answer on here suggests that we could use the Matrix package:
require(Matrix)
sparseMatrix(i=c(2,2,2,1), j=c(2,2,2))
Produces this result which I can't yet interpret.
[1,] | |
[2,] | .
Any thoughts about a tidyverse method?
Sure, just use group_by() and count the values:
library(dplyr)
df <- data.frame(value = c(130, 200, 240, 230, 130),
level = c("below", "above","above","above", "below"))
df %>%
  group_by(grp = paste(level, lead(level))) %>%
  summarise(n = n()) %>%
  # drop the observation that does not have a "next" value
  filter(!grepl(pattern = "NA", x = grp))
#> # A tibble: 3 × 2
#> grp n
#> <chr> <int>
#> 1 above above 2
#> 2 above below 1
#> 3 below above 1
You could use table from base R:
table(df$level[-1], df$level[-nrow(df)])
above below
above 2 1
below 1 0
EDIT in response to @HCAI's comment: applying table to multiple columns:
First, generate some data:
set.seed(1)
U = matrix(runif(4 * 20), nrow = 20)
dfU = data.frame(round(U))
library(plyr)  # for mapvalues
df2 = data.frame(apply(dfU,
                       FUN = function(x) mapvalues(x, from = 0:1, to = c('below', 'above')),
                       MARGIN = 2))
so that df2 contains random 'above' and 'below':
X1 X2 X3 X4
1 below above above above
2 below below above below
3 above above above below
4 above below above below
5 below below above above
6 above below above below
7 above below below below
8 above below below above
9 above above above below
10 below below above above
11 below below below below
12 below above above above
13 above below below below
14 below below below below
15 above above below below
16 below above below above
17 above above below above
18 above below above below
19 below above above above
20 above below below above
Now apply table to each column and vectorize the output:
apply(df2,
      FUN = function(x) as.vector(table(x[-1], x[-nrow(df2)])),
      MARGIN = 2)
which gives us
X1 X2 X3 X4
[1,] 5 2 7 2
[2,] 5 6 4 6
[3,] 6 5 3 6
[4,] 3 6 5 5
All that's left is a bit of care in labeling the rows of the output. Maybe someone can come up with a clever way to merge/join the data frames resulting from apply(df2, FUN=function(x) melt(table(x[-1],x[-nrow(df2)])),2), which would maintain the row names. (I spent some time looking into it but couldn't work out how to do it easily.)
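One hedged possibility for that merge (my own untested sketch): melt each transition table with reshape2::melt() so the (from, to) labels become columns, then join the per-column counts. imap() and reduce() are from purrr; full_join() is from dplyr; df2 is the data frame from above.
library(dplyr)
library(purrr)
library(reshape2)

# Melt each column's transition table, then join on the (from, to) pair
transitions <- imap(df2, function(x, nm) {
  tab <- melt(table(from = x[-length(x)], to = x[-1]))
  setNames(tab, c("from", "to", nm))
})
reduce(transitions, full_join, by = c("from", "to"))
# one row per (from, to) transition, with one count column per X1..X4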
Not run, so there may be a typo, but you get the idea. I'll leave it to you to deal with NA and the first obs. Single pass through the vector.
library(dplyr)
df %>%
  summarize(increase = sum(case_when(value > lag(value) ~ 1, TRUE ~ 0)),
            decrease = sum(case_when(value < lag(value) ~ 1, TRUE ~ 0)),
            constant = sum(case_when(value == lag(value) ~ 1, TRUE ~ 0)))
A slightly different version:
library(dplyr)
library(stringr)
df %>%
  group_by(level = str_c(level, lead(level), sep = " ")) %>%
  count(level) %>%
  na.omit()
level n
<chr> <int>
1 above above 2
2 above below 1
3 below above 1
Another possible solution, based on tidyverse:
library(tidyverse)
df <- data.frame(value = c(130, 200, 240, 230, 130),
                 level = c("below", "above", "above", "above", "below"))
df %>%
  mutate(changes = str_c(lag(level), level, sep = "_")) %>%
  count(changes) %>%
  drop_na(changes)
#> changes n
#> 1 above_above 2
#> 2 above_below 1
#> 3 below_above 1
Yet another solution, based on data.table:
library(data.table)
dt <- data.table(value = c(130, 200, 240, 230, 130),
                 level = c("below", "above", "above", "above", "below"))
dt[, changes := paste(shift(level), level, sep = "_")
   ][2:.N][, .(n = .N), keyby = .(changes)]
#> changes n
#> 1: above_above 2
#> 2: above_below 1
#> 3: below_above 1

Replacing NA values in data frame iteratively using another column

I have a data frame with about 50 overlapping columns that I need to combine across. Below is a snippet of what the data frame looks like (it is about 150 rows long and several hundred columns across)
ID PAI_Q1.y PAI_Q1.x
540 0 NA
680 1 NA
240 NA 2
330 NA 3
For a single column, the following code works perfectly:
qualtrics <- qualtrics %>%
mutate(PAI_Q1 = ifelse(is.na(PAI_Q1.y), PAI_Q1.x, PAI_Q1.y))
However, I'm having trouble writing this into a loop or a function across all of the columns that need to be combined (i.e., PAI_Q2, PAI_Q3, etc.). Below are the two attempts I've made thus far. Does anyone have suggestions for tweaks (or know of an existing function) that would let me do this basic task iteratively?
Attempt #1
mutate_col <- function(data, string, string.x, string.y){
  data <- data %>% mutate(string = ifelse(is.na(string.y), string.x, string.y))
}
Error: Problem with `mutate()` column `string`.
ℹ string = ifelse(is.na(string.y), string.x, string.y).
x object 'PAI_Q1.y' not found
Attempt #2
for (i in 1:colnames(df)){
  if(names(i) %in% list_of_cols){ # list of columns that must be combined
    y <- paste(i, ".y", sep = "")
    x <- paste(i, ".x", sep = "")
    df <- df %>% mutate(i = ifelse(is.na(y), x, y))
  }
}
ID PAI_Q1.y PAI_Q1.x i.1
540 0 NA PAI.Q1.y
680 1 NA PAI.Q1.y
240 NA 2 PAI.Q1.y
330 NA 3 PAI.Q1.y
You can use tidyr to change the data from wide to long, then get the data into the correct format, then change back to wide format.
library(stringr)
library(tidyr)
library(dplyr)
qualtrics <- qualtrics %>%
  tidyr::pivot_longer(
    !ID,
    names_to = "question",
    values_to = "value",
    values_drop_na = TRUE
  ) %>%
  dplyr::mutate(question = stringr::str_extract(question, "[^.]+")) %>%
  tidyr::pivot_wider(names_from = question, values_from = value)
Output
# A tibble: 4 × 3
ID PAI_Q1 PAI_Q2
<dbl> <dbl> <dbl>
1 540 0 0
2 680 1 1
3 240 2 2
4 330 3 3
Data
qualtrics <-
structure(
list(
ID = c(540, 680, 240, 330),
PAI_Q1.y = c(0, 1,
NA, NA),
PAI_Q1.x = c(NA, NA, 2, 3),
PAI_Q2.y = c(0, 1, NA, NA),
PAI_Q2.x = c(NA, NA, 2, 3)
),
class = "data.frame",
row.names = c(NA,-4L)
)
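An alternative hedged sketch (my own, not part of the answer above): coalesce each .y/.x pair in place, deriving the stems (PAI_Q1, PAI_Q2, ...) from the column names. The stem-extraction logic here is illustrative.
library(dplyr)
library(purrr)
library(stringr)

stems <- str_remove(grep("\\.y$", names(qualtrics), value = TRUE), "\\.y$")
qualtrics2 <- reduce(stems, function(d, s) {
  mutate(d, !!s := coalesce(.data[[paste0(s, ".y")]], .data[[paste0(s, ".x")]]))
}, .init = qualtrics) %>%
  select(!ends_with(c(".x", ".y")))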

looking for mutate_if, but for rows not columns [duplicate]

I'm in the process of trying out a dplyr-based workflow (rather than using mostly data.table, which I'm used to), and I've come across a problem that I can't find an equivalent dplyr solution to. I commonly run into the scenario where I need to conditionally update/replace several columns based on a single condition. Here's some example code, with my data.table solution:
library(data.table)
# Create some sample data
set.seed(1)
dt <- data.table(site = sample(1:6, 50, replace=T),
                 space = sample(1:4, 50, replace=T),
                 measure = sample(c('cfl', 'led', 'linear', 'exit'), 50,
                                  replace=T),
                 qty = round(runif(50) * 30),
                 qty.exit = 0,
                 delta.watts = sample(10.5:100.5, 50, replace=T),
                 cf = runif(50))
# Replace the values of several columns for rows where measure is "exit"
dt <- dt[measure == 'exit',
         `:=`(qty.exit = qty,
              cf = 0,
              delta.watts = 13)]
Is there a simple dplyr solution to this same problem? I'd like to avoid using ifelse because I don't want to have to type the condition multiple times - this is a simplified example, but there are sometimes many assignments based on a single condition.
These solutions (1) maintain the pipeline, (2) do not overwrite the input and (3) only require that the condition be specified once:
1a) mutate_cond Create a simple function for data frames or data tables that can be incorporated into pipelines. This function is like mutate but only acts on the rows satisfying the condition:
mutate_cond <- function(.data, condition, ..., envir = parent.frame()) {
  condition <- eval(substitute(condition), .data, envir)
  .data[condition, ] <- .data[condition, ] %>% mutate(...)
  .data
}
DF %>% mutate_cond(measure == 'exit', qty.exit = qty, cf = 0, delta.watts = 13)
1b) mutate_last This is an alternative function for data frames or data tables which again is like mutate but is only used within group_by (as in the example below) and only operates on the last group rather than every group. Note that TRUE > FALSE so if group_by specifies a condition then mutate_last will only operate on rows satisfying that condition.
mutate_last <- function(.data, ...) {
  n <- n_groups(.data)
  indices <- attr(.data, "indices")[[n]] + 1
  .data[indices, ] <- .data[indices, ] %>% mutate(...)
  .data
}
DF %>%
  group_by(is.exit = measure == 'exit') %>%
  mutate_last(qty.exit = qty, cf = 0, delta.watts = 13) %>%
  ungroup() %>%
  select(-is.exit)
2) factor out condition Factor out the condition by making it an extra column which is later removed. Then use ifelse, replace or arithmetic with logicals as illustrated. This also works for data tables.
library(dplyr)
DF %>%
  mutate(is.exit = measure == 'exit',
         qty.exit = ifelse(is.exit, qty, qty.exit),
         cf = (!is.exit) * cf,
         delta.watts = replace(delta.watts, is.exit, 13)) %>%
  select(-is.exit)
3) sqldf We could use SQL update via the sqldf package in the pipeline for data frames (but not data tables unless we convert them -- this may represent a bug in dplyr. See dplyr issue 1579). It may seem that we are undesirably modifying the input in this code due to the existence of the update but in fact the update is acting on a copy of the input in the temporarily generated database and not on the actual input.
library(sqldf)
DF %>%
  do(sqldf(c("update '.'
              set 'qty.exit' = qty, cf = 0, 'delta.watts' = 13
              where measure = 'exit'",
             "select * from '.'")))
4) row_case_when Also check out row_case_when, defined in "Returning a tibble: how to vectorize with case_when?". It uses a syntax similar to case_when but applies to rows.
library(dplyr)
DF %>%
  row_case_when(
    measure == "exit" ~ data.frame(qty.exit = qty, cf = 0, delta.watts = 13),
    TRUE ~ data.frame(qty.exit, cf, delta.watts)
  )
Note 1: We used this as DF
set.seed(1)
DF <- data.frame(site = sample(1:6, 50, replace=T),
space = sample(1:4, 50, replace=T),
measure = sample(c('cfl', 'led', 'linear', 'exit'), 50,
replace=T),
qty = round(runif(50) * 30),
qty.exit = 0,
delta.watts = sample(10.5:100.5, 50, replace=T),
cf = runif(50))
Note 2: The problem of how to easily specify updating a subset of rows is also discussed in dplyr issues 134, 631, 1518 and 1573 with 631 being the main thread and 1573 being a review of the answers here.
You can do this with magrittr's two-way pipe %<>%:
library(dplyr)
library(magrittr)
dt[dt$measure == "exit", ] %<>% mutate(qty.exit = qty,
                                       cf = 0,
                                       delta.watts = 13)
This reduces the amount of typing, but is still much slower than data.table.
Here's a solution I like:
mutate_when <- function(data, ...) {
  dots <- eval(substitute(alist(...)))
  for (i in seq(1, length(dots), by = 2)) {
    condition <- eval(dots[[i]], envir = data)
    mutations <- eval(dots[[i + 1]], envir = data[condition, , drop = FALSE])
    data[condition, names(mutations)] <- mutations
  }
  data
}
It lets you write things like e.g.
mtcars %>% mutate_when(
  mpg > 22,    list(cyl = 100),
  disp == 160, list(cyl = 200)
)
which is quite readable -- although it may not be as performant as it could be.
As eipi10 shows above, there's not a simple way to do a subset replacement in dplyr because DT uses pass-by-reference semantics vs dplyr using pass-by-value. dplyr requires the use of ifelse() on the whole vector, whereas DT will do the subset and update by reference (returning the whole DT). So, for this exercise, DT will be substantially faster.
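For reference, a minimal sketch of that whole-vector ifelse() approach (the condition has to be repeated once per column, which is exactly the verbosity the question is trying to avoid):
library(dplyr)
dt %>% mutate(qty.exit    = ifelse(measure == 'exit', qty, qty.exit),
              cf          = ifelse(measure == 'exit', 0, cf),
              delta.watts = ifelse(measure == 'exit', 13, delta.watts))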
You could alternatively subset first, then update, and finally recombine:
dt.sub <- dt[dt$measure == "exit", ] %>%
  mutate(qty.exit = qty, cf = 0, delta.watts = 13)
dt.new <- rbind(dt.sub, dt[dt$measure != "exit", ])
But DT is gonna be substantially faster:
(edited to use eipi10's new answer)
library(data.table)
library(dplyr)
library(microbenchmark)
microbenchmark(dt = {dt <- dt[measure == 'exit',
                              `:=`(qty.exit = qty,
                                   cf = 0,
                                   delta.watts = 13)]},
               eipi10 = {dt[dt$measure == "exit", ] %<>% mutate(qty.exit = qty,
                                                                cf = 0,
                                                                delta.watts = 13)},
               alex = {dt.sub <- dt[dt$measure == "exit", ] %>%
                         mutate(qty.exit = qty, cf = 0, delta.watts = 13)
                       dt.new <- rbind(dt.sub, dt[dt$measure != "exit", ])})
Unit: microseconds
expr min lq mean median uq max neval cld
dt 591.480 672.2565 747.0771 743.341 780.973 1837.539 100 a
eipi10 3481.212 3677.1685 4008.0314 3796.909 3936.796 6857.509 100 b
alex 3412.029 3637.6350 3867.0649 3726.204 3936.985 5424.427 100 b
I just stumbled across this and really like mutate_cond() by @G. Grothendieck, but thought it might come in handy to also handle new variables. So, below has two additions:
Unrelated: the second-to-last line is made a bit more dplyr-like by using filter().
Three new lines at the beginning get the variable names for use in mutate() and initialize any new variables in the data frame before mutate() occurs. New variables are initialized for the remainder of the data frame using new_init, which defaults to missing (NA).
mutate_cond <- function(.data, condition, ..., new_init = NA, envir = parent.frame()) {
  # Initialize any new variables as new_init
  new_vars <- substitute(list(...))[-1]
  new_vars %<>% sapply(deparse) %>% names %>% setdiff(names(.data))
  .data[, new_vars] <- new_init
  condition <- eval(substitute(condition), .data, envir)
  .data[condition, ] <- .data %>% filter(condition) %>% mutate(...)
  .data
}
Here are some examples using the iris data:
Change Petal.Length to 88 where Species == "setosa". This will work in the original function as well as this new version.
iris %>% mutate_cond(Species == "setosa", Petal.Length = 88)
Same as above, but also create a new variable x (NA in rows not included in the condition). Not possible before.
iris %>% mutate_cond(Species == "setosa", Petal.Length = 88, x = TRUE)
Same as above, but rows not included in the condition for x are set to FALSE.
iris %>% mutate_cond(Species == "setosa", Petal.Length = 88, x = TRUE, new_init = FALSE)
This example shows how new_init can be set to a list to initialize multiple new variables with different values. Here, two new variables are created with excluded rows being initialized using different values (x initialised as FALSE, y as NA)
iris %>% mutate_cond(Species == "setosa" & Sepal.Length < 5,
                     x = TRUE, y = Sepal.Length ^ 2,
                     new_init = list(FALSE, NA))
One concise solution would be to do the mutation on the filtered subset and then add back the non-exit rows of the table:
library(dplyr)
dt %>%
  filter(measure == 'exit') %>%
  mutate(qty.exit = qty, cf = 0, delta.watts = 13) %>%
  rbind(dt %>% filter(measure != 'exit'))
You could split the dataset and do a regular mutate call on the TRUE part.
The split can be done with either dplyr::group_split() or base::split(). I like the base version better here since it preserves names; see the discussion at https://github.com/tidyverse/dplyr/issues/4223 .
df1 <- data.frame(site = sample(1:6, 50, replace=T),
                  space = sample(1:4, 50, replace=T),
                  measure = sample(c('cfl', 'led', 'linear', 'exit'), 50,
                                   replace=T),
                  qty = round(runif(50) * 30),
                  qty.exit = 0,
                  delta.watts = sample(10.5:100.5, 50, replace=T),
                  cf = runif(50),
                  stringsAsFactors = F)
library(tidyverse)
df1 %>%
  group_split(measure == "exit", .keep = FALSE) %>%
  modify_at(2, ~ mutate(., qty.exit = qty, cf = 0, delta.watts = 13)) %>%
  bind_rows()
#> # A tibble: 50 × 7
#> site space measure qty qty.exit delta.watts cf
#> <int> <int> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 5 1 linear 22 0 100. 0.126
#> 2 3 3 led 12 0 61.5 0.161
#> 3 6 1 led 26 0 25.5 0.307
#> 4 5 2 cfl 16 0 26.5 0.865
#> 5 6 3 linear 19 0 57.5 0.684
#> 6 1 4 led 12 0 14.5 0.802
#> 7 6 4 led 5 0 90.5 0.547
#> 8 5 4 linear 28 0 54.5 0.171
#> 9 1 2 linear 5 0 24.5 0.775
#> 10 1 2 cfl 24 0 96.5 0.144
#> # … with 40 more rows
df1 %>%
  split(~ measure == "exit") %>%
  modify_at("TRUE", ~ mutate(., qty.exit = qty, cf = 0, delta.watts = 13)) %>%
  bind_rows()
#> site space measure qty qty.exit delta.watts cf
#> 1 5 1 linear 22 0 100.5 0.125646491
#> 2 3 3 led 12 0 61.5 0.160692291
#> 3 6 1 led 26 0 25.5 0.307239765
#> 4 5 2 cfl 16 0 26.5 0.864969074
#> 5 6 3 linear 19 0 57.5 0.683945200
#> 6 1 4 led 12 0 14.5 0.802398642
#> 7 6 4 led 5 0 90.5 0.547211378
#> 8 5 4 linear 28 0 54.5 0.170614207
#> 9 1 2 linear 5 0 24.5 0.774603932
#> 10 1 2 cfl 24 0 96.5 0.144310557
#> 11 3 4 linear 21 0 93.5 0.682622390
#> 12 4 4 led 2 0 48.5 0.941718646
#> 13 4 4 cfl 2 0 100.5 0.918448627
#> 14 5 2 led 11 0 63.5 0.998143780
#> 15 4 1 led 21 0 53.5 0.644740176
#> 16 1 3 cfl 5 0 28.5 0.110610285
#> 17 1 3 linear 24 0 41.5 0.538868200
#> 18 4 3 led 29 0 19.5 0.998474289
#> 19 2 3 cfl 4 0 22.5 0.008167536
#> 20 5 1 led 20 0 56.5 0.740833476
#> 21 3 2 led 5 0 44.5 0.223967706
#> 22 1 4 led 27 0 32.5 0.199850583
#> 23 3 4 cfl 17 0 61.5 0.104023080
#> 24 1 3 cfl 11 0 34.5 0.399036247
#> 25 2 3 linear 29 0 65.5 0.600678235
#> 26 2 4 cfl 23 0 29.5 0.291611352
#> 27 6 2 linear 13 0 37.5 0.225021614
#> 28 2 3 led 17 0 62.5 0.879606956
#> 29 2 4 led 29 0 51.5 0.301759669
#> 30 5 1 led 11 0 54.5 0.793816856
#> 31 2 3 led 20 0 29.5 0.514759195
#> 32 3 4 linear 6 0 68.5 0.475085443
#> 33 1 4 led 21 0 34.5 0.133207588
#> 34 2 4 linear 25 0 80.5 0.164279355
#> 35 5 3 led 7 0 73.5 0.252937836
#> 36 6 2 led 15 0 99.5 0.554864929
#> 37 3 2 linear 6 0 44.5 0.377257874
#> 38 4 4 exit 15 15 13.0 0.000000000
#> 39 3 3 exit 10 10 13.0 0.000000000
#> 40 5 1 exit 15 15 13.0 0.000000000
#> 41 4 2 exit 1 1 13.0 0.000000000
#> 42 5 3 exit 10 10 13.0 0.000000000
#> 43 1 3 exit 14 14 13.0 0.000000000
#> 44 5 2 exit 12 12 13.0 0.000000000
#> 45 2 2 exit 30 30 13.0 0.000000000
#> 46 6 3 exit 28 28 13.0 0.000000000
#> 47 1 1 exit 14 14 13.0 0.000000000
#> 48 3 3 exit 21 21 13.0 0.000000000
#> 49 4 2 exit 13 13 13.0 0.000000000
#> 50 4 3 exit 12 12 13.0 0.000000000
Created on 2022-10-07 by the reprex package (v2.0.1)
mutate_cond is a great function, but it gives an error if there is an NA in the column(s) used to create the condition. I feel that a conditional mutate should simply leave such rows alone. This matches the behavior of filter(), which returns rows when the condition is TRUE, but omits both rows with FALSE and NA.
With this small change the function works like a charm:
mutate_cond <- function(.data, condition, ..., envir = parent.frame()) {
  condition <- eval(substitute(condition), .data, envir)
  condition[is.na(condition)] <- FALSE
  .data[condition, ] <- .data[condition, ] %>% mutate(...)
  .data
}
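A quick hedged check of the new behavior (my own toy example): rows where the condition evaluates to NA are now left untouched instead of causing an error.
df_na <- data.frame(g = c("a", NA, "b"), v = 1:3)
df_na %>% mutate_cond(g == "a", v = 100)
#>      g   v
#> 1    a 100
#> 2 <NA>   2
#> 3    b   3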
I don't actually see any changes to dplyr that would make this much easier. case_when is great for when there are multiple different conditions and outcomes for one column but it doesn't help for this case where you want to change multiple columns based on one condition. Similarly, recode saves typing if you are replacing multiple different values in one column but doesn't help with doing so in multiple columns at once. Finally, mutate_at etc. only apply conditions to the column names not the rows in the dataframe. You could potentially write a function for mutate_at that would do it but I can't figure out how you would make it behave differently for different columns.
That said, here is how I would approach it using nest from tidyr and map from purrr.
library(data.table)
library(dplyr)
library(tidyr)
library(purrr)
# Create some sample data
set.seed(1)
dt <- data.table(site = sample(1:6, 50, replace=T),
                 space = sample(1:4, 50, replace=T),
                 measure = sample(c('cfl', 'led', 'linear', 'exit'), 50,
                                  replace=T),
                 qty = round(runif(50) * 30),
                 qty.exit = 0,
                 delta.watts = sample(10.5:100.5, 50, replace=T),
                 cf = runif(50))
dt2 <- dt %>%
  nest(-measure) %>%
  mutate(data = if_else(
    measure == "exit",
    map(data, function(x) mutate(x, qty.exit = qty, cf = 0, delta.watts = 13)),
    data
  )) %>%
  unnest()
With the creation of rlang, a slightly modified version of Grothendieck's 1a example is possible, eliminating the need for the envir argument, as enquo() captures the environment that .p is created in automatically.
mutate_rows <- function(.data, .p, ...) {
  .p <- rlang::enquo(.p)
  .p_lgl <- rlang::eval_tidy(.p, .data)
  .data[.p_lgl, ] <- .data[.p_lgl, ] %>% mutate(...)
  .data
}
dt %>% mutate_rows(measure == "exit", qty.exit = qty, cf = 0, delta.watts = 13)
I think this answer has not been mentioned before. It runs almost as fast as the 'default' data.table solution.
Use base::replace()
df %>% mutate(qty.exit = replace(qty.exit, measure == 'exit', qty[measure == 'exit']),
              cf = replace(cf, measure == 'exit', 0),
              delta.watts = replace(delta.watts, measure == 'exit', 13))
replace recycles the replacement value, so when you want the values of column qty entered into column qty.exit, you have to subset qty as well... hence the qty[measure == 'exit'] in the first replacement.
Now, you will probably not want to retype measure == 'exit' all the time, so you can create an index vector containing that selection and use it in the calls above.
# build an index vector matching the condition
index.v <- which(df$measure == 'exit')
df %>% mutate(qty.exit = replace(qty.exit, index.v, qty[index.v]),
              cf = replace(cf, index.v, 0),
              delta.watts = replace(delta.watts, index.v, 13))
benchmarks
# Unit: milliseconds
# expr min lq mean median uq max neval
# data.table 1.005018 1.053370 1.137456 1.112871 1.186228 1.690996 100
# wimpel 1.061052 1.079128 1.218183 1.105037 1.137272 7.390613 100
# wimpel.index 1.043881 1.064818 1.131675 1.085304 1.108502 4.192995 100
At the expense of breaking with the usual dplyr syntax, you can use within from base. Note that within takes a single expression, so multiple assignments need to go inside braces:
dt %>% within({
  qty.exit[measure == 'exit'] <- qty[measure == 'exit']
  delta.watts[measure == 'exit'] <- 13
})
It seems to integrate well with the pipe, and you can do pretty much anything you want inside it.
