I want to add several columns (filled with NA) to a data.frame using dplyr. I've defined the names of the columns in a character vector. Usually, with only one new column, you can use the following pattern:
test %>%
mutate(!!new_column := NA)
However, I can't get it to work with across:
library(dplyr)
test <- data.frame(a = 1:3)
add_cols <- c("col_1", "col_2")
test %>%
mutate(across(!!add_cols, ~ NA))
#> Error: Problem with `mutate()` input `..1`.
#> x Can't subset columns that don't exist.
#> x Columns `col_1` and `col_2` don't exist.
#> ℹ Input `..1` is `across(c("col_1", "col_2"), ~NA)`.
test %>%
mutate(!!add_cols := NA)
#> Error: The LHS of `:=` must be a string or a symbol
expected_output <- data.frame(
a = 1:3,
col_1 = rep(NA, 3),
col_2 = rep(NA, 3)
)
expected_output
#> a col_1 col_2
#> 1 1 NA NA
#> 2 2 NA NA
#> 3 3 NA NA
Created on 2021-10-05 by the reprex package (v1.0.0)
With the first approach, the column names are correctly created, but across() then tries to find them among the existing column names. In the second approach, I can't use anything other than a single string on the LHS.
Is there a tidyverse solution or do I need to resort to the good old for loop?
The !! works for a single element, so we can loop over the names:
for(nm in add_cols) test <- test %>%
mutate(!! nm := NA)
-output
> test
a col_1 col_2
1 1 NA NA
2 2 NA NA
3 3 NA NA
Or another option is
test %>%
bind_cols(setNames(rep(list(NA), length(add_cols)), add_cols))
a col_1 col_2
1 1 NA NA
2 2 NA NA
3 3 NA NA
In base R, this is easier
test[add_cols] <- NA
Which can be used in a pipe
test %>%
`[<-`(., add_cols, value = NA)
a col_1 col_2
1 1 NA NA
2 2 NA NA
3 3 NA NA
across works only if the columns are already present, i.e. it is meant to loop across columns already in the data and modify them, or create new columns from them via the .names argument.
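For example, a small sketch using the existing column a and the .names argument (the _new suffix is just for illustration):
test %>%
mutate(across(a, ~ NA, .names = "{.col}_new"))
#   a a_new
# 1 1    NA
# 2 2    NA
# 3 3    NA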
We could make use of add_column from tibble:
library(tibble)
library(janitor)
add_column(test, !!! add_cols) %>%
clean_names %>%
mutate(across(all_of(add_cols), ~ NA))
a col_1 col_2
1 1 NA NA
2 2 NA NA
3 3 NA NA
Another approach is to build the new columns with purrr::map_dfc() and let mutate() splice them in:
library(tidyverse)
f <- function(nm) tibble(!!nm := NA)
mutate(test, map_dfc(add_cols, ~ f(.x)))
Not sure what I'm doing wrong, but I'm struggling to get, per row, the index of the last column (among several columns) that is not NA.
Using tidyverse and across, I'm getting as many output columns as input columns, where I'd expect a single output column with the index of the respective column.
dat <- data.frame(id = c(1, 2, 3),
x = c(1, NA, NA),
y = c(NA, NA, NA),
z = c(3, 1, NA))
I tried the following (among others, inspired by this one: Return last data frame column which is not NA):
dat %>%
mutate(last = across(-id, ~max.col(!is.na(.x), ties.method="last")))
Expected outcome would be:
id x y z last
1 1 1 NA 3 3
2 2 NA NA 1 3
3 3 NA NA NA NA
The problems with your current flow:
across is going to pass one column at a time to the function/expression; your code needs a row or a matrix/frame. For this, across is not appropriate.
Your desired output of NA for the last row is inconsistent with the logic: !is.na(.x) should return c(F,F,F), which still has a max. Your logic then requires a custom function, since you need to handle it differently.
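A quick illustration of the second point: even when the whole row is FALSE, max.col() still picks a column:
max.col(matrix(FALSE, nrow = 1, ncol = 3), ties.method = "last")
# [1] 3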
Try this adaptation of max.col into a custom function:
max.col.notna <- function (m, ties.method = c("random", "first", "last")) {
ties.method <- match.arg(ties.method)
tieM <- which(ties.method == eval(formals()[["ties.method"]]))
out <- .Internal(max.col(as.matrix(m), tieM))
m[] <- !m %in% c(0,NA) # 'm[] <-' is required to maintain the matrix shape
replace(out, rowSums(m) == 0, NA_integer_)
}
dat %>%
mutate(last = max.col.notna(!is.na(select(., -id)), ties.method = "last"))
# id x y z last
# 1 1 1 NA 3 3
# 2 2 NA NA 1 3
# 3 3 NA NA NA NA
Note: I've edited/changed the function several times, trying to ensure a consistent API to the intent of this custom function. As it stands now, the notna in the function name to me reflects a sense of "emptiness" (either 0 or NA). With this logic, the function is usable with logical (as here) and numeric data. Perhaps it's overkill, but I prefer APIs that operate consistently/predictably across input classes.
tidyverse isn't really suited to row-wise operations. Most of the time, reshaping the data into long format (as shown in @Rui Barradas' answer) is a good approach.
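A sketch of that long-format idea (the referenced answer isn't reproduced here, so take this as an assumed shape of it):
library(dplyr)
library(tidyr)
dat %>%
pivot_longer(-id, names_to = "col") %>%
group_by(id) %>%
summarise(last = {idx <- which(!is.na(value)); if (length(idx)) max(idx) else NA_integer_}) %>%
left_join(dat, ., by = "id")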
Here is one way using rowwise, keeping the data wide:
library(dplyr)
dat %>%
rowwise() %>%
mutate(last = {ind = which(!is.na(c_across(x:z)));
if(length(ind)) tail(ind, 1) else NA})
# id x y z last
# <dbl> <dbl> <lgl> <dbl> <int>
#1 1 1 NA 3 3
#2 2 NA NA 1 3
#3 3 NA NA NA NA
A base R solution:
dat$last <- apply(dat[, 2:4], 1,
FUN = function(x) {idx <- which(!is.na(x)); if (length(idx)) max(idx) else NA_integer_})
dat
# id x y z last
# 1 1 1 NA 3 3
# 2 2 NA NA 1 3
# 3 3 NA NA NA NA
You want to use c_across() and rowwise() to do this. rowwise() works similarly to group_by_all(), except it is more explicit. c_across() creates flat vectors out of columns (whereas across() creates tibbles).
If we first define a function separately to pull out the index of the last non-NA value, or return NA if there are none:
get_last <- function(x){
y <- c(NA,which(!is.na(x)))
y[length(y)]
}
We can then apply that function to c_across() of the variables we need, but only after converting into a rowwise_df using rowwise():
dat %>%
rowwise() %>%
mutate(last = get_last(c_across(x:z)))
base R
df <- data.frame(id = c(1, 2, 3),
x = c(1, NA, NA),
y = c(NA, NA, NA),
z = c(3, 1, NA))
df$last <- apply(df[-1], 1, function(x) max(as.vector(!is.na(x)) * seq_len(length(x))))
df$last[df$last == 0] <- NA
df
#> id x y z last
#> 1 1 1 NA 3 3
#> 2 2 NA NA 1 3
#> 3 3 NA NA NA NA
Created on 2020-12-29 by the reprex package (v0.3.0)
Starting with a vector of NAs, you could step through each column and, wherever the given element passes your check_fun (returns TRUE), assign the index of that column to that element. The difference from the other answers here is that this does not check the condition row-wise or create a matrix from the data. I'm not sure whether creating two new temp vectors for each column is better or worse than just converting the entire data to a matrix first, though.
library(tidyverse) # purrr and dplyr
last_matching_ind <- function(dat, check_fun){
check_fun <- as_mapper(check_fun)
reduce2(dat, seq_along(dat), .init = NA_integer_,
function(prev, dat, ind) if_else(check_fun(dat), ind, prev) )
}
dat %>%
mutate(last = last_matching_ind(dat[-1], ~ !is.na(.x)))
# id x y z last
# 1 1 1 NA 3 3
# 2 2 NA NA 1 3
# 3 3 NA NA NA NA
Let's consider some random data filled with NAs.
df1=data.frame(sample(0:1,3,replace=T),sample(0:1,3,replace=T),sample(0:1,3,replace=T))
df2=data.frame(rnorm(3),runif(3),rexp(3))
df2[df1==1]<-NA
df2
rnorm.3. runif.3. rexp.3.
1 NA NA NA
2 0.6992316 NA 0.638913
3 0.6520083 0.1090714 NA
I want to replace those NAs with the formula: 2*sd(x) + mean(x),
where sd is the standard deviation. I want to do it, of course, with respect to the proper columns, so the NA in row 1, column 1 should be replaced by 2*sd(c(0.6992316, 0.6520083)) + mean(c(0.6992316, 0.6520083)), and so on.
I tried df2[df2==NA] <- 2*apply(df2,2,sd,na.rm=T) + apply(df2,2,mean,na.rm=T), but nothing happened. Do you have an idea how it can be done?
I would probably write a (vectorized) function using ifelse, then apply it to all the columns using mutate(across(everything())):
library(dplyr)
f <- function(x) ifelse(!is.na(x), x,
2 * sd(x, na.rm = TRUE) + mean(x, na.rm = TRUE))
df2 %>%
mutate(across(everything(), f))
#> rnorm.3. runif.3. rexp.3.
#> 1 0.7424038 NA NA
#> 2 0.6992316 NA 0.638913
#> 3 0.6520083 0.1090714 NA
Note that in your example this doesn't do anything for the last two columns because they only have a single non-NA value, and calling sd on a single non-NA value produces NA.
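A quick check of that:
sd(0.65)
#> [1] NA
2 * sd(0.65) + mean(0.65)
#> [1] NA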
If, however, we do it with only one NA in each column (as we get by re-running your code after setting set.seed(1)), we can see this working:
set.seed(1)
df1 <- data.frame(sample(0:1, 3, replace = TRUE),
sample(0:1, 3, replace = TRUE),
sample(0:1, 3, replace = TRUE))
df2 <- data.frame(rnorm(3), runif(3), rexp(3))
df2[df1 == 1] <- NA
df2
#> rnorm.3. runif.3. rexp.3.
#> 1 -1.5399500 0.4976992 1.2132879
#> 2 NA NA 0.5548904
#> 3 -0.2947204 0.9919061 NA
df2 %>% mutate(across(everything(), f))
#> rnorm.3. runif.3. rexp.3.
#> 1 -1.5399500 0.4976992 1.2132879
#> 2 0.8436853 1.4437167 0.5548904
#> 3 -0.2947204 0.9919061 1.8152038
Does this work? The second column still has NA because there is only 1 non-NA value: the standard deviation of a single value is NA, and adding the mean (or any value) to NA is also NA, hence it doesn't get imputed.
library(dplyr)
library(tidyr)
df2 %>% mutate(across(everything(), ~ replace_na(., 2*sd(., na.rm = T) + mean(., na.rm = T))))
rnorm.3. runif.3. rexp.3.
1 -0.3030444 NA 0.07332792
2 -0.2226609 NA 1.76854904
3 -0.3909707 0.9099274 0.95892457
I wonder why my conversion of the "t5" column was not successful.
The "t5" column is all characters; I want to convert it into a numeric column named "t5.num" in the tibble, leaving non-numeric values as NA.
My code below:
First of all I assigned the name, then tried to mutate the column, but it did not work:
d <- tibble(id = c(3, 7, 1, 10,100), t5 = c("10", "<1", "NA", "8","78"))
convert_column <- function(data, col_name) {
new_col_name <- paste0(rlang::enquo(col_name),".num")
data %>%
mutate(new_col_name = as.numeric(!!col_name))
}
d %>% convert_column("t5")
Can someone point out what is wrong with my code? Thanks for your help!
To get new_col_name you don't need enquo. To assign new_col_name as the name of the column, use !! together with :=. As you are passing col_name as a string, we need to convert it to a symbol (sym) and then evaluate it (!!).
library(dplyr)
library(rlang)
convert_column <- function(data, col_name) {
new_col_name <- paste0(col_name,".num")
data %>% mutate(!!new_col_name := as.numeric(!!sym(col_name)))
}
d %>% convert_column("t5")
# A tibble: 5 x 3
# id t5 t5.num
# <dbl> <chr> <dbl>
#1 3 10 10
#2 7 <1 NA
#3 1 NA NA
#4 10 8 8
#5 100 78 78
This returns a warning while converting "<1" to numeric, before turning it into NA.
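If the coercion warning is unwanted, one option (just a sketch, not required) is to wrap the conversion in suppressWarnings():
convert_column <- function(data, col_name) {
new_col_name <- paste0(col_name, ".num")
data %>% mutate(!!new_col_name := suppressWarnings(as.numeric(!!sym(col_name))))
}
d %>% convert_column("t5")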
What is the best function to use if I want to replace certain variables with NA based on a conditional?
If status is NA, then score_1:score_3 should be NA.
I tried:
if(df2$status == NA){
df2$score_2 <- NA
}else{
df2$score_2 <- df$score_2
}
Thanks in advance
One option in base R is to find the NAs in 'Status' and assign NA to the columns that have 'Score' in the column name:
i1 <- is.na(df2$Status)
df2[i1, grep("^Score_\\d+$", names(df2))] <- NA
Or an option in dplyr
library(dplyr)
df2 %>%
mutate_at(vars(starts_with('Score')), ~ replace(., is.na(Status), NA))
You can do this by finding out which rows of the data frame have NA in Status and then setting the score columns in those rows to NA.
df <- data.frame(client_id = 1:4,
Date = 1:4,
Status = c(1, NA, 1, NA),
Score1 = runif(4)*100,
Score2 = runif(4)*100,
Score3 = runif(4)*100)
idx <- is.na(df$Status)
df[idx, 4:6] <- NA
df
#> client_id Date Status Score1 Score2 Score3
#> 1 1 1 1 48.08677 16.62185 91.80062
#> 2 2 2 NA NA NA NA
#> 3 3 3 1 14.04552 64.55724 56.45998
#> 4 4 4 NA NA NA NA
There are a lot of posts about replacing NA values. I am aware that one could replace NAs in the following table/frame with the following:
x[is.na(x)]<-0
But what if I want to restrict it to only certain columns? Let me show you an example.
First, let's start with a dataset.
set.seed(1234)
x <- data.frame(a=sample(c(1,2,NA), 10, replace=T),
b=sample(c(1,2,NA), 10, replace=T),
c=sample(c(1:5,NA), 10, replace=T))
Which gives:
a b c
1 1 NA 2
2 2 2 2
3 2 1 1
4 2 NA 1
5 NA 1 2
6 2 NA 5
7 1 1 4
8 1 1 NA
9 2 1 5
10 2 1 1
Ok, so I only want to restrict the replacement to columns 'a' and 'b'. My attempt was:
x[is.na(x), 1:2]<-0
and:
x[is.na(x[1:2])]<-0
Which does not work.
My data.table attempt, where y<-data.table(x), was obviously never going to work:
y[is.na(y[,list(a,b)]), ]
I want to pass columns inside the is.na argument but that obviously wouldn't work.
I would like to do this in a data.frame and a data.table. My end goal is to recode the 1:2 to 0:1 in 'a' and 'b' while keeping 'c' the way it is, since it is not a logical variable. I have a bunch of columns so I don't want to do it one by one. And, I'd just like to know how to do this.
Do you have any suggestions?
You can do:
x[, 1:2][is.na(x[, 1:2])] <- 0
or better (IMHO), use the variable names:
x[c("a", "b")][is.na(x[c("a", "b")])] <- 0
In both cases, 1:2 or c("a", "b") can be replaced by a pre-defined vector.
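For example, with a pre-defined vector (the name cols_to_fill is just illustrative):
cols_to_fill <- c("a", "b")
x[cols_to_fill][is.na(x[cols_to_fill])] <- 0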
Building on @Robert McDonald's tidyr::replace_na() answer, here are some dplyr options for controlling in which columns the NAs are replaced:
library(tidyverse)
# by column type:
x %>%
mutate_if(is.numeric, ~replace_na(., 0))
# select columns defined in vars(col1, col2, ...):
x %>%
mutate_at(vars(a, b, c), ~replace_na(., 0))
# all columns:
x %>%
mutate_all(~replace_na(., 0))
Edit 2020-06-15
Since data.table 1.12.4 (Oct 2019), data.table has two functions to facilitate this: nafill and setnafill.
nafill operates on columns:
cols = c('a', 'b')
y[ , (cols) := lapply(.SD, nafill, fill=0), .SDcols = cols]
setnafill operates on tables (the replacements happen by-reference/in-place)
setnafill(y, cols=cols, fill=0)
# print y to show the effect
y[]
This will also be more efficient than the other options; see ?nafill for more, including the last-observation-carried-forward (LOCF) and next-observation-carried-backward (NOCB) versions of NA imputation for time series.
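For instance, a tiny sketch of the LOCF variant:
library(data.table)
nafill(c(1, NA, NA, 4), type = "locf")
# [1] 1 1 1 4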
This will work for your data.table version:
for (col in c("a", "b")) y[is.na(get(col)), (col) := 0]
Alternatively, as David Arenburg points out below, you can use set (side benefit - you can use it either on data.frame or data.table):
for (col in 1:2) set(x, which(is.na(x[[col]])), col, 0)
This is now trivial in tidyr with replace_na(). The function appears to work for data.tables as well as data.frames:
tidyr::replace_na(x, list(a=0, b=0))
Not sure if this is more concise, but this function will also find and allow replacement of NAs (or any value you like) in selected columns of a data.table:
update.mat <- function(dt, cols, criteria) {
require(data.table)
x <- as.data.frame(which(criteria==TRUE, arr.ind = TRUE))
y <- as.matrix(subset(x, x$col %in% which((names(dt) %in% cols), arr.ind = TRUE)))
y
}
To apply it:
y[update.mat(y, c("a", "b"), is.na(y))] <- 0
The function creates a matrix of the selected columns and rows (cell coordinates) that meet the input criteria (in this case is.na == TRUE).
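For instance, you can peek at the coordinates it produces before assigning (a sketch; the exact rows depend on where the NAs fall in y):
idx <- update.mat(y, c("a", "b"), is.na(y))
idx
# a two-column matrix of (row, col) positions for the NA cells in columns a and b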
We can solve it the data.table way with the tidyr::replace_na function and lapply:
library(data.table)
library(tidyr)
setDT(df)
df[,c("a","b","c"):=lapply(.SD,function(x) replace_na(x,0)),.SDcols=c("a","b","c")]
The same idea also helps when pasting together columns that contain NAs: first replace_na(x, ""), then use stringr::str_c to combine the columns.
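A small sketch of that paste idea (the table dt and its columns are made up for illustration):
library(data.table)
library(tidyr)
library(stringr)
dt <- data.table(a = c("x", NA), b = c(NA, "y"))
dt[, combined := str_c(replace_na(a, ""), replace_na(b, ""))]
dt$combined
# [1] "x" "y"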
Starting from the data.table y, you can just write:
y[, (cols):=lapply(.SD, function(i){i[is.na(i)] <- 0; i}), .SDcols = cols]
Don't forget to library(data.table) before creating y and running this command.
This needed a bit extra for dealing with NA's in factors.
Found a useful function here, which you can then use with mutate_at or mutate_if:
replace_factor_na <- function(x){
x <- as.character(x)
x <- if_else(is.na(x), 'NONE', x)
x <- as.factor(x)
}
df <- df %>%
mutate_at(
vars(vector_of_column_names),
replace_factor_na
)
Or apply to all factor columns:
df <- df %>%
mutate_if(is.factor, replace_factor_na)
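With current dplyr (>= 1.0), the same idea can also be written with across() and where() (a sketch):
df <- df %>%
mutate(across(where(is.factor), replace_factor_na))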
For a specific column, there is an alternative with sapply
DF <- data.frame(A = letters[1:5],
B = letters[6:10],
C = c(2, 5, NA, 8, NA))
DF_NEW <- sapply(seq_len(nrow(DF)),
function(i) ifelse(is.na(DF[i, 3]), 0, DF[i, 3]))
DF[, 3] <- DF_NEW
DF
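Since ifelse() is already vectorised, the same result can be obtained without looping over the rows (a sketch):
DF$C <- ifelse(is.na(DF$C), 0, DF$C)
DF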
For completeness, building upon @sbha's answer, here is the tidyverse version with the across() function that's available in dplyr since version 1.0 (which supersedes the *_at() variants, among others):
# random data
set.seed(1234)
x <- data.frame(a = sample(c(1, 2, NA), 10, replace = T),
b = sample(c(1, 2, NA), 10, replace = T),
c = sample(c(1:5, NA), 10, replace = T))
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(tidyr)
# with the magrittr pipe
x %>% mutate(across(1:2, ~ replace_na(.x, 0)))
#> a b c
#> 1 2 2 5
#> 2 2 2 2
#> 3 1 0 5
#> 4 0 2 2
#> 5 1 2 NA
#> 6 1 2 3
#> 7 2 2 4
#> 8 2 1 4
#> 9 0 0 3
#> 10 2 0 1
# with the native pipe (since R 4.1)
x |> mutate(across(1:2, ~ replace_na(.x, 0)))
#> a b c
#> 1 2 2 5
#> 2 2 2 2
#> 3 1 0 5
#> 4 0 2 2
#> 5 1 2 NA
#> 6 1 2 3
#> 7 2 2 4
#> 8 2 1 4
#> 9 0 0 3
#> 10 2 0 1
Created on 2021-12-08 by the reprex package (v2.0.1)
It's quite handy with data.table and stringr:
library(data.table)
library(stringr)
setDT(x)  # x needs to be a data.table for the .SD syntax below
x[, lapply(.SD, function(xx) str_replace_na(xx, 0))]
# note: str_replace_na() returns character vectors, so numeric columns come back as character
FYI, this works fine for me (note: this is C#, operating on a System.Data.DataTable):
DataTable DT = new DataTable();
DT = DT.AsEnumerable().Select(R =>
{
R["Campo1"] = valor;
return (R);
}).ToArray().CopyToDataTable();