How do I replace NA values with zeros in an R dataframe?

I have a data frame and some columns have NA values.
How do I replace these NA values with zeroes?

See my comment on @gsk3's answer. A simple example:
> m <- matrix(sample(c(NA, 1:10), 100, replace = TRUE), 10)
> d <- as.data.frame(m)
> d
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 4 3 NA 3 7 6 6 10 6 5
2 9 8 9 5 10 NA 2 1 7 2
3 1 1 6 3 6 NA 1 4 1 6
4 NA 4 NA 7 10 2 NA 4 1 8
5 1 2 4 NA 2 6 2 6 7 4
6 NA 3 NA NA 10 2 1 10 8 4
7 4 4 9 10 9 8 9 4 10 NA
8 5 8 3 2 1 4 5 9 4 7
9 3 9 10 1 9 9 10 5 3 3
10 4 2 2 5 NA 9 7 2 5 5
> d[is.na(d)] <- 0
> d
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 4 3 0 3 7 6 6 10 6 5
2 9 8 9 5 10 0 2 1 7 2
3 1 1 6 3 6 0 1 4 1 6
4 0 4 0 7 10 2 0 4 1 8
5 1 2 4 0 2 6 2 6 7 4
6 0 3 0 0 10 2 1 10 8 4
7 4 4 9 10 9 8 9 4 10 0
8 5 8 3 2 1 4 5 9 4 7
9 3 9 10 1 9 9 10 5 3 3
10 4 2 2 5 0 9 7 2 5 5
There's no need to apply apply. =)
EDIT
You should also take a look at the norm package. It has a lot of nice features for missing data analysis. =)

The dplyr hybridized options are now around 30% faster than the Base R subset reassignment. On a 100M-datapoint dataframe, mutate_all(~replace(., is.na(.), 0)) runs about half a second faster than the base R d[is.na(d)] <- 0 option. What you specifically want to avoid is using ifelse() or if_else(). (The complete 600-trial analysis ran to over 4.5 hours, mostly due to including these approaches.) Please see the benchmark analyses below for the complete results.
If you are struggling with massive dataframes, data.table is the fastest option of all: 40% faster than the standard Base R approach. It also modifies the data in place, effectively allowing you to work with nearly twice as much data at once.
A clustering of other helpful tidyverse replacement approaches
Locationally:
index: mutate_at(c(5:10), ~replace(., is.na(.), 0))
direct reference: mutate_at(vars(var5:var10), ~replace(., is.na(.), 0))
fixed match: mutate_at(vars(contains("1")), ~replace(., is.na(.), 0))
(or, in place of contains(), try ends_with() or starts_with())
pattern match: mutate_at(vars(matches("\\d{2}")), ~replace(., is.na(.), 0))
Conditionally:
(change just a single type and leave the other types alone)
integers: mutate_if(is.integer, ~replace(., is.na(.), 0))
numbers: mutate_if(is.numeric, ~replace(., is.na(.), 0))
strings: mutate_if(is.character, ~replace(., is.na(.), 0))
## The Complete Analysis
Updated for dplyr 0.8.0: the functions use purrr-style ~ lambdas, replacing the deprecated funs() arguments.
### Approaches tested:
# Base R:
baseR.sbst.rssgn <- function(x) { x[is.na(x)] <- 0; x }
baseR.replace <- function(x) { replace(x, is.na(x), 0) }
baseR.for <- function(x) { for(j in 1:ncol(x))
    x[[j]][is.na(x[[j]])] = 0 }
# tidyverse
## dplyr
dplyr_if_else <- function(x) { mutate_all(x, ~if_else(is.na(.), 0, .)) }
dplyr_coalesce <- function(x) { mutate_all(x, ~coalesce(., 0)) }
## tidyr
tidyr_replace_na <- function(x) { replace_na(x, as.list(setNames(rep(0, 10), as.list(c(paste0("var", 1:10)))))) }
## hybrid
hybrd.ifelse <- function(x) { mutate_all(x, ~ifelse(is.na(.), 0, .)) }
hybrd.replace_na <- function(x) { mutate_all(x, ~replace_na(., 0)) }
hybrd.replace <- function(x) { mutate_all(x, ~replace(., is.na(.), 0)) }
hybrd.rplc_at.idx<- function(x) { mutate_at(x, c(1:10), ~replace(., is.na(.), 0)) }
hybrd.rplc_at.nse<- function(x) { mutate_at(x, vars(var1:var10), ~replace(., is.na(.), 0)) }
hybrd.rplc_at.stw<- function(x) { mutate_at(x, vars(starts_with("var")), ~replace(., is.na(.), 0)) }
hybrd.rplc_at.ctn<- function(x) { mutate_at(x, vars(contains("var")), ~replace(., is.na(.), 0)) }
hybrd.rplc_at.mtc<- function(x) { mutate_at(x, vars(matches("\\d+")), ~replace(., is.na(.), 0)) }
hybrd.rplc_if <- function(x) { mutate_if(x, is.numeric, ~replace(., is.na(.), 0)) }
# data.table
library(data.table)
DT.for.set.nms <- function(x) { for (j in names(x))
    set(x,which(is.na(x[[j]])),j,0) }
DT.for.set.sqln <- function(x) { for (j in seq_len(ncol(x)))
    set(x,which(is.na(x[[j]])),j,0) }
DT.nafill <- function(x) { nafill(x, fill=0) }     # was nafill(df, ...); the function argument is x
DT.setnafill <- function(x) { setnafill(x, fill=0) }
### The code for this analysis:
library(microbenchmark)
# 20% NA filled dataframe of 10 Million rows and 10 columns
set.seed(42) # to recreate the exact dataframe
dfN <- as.data.frame(matrix(sample(c(NA, as.numeric(1:4)), 1e7*10, replace = TRUE),
                            dimnames = list(NULL, paste0("var", 1:10)),
                            ncol = 10))
# Running 600 trials with each replacement method
# (the functions are executed locally - so that the original dataframe remains unmodified in all cases)
perf_results <- microbenchmark(
hybrd.ifelse = hybrd.ifelse(copy(dfN)),
dplyr_if_else = dplyr_if_else(copy(dfN)),
hybrd.replace_na = hybrd.replace_na(copy(dfN)),
baseR.sbst.rssgn = baseR.sbst.rssgn(copy(dfN)),
baseR.replace = baseR.replace(copy(dfN)),
dplyr_coalesce = dplyr_coalesce(copy(dfN)),
tidyr_replace_na = tidyr_replace_na(copy(dfN)),
hybrd.replace = hybrd.replace(copy(dfN)),
hybrd.rplc_at.ctn= hybrd.rplc_at.ctn(copy(dfN)),
hybrd.rplc_at.nse= hybrd.rplc_at.nse(copy(dfN)),
baseR.for = baseR.for(copy(dfN)),
hybrd.rplc_at.idx= hybrd.rplc_at.idx(copy(dfN)),
DT.for.set.nms = DT.for.set.nms(copy(dfN)),
DT.for.set.sqln = DT.for.set.sqln(copy(dfN)),
times = 600L
)
### Summary of Results
> print(perf_results)
Unit: milliseconds
expr min lq mean median uq max neval
hybrd.ifelse 6171.0439 6339.7046 6425.221 6407.397 6496.992 7052.851 600
dplyr_if_else 3737.4954 3877.0983 3953.857 3946.024 4023.301 4539.428 600
hybrd.replace_na 1497.8653 1706.1119 1748.464 1745.282 1789.804 2127.166 600
baseR.sbst.rssgn 1480.5098 1686.1581 1730.006 1728.477 1772.951 2010.215 600
baseR.replace 1457.4016 1681.5583 1725.481 1722.069 1766.916 2089.627 600
dplyr_coalesce 1227.6150 1483.3520 1524.245 1519.454 1561.488 1996.859 600
tidyr_replace_na 1248.3292 1473.1707 1521.889 1520.108 1570.382 1995.768 600
hybrd.replace 913.1865 1197.3133 1233.336 1238.747 1276.141 1438.646 600
hybrd.rplc_at.ctn 916.9339 1192.9885 1224.733 1227.628 1268.644 1466.085 600
hybrd.rplc_at.nse 919.0270 1191.0541 1228.749 1228.635 1275.103 2882.040 600
baseR.for 869.3169 1180.8311 1216.958 1224.407 1264.737 1459.726 600
hybrd.rplc_at.idx 839.8915 1189.7465 1223.326 1228.329 1266.375 1565.794 600
DT.for.set.nms 761.6086 915.8166 1015.457 1001.772 1106.315 1363.044 600
DT.for.set.sqln 787.3535 918.8733 1017.812 1002.042 1122.474 1321.860 600
### Boxplot of Results
library(ggplot2)
ggplot(perf_results, aes(x=expr, y=time/10^9)) +
geom_boxplot() +
xlab('Expression') +
ylab('Elapsed Time (Seconds)') +
scale_y_continuous(breaks = seq(0,7,1)) +
coord_flip()
Color-coded Scatterplot of Trials (with y-axis on a log scale)
qplot(y=time/10^9, data=perf_results, colour=expr) +
labs(y = "log10 Scaled Elapsed Time per Trial (secs)", x = "Trial Number") +
coord_cartesian(ylim = c(0.75, 7.5)) +
scale_y_log10(breaks=c(0.75, 0.875, 1, 1.25, 1.5, 1.75, seq(2, 7.5)))
A note on the other high performers
When the datasets get larger, tidyr's replace_na had historically pulled out in front. With the current collection of 100M data points to run through, it performs almost exactly as well as a Base R for loop. I am curious to see what happens for different-sized dataframes.
Additional examples for the mutate and summarize _at and _all function variants can be found here: https://rdrr.io/cran/dplyr/man/summarise_all.html
Additionally, I found helpful demonstrations and collections of examples here: https://blog.exploratory.io/dplyr-0-5-is-awesome-heres-why-be095fd4eb8a
Attributions and Appreciations
With special thanks to:
Tyler Rinker and Akrun for demonstrating microbenchmark.
alexis_laz for helping me understand the use of local(), and (with Frank's patient help, too) the role that silent coercion plays in speeding up many of these approaches.
ArthurYip for the poke to add the newer coalesce() function and update the analysis.
Gregor for the nudge to figure out the data.table functions well enough to finally include them in the lineup.
Base R For loop: alexis_laz
data.table For Loops: Matt_Dowle
Roman for explaining what is.numeric() really tests.
(Of course, please reach over and give them upvotes, too if you find those approaches useful.)
Note on my use of numerics: If you do have a pure integer dataset, all of your functions will run faster. Please see alexis_laz's work for more information. IRL, I can't recall encountering a data set containing more than 10-15% integers, so I am running these tests on fully numeric dataframes.
Hardware Used
3.9 GHz CPU with 24 GB RAM

For a single vector:
x <- c(1,2,NA,4,5)
x[is.na(x)] <- 0
For a data.frame, make a function out of the above, then apply it to the columns.
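A minimal sketch of that suggestion (the helper name zero_na is mine):
zero_na <- function(v) { v[is.na(v)] <- 0; v }  # fix one vector
df[] <- lapply(df, zero_na)  # apply to every column; df[] keeps the data.frame class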
Please provide a reproducible example next time as detailed here:
How to make a great R reproducible example?

dplyr example:
library(dplyr)
df1 <- df1 %>%
mutate(myCol1 = if_else(is.na(myCol1), 0, myCol1))
Note: This works per selected column; if we need to do this for all columns, see @reidjax's answer using mutate_each.

If we are trying to replace NAs when exporting, for example when writing to csv, then we can use:
write.csv(data, "data.csv", na = "0")
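Two related knobs worth knowing (my note, not part of the original answer): write.table() takes the same na= argument, and read.csv()'s na.strings argument controls the reverse mapping when reading the file back in:
write.csv(data, "data.csv", na = "0", row.names = FALSE)
data2 <- read.csv("data.csv")  # the "0"s written for NA come back as ordinary numeric zeros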

It is also possible to use tidyr::replace_na.
library(tidyr)
df <- df %>% mutate_all(funs(replace_na(.,0)))
Edit (dplyr > 1.0.0):
df %>% mutate(across(everything(), .fns = ~replace_na(.,0)))

I know the question is already answered, but doing it this way might be more useful to some:
Define this function:
na.zero <- function(x) {
    x[is.na(x)] <- 0
    return(x)
}
Now whenever you need to convert NA's in a vector to zero's you can do:
na.zero(some.vector)

A more general approach is to use replace() on a matrix or vector to replace NA with 0.
For example:
> x <- c(1,2,NA,NA,1,1)
> x1 <- replace(x,is.na(x),0)
> x1
[1] 1 2 0 0 1 1
This is also an alternative to using ifelse() in dplyr
df = data.frame(col = c(1,2,NA,NA,1,1))
df <- df %>%
mutate(col = replace(col,is.na(col),0))

With dplyr 0.5.0, you can use coalesce function which can be easily integrated into %>% pipeline by doing coalesce(vec, 0). This replaces all NAs in vec with 0:
Say we have a data frame with NAs:
library(dplyr)
df <- data.frame(v = c(1, 2, 3, NA, 5, 6, 8))
df
# v
# 1 1
# 2 2
# 3 3
# 4 NA
# 5 5
# 6 6
# 7 8
df %>% mutate(v = coalesce(v, 0))
# v
# 1 1
# 2 2
# 3 3
# 4 0
# 5 5
# 6 6
# 7 8

To replace all NAs in a dataframe you can use:
df %>% replace(is.na(.), 0)
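A self-contained illustration (mine, not from the answer). Note the dot placeholder is magrittr syntax, so this needs the %>% pipe:
library(dplyr)
df <- data.frame(a = c(1, NA, 3), b = c(NA, 2, NA))
df %>% replace(is.na(.), 0)
#   a b
# 1 1 0
# 2 0 2
# 3 3 0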

I would've commented on @ianmunoz's post but I don't have enough reputation. You can combine dplyr's mutate_each and replace to take care of the NA-to-0 replacement. Using the data frame from @aL3xa's answer...
> m <- matrix(sample(c(NA, 1:10), 100, replace = TRUE), 10)
> d <- as.data.frame(m)
> d
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 4 8 1 9 6 9 NA 8 9 8
2 8 3 6 8 2 1 NA NA 6 3
3 6 6 3 NA 2 NA NA 5 7 7
4 10 6 1 1 7 9 1 10 3 10
5 10 6 7 10 10 3 2 5 4 6
6 2 4 1 5 7 NA NA 8 4 4
7 7 2 3 1 4 10 NA 8 7 7
8 9 5 8 10 5 3 5 8 3 2
9 9 1 8 7 6 5 NA NA 6 7
10 6 10 8 7 1 1 2 2 5 7
> library(lazyeval)  # provides interp()
> d %>% mutate_each( funs_( interp( ~replace(., is.na(.),0) ) ) )
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 4 8 1 9 6 9 0 8 9 8
2 8 3 6 8 2 1 0 0 6 3
3 6 6 3 0 2 0 0 5 7 7
4 10 6 1 1 7 9 1 10 3 10
5 10 6 7 10 10 3 2 5 4 6
6 2 4 1 5 7 0 0 8 4 4
7 7 2 3 1 4 10 0 8 7 7
8 9 5 8 10 5 3 5 8 3 2
9 9 1 8 7 6 5 0 0 6 7
10 6 10 8 7 1 1 2 2 5 7
We're using standard evaluation (SE) here, which is why we need the underscore on funs_. We also use lazyeval's interp() with ~, where the . refers to "everything we are working with", i.e. the data frame. Now there are zeros!

Another example, using the imputeTS package:
library(imputeTS)
na.replace(yourDataframe, 0)

Dedicated functions for this purpose, nafill and setnafill, are available in data.table.
Whenever possible, they distribute the columns to be computed over multiple threads.
library(data.table)
ans_df <- nafill(df, fill=0)
# or even faster, in-place
setnafill(df, fill=0)
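A side note of mine: the thread count data.table uses can be inspected and capped with its own helpers:
getDTthreads()   # how many threads data.table will use
setDTthreads(4)  # cap it, e.g. on a shared machine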

If you want to replace NAs in factor variables, this might be useful:
n <- length(levels(data.vector))+1
data.vector <- as.numeric(data.vector)
data.vector[is.na(data.vector)] <- n
data.vector <- as.factor(data.vector)
levels(data.vector) <- c("level1","level2",...,"leveln", "NAlevel")
It transforms the factor vector into a numeric vector and adds another artificial numeric factor level, which is then transformed back into a factor vector with one extra "NA level" of your choice.
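A shorter base R route to the same end (my sketch, not part of this answer) uses addNA(), which promotes NA to an explicit level you can then rename:
f <- factor(c("a", "b", NA, "a"))
f <- addNA(f)                             # NA becomes a real factor level
levels(f)[is.na(levels(f))] <- "NAlevel"  # give that level a printable name
f
# [1] a       b       NAlevel a
# Levels: a b NAlevel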

dplyr >= 1.0.0
In newer versions of dplyr:
across() supersedes the family of "scoped variants" like summarise_at(), summarise_if(), and summarise_all().
df <- data.frame(a = c(LETTERS[1:3], NA), b = c(NA, 1:3))
library(tidyverse)
df %>%
mutate(across(where(anyNA), ~ replace_na(., 0)))
a b
1 A 0
2 B 1
3 C 2
4 0 3
This code will coerce 0 to character in the first column. To replace NA based on the column type you can use a purrr-style formula in where():
df %>%
mutate(across(where(~ anyNA(.) & is.character(.)), ~ replace_na(., "0")))

No need to use any library.
df <- data.frame(a=c(1,3,5,NA))
df$a[is.na(df$a)] <- 0
df

You can use replace()
For example:
> x <- c(-1,0,1,0,NA,0,1,1)
> x1 <- replace(x,5,1)
> x1
[1] -1 0 1 0 1 0 1 1
> x1 <- replace(x,5,mean(x,na.rm=T))
> x1
[1] -1.0000000  0.0000000  1.0000000  0.0000000  0.2857143  0.0000000  1.0000000  1.0000000
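To replace every NA (not just position 5) with the mean in one call, let is.na() supply the positions (a small generalization of mine):
> x1 <- replace(x, is.na(x), mean(x, na.rm = TRUE))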

The cleaner package has an na_replace() generic, which by default replaces numeric values with zeroes, logicals with FALSE, dates with today, etc.:
library(dplyr)
library(cleaner)
starwars %>% na_replace()
na_replace(starwars)
It even supports vectorised replacements:
mtcars[1:6, c("mpg", "hp")] <- NA
na_replace(mtcars, mpg, hp, replacement = c(999, 123))
Documentation: https://msberends.github.io/cleaner/reference/na_replace.html

Another dplyr-pipe-compatible option is the tidyr method replace_na, which works for several columns:
require(dplyr)
require(tidyr)
m <- matrix(sample(c(NA, 1:10), 100, replace = TRUE), 10)
d <- as.data.frame(m)
myList <- setNames(lapply(vector("list", ncol(d)), function(x) x <- 0), names(d))
df <- d %>% replace_na(myList)
You can easily restrict to e.g. numeric columns:
d$str <- c("string", NA)
myList <- myList[sapply(d, is.numeric)]
df <- d %>% replace_na(myList)

This simple function extracted from Datacamp could help:
replace_missings <- function(x, replacement) {
    is_miss <- is.na(x)
    x[is_miss] <- replacement
    message(sum(is_miss), " missings replaced by the value ", replacement)
    x
}
Then
replace_missings(df, replacement = 0)
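For instance, on a single vector (my illustration):
v <- c(1, NA, 3, NA)
replace_missings(v, replacement = 0)
# 2 missings replaced by the value 0
# [1] 1 0 3 0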

An easy way to write it is with if_na from hablar:
library(dplyr)
library(hablar)
df <- tibble(a = c(1, 2, 3, NA, 5, 6, 8))
df %>%
mutate(a = if_na(a, 0))
which returns:
a
<dbl>
1 1
2 2
3 3
4 0
5 5
6 6
7 8

Replacing NA (and thoughts on NULL) in a data frame.
For a single column:
A$name[is.na(A$name)] <- 0
or
A$name[is.na(A$name)] <- "NA"
For the whole data frame:
df[is.na(df)] <- 0
To replace NA with blanks:
df[is.na(df)] <- ""
Note that the sometimes-suggested df[is.null(df)] <- NA has no effect: is.null() is not vectorized (it returns a single FALSE for a data frame), and data frame cells cannot hold NULL in the first place.

If you want to add a new column that flags which values of a specific column (here V3) are missing (0) or present (1), you can do:
my.data.frame$the.new.column.name <- ifelse(is.na(my.data.frame$V3), 0, 1)

I want to add another solution, using the popular Hmisc package.
library(Hmisc)
data(airquality)
# imputing with 0 - all columns
# although my favorite one for simple imputations is Hmisc::impute(x, "random")
> dd <- data.frame(Map(function(x) Hmisc::impute(x, 0), airquality))
> str(dd[[1]])
'impute' Named num [1:153] 41 36 12 18 0 28 23 19 8 0 ...
- attr(*, "names")= chr [1:153] "1" "2" "3" "4" ...
- attr(*, "imputed")= int [1:37] 5 10 25 26 27 32 33 34 35 36 ...
> dd[[1]][1:10]
1 2 3 4 5 6 7 8 9 10
41 36 12 18 0* 28 23 19 8 0*
You can see that all the imputation metadata is stored in attributes, so it can be used later.

This is not exactly a new solution, but I like to write inline lambdas that handle things that I can't quite get packages to do. In this case,
df %>%
(function(x) { x[is.na(x)] <- 0; return(x) })
Because R does not pass objects by reference the way Python does, this solution does not modify the original variable df, so it does much the same as most of the other solutions, but with far less need for intricate knowledge of particular packages.
Note the parentheses around the function definition! Though it seems a bit redundant to me, since the definition is already wrapped in curly braces, magrittr requires inline functions to be enclosed in parentheses.
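For comparison (my aside): with R's native pipe the same inline lambda works too, using the \() shorthand (R >= 4.1) wrapped in parentheses and called:
df |> (\(x) { x[is.na(x)] <- 0; x })()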

Note that this one runs in the opposite direction: if missing values in your data arrive encoded as a sentinel (0, "zero", or anything else), na_if() converts that sentinel to NA, no matter how large your data frame is:
library(dplyr) # make sure dplyr ver is >= 1.0.0
df %>%
    mutate(across(everything(), ~ na_if(.x, 0))) # if missings are encoded as "zero", pass "zero" instead of 0
To go the other way (NA to 0, as the question asks), use replace_na() or coalesce() as shown in the other answers.

Another option is using sapply to replace all NA with zeros. Here is some reproducible code (data from @aL3xa):
set.seed(7) # for reproducibility
m <- matrix(sample(c(NA, 1:10), 100, replace = TRUE), 10)
d <- as.data.frame(m)
d
#> V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
#> 1 9 7 5 5 7 7 4 6 6 7
#> 2 2 5 10 7 8 9 8 8 1 8
#> 3 6 7 4 10 4 9 6 8 NA 10
#> 4 1 10 3 7 5 7 7 7 NA 8
#> 5 9 9 10 NA 7 10 1 5 NA 5
#> 6 5 2 5 10 8 1 1 5 10 3
#> 7 7 3 9 3 1 6 7 3 1 10
#> 8 7 7 6 8 4 4 5 NA 8 7
#> 9 2 1 1 2 7 5 9 10 9 3
#> 10 7 5 3 4 9 2 7 6 NA 5
d[sapply(d, \(x) is.na(x))] <- 0
d
#> V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
#> 1 9 7 5 5 7 7 4 6 6 7
#> 2 2 5 10 7 8 9 8 8 1 8
#> 3 6 7 4 10 4 9 6 8 0 10
#> 4 1 10 3 7 5 7 7 7 0 8
#> 5 9 9 10 0 7 10 1 5 0 5
#> 6 5 2 5 10 8 1 1 5 10 3
#> 7 7 3 9 3 1 6 7 3 1 10
#> 8 7 7 6 8 4 4 5 0 8 7
#> 9 2 1 1 2 7 5 9 10 9 3
#> 10 7 5 3 4 9 2 7 6 0 5
Created on 2023-01-15 with reprex v2.0.2
Please note: Since R 4.1.0 you can use \(x) instead of function(x).

With replace_na() on a data frame, it is not necessary to create a new column via mutate:
library(tidyverse)
k <- c(1,2,80,NA,NA,51)
j <- c(NA,NA,3,31,12,NA)
df <- data.frame(k,j) %>%
    replace_na(list(j=0)) # convert only column j, for example
Result:
k j
1 0
2 0
80 3
NA 31
NA 12
51 0

I used this personally and it works fine:
players_wd$APPROVED_WD[is.na(players_wd$APPROVED_WD)] <- 0

Related

How to vectorize the RHS of dplyr::case_when?

Suppose I have a dataframe that looks like this:
> data <- data.frame(x = c(1,1,2,2,3,4,5,6), y = c(1,2,3,4,5,6,7,8))
> data
x y
1 1 1
2 1 2
3 2 3
4 2 4
5 3 5
6 4 6
7 5 7
8 6 8
I want to use mutate and case_when to create a new id variable that will identify rows using the variable x, and give rows missing x a unique id. In other words, I should have the same id for rows one and two, rows three and four, while rows 5-8 should have their own unique ids. Suppose I want to generate these id values with a function:
id_function <- function(x, n){
    set.seed(x)
    res <- character(n)
    for(i in seq(n)){
        res[i] <- paste0(sample(c(letters, LETTERS, 0:9), 32), collapse="")
    }
    res
}
id_function(1, 1)
[1] "4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf"
I am trying to use this function on the RHS of a case_when expression like this:
data %>%
    mutate(my_id = id_function(1234, nrow(.)),
           my_id = dplyr::case_when(!is.na(x) ~ id_function(x, 1),
                                    TRUE ~ my_id))
But the RHS does not seem to be vectorized and I get the same value for all non-missing values of x:
x y my_id
1 1 1 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
2 1 2 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
3 2 3 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
4 2 4 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
5 NA 5 0vnws5giVNIzp86BHKuOZ9ch4dtL3Fqy
6 NA 6 IbKU6DjvW9ypitl7qc25Lr4sOwEfghdk
7 NA 7 8oqQMPx6IrkGhXv4KlUtYfcJ5Z1RCaDy
8 NA 8 BRsjumlCEGS6v4ANrw1bxLynOKkF90ao
I'm sure there's a way to vectorize the RHS, what am I doing wrong? Is there an easier approach to solving this problem?
I guess rowwise() would do the trick:
data %>%
rowwise() %>%
mutate(my_id = id_function(x, 1))
x y my_id
1 1 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
1 2 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
2 3 uof7FhqC3lOXkacp54MGZJLUR6siSKDb
2 4 uof7FhqC3lOXkacp54MGZJLUR6siSKDb
3 5 e5lMJNQEhtj4VY1KbCR9WUiPrpy7vfXo
4 6 3kYcgR7109DLbxatQIAKXFeovN8pnuUV
5 7 bQ4ok7OuDgscLUlpzKAivBj2T3m6wrWy
6 8 0jSn3Jcb2HDA5uhvG8g1ytsmRpl6CQWN
purrr map functions can be used for non-vectorized functions. The following gives a similar result; map2_chr will take the two arguments expected by id_function and return a character vector (plain map2 would give a list column).
library(tidyverse)
data %>%
    mutate(my_id = map2_chr(x, 1, id_function))
Output
x y my_id
1 1 1 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
2 1 2 4dMaHwQnrYGu0PTjgioXKOyW75NRZtcf
3 2 3 uof7FhqC3lOXkacp54MGZJLUR6siSKDb
4 2 4 uof7FhqC3lOXkacp54MGZJLUR6siSKDb
5 3 5 e5lMJNQEhtj4VY1KbCR9WUiPrpy7vfXo
6 4 6 3kYcgR7109DLbxatQIAKXFeovN8pnuUV
7 5 7 bQ4ok7OuDgscLUlpzKAivBj2T3m6wrWy
8 6 8 0jSn3Jcb2HDA5uhvG8g1ytsmRpl6CQWN

How to automate renaming of columns in wide data using R

Consider the following data in the wide format
df<-data.frame("id"=c(1,2,3,4),
"ex"=c(1,0,0,1),
"aQL"=c(5,4,NA,6),
"bQL"=c(5,7,NA,9),
"cQL"=c(5,7,NA,9),
"bST"=c(3,7,8,9),
"cST"=c(8,7,5,3),
"aXY"=c(1,9,4,4),
"cXY"=c(5,3,1,4))
I want to keep the column (or variable) names "id" and "ex" and rename the remaining columns, e.g. "aQL", "bQL" and "cQL" as "QL.1", "QL.2" and "QL.3", respectively. The other columns with names ending in "ST" and "XY" are expected to be renamed in the same manner, also following the order .1, .2 and .3. Of note, "aST" and "bXY" are missing from the data set, but I want them to be included and renamed as ST.1 and XY.2, with each having NAs as its entries. The expected output would look like
df
id ex QL.1 QL.2 QL.3 ST.1 ST.2 ST.3 XY.1 XY.2 XY.3
1 1 1 5 5 5 NA 3 8 1 NA 5
2 2 0 4 7 7 NA 7 7 9 NA 3
3 3 0 NA NA NA NA 8 5 4 NA 1
4 4 1 6 9 9 NA 9 3 4 NA 4
The main data set has many variables, so I would like the renaming to be done in an automated manner. I tried the following code
renameCol <- function(x) {
    setNames(x, paste0("QL.", seq_len(ncol(x))))
}
renameCol(df)
but it does not work as expected. It renames "id" and "ex", which I want to keep, and it is not flexible about renaming the multiple variable groups (i.e. QL, ST, XY). Any help is greatly appreciated.
I would suggest a tidyverse approach, with no need for a custom function. In this solution you extract the first letter of each variable name as an id and then assign a number with cur_group_id() so that the order is kept. Finally, with this new number you transform the variable containing the names and reshape to wide to obtain the expected output:
library(tidyverse)
#Data
df<-data.frame("id"=c(1,2,3,4),
"ex"=c(1,0,0,1),
"aQL"=c(5,4,NA,6),
"bQL"=c(5,7,NA,9),
"cQL"=c(5,7,NA,9),
"bST"=c(3,7,8,9),
"cST"=c(8,7,5,3),
"aXY"=c(1,9,4,4),
"cXY"=c(5,3,1,4))
#Reshape
df %>% pivot_longer(cols = -c(1,2)) %>%
    #Extract first letter as id
    mutate(id2=substring(name,1,1)) %>%
    #Create the number id
    group_by(id2) %>%
    mutate(id3=cur_group_id()) %>%
    #Clean name
    mutate(name=substring(name,2,nchar(name))) %>%
    #Create final var
    mutate(name2=paste0(name,'.',id3)) %>% ungroup() %>%
    dplyr::select(-c(name,id2,id3)) %>%
    #Format to wide
    pivot_wider(names_from = name2,values_from=value)
Output:
# A tibble: 4 x 9
id ex QL.1 QL.2 QL.3 ST.2 ST.3 XY.1 XY.3
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 1 5 5 5 3 8 1 5
2 2 0 4 7 7 7 7 9 3
3 3 0 NA NA NA 8 5 4 1
4 4 1 6 9 9 9 3 4 4
In base R you could do:
names(df) <- sub("(\\d)([A-Z]{2})$","\\2.\\1", chartr("abc","123",names(df)))
df
id ex QL.1 QL.2 QL.3 ST.2 ST.3 XY.1 XY.3
1 1 1 5 5 5 3 8 1 5
2 2 0 4 7 7 7 7 9 3
3 3 0 NA NA NA 8 5 4 1
4 4 1 6 9 9 9 3 4 4
If you need the NA columns:
names(df) <- sub("(\\d)([A-Z]{2})$","\\2.\\1", chartr("abc","123",names(df)))
a <- read.table(text=grep("\\.\\d",names(df),value = TRUE), sep=".")  # split "QL.1" into prefix (V1) and number (V2)
b <- subset(aggregate(.~V1, a, function(x) setdiff(1:3,x)), V2>0)     # per prefix, which of 1:3 is missing
df[do.call(paste, c(sep = ".", b))] <- NA                             # create those missing columns, filled with NA
(df1 <- df[c(1, 2, order(names(df)[-(1:2)]) + 2)])                    # reorder: id, ex, then the sorted groups
id ex QL.1 QL.2 QL.3 ST.1 ST.2 ST.3 XY.1 XY.2 XY.3
1 1 1 5 5 5 NA 3 8 1 NA 5
2 2 0 4 7 7 NA 7 7 9 NA 3
3 3 0 NA NA NA NA 8 5 4 NA 1
4 4 1 6 9 9 NA 9 3 4 NA 4
Another way you can try, using str_c from stringr:
library(stringr)
colnames(df)[grepl("QL", colnames(df))] <- str_c("QL.", 1:3)
colnames(df)[grepl("ST", colnames(df))] <- str_c("ST.", 2:3)
colnames(df)[grepl("XY", colnames(df))] <- str_c("XY.", c(1,3))
# id ex QL.1 QL.2 QL.3 ST.2 ST.3 XY.1 XY.3
# 1 1 1 5 5 5 3 8 1 5
# 2 2 0 4 7 7 7 7 9 3
# 3 3 0 NA NA NA 8 5 4 1
# 4 4 1 6 9 9 9 3 4 4
Here is a solution that uses regular expressions via the stringr package:
library(stringr)
df<-data.frame("id"=c(1,2,3,4),
"ex"=c(1,0,0,1),
"aQL"=c(5,4,NA,6),
"bQL"=c(5,7,NA,9),
"cQL"=c(5,7,NA,9),
"bST"=c(3,7,8,9),
"cST"=c(8,7,5,3),
"aXY"=c(1,9,4,4),
"cXY"=c(5,3,1,4))
renameCol <- function(x) {
    col_names <- colnames(x)
    index_ql <- str_detect(col_names, "^[a-z]{1}QL")
    index_st <- str_detect(col_names, "^[a-z]{1}ST")
    index_xy <- str_detect(col_names, "^[a-z]{1}XY")
    replace_fun <- function(x) { which(letters %in% x) }
    col_names[index_ql] <- paste0("QL.", str_replace(substr(col_names[index_ql], 1, 1),
                                                     "[a-z]", replace_fun))
    col_names[index_st] <- paste0("ST.", str_replace(substr(col_names[index_st], 1, 1),
                                                     "[a-z]", replace_fun))
    col_names[index_xy] <- paste0("XY.", str_replace(substr(col_names[index_xy], 1, 1),
                                                     "[a-z]", replace_fun))
    col_names
}
colnames(df) <- renameCol(df)
df
#> id ex QL.1 QL.2 QL.3 ST.2 ST.3 XY.1 XY.3
#> 1 1 1 5 5 5 3 8 1 5
#> 2 2 0 4 7 7 7 7 9 3
#> 3 3 0 NA NA NA 8 5 4 1
#> 4 4 1 6 9 9 9 3 4 4
Created on 2020-09-07 by the reprex package (v0.3.0)
Edit
The function above was adapted so that it takes the order into account.
Using pattern matching on the column names (note that str_extract() comes from stringr, so load it first):
You need to define a function that does what you want on one single column name:
library(stringr)
f = function(x){
    beg <- str_extract(x,"[a-z](?=[A-Z]{2})")
    num <- which(letters == beg)
    output <- paste0(str_extract(x,"(?<=[a-z])[A-Z]{2}"),".",num)
    return(output)
}
Here we extract the lowercase letter when two uppercase letters follow it, find its position in the alphabet, and paste the number back onto the uppercase letters.
> f("cQL")
[1] "QL.3"
You can then use regmatches() and a regular expression directly on the names of your data frame:
m <- gregexpr("[a-z][A-Z]{2}", names(df),perl = T)
regmatches(names(df), m) <- lapply(regmatches(names(df), m), f)
> names(df)
[1] "id" "ex" "QL.1" "QL.2" "QL.3" "ST.2" "ST.3" "XY.1" "XY.3"
It solves only the renaming part, not the the "including missing column number" part of your question

Iterate through columns to sum the previous 2 numbers of each row

In R, I have a dataframe, with columns 'A', 'B', 'C', 'D'. The columns have 100 rows.
I need to iterate through the columns to perform a calculation for all rows in the dataframe which sums the previous 2 rows of that column, and then set in new columns ('AA', 'AB', etc) what that sum is:
A B C D
1 2 3 4
2 3 4 5
3 4 5 6
4 5 6 7
5 6 7 8
6 7 8 9
to
A B C D AA AB AC AD
1 2 3 4 NA NA NA NA
2 3 4 5 3 5 7 9
3 4 5 6 5 7 9 11
4 5 6 7 7 9 11 13
5 6 7 8 9 11 13 15
6 7 8 9 11 13 15 17
Can someone explain how to create a function/loop that allows me to set the columns I want to iterate over (selected columns, not all columns) and the columns I want to set?
A base one-liner:
cbind(df, setNames(df + df[c(NA, 1:(nrow(df)-1)), ], paste0("A", names(df))))
Indexing the rows with c(NA, 1:(nrow(df)-1)) yields a copy of df shifted down one row (the NA index produces a row of NAs), so the addition sums each row with the one before it. If your data is large, this one might be the fastest because it manipulates the entire data.frame at once.
A dplyr solution using mutate() with across().
library(dplyr)
df %>%
mutate(across(A:D,
~ .x + lag(.x),
.names = "A{col}"))
# A B C D AA AB AC AD
# 1 1 2 3 4 NA NA NA NA
# 2 2 3 4 5 3 5 7 9
# 3 3 4 5 6 5 7 9 11
# 4 4 5 6 7 7 9 11 13
# 5 5 6 7 8 9 11 13 15
# 6 6 7 8 9 11 13 15 17
If you want to sum the previous 3 rows, the second argument of across(), i.e. .fns, should be
~ .x + lag(.x) + lag(.x, 2)
which is equivalent to the use of rollsum() in zoo:
~ zoo::rollsum(.x, k = 3, fill = NA, align = 'right')
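data.table users can get the same rolling sum from frollsum() (my aside, not part of the original answer):
~ data.table::frollsum(.x, n = 3, fill = NA, align = "right")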
Benchmark
A benchmark test with the microbenchmark package on a new data.frame with 10000 rows and 100 columns, evaluating each expression 10 times.
# Unit: milliseconds
# expr min lq mean median uq max neval
# darren_base 18.58418 20.88498 35.51341 33.64953 39.31909 80.24725 10
# darren_dplyr_lag 39.49278 40.27038 47.26449 42.89170 43.20267 76.72435 10
# arg0naut91_dplyr_rollsum 436.22503 482.03199 524.54800 516.81706 534.94317 677.64242 10
# Grothendieck_rollsumr 3423.92097 3611.01573 3650.16656 3622.50895 3689.26404 4060.98054 10
You can use dplyr's across (and set optional names) with rolling sum (as implemented e.g. in zoo):
library(dplyr)
library(zoo)
df %>%
mutate(
across(
A:D,
~ rollsum(., k = 2, fill = NA, align = 'right'),
.names = 'A{col}'
)
)
Output:
A B C D AA AB AC AD
1 1 2 3 4 NA NA NA NA
2 2 3 4 5 3 5 7 9
3 3 4 5 6 5 7 9 11
4 4 5 6 7 7 9 11 13
5 5 6 7 8 9 11 13 15
6 6 7 8 9 11 13 15 17
With A:D we've specified the range of column names we want to apply the function to. The assumption above in .names argument is that you want to paste together A as prefix and the column name ({col}).
Here's a data.table solution. As you asked, it allows you to select which columns to apply it to rather than applying it to all columns.
library(data.table)
x <- data.table(A=1:6, B=2:7, C=3:8, D=4:9)
selected_cols <- c('A','B','D')
new_cols <- paste0("A",selected_cols)
x[, (new_cols) := lapply(.SD, function(col) col+shift(col, 1)), .SDcols = selected_cols]
x[]
NB This is 2 or 3 times faster than the fastest other answer.
This is a naive approach with nested for loops. Beware: it is painfully slow if you iterate over hundreds of thousands of rows.
i <- 1
n <- 5
df <- data.frame(A=i:(i+n), B=(i+1):(i+n+1), C=(i+2):(i+n+2), D=(i+3):(i+n+3))
for (col in colnames(df)) {
    for (ind in 1:nrow(df)) {
        if (ind-1==0) {next}
        s <- sum(df[c(ind-1, ind), col])
        df[ind, paste0('S', col)] <- s
    }
}
Here is a cumsum() method:
na.df <- data.frame(matrix(NA, 2, ncol(df)))  # two all-NA padding rows
colnames(na.df) <- colnames(df)
cs1 <- cumsum(df)                             # running totals per column
cs2 <- rbind(cs1[-1:-2,], na.df)              # the same totals shifted up two rows
sum.diff <- cs2-cs1                           # cs1[i+2] - cs1[i] = x[i+1] + x[i+2]
cbind(df, rbind(na.df[1,], cs1[2,], sum.diff[1:(nrow(sum.diff)-2),]))
Benchmark:
# Unit: milliseconds
# expr min lq mean median uq max neval
# darrentsai.rbind 11.5623 12.28025 23.38038 16.78240 20.83420 91.9135 100
# darrentsai.rbind.rev1 8.8267 9.10945 15.63652 9.54215 14.25090 62.6949 100
# pseudopsin.dt 7.2696 7.52080 20.26473 12.61465 17.61465 69.0110 100
# ivan866.cumsum 25.3706 30.98860 43.11623 33.78775 37.36950 91.6032 100
I believe most of the cumsum method's time is wasted on data.frame allocations. If correctly adapted to a data.table backend, it could be the fastest.
Specify the columns we want. We show several different ways to do that. Then use rollsumr to get the desired columns, set the column names and cbind DF with it.
library(zoo)
# jx <- names(DF) # if all columns wanted
# jx <- sapply(DF, is.numeric) # if all numeric columns
# jx <- c("A", "B", "C", "D") # specify columns by name
jx <- 1:4 # specify columns by position
r <- rollsumr(DF[jx], 2, fill = NA)
colnames(r) <- paste0("A", colnames(r))
cbind(DF, r)
giving:
A B C D AA AB AC AD
1 1 2 3 4 NA NA NA NA
2 2 3 4 5 3 5 7 9
3 3 4 5 6 5 7 9 11
4 4 5 6 7 7 9 11 13
5 5 6 7 8 9 11 13 15
6 6 7 8 9 11 13 15 17
Note
The input in reproducible form:
DF <- structure(list(A = 1:6, B = 2:7, C = 3:8, D = 4:9),
class = "data.frame", row.names = c(NA, -6L))

Subset columns using logical vector

I have a dataframe and I want to drop the columns whose NA rate is above 70%, or where a single dominant value takes up over 99% of the rows. How can I do that in R?
I find it easier to select rows with a logical vector in the subset function, but how can I do something similar for columns? For example, if I write:
isNARateLt70 <- function(column) { # some code
}
apply(dataframe, 2, isNARateLt70)
Then how can I continue to use this vector to subset the dataframe?
If you have a data.frame like
dd <- data.frame(matrix(rpois(7*4,10),ncol=7, dimnames=list(NULL,letters[1:7])))
# a b c d e f g
# 1 11 2 5 9 7 6 10
# 2 10 5 11 13 11 11 8
# 3 14 8 6 16 9 11 9
# 4 11 8 12 8 11 6 10
You can subset with a logical vector using one of
mycols<-c(T,F,F,T,F,F,T)
dd[mycols]
dd[, mycols]
There's really no need to write a function when we have colMeans (thanks @MrFlick for the advice to change from colSums()/nrow(), shown at the bottom of this answer).
Here's how I would approach your function if you want to use sapply on it later.
> d <- data.frame(x = rep(NA, 5), y = c(1, NA, NA, 1, 1),
z = c(rep(NA, 3), 1, 2))
> isNARateLt70 <- function(x) mean(is.na(x)) <= 0.7
> sapply(d, isNARateLt70)
# x y z
# FALSE TRUE TRUE
Then, to subset your data using the above line of code, it's
> d[sapply(d, isNARateLt70)]
But as mentioned, colMeans works just the same,
> d[colMeans(is.na(d)) <= 0.7]
# y z
# 1 1 NA
# 2 NA NA
# 3 NA NA
# 4 1 1
# 5 1 2
Maybe this will help too. The 2 parameter in apply() means: apply this function column-wise on the data.frame cars.
> columns <- apply(cars, 2, function(x) {mean(x) > 10})
> columns
speed dist
TRUE TRUE
> cars[1:10, columns]
speed dist
1 4 2
2 4 10
3 7 4
4 7 22
5 8 16
6 9 10
7 10 18
8 10 26
9 10 34
10 11 17
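One caution worth adding (my note, not the answer's): apply() coerces the data frame to a matrix first, so with mixed-type columns everything becomes character and mean() misbehaves. sapply() over the columns avoids that coercion:
columns <- sapply(cars, function(x) mean(x) > 10)  # stays column-wise, no matrix coercion
cars[1:10, columns]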

subset with pattern

Say I have a data frame df
df <- data.frame( a1 = 1:10, b1 = 2:11, c2 = 3:12 )
I wish to subset the columns, but with a pattern
df1 <- subset( df, select= (pattern = "1") )
To get
> df1
a1 b1
1 1 2
2 2 3
3 3 4
4 4 5
5 5 6
6 6 7
7 7 8
8 8 9
9 9 10
10 10 11
Is this possible?
It is possible to do this via
subset(df, select = grepl("1", names(df)))
For automating this as a function, one can use [ to do the subsetting. Couple that with one of R's regular expression functions and you have all you need.
By way of an example, here is a custom function implementing the ideas I mentioned above.
Subset <- function(df, pattern) {
    ind <- grepl(pattern, names(df))
    df[, ind, drop = FALSE]  # drop = FALSE keeps a data frame even if only one column matches
}
Note this does no error checking and just relies upon grepl to return a logical vector indicating which columns match pattern, which is then passed to [ to subset by columns. Applied to your df this gives:
> Subset(df, pattern = "1")
a1 b1
1 1 2
2 2 3
3 3 4
4 4 5
5 5 6
6 6 7
7 7 8
8 8 9
9 9 10
10 10 11
Same same but different:
df2 <- df[, grep("1", names(df))]
a1 b1
1 1 2
2 2 3
3 3 4
4 4 5
5 5 6
6 6 7
7 7 8
8 8 9
9 9 10
10 10 11
Base R now has a convenience function endsWith():
df[, endsWith(names(df), "1")]
In data.table you can do:
library(data.table)
setDT(df)
df[, .SD, .SDcols = patterns("1")]
# Or more precisely
df[, .SD, .SDcols = patterns("1$")]
In dplyr:
library(dplyr)
select(df, ends_with("1"))
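And if you need a full regular expression rather than a fixed suffix, dplyr also provides matches() (my addition to the list):
select(df, matches("1$"))  # regex: names ending in "1"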
