Blend vectors in R [duplicate]

Background
Several SQL dialects (I mostly use PostgreSQL) have a function called coalesce, which returns the first non-NULL element among its arguments for each row. This can be very useful when tables contain a lot of NULL values.
I run into the same situation in R when dealing with less structured data that contains a lot of NAs.
I have written a naive implementation myself, but it is ridiculously slow.
coalesce <- function(...) {
  apply(cbind(...), 1, function(x) {
    x[which(!is.na(x))[1]]
  })
}
Example
a <- c(1, 2, NA, 4, NA)
b <- c(NA, NA, NA, 5, 6)
c <- c(7, 8, NA, 9, 10)
coalesce(a,b,c)
# [1] 1 2 NA 4 6
Question
Is there any efficient way to implement coalesce in R?

On my machine, using Reduce gets a 5x performance improvement:
coalesce2 <- function(...) {
  Reduce(function(x, y) {
    i <- which(is.na(x))
    x[i] <- y[i]
    x},
    list(...))
}
> microbenchmark(coalesce(a,b,c),coalesce2(a,b,c))
Unit: microseconds
expr min lq median uq max neval
coalesce(a, b, c) 97.669 100.7950 102.0120 103.0505 243.438 100
coalesce2(a, b, c) 19.601 21.4055 22.8835 23.8315 45.419 100
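As a quick sanity check (added here, not part of the original answer), the Reduce-based version reproduces the example output:
coalesce2(a, b, c)
# [1] 1 2 NA 4 6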

Looks like the name coalesce1 is still available
coalesce1 <- function(...) {
  ans <- ..1
  for (elt in list(...)[-1]) {
    i <- is.na(ans)
    ans[i] <- elt[i]
  }
  ans
}
which is faster still (but more-or-less a hand re-write of Reduce, so less general)
> identical(coalesce(a, b, c), coalesce1(a, b, c))
[1] TRUE
> microbenchmark(coalesce(a,b,c), coalesce1(a, b, c), coalesce2(a,b,c))
Unit: microseconds
expr min lq median uq max neval
coalesce(a, b, c) 336.266 341.6385 344.7320 355.4935 538.348 100
coalesce1(a, b, c) 8.287 9.4110 10.9515 12.1295 20.940 100
coalesce2(a, b, c) 37.711 40.1615 42.0885 45.1705 67.258 100
Or, for larger data, compare
coalesce1a <- function(...) {
  ans <- ..1
  for (elt in list(...)[-1]) {
    i <- which(is.na(ans))
    ans[i] <- elt[i]
  }
  ans
}
showing that which() can sometimes be effective, even though it implies a second pass through the index.
> aa <- sample(a, 100000, TRUE)
> bb <- sample(b, 100000, TRUE)
> cc <- sample(c, 100000, TRUE)
> microbenchmark(coalesce1(aa, bb, cc),
+ coalesce1a(aa, bb, cc),
+ coalesce2(aa,bb,cc), times=10)
Unit: milliseconds
expr min lq median uq max neval
coalesce1(aa, bb, cc) 11.110024 11.137963 11.145723 11.212907 11.270533 10
coalesce1a(aa, bb, cc) 2.906067 2.953266 2.962729 2.971761 3.452251 10
coalesce2(aa, bb, cc) 3.080842 3.115607 3.139484 3.166642 3.198977 10

From data.table >= 1.12.3 you can use fcoalesce.
library(data.table)
fcoalesce(a, b, c)
# [1] 1 2 NA 4 6
fcoalesce can also take "a single plain list, data.table or data.frame". Thus, if the vectors above were columns in a data.frame (or a data.table), we could simply supply the name of the data set:
d = data.frame(a, b, c)
# or d = data.table(a, b, c)
fcoalesce(d)
# [1] 1 2 NA 4 6
For more info, including a benchmark, see NEWS item #18 for development version 1.12.3.
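As a further illustration (added here, not in the original answer), when d is a data.table as in the comment above, fcoalesce pairs naturally with := to store the result as a new column (first_non_na is just a made-up column name):
d <- data.table(a, b, c)
d[, first_non_na := fcoalesce(a, b, c)]   # adds a column holding the first non-NA value per row
d$first_non_na
# [1] 1 2 NA 4 6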

Using the dplyr package:
library(dplyr)
coalesce(a, b, c)
# [1] 1 2 NA 4 6
Benchmark; it is not as fast as the accepted solution:
coalesce2 <- function(...) {
  Reduce(function(x, y) {
    i <- which(is.na(x))
    x[i] <- y[i]
    x},
    list(...))
}
microbenchmark::microbenchmark(
  coalesce(a, b, c),
  coalesce2(a, b, c)
)
# Unit: microseconds
# expr min lq mean median uq max neval cld
# coalesce(a, b, c) 21.951 24.518 27.28264 25.515 26.9405 126.293 100 b
# coalesce2(a, b, c) 7.127 8.553 9.68731 9.123 9.6930 27.368 100 a
But on a larger dataset, it is comparable:
aa <- sample(a, 100000, TRUE)
bb <- sample(b, 100000, TRUE)
cc <- sample(c, 100000, TRUE)
microbenchmark::microbenchmark(
coalesce(aa, bb, cc),
coalesce2(aa, bb, cc))
# Unit: milliseconds
# expr min lq mean median uq max neval cld
# coalesce(aa, bb, cc) 1.708511 1.837368 5.468123 3.268492 3.511241 96.99766 100 a
# coalesce2(aa, bb, cc) 1.474171 1.516506 3.312153 1.957104 3.253240 91.05223 100 a
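As an added usage note (not part of the original answer), dplyr's coalesce slots naturally into mutate when the vectors are columns of a data frame (first_non_na is a made-up column name):
d <- data.frame(a, b, c)
mutate(d, first_non_na = coalesce(a, b, c))$first_non_na
# [1] 1 2 NA 4 6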

I have a ready-to-use implementation called coalesce.na in my misc package. It seems to be competitive, but it is not the fastest.
It also works for vectors of different lengths, and it treats vectors of length one specially:
expr min lq median uq max neval
coalesce(aa, bb, cc) 990.060402 1030.708466 1067.000698 1083.301986 1280.734389 10
coalesce1(aa, bb, cc) 11.356584 11.448455 11.804239 12.507659 14.922052 10
coalesce1a(aa, bb, cc) 2.739395 2.786594 2.852942 3.312728 5.529927 10
coalesce2(aa, bb, cc) 2.929364 3.041345 3.593424 3.868032 7.838552 10
coalesce.na(aa, bb, cc) 4.640552 4.691107 4.858385 4.973895 5.676463 10
Here's the code:
coalesce.na <- function(x, ...) {
  x.len <- length(x)
  ly <- list(...)
  for (y in ly) {
    y.len <- length(y)
    if (y.len == 1) {
      x[is.na(x)] <- y
    } else {
      if (x.len %% y.len != 0)
        warning('object length is not a multiple of first object length')
      pos <- which(is.na(x))
      x[pos] <- y[(pos - 1) %% y.len + 1]
    }
  }
  x
}
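To illustrate the length-one treatment (an example added here, not from the original answer), a scalar fallback fills every remaining NA:
coalesce.na(a, 0)
# [1] 1 2 0 4 0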
Of course, as Kevin pointed out, an Rcpp solution might be faster by orders of magnitude.
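No answer in this thread spells that Rcpp version out; a minimal sketch for two numeric vectors, assuming the Rcpp package is available (the name coalesce_cpp is made up for the example, and no benchmark claim is made for it), could look like this:
library(Rcpp)
cppFunction('
NumericVector coalesce_cpp(NumericVector x, NumericVector y) {
  // copy x, then fill its NAs from y in a single pass
  int n = x.size();
  NumericVector out = clone(x);
  for (int i = 0; i < n; ++i) {
    if (NumericVector::is_na(out[i])) out[i] = y[i];
  }
  return out;
}')
Reduce(coalesce_cpp, list(a, b, c))
# [1] 1 2 NA 4 6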

A very simple solution is to use the ifelse function from the base package:
coalesce3 <- function(x, y) {
  ifelse(is.na(x), y, x)
}
Although it appears to be slower than coalesce2 above:
test <- function(a, b, func) {
  for (i in 1:10000) {
    func(a, b)
  }
}
system.time(test(a, b, coalesce2))
user system elapsed
0.11 0.00 0.10
system.time(test(a, b, coalesce3))
user system elapsed
0.16 0.00 0.15
You can use Reduce to make it work for an arbitrary number of vectors:
coalesce4 <- function(...) {
  Reduce(coalesce3, list(...))
}
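A quick usage check (added here for completeness):
coalesce4(a, b, c)
# [1] 1 2 NA 4 6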

Here is my solution:
coalesce <- function(x) {
  y <- head(x[!is.na(x)], 1)
  return(y)
}
It returns the first value that is not NA, and it works with data.table. For example, if you want to coalesce a few columns whose names are stored in a character vector:
column_names <- c("col1", "col2", "col3")
you can use it like this:
ranking[, coalesce_column := coalesce( mget(column_names) ), by = 1:nrow(ranking)]
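Since this runs the function once per row (by = 1:nrow(ranking)), a vectorised alternative sketch, using the same hypothetical ranking table and data.table's fcoalesce mentioned in the data.table answer above, could be:
library(data.table)
ranking <- data.table(col1 = c(NA, 2, NA), col2 = c(1, NA, NA), col3 = c(9, 9, 3))
column_names <- c("col1", "col2", "col3")
ranking[, coalesce_column := fcoalesce(.SD), .SDcols = column_names]
ranking$coalesce_column
# [1] 1 2 3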

Another apply method, with mapply.
mapply(function(...) {temp <- c(...); temp[!is.na(temp)][1]}, a, b, c)
[1] 1 2 NA 4 6
This selects the first non-NA value if more than one exists. The last non-missing element could be selected using tail.
Maybe a bit more speed could be squeezed out of this alternative using the bare-bones .mapply function, which looks a little different.
unlist(.mapply(function(...) {temp <- c(...); temp[!is.na(temp)][1]},
dots=list(a, b, c), MoreArgs=NULL))
[1] 1 2 NA 4 6
.mapply differs in important ways from its non-dotted cousin:
it returns a list (like Map), so it must be wrapped in a function such as unlist or c to return a vector;
the set of arguments to be fed in parallel to the function in FUN must be given as a list to the dots argument;
and, unlike mapply, the MoreArgs argument has no default, so it must explicitly be given NULL.

Another option is to use do.call and pmin. Note that pmin takes the parallel minimum of the non-NA values rather than the first non-NA value, so it agrees with coalesce only when, as in this example, the first non-NA value in each position is also the smallest:
do.call(pmin, c(list(a,b,c), list(na.rm=TRUE)))
Output
[1] 1 2 NA 4 6

Related

problem with using coalesce to merge 2 columns [duplicate]


Keeping vectors (from list of vectors) whose elements do not have a proper subset within that same list (using RCPP)

I have asked this question previously (see here) and received a satisfactory answer using the purrr package. However, this has proved to be a bottleneck in my program, so I would like to rewrite the section using the Rcpp package.
Proper subset: A proper subset S' of a set S is a subset that is strictly contained in S and so excludes S itself (note I am also excluding the empty set).
Suppose you have the following vectors in a list:
a = c(1,2)
b = c(1,3)
c = c(2,4)
d = c(1,2,3,4)
e = c(2,4,5)
f = c(1,2,3)
My aim is to keep only vectors which have no proper subset within the list, which in this example would be a, b and c.
Previous Solution
library(purrr)
possibilities <- list(a, b, c, d, e, f)
keep(possibilities,
     map2_lgl(.x = possibilities,
              .y = seq_along(possibilities),
              ~ !any(map_lgl(possibilities[-.y], function(z) all(z %in% .x)))))
The idea here is to avoid the O(N^3) approach and use an algorithm of lower order instead. The other answer provided here will still be slow, since it is more than O(N^2). Here is a solution that stays below O(N^2) on average; the worst case, O(N^2), occurs when all the elements are unique.
onlySet <- function(x){
  i <- 1
  repeat{
    y <- sapply(x[-1], function(el) !all(is.element(x[[1]], el)))
    if(all(y)){
      if(i == length(x)) break
      else i <- i + 1
    }
    x <- c(x[-1][y], x[1])
  }
  x
}
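A quick check on the example data (added here, not in the original answer):
onlySet(possibilities)
# keeps only c(1, 2), c(1, 3) and c(2, 4), though not necessarily in the input order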
Now to show the time difference, check out the following:
match_fun <- Vectorize(function(s1, s2) all(s1 %in% s2))
method1 <- function(a){
  mat <- outer(a, a, match_fun)
  a[colSums(mat) == 1]
}
poss <- rep(possibilities, 100)
microbenchmark::microbenchmark(method1(poss), onlySet(poss))
Unit: milliseconds
expr min lq mean median uq max neval cld
method1(poss) 840.7919 880.12635 932.255030 889.36380 923.32555 1420.1077 100 b
onlySet(poss) 1.9845 2.07005 2.191647 2.15945 2.24245 3.3656 100 a
Have you tried optimising the solution in base R first? For example, the following reproduces your expected output and uses (faster) base R array routines:
match_fun <- Vectorize(function(s1, s2) all(s1 %in% s2))
mat <- outer(possibilities, possibilities, match_fun)
possibilities[colSums(mat) == 1]
#[[1]]
#[1] 1 2
#
#[[2]]
#[1] 1 3
#
#[[3]]
#[1] 2 4
Inspired by Onyambu's performant solution, here is another base R option using a recursive function
f_recursive <- function(x, i = 1) {
  if (i > length(x)) return(x)
  idx <- which(sapply(x[-i], function(el) all(x[[i]] %in% el)))
  idx <- idx + (idx >= i)  # map positions in x[-i] back to positions in x
  if (length(idx) == 0) f_recursive(x, i + 1) else f_recursive(x[-idx], i + 1)
}
f_recursive(possibilities)
The performance is on par with Onyambu's solution.
poss <- rep(possibilities, 100)
microbenchmark::microbenchmark(
  method1(poss),
  onlySet(poss),
  f_recursive(poss))
#Unit: milliseconds
#              expr        min         lq       mean     median         uq         max neval
#     method1(poss) 682.558602 710.974831 750.325377 730.627996 765.040976 1200.562889   100
#     onlySet(poss)   1.700646   1.782713   1.870972   1.819820   1.918669    2.371646   100
# f_recursive(poss)   1.681120   1.737459   1.884685   1.806384   1.901582    3.217013   100
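The question asks for an Rcpp rewrite, and none of the answers above shows one. As a rough sketch (assuming the Rcpp package; the name no_proper_subset is made up, and the sketch uses a plain pairwise containment test rather than Onyambu's pruning scheme, so it is a starting point and not a tuned implementation):
library(Rcpp)
cppFunction('
LogicalVector no_proper_subset(List x) {
  int n = x.size();
  LogicalVector keep(n);
  for (int i = 0; i < n; ++i) {
    NumericVector xi = x[i];
    bool has_subset = false;               // does any other element sit inside x[[i]]?
    for (int j = 0; j < n && !has_subset; ++j) {
      if (i == j) continue;
      NumericVector xj = x[j];
      bool contained = true;               // is x[[j]] fully contained in x[[i]]?
      for (int k = 0; k < xj.size(); ++k) {
        bool found = false;
        for (int m = 0; m < xi.size(); ++m) {
          if (xj[k] == xi[m]) { found = true; break; }
        }
        if (!found) { contained = false; break; }
      }
      if (contained) has_subset = true;
    }
    keep[i] = !has_subset;
  }
  return keep;
}')
possibilities[no_proper_subset(possibilities)]
# keeps c(1, 2), c(1, 3) and c(2, 4)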

combine duplicate columns into one column data frame r [duplicate]


Optimized version of grep to match vector against vector

Suppose I have two vectors of characters a and b:
set.seed(123)
categ <- c("Control", "Gr", "Or", "PMT", "P450")
genes <- paste(categ, rep(1:40, each=length(categ)), sep="_")
a0 <- paste(genes, "_", rep(1:50, each=length(genes)), "_", sep="")
b0 <- paste (a0, "1", sep="")
ite <- 200
lg <- 2000
b <- b0[1:lg]
a <- (a0[1:lg])[sample(seq(lg), ite)]
I want to apply the grep function in order to find the match of each value of a in b.
Of course I could do:
sapply(a, grep, b)
but I wonder if there is something more efficient, as I will have to run this many times on much larger vectors in simulations (note that I don't want to use mclapply either, as I already use it to parallelise each iteration of my simulations):
system.time(lapply(seq(100000), function(x) sapply(a, grep, b)))
library(parallel)
system.time(mclapply(seq(100000), function(x) sapply(a, grep, b), mc.cores=8))
Since you don't use regular expressions but want to find substrings in longer strings, you can use fixed = TRUE. It is much faster.
library(microbenchmark)
microbenchmark(lapply(a, grep, b),               # original
               lapply(paste0("^", a), grep, b),  # @flodel
               lapply(a, grep, b, fixed = TRUE))
Unit: microseconds
expr min lq median uq max neval
lapply(a, grep, b) 112.633 114.2340 114.9390 116.0990 326.857 100
lapply(paste0("^", a), grep, b) 119.949 121.7380 122.7425 123.9775 191.851 100
lapply(a, grep, b, fixed = TRUE) 21.004 22.5885 23.8580 24.6110 33.608 100
Testing with longer vectors (1000 times the original length).
ar <- rep(a, 1000)
br <- rep(b, 1000)
library(microbenchmark)
microbenchmark(lapply(ar, grep, br),               # original
               lapply(paste0("^", ar), grep, br),  # @flodel
               lapply(ar, grep, br, fixed = TRUE))
Unit: seconds
expr min lq median uq max neval
lapply(ar, grep, br) 32.288139 32.564223 32.726149 32.97529 37.818299 100
lapply(paste0("^", ar), grep, br) 24.997339 25.343401 25.531138 25.71615 28.238802 100
lapply(ar, grep, br, fixed = TRUE) 2.461934 2.494759 2.513931 2.55375 4.194093 100
(This took quite a while...)
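A side note not used in this thread: when matches can only occur at the start of the string, base R's startsWith() (available since R 3.3.0) expresses the prefix test directly and avoids the regular-expression machinery entirely. A minimal sketch (prefix_match is a made-up helper; unlike grep without ^, it only reports matches at the beginning of b's elements):
prefix_match <- function(a, b) lapply(a, function(p) which(startsWith(b, p)))
res <- prefix_match(a, b)   # res[[i]] gives the positions in b that begin with a[i]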
Following on my last suggestion...
The big problem with what you are asking is that, a priori, you need to do length(a) * length(b) comparisons. However, you can take advantage of the fact that the matches here will only happen at the beginning of the strings (as I gathered from the comments).
I suggested you first split your a and b vectors into lists, after looking at the first word ("Or", "Gr", "Control", "PMT", etc.) in each item, and then only look for matches within the corresponding sets. In other words, take the items in a that start with Or_ and only look for matches among the items in b that also start with Or_.
To give you an idea of why this is efficient in terms of complexity, imagine a and b both have length n and that there are x possible prefixes, uniformly distributed throughout a and b. Then you would only have to do x * (n/x * n/x) comparisons, versus n * n in your case. That's x times fewer comparisons. And you could even imagine repeating the process using the second word, the third, etc., in a recursive way.
Now here is the code for it:
reduced.match <- function(a, b) {
  first.word <- function(string) sub("_.*", "", string)
  a.first <- first.word(a)
  b.first <- first.word(b)
  l.first <- unique(c(a.first, b.first))
  a.first <- factor(a.first, l.first)
  b.first <- factor(b.first, l.first)
  a.split <- split(a, a.first)
  b.split <- split(b, b.first)
  a.idx.split <- split(seq_along(a), a.first)
  b.idx.split <- split(seq_along(b), b.first)
  unsorted.matches <-
    Map(function(a, b, i) lapply(a, function(x) i[grep(x, b, fixed = TRUE)]),
        a.split, b.split, b.idx.split, USE.NAMES = FALSE)
  sorted.matches <-
    unlist(unsorted.matches, recursive = FALSE)[
      match(seq_along(a), unlist(a.idx.split))]
  return(sorted.matches)
}
# sample data
set.seed(123)
n <- 10000
words <- paste0(LETTERS, LETTERS, LETTERS)
a <- paste(sample(words[-1], n, TRUE),
           sample(words, n, TRUE), sep = "_")
b <- paste(sample(words[-2], n, TRUE),
           sample(words, n, TRUE), sep = "_")
# testing
identical(reduced.match(a, b), lapply(a, grep, b, fixed = TRUE))
# [1] TRUE
# benchmarks
system.time(reduced.match(a, b))
# user system elapsed
# 0.187 0.000 0.187
system.time(lapply(a, grep, b, fixed = TRUE))
# user system elapsed
# 2.915 0.002 2.920
If a and b are sorted (and a unique) and one is interested in exact matches at the beginning of the string, then the following C code will usually be relatively efficient (something on the order of length(a) + length(b) string comparisons?). The R wrapper makes sure the C code and R user get appropriate data.
f3 <- local({
    library(inline)
    .amatch <- cfunction(c(a="character", b="character"),
                         includes="#include <string.h>", '
        int len_a = Rf_length(a), len_b = Rf_length(b);
        SEXP ans = PROTECT(allocVector(INTSXP, len_b));
        memset(INTEGER(ans), 0, sizeof(int) * len_b);
        int cmp, i = 0, j = 0;
        while (i < len_a) {
            const char *ap = CHAR(STRING_ELT(a, i));
            while (j < len_b) {
                cmp = strncmp(ap, CHAR(STRING_ELT(b, j)), strlen(ap));
                if (cmp > 0) {
                    j += 1;
                } else break;
            }
            if (j == len_b)
                break;
            if (cmp == 0)
                INTEGER(ans)[j++] = i + 1;
            else if (cmp < 0) i += 1;
        }
        UNPROTECT(1);
        return(ans);')
    function(a, b) {
        locale = Sys.getlocale("LC_COLLATE")
        if (locale != "C") {
            warning('temporarily trying to set LC_COLLATE to "C"')
            Sys.setlocale("LC_COLLATE", "C")
            on.exit(Sys.setlocale("LC_COLLATE", locale))
        }
        a0 <- a
        lvls <- unique(a)
        a <- sort(lvls)
        o <- order(b)
        idx <- .amatch(a, b[o])[order(o)]
        f <- factor(a[idx[idx != 0]], levels=lvls)
        split(which(idx != 0), f)[a0]
    }
})
In comparison with this semi-friendly grep
f0 <- function(a, b) {
  a0 <- a
  a <- unique(a)
  names(a) <- a
  lapply(a, grep, b, fixed=TRUE)[a0]
}
that allows for (but doesn't pay too much of a price for) duplicate 'a' values, the timings for @flodel's data set are
> microbenchmark(f0(a, b), f3(a, b), times=5)
Unit: milliseconds
expr min lq median uq max neval
f0(a, b) 431.03595 431.45211 432.59346 433.96036 434.87550 5
f3(a, b) 15.70972 15.75976 15.93179 16.05184 16.06767 5
Unfortunately, this simple algorithm fails when one element is a prefix of another
> str(f0(c("a", "ab"), "abc"))
List of 2
$ : chr "abc"
$ : chr "abc"
> str(f3(c("a", "ab"), "abc"))
List of 2
$ : chr "abc"
$ : chr(0)
Contrary to a comment, for this data set (the random number seed needs to be specified for reproducibility)
set.seed(123)
categ <- c("Control", "Gr", "Or", "PMT", "P450")
genes <- paste(categ, rep(1:40, each=length(categ)), sep="_")
a0 <- paste0(genes, "_", rep(1:50, each=length(genes)), "_")
b0 <- paste0(a0, "1")
ite <- 50
lg <- 1000
b <- b0[1:lg]
a <- (a0[1:lg])[sample(seq(lg), ite)]
f3() returns the same values as grep
> identical(unname(f3(a, b)), lapply(a, grep, b, fixed=TRUE))
[1] TRUE
The algorithms f0 and f3 have been modified to return indexes in a named list.
I tested out on my own data the different solutions proposed by @flodel and @Sven Hohenstein (note that @Martin Morgan's method cannot be tested for the moment, as it doesn't support elements of a that are prefixes of other elements of a).
IMPORTANT NOTE: although all methods give the same result in my specific case, keep in mind that they each work in their own way and can therefore give different results depending on the structure of the data.
Here is a quick summary (the results are shown below):
In my tests, length(a) and length(b) are set to 200 or 400 and 2,000 or 10,000 respectively
there is only a single match of each value of a in b
the best method really depends on the problem, and all of them deserve to be tested for each specific case
pmatch always performs very well (notably for small vectors a and b, say of length less than 100 and 1,000 respectively - not shown below),
sapply(a, grep, b, fixed=T) and reduced.match (@flodel's method) always perform better than sapply(a, grep, b) and sapply(paste0("^", a), grep, b).
Here is the reproducible code along with the results of the tests
# set up the data set
library(microbenchmark)
categ <- c("Control", "Gr", "Or", "PMT", "P450")
genes <- paste(categ, rep(1:40, each=length(categ)), sep="_")
a0 <- paste(genes, "_", rep(1:50, each=length(genes)), "_", sep="")
b0 <- paste (a0, "1", sep="")
# length(a)==200 & length(b)==2,000
ite <- 200
lg <- 2000
b <- b0[1:lg]
a <- (a0[1:lg])[sample(seq(lg), ite)]
microbenchmark(as.vector(sapply(a, grep, b)),               # original
               as.vector(sapply(paste0("^", a), grep, b)),  # @flodel 1
               as.vector(sapply(a, grep, b, fixed = TRUE)), # @Sven Hohenstein
               unlist(reduced.match(a, b)),                 # @flodel 2
               #~ f3(a, b),                                 # @Martin Morgan
               pmatch(a, b))
Unit: milliseconds
                                        expr        min         lq     median         uq        max neval
               as.vector(sapply(a, grep, b)) 188.810585 189.256705 189.827765 191.412879 222.131220   100
  as.vector(sapply(paste0("^", a), grep, b)) 157.600510 158.113507 158.560619 160.129008 186.695822   100
 as.vector(sapply(a, grep, b, fixed = TRUE))  23.954520  24.109275  24.269991  25.923741  26.380578   100
                 unlist(reduced.match(a, b))   7.999203   8.087931   8.140260   8.237207  10.063783   100
                                pmatch(a, b)   7.459394   7.489923   7.586329   7.637560   7.888938   100
# length(a)==400 & length(b)==2,000
ite <- 400
lg <- 2000
b <- b0[1:lg]
a <- (a0[1:lg])[sample(seq(lg), ite)]
microbenchmark(as.vector(sapply(a, grep, b)),               # original
               as.vector(sapply(paste0("^", a), grep, b)),  # @flodel 1
               as.vector(sapply(a, grep, b, fixed = TRUE)), # @Sven Hohenstein
               unlist(reduced.match(a, b)),                 # @flodel 2
               #~ f3(a, b),                                 # @Martin Morgan
               pmatch(a, b))
Unit: milliseconds
                                        expr       min        lq    median        uq       max neval
               as.vector(sapply(a, grep, b)) 376.85638 379.58441 380.46107 383.26959 415.23281   100
  as.vector(sapply(paste0("^", a), grep, b)) 314.38333 316.79849 318.33426 320.92588 346.66234   100
 as.vector(sapply(a, grep, b, fixed = TRUE))  49.56848  51.54113  51.90420  52.02379  81.65053   100
                 unlist(reduced.match(a, b))  13.31185  13.44923  13.57679  15.56503  16.83750   100
                                pmatch(a, b)  15.15788  15.24773  15.36917  15.45680  17.58592   100
# length(a)==200 & length(b)==10,000
ite <- 200
lg <- 10000
b <- b0[1:lg]
a <- (a0[1:lg])[sample(seq(lg), ite)]
microbenchmark(as.vector(sapply(a, grep, b)),               # original
               as.vector(sapply(paste0("^", a), grep, b)),  # @flodel 1
               as.vector(sapply(a, grep, b, fixed = TRUE)), # @Sven Hohenstein
               unlist(reduced.match(a, b)),                 # @flodel 2
               #~ f3(a, b),                                 # @Martin Morgan
               pmatch(a, b))
Unit: milliseconds
                                        expr       min        lq    median         uq        max neval
               as.vector(sapply(a, grep, b)) 975.34831 978.55579 981.56864  986.17827 1061.89808   100
  as.vector(sapply(paste0("^", a), grep, b)) 808.79299 811.64919 814.16552  824.41931  854.26298   100
 as.vector(sapply(a, grep, b, fixed = TRUE)) 119.64240 120.41718 120.73548  121.20605  151.43524   100
                 unlist(reduced.match(a, b))  34.23893  34.56048  36.23506   36.57896   43.33285   100
                                pmatch(a, b)  37.57552  37.82128  38.01727   38.21910   40.87238   100
# length(a)==400 & length(b)==10,000
ite <- 400
lg <- 10000
b <- b0[1:lg]
a <- (a0[1:lg])[sample(seq(lg), ite)]
microbenchmark(as.vector(sapply(a, grep, b)),               # original
               as.vector(sapply(paste0("^", a), grep, b)),  # @flodel 1
               as.vector(sapply(a, grep, b, fixed = TRUE)), # @Sven Hohenstein
               unlist(reduced.match(a, b)),                 # @flodel 2
               #~ f3(a, b),                                 # @Martin Morgan
               pmatch(a, b))
Unit: milliseconds
                                        expr        min         lq     median         uq        max neval
               as.vector(sapply(a, grep, b)) 1977.69564 2003.73443 2028.72239 2076.45628 2223.94624   100
  as.vector(sapply(paste0("^", a), grep, b)) 1637.46903 1659.96661 1677.21706 1708.86306 1905.16534   100
 as.vector(sapply(a, grep, b, fixed = TRUE))  236.81745  238.62842  239.67875  241.12830  283.23043   100
                 unlist(reduced.match(a, b))   57.18344   59.09308   59.48678   59.76167   88.71846   100
                                pmatch(a, b)   75.03812   75.40420   75.60641   75.99034   90.62689   100

How to implement coalesce efficiently in R

