Fastest way to detect if vector has at least 1 NA?

What is the fastest way to detect if a vector has at least 1 NA in R? I've been using:
sum( is.na( data ) ) > 0
But that requires examining each element, coercing the logical vector to numeric, and calling sum().

As of R 3.1.0 anyNA() is the way to do this. On atomic vectors this will stop after the first NA instead of going through the entire vector as would be the case with any(is.na()). Additionally, this avoids creating an intermediate logical vector with is.na that is immediately discarded. Borrowing Joran's example:
x <- y <- runif(1e7)
x[1e4] <- NA
y[1e7] <- NA
microbenchmark::microbenchmark(any(is.na(x)), anyNA(x), any(is.na(y)), anyNA(y), times=10)
# Unit: microseconds
#           expr        min         lq        mean      median         uq
#  any(is.na(x))  13444.674  13509.454  21191.9025  13639.3065  13917.592
#       anyNA(x)      6.840     13.187     13.5283     14.1705     14.774
#  any(is.na(y)) 165030.942 168258.159 178954.6499 169966.1440 197591.168
#       anyNA(y)   7193.784   7285.107   7694.1785   7497.9265   7865.064
Notice how anyNA() is substantially faster even when the NA is the last value of the vector; this is in part because it avoids creating the intermediate logical vector.
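Worth adding (my aside, not from the original answer): in recent R versions anyNA() also takes a recursive argument, which matters when the data sit in a list rather than an atomic vector:
anyNA(list(a = 1:3, b = c(1, NA)))                    # FALSE: only the top level is inspected
anyNA(list(a = 1:3, b = c(1, NA)), recursive = TRUE)  # TRUE: descends into each element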

I'm thinking:
any(is.na(data))
should be slightly faster.

We mention this in some of our Rcpp presentations, and we actually have some benchmarks which show a pretty large gain from embedded C++ with Rcpp over the R solution, because
- a vectorised R solution still computes every single element of the vector expression, whereas
- if your goal is just to satisfy any(), you can abort after the first match -- which is what our Rcpp sugar solution does (in essence: some C++ template magic to make C++ expressions look more like R expressions; see this vignette for more).
So by getting a compiled specialised solution to work, we do indeed get a fast solution. I should add that while I have not compared this to the solutions offered in this SO question here, I am reasonably confident about the performance.
Edit: the Rcpp package also contains examples in the directory sugarPerformance. It shows a gain of several thousand for the 'sugar-can-abort-soon' case over 'R-computes-full-vector-expression' for any(), but I should add that that case does not involve is.na() but a simple boolean expression.
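To make the abort-early idea concrete, here is a minimal sketch (my own, not the code from the presentations or the sugarPerformance examples) of a hand-written early-exit NA check compiled with Rcpp::cppFunction:
library(Rcpp)
cppFunction('
  bool anyNA_cpp(NumericVector x) {
    for (R_xlen_t i = 0; i < x.size(); ++i) {
      if (NumericVector::is_na(x[i])) return true;   // stop at the first NA
    }
    return false;                                    // scanned everything, none found
  }
')
anyNA_cpp(c(1, 2, NA, 4))   # TRUE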

One could write a for loop that stops at the first NA, but its run time then depends on where the NA is (if there is none, it takes a long time):
set.seed(1234)
x <- sample(c(1:5, NA), 100000000, replace = TRUE)
nacount <- function(x){
  for(i in 1:length(x)){
    if(is.na(x[i])) {
      print(TRUE)
      break
    }
  }
}
system.time(nacount(x))
[1] TRUE
user system elapsed
0.14 0.04 0.18
system.time(any(is.na(x)))
user system elapsed
0.28 0.08 0.37
system.time(sum(is.na(x)) > 0)
user system elapsed
0.45 0.07 0.53

Here are some actual times from my (slow) machine for some of the various methods discussed so far:
x <- runif(1e7)
x[1e4] <- NA
> system.time(sum(is.na(x)) > 0)
user system elapsed
0.065 0.001 0.065
> system.time(any(is.na(x)))
user system elapsed
0.035 0.000 0.034
> system.time(match(NA, x))
user system elapsed
1.824 0.112 1.918
> system.time(NA %in% x)
user system elapsed
1.828 0.115 1.925
> system.time(which(is.na(x) == TRUE))
user system elapsed
0.099 0.029 0.127
It's not surprising that match and %in% are similar, since %in% is implemented using match.

You can try:
d <- c(1,2,3,NA,5,3)
which(is.na(d))  # the "== TRUE" comparison is redundant: is.na() already returns a logical vector

Related

Most efficient way to determine if element exists in a vector

I have several algorithms that depend on the efficiency of determining whether an element exists in a vector or not. It seems to me that %in% (which is equivalent to is.element()) should be the most efficient, as it simply returns a Boolean value. After testing several methods, to my surprise those are by far the most inefficient. Below is my analysis (the results get worse as the size of the vectors increases):
EfficiencyTest <- function(n, Lim) {
  samp1 <- sample(Lim, n)
  set1 <- sample(Lim, Lim)
  print(system.time(for(i in 1:n) {which(set1 == samp1[i])}))
  print(system.time(for(i in 1:n) {samp1[i] %in% set1}))
  print(system.time(for(i in 1:n) {is.element(samp1[i], set1)}))
  print(system.time(for(i in 1:n) {match(samp1[i], set1)}))
  a <- system.time(set1 <- sort(set1))
  b <- system.time(for (i in 1:n) {BinVecCheck(samp1[i], set1)})
  print(a + b)
}
> EfficiencyTest(10^3, 10^5)
user system elapsed    # which
0.29 0.11 0.40
user system elapsed    # %in%
19.79 0.39 20.21
user system elapsed    # is.element
19.89 0.53 20.44
user system elapsed    # match
20.04 0.28 20.33
user system elapsed    # sort + BinVecCheck
0.02 0.00 0.03
Where BinVecCheck is a binary search algorithm that I wrote that returns TRUE/FALSE. Note that I include the time it takes to sort the vector with the final method. Here is the code for the binary search:
BinVecCheck <- function(tar, vec) {
  if (tar == vec[1] || tar == vec[length(vec)]) {return(TRUE)}
  size <- length(vec)
  size2 <- trunc(size/2)
  dist <- (tar - vec[size2])
  if (dist > 0) {
    lower <- size2 - 1L
    upper <- size
  } else {
    lower <- 1L
    upper <- size2 + 1L
  }
  while (size2 > 1 && !(dist == 0)) {
    size2 <- trunc((upper - lower)/2)
    temp <- lower + size2
    dist <- (tar - vec[temp])
    if (dist > 0) {
      lower <- temp - 1L
    } else {
      upper <- temp + 1L
    }
  }
  if (dist == 0) {return(TRUE)} else {return(FALSE)}
}
Platform Info:
> sessionInfo()
R version 3.2.1 (2015-06-18)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Question
Is there a more efficient way of determining whether an element exists in a vector in R? For example, is there an R equivalent of Python's set type that greatly improves on this approach? Also, why are %in% and the like so inefficient, even when compared to the which function, which gives more information (not only does it determine existence, it also gives the indices of all matches)?
My tests aren't bearing out all of your claims, but that seems (?) to be due to cross-platform differences (which makes the question even more mysterious, and possibly worth taking up on r-devel@r-project.org, although maybe not, since the fastmatch solution below dominates anyway ...)
n <- 10^3; Lim <- 10^5
set.seed(101)
samp1 <- sample(Lim,n)
set1 <- sample(Lim,Lim)
library("rbenchmark")
library("fastmatch")
`%fin%` <- function(x, table) {
  stopifnot(require(fastmatch))
  fmatch(x, table, nomatch = 0L) > 0L
}
benchmark(which = sapply(samp1, function(x) which(set1 == x)),
          infun = sapply(samp1, function(x) x %in% set1),
          fin   = sapply(samp1, function(x) x %fin% set1),
          brc   = sapply(samp1, BinVecCheck, vec = sort(set1)),
          replications = 20,
          columns = c("test", "replications", "elapsed", "relative"))
##    test replications elapsed relative
## 4   brc           20   0.871    2.329
## 3   fin           20   0.374    1.000
## 2 infun           20   6.480   17.326
## 1 which           20  10.634   28.433
This says that %in% is about twice as fast as which -- your BinVecCheck function is 7x better, but the fastmatch solution from here gets another factor of 2. I don't know if a specialized Rcpp implementation could do better or not ...
In fact, I get different answers even when running your code:
## user system elapsed (which)
## 0.488 0.096 0.586
## user system elapsed (%in%)
## 0.184 0.132 0.315
## user system elapsed (is.element)
## 0.188 0.124 0.313
## user system elapsed (match)
## 0.148 0.164 0.312
## user system elapsed (BinVecCheck)
## 0.048 0.008 0.055
update: on r-devel Peter Dalgaard explains the platform discrepancy (which is an R version difference, not an OS difference) by pointing to the R NEWS entry:
match(x, table) is faster, sometimes by an order of magnitude, when x is of length one and incomparables is unchanged, thanks to Haverty's PR#16491.
sessionInfo()
## R Under development (unstable) (2015-10-23 r69563)
## Platform: i686-pc-linux-gnu (32-bit)
## Running under: Ubuntu precise (12.04.5 LTS)
%in% is just sugar for match, and is defined as:
"%in%" <- function(x, table) match(x, table, nomatch = 0) > 0
Both match and which hand off to low-level (compiled C) routines via .Internal(). You can actually see the source code by using the pryr package:
install.packages("pryr")
library(pryr)
pryr::show_c_source(.Internal(which(x)))
pryr::show_c_source(.Internal(match(x, table, nomatch, incomparables)))
You would be pointed to this page for which and this page for match.
which does not perform any of the casting, checking, etc. that match performs. This might explain its higher performance in your tests (but I haven't tested your results myself).
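A quick sketch (my own) for checking that on your installation with the microbenchmark package; as the r-devel note above explains, match() on a length-one x got much faster in later R releases, so the winner depends on your R version:
library(microbenchmark)
set.seed(101)
tab <- sample(1e5)
val <- tab[12345]
microbenchmark(
  which(tab == val),   # vectorised comparison over the whole table, then scan for TRUE
  match(val, tab),     # single lookup via a hash table built on the fly
  val %in% tab,        # match() wrapped to return a logical
  times = 100
)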
After many days researching this topic, I have found that the fastest method of determining existence depends on the number of elements being tested. From the answer given by @ben-bolker, %fin% looks like the clear-cut winner. This seems to be the case when the number of elements being tested (all elements in samp1) is small compared to the size of the vector (set1). Before we go any further, let's look at the binary search algorithm above.
First of all, the very first line in the original algorithm has an extremely low probability of evaluating to TRUE, so why check it every time?
if (tar==vec[1] || tar==vec[size]) {return(TRUE)}
Instead, I put this statement inside the else branch at the very end.
Secondly, determining the size of the vector every time is redundant, especially when I know the length of the test vector (set1) ahead of time. So, I added size as an argument to the algorithm and simply pass it as a variable. Below is the modified binary search code.
ModifiedBinVecCheck <- function(tar, vec, size) {
  size2 <- trunc(size/2)
  dist <- (tar - vec[size2])
  if (dist > 0) {
    lower <- size2 - 1L
    upper <- size
  } else {
    lower <- 1L
    upper <- size2 + 1L
  }
  while (size2 > 1 && !(dist == 0)) {
    size2 <- trunc((upper - lower)/2)
    temp <- lower + size2
    dist <- (tar - vec[temp])
    if (dist > 0) {
      lower <- temp - 1L
    } else {
      upper <- temp + 1L
    }
  }
  if (dist == 0) {
    return(TRUE)
  } else {
    if (tar == vec[1] || tar == vec[size]) {return(TRUE)} else {return(FALSE)}
  }
}
As we know, in order to use a binary search, your vector must be sorted, which costs time. The default sorting method for sort is shell, which can be used on all data types, but has the drawback (generally speaking) of being slower than the quick method (quick can only be used on doubles or integers). With quick as my sorting method (since we are dealing with numbers), combined with the modified binary search, we get a significant performance increase over the old binary search, depending on the case. It should be noted that fmatch improves on match only when the data type is integer, real, or character.
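An aside I'll add on fastmatch (my own note): fmatch() builds a hash table for its table argument on the first call and caches it on that object, so it pays off precisely for repeated lookups against the same table, which is the pattern in the benchmarks below.
library(fastmatch)
fmatch(samp1[1], set1)   # first call hashes set1 and caches the hash table on it
fmatch(samp1[2], set1)   # later calls against the same set1 reuse the cached hash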
Now, let's look at some test cases with differing sizes of n.
Case 1 (n = 10^3 & Lim = 10^6, so the n to Lim ratio is 1:1000):
n <- 10^3; Lim <- 10^6
set.seed(101)
samp1 <- sample(Lim,n)
set1 <- sample(Lim,Lim)
benchmark(fin    = sapply(samp1, function(x) x %fin% set1),
          brc    = sapply(samp1, ModifiedBinVecCheck, vec = sort(set1, method = "quick"), size = Lim),
          oldbrc = sapply(samp1, BinVecCheck, vec = sort(set1)),
          replications = 10,
          columns = c("test", "replications", "elapsed", "relative"))
    test replications elapsed relative
2    brc           10    0.97    4.217
1    fin           10    0.23    1.000
3 oldbrc           10    1.45    6.304
Case 2 (n = 10^4 & Lim = 10^6, so the n to Lim ratio is 1:100):
n <- 10^4; Lim <- 10^6
set.seed(101)
samp1 <- sample(Lim,n)
set1 <- sample(Lim,Lim)
benchmark(fin    = sapply(samp1, function(x) x %fin% set1),
          brc    = sapply(samp1, ModifiedBinVecCheck, vec = sort(set1, method = "quick"), size = Lim),
          oldbrc = sapply(samp1, BinVecCheck, vec = sort(set1)),
          replications = 10,
          columns = c("test", "replications", "elapsed", "relative"))
    test replications elapsed relative
2    brc           10    2.08    1.000
1    fin           10    2.16    1.038
3 oldbrc           10    2.57    1.236
Case 3 (n = 10^5 & Lim = 10^6, so the n to Lim ratio is 1:10):
n <- 10^5; Lim <- 10^6
set.seed(101)
samp1 <- sample(Lim,n)
set1 <- sample(Lim,Lim)
benchmark(fin    = sapply(samp1, function(x) x %fin% set1),
          brc    = sapply(samp1, ModifiedBinVecCheck, vec = sort(set1, method = "quick"), size = Lim),
          oldbrc = sapply(samp1, BinVecCheck, vec = sort(set1)),
          replications = 10,
          columns = c("test", "replications", "elapsed", "relative"))
    test replications elapsed relative
2    brc           10   13.13    1.000
1    fin           10   21.23    1.617
3 oldbrc           10   13.93    1.061
Case 4 (n = 10^6 & Lim = 10^6, so the n to Lim ratio is 1:1):
n <- 10^6; Lim <- 10^6
set.seed(101)
samp1 <- sample(Lim,n)
set1 <- sample(Lim,Lim)
benchmark(fin    = sapply(samp1, function(x) x %fin% set1),
          brc    = sapply(samp1, ModifiedBinVecCheck, vec = sort(set1, method = "quick"), size = Lim),
          oldbrc = sapply(samp1, BinVecCheck, vec = sort(set1)),
          replications = 10,
          columns = c("test", "replications", "elapsed", "relative"))
    test replications elapsed relative
2    brc           10  124.61    1.000
1    fin           10  214.20    1.719
3 oldbrc           10  127.39    1.022
As you can see, as n gets large relative to Lim, the efficiency of the binary search (both versions) starts to dominate. In Case 1, %fin% was over 4x faster than the modified binary search, in Case 2 there was almost no difference, in Case 3 we really start to see the binary search's dominance, and in Case 4 the modified binary search is almost twice as fast as %fin%.
Thus, to answer the question "Which method is faster?": it depends. %fin% is faster when the number of lookups is small relative to the size of the test vector, and ModifiedBinVecCheck is faster when the number of lookups is large relative to the test vector.
any( x == "foo" ) should be plenty fast if you can be sure that x is free of NAs. If you may have NAs, R 3.3 has a speedup for "%in%" that will help.
For binary search, see findInterval before rolling your own. This doesn't sound like a job for binary search unless x is constant and sorted.
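As a sketch of that suggestion (my own illustration, assuming the lookup table is already sorted and NA-free), membership can be tested with findInterval(), which does the binary search in compiled code and is vectorised over the values being looked up:
fi_in <- function(x, sorted_table) {
  pos <- findInterval(x, sorted_table)           # index of the last table element <= x
  pos > 0L & sorted_table[pmax(pos, 1L)] == x    # present only if that element equals x
}
fi_in(c(5, 4, -1), c(1, 3, 5, 7))   # TRUE FALSE FALSE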

replacement for na.locf.xts (extremely slow when used with a multicolumn xts)

The R function xts:::na.locf.xts is extremely slow when used with a multi-column xts of more than a few columns.
There is indeed a loop over the columns in the code of na.locf.xts. I am trying to find a way to avoid this loop.
Any ideas?
The loop in na.locf.xts is slow because it creates a copy of the entire object for each column in the object. The loop itself isn't slow; the copies created by [.xts are slow.
There's an experimental (and therefore unexported) version of na.locf.xts on R-Forge that moves the loop over columns to C, which avoids copying the object. It's quite a bit faster for very large objects.
library(xts)
set.seed(21)
m <- replicate(20, rnorm(1e6))
is.na(m) <- sample(length(m), 1e5)   # note: length(m); x doesn't exist yet at this point
x <- xts(m, Sys.time()-1e6:1)
y <- x[1:1e5,1:3]
> # smaller objects
> system.time(a <- na.locf(y))
user system elapsed
0.008 0.000 0.008
> system.time(b <- xts:::.na.locf.xts(y))
user system elapsed
0.000 0.000 0.003
> identical(a,b)
[1] TRUE
> # larger objects
> system.time(a <- na.locf(x))
user system elapsed
1.620 1.420 3.064
> system.time(b <- xts:::.na.locf.xts(x))
user system elapsed
0.124 0.092 0.220
> identical(a,b)
[1] TRUE
timeIndex <- index(x)
x <- apply(x, 2, na.locf)
x <- as.xts(x, order.by = timeIndex)
This avoids the column-by-column data copying. Without this, when filling the nth column, you make a copy of 1 : (n - 1) columns and append the nth column to it, which becomes prohibitively slow when n is large.
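For what it's worth, the fill-forward step itself can also be written without an element-by-element loop; here is a base-R sketch (my own, leaving leading NAs as NA) of the idea for a single column:
locf_vec <- function(x) {
  idx  <- cumsum(!is.na(x))      # how many non-NA values seen so far (0 before the first one)
  vals <- c(NA, x[!is.na(x)])    # prepend NA so that idx == 0 maps back to NA
  vals[idx + 1]
}
locf_vec(c(NA, 1, NA, NA, 4, NA))   # NA 1 1 1 4 4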

Why is apply() method slower than a for loop in R?

As a matter of best practices, I'm trying to determine if it's better to create a function and apply() it across a matrix, or if it's better to simply loop a matrix through the function. I tried it both ways and was surprised to find apply() is slower. The task is to take a vector and evaluate it as either being positive or negative and then return a vector with 1 if it's positive and -1 if it's negative. The mash() function loops and the squish() function is passed to the apply() function.
million <- as.matrix(rnorm(100000))
mash <- function(x){
  for(i in 1:NROW(x))
    if(x[i] > 0) {
      x[i] <- 1
    } else {
      x[i] <- -1
    }
  return(x)
}
squish <- function(x){
  if(x > 0) {
    return(1)
  } else {
    return(-1)
  }
}
ptm <- proc.time()
loop_million <- mash(million)
proc.time() - ptm
ptm <- proc.time()
apply_million <- apply(million,1, squish)
proc.time() - ptm
loop_million results:
user system elapsed
0.468 0.008 0.483
apply_million results:
user system elapsed
1.401 0.021 1.423
What is the advantage to using apply() over a for loop if performance is degraded? Is there a flaw in my test? I compared the two resulting objects for a clue and found:
> class(apply_million)
[1] "numeric"
> class(loop_million)
[1] "matrix"
Which only deepens the mystery. The apply() function cannot accept a simple numeric vector, which is why I cast it with as.matrix() in the beginning. But then it returns a numeric vector. The for loop is fine with a simple numeric vector, and it returns an object of the same class as the one passed to it.
The point of the apply (and plyr) family of functions is not speed, but expressiveness. They also tend to prevent bugs because they eliminate the bookkeeping code needed with loops.
Lately, answers on stackoverflow have over-emphasised speed. Your code will get faster on its own as computers get faster and R-core optimises the internals of R. Your code will never get more elegant or easier to understand on its own.
In this case you can have the best of both worlds: an elegant answer using vectorisation that is also very fast: (million > 0) * 2 - 1.
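A quick check (my addition) that the one-liner reproduces what the loop computes:
vec_million <- (million > 0) * 2 - 1   # logical matrix -> 1/-1, dimensions preserved
all.equal(vec_million, loop_million)   # TRUE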
As Chase said: Use the power of vectorization. You're comparing two bad solutions here.
To clarify why your apply solution is slower:
Within the for loop, you actually use the vectorized indices of the matrix, meaning there is no type conversion going on. I'm glossing over the details a bit here, but basically the internal calculation more or less ignores the dimensions: they're just kept as an attribute and returned with the vector representing the matrix. To illustrate:
> x <- 1:10
> attr(x,"dim") <- c(5,2)
> y <- matrix(1:10,ncol=2)
> all.equal(x,y)
[1] TRUE
Now, when you use the apply, the matrix is split up internally in 100,000 row vectors, every row vector (i.e. a single number) is put through the function, and in the end the result is combined into an appropriate form. The apply function reckons a vector is best in this case, and thus has to concatenate the results of all rows. This takes time.
The sapply function also first uses as.vector(unlist(...)) to convert everything to a vector, and at the end tries to simplify the answer into a suitable form. This takes time as well, so sapply might be slower here too; yet, it isn't on my machine.
If apply were a solution here (and it isn't), you could compare:
> system.time(loop_million <- mash(million))
user system elapsed
0.75 0.00 0.75
> system.time(sapply_million <- matrix(unlist(sapply(million,squish,simplify=F))))
user system elapsed
0.25 0.00 0.25
> system.time(sapply2_million <- matrix(sapply(million,squish)))
user system elapsed
0.34 0.00 0.34
> all.equal(loop_million,sapply_million)
[1] TRUE
> all.equal(loop_million,sapply2_million)
[1] TRUE
You can use lapply or sapply on vectors if you want. However, why not use the appropriate tool for the job, in this case ifelse()?
> ptm <- proc.time()
> ifelse_million <- ifelse(million > 0,1,-1)
> proc.time() - ptm
user system elapsed
0.077 0.007 0.093
> all.equal(ifelse_million, loop_million)
[1] TRUE
And for comparison's sake, here are the two comparable runs using the for loop and sapply:
> ptm <- proc.time()
> apply_million <- sapply(million, squish)
> proc.time() - ptm
user system elapsed
0.469 0.004 0.474
> ptm <- proc.time()
> loop_million <- mash(million)
> proc.time() - ptm
user system elapsed
0.408 0.001 0.417
It is far faster in this case to do index-based replacement than either the ifelse(), the *apply() family, or the loop:
> million <- million2 <- as.matrix(rnorm(100000))
> system.time(million3 <- ifelse(million > 0, 1, -1))
user system elapsed
0.046 0.000 0.044
> system.time({million2[(want <- million2 > 0)] <- 1; million2[!want] <- -1})
user system elapsed
0.006 0.000 0.007
> all.equal(million2, million3)
[1] TRUE
It is well worth having all these tools at your finger tips. You can use the one that makes the most sense to you (as you need to understand the code months or years later) and then start to move to more optimised solutions if compute time becomes prohibitive.
A better example of the speed advantage of a for loop:
for_loop <- function(x){
  out <- vector(mode="numeric", length=NROW(x))
  for(i in seq(length(out)))
    out[i] <- max(x[i,])
  return(out)
}
apply_loop <- function(x){
  apply(x,1,max)
}
million <- matrix(rnorm(1000000),ncol=10)
> system.time(apply_loop(million))
user system elapsed
0.57 0.00 0.56
> system.time(for_loop(million))
user system elapsed
0.32 0.00 0.33
EDIT
Version suggested by Eduardo.
max_col <- function(x){
  x[cbind(seq(NROW(x)), max.col(x))]
}
By row
> system.time(for_loop(million))
user system elapsed
0.99 0.00 1.11
> system.time(apply_loop(million))
user system elapsed
1.40 0.00 1.44
> system.time(max_col(million))
user system elapsed
0.06 0.00 0.06
By column
> system.time(for_loop(t(million)))
user system elapsed
0.05 0.00 0.05
> system.time(apply_loop(t(million)))
user system elapsed
0.07 0.00 0.07
> system.time(max_col(t(million)))
user system elapsed
0.04 0.00 0.06

Redefine Data Frame in R

I have a data frame with a column, database$VAR, which has values of 0's and 1's.
How can I redefine the data frame so that the rows with 1's are removed?
Thanks!
TMTOWTDI
Using subset:
df.new <- subset(df, VAR == 0)
EDIT:
David's solution seems to be the fastest on my machine; subset seems to be the slowest. I won't even pretend to try to understand what's going on under the hood that accounts for these differences:
> df <- data.frame(y=rep(c(1,0), times=1000000))
>
> system.time(df[ -which(df[,"y"]==1), , drop=FALSE])
user system elapsed
0.16 0.05 0.23
> system.time(df[which(df$y == 0), ])
user system elapsed
0.03 0.01 0.06
> system.time(subset(df, y == 0))
user system elapsed
0.14 0.09 0.27
I'd upvote the answer using "subset" if I had the reputation for it :-) . You can also use a logical vector directly for subsetting -- no need for "which":
d <- data.frame(VAR = c(0,1,0,1,1))
d[d$VAR == 0, , drop=FALSE]
I'm surprised to find the logical version a little faster in at least one case. (I expected the "which" version might win due to R possibly preallocating the proper amount of storage for the result.)
> d <- data.frame(y=rep(c(1,0), times=1000000))
> system.time(d[which(d$y == 0), ])
user system elapsed
0.119 0.067 0.188
> system.time(d[d$y == 0, ])
user system elapsed
0.049 0.024 0.074
Try this:
R> df <- data.frame(VAR = c(0,1,0,1,1))
R> df[ -which(df[,"VAR"]==1), , drop=FALSE]
VAR
1 0
3 0
R>
We use which(booleanExpr) to get the indices for which your condition holds, then negate these indices (the leading minus sign) to exclude them, and lastly use drop=FALSE to prevent our one-column data.frame from collapsing into a vector.

Is R's apply family more than syntactic sugar?

...regarding execution time and / or memory.
If this is not true, prove it with a code snippet. Note that speedup by vectorization does not count. The speedup must come from apply (tapply, sapply, ...) itself.
The apply functions in R don't provide improved performance over other looping functions (e.g. for). One exception to this is lapply which can be a little faster because it does more work in C code than in R (see this question for an example of this).
But in general, the rule is that you should use an apply function for clarity, not for performance.
I would add to this that apply functions have no side effects, which is an important distinction when it comes to functional programming with R. This can be overridden by using assign or <<-, but that can be very dangerous. Side effects also make a program harder to understand since a variable's state depends on the history.
Edit:
Just to emphasize this with a trivial example that recursively calculates the Fibonacci sequence; this could be run multiple times to get an accurate measure, but the point is that none of the methods have significantly different performance:
> fibo <- function(n) {
+ if ( n < 2 ) n
+ else fibo(n-1) + fibo(n-2)
+ }
> system.time(for(i in 0:26) fibo(i))
user system elapsed
7.48 0.00 7.52
> system.time(sapply(0:26, fibo))
user system elapsed
7.50 0.00 7.54
> system.time(lapply(0:26, fibo))
user system elapsed
7.48 0.04 7.54
> library(plyr)
> system.time(ldply(0:26, fibo))
user system elapsed
7.52 0.00 7.58
Edit 2:
Regarding the usage of parallel packages for R (e.g. rpvm, rmpi, snow), these do generally provide apply family functions (even the foreach package is essentially equivalent, despite the name). Here's a simple example of the sapply function in snow:
library(snow)
cl <- makeSOCKcluster(c("localhost","localhost"))
parSapply(cl, 1:20, get("+"), 3)
This example uses a socket cluster, for which no additional software needs to be installed; otherwise you will need something like PVM or MPI (see Tierney's clustering page). snow has the following apply functions:
parLapply(cl, x, fun, ...)
parSapply(cl, X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE)
parApply(cl, X, MARGIN, FUN, ...)
parRapply(cl, x, fun, ...)
parCapply(cl, x, fun, ...)
It makes sense that apply functions should be used for parallel execution since they have no side effects. When you change a variable value within a for loop, it is globally set. On the other hand, all apply functions can safely be used in parallel because changes are local to the function call (unless you try to use assign or <<-, in which case you can introduce side effects). Needless to say, it's critical to be careful about local vs. global variables, especially when dealing with parallel execution.
Edit:
Here's a trivial example to demonstrate the difference between for and *apply so far as side effects are concerned:
> df <- 1:10
> # *apply example
> lapply(2:3, function(i) df <- df * i)
> df
[1] 1 2 3 4 5 6 7 8 9 10
> # for loop example
> for(i in 2:3) df <- df * i
> df
[1] 6 12 18 24 30 36 42 48 54 60
Note how the df in the parent environment is altered by for but not *apply.
Sometimes the speedup can be substantial, for example when you have to nest for loops to get the average based on a grouping of more than one factor. Here you have two approaches that give you exactly the same result:
set.seed(1) #for reproducability of the results
# The data
X <- rnorm(100000)
Y <- as.factor(sample(letters[1:5],100000,replace=T))
Z <- as.factor(sample(letters[1:10],100000,replace=T))
# the function forloop that averages X over every combination of Y and Z
forloop <- function(x, y, z){
  # These are here for optimization, so that the functions
  # levels() and length() don't have to be called more than once.
  ylev <- levels(y)
  zlev <- levels(z)
  n <- length(ylev)
  p <- length(zlev)
  out <- matrix(NA, ncol = p, nrow = n)
  for(i in 1:n){
    for(j in 1:p){
      out[i,j] <- mean(x[y == ylev[i] & z == zlev[j]])
    }
  }
  rownames(out) <- ylev
  colnames(out) <- zlev
  return(out)
}
# Used on the generated data
forloop(X,Y,Z)
# The same using tapply
tapply(X,list(Y,Z),mean)
Both give exactly the same result: a 5 x 10 matrix with the averages and named rows and columns. But:
> system.time(forloop(X,Y,Z))
user system elapsed
0.94 0.02 0.95
> system.time(tapply(X,list(Y,Z),mean))
user system elapsed
0.06 0.00 0.06
There you go. What did I win? ;-)
...and as I just wrote elsewhere, vapply is your friend!
...it's like sapply, but you also specify the return value type which makes it much faster.
foo <- function(x) x+1
y <- numeric(1e6)
system.time({z <- numeric(1e6); for(i in y) z[i] <- foo(i)})
# user system elapsed
# 3.54 0.00 3.53
system.time(z <- lapply(y, foo))
# user system elapsed
# 2.89 0.00 2.91
system.time(z <- vapply(y, foo, numeric(1)))
# user system elapsed
# 1.35 0.00 1.36
Jan. 1, 2020 update:
system.time({z1 <- numeric(1e6); for(i in seq_along(y)) z1[i] <- foo(y[i])})
# user system elapsed
# 0.52 0.00 0.53
system.time(z <- lapply(y, foo))
# user system elapsed
# 0.72 0.00 0.72
system.time(z3 <- vapply(y, foo, numeric(1)))
# user system elapsed
# 0.7 0.0 0.7
identical(z1, z3)
# [1] TRUE
I've written elsewhere that an example like Shane's doesn't really stress the difference in performance among the various kinds of looping syntax, because the time is all spent within the function rather than actually stressing the loop. Furthermore, the code unfairly compares a for loop that discards its results with apply family functions that return a value. Here's a slightly different example that emphasizes the point.
foo <- function(x) {
  x <- x+1
}
y <- numeric(1e6)
system.time({z <- numeric(1e6); for(i in y) z[i] <- foo(i)})
# user system elapsed
# 4.967 0.049 7.293
system.time(z <- sapply(y, foo))
# user system elapsed
# 5.256 0.134 7.965
system.time(z <- lapply(y, foo))
# user system elapsed
# 2.179 0.126 3.301
If you plan to save the result then apply family functions can be much more than syntactic sugar.
(The simple unlist of z takes only 0.2 s, so the lapply is still much faster. Initializing z in the for loop is quite fast; since I'm giving the average of the last 5 of 6 runs, moving that step outside the system.time would hardly affect things.)
One more thing to note though is that there is another reason to use apply family functions independent of their performance, clarity, or lack of side effects. A for loop typically promotes putting as much as possible within the loop. This is because each loop requires setup of variables to store information (among other possible operations). Apply statements tend to be biased the other way. Often times you want to perform multiple operations on your data, several of which can be vectorized but some might not be able to be. In R, unlike other languages, it is best to separate those operations out and run the ones that are not vectorized in an apply statement (or vectorized version of the function) and the ones that are vectorized as true vector operations. This often speeds up performance tremendously.
Taking Joris Meys' example, where he replaces a traditional for loop with a handy R function, we can use it to show the efficiency of writing code in a more R-friendly manner to get a similar speedup without the specialized function.
set.seed(1) #for reproducability of the results
# The data - copied from Joris Meys answer
X <- rnorm(100000)
Y <- as.factor(sample(letters[1:5],100000,replace=T))
Z <- as.factor(sample(letters[1:10],100000,replace=T))
# an R way to generate tapply functionality that is fast and
# shows more general principles about fast R coding
YZ <- interaction(Y, Z)
XS <- split(X, YZ)
m <- vapply(XS, mean, numeric(1))
m <- matrix(m, nrow = length(levels(Y)))
rownames(m) <- levels(Y)
colnames(m) <- levels(Z)
m
This winds up being much faster than the for loop and just a little slower than the built-in optimized tapply function. It's not because vapply is so much faster than for, but because it only performs one operation in each iteration of the loop. In this code everything else is vectorized. In Joris Meys' traditional for loop many (7?) operations occur in each iteration, and there's quite a bit of setup just for it to execute. Note also how much more compact this is than the for version.
When applying a function over subsets of a vector, tapply can be quite a bit faster than a for loop. Example:
df <- data.frame(id = rep(letters[1:10], 100000),
                 value = rnorm(1000000))
f1 <- function(x)
  tapply(x$value, x$id, sum)
f2 <- function(x){
  res <- 0
  for(i in seq_along(l <- unique(x$id)))
    res[i] <- sum(x$value[x$id == l[i]])
  names(res) <- l
  res
}
library(microbenchmark)
> microbenchmark(f1(df), f2(df), times=100)
Unit: milliseconds
   expr      min       lq   median       uq      max neval
 f1(df) 28.02612 28.28589 28.46822 29.20458 32.54656   100
 f2(df) 38.02241 41.42277 41.80008 42.05954 45.94273   100
apply, however, doesn't provide any speed increase in most situations, and in some cases can even be a lot slower:
mat <- matrix(rnorm(1000000), nrow=1000)
f3 <- function(x)
  apply(x, 2, sum)
f4 <- function(x){
  res <- 0
  for(i in 1:ncol(x))
    res[i] <- sum(x[,i])
  res
}
> microbenchmark(f3(mat), f4(mat), times=100)
Unit: milliseconds
    expr      min       lq   median       uq      max neval
 f3(mat) 14.87594 15.44183 15.87897 17.93040 19.14975   100
 f4(mat) 12.01614 12.19718 12.40003 15.00919 40.59100   100
But for these situations we've got colSums and rowSums:
f5 <- function(x)
  colSums(x)
> microbenchmark(f5(mat), times=100)
Unit: milliseconds
    expr      min       lq   median       uq      max neval
 f5(mat) 1.362388 1.405203 1.413702 1.434388 1.992909   100

Resources