Can you implement 'sweep' using apply in R?

I'm brushing up on my R skills and finally feel like I've mastered the strange sweep function, e.g.
df <- data.frame(a = 1:3, b = 2:4)
sweep(df, MARGIN = 2, STATS = c(5, 10), FUN = "*")
## a b
## 1 5 20
## 2 10 30
## 3 15 40
and, more usefully here, in a tutorial I'm working on about implementing a spatial interaction model in R.
They say that a sign you understand something is that you can say it in many ways, and I think this applies more in programming than almost anywhere else. Yet, even though the problem that sweep solves seems apply-esque, I have NO IDEA whether the two are to some degree interchangeable.
So, in order to improve my own understanding of R, is there any way to do the above procedure using apply?

This is close:
t(apply(df, 1, `*`, c(5,10)))
The row names are lost but otherwise the output is the same
> t(apply(df, 1, '*', c(5,10)))
a b
[1,] 5 20
[2,] 10 30
[3,] 15 40
To break this down, say we were doing this by hand for the first row of df, we'd write
> df[1, ] * c(5, 10)
a b
1 5 20
which is the same as calling the '*'() function with arguments df[1, ] and c(5, 10)
> '*'(df[1, ], c(5, 10))
a b
1 5 20
From this, we have enough to set up an apply() call:
we work by rows, hence MARGIN = 1,
we apply the function '*'() so FUN = '*'
we need to supply the second argument, c(5,10), to '*'(), which we do via the ... argument of apply().
The only extra thing to realise is how apply() sticks together the vector resulting from each "iteration"; here they are bound column-wise and hence we need to transpose the result from apply() so that we get the same output as sweep().
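As a hedged aside (not part of the original answer): because the sweep here works column by column, you can also get the same numbers without transposing by letting mapply() iterate over the columns of the data frame; the result is a matrix rather than a data frame.
mapply(`*`, df, c(5, 10))
##       a  b
## [1,]  5 20
## [2,] 10 30
## [3,] 15 40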

As additional information, since questions about sweep are recurring, quick benchmarking gives (on an Intel i7-8700 with Windows):
x <- matrix(data = 20000*5000, nrow = 20000, ncol = 5000)
system.time(expr = {
aa <- colMeans(x = x)
bb <- sweep(x = x, MARGIN = 2, STATS = aa, FUN = "-")
})
# user system elapsed
# 4.69 0.16 4.84
system.time(expr = {
bbb <- apply(X = x, MARGIN = 1, FUN = function(z) z - mean(x = z))
bbb <- t(x = bbb)
})
# user system elapsed
# 6.28 0.55 6.85
Meaning that sweep is more efficient when applicable.
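A hedged footnote to this benchmark: for the specific task of subtracting column means, base R's scale() should give the same numbers as the sweep() call above (an aside, not part of the original comparison):
cc <- scale(x, center = TRUE, scale = FALSE)  # subtracts colMeans(x) from each column
all.equal(bb, cc, check.attributes = FALSE)   # expected TRUE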

Related

R - Calculate rolling mean of previous k non-NA values

I'm trying to calculate the rolling mean of the previous k non-NA values within the dplyr/tidyverse framework. I've written a function that seems to work but was wondering if there's already a function from some package (which will probably be much more efficient than my attempt) doing exactly this. An example dataset:
tmp.df <- data.frame(
x = c(NA, 1, 2, NA, 3, 4, 5, NA, NA, NA, 6, 7, NA)
)
Let's say I want the rolling mean of the previous 3 non-NA values. Then the output y should be:
x y
1 NA NA
2 1 NA
3 2 NA
4 NA NA
5 3 NA
6 4 2
7 5 3
8 NA 4
9 NA 4
10 NA 4
11 6 4
12 7 5
13 NA 6
The first 5 elements of y are NAs because the first time x has 3 previous non-NA values is on row 6 and the average of those 3 elements is 2. The next y elements are self-explanatory. Row 9 gets a 4 because the 3 previous non-NA values of x are in rows 5, 6, and 7 and so on.
My attempt is this:
roll_mean_previous_k <- function(x, k){
require(dplyr)
res <- NA
lagged_vector <- dplyr::lag(x)
lagged_vector_without_na <- lagged_vector[!is.na(lagged_vector)]
previous_k_values <- tail(lagged_vector_without_na, k)
if (length(previous_k_values) >= k) res <- mean(previous_k_values)
res
}
to be used as follows (using the slide_dbl function from the slider package):
library(dplyr)
tmp.df %>%
mutate(
y = slider::slide_dbl(x, roll_mean_previous_k, k = 3, .before = Inf)
)
which gives the desired output. However, I'm wondering if there's a ready-made, and (as mentioned before) more efficient way of doing this. I should mention that I know of rollmean and roll_mean from the zoo and RcppRoll packages respectively, but unless I'm mistaken, they seem to work on a fixed rolling window with the option of dealing with NA values (e.g ignoring them). In my case, I want to "extend" my window to include k non-NA values.
Any thoughts/suggestions are welcome.
EDIT - SIMULATION RESULTS
Thank you to all contributors. First of all, I had not mentioned that my datasets are indeed much larger and are processed often, so any performance improvements are most welcome. I therefore ran the following simulation to check execution times before deciding which answer to accept. Note that some of the answers needed small tweaks to return the desired output; if you feel that your solution is misrepresented (and therefore looks less efficient than intended), feel free to let me know and I'll edit accordingly. I've used G. Grothendieck's trick from his answer below to remove the need for if-else checks regarding the length of the lagged, non-NA vector.
So here's the simulation code:
library(tidyverse)
library(runner)
library(zoo)
library(slider)
library(purrr)
library(microbenchmark)
set.seed(20211004)
test_vector <- sample(x = 100, size = 1000, replace = TRUE)
test_vector[sample(1000, size = 250)] <- NA
# Based on GoGonzo's answer and the runner package
f_runner <- function(z, k){
  runner(
    x = z,
    f = function(x) {
      mean(`length<-`(tail(na.omit(head(x, -1)), k), k))
    }
  )
}
# Based on my initial answer (but simplified), also mentioned by GoGonzo
f_slider <- function(z, k){
  slide_dbl(
    z,
    function(x) {
      mean(`length<-`(tail(na.omit(head(x, -1)), k), k))
    },
    .before = Inf
  )
}
# Based on helios' answer. Returns the correct results, but with a warning.
f_helios <- function(z, k){
  reduced_vec <- na.omit(z)
  unique_means <- rollapply(reduced_vec, width = k, mean)
  start <- which(!is.na(z))[k] + 1
  repeater <- which(is.na(z)) + 1
  repeater_cut <- repeater[(repeater > start - 1) & (repeater <= length(z))]
  final <- as.numeric(rep(NA, length(z)))
  index <- start:length(z)
  final[setdiff(index, repeater_cut)] <- unique_means
  final[start:length(final)] <- na.locf(final)
  final
}
# Based on G. Grothendieck's answer (but I couldn't get it to run with the performance improvements)
f_zoo <- function(z, k){
  rollapplyr(
    z,
    seq_along(z),
    function(x, k){
      mean(`length<-`(tail(na.omit(head(x, -1)), k), k))
    },
    k
  )
}
# Based on AnilGoyal's answer
f_purrr <- function(z, k){
  map_dbl(
    seq_along(z),
    ~ ifelse(
      length(tail(na.omit(z[1:(.x - 1)]), k)) == k,
      mean(tail(na.omit(z[1:(.x - 1)]), k)),
      NA
    )
  )
}
# Check if all are identical #
all(
  sapply(
    list(
      # f_helios(test_vector, 10),
      f_purrr(test_vector, 10),
      f_runner(test_vector, 10),
      f_zoo(test_vector, 10)
    ),
    FUN = identical,
    f_slider(test_vector, 10)
  )
)
# Run benchmarking #
microbenchmark(
# f_helios(test_vector, 10),
f_purrr(test_vector, 10),
f_runner(test_vector, 10),
f_slider(test_vector, 10),
f_zoo(test_vector, 10)
)
And the results:
Unit: milliseconds
expr min lq mean median uq max neval cld
f_purrr(test_vector, 10) 31.9377 37.79045 39.64343 38.53030 39.65085 104.9613 100 c
f_runner(test_vector, 10) 23.7419 24.25170 29.12785 29.23515 30.32485 98.7239 100 b
f_slider(test_vector, 10) 20.6797 21.71945 24.93189 26.52460 27.67250 32.1847 100 a
f_zoo(test_vector, 10) 43.4041 48.95725 52.64707 49.59475 50.75450 122.0793 100 d
Based on the above, and unless the code can be further improved, it seems that the slider and runner solutions are the fastest. Any final suggestions are more than welcome.
Many thanks for your time!!
With runner it will be something like the mean of a 3-element tail window of non-NA values. You can achieve the same result with slider:
library(runner)
tmp.df <- data.frame(
x = c(NA, 1, 2, NA, 3, 4, 5, NA, NA, NA, 6, 7, NA)
)
# using runner
tmp.df$y_runner <- runner(
  x = tmp.df$x,
  f = function(x) {
    mean(
      tail(
        x[!is.na(x)],
        3
      )
    )
  }
)
# using slider
tmp.df$y_slider <- slider::slide_dbl(
  tmp.df$x,
  function(x) {
    mean(
      tail(
        x[!is.na(x)],
        3
      )
    )
  },
  .before = Inf
)
tmp.df
# x y_runner y_slider
# 1 NA NaN NaN
# 2 1 1.0 1.0
# 3 2 1.5 1.5
# 4 NA 1.5 1.5
# 5 3 2.0 2.0
# 6 4 3.0 3.0
# 7 5 4.0 4.0
# 8 NA 4.0 4.0
# 9 NA 4.0 4.0
# 10 NA 4.0 4.0
# 11 6 5.0 5.0
# 12 7 6.0 6.0
# 13 NA 6.0 6.0
rollapplyr. Regarding the comment about rollmean in the question, zoo also has rollapply and rollapplyr (right-aligned), and those allow different widths (and offsets) for each component of the input by specifying a vector (as we do here) or a list for width -- see ?rollapply for more info. We use a relatively naive vector of widths below and also show some improved width vectors which run faster.
Operation. Create a Mean function which takes a vector, removes the last element and all NA's, and takes the last k elements of what is left, extending it to k elements with NA's as needed. Finally, take the mean of that. We use rollapplyr to apply that to x with a width of seq_along(x).
Performance improvements. With this small data the following likely don't make much difference but if you have larger data you could try these which might improve the speed:
replace na.omit with na_rm from the collapse package
replace the second argument of rollapplyr with the code shown here.
The idea here is that the sum of the lengths of the k+1 longest runs of NA, plus k+1, forms a bound on the number of elements that we need to consider. This (plus using na_rm) ran about 25% faster than the code in the question on a problem with 1300 rows (formed from 100 copies of the data in the question) and does not add much extra code.
pmin(with(rle(is.na(x)), sum(tail(sort(lengths[values]), k+1)))+k+1, seq_along(x))
replace the second argument of rollapplyr with w, where w is shown here. The idea here is to use findInterval to find the element k non-NAs back, which provides an even tighter bound. This one (plus using na_rm) ran nearly twice as fast as the code in the question when tried with the same 1300 rows, at the expense of adding 2 more lines of code (a combined sketch of these improvements appears after the example output below).
tt <- length(x) - rev(cumsum(rev(!is.na(x))))
w <- seq_along(tt) - findInterval(tt - k - 1, tt)
Code. With the data in the question the code below (not using the above improvements) ran slightly faster (not a lot) than the code in the question based on my benchmarking and it is only two lines of code.
library(dplyr)
library(zoo)
Mean <- function(x, k) mean(`length<-`(tail(na.omit(head(x, -1)), k), k))
tmp.df %>% mutate(y = rollapplyr(x, seq_along(x), Mean, k = 3))
giving:
x y
1 NA NA
2 1 NA
3 2 NA
4 NA NA
5 3 NA
6 4 2
7 5 3
8 NA 4
9 NA 4
10 NA 4
11 6 4
12 7 5
13 NA 6
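A hedged sketch wiring together the two improvements described above (my combination, not code from the answer itself; it assumes collapse::na_rm drops NAs the way na.omit does, as mentioned):
library(zoo)
library(collapse)
k  <- 3
xx <- tmp.df$x
tt <- length(xx) - rev(cumsum(rev(!is.na(xx))))
w  <- seq_along(tt) - findInterval(tt - k - 1, tt)   # tighter width vector from above
Mean2 <- function(x, k) mean(`length<-`(tail(na_rm(head(x, -1)), k), k))
tmp.df$y2 <- rollapplyr(xx, w, Mean2, k = k)         # should reproduce y above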
Since I am not aware of a ready-made way of computing your output in any standard library, I came up with the implementation roll_mean_k_efficient below, which seems to speed up your computations considerably. Note that this implementation makes use of the rollapply and the na.locf methods from the zoo package.
rm(list = ls())
library("zoo")
library("rbenchmark")
library("dplyr")
x = rep(c(NA, 1, 2, NA, 3, 4, 5, NA, NA, NA, 6, 7, NA), 100)
# your sample (extended)
tmp.df <- data.frame(
x = rep(c(NA, 1, 2, NA, 3, 4, 5, NA, NA, NA, 6, 7, NA), 100)
)
# enhanced implementation
roll_mean_k_efficient <- function(x, k){
  reduced_vec <- na.omit(x)
  unique_means <- rollapply(reduced_vec, width = k, mean)
  start <- which(!is.na(x))[k] + 1
  repeater <- which(is.na(x)) + 1
  repeater_cut <- repeater[(repeater > start - 1) & (repeater <= length(x))]
  final <- as.numeric(rep(NA, length(x)))
  index <- start:length(x)
  final[setdiff(index, repeater_cut)] <- unique_means
  final[start:length(final)] <- na.locf(final)
  final
}
# old implementation
roll_mean_previous_k <- function(x, k){
  res <- NA
  lagged_vector <- dplyr::lag(x)
  lagged_vector_without_na <- lagged_vector[!is.na(lagged_vector)]
  previous_k_values <- tail(lagged_vector_without_na, k)
  if (length(previous_k_values) >= k) res <- mean(previous_k_values)
  res
}
# wrapper function for the benchmarking below
roll_mean_benchmark <- function(){
  res <- tmp.df %>%
    mutate(
      y = slider::slide_dbl(x, roll_mean_previous_k, k = 3, .before = Inf)
    )
  return(res)
}
# some benchmarking
benchmark(roll_mean_k_efficient(x = x, k = 3),
          roll_mean_benchmark(),
          columns = c('test', 'elapsed', 'replications'),
          replications = 100)
Furthermore, I extended your example vector x to get some more reliable benchmark results via the benchmark function from the rbenchmark package.
In my case the benchmark runtimes that are printed after running the code are:
test elapsed replications
2 roll_mean_benchmark() 4.463 100
1 roll_mean_k_efficient(x = x, k = 3) 0.039 100
Without using zoo. In tidyverse fashion, you can also do it using purrr::map
tmp.df %>%
  mutate(y = map(seq_along(x),
                 ~ ifelse(length(tail(na.omit(tmp.df$x[1:(.x - 1)]), 3)) == 3,
                          mean(tail(na.omit(tmp.df$x[1:(.x - 1)]), 3)),
                          NA)))
x y
1 NA NA
2 1 NA
3 2 NA
4 NA NA
5 3 NA
6 4 2
7 5 3
8 NA 4
9 NA 4
10 NA 4
11 6 4
12 7 5
13 NA 6
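A hedged variant of the same idea (mirroring the f_purrr helper used in the question's simulation code): swapping map for map_dbl makes y a plain numeric column instead of a list column.
library(dplyr)
library(purrr)
tmp.df %>%
  mutate(y = map_dbl(seq_along(x),
                     ~ ifelse(length(tail(na.omit(tmp.df$x[1:(.x - 1)]), 3)) == 3,
                              mean(tail(na.omit(tmp.df$x[1:(.x - 1)]), 3)),
                              NA)))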

Trouble applying function to data frame

Toy example:
> myfn = function(a,x){sum(a*x)}
> myfn(a=2, x=c(1,2,3))
[1] 12
Good so far. Now:
> df = data.frame(a=c(4,5))
> df$ans = myfn(a=df$a, x=c(1,2,3))
Warning message:
In a * x : longer object length is not a multiple of shorter object length
> df
a ans
1 4 26
2 5 26
What I want to happen is that for the first row, it is as if I called myfn(a=4, x=c(1,2,3)), giving an answer of 24, and for the second row, it is as if I called myfn(a=5, x=c(1,2,3)), giving an answer of 30. How do I do this? Thank you.
EDIT: slightly more complex version. Now suppose that the function is
myfn = function(a,b, x){sum((a+b)*x)}
and that I have the data frame
df = data.frame(a=c(4,5), b=c(6,7), c=c(9,9))
I want to create df$ans such that, for the first row, it is as if I called myfn(a=4, b=6, x=c(1,2,3)) and for the second row it is as if I called myfn(a=5, b=7, x=c(1,2,3)); that is, use df$a for a, df$b for b, and ignore df$c.
Something like this would work:
myfn <- function(a, x){
  return(sum(a * x))
}
df <- data.frame(a = c(4, 5))
df$ans <- apply(df, 1, myfn, x = c(1, 2, 3))
df
a ans
1 4 24
2 5 30
** Edited Based On User Edit **
df = data.frame(a=c(4,5), b=c(6,7), c=c(9,9))
df$ans <- apply(df[, c("a", "b")], 1, function(y) sum((y['a']+y['b'])*c(1,2,3)))
a b c ans
1 4 6 9 60
2 5 7 9 72
There are several ways this can be done, each with its own charms. If you don't want to modify the function I would just do
mapply(myfn, df$a, df$b, MoreArgs = list(x = 1:3))
Alternatively, you can bake the iteration right into the function, e.g.,
myfn <- function(a, b, x){
  sapply(a + b, function(ab) {
    sum(ab * x)
  })
}
myfn(df$a, df$b, 1:3)
That's probably the way I would do it.
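Another hedged option (an aside, not from the original answer): Vectorize() builds the mapply wrapper for you, so the scalar function from the question's edit can stay untouched:
myfn <- function(a, b, x) sum((a + b) * x)   # scalar version from the question's edit
df <- data.frame(a = c(4, 5), b = c(6, 7), c = c(9, 9))
vf <- Vectorize(myfn, vectorize.args = c("a", "b"))
df$ans <- vf(a = df$a, b = df$b, x = 1:3)
df
#   a b c ans
# 1 4 6 9  60
# 2 5 7 9  72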

Arithmetic mean on a multidimensional array on R and MATLAB: drastic difference of performances

I am working with multidimensional arrays in both R and MATLAB; these arrays have five dimensions (14.5M elements in total). I have to collapse one dimension by taking the arithmetic mean over it, and I discovered a striking difference in performance between the two programs.
MATLAB:
>> a = rand([144 73 10 6 23]);
>> tic; b = mean(a,3); toc
Elapsed time is 0.014454 seconds.
R:
> a = array(data = runif(144*73*6*23*10), dim = c(144,73,10,6,23))
> start <- Sys.time (); b = apply(a, c(1,2,4,5), mean); Sys.time () - start
Time difference of 1.229083 mins
I know that the apply function is slow because it is something like a general-purpose function, but I don't know how to deal with this problem, because this difference in performance is a real limitation for me. I searched for a generalization of the colMeans/rowMeans functions but didn't succeed.
EDIT
I'll show a little sample array:
> dim(a)
[1] 2 4 3
> dput(a)
structure(c(7, 8, 5, 8, 10, 11, 9, 9, 6, 12, 9, 10, 12, 10, 14,
12, 7, 9, 8, 10, 10, 9, 8, 6), .Dim = c(2L, 4L, 3L))
a_mean = apply(a, c(2,3), mean)
> a_mean
[,1] [,2] [,3]
[1,] 7.5 9.0 8.0
[2,] 6.5 9.5 9.0
[3,] 10.5 11.0 9.5
[4,] 9.0 13.0 7.0
EDIT (2):
I discovered that applying the sum function and then dividing by the size of the removed dimension is definitely faster:
> start <- Sys.time (); aaout = apply(a, c(1,2,4,5), sum); Sys.time () - start
Time difference of 5.528063 secs
In R, apply is not the right tool for the task. If you had a matrix and needed the row or column means, you would use the much much faster, vectorized rowMeans and colMeans. You can still use these for a multi-dimensional array but you need to be a little creative:
Assuming your array has n dimensions, and you want to compute means along dimension i:
use aperm to move the dimension i to the last position n
use rowMeans with dims = n - 1
Similarly, you could:
use aperm to move the dimension i to the first position
use colMeans with dims = 1
a <- array(data = runif(144*73*6*23*10), dim = c(144,73,10,6,23))
means.along <- function(a, i) {
  n <- length(dim(a))
  b <- aperm(a, c(seq_len(n)[-i], i))
  rowMeans(b, dims = n - 1)
}
system.time(z1 <- apply(a, c(1,2,4,5), mean))
# user system elapsed
# 25.132 0.109 25.239
system.time(z2 <- means.along(a, 3))
# user system elapsed
# 0.283 0.007 0.289
identical(z1, z2)
# [1] TRUE
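A hedged sketch of the colMeans variant listed above (same idea, just permuting the target dimension to the front instead of the back):
means.along.front <- function(a, i) {
  n <- length(dim(a))
  b <- aperm(a, c(i, seq_len(n)[-i]))   # move dimension i to the front
  colMeans(b, dims = 1)                 # average over that first dimension
}
z3 <- means.along.front(a, 3)
all.equal(z2, z3)
# expected: TRUE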
mean is particularly slow because of S3 method dispatch. This is faster:
set.seed(42)
a = array(data = runif(144*73*6*23*10), dim = c(144,73,10,6,23))
system.time({b = apply(a, c(1,2,4,5), mean.default)})
# user system elapsed
#16.80 0.03 16.94
If you don't need to handle NAs you can use the internal function:
system.time({b1 = apply(a, c(1,2,4,5), function(x) .Internal(mean(x)))})
# user system elapsed
# 6.80 0.04 6.86
For comparison:
system.time({b2 = apply(a, c(1,2,4,5), function(x) sum(x)/length(x))})
# user system elapsed
# 9.05 0.01 9.08
system.time({b3 = apply(a, c(1,2,4,5), sum)
b3 = b3/dim(a)[[3]]})
# user system elapsed
# 7.44 0.03 7.47
(Note that all timings are only approximate. Proper benchmarking would require running this repeatedly, e.g., using one of the benchmarking packages. But I'm not patient enough for that right now.)
It might be possible to speed this up with an Rcpp implementation.
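For what it's worth, the repeated timing alluded to above might look like the following sketch (assumes the microbenchmark package is available; times is kept tiny because each evaluation takes several seconds):
library(microbenchmark)
microbenchmark(
  dispatch = apply(a, c(1,2,4,5), mean),
  default  = apply(a, c(1,2,4,5), mean.default),
  internal = apply(a, c(1,2,4,5), function(x) .Internal(mean(x))),
  times = 3L
)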

Implementation of skyline query or efficient frontier

I know there must be an easy answer to this but somehow I can't seem to find it...
I have a data frame with 2 numeric columns.
I would like to remove from it the rows which have the property that there exists at least one other row in the data frame with both column values bigger than the ones in this row.
So if I have
Col1 Col2
1 2 3
2 4 7
3 5 6
I would like to remove the first row, because the second one fulfills the property and keep only rows 2 and 3.
Thanks a lot!
That problem is called a "skyline query" by database administrators (they may have other algorithms) and an "efficient frontier" by economists.
Plotting the data can make it clear what we are looking for.
n <- 40
d <- data.frame(
x = rnorm(n),
y = rnorm(n)
)
# We want the "extreme" points in the following plot
par(mar=c(1,1,1,1))
plot(d, axes=FALSE, xlab="", ylab="")
for(i in 1:n) {
  polygon( c(-10, d$x[i], d$x[i], -10), c(-10, -10, d$y[i], d$y[i]),
           col = rgb(.9, .9, .9, .2))
}
The algorithm is as follows: sort the points along the first coordinate,
keep each observation unless it is worse than the last retained one.
d <- d[ order(d$x, decreasing=TRUE), ]
result <- d[1,]
for(i in seq_len(nrow(d))[-1] ) {
  if( d$y[i] > result$y[nrow(result)] ) {
    result <- rbind(result, d[i,]) # inefficient
  }
}
points(result, cex=3, pch=15)
Edit (2015-03-02): For a more efficient solution, please see Patrick Roocks' rPref, a package for "Database Preferences and Skyline Computation", (also linked to in his answer below). To show that it finds the same solution as my code here, I've appended an example using it to my original answer here.
Riffing off of Vincent Zoonekynd's enlightening response, here's an algorithm that's fully vectorized, and likely more efficient:
set.seed(100)
d <- data.frame(x = rnorm(100), y = rnorm(100))
D <- d[order(d$x, d$y, decreasing=TRUE), ]    # sort by x, then y, both descending
res <- D[which(!duplicated(cummax(D$y))), ]   # keep rows where the running max of y increases
# x y
# 64 2.5819589 0.7946803
# 20 2.3102968 1.6151907
# 95 -0.5302965 1.8952759
# 80 -2.0744048 2.1686003
# And then, if you would prefer the rows to be in
# their original order, just do:
d[sort(as.numeric(rownames(res))), ]
# x y
# 20 2.3102968 1.6151907
# 64 2.5819589 0.7946803
# 80 -2.0744048 2.1686003
# 95 -0.5302965 1.8952759
Or, using the rPref package:
library(rPref)
psel(d, high(x) | high(y))
# x y
# 20 2.3102968 1.6151907
# 64 2.5819589 0.7946803
# 80 -2.0744048 2.1686003
# 95 -0.5302965 1.8952759
Here is an sqldf solution where DF is the data frame of data:
library(sqldf)
sqldf("select * from DF a
where not exists (
select * from DF b
where b.Col1 >= a.Col1 and b.Col2 > a.Col2
or b.Col1 > a.Col1 and b.Col2 >= a.Col2
)"
)
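A hedged base-R rendering of the same domination test that the SQL expresses (keep a row unless some other row is at least as good in one column and strictly better in the other), using the toy data from the question:
DF <- data.frame(Col1 = c(2, 4, 5), Col2 = c(3, 7, 6))
keep <- vapply(seq_len(nrow(DF)), function(i) {
  !any((DF$Col1 >= DF$Col1[i] & DF$Col2 >  DF$Col2[i]) |
       (DF$Col1 >  DF$Col1[i] & DF$Col2 >= DF$Col2[i]))
}, logical(1))
DF[keep, ]
#   Col1 Col2
# 2    4    7
# 3    5    6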
This question is pretty old, but meanwhile there is a new solution. I hope it is OK to do some self-promotion here: I developed a package, rPref, which does efficient Skyline computation using C++ algorithms. With the rPref package installed, the query from the question can be done via (assuming that df is the name of the data set):
library(rPref)
psel(df, high(Col1) | high(Col2))
This removes only those tuples, where some other tuple is better in both dimensions.
If one requires the other tuple to be strictly better in just one dimension (and better or equal in the other dimension), use high(Col1) * high(Col2) instead.
In one line:
d <- matrix(c(2, 3, 4, 7, 5, 6), nrow=3, byrow=TRUE)
d[!apply(d,1,max)<max(apply(d,1,min)),]
[,1] [,2]
[1,] 4 7
[2,] 5 6
Edit: In light of the clarification you gave under jbaums' answer, here's how to check the two columns separately.
d <- matrix(c(2, 3, 3, 7, 5, 6, 4, 8), nrow=4, byrow=TRUE)
d[apply(d,1,min)>min(apply(d,1,max)) ,]
[,1] [,2]
[1,] 5 6
[2,] 4 8
d <- matrix(c(2, 3, 4, 7, 5, 6), nrow=3, byrow=TRUE)
# for each row, check whether any other row is strictly larger in both columns
d2 <- sapply(d[, 1], function(x) x < d[, 1]) &
      sapply(d[, 2], function(x) x < d[, 2])
d2 <- apply(d2, 2, any)   # TRUE for dominated rows
result <- d[!d2, ]        # keep the non-dominated rows

Using lapply with changing arguments

R textbooks continue to promote the use of lapply instead of loops. This is easy even for functions with arguments like
lapply(somelist, f, a=1, b=2)
but what if the arguments change depending on the list element?
Assume my somelist consists of:
somelist$USA
somelist$Europe
somelist$Switzerland
plus there is anotherlist with the same regions, and I want to use lapply with these changing arguments. This could be useful if f were a ratio calculation, for example.
lapply(somelist, f, a= somelist$USA, b=anotherlist$USA)
Is there a way, other than a loop, to run through these regions efficiently?
EDIT:
my problem seems to be that I tried to use a previously written function without indexes...
ratio <- function(a, b){
  z <- (b - a)/a
  return(z)
}
which led to
lapply(data,ratio,names(data))
which does not work. Maybe others can also learn from this mistake.
Apply over list names rather than list elements. E.g.:
somelist <- list('USA'=rnorm(10), 'Europe'=rnorm(10), 'Switzerland'=rnorm(10))
anotherlist <- list('USA'=5, 'Europe'=10, 'Switzerland'=4)
lapply(names(somelist), function(i) somelist[[i]] / anotherlist[[i]])
EDIT:
You also ask if there is a way "except for a loop" to do this "efficiently". You should note that apply will not necessarily be more efficient. Efficiency will probably be determined by how quick your inner function is. If you want to operate on each element of a list, you will need a loop, whether it is hidden in an apply() call or not. Check this question: Is R's apply family more than syntactic sugar?
The example I gave above can be re-written as a for loop, and you can make some naive benchmarks:
fun1 <- function(){
  lapply(names(somelist), function(i) somelist[[i]] / anotherlist[[i]])
}
fun2 <- function(){
  for (i in names(somelist)){
    somelist[[i]] <- somelist[[i]] / anotherlist[[i]]
  }
  return(somelist)
}
library(rbenchmark)
benchmark(fun1(), fun2(),
          columns = c("test", "replications",
                      "elapsed", "relative"),
          order = "relative", replications = 10000)
The output of the benchmark on my machine was this:
test replications elapsed relative
1 fun1() 10000 0.145 1.000000
2 fun2() 10000 0.148 1.020690
Although this is not a real-world application and the functions are not realistic tasks, you can see that the difference in computation time is quite negligible.
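As a hedged aside (not part of the original answer): for this particular task of dividing each element by its counterpart, Map() iterates over the two lists in parallel and avoids indexing by name altogether:
fun3 <- function() Map(`/`, somelist, anotherlist)
# Map() keeps the list names, whereas fun1() returns an unnamed list, so compare with:
identical(unname(fun3()), fun1())   # expected TRUE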
You just need to work out what to lapply() over. Here the names() of the lists suffice, after we rewrite f() to take different arguments:
somelist <- list(USA = 1:10, Europe = 21:30,
Switzerland = seq(1, 5, length = 10))
anotherlist <- list(USA = list(a = 1, b = 2), Europe = list(a = 2, b = 4),
Switzerland = list(a = 0.5, b = 1))
f <- function(x, some, other) {
(some[[x]] + other[[x]][["a"]]) * other[[x]][["b"]]
}
lapply(names(somelist), f, some = somelist, other = anotherlist)
Giving:
R> lapply(names(somelist), f, some = somelist, other = anotherlist)
[[1]]
[1] 4 6 8 10 12 14 16 18 20 22
[[2]]
[1] 92 96 100 104 108 112 116 120 124 128
[[3]]
[1] 1.500000 1.944444 2.388889 2.833333 3.277778 3.722222 4.166667 4.611111
[9] 5.055556 5.500000
