R *apply: vector as input, matrix as output

I'd like to apply, to each element of a vector, a function that outputs a vector.
After applying the function to each element, I'll have many vectors, which I'd like to rbind into a matrix.
The code should be equivalent to the following:
my_function <- function(x) x:(x+10)
my_vec <- 1:10
x <- vector()
for (i in seq_along(my_vec)) {
  x <- rbind(x, my_function(my_vec[i]))
}
Of course, my_function and my_vec are just examples.

try:
tmp <- lapply(my_vec, my_function)
do.call(rbind, tmp)
Or, like Heroka suggested, use sapply. I prefer lapply and then binding the output the way I like (rbind/cbind) instead of potentially having to transpose.
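For completeness, here is roughly what the sapply/vapply route looks like, using the example my_function and my_vec from the question (a minimal sketch, not benchmarked): sapply simplifies to a matrix with one column per element of my_vec, so you usually need t() to get the row-per-element layout that rbind gives, while vapply additionally checks that every call returns a vector of the declared shape.
# one column per element of my_vec, hence the transpose
t(sapply(my_vec, my_function))
# same idea, but vapply verifies each result is an integer vector of length 11
t(vapply(my_vec, my_function, integer(11)))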

Here is an alternative:
matrix( unlist(lapply(my_vec,my_function)), length(my_vec), byrow=TRUE )
Speed is almost the same:
library(microbenchmark)
my_function <- function(x) sin(x:(x+10))
for (n in 1:4)
{
  my_vec <- 1:10^n
  print(
    microbenchmark(mra68  = matrix(unlist(lapply(my_vec, my_function)), length(my_vec), byrow = TRUE),
                   stas.g = do.call(rbind, lapply(my_vec, my_function)),
                   times  = 1000)
  )
  print("identical?")
  print(identical(matrix(unlist(lapply(my_vec, my_function)), length(my_vec), byrow = TRUE),
                  do.call(rbind, lapply(my_vec, my_function))))
}
Unit: microseconds
expr min lq mean median uq max neval
mra68 38.496 40.307 68.00539 41.213 110.052 282.148 1000
stas.g 41.213 42.572 72.86443 43.930 115.939 445.186 1000
[1] "identical?"
[1] TRUE
Unit: microseconds
expr min lq mean median uq max neval
mra68 793.002 810.212 850.4857 818.3640 865.2375 7231.669 1000
stas.g 876.786 894.901 946.8165 906.2235 966.9100 7051.873 1000
[1] "identical?"
[1] TRUE
Unit: milliseconds
expr min lq mean median uq max neval
mra68 2.605448 3.028442 5.269003 4.020940 7.807512 14.51225 1000
stas.g 2.959604 3.390071 5.823661 4.500546 8.800462 92.54977 1000
[1] "identical?"
[1] TRUE
Unit: milliseconds
expr min lq mean median uq max neval
mra68 27.29810 30.99387 51.44223 41.20167 79.46185 559.0059 1000
stas.g 33.63622 37.22420 60.10224 49.07643 92.94333 395.3315 1000
[1] "identical?"
[1] TRUE

Related

Reason to use dplyr::union_all over other row binding methods on a dataframe

Is there a reason to use dplyr::union_all on a data frame to bind two data frames together versus other ways of binding rows? When I test it there don't seem to be any differences, but I don't know if there are edge cases where this wouldn't be a good idea, or reasons to choose it over another method. The documentation shows an example on a data frame.
library(dplyr)
library(data.table)
library(microbenchmark)
data <- data.frame(a=rep('a', 10^6), b= rep(1, 10^6))
dt <- as.data.table(data)
microbenchmark(df1 <- dplyr::union_all(data, data))
microbenchmark(df2 <- rbind(data, data))
microbenchmark(df3 <- dplyr::bind_rows(data, data))
microbenchmark(df4 <- data.table::rbindlist(list(data, data)))
microbenchmark(df5 <- rbind(dt, dt))
all((df1 == df2) && (df2 == df3) && (df3 == as.data.frame(df4)) && (df4 == df5))
# [1] TRUE
They all seem roughly the same speed except for rbind on a data.frame, which seems particularly slow. There doesn't seem to be a speed-related reason to choose union_all over the other methods (a note on behavioural edge cases follows after the timings below).
microbenchmark(df1 <- dplyr::union_all(data, data))
Unit: milliseconds
expr min lq mean median uq max neval
df1 <- dplyr::union_all(data, data) 8.501586 10.19703 13.77899 11.62611 18.16747 25.73479 100
microbenchmark(df2 <- rbind(data, data))
Unit: milliseconds
expr min lq mean median uq max neval
df2 <- rbind(data, data) 48.4319 50.98856 63.70163 52.65343 61.17889 180.8519 100
microbenchmark(df3 <- dplyr::bind_rows(data, data))
Unit: milliseconds
expr min lq mean median uq max neval
df3 <- dplyr::bind_rows(data, data) 9.121883 10.36146 13.38456 11.13614 12.04666 127.5304 100
microbenchmark(df4 <- data.table::rbindlist(list(data, data)))
Unit: milliseconds
expr min lq mean median uq max neval
df4 <- data.table::rbindlist(list(data, data)) 11.2442 11.84408 13.50861 12.37741 13.17539 22.89314 100
microbenchmark(df5 <- rbind(dt, dt))
Unit: milliseconds
expr min lq mean median uq max neval
df5 <- rbind(dt, dt) 11.02781 12.04254 15.0049 12.69404 13.36917 135.747 100
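As an aside on the edge-case part of the question: where these methods do differ in behaviour (rather than speed) is when the two data frames don't have the same columns. This is a minimal sketch, not from the original post; the hypothetical x and y below just illustrate the point, and union_all()'s behaviour on mismatched columns should be checked against your dplyr version.
library(dplyr)
library(data.table)
x <- data.frame(a = 1:2, b = c("p", "q"))
y <- data.frame(a = 3:4)             # column b is missing
try(rbind(x, y))                     # base rbind errors: names must match
bind_rows(x, y)                      # keeps both columns, fills b with NA
rbindlist(list(x, y), fill = TRUE)   # errors without fill = TRUE; pads with NA here
try(union_all(x, y))                 # set operation; typically requires compatible columns (verify on your dplyr version)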

How to substitute multiple words with spaces in R?

Here is an example:
drugs<-c("Lapatinib-Ditosylate", "Caffeic-Acid-Phenethyl-Ester", "Pazopanib-HCl", "D-Pantethine")
ads<-"These are recently new released drugs Lapatinib Ditosylate, Pazopanib HCl, and Caffeic Acid Phenethyl Ester"
What I wanted is to correct the drug names in ads with the names in drugs such that a desired output would be:
"These are recently new released drugs Lapatinib-Ditosylate, Pazopanib-HCl, and Caffeic-Acid-Phenethyl-Ester"
If you create a vector of words to be replaced, you can loop over that vector and the vector of replacement words (drugs), replacing all instances of one element in each iteration of the loop.
to_repl <- gsub('-', ' ', drugs)
for (i in seq_along(drugs))
  ads <- gsub(to_repl[i], drugs[i], ads)
ads
# "These are recently new released drugs Lapatinib-Ditosylate, Pazopanib-HCl, and Caffeic-Acid-Phenethyl-Ester"
Contrary to popular belief, for-loops in R are no slower than lapply
f_lapply <- function(ads){
  to_repl <- gsub('-', ' ', drugs)
  invisible(lapply(seq_along(to_repl), function(i) {
    ads <<- gsub(to_repl[i], drugs[i], ads)
  }))
  ads
}
f_loop <- function(ads){
  to_repl <- gsub('-', ' ', drugs)
  for(i in seq_along(to_repl))
    ads <- gsub(to_repl[i], drugs[i], ads)
  ads
}
f_loop(ads) == f_lapply(ads)
# [1] TRUE
microbenchmark::microbenchmark(f_loop(ads), f_lapply(ads), times = 1e4)
# Unit: microseconds
# expr min lq mean median uq max neval
# f_loop(ads) 59.488 95.180 118.0793 107.487 120.205 7426.866 10000
# f_lapply(ads) 69.333 114.462 147.9732 130.872 152.205 27283.670 10000
Or, using more general examples:
loop_over <- 1:1e5
microbenchmark::microbenchmark(
  for_loop = {for(i in loop_over) 1},
  lapply = {lapply(loop_over, function(x) 1)}
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# for_loop 4.66174 5.865842 7.725975 6.354867 7.449429 35.26807 100
# lapply 94.09223 114.378778 125.149863 124.665128 134.217326 170.16889 100
loop_over <- 1:1e5
microbenchmark::microbenchmark(
  for_loop = {y <- numeric(1e5); for(i in seq_along(loop_over)) y[i] <- loop_over[i]},
  lapply = {lapply(loop_over, function(x) x)}
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# for_loop 11.00184 11.49455 15.24015 12.10461 15.26050 134.139 100
# lapply 71.41820 81.14660 93.64569 87.05162 98.59295 357.219 100
This can also be done using lapply(), which will be faster than a for loop. Modifying @IceCreamToucan's answer, it can be done with lapply as follows:
to_repl <- gsub('-', ' ', drugs)
invisible(lapply(seq_along(to_repl), function(i) {
  ads <<- gsub(to_repl[i], drugs[i], ads)
}))
# [1] "These are recently new released drugs Lapatinib-Ditosylate, Pazopanib-HCl, and Caffeic-Acid-Phenethyl-Ester"
Microbenchmark
Unit: microseconds
expr min lq mean median uq max neval
lapply 80.514 87.4935 110.1103 93.304 96.1995 1902.861 100
for.loop 2285.164 2318.5665 2463.1554 2338.216 2377.4120 7510.763 100

dictionary and list comprehension in R

Is there a generic way to make the following R code run faster? In Python, for example, a dict comprehension (see the equivalent below) would be a nice, faster alternative.
R:
l1 <- 1:3
l2 <- c("a", "b", "c")
foo <- function(x) {return(5*x)}
bar <- list()
for (i in 1:length(l1)) bar[l2[i]] <- foo(l1[i])
Python
l1 = range(1, 4)
l2 = ["a", "b", "c"]
def foo(x):
    return 5*x
{b: foo(a) for a, b in zip(l1, l2)}
We're talking about speed, so let's do some benchmarking:
library(microbenchmark)
microbenchmark(op = {for (i in 1:length(l1)) bar[l2[i]] <- foo(l1[i])},
               lapply = setNames(lapply(l1, foo), l2),
               vectorised = setNames(as.list(foo(l1)), l2))
Unit: microseconds
expr min lq mean median uq max neval
op 7.982 9.122 10.81052 9.693 10.548 36.206 100
lapply 5.987 6.557 7.73159 6.842 7.270 55.877 100
vectorised 4.561 5.132 6.72526 5.417 5.987 80.964 100
But these small values don't mean much, so I pumped up the vector length to 10,000 where you'll really see a difference:
l <- 10000
l1 <- seq_len(l)
l2 <- sample(letters, l, replace = TRUE)
microbenchmark(op = {bar <- list(); for (i in 1:length(l1)) bar[l2[i]] <- foo(l1[i])},
               lapply = setNames(lapply(l1, foo), l2),
               vectorised = setNames(as.list(foo(l1)), l2),
               times = 100)
Unit: microseconds
expr min lq mean median uq max neval
op 30122.865 33325.788 34914.8339 34769.8825 36721.428 41515.405 100
lapply 13526.397 14446.078 15217.5309 14829.2320 15351.933 19241.767 100
vectorised 199.559 259.997 349.0544 296.9155 368.614 3189.523 100
But tacking onto what everyone else said, it doesn't have to be a list. If you remove the list requirement:
microbenchmark(setNames(foo(l1), l2))
Unit: microseconds
expr min lq mean median uq max neval
setNames(foo(l1), l2) 22.522 23.8045 58.06888 25.0875 48.322 1427.417 100
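As a side note (separate from the benchmarks above), the closest structural analogue to the Python zip-based comprehension is arguably Map(), which also returns a list; a minimal sketch using the small l1 and l2 from the question:
l1 <- 1:3
l2 <- c("a", "b", "c")
foo <- function(x) 5 * x
bar <- setNames(Map(foo, l1), l2)  # named list, like {b: foo(a) for a, b in zip(l1, l2)}
bar[["a"]]
# [1] 5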

Efficient use of vectors

I am attempting to copy one vector to another using the following syntax:
data <- NULL
for (i in 1:nrow(line)) {
  data = append(data, line[i*4])
}
From what I have seen, using append in this way results in a lot of copying of data, which makes R very slow. What is the syntax for copying every 4th element of one array to another, given that the object you are copying from is of a known size?
Here are three methods with their benchmarks. You can see that preallocating the vector, as in the method2 function, is quite a bit faster; the lapply method is in the middle, and your function is the slowest.
Of course, these are 1-D vectors as opposed to n-D arrays, but I would expect the benchmarks to be similar or even more pronounced.
method1 <- function(line) {
  data <- NULL
  for (i in 1:length(line)) {
    data <- append(data, line[i])
  }
  data  # return the copied vector
}
method2 <- function(line) {
  data <- vector(mode = "numeric", length = length(line))
  for (i in 1:length(line)) {
    data[i] <- line[i]
  }
  data  # return the copied vector
}
library(microbenchmark)
r <- rnorm(1000)
microbenchmark(method2(r), unit="ms")
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> method2(r) 2.18085 2.279676 2.428731 2.371593 2.500495 5.24888 100
microbenchmark(lapply(r, function(x) { data<-append(data, x) }), unit="ms")
#> Unit: milliseconds
#> expr min lq
#> lapply(r, function(x) { data <- append(data, x) }) 3.014673 3.091299
#> mean median uq max neval
#> 3.287216 3.150052 3.260199 6.036501 100
microbenchmark(method1(r), unit="ms")
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> method1(r) 3.938684 3.978002 5.71831 4.020001 4.280521 98.58584 100
Didn't realize OP wanted only every fourth. Why not just use a data frame or data.table?
d <- data.frame(matrix(rnorm(1000), ncol=1))
microbenchmark(d2 <- d[seq(1,nrow(d), 4),])
#> Unit: microseconds
#> expr min lq mean median uq
#> d2 <- d[seq(1, nrow(d), 4), ] 64.846 65.9915 73.08007 67.225 73.8225
#> max neval
#> 220.438 100
library(data.table)
dt <- data.table(d)
microbenchmark(d2 <- dt[seq(1,nrow(d), 4),])
#> Unit: microseconds
#> expr min lq mean median uq
#> d2 <- dt[seq(1, nrow(d), 4), ] 298.163 315.2025 324.8793 320.554 330.416
#> max neval
#> 655.124 100
If you're trying to extract every fourth element from a vector, you could index using seq to grab the correct elements:
data <- letters[seq(4, length(letters), by=4)]
data
# [1] "d" "h" "l" "p" "t" "x"
Growing the vector one at a time as you show in your question will be slow because you will need to keep re-allocating your vector (see the second circle of The R Inferno for details). However, even pre-allocating your vector and constructing it with a for loop will be slow compared to constructing it in a single vectorized indexing operation.
To get a sense of the speed improvements, consider a comparison to the sort of method you've described, except using pre-allocation:
for.prealloc <- function(x) {
  data <- vector(mode = "numeric", length = floor(length(x)/4))
  for (i in 1:floor(length(x)/4)) {
    data[i] <- x[i*4]
  }
  data
}
josilber <- function(x) x[seq(4, length(x), by=4)]
r <- rnorm(10000)
all.equal(for.prealloc(r), josilber(r))
# [1] TRUE
library(microbenchmark)
microbenchmark(for.prealloc(r), josilber(r))
# Unit: microseconds
# expr min lq mean median uq max neval
# for.prealloc(r) 1846.014 2035.7890 2351.9681 2094.804 2244.56 5283.285 100
# josilber(r) 95.757 97.4125 125.9877 113.179 138.96 259.606 100
The approach I propose is 20x faster than using for and a pre-allocated vector (and it will be even faster than using append and a non-pre-allocated vector).

Why does which work faster on a data frame column compared to a matrix column?

I have the following data:
height = 1:10000000
length = -(1:10000000)
body_dim = data.frame(height,length)
body_dim_mat = as.matrix(body_dim)
Why does which() work faster for the data frame compared to the matrix?
> microbenchmark(body_dim[which(body_dim$height==50000),"length"])
Unit: milliseconds
expr min lq median uq max neval
body_dim[which(body_dim$height == 50000), "length"] 124.4586 125.1625 125.9281 127.9496 284.9824 100
> microbenchmark(body_dim_mat[which(body_dim_mat[,1] == 50000),2])
Unit: milliseconds
expr min lq median uq max neval
body_dim_mat[which(body_dim_mat[, 1] == 50000), 2] 251.1282 252.4457 389.7251 400.313 1004.25 100
A data.frame is a list, and a column is a simple vector that is very easy to extract from that list. A matrix is a vector with dimension attributes, so working out which values belong to a given column has to be calculated from the dimensions. This affects subsetting, which you include in your benchmarks:
library(microbenchmark)
set.seed(42)
m <- matrix(rnorm(1e5), ncol=10)
DF <- as.data.frame(m)
microbenchmark(m[,1], DF[,1], DF$V1)
#Unit: microseconds
# expr min lq median uq max neval
# m[, 1] 80.997 82.536 84.230 87.1560 1147.795 100
#DF[, 1] 15.399 16.939 20.789 22.6365 100.090 100
# DF$V1 1.849 2.772 3.389 4.3130 90.235 100
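To make the structural point concrete, here is a minimal sketch using the m and DF objects defined just above: a data.frame already stores each column as a separate vector inside a list, whereas a matrix is one long atomic vector whose columns only exist through its dim attribute.
is.list(DF)   # TRUE: a data.frame is a list of column vectors
is.list(m)    # FALSE: a matrix is a single atomic vector
dim(m)        # 10000 10: columns are derived from the dim attribute
typeof(m)     # "double", the same underlying storage as DF$V1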
However, the take-home message is not that you should always use a data.frame, because subsetting where the result is not a single column is much slower on a data.frame:
microbenchmark(m[1:10, 1:10], DF[1:10, 1:10])
# Unit: microseconds
# expr min lq median uq max neval
# m[1:10, 1:10] 1.233 1.8490 3.2345 3.697 11.087 100
# DF[1:10, 1:10] 211.267 219.7355 228.2050 252.226 1265.131 100
It seems the problem arises before which() is even called: comparing a data.frame column is simply faster than comparing a column subset of a whole matrix:
microbenchmark(body_dim$height==50000)
# Unit: milliseconds
# expr min lq median uq max neval
# body_dim$height == 50000 138.2619 148.5132 170.1895 170.8909 249.4592 100
microbenchmark(body_dim_mat[,1]==50000)
# Unit: milliseconds
# expr min lq median uq max neval
# body_dim_mat[, 1] == 50000 299.599 308.6066 310.9036 354.4641 432.7833 100
By the way, this case is where data.table can shine:
require(data.table)
dt <- data.table(body_dim, key="height")
microbenchmark(dt[J(50000)]$length, unit="ms")
# Unit: milliseconds
# expr min lq median uq max neval
# dt[J(50000)]$length 0.96637 0.97908 0.989772 1.025257 2.588402 100
