I have a problem where the idea is to find unique elements of an extremely large matrix, apply some function (in this example, it is a sine function, but it can be an arbitrary function, even a trained neural net) to each unique element, and then replace it back in the original matrix.
I have the following R code snippet:
nrows <- 28000
ncols <- 3000
x <- matrix(round(runif(nrows*ncols, 1, 5), 5), nrow=nrows, ncol=ncols)
u <- unique(as.vector(x))
uindex <- seq(from=1, to=length(u), by=1)
ut <- sin(u)
for (hh in uindex) {
x[x == u[hh]] <- ut[hh]
}
In the above code snippet, the portion
for (hh in uindex) {
x[x == u[hh]] <- ut[hh]
}
takes forever to finish for a matrix of order 1e4 x 1e3. How can I optimize the for loop?
Please note that this is just a minimal working example for Stack Overflow. Hence, please refrain from telling me that I can do x = sin(x); my point is the for loop.
To expand on my comment regarding avoiding the for loop entirely...
As the OP knows, loops in R are generally slow, and if there's an alternative, it's likely to be faster. One answer that avoids the for loop has already been provided. Here is another, with benchmarking.
To start, turn the OP's code into a function.
withForLoop <- function(nrows=28000, ncols=3000) {
x <- matrix(round(runif(nrows*ncols, 1, 5), 5), nrow=nrows, ncol=ncols)
u <- unique(as.vector(x))
uindex <- seq(from=1, to=length(u), by=1)
ut <- sin(u)
for (hh in uindex) {
x[x == u[hh]] <- ut[hh]
}
}
and benchmark it
library(microbenchmark)
microbenchmark(withForLoop, times=100)
Unit: nanoseconds
expr min lq mean median uq max neval
withForLoop 47 49 83.46 49 55 3226 100
So we are trying to beat a median time of 49 nanoseconds. (You could, of course, pick any other summary statistic as your target metric.)
Now rewrite the code using the tidyverse. OP's code starts with a matrix, converts it to a vector and then manipulates the vector. It's not clear whether the matrix needs to be recovered. Assume it does, so keep the Row and Col columns needed to recover it, but, for consistency with OP's code, don't do the recovery (a sketch of the recovery is shown after the benchmark below).
library(tidyverse)
withTidyverse <- function(nrows=28000, ncols=3000) {
x <- tibble() %>%
expand(
Row=1:nrows,
Col=1:ncols
) %>%
mutate(
Random=round(runif(nrow(.), 1, 5), 5),
Sin=sin(Random)
)
}
microbenchmark(withTidyverse, times=100)
Unit: nanoseconds
expr min lq mean median uq max neval
withTidyverse1 41 42 52.21 42.5 43 964 100
So that's reduced the median execution time to 42.5 nanoseconds on my machine. That's a saving of just over 13%.
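For completeness, since the Row and Col columns are retained, the matrix could be rebuilt along these lines (a sketch; recoverMatrix is a hypothetical helper, not part of the benchmarked code):
# Sketch: rebuild a matrix from the long-format tibble produced by withTidyverse
recoverMatrix <- function(tbl, nrows, ncols) {
  m <- matrix(NA_real_, nrow = nrows, ncol = ncols)
  m[cbind(tbl$Row, tbl$Col)] <- tbl$Sin   # matrix indexing by (row, col) pairs
  m
}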
Because sin is a relatively quick function I've not bothered to search for unique values and replace each unique value in a batch. I've just taken a blunderbuss approach and recalculated each value in the vector as it arises. Here's a way of taking the more sophisticated, "replace unique values in batches" approach:
withTidyverse2 <- function(nrows=28000, ncols=3000) {
x <- tibble() %>%
expand(
Row=1:nrows,
Col=1:ncols
) %>%
mutate(
Random=round(runif(nrow(.), 1, 5), 5)
)
y <- x %>%
distinct(Random) %>%
mutate(Sin=sin(Random))
x <- x %>%
left_join(y, by="Random")
}
microbenchmark(withTidyverse2, times=100)
Unit: nanoseconds
expr min lq mean median uq max neval
withTidyverse2 44 45 82.31 45.5 51 2543 100
So, in this specific case, the overhead of extracting the unique values and updating in batches is not worthwhile, although it is still quicker than the for loop. OP will have to investigate their actual use case.
There are, of course, lots of other ways to address the actual problem. Which one is optimal is impossible to say based on the information provided.
Here is a data.table-based solution:
library(data.table)
nrows <- 28000
ncols <- 3000
x <- round(runif(nrows*ncols, 1, 5), 5)
u <- unique(x)
dt.x <- data.table(x, idx = seq_along(x))   # remember the original position of each value
dt.u <- data.table(u)
dt.u[, ut := sin(u)]
dt.res <- merge(dt.x, dt.u, by.x = "x", by.y = "u", all.x = TRUE)
setorder(dt.res, idx)                       # merge() sorts by the join key, so restore the original order
ut <- dt.res[, ut]
output <- matrix(ut, nrow=nrows, ncol=ncols)
The main idea here is to work with vectors and tables; you can convert back to a matrix at the very end.
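An alternative sketch, assuming the dt.x and dt.u tables above: a data.table update join fills ut in place and keeps dt.x's row order, so no re-sorting is needed before reshaping.
dt.x[dt.u, ut := i.ut, on = c(x = "u")]   # look up sin(u) for every x without building a merged copy
output2 <- matrix(dt.x$ut, nrow = nrows, ncol = ncols)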
You could use match:
nrows <- 28000
ncols <- 3000
x <- matrix(round(runif(nrows*ncols, 1, 5), 5), nrow=nrows, ncol=ncols)
u <- unique(as.vector(x))
ut <- sin(u)
i <- match(x, u)
ut[i]  # the transformed values, in x's original (column-major) order
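Here match(x, u) gives, for each element of x, the position of that value in u. To get a matrix back and check the result (a small sketch using the objects above):
res <- x
res[] <- ut[i]          # assignment with [] keeps the dim attribute, so res stays a matrix
all.equal(res, sin(x))  # TRUE: every value has been replaced by its sine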
Usually, I find myself using a few summary functions or making my own computations to get some additional initial information from the data. For example, I wanted to see the count and percentage per variable given a limit of distinct values:
library(magrittr)  # provides the %>% and %T>% pipes used below
table_transposed <- function(vector){
merge(as.data.frame(table(vector, dnn="values")),
as.data.frame(round(prop.table(table(vector, dnn="values")),2)),
by="values",
all.x=TRUE) %>%
data.table::transpose(keep.names = "values",
make.names = names(.)[1]) %T>%
{.[,c("values")] <- c("Count", "Percentage")}
}
table_transposed_filter <- function(dataframe, max_number_categories) {
(lapply(dataframe, function(x) NROW(unique(x))) <= max_number_categories) %>%
as.vector() %>%
{dataframe[,.]} %>%
lapply(table_transposed)
}
So, you give the dataframe and the threshold of distinct values per variable.
table_transposed_filter(mtcars, 10)
However, it's SUPER slow (maybe because it uses merge() instead of left_join() from dplyr). Now, I'm trying to figure out an efficient, fast, and simple way to combine psych::describe(), Hmisc::describe(), others, and my own computations, for numeric and categorical variables (one descriptive function for each). Something like this (for numeric variables):
| Variable | dtype | mean | mode | variance | skew | percentile 25 | ...
If I create this table mainly with sapply(), for example, is that better (more efficient, faster, simpler code) than actually learning to create an R package and developing it there?
PS: I thought about putting this question on Meta Stack Exchange or Cross Validated, but neither seems to fit it.
Here's a somewhat faster version. It's about 2x faster on small data (like mtcars), but the difference narrows a little on larger data.
This makes sense, as the most expensive operation you do is table(): your version does it twice, my version does it once. I didn't profile the code, but my guess is that table() is the bottleneck by more than an order of magnitude on any sizeable data, so it's a waste to try to optimize any other parts of the code.
t_transp = function(x, digits = 2) {
  tab = table(x)                  # counts, computed only once
  prop_tab = prop.table(tab)      # proportions derived from the same table
  df = data.frame(values = c("Count", "Percentage"))
  df = cbind(df, rbind(tab, round(prop_tab, digits = digits)))
  row.names(df) = NULL
  df
}
t_transp_filter = function(data, n_max, ...) {
  # keep only the columns with at most n_max distinct values, then summarise each
  lapply(Filter(function(x) NROW(unique(x)) <= n_max, data), t_transp, ...)
}
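For example, usage might look like this (output omitted):
t_transp(mtcars$cyl)                 # counts and percentages of the cyl values
t_transp_filter(mtcars, n_max = 4)   # the same summary for every column with at most 4 distinct values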
Benchmarking:
microbenchmark::microbenchmark(
gregor = t_transp_filter(mtcars, n_max = 4),
OP = table_transposed_filter(mtcars, 4),
times = 20
)
# Unit: milliseconds
# expr min lq mean median uq max neval cld
# gregor 1.6483 1.7093 2.253425 1.74765 1.84680 7.5394 20 a
# OP 5.6988 5.7627 6.316295 6.08545 6.57965 8.1048 20 b
set.seed(47)
df = as.data.frame(matrix(
c(sample(letters[1:5], size = 1e5 * 20, replace = T))
, ncol = 20))
microbenchmark::microbenchmark(
gregor = t_transp_filter(df, n_max = 5),
OP = table_transposed_filter(df, 5),
times = 20
)
# Unit: milliseconds
# expr min lq mean median uq max neval cld
# gregor 59.5466 59.95545 63.6825 61.14075 67.2167 75.4270 20 a
# OP 110.3265 117.35585 123.8782 118.91005 133.7795 149.0651 20 b
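If one wanted to confirm the guess that table() dominates the runtime, base R's sampling profiler could be used along these lines (a sketch, reusing the df built above):
prof_file <- tempfile()
Rprof(prof_file)
invisible(t_transp_filter(df, n_max = 5))
Rprof(NULL)
head(summaryRprof(prof_file)$by.total)   # functions ranked by total time spent in them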
I've been seeing a lot of comments among data scientists online about how for loops are not advisable. However, I recently found myself in a situation where using one was helpful. I would like to know if there is a better alternative for the following process (and why the alternative would be better):
I needed to run a series of repeated-measures ANOVA and approached the problem similarly to the reproducible example you see below.
[I am aware that there are other issues regarding running multiple ANOVA models and that there are other options for these sorts of analyses, but for now I'd simply like to hear about the use of for loop]
As an example, here are four repeated-measures ANOVA models: four dependent variables, each measured on three occasions:
set.seed(1976)
code <- seq(1:60)
time <- rep(c(0,1,2), each = 20)
DV1 <- c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 14, 2))
DV2 <- c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 10, 2))
DV3 <- c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 8, 2))
DV4 <- c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 10, 2))
dat <- data.frame(code, time, DV1, DV2, DV3, DV4)
outANOVA <- list()
for (i in names(dat)) {
y <- dat[[i]]
outANOVA[i] <- summary(aov(y ~ factor(time) + Error(factor(code)),
data = dat))
}
outANOVA
You could write it this way; it's more compact:
outANOVA <-
lapply(dat,function(y)
summary(aov(y ~ factor(time) + Error(factor(code)),data = dat)))
for loops are not necessarily slower than apply functions but they're less easy to read for many people. It is to some extent a matter of taste.
The real crime is to use a for loop when a vectorized function is available. These vectorized functions usually contain for loops written in C that are much faster (or call functions that do).
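As a trivial illustration (a sketch, unrelated to the ANOVA example), compare an explicit loop with its vectorized equivalent:
v <- runif(1e6)
s1 <- numeric(length(v))                      # preallocated result
for (i in seq_along(v)) s1[i] <- sqrt(v[i])   # R-level loop
s2 <- sqrt(v)                                 # vectorized: the element-wise loop runs in compiled code
identical(s1, s2)                             # TRUE, but the vectorized call is far faster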
Notice that in the lapply rewrite above we could also avoid creating a global variable y, and that we didn't have to initialize the list outANOVA.
Another point, directly from this relevant post: For loops in R and computational speed (answer by Glen_b):
"For loops in R are not always slower than other approaches, like apply - but there's one huge bugbear: never grow an array inside a loop. Instead, make your arrays full-size before you loop and then fill them up."
In your case you're growing outANOVA; for big loops this could become problematic.
Here is some microbenchmark of different methods on a simple example:
n <- 100000
microbenchmark::microbenchmark(
preallocated_vec = {x <- vector(length=n); for(i in 1:n) {x[i] <- i^2}},
preallocated_vec2 = {x <- numeric(n); for(i in 1:n) {x[i] <- i^2}},
incremented_vec = {x <- vector(); for(i in 1:n) {x[i] <- i^2}},
preallocated_list = {x <- vector(mode = "list", length = n); for(i in 1:n) {x[i] <- i^2}},
incremented_list = {x <- list(); for(i in 1:n) {x[i] <- i^2}},
sapply = sapply(1:n, function(i) i^2),
lapply = lapply(1:n, function(i) i^2),
times=20)
# Unit: milliseconds
# expr min lq mean median uq max neval
# preallocated_vec 9.784237 10.100880 10.686141 10.367717 10.755598 12.839584 20
# preallocated_vec2 9.953877 10.315044 10.979043 10.514266 11.792158 12.789175 20
# incremented_vec 74.511906 79.318298 81.277439 81.640597 83.344403 85.982590 20
# preallocated_list 10.680134 11.197962 12.382082 11.416352 13.528562 18.620355 20
# incremented_list 196.759920 201.418857 212.716685 203.485940 205.441188 393.522857 20
# sapply 6.557739 6.729191 7.244242 7.063643 7.186044 9.098730 20
# lapply 6.019838 6.298750 6.835941 6.571775 6.844650 8.812273 20
For your use case, I would say the point is moot. Applying vectorization (and, in the process, obfuscating the code) has no benefits here.
Here's an example below, where I ran microbenchmark::microbenchmark on your solution as presented in the OP, Moody's solution as in his post, and a third solution of mine with even more vectorization (a triple nested lapply).
Microbenchmark
set.seed(1976)
code = seq(1:60)
time = rep(c(0, 1, 2), each = 20)
DV1 = c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 14, 2))
DV2 = c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 10, 2))
DV3 = c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 8, 2))
DV4 = c(rnorm(20, 10, 2), rnorm(20, 10, 2), rnorm(20, 10, 2))
dat = data.frame(code, time, DV1, DV2, DV3, DV4)
library(microbenchmark)
microbenchmark(
`Peter Miksza` = {
outANOVA1 = list()
for (i in names(dat)) {
y = dat[[i]]
outANOVA1[i] = summary(aov(y ~ factor(time) + Error(factor(code)),
data = dat))
}},
Moody_Mudskipper = {
outANOVA2 =
lapply(dat,function(y)
summary(aov(y ~ factor(time) + Error(factor(code)),data = dat)))
},
`catastrophic_failure` = {
outANOVA3 =
lapply(lapply(lapply(dat, function(y) y ~ factor(time) + Error(factor(code))), aov, data = dat), summary)
},
times = 1000L)
Results
#Unit: milliseconds
# expr min lq mean median uq max neval cld
# Peter Miksza 26.25641 27.63011 31.58110 29.60774 32.81374 136.84448 1000 b
# Moody_Mudskipper 22.93190 23.86683 27.20893 25.61352 28.61729 135.58811 1000 a
# catastrophic_failure 22.56987 23.57035 26.59955 25.15516 28.25666 68.87781 1000 a
Fiddling with JIT compilation, i.e. running compiler::setCompilerOptions(optimize = 0) and compiler::enableJIT(0), the following results ensue as well:
#Unit: milliseconds
# expr min lq mean median uq max neval cld
# Peter Miksza 23.10125 24.27295 28.46968 26.52559 30.45729 143.0731 1000 a
# Moody_Mudskipper 22.82366 24.35622 28.33038 26.72574 30.27768 146.4284 1000 a
# catastrophic_failure 22.59413 24.04295 27.99147 26.23098 29.88066 120.6036 1000 a
Conclusion
As alluded to in Dirk's comment, there isn't much of a difference in performance, but readability is greatly impaired by the vectorization.
On growing lists
Experimenting with Moody's solutions, it seems growing lists can be a bad idea if the resulting list is moderately long. Also, using byte-compiled functions directly can provide a small improvement in performance. Both are expected behaviors. Pre-allocation might prove sufficient for your application though.
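For instance, a preallocated version of the original loop might look like this (a sketch; it keeps the for loop but avoids growing outANOVA and restricts the loop to the DV columns):
dvs <- grep("^DV", names(dat), value = TRUE)            # the dependent variables only
outANOVA <- vector(mode = "list", length = length(dvs)) # preallocate the result list
names(outANOVA) <- dvs
for (i in dvs) {
  outANOVA[[i]] <- summary(aov(dat[[i]] ~ factor(time) + Error(factor(code)),
                               data = dat))
}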
I'm trying to divide each row of a dataframe by a number stored in a second mapping dataframe.
for(g in rownames(data_table)){
print(g)
data_table[g,] <- data_table[g,]/mapping[g,2]
}
However, this is incredibly slow, each row takes almost 1-2 seconds to run. I know iteration is usually not the best way to do things in R, but I don't know how else to do it. Is there any way I can speed up the runtime?
Try this :
sweep(data_table, 1, mapping[[2]], "/")
In terms of speed, here is a benchmark of the possibilities using the iris dataset (its numeric columns, as test) and including your version:
test <- iris[1:4]  # numeric columns of iris; column 2 plays the role of the mapping value here
microbenchmark::microbenchmark(
A = {
for(g in rownames(test)){
# print(g)
test[g,] <- test[g,]/test[g,2]
}
},
B = sweep(test, 1, test[[2]], "/"),
C = test / test[[2]],
times = 100
)
#Unit: microseconds
#expr min lq mean median uq max neval
#A 82374.693 83722.023 101688.1254 84582.052 147280.057 157507.892 100
#B 453.652 484.393 514.4094 513.850 539.480 623.688 100
#C 404.506 423.794 456.0063 446.101 470.675 729.205 100
You can vectorize this operation if the two variables have the same number of rows:
dt <- data.frame(a = rnorm(100), b = rnorm(100))
mapping <- data.frame(x = rnorm(100), y = rnorm(100))
dt / mapping[,2]
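Element-wise division of a data frame by a vector recycles the vector down the rows of each column, so with the toy data above this is equivalent to the sweep() call from the other answer:
all.equal(dt / mapping[, 2], sweep(dt, 1, mapping[, 2], "/"))   # TRUE: both divide dt[i, j] by mapping[i, 2]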
I am trying to merge two data frames. The original data frame is much larger than the data frame it is being merged with; however, there is only one possible match for each row. The match is found by matching the type (a factor) and the level. The level is an integer that falls into one of several buckets (the example only has two).
My current method works but uses sapply and is slow for large numbers of rows. How can I vectorise this operation?
set.seed(123)
sample <- 100
data <- data.frame(type= sample(LETTERS[1:4], sample, replace=TRUE), level =round(runif(sample, 1,sample)), value = round(runif(sample, 200,1000)))
data2 <- data.frame(type= rep(LETTERS[1:4],2), lower= c(rep(1,4), rep(51,4)), upper = c(rep(50,4), rep(sample,4)), cost1 = runif(8, 0,1), cost2 = runif(8, 0,1),cost3 = runif(8, 0,1))
data2[,4:6] <- data2[,4:6]/rowSums(data2[,4:6]) #turns the variables in to percentages, not necessary on real data
x <- unlist(sapply(1:sample, function(n) which(ll <-data$type[n]==data2$type & data$level[n] >= data2$lower & data$level[n] <= data2$upper)))
data3 <- cbind(data, percentage= data2[x, -c(1:3)])
If I understand the matching problem you've set up, the following code seems to speed things up a bit by dividing data by type and then using cut to find the proper bucket. I think it will accommodate larger numbers of pairs of lower and upper values but haven't checked carefully.
library(plyr)
percents <- function(value, cost) {
  cost <- cost[cost[,1] == value[1,1],]       # keep the rows of cost (data2) for this type
  cost <- cost[order(cost[,2]),]              # order by the lower bound
  # cut against the interleaved lower/upper bounds; ceiling(ints/2) maps the
  # interval index back to the corresponding row of cost
  ints <- cut(value[,2], breaks=c(t(cost[,2:3])), labels=FALSE, include.lowest=TRUE)
  cbind(value, percentage=cost[ceiling(ints/2),-(1:3)])
}
data4 <- rbind.fill(mapply(percents, value=split(data, data$type), cost=list(data2), SIMPLIFY=FALSE))
Setting
sample <- 10000
gives the following execution time comparisons
microbenchmark({x <- unlist(sapply(1:sample, function(n) which(ll <-data$type[n]==data2$type & data$level[n] >= data2$lower & data$level[n] <= data2$upper)));
data3 <- cbind(data, percentage= data2[x, -c(1:3)])} ,
data4 <- rbind.fill(mapply(percents, value=split(data, data$type), cost=list(data2), SIMPLIFY=FALSE) ),
times=10)
Unit: milliseconds
expr
{ x <- unlist(sapply(1:sample, function(n) which(ll <- data$type[n] == data2$type & data$level[n] >= data2$lower & data$level[n] <= data2$upper))) data3 <- cbind(data, percentage = data2[x, -c(1:3)]) }
data4 <- rbind.fill(mapply(percents, value = split(data, data$type), cost = list(data2), SIMPLIFY = FALSE))
min lq mean median uq max neval
1198.18269 1214.10560 1225.85117 1226.79838 1234.2671 1258.63122 10
20.81022 20.93255 21.50001 21.24237 22.1305 22.65291 10
where the first numbers are for the code shown in your question and the second times are for the code in my post. For this case, the new code seems almost 60 times faster.
Edit
To use rbind_all and avoid mapply, use the following:
microbenchmark({x <- unlist(sapply(1:sample, function(n) which(ll <-data$type[n]==data2$type & data$level[n] >= data2$lower & data$level[n] <= data2$upper)));
data3 <- cbind(data, percentage= data2[x, -c(1:3)])} ,
data4 <- rbind_all(lapply(split(data, data$type), percents, cost=data2 )),
times=10)
which gives slightly improved execution times
min lq mean median uq max neval
1271.57023 1289.17614 1297.68572 1301.84540 1308.31476 1313.56822 10
18.33819 18.57373 23.28578 19.53742 19.95132 58.96143 10
Edit 2
A modification to use only the data2$lower values (plus the top upper bound) for setting the intervals:
percents <- function(value, cost) {
  cost <- cost[cost[,"type"] == value[1,"type"],]
  cost <- cost[order(cost[,"lower"]),]
  # cut() has no include.highest argument; with right = FALSE it is
  # include.lowest = TRUE that makes the top break inclusive.
  # The bucketing is done on the "level" column, as in the version above.
  ints <- cut(value[,"level"], breaks = c(cost[,"lower"], max(cost[,"upper"])),
              labels = FALSE, right = FALSE, include.lowest = TRUE)
  cbind(value, percentage = cost[ints,-(1:3)])
}
to use with
data4 <- rbind_all(lapply(split(data, data$type), percents, cost=data2 ))
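As a side note, rbind_all() has since been retired from dplyr; with current versions the equivalent call would be (assuming dplyr is loaded):
library(dplyr)
data4 <- bind_rows(lapply(split(data, data$type), percents, cost = data2))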
I was hoping someone could help point me in the right direction to create a vector in R containing a defined number of randomly generated values. I am a complete newbie to R, and I have learned that the concatenate function, c(), is used for creating vectors. However, I wish to populate the vector with 50 random numbers. I do not wish to specify a range or any other conditions for the numbers.
MyVectorObject <- c(...)
Any suggestions would be greatly appreciated!
It depends on which kind of numbers you want to generate. Here are some options:
x1 <- rpois(n = 50, lambda = 10)                         # Poisson-distributed counts
x2 <- runif(n = 50, min = 1, max = 10)                   # uniform real numbers between 1 and 10
x3 <- sample(x = c(1, 3, 5), size = 50, replace = TRUE)  # draws from a fixed set of values
If we are talking about integers, you want to generate numbers in the interval [-.Machine$integer.max, .Machine$integer.max], which on my computer is, for example, [-2147483647, 2147483647].
Implementation
You can use base::sample to generate positive integers from 1 to .Machine$integer.max:
random.pos <- function(N) {
int.max <- .Machine$integer.max
return(sample(int.max, N, replace=TRUE))
}
If you also want negative numbers, you can use:
random.posneg <- function(N) {
int.max <- .Machine$integer.max
random.numbers <- sample(int.max, N, replace = TRUE)
random.signs <- sample(c(1,-1), N, replace=TRUE)
return(random.numbers * random.signs)
}
NOTE: none of the functions above ever generates 0 (zero).
The best approach (in my opinion) is to use the base::runif function.
random.runif <- function(N) {
int.max <- .Machine$integer.max
int.min <- -int.max
return(as.integer(runif(N, int.min, int.max)))
}
This will be a little slower than using base::sample, but you get uniformly distributed random numbers that can include zero.
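For example (hypothetical usage of the function above):
set.seed(42)       # for reproducibility
random.runif(5)    # five integers anywhere between -.Machine$integer.max and .Machine$integer.max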
Benchmark
library(microbenchmark)
require(compiler)
random.runif <- cmpfun(random.runif)
random.pos <- cmpfun(random.pos)
random.posneg <- cmpfun(random.posneg)
N <- 500
op <- microbenchmark(
RUNIF = random.runif(N),
POS = random.pos(N),
POSNEG = random.posneg(N),
times = 10000
)
print(op)
## library(ggplot2)
## boxplot(op)
## qplot(y=time, data=op, colour=expr) + scale_y_log10()
and results from the benchmark above
Unit: microseconds
expr min lq mean median uq max neval
RUNIF 13.423 14.251 15.197122 14.482 14.694 2425.290 10000
POS 4.174 5.043 5.613292 5.317 5.645 2436.909 10000
POSNEG 11.673 12.845 13.383194 13.285 13.800 60.304 10000