I'd like to evaluate the time needed to extract data from a raster time series using different file types (GeoTIFF, binary) or objects (RasterBrick, RasterStack). I created a function that extracts the time series at a random point of the raster object, and I then use microbenchmark to test it.
Ex.:
# read a random point from a raster stack
sample_raster <- function(stack) {
poi <- sample(ncell(stack), 1)
raster::extract(stack, poi)
}
# opening the data using different methods
data_stack <- stack(list.files(pattern = '3B.*tif'))
data_brick <- brick('gpm_multiband.tif')
bench <- microbenchmark(
sample_stack = sample_raster(data_stack),
sample_brick = sample_raster(data_brick),
times = 10
)
boxplot(bench)
# this fails because the sampled point differs between the two expressions
bench <- microbenchmark(
sample_stack = sample_raster(data_stack),
sample_brick = sample_raster(data_brick),
times = 10,
check = 'equal'
)
I included a sample of my dataset here
With this I can see that sampling from a RasterBrick is faster than from a RasterStack (the R raster manual also says so -- good). The problem is that I'm sampling a different point in each evaluated expression, so I can't check whether the results are the same. What I'd like to do is sample the same location (poi) on both objects, but have the location differ between iterations. I tried the setup option in microbenchmark, but from what I can tell, setup is evaluated before each expression is timed, not once per iteration, so generating a random poi in setup will not work.
Is it possible to pass the same argument to the functions being evaluated in microbenchmark?
Result
Solution using microbenchmark
As suggested (and explained below), I tried the bench package with the press call. But for some reason it was slower than setting the same seed at each microbenchmark iteration, as suggested by mnist, so I ended up going back to microbenchmark. This is the code I'm using:
library(microbenchmark)
library(raster)
annual_brick <- raster::brick('data/gpm_tif_annual/gpm_2016.tif')
annual_stack <- raster::stack('data/gpm_tif_annual/gpm_2016.tif')
raster_size <- ncell(annual_brick)  # number of cells available for sampling
x <- 0
y <- 0
bm <- microbenchmark(
ext = {
x <- x + 1
set.seed(x)
poi = sample(raster_size, 1)
raster::extract(annual_brick, poi)
},
slc = {
y <- y + 1
set.seed(y)
poi = sample(raster_size, 1)
raster::extract(annual_stack, poi)
},
check = 'equal'
)
Solution using bench::press
For completeness' sake, this is how I did it using bench::press. In the process, I also separated the code that selects the random cell from the point-sampling function, so I can time only the point-sampling part of the code. Here is how I'm doing it:
library(bench)
library(raster)
annual_brick <- raster::brick('data/gpm_tif_annual/gpm_2016.tif')
annual_stack <- raster::stack('data/gpm_tif_annual/gpm_2016.tif')
bm <- bench::press(
pois = sample(ncell(annual_brick), 10),
mark(
iterations = 1,
sample_brick = raster::extract(annual_brick, pois),
sample_stack = raster::extract(annual_stack, pois)
)
)
My approach would be to set the same seeds for each option in microbenchmark but change them prior to each function call. See the output and how the same seeds are eventually used for both calls:
x <- 0
y <- 0
microbenchmark::microbenchmark(
"checasdk" = {
# increase seed value by 1
x <- x + 1
print(paste("1", x))
set.seed(x)},
"check2" = {
y <- y + 1
print(paste("2", y))
set.seed(y)
}
)
If I understand correctly, the OP has two requirements:
The same data points should be sampled when timing the two expressions in order to check the results are identical.
In addition, timing of the two expressions is to be repeated for different data points sampled.
Using the same random numbers
As suggested by Roman, set.seed() can be used to set the seed values for R's random number generator. If the same parameter is used, the sequence of generated random numbers will be the same.
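For example, drawing twice with the same seed reproduces exactly the same cells (base R only):
set.seed(1L); a <- sample(100, 3)
set.seed(1L); b <- sample(100, 3)
identical(a, b)   # TRUE: the same seed reproduces the same draw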
sample_raster() can be modified to ensure that the random number generator is initialised identically for each call.
sample_raster <- function(stack) {
set.seed(1L)
poi <- sample(ncell(stack), 1)
raster::extract(stack, poi)
}
This will meet requirement 1 but not requirement 2, as the same data samples will be used for all repetitions.
Different random numbers in repetitions
The OP has asked:
Is it possible to pass the same argument to the functions being
evaluated in microbenchmark?
One possibility is to use for or lapply() to loop over a sequence of seed values as suggested in answers to a similar question.
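For illustration, a minimal sketch of the lapply() idea (assuming data_stack and data_brick from the question and that the raster package is loaded; each list element holds the timings for one seed):
library(microbenchmark)
bm_list <- lapply(1:10, function(s) {
  microbenchmark(
    sample_stack = { set.seed(s); raster::extract(data_stack, sample(ncell(data_stack), 1)) },
    sample_brick = { set.seed(s); raster::extract(data_brick, sample(ncell(data_brick), 1)) },
    times = 10,
    check = 'equivalent'  # same seed gives the same cell; 'equivalent' ignores layer-name attributes
  )
})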
In this case, I suggest to use the bench package for benchmarking. It has a press() function which runs bench::mark() across a grid of parameters.
For this, sample_raster() gets a second parameter:
sample_raster <- function(stack, seed) {
set.seed(seed)
poi <- sample(ncell(stack), 1L)
# cat(substitute(f), s, poi, "\n") # just to check, NOT to use for timings
raster::extract(stack, poi)
}
The timings are executed for different seeds as given in vector seed_vec.
library(bench)
bm <- press(
seed_vec = 1:10,
mark(
iterations = 1L,
sample_stack = sample_raster(data_stack, seed_vec),
sample_brick = sample_raster(data_brick, seed_vec)
)
)
Note that the length of seed_vec now determines the number of repetitions with different poi. The iterations parameter to mark() specifies how often the timings are repeated for the same seed / poi.
The results can be plotted using
library(ggplot2)
autoplot(bm)
or summarized using
library(dplyr)
bm %>%
group_by(expression = expression %>% as.character()) %>%
summarise(median = median(median), n_itr = n())
Related
I have a data frame with 10,000 rows and 40 columns. I am trying to apply a function to each of these rows. For each row, I am expecting to return a scalar, which is the value of the statistic I am calculating in this function. Below is what I have done so far:
library(dfadjust)
library(MASS)
# Creating example data #
nrows=10000
ncols=40
n1=20
n2=20
df=data.frame(t(replicate(nrows, rnorm(ncols, 100, 3))))
cov=data.frame(group=as.factor(rep(c(1,2),c(n1,n2))))
# Function to evaluate on each row of df #
get_est= function(x){
mod = rlm(x~cov$group)
fit = dfadjustSE(mod)
coef = fit$coefficients[2,1]
se = fit$coefficients[2,4]
stats = coef/se
return(stats)
}
# Applying above function to full data #
t1=Sys.time()
estimates=apply(df, 1, function(x) get_est(x))
t2=Sys.time()-t1
# Time taken by apply function
Time difference of 37.10623 secs
Is there a way to significantly decrease the time taken to run get_est() on the full data? The main reason I need to speed up the computation on a single df is that I have 1000 more data frames with the same dimensions, and I have to apply this function to each row of each of these data frames (ideally in parallel). To illustrate, below is the broader situation I am dealing with:
# Creating example data
set.seed(1234)
nrows = 10000
ncols = 40
n1 = 20
n2 = 20
df.list = list()
for(i in 1:1000){
df.list[[i]] = data.frame(t(replicate(nrows, rnorm(ncols, 100, 3))))
}
# Applying get_est() to each row and to each of data frame in df.list #
library(doParallel)  # also attaches foreach; a parallel backend must be registered, e.g. registerDoParallel()
pcks = c('MASS','dfadjust')
all.est = foreach(j = 1:length(df.list), .combine = cbind, .packages = pcks) %dopar% {
cov=data.frame(group=as.factor(rep(c(1,2),c(n1,n2))))
est = apply(df.list[[j]], 1, function(x) get_est(x))
return(est)
}
Even after parallelizing, it is taking hours to finish. My ultimate objective is to significantly cut down the time needed to obtain "all.est", which will contain 10000 rows and 1000 columns, where each column holds the statistic estimates for the respective data set. Any help is much appreciated! Thanks in advance!
By doing some preprocessing of the data, we can adjust the rlm call so there is less overhead:
# y is assumed here to be the numeric group indicator built from cov$group in the question
x3 <- as.matrix(cbind(1L, y))
colnames(x3) <- c('(Intercept)', 'x')
w = rep(1, nrow(x3))
get_est= function(x){
mod = rlm(x3, x, weights = w, w = w)
fit = dfadjustSE(mod)
coef = fit$coefficients[2,1]
se = fit$coefficients[2,4]
stats = coef/se
return(stats)
}
I got 12 seconds instead of 18 seconds for the initial approach, roughly a 33% improvement.
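To run it over the whole data set, the call pattern from the question stays the same (a short usage sketch, assuming df comes from the question's example):
estimates <- apply(as.matrix(df), 1, get_est)   # each row becomes the response vector passed to rlm()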
For larger speedups I would suggest looking into the rlm and dfadjustSE functions and trying to optimize them for your specific needs (removing unnecessary checks etc., since you are calling those functions millions of times). But that will probably be quite time consuming, and better performance is not guaranteed. Maybe there are other packages with similar but faster functions?
I'm working on improving the speed of a function (for a dissimilarity measure) I'm writing, which is mathematically quite similar to the Euclidean distance function. However, when I time my function against the daisy function from the cluster package, I find a significant difference in speed, with daisy performing much better. Given that (I'm assuming) a dissimilarity measure requires O(n^2 x p) time, since every pair of objects must be compared over all variables (where n is the number of objects and p is the number of variables), I find it difficult to understand how the daisy function performs so well (near constant time, from the few experiments I've done) relative to my simple and direct implementation. I present the code I used both to implement and to test below. I have tried looking through the R source code for the implementation of the daisy function, but I found it difficult to understand and found no nested for loop. Any help with understanding why this function performs so fast, and how I could modify my code to reach similar speed, would be very highly appreciated.
euclidean <- function (df){
no_obj <- nrow(df)
dist <- array(0, dim = c(no_obj, no_obj))
for (i in 1:no_obj){
for (j in 1:no_obj){
dist_v <- 0
if(i != j){
for (v in 1:ncol(df)){
dist_v <- dist_v + sqrt((df[i,v] - df[j,v])^2)
}
}
dist[i,j] <- dist_v
}
}
return(dist)
}
library(cluster)  # for daisy()
data("iris")
tic <- Sys.time()
dst <- euclidean(iris[,1:4])
time <- difftime(Sys.time(), tic, units = "secs")[[1]]
print(paste("Time taken [Euclidean]: ", time))
tic <- Sys.time()
dst <- daisy(iris[,1:4])
time <- difftime(Sys.time(), tic, units = "secs")[[1]]
print(paste("Time taken [Daisy]: ", time))
One option:
euclidean3 <- function(df) {
require(data.table)
n <- nrow(df)
i <- CJ(1:n, 1:n) # generate all row combinations
dl <- sapply(df, function(x) sqrt((x[i[[1]]] - x[i[[2]]])^2)) # loop over columns
dv <- rowSums(dl) # sum values of columns
d <- matrix(dv, n, n) # fill in matrix
d
}
dst3 <- euclidean3(iris[,1:4])
all.equal(euclidean(iris[,1:4]), dst3) # TRUE
[1] "Time taken [Euclidean3]: 0.008"
[1] "Time taken [Daisy]: 0.002"
The largest bottleneck in your code is selecting data.frame elements inside the loop (df[j, v]). Changing it to a matrix could also improve speed. I believe there could be an even more performant approach on Stack Overflow; you just need to search with the right keywords...
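For illustration, a minimal sketch of the matrix idea (not benchmarked here; note that the per-element sqrt((.)^2) in the question is just an absolute difference):
euclidean_mat <- function(df) {
  m <- as.matrix(df)                  # convert once, outside the loops
  n <- nrow(m)
  dist <- matrix(0, n, n)
  for (i in 1:n) {
    for (j in 1:n) {
      if (i != j) {
        dist[i, j] <- sum(abs(m[i, ] - m[j, ]))   # same as summing sqrt((.)^2) over variables
      }
    }
  }
  dist
}
# all.equal(euclidean(iris[, 1:4]), euclidean_mat(iris[, 1:4]))   # should be TRUE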
I am implementing a version of the Very Large Scale Relieff algorithm detailed here.
Simply put, Very Large Scale Relieff splits the set of features N into several random subsets Ns, where Ns << N. It then calculates the Relieff weights for the features in each subset Ns. For each feature, the final weight is the highest weight assigned among the different subsets in which that particular feature appears.
I have ~80000 features for ~100 subjects. I can calculate 10000 subsets of 8000 features each in a reasonable amount of time (~5 minutes running on 25 cores) with the following code (scaled down here to 100 features to make it easier to profile):
library(tidyverse)
library(magrittr)
library(CORElearn)
library(doParallel)
#create fake data for example
fake_table <- matrix(rnorm(100*100), ncol = 100) %>%
as_tibble()
outcome <- rnorm(100)
#create fake data for example
#VLSRelieff code
start_time <- Sys.time()
myCluster <- makeCluster(25, # number of cores to use
type = "FORK")
registerDoParallel(myCluster)
result <- foreach(x = seq(1,10000)) %dopar% {
#set seed for results consistency among different run
set.seed(x)
#subsample the features table by extracting a subset of columns
subset_index <- sample(seq(1,ncol(fake_table)),size = round(ncol(fake_table)*.01))
subset_matrix <- fake_table[,subset_index]
#assign the outcome as last column of the subset
subset_matrix[,ncol(subset_matrix)+1] <- outcome
#use the function attrEval from the CORElearn package to calculate the Relieff weights for the subset
rf_weights <- attrEval(formula = ncol(subset_matrix), subset_matrix, estimator = "RReliefFequalK")
#create a data frame with as many columns as features in the subset and only one row
#with the Relieff weigths
rf_df <- rf_weights %>%
unname() %>%
matrix(., ncol = length(.), byrow = TRUE) %>%
as_tibble() %>%
set_colnames(., names(rf_weights))}
end_time <- Sys.time()
end_time - start_time
However, the code above does only half of the work: the other half is, for each feature, to go through the results of the different repetitions and find the maximum value obtained. I have managed to write working code, but it is outrageously slow (I let it run for 2 hours before stopping it, although it worked when testing with fewer features; again, here it is scaled down to 100 features and should run in ~7 seconds):
start_time <- Sys.time()
myCluster <- makeCluster(25, # number of cores to use
type = "FORK")
registerDoParallel(myCluster)
#get all features name
feat_names <- colnames(fake_table)
#initialize an empty vector of zeros, named after the features
feat_wegiths <- rep(0, length(feat_names))
names(feat_wegiths) <- feat_names
#loop in parallel on the features name, for each feature name
feat_weight_foreach <- foreach(feat = feat_names, .combine = 'c') %dopar% {
#initialize the weight as 0
current_weigth <- 0
#for all element in result (i.e. repetitions of the subsampling process)
for (el in 1:length(result)){
#assign new weight accessing the table
new_weigth <- result[[el]][[1,feat]]
#skip if empty (i.e. the feature is not present in the current subset)
if(is_empty(new_weigth)){next}
#if new weight is higher than current weight assign the value to current weight
if (current_weigth < new_weigth){
current_weigth <- new_weigth}}
current_weigth
}
end_time <- Sys.time()
end_time - start_time
If I understood what you are trying to do correctly, then the answer is simpler than you think.
Correct me if I'm wrong, but you are trying to get the max value obtained from attrEval per feature?
If so, then why not just bind all the results into one data frame (or data.table) and get the max per column, like so:
allResults <- result %>% data.table::rbindlist(fill = TRUE)
apply(allResults, 2, max, na.rm=TRUE)
This follows #DS_UNI's idea, but instead of binding a list, the approach is to create a matrix from the initial loop. That is, a list of tibbles makes us do extra work; instead, we have everything we need to make a matrix:
library(tidyverse)
library(magrittr)
library(CORElearn)
library(doParallel)
nr = 50L
nc = 200L
## generate data
set.seed(123)
mat = matrix(rnorm(nr * nc), ncol = nc, dimnames = list(NULL, paste0('V', seq_len(nc))))
outcome = rnorm(nr)
## constants for sampling
n_reps = nc
nc_sample_size = round(nc * 0.01)
## pre-allocate result
res = matrix(0, nrow = n_reps, ncol = ncol(mat), dimnames = dimnames(mat))
st = Sys.time()
for (i in seq_len(n_reps)) {
set.seed(i)
## similar way to do data simulations as OP
sub_cols = sample(seq_len(nc), nc_sample_size)
sub_mat = cbind(mat[, sub_cols], outcome)
rf_weights = attrEval(formula = ncol(sub_mat), as.data.frame(sub_mat), estimator = 'RReliefFequalK')
## assign back to pre-allocated result
res[i, sub_cols] = rf_weights
}
## get max of each column
apply(res, 2L, max)
et = Sys.time()
et - st
The downside is that this loses the parallel workers. The upside is that we have fewer memory slowdowns because we allocate much of what we need up front.
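If the parallel workers are still wanted, a hedged sketch that keeps them while producing the same matrix (assuming mat, outcome, nc, nc_sample_size and n_reps from the code above): each iteration returns one full-width row with zeros for the features that were not sampled, and .combine = rbind assembles the matrix.
library(doParallel)
registerDoParallel(2)  # adjust the number of workers to your machine
res_par <- foreach(i = seq_len(n_reps), .combine = rbind,
                   .packages = 'CORElearn') %dopar% {
  set.seed(i)
  sub_cols <- sample(seq_len(nc), nc_sample_size)
  sub_mat <- cbind(mat[, sub_cols], outcome)
  w_row <- numeric(ncol(mat))
  names(w_row) <- colnames(mat)
  w_row[sub_cols] <- attrEval(formula = ncol(sub_mat), as.data.frame(sub_mat),
                              estimator = 'RReliefFequalK')
  w_row                         # one full-width row per repetition
}
apply(res_par, 2L, max)         # same column maxima as the pre-allocated version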
This is not a final answer, but I would suggest, since it is a numerical problem, writing the function in C++. This will increase the speed significantly, by an order of magnitude I would guess. In my opinion, using R for this very specific numerical task is just hitting a brick wall.
The first chapter of Rcpp for everyone says:
Chapter 1 Suitable situations to use Rcpp
R is weak in some kinds of operations. If you need operations listed below, it is time to consider using Rcpp.
Loop operations in which later iterations depend on previous iterations.
Accessing each element of a vector/matrix.
Recurrent function calls within loops.
Changing the size of vectors dynamically.
Operations that need advanced data structures and algorithms.
Wickham's Advanced R has a good chapter on that topic too.
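For a small, purely illustrative taste, here is a hypothetical col_max() helper compiled from R with Rcpp::cppFunction(); it is not the OP's algorithm, just the max-per-column step:
library(Rcpp)
cppFunction('
NumericVector col_max(NumericMatrix x) {
  int nr = x.nrow(), nc = x.ncol();
  NumericVector out(nc);
  for (int j = 0; j < nc; ++j) {
    double m = x(0, j);
    for (int i = 1; i < nr; ++i) if (x(i, j) > m) m = x(i, j);
    out[j] = m;
  }
  return out;
}')
# col_max(res)   # same values as apply(res, 2L, max), computed in compiled code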
I am trying to vectorise the following function, which calculates the Hoeffding distance between two random variables on [0,1]^2 in a discretised way.
Indeed, the hoeffd function from the Hmisc package provides a Fortran implementation (which you can find here: https://github.com/harrelfe/Hmisc/blob/master/src/hoeffd.f ), but it only gives back the maximum of the matrix I'm trying to analyse here. I'm interested in the location of the maximum, and hence I need to compute the whole matrix.
Here is my current implementation:
hoeffding_D <- function(x,y){
n = length(x)
indep <- outer(0:n,0:n)/(n)^2
bp = list(
c(0,sort(x)) + (c(sort(x),1) - c(0,sort(x)))/2,
c(0,sort(y)) + (c(sort(y),1) - c(0,sort(y)))/2
)
pre_calc <- t(outer(rep(1,n+1),x)<=bp[[1]])
# This is the problematic part :
dep <- t(sapply(bp[[2]],function(bpy){
colMeans(pre_calc*(y<=bpy))
}))
rez <- abs(dep-indep)
return(rez)
}
To use it, consider the following example:
library(copula)
# for 10 values, it's fast enough, but for 1000 it takes a lot of time..
x = pobs(rnorm(10),ties.method = "max")
y = pobs(rnorm(10),ties.method = "max")
hoeffding_D(x,y)
I already removed one sapply via the use of the outer function, but I can't get rid of the other. The issue is that the comparison x <= bpx must be done for all x and all bpx, and the same for y; altogether, that is a lot of dimensions to the problem...
Do you have an idea on how to speed it up ?
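A sketch of one possible direction (not a verified answer): turn both comparisons into 0/1 indicator matrices once, so the remaining sapply collapses into a single matrix product.
hoeffding_D_fast <- function(x, y) {
  n <- length(x)
  indep <- outer(0:n, 0:n) / n^2
  bp <- list(
    c(0, sort(x)) + (c(sort(x), 1) - c(0, sort(x))) / 2,
    c(0, sort(y)) + (c(sort(y), 1) - c(0, sort(y))) / 2
  )
  X <- outer(x, bp[[1]], "<=")      # X[i, a] = (x_i <= bp_x[a])
  Y <- outer(y, bp[[2]], "<=")      # Y[i, b] = (y_i <= bp_y[b])
  dep <- crossprod(Y, X) / n        # dep[b, a] = mean_i X[i, a] * Y[i, b]
  abs(dep - indep)
}
# all.equal(hoeffding_D(x, y), hoeffding_D_fast(x, y))   # should be TRUE for the example above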
I am currently performing a style analysis using the following method: http://www.r-bloggers.com/style-analysis/ . It is a constrained regression of one asset on a number of benchmarks over a rolling 36-month window.
My problem is that I need to perform this regression for a fairly large number of assets, and doing it one by one would take a huge amount of time. To be more precise: is there a way to tell R to regress columns 1-100 one by one on columns 101-116? Of course this also means printing 100 different plots, one for each asset. I am new to R and have been stuck for several days now.
I hope it doesn't matter that the following excerpt isn't reproducible, since the code works as originally intended.
# Style Regression over Window, constrained
#--------------------------------------------------------------------------
# setup
load.packages('quadprog')
style.weights[] = NA
style.r.squared[] = NA
# Setup constraints
# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)
# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)
# main loop
for( i in window.len:ndates ) {
window.index = (i - window.len + 1) : i
fit = lm.constraint( hist.returns[window.index, -1], hist.returns[window.index, 1], constraints )
style.weights[i,] = fit$coefficients
style.r.squared[i,] = fit$r.squared
}
# plot
aa.style.summary.plot('Style Constrained', style.weights, style.r.squared, window.len)
Thank you very much for any tips!
"Is there a way to tell R to regress columns 1-100 one by one on colums 101-116."
Yes! You can use a for loop, but there's also a whole family of 'apply' functions which are appropriate. Here's a generalized solution with a random / toy dataset using lm(), but you can sub in whatever regression function you want.
# data frame of 116 cols of 20 rows
set.seed(123)
dat <- as.data.frame(matrix(rnorm(116*20), ncol=116))
# with a for loop
models <- list() # empty list to store models
for (i in 1:100) {
models[[i]] <-
lm(formula=x~., data=data.frame(x=dat[, i], dat[, 101:116]))
}
# with lapply
models2 <-
lapply(1:100,
function(i) lm(formula=x~.,
data=data.frame(x=dat[, i], dat[, 101:116])))
# compare. they give the same results!
all.equal(models, models2)
# to access a single model, use [[#]]
models2[[1]]