I am new to parallel computing in R.
I have gone through various links on Stack Overflow on the topic and wrote some initial code:
library(doParallel)
library(foreach)
detectCores()
## [1] 4
# Create cluster with desired number of cores
cl <- makeCluster(3)
# Register cluster
registerDoParallel(cl)
# Find out how many cores are being used
getDoParWorkers()
My objective is to do a repetitive calculation on each row; my function looks something like this:
func2 <- function(i)
{
  msgbody <- tolower(as.character(purchase$msg_body[i]))
  purchase$category[i] <- category_fun(i, msgbody)
}
For this purpose I have written a foreach loop
foreach(i = 1:nrow(purchase)) %dopar% func2(i)
The issue is that func2 is supposed to write back to the data frame, but it is not writing anything back; all the entries are the same as before.
I appreciate your help.
I believe this would work better in the scenario you're indicating. You can write a function that takes the row index and its msg_body string:
func2 <- function(i, msg_body)
{
  return(category_fun(i, tolower(as.character(msg_body))))
}
result <- foreach(i = 1:nrow(purchase), .combine = c,
                  .export = "category_fun") %dopar% {
  func2(i, purchase$msg_body[i])
}
purchase$category <- result
I do think you'll be better off using apply() to solve this though.
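For example, a minimal sketch with sapply() (assuming your purchase data frame and your category_fun() are available, as in your question):
# sapply() runs the categorisation over every row index and returns a vector
purchase$category <- sapply(seq_len(nrow(purchase)), function(i) {
  category_fun(i, tolower(as.character(purchase$msg_body[i])))
})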
I have a large database and I wrote code that executes the same calculations on it in a rolling manner by nesting them in a for loop. My problem is that the code takes a long time to run. From what I have read, this is probably because R is single-threaded by default. As far as I know, the foreach package would make it possible to speed up execution considerably, but I am unsure how to implement it. Currently my code looks like this: in every iteration I subset a chunk of the large database and do various things with these subsets. At the end of each iteration, I collect the output in a time series. Is it possible to apply foreach in this situation?
for (k in seq(1, 5284, 21)) {
fdata <- data[k:(k+251),]
tdata <- data[(k+252):(k+377),]
}
Thanks!
This is certainly doable using foreach. Depending on your OS, you would first have to load a suitable backend (e.g. SNOW on a Windows machine) and then set up a cluster.
Example:
library(foreach)
library(doSNOW)
# set number of cores/CPUs to be used
(n_cores <- parallel::detectCores() - 1)
# some example data
dat <- matrix(1:1e3, ncol = 10)
# the set of start indices you iterate over
ks <- 1:99
# run stuff in parallel
cl <- makeCluster(n_cores)
registerDoSNOW(cl)
result <- foreach(k = ks) %dopar% {
fdata <- dat[k:(k+1), ]
# do computationally expensive stuff with `fdata`
# ... and return something
cumsum(fdata[1,] + fdata[2,])
}
stopCluster(cl)
By default, result will be a list of the per-iteration results. There are, however, ways to combine them into an array or similar; look at the details on the .combine argument in ?foreach.
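For instance, if every iteration returns a vector of the same length, .combine = rbind stacks the per-iteration results into a matrix instead of a list. A small sketch reusing dat and n_cores from above (the cluster has to be running again, since it was stopped above):
cl <- makeCluster(n_cores)
registerDoSNOW(cl)
# one row per iteration, one column per entry of the cumsum
result_mat <- foreach(k = 1:99, .combine = rbind) %dopar% {
  fdata <- dat[k:(k+1), ]
  cumsum(fdata[1, ] + fdata[2, ])
}
stopCluster(cl)
dim(result_mat)  # 99 x 10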
I am currently working on a program to evaluate the out-of-sample performance of several forecasting models on simulated data. For those who are familiar with finance, it works exactly like backtesting a trading strategy, except that I would evaluate forecasts and not transactions.
Some of the objects I currently manipulate using for loops for this type of task are 7-dimensional arrays (the dimensions stand for Monte Carlo replications, data generating processes, forecast horizons, 3 dimensions for model parameter selection, and one dimension for all the periods covered in the out-of-sample analysis). Obviously, it is painfully slow, so parallel computing has become a must for me.
My problem is: how do I keep track of more than 2 dimensions in R? Let me just show you what I mean using for loops and only 3 dimensions:
x <- array(dim = c(2, 2, 2))
for (i in 1:2){
  for (j in 1:2){
    for (k in 1:2){
      x[i,j,k] <- i+j+k
    }
  }
}
If I use something like 'foreach', I am very annoyed by the fact that, to my knowledge, available combining functionalities will return lists, matrices or vectors -- but not arbitrarily large multidimensional arrays. For instance:
library(doParallel)
library(foreach)
# Get the number of cores to use
no_cores <- max(1, detectCores()-1)
# Make cluster object using no_cores
cl <- makeCluster(no_cores)
# Initialize cluster for parallel computing
registerDoParallel(cl)
x <- foreach(i = 1:2, .combine = rbind) %:%
  foreach(j = 1:2, .combine = cbind) %:%
  foreach(k = 1:2, .combine = c) %dopar% {
    i + j + k
  }
Here, I basically combine results into vectors, then matrices and, finally, I pile up matrices by rows. Another option would be to use lists, or pile matrices through columns, but you can imagine the mess when you have 7 dimensions and millions of iterations to track.
I suppose I could also write my own 'combine' function and get the kind of output I want, but I suspect that I am not the first person to encounter this problem. Either there is a way to do exactly what I want, or someone here can point out a way to think differently about storing my results. It wouldn't be surprising if I were taking an absurdly inefficient path toward solving this problem -- I am an economist, not a data scientist, after all!
Any help would be greatly appreciated. Thanks in advance.
There is one available solution that I finally stumbled upon tonight. I can create an appropriate combination function along the dimension of my choice using the 'abind' function of the 'abind' package:
library(abind)
library(doParallel)
# Get the number of cores to use
no_cores <- max(1, detectCores() - 1)
# Make cluster object using no_cores
cl <- makeCluster(no_cores)
# Initialize cluster for parallel computing
registerDoParallel(cl)
# combining function that binds results along a third dimension
mbind <- function(...) abind(..., along = 3)
x <- foreach(i = 1:2, .combine = mbind) %:%
  foreach(j = 1:2, .combine = cbind) %:%
  foreach(k = 1:2, .combine = c) %dopar% {
    i + j + k
  }
I would still like to see if someone has other means of doing what I want to do, however. There might be many ways to do it and I am new to R, but this solution is at least a workable one.
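One other direction that seems workable (only a rough sketch) is to iterate over a single flat grid of indices and rebuild the array afterwards with array(). This works because expand.grid() varies its first column fastest, which matches R's column-major filling order, and it extends to any number of dimensions:
# flatten the nested loops into one grid of indices, iterate once in parallel,
# then reshape the flat result vector back into an array
idx <- expand.grid(i = 1:2, j = 1:2, k = 1:2)
res <- foreach(r = seq_len(nrow(idx)), .combine = c) %dopar% {
  idx$i[r] + idx$j[r] + idx$k[r]
}
x <- array(res, dim = c(2, 2, 2))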
Here is what I would do, and what I already use in one of my packages, bigstatsr:
Take only one dimension and cut it into no_cores blocks. That dimension should have enough elements (e.g. 20 for 4 cores). For each block, construct the corresponding part of the array and store it in a temporary file. Then, use the content of these files to fill the whole array. By doing so, you only fill preallocated objects, which should be faster and easier.
Example:
x.all <- array(dim=c(20,2,2))
no_cores <- 3
tmpfile <- tempfile()
range.parts <- bigstatsr:::CutBySize(nrow(x.all), nb = no_cores)
library(foreach)
cl <- parallel::makeCluster(no_cores)
doParallel::registerDoParallel(cl)
# build each block of the array in parallel and save it to a temporary file
foreach(ic = 1:no_cores) %dopar% {
  ind <- bigstatsr:::seq2(range.parts[ic, ])
  x <- array(dim = c(length(ind), 2, 2))
  for (i in seq_along(ind)) {
    for (j in 1:2) {
      for (k in 1:2) {
        x[i, j, k] <- ind[i] + j + k
      }
    }
  }
  saveRDS(x, file = paste0(tmpfile, "_", ic, ".rds"))
}
parallel::stopCluster(cl)
# read the blocks back and fill the preallocated array
for (ic in 1:no_cores) {
  ind <- bigstatsr:::seq2(range.parts[ic, ])
  x.all[ind, , ] <- readRDS(paste0(tmpfile, "_", ic, ".rds"))
}
print(x.all)
Instead of writing files, you could also directly return the no_cores parts of the array from foreach and combine them with the right abind() call.
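For example, a quick sketch of that variant, reusing no_cores, range.parts and the bigstatsr internals from above (x.all2 is just my name for this second version of the result):
library(abind)
library(foreach)
cl <- parallel::makeCluster(no_cores)
doParallel::registerDoParallel(cl)
# each task builds one block; the blocks are bound back together along dimension 1
x.all2 <- foreach(ic = 1:no_cores, .combine = function(...) abind(..., along = 1)) %dopar% {
  ind <- bigstatsr:::seq2(range.parts[ic, ])
  x <- array(dim = c(length(ind), 2, 2))
  for (i in seq_along(ind)) {
    for (j in 1:2) {
      for (k in 1:2) {
        x[i, j, k] <- ind[i] + j + k
      }
    }
  }
  x
}
parallel::stopCluster(cl)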
As a disclaimer, this is my first post, and I would welcome any constructive criticism on how to post appropriately in the future.
The input to my function is an array. When I input the array using the %do% option of foreach, I get the desired output array. I have searched through the various foreach threads, and have not found a similar problem. However, when I run the %dopar% option, I only get the input array back in return. Here is the basic outline of my code; I can post the full script if that would be best, but it is quite long. Ultimately, I will be scaling the array to a size where calculations will have to be done in parallel for the sake of time.
#libraries for constructing a parallel for loop
library(foreach)
library(doParallel)
cores <- detectCores()-1
#establish cluster
cl <- makeCluster(cores)
#to register the doparallel backend
registerDoParallel(cl,cores=7)
#input array
Q <- matrix(0,20,20)
Q[1:5,1:5] <- 2
#function in question....
dyn_rec_proto20_parallel <- function(vegmat){
  distance <- matrix(0, length(vegmat[1,]), length(vegmat[1,]))
  recruitment_prob_mat <- matrix(0, length(vegmat[1,]), length(vegmat[1,]))
  foreach(i = 1:length(vegmat[1,]), .export = c('IDW_scoring','roll_for_rec','Q','ageing20_parallel')) %dopar% {
    for(j in 1:length(vegmat[1,])){
      if(vegmat[i,j] == 0){
        # do a bunch of updates to vegmat
      }
    }
    return(vegmat)
  }
}
Iterpc begins each loop from the same point. This has created an amusing, though frustrating issue, illustrated below:
####Load Packages:
library("doParallel")
library("foreach")
library("iterpc")
####Define variables:
n <- 2
precision <- 0.1
support <- matrix(seq(0 + precision, 1 - precision, by = precision), ncol = 1)
nodes <- 2 # preparing for multicore
cl <- makeCluster(nodes)
####Prep iterations
I <- iterpc(table(support), n, ordered = TRUE, replace = FALSE)
steps <- (factorial(length(support)) / factorial(length(support) - n)) / n
####Run loop to get the combined values:
registerDoParallel(cl)
support_n <- foreach(m = 1:n, .packages = "iterpc", .combine = 'cbind') %dopar% {
  t(getnext(I, steps))
} #????
This returns support_n, a matrix whose columns are the generated permutations.
I was hoping that this would run the sets in parallel, with one half of the permutations assigned to each node. However, it only does the first half of the permutations... twice. ([,1] is equal to [,37].) How do I get it to return all of the permutations and combine them in parallel?
Assume there will be an arbitrarily large number of permutations so memory management and speed are nontrivial.
Previous research: All possible permutations for large n
Just for anyone who comes here by searching for "foreach iterpc R", as I did:
The approach marked as the accepted answer does not really differ much from
result <- foreach(a=1:10) %dopar% {
a
}
because a=getnext(I,d=(2*steps)) will simply return the first 2*steps combinations, and the foreach package will then iterate in parallel over these combinations.
When you have a very large number of combinations returned by iterpc (which is what it is built for), you cannot in fact use such an approach.
In that case, the only thing I believe one could do is to write an iterator wrapper over the iterpc object.
# register parallel backend
library(doParallel)
registerDoParallel(cores = 3)
#create iterpc object
library(iterpc)
combinations <- iterpc(4,2)
library(iterators)
iterpc_iterator <- function(iterpc_object, iteration_length) {
  # a custom nextElem() is needed because, once the iteration is finished,
  # a subsequent getnext() call on an iterpc object returns NULL
  # instead of signalling 'StopIteration'
  nextEl <- function() {
    if (iteration_length > 0)
      iteration_length <<- iteration_length - 1
    else
      stop('StopIteration')
    getnext(iterpc_object)
  }
  obj <- list(nextElem = nextEl)
  class(obj) <- c('irep', 'abstractiter', 'iter')
  obj
}
it <- iterpc_iterator(combinations, getlength(combinations))
library(foreach)
result <- foreach(i=it) %dopar% {
i
}
You can simply use iterpc::iter_wrapper.
The relevant line from your example:
support_n <-foreach(a = iter_wrapper(I), .combine='cbind') %dopar% a
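A small self-contained sketch of how that could look, reusing the setup from the question (only the manual getnext() handling is dropped; the rest is unchanged):
library(iterpc)
library(foreach)
library(doParallel)
precision <- 0.1
support <- seq(0 + precision, 1 - precision, by = precision)
I <- iterpc(table(support), 2, ordered = TRUE, replace = FALSE)
cl <- makeCluster(2)
registerDoParallel(cl)
# iter_wrapper() hands the permutations to foreach one at a time,
# so they are generated lazily rather than all at once
support_n <- foreach(a = iter_wrapper(I), .combine = 'cbind') %dopar% a
stopCluster(cl)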
After further investigation I believe the following does in fact execute the command in parallel.
registerDoParallel(cl)
system.time(
  support_n <- foreach(a = getnext(I, d = (2*steps)), .combine = 'cbind') %dopar% a
)
support_n <- t(support_n)
Thank you for your assistance.
Is there a problem with accessing/writing to a global variable when using the doSNOW package on multiple cores?
In the program below, each call MyCalculations(ii) writes to the ii-th column of the matrix globalVariable...
Do you think the result will be correct? Will there be hidden catches?
Thanks a lot!
p.s. I have to write out to the global variable because this is a simplified example; in fact I have lots of outputs that need to be transported from within the parallel loops, so probably the only way is to write out to global variables...
library(doSNOW)
MaxSearchSpace=44*5
globalVariable=matrix(0, 10000, MaxSearchSpace)
cl<-makeCluster(7)
registerDoSNOW(cl)
foreach (ii = 2:MaxSearchSpace, .combine=cbind, .verbose=F) %dopar%
{
  MyCalculations(ii)
}
stopCluster(cl)
p.s. What I am asking is: within the doSNOW framework, is there any danger in accessing/writing global variables? Thanks.
Since this question is a couple months old, I hope you've found an answer by now. However, in case you're still interested in feedback, here's something to consider:
When using foreach with a parallel backend, you won't be able to assign to variables in R's global environment in the way you're attempting (as you probably noticed). With a sequential backend, assignment will work, but not with a parallel one such as doSNOW.
Instead, save all the results of your calculations for each iteration in a list and return it from the foreach call into an object, so that you can extract the appropriate results after all calculations have been completed.
My suggestion starts similarly to your example:
library(doSNOW)
MaxSearchSpace <- 44*5
cl <- makeCluster(parallel::detectCores())
# do not create the globalVariable object
registerDoSNOW(cl)
# Save the results of the `foreach` iterations as
# lists of lists in an object (`theRes`)
theRes <- foreach (ii = 2:MaxSearchSpace, .verbose=F) %dopar%
{
# do some calculations
theNorms <- rnorm(10000)
thePois <- rpois(10000, 2)
# store the results in a list
list(theNorms, thePois)
}
After all iterations have been completed, extract the results from theRes and store them as objects (e.g., globalVariable1, globalVariable2, etc.):
globalVariable1 <- do.call(cbind, lapply(theRes, "[[", 1))
globalVariable2 <- do.call(cbind, lapply(theRes, "[[", 2))
With this in mind, if you are performing calculations with each iteration that are dependent on the results of calculations from previous iterations, then this type of parallel computing is not the approach to take.