Using counter from loop when using foreach() in R

I have a large simulation that I would like to run on multiple cores.
For this I am using the foreach package.
I am iterating a loop 1000 times, and within the loop I am using the loop counter as a position index, for example:
reps <- 1000
a <- numeric(reps)
for (i in 1:reps) {
  a[i] <- mean(rnorm(100))
}
If I do the same thing with foreach:
library(foreach)
library(doParallel)
library(iterators)  # for icount()

cl <- makeCluster(8)
registerDoParallel(cl)
ls <- foreach(icount(reps)) %dopar% {
  mean(rnorm(100))
}
I can no longer use the current counter i as in the original loop.
Is there a way to use it?
I am also fine with keeping a manual counter, i.e. starting from i = 0 and doing i = i + 1 each time an iteration completes.

As Roland suggests, you could write the "foreach" version as:
reps <- 1000
ls <- foreach(i = icount(reps), .combine = 'c') %dopar% {
  mean(rnorm(100))
}
The variable i isn't used, but it's available if you want it in the future.
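For example, here is a small sketch of using i to index into another per-iteration input (the sizes vector here is purely illustrative, not something from the question):
# hypothetical vector of per-iteration sample sizes, indexed by the counter i
sizes <- sample(50:150, reps, replace = TRUE)
ls <- foreach(i = icount(reps), .combine = 'c') %dopar% {
  mean(rnorm(sizes[i]))
}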
Using "for" loops inside "foreach" loops can be very useful for getting better performance, since it can decrease the number of iterations and allow the workers to do more of the work in parallel:
reps <- 1000
ls <- foreach(n = rep(reps/8, 8), .combine = 'c') %dopar% {
  a <- numeric(n)
  for (i in seq_len(n)) {
    a[i] <- mean(rnorm(100))
  }
  a
}
It's fine to make assignments to a inside the "foreach" loop because it's a local variable. When the "for" loop finishes, a is returned, and finally all of the a vectors are combined with the "c" function by the master.
Notice that this example is using the original "for" loop inside the "foreach" loop. This is more efficient because the master doesn't have to do nearly as much work to send out the tasks and collect and combine the results.
Actually, I would use "sapply" instead of the "for" loop and "idiv" (from the "iterators" package) instead of "rep":
reps <- 1000
ls <- foreach(n = idiv(reps, chunks = 8), .combine = 'c') %dopar% {
  sapply(seq_len(n), function(i) mean(rnorm(100)))
}
With "idiv" I don't have to worry about what will happen if the number of iterations is not evenly divisible by the number of workers.

Related

foreach instead of for loop

I have a large database and I wrote code which executes the same calculations on it in a rolling manner by nesting them in a for loop. My problem is that the code takes a long time to run. As I read, this is probably because R uses a single thread by default. As far as I know, the foreach package would make it possible to speed up the execution considerably, but I am unsure how to implement it. Currently my code looks like this: in every iteration I subset a chunk of the large database and do various things with these subsets. At the end of an iteration, I collect the output in a time series. Is it possible to apply foreach in this situation?
for (k in seq(1, 5284, 21)) {
  fdata <- data[k:(k+251), ]
  tdata <- data[(k+252):(k+377), ]
  # various calculations with fdata and tdata ...
}
Thanks!
This is certainly doable using foreach. Depending on your OS, you would first have to load a suitable backend (e.g. SNOW on a Windows machine) and then set up a cluster.
Example:
library(foreach)
library(doSNOW)

# set number of cores/CPUs to be used
(n_cores <- parallel::detectCores() - 1)

# some example data
dat <- matrix(1:1e3, ncol = 10)
# a set you iterate over
k <- 1:99

# run stuff in parallel
cl <- makeCluster(n_cores)
registerDoSNOW(cl)
result <- foreach(k = k) %dopar% {
  fdata <- dat[k:(k+1), ]
  # do computationally expensive stuff with `fdata`
  # ... and return something
  cumsum(fdata[1, ] + fdata[2, ])
}
stopCluster(cl)
By default result will be a list of the results. There are, however, ways to combine the results into an array or similar. Look at the details of the .combine argument in ?foreach.
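For example, a minimal sketch (assuming the cluster above is still registered) that row-binds the per-iteration vectors into a matrix instead of returning a list:
result_mat <- foreach(k = k, .combine = 'rbind') %dopar% {
  fdata <- dat[k:(k+1), ]
  cumsum(fdata[1, ] + fdata[2, ])
}
dim(result_mat)  # 99 rows (one per iteration) by 10 columns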

isplitVector and foreach indexing issue

I am a beginner in parallel computing with R. I recently started using foreach and parallel computing with the doParallel package. I have an issue when I try to index a list after splitting an iterator into chunks.
library(itertools)
library(foreach)
library(doParallel)

n = 10000
iter = 1:n
cores = detectCores() - 1
c = makeCluster(cores)
clusterExport(c, c("mod_function", "test_list", "cores"))
registerDoParallel(c)
output <- foreach(i = isplitVector(iter, chunks = cores)) %dopar%
{
  mod_function(test_list[[i]])
}
stopCluster(c)
I get the error
Error in { : task 1 failed - "recursive indexing failed at level 3"
I do not get the error when I do not split the iteration vector into chunks. I am not sure exactly what isplitVector returns or how I should go about indexing the list. This works for me:
n = 10000
iter = 1:n
cores = detectCores() - 1
c = makeCluster(cores)
registerDoParallel(c)
output <- foreach(i = 1:n) %dopar%
{
  mod_function(test_list[[i]])
}
stopCluster(c)
Since I have a lot of iterations, I thought the best way to speed up my foreach was to chunk the iterations across the cluster. Any help in this direction would be very helpful. Thanks in advance.
The isplitVector function returns an iterator that returns sub-vectors (or sub-lists) of its first argument. You're getting an error because you're using [[ to index into test_list with a vector. You might be able to use [ instead, but that would fail if mod_function doesn't accept list arguments.
Here's one way to break up your example into cores tasks that works even if mod_function doesn't accept list arguments:
output <-
  foreach(s = isplitVector(test_list, chunks = cores), .combine = 'c') %dopar% {
    lapply(s, mod_function)
  }
Note that it uses c to combine the lists returned by lapply into a single list.
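As a quick, self-contained check that the chunked version returns the same results in the same order as a plain lapply, here is a toy sketch (mod_function and test_list below are placeholders, not the question's actual objects):
library(itertools)
library(foreach)
library(doParallel)

mod_function <- function(x) x^2  # placeholder for the real function
test_list <- as.list(1:100)      # placeholder data
cores <- 2

cl <- makeCluster(cores)
registerDoParallel(cl)
output <- foreach(s = isplitVector(test_list, chunks = cores),
                  .combine = 'c') %dopar% {
  lapply(s, mod_function)
}
stopCluster(cl)

identical(output, lapply(test_list, mod_function))  # should be TRUE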

Writing data back to dataframe after parallel computing in R

I am new to parallel computing in R.
I have gone through various links on Stack Overflow on the topic and wrote some initial code:
library(doParallel)
library(foreach)
detectCores()
## [1] 4
# Create cluster with desired number of cores
cl <- makeCluster(3)
# Register cluster
registerDoParallel(cl)
# Find out how many cores are being used
getDoParWorkers()
My objective is to do a repetitive calculation on each row; my function looks something like this:
func2 <- function(i)
{
  msgbody <- tolower(as.character(purchase$msg_body[i]))
  purchase$category[i] <- category_fun(i, msgbody)
}
For this purpose I have written a foreach loop
foreach(i = 1:nrow(purchase)) %dopar% func2(i)
But the issue is that func2 is supposed to write back to the data frame, yet it is not writing anything back; all the entries are the same as before.
Appreciate your help.
I believe this would work better in the scenario you're indicating. You can write a function that computes the category for each row's msg_body string and then collect the results with foreach:
func2 <- function(i, msg_body)
{
  return(category_fun(i, tolower(as.character(msg_body))))
}

result <- foreach(i = 1:nrow(purchase), .combine = c) %dopar% {
  func2(i, purchase$msg_body[i])
}
purchase$category <- result
I do think you'll be better off using apply() to solve this though.
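For reference, here is a sketch of that apply-family alternative (sequential, using sapply; category_fun and purchase are the objects from the question):
purchase$category <- sapply(seq_len(nrow(purchase)), function(i) {
  category_fun(i, tolower(as.character(purchase$msg_body[i])))
})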

Using large numbers of permutations in parallel: combining iterpc and foreach

Iterpc begins each loop from the same point. This has created an amusing, though frustrating issue, illustrated below:
#### Load packages:
library("doParallel")
library("foreach")
library("iterpc")

#### Define variables:
n <- 2
precision <- 0.1
support <- matrix(seq(0 + precision, 1 - precision, by = precision), ncol = 1)
nodes <- 2  # preparing for multicore
cl <- makeCluster(nodes)

#### Prep iterations:
I <- iterpc(table(support), n, ordered = TRUE, replace = FALSE)
steps <- (factorial(length(support)) / factorial(length(support) - n)) / n

#### Run loop to get the combined values:
registerDoParallel(cl)
support_n <- foreach(m = 1:n, .packages = "iterpc", .combine = 'cbind') %dopar% {
  t(getnext(I, steps))
}  # ????
This returns support_n. I was hoping that this would run the sets in parallel, with one half of the permutations assigned to each node. However, it only does the first half of the permutations... twice ([,1] is equal to [,37]). How do I get it to return all of the permutations and combine them in parallel?
Assume there will be an arbitrarily large number of permutations, so memory management and speed are nontrivial.
Previous research: All possible permutations for large n
Just for anyone who comes here by searching "foreach iterpc R", as I did.
The approach marked as the accepted answer does not really differ much from:
result <- foreach(a = 1:10) %dopar% {
  a
}
because a = getnext(I, d = (2*steps)) will simply return the first 2*steps combinations up front, and the foreach package will then iterate in parallel over these combinations.
When you have a very large number of combinations returned by iterpc (which is what it is built for), you cannot in fact use such an approach.
In that case, the only thing I believe one could do is to write an iterator wrapper over the iterpc object.
# register parallel backend
library(doParallel)
registerDoParallel(cores = 3)

# create iterpc object
library(iterpc)
combinations <- iterpc(4, 2)

library(iterators)
iterpc_iterator <- function(iterpc_object, iteration_length) {
  # our own nextElem() function, because a finished iterpc object
  # returns NULL on subsequent getnext() calls rather than
  # signalling 'StopIteration'
  nextEl <- function() {
    if (iteration_length > 0)
      iteration_length <<- iteration_length - 1
    else
      stop('StopIteration')
    getnext(iterpc_object)
  }
  obj <- list(nextElem = nextEl)
  class(obj) <- c('irep', 'abstractiter', 'iter')
  obj
}

it <- iterpc_iterator(combinations, getlength(combinations))

library(foreach)
result <- foreach(i = it) %dopar% {
  i
}
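A quick sanity check on the wrapper: every combination should appear exactly once in result.
length(result) == getlength(combinations)  # TRUE: all choose(4, 2) = 6 combinations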
You can simply use iterpc::iter_wrapper.
The relevant line from your example:
support_n <-foreach(a = iter_wrapper(I), .combine='cbind') %dopar% a
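For completeness, here is a sketch of how that one-liner might slot into the question's setup (this assumes the I and cl objects defined in the question, and that each value yielded by iter_wrapper(I) is a single permutation):
library(iterpc)
library(doParallel)
registerDoParallel(cl)
support_n <- foreach(a = iter_wrapper(I), .combine = 'cbind') %dopar% a
stopCluster(cl)
support_n <- t(support_n)  # one permutation per row, as in the original attempt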
After further investigation I believe the following does in fact execute the command in parallel.
registerDoParallel(cl)
system.time(
  support_n <- foreach(a = getnext(I, d = (2*steps)), .combine = 'cbind') %dopar% a
)
support_n <- t(support_n)
Thank you for your assistance.

Writing to global variables when using doSNOW and doing parallelization in R?

Is there a problem with accessing/writing to a global variable when using the doSNOW package on multiple cores?
In the program below, each call to MyCalculations(ii) writes to the ii-th column of the matrix "globalVariable"...
Do you think the result will be correct? Will there be hidden catches?
Thanks a lot!
p.s. I have to write out to the global variable because this is a simplified example; in fact I have lots of outputs that need to be transported from within the parallel loops... therefore, probably the only way is to write out to global variables...
library(doSNOW)
MaxSearchSpace = 44*5
globalVariable = matrix(0, 10000, MaxSearchSpace)
cl <- makeCluster(7)
registerDoSNOW(cl)
foreach (ii = 2:MaxSearchSpace, .combine = cbind, .verbose = F) %dopar%
{
  MyCalculations(ii)
}
stopCluster(cl)
p.s. I am asking: within the doSNOW framework, is there any danger in accessing/writing global variables... thanks
Since this question is a couple months old, I hope you've found an answer by now. However, in case you're still interested in feedback, here's something to consider:
When using foreach with a parallel backend, you won't be able to assign to variables in R's global environment in the way you're attempting (you probably noticed this). With a sequential backend assignment will work, but not with a parallel one like doSNOW.
Instead, save all the results of your calculations for each iteration in a list and return it to an object, so that you can extract the appropriate results after all calculations have been completed.
My suggestion starts similarly to your example:
library(doSNOW)
MaxSearchSpace <- 44*5
cl <- makeCluster(parallel::detectCores())
# do not create the globalVariable object
registerDoSNOW(cl)

# Save the results of the `foreach` iterations as
# lists of lists in an object (`theRes`)
theRes <- foreach (ii = 2:MaxSearchSpace, .verbose = F) %dopar%
{
  # do some calculations
  theNorms <- rnorm(10000)
  thePois <- rpois(10000, 2)
  # store the results in a list
  list(theNorms, thePois)
}
After all iterations have been completed, extract the results from theRes and store them as objects (e.g., globalVariable1, globalVariable2, etc.):
globalVariable1 <- do.call(cbind, lapply(theRes, "[[", 1))
globalVariable2 <- do.call(cbind, lapply(theRes, "[[", 2))
With this in mind, if you are performing calculations with each iteration that are dependent on the results of calculations from previous iterations, then this type of parallel computing is not the approach to take.
