foreach cannot find `i` used in foreach(i=1:N)

I'm having some troubles with the variables inside a foreach. I load the cluster and set up a couple of vectors:
library(doParallel)
ncores <- detectCores() - 2
cl <- makeCluster(ncores, outfile="", port=11439)
registerDoParallel(cl)
results <- rep(NA,10)
values <- 20:30
Then, this does not work:
# Error: object 'i' not found
foreach(i=1:10) %dopar%
results[i] <- i
stopCluster(cl)
While this does:
# ok
foreach(i=1:10) %dopar%
values[i]
stopCluster(cl)
How come it finds `i` when it is used inside `[i]` on the left-hand side, but not when it is used on the right-hand side?

From my comment:
Try it with curly braces:
foreach(i=1:10) %dopar% {
  results[i] <- i
}
Not just in this example: in my experience it is generally better to use curly braces in R. Many problems can be avoided by using them, and apparently these little helpers have some further advantages, as you may see while browsing the internet (e.g. see here).
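For what it's worth, a minimal sketch of why I believe the unbraced version fails (my reading of R's operator precedence, not something the error message states): %dopar%, like any %op% operator, binds more tightly than <-, so the whole unbraced line is parsed as a single assignment, and i is no longer just the foreach loop variable but has to be found in the calling environment, where it does not exist. You can inspect the parse without evaluating anything:
# Hypothetical illustration: show that `<-` ends up as the outermost call.
as.list(quote(foreach(i=1:10) %dopar% results[i] <- i))
# The first element of the returned list is `<-`; with curly braces the whole
# braced expression becomes the operand of %dopar% instead, so `i` is the loop
# variable as intended.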

Related

Dealing with multidimensional output in parallel programming

I am currently working on a program to evaluate the out-of-sample performance of several forecasting models on simulated data. For those who are familiar with finance, it works exactly like backtesting a trading strategy, except that I would evaluate forecasts and not transactions.
Some of the objects I currently manipulate using for loops for this type of task are 7-dimensional arrays (the dimensions stand for Monte Carlo replications, data generating processes, forecast horizons, 3 dimensions for model parameter selection, and one dimension for all the periods covered in the out-of-sample analysis). Obviously, it is painfully slow, so parallel computing has become a must for me.
My problem is: how do I keep track of more than 2 dimensions in R? Let me just show what I mean using for loops and only 3 dimensions:
x <- array(dim=c(2,2,2))
for (i in 1:2){
  for (j in 1:2){
    for (k in 1:2){
      x[i,j,k] <- i+j+k
    }
  }
}
If I use something like 'foreach', I am very annoyed by the fact that, to my knowledge, available combining functionalities will return lists, matrices or vectors -- but not arbitrarily large multidimensional arrays. For instance:
library(doParallel)
library(foreach)
# Get the number of cores to use
no_cores <- max(1, detectCores()-1)
# Make cluster object using no_cores
cl <- makeCluster(no_cores)
# Initialize cluster for parallel computing
registerDoParallel(cl)
x <- foreach(i=1:2, .combine=rbind) %:%
  foreach(j=1:2, .combine=cbind) %:%
  foreach(k=1:2, .combine=c) %dopar% {
    i+j+k
  }
Here, I basically combine results into vectors, then matrices and, finally, I pile up matrices by rows. Another option would be to use lists, or pile matrices through columns, but you can imagine the mess when you have 7 dimensions and millions of iterations to track.
I suppose I could also write my own 'combine' function and get the kind of output I want, but I suspect that I am not the first person to encounter this problem. Either there is a way to do exactly what I want, or someone here can point out a way to think differently about storing my results. It wouldn't be surprising if I were taking an absurdly inefficient path toward solving this problem -- I am an economist, not a data scientist, after all!
Any help would be greatly appreciated. Thanks in advance.
There is one solution that I finally stumbled upon tonight: I can create an appropriate combine function along the dimension of my choice using the 'abind' function from the 'abind' package:
library(abind)
# Get the number of cores to use
no_cores <- max(1, detectCores()-1)
# Make cluster object using no_cores
cl <- makeCluster(no_cores)
# Initialize cluster for parallel computing
registerDoParallel(cl)
mbind <- function(...) abind(..., along=3)
x <- foreach(i=1:2, .combine=mbind) %:%
  foreach(j=1:2, .combine=cbind) %:%
  foreach(k=1:2, .combine=c) %dopar% {
    i+j+k
  }
I would still like to see whether someone has other means of doing what I want, however. There may be many ways to do it, and I am new to R, but this solution is one distinct possibility.
Here is what I would do, and what I already use in one of my packages, bigstatsr.
Take only one dimension and cut it into no_cores blocks. It should have enough iterations (e.g. 20 for 4 cores). For each iteration, construct the part of the array you want and store it in a temporary file. Then, use the contents of these files to fill the whole array. By doing so, you only fill preallocated objects, which should be faster and easier.
Example:
x.all <- array(dim=c(20,2,2))
no_cores <- 3
tmpfile <- tempfile()
range.parts <- bigstatsr:::CutBySize(nrow(x.all), nb = no_cores)
library(foreach)
cl <- parallel::makeCluster(no_cores)
doParallel::registerDoParallel(cl)
foreach(ic = 1:no_cores) %dopar% {
  ind <- bigstatsr:::seq2(range.parts[ic, ])
  x <- array(dim = c(length(ind), 2, 2))
  for (i in seq_along(ind)) {
    for (j in 1:2) {
      for (k in 1:2) {
        x[i,j,k] <- ind[i]+j+k
      }
    }
  }
  saveRDS(x, file = paste0(tmpfile, "_", ic, ".rds"))
}
parallel::stopCluster(cl)
for (ic in 1:no_cores) {
  ind <- bigstatsr:::seq2(range.parts[ic, ])
  x.all[ind, , ] <- readRDS(paste0(tmpfile, "_", ic, ".rds"))
}
print(x.all)
Instead of writing files, you could also directly return the no_cores parts of the array in foreach and combine them with the right abind.
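For completeness, here is a rough, untested sketch of that file-free variant; it reuses no_cores and range.parts from the example above and stacks the returned blocks along the first dimension with abind:
library(abind)
bind1 <- function(...) abind(..., along = 1)  # combine blocks along dimension 1
x.all2 <- foreach(ic = 1:no_cores, .combine = bind1) %dopar% {
  ind <- bigstatsr:::seq2(range.parts[ic, ])
  x <- array(dim = c(length(ind), 2, 2))
  for (i in seq_along(ind))
    for (j in 1:2)
      for (k in 1:2)
        x[i, j, k] <- ind[i] + j + k
  x  # each task returns its block; .combine stacks the blocks in order
}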

foreach: code produces different results with %do% and %dopar%

As a disclaimer, this is my first post, and I would welcome any constructive criticism on how to post appropriately in the future.
The input to my function is an array. When I input the array using the %do% option of foreach, I get the desired output array. I have searched through the various foreach threads, and have not found a similar problem. However, when I run the %dopar% option, I only get the input array back in return. Here is the basic outline of my code; I can post the full script if that would be best, but it is quite long. Ultimately, I will be scaling the array to a size where calculations will have to be done in parallel for the sake of time.
#libraries for constructing a parallel for loop
library(foreach)
library(doParallel)
cores <- detectCores()-1
#establish cluster
cl <- makeCluster(cores)
#to register the doparallel backend
registerDoParallel(cl,cores=7)
#input array
Q <- matrix(0,20,20)
Q[1:5,1:5] <- 2
#function in question....
dyn_rec_proto20_parallel <- function(vegmat){
  distance <- matrix(0,length(vegmat[1,]),length(vegmat[1,]))
  recruitment_prob_mat <- matrix(0,length(vegmat[1,]),length(vegmat[1,]))
  foreach(i=1:length(vegmat[1,]), .export = c('IDW_scoring','roll_for_rec','Q','ageing20_parallel')) %dopar% {
    for(j in 1:length(vegmat[1,])){
      if(vegmat[i,j]==0){
        # do a bunch of updates to vegmat
      }
    }
  }
  return(vegmat)
}

Writing data back to dataframe after parallel computing in R

I am new to Parallel computing in R.
I have gone through various links on Stack Overflow on the topic and written some initial code:
library(doParallel)
library(foreach)
detectCores()
## [1] 4
# Create cluster with desired number of cores
cl <- makeCluster(3)
# Register cluster
registerDoParallel(cl)
# Find out how many cores are being used
getDoParWorkers()
My objective is to do a repetitive calculation on each row; my function looks something like this:
func2 <- function(i)
{
  msgbody <- tolower(as.character(purchase$msg_body[i]))
  purchase$category[i] <- category_fun(i, msgbody)
}
For this purpose I have written a foreach loop
foreach(i = 1:nrow(purchase)) %dopar% func2(i)
But the issue is that func2 is supposed to write back to the data frame, yet it writes nothing back; all the entries are the same as before.
I appreciate your help.
I believe this would work better in the scenario you're indicating. You can write a function that works on each msg_body string:
func2 <- function(i, msg_body)
{
  return(category_fun(i, tolower(as.character(msg_body))))
}
# category_fun lives in the global environment, so export it to the workers
result <- foreach(i = 1:nrow(purchase), .combine = c, .export = "category_fun") %dopar% {
  func2(i, purchase$msg_body[i])
}
purchase$category <- result
I do think you'll be better off using apply() to solve this though.
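For example, a plain (non-parallel) sketch of that idea with sapply(), assuming category_fun(i, msgbody) is the helper defined elsewhere in your script:
# Non-parallel sketch; category_fun is assumed to have the same signature as above.
purchase$category <- sapply(seq_len(nrow(purchase)), function(i) {
  msgbody <- tolower(as.character(purchase$msg_body[i]))
  category_fun(i, msgbody)
})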

Using large numbers of permutations in parallel: combining iterpc and foreach

Iterpc begins each loop from the same point. This has created an amusing, though frustrating issue, illustrated below:
####Load Packages:
library("doParallel")
library("foreach")
library("iterpc")
####Define variables:
n<-2
precision<-0.1
support<-matrix(seq(0+precision,1-precision,by=precision), ncol=1)
nodes<-2 #preparing for multicore.
cl<-makeCluster(nodes)
####Prep iterations
I<-iterpc(table(support),n, ordered=TRUE,replace=FALSE)
steps<-((factorial(length(support)) / factorial(length(support)-n)))/n
####Run loop to get the combined values:
registerDoParallel(cl)
support_n <- foreach(m=1:n, .packages="iterpc", .combine='cbind') %dopar% {
  t(getnext(I,steps))
} #????
Which returns
support_n
I was hoping that this would run each of the sets in parallel, one half of the permutations assigned to each node. However, it only does the first half of the permutations... twice. ([,1] is equal to [,37].) How do I get it to return all of the permutations and combine them in parallel?
Assume there will be an arbitrarily large number of permutations, so memory management and speed are nontrivial.
Previous research: All possible permutations for large n
Just for anyone who comes here by searching "foreach iterpc R", as I did:
The approach marked as the accepted answer does not really differ much from
result <- foreach(a=1:10) %dopar% {
a
}
because a=getnext(I,d=(2*steps)) will simply return the first 2*steps combinations up front, and the foreach package will then iterate in parallel over these combinations.
When you have a very large number of combinations returned by iterpc (which is exactly what it is built for), you cannot in fact use such an approach.
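A tiny, hypothetical illustration of that point (small numbers just for demonstration):
library(iterpc)
I2 <- iterpc(4, 2)         # a small example object: 6 combinations in total
batch <- getnext(I2, d=3)  # returns the first 3 combinations, once, as a matrix
nrow(batch)                # 3 -- the batch is computed up front; nothing is lazy here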
In that case, the only thing I believe one could do is to write an iterator wrapper over the iterpc object:
# register parallel backend
library(doParallel)
registerDoParallel(cores = 3)
#create iterpc object
library(iterpc)
combinations <- iterpc(4,2)
library(iterators)
iterpc_iterator <- function(iterpc_object, iteration_length) {
  # a custom nextElem() is needed because, once the iteration is finished,
  # iterpc returns NULL from a subsequent getnext() call instead of
  # signalling 'StopIteration'
  nextEl <- function() {
    if (iteration_length > 0)
      iteration_length <<- iteration_length - 1
    else
      stop('StopIteration')
    getnext(iterpc_object)
  }
  obj <- list(nextElem=nextEl)
  class(obj) <- c('irep', 'abstractiter', 'iter')
  obj
}
it <- iterpc_iterator(combinations, getlength(combinations))
library(foreach)
result <- foreach(i=it) %dopar% {
i
}
You can simply use iterpc::iter_wrapper.
The relevant line from your example:
support_n <-foreach(a = iter_wrapper(I), .combine='cbind') %dopar% a
After further investigation I believe the following does in fact execute the command in parallel.
registerDoParallel(cl)
system.time(
  support_n <- foreach(a=getnext(I,d=(2*steps)), .combine='cbind') %dopar% a
)
support_n <- t(support_n)
Thank you for your assistance.

Parallel computing with recursive function

My challenge is to compute a recursive function in parallel. However, the recursion is quite deep, and therefore (in my own novice words) there is an issue with allocating a worker when all the workers are busy. In short, it crashes.
Here is some reproducible code. The code is very stupid, but the structure is what counts. This is a simplified version of what is going on.
I work on a Windows machine; if the solution is to go Linux, just say the word. Because the real function can recurse quite deeply, managing the number of workers called for at the upper level will not solve the issue. Is there perhaps a way to know at what level the recursion is?
FUN <- function(optimizer, neighbors, considered, x){
  considered <- c(considered, optimizer)
  neighbors <- setdiff(x=neighbors, y=considered)
  if (length(neighbors)==0) {
    # this loop is STUPID, but it is just an example.
    z <- numeric(10)
    for (i in 1:10)
    {
      z[i] <- sample(x, 1)
    }
    return(max(z))
  } else {
    # Something embarrassingly parallel,
    # but cannot be vectorized.
    z <- foreach(i=1:10, .combine='c') %dopar% {
      FUN(optimizer=neighbors[1], neighbors=neighbors,
          considered=considered, x=x)
    }
    return(max(z))
  }
}
require(doParallel,quietly=T)
cl <- makeCluster(3)
clusterExport(cl, c("FUN"))
registerDoParallel(cl)
getDoParWorkers()
>FUN(optimizer=1,neighbors=c(2),considered=c(),x=1:500)
[1] 500
>FUN(optimizer=1,neighbors=c(2,3),considered=c(),x=1:500)
Error in { : task 1 failed - "could not find function "%dopar%""
>FUN(optimizer=1,neighbors=c(2,3),considered=c(),x=1:500)
Error in { : task 1 failed - "could not find function "%dopar%""
Is this error really because the recursion is too deep, or is it just because you haven't got require(doParallel) in your FUN function, so that when FUN is called on the workers, that instance of R hasn't loaded the package?
Your first example doesn't fail because it is simple enough not to reach the inner %dopar% loop.
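If that is indeed the cause, a minimal sketch of the fix (my assumption, using clusterEvalQ to load the packages on every worker rather than putting require(doParallel) at the top of FUN itself) would be:
# Sketch: make sure each worker session has foreach/doParallel loaded,
# so FUN (as defined above) can find %dopar% when it runs there.
library(doParallel)
cl <- makeCluster(3)
parallel::clusterEvalQ(cl, library(doParallel))  # load the packages on every worker
parallel::clusterExport(cl, "FUN")
registerDoParallel(cl)
FUN(optimizer=1, neighbors=c(2,3), considered=c(), x=1:500)
stopCluster(cl)
Note that no backend is registered on the workers themselves, so the inner %dopar% calls inside FUN should then fall back to sequential execution (with a warning) rather than fail.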
