Parallel programming in R using doParallel and foreach

I get the correct output when I use %do% (sequential processing) in foreach.
However, the script gets stuck when I try %dopar% (parallel processing):
for (i in seq(10, 8, -2)) {
  for (j in seq(10, 8, -2)) {
    foreach(k = 96:100, .packages = c('recommenderlab', 'ggplot2', 'reshape2')) %dopar% {
      results <- run_algorithms(r.binary, "cross", k, k)
      # plot the performance of the three algos
      jpeg(sprintf('u_%d_d_%d_%d.jpg', i, j, k))
      plot(results, annotate = c(1, 3), legend = "topleft")
      dev.off()
    }
  }
}
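Note that the snippet as posted never shows a parallel backend being registered, which %dopar% requires (without one, foreach warns and falls back to sequential execution). For reference, a minimal sketch of the usual doParallel setup; the worker count is an illustrative assumption, not from the original post:

library(foreach)
library(doParallel)
cl <- makeCluster(4)    # worker count chosen for illustration
registerDoParallel(cl)
# ... run the nested loops with %dopar% here ...
stopCluster(cl)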


Parallel processing - Foreach and for loop in R

I want to produce a matrix with the process below, but it takes a very long time.
Is there a way to get the same result faster using foreach and parallel processing?
library("foreach")
library("doParallel")
lambdas <- seq(0.01, 7, by = 0.01)
cl <- makeCluster(2)    # create a cluster with 2 cores
registerDoParallel(cl)  # register the cluster
nlambdas <- foreach(i = 1:1, .inorder = FALSE, .combine = 'cbind',
                    .multicombine = TRUE, .packages = "quantreg") %dopar% {
  first  <- rep()
  second <- rep()
  third  <- rep()
  fourth <- rep()
  for (m in 1:700) {
    for (j in 1:700) {
      for (n in 1:700) {
        for (k in 1:700) {
          first  <- rbind(first,  lambdas[m])
          second <- rbind(second, lambdas[j])
          third  <- rbind(third,  lambdas[n])
          fourth <- rbind(fourth, lambdas[k])
        }
      }
    }
  }
  lambda_total <- cbind(first, second, third, fourth)
}
stopCluster(cl)
You don't need parallel processing; you need to avoid composing this matrix with an O(n^4) loop that grows the result one row at a time with rbind, which is the slow part.
The same matrix can be constructed by repeatedly reordering four vectors. In your case, however, with 700^4 = 2.4e+11 rows, this would still take considerable time and memory.
I illustrate the algorithm with only 7 distinct values in the vector lambdas (2401 combinations in total).
nsteps <- 7
lambdas <- seq(0.01, 7, 7 / nsteps)
# four copies of lambdas, each of length nsteps^4 = 2401
h <- rep(lambdas, nsteps^3)
i <- rep(lambdas, nsteps^3)
j <- rep(lambdas, nsteps^3)
k <- rep(lambdas, nsteps^3)
# 'ordering' is the permutation that sorts one copy; applying it m times
# yields constant blocks of size nsteps^(4 - m), so the four columns vary
# at different rates and together enumerate every combination
ordering <- order(k)
k <- k[ordering]  # applied four times: k varies fastest
k <- k[ordering]
k <- k[ordering]
k <- k[ordering]
j <- j[ordering]  # applied three times
j <- j[ordering]
j <- j[ordering]
i <- i[ordering]  # applied twice
i <- i[ordering]
h <- h[ordering]  # applied once: h varies slowest
lambdas.total <- cbind(first = h, second = i, third = j, fourth = k)
If your memory is large enough to cope with ordering a vector of length > 10^10, you can do this with 700 steps for lambda in no time.
No for loops needed.
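For small cases like the illustration above, base R's expand.grid builds the same Cartesian product directly; a sketch (the column reversal matches the layout above, where the first column varies slowest):

lambdas.total2 <- as.matrix(expand.grid(fourth = lambdas, third = lambdas,
                                        second = lambdas, first = lambdas))[, 4:1]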

R foreach stop iteration at i

I am using the R package foreach.
When a bug exists inside a foreach block, it is hard to reproduce and hard to debug.
Take the following script as an example.
I want to stop at i = 4 to check what is wrong. However, it stops at i = 10.
Any solution?
library(foreach)
library(iterators)  # icount() comes from the iterators package
foreach(i = icount(10)) %do% {
  if (i == 4) {
    e <- simpleError("test error")
    stop(e)
  }
}
One option to handle this is with a browser() inside a tryCatch as in:
foreach(i = icount(10)) %do% {
  tryCatch(
    if (i == 4) {
      e <- simpleError("test error")
      stop(e)
    },
    error = function(e) browser()
  )
}
This will open a browser in the environment at the time of the error, which will allow you to inspect any objects and/or debug your code.
Your console will then look like the following, and you can ask for the value of i:
Browse[1]> i
[1] 4
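If you prefer a post-mortem over an interactive browser, foreach's .errorhandling argument offers an alternative: with "pass", the error object is returned in the result list instead of aborting the loop. A sketch (the filtering step is illustrative):

res <- foreach(i = icount(10), .errorhandling = "pass") %do% {
  if (i == 4) stop("test error")
  i
}
which(sapply(res, inherits, "error"))  # indices of the failed iterations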

Unrelated nested foreach with an outer %dopar% and an inner %do%

I am running tasks locally in parallel with %dopar% from the foreach package, using the doSNOW package to create the cluster (on a Windows machine at the moment). I have done this many times before and it works fine, until I place an unrelated foreach loop using %do% (i.e. non-parallel) inside of it. Then R gives me the following error (with traceback):
Error in { : task 1 failed - "could not find function "%do%""
3 stop(simpleError(msg, call = expr))
2 e$fun(obj, substitute(ex), parent.frame(), e$data)
1 foreach(rc = 1:5) %dopar% {
      aRandomCounter = -1
      if (1 > 0) {
          for (batchi in 1:20) { ...
Here is some code that replicates the problem on my machine:
require(foreach)
require(doSNOW)
cl <- makeCluster(5)
registerDoSNOW(cl)
for (stepi in 1:10) {          # normal outer for
  # the time-consuming stuff in parallel (not looking to actually retrieve any data)
  foreach(rc = 1:5) %dopar% {
    aRandomCounter <- -1
    if (1 > 0) {
      for (batchi in 1:20) {
        anObjectIwantToCreate <- foreach(qrc = 1:100, .combine = c) %do% {
          return(runif(1))     # not efficient; a placeholder to reproduce the issue
        }
        aRandomCounter <- aRandomCounter + sum(anObjectIwantToCreate > 0.5)
      }
    }
    return(aRandomCounter)
  }
}
stopCluster(cl)
Replacing the inner foreach with a simple for or an (l/s)apply is a solution. But is there a way to make this work with the inner foreach, and why does the error occur in the first place?
Of course, I got it to work as soon as I posted it (sorry; I will leave this here in case someone else has the same issue). It is a scoping issue: I knew you had to load any external packages within the %dopar% body, but what I did not realize is that this includes the foreach package itself. Here is the solution:
require(foreach)
require(doSNOW)
cl <- makeCluster(5)
registerDoSNOW(cl)
for (stepi in 1:10) {          # normal outer for
  foreach(rc = 1:5) %dopar% {  # the time-consuming stuff in parallel
    require(foreach)           ### the solution
    aRandomCounter <- -1
    if (1 > 0) {
      for (batchi in 1:20) {
        anObjectIwantToCreate <- foreach(qrc = 1:100, .combine = c) %do% {
          return(runif(1))
        }
        aRandomCounter <- aRandomCounter + sum(anObjectIwantToCreate > 0.5)
      }
    }
    return(aRandomCounter)
  }
}
stopCluster(cl)
I know this is an outdated question, but here is a hint for those who cannot get nested foreach loops to work.
If you parallelize the outer loop by putting a %do% inside a %dopar%, you need to include .packages = c("doSNOW") in the arguments of the outer loop (%dopar%); otherwise you will run into a "doSNOW not found" error.
Generally, people just parallelize the inner loop (%dopar% in %:%), which can be slow for a huge amount of data (waiting on the combination of the inner loops).
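A condensed sketch of that approach applied to the example above: passing the package through foreach's .packages argument replaces the require(foreach) call inside the loop body (the counting logic from the question is simplified here):

foreach(rc = 1:5, .packages = "foreach") %dopar% {
  anObjectIwantToCreate <- foreach(qrc = 1:100, .combine = c) %do% runif(1)
  sum(anObjectIwantToCreate > 0.5)
}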

doParallel issue with inline function on Windows 7 (works on Linux)

I am using R 3.0.1 both on Windows 7 and on Linux (SUSE Server 11, x86_64). The following example code produces an error on Windows but not on Linux. All the packages involved are up to date on both machines.
The Windows error is:
Error in { : task 1 failed - "NULL value passed as symbol address"
If I change %dopar% to %do%, the Windows code runs without any errors. My initial guess was that this relates to some configuration issue on Windows, and I tried reinstalling Rcpp and R, but that did not help. The error seems to be related to scoping: if I define and compile the function cFunc inside f1, then %dopar% works, but, as expected, it is very slow since we are calling the compiler once for each task.
Does anyone have some insight into why the error happens, or suggestions on how to fix it?
library(inline)
sigFunc <- signature(x = "numeric", size_x = "numeric")
code <- '
  double tot = 0;
  for (int k = 0; k < INTEGER(size_x)[0]; k++) {
    tot += REAL(x)[k];
  }
  return ScalarReal(tot);
'
cFunc <- cxxfunction(sigFunc, code)
f1 <- function() {
  x <- rnorm(100)
  a <- cFunc(x = x, size_x = as.integer(length(x)))
  return(a)
}
library(foreach)
library(doParallel)
registerDoParallel()
# this produces an error on Windows but not on Linux
res <- foreach(counter = (1:100)) %dopar% { f1() }
# this works on both Windows and Linux
res <- foreach(counter = (1:100)) %do% { f1() }
# The following is not a practical solution, but compiling cFunc inside f1
# makes %dopar% work on Windows, although it is very slow
f1 <- function() {
  library(inline)
  sigFunc <- signature(x = "numeric", size_x = "numeric")
  code <- '
    double tot = 0;
    for (int k = 0; k < INTEGER(size_x)[0]; k++) {
      tot += REAL(x)[k];
    }
    return ScalarReal(tot);
  '
  cFunc <- cxxfunction(sigFunc, code)
  x <- rnorm(100)
  a <- cFunc(x = x, size_x = as.integer(length(x)))
  return(a)
}
# this now works on Windows but is very slow
res <- foreach(counter = (1:100)) %dopar% { f1() }
Thanks!
Gustavo
The error message "NULL value passed as symbol address" is unusual, and isn't due to the function not being exported to the workers. The cFunc function just doesn't work after being serialized, sent to a worker, and unserialized. It also doesn't work when it's loaded from a saved workspace, which results in the same error message. That doesn't surprise me much, and it may be a documented behavior of the inline package.
As you've demonstrated, you can work around the problem by creating cFunc on the workers. To do that efficiently, you need to do it only once per worker. With the doParallel backend, I would define a worker initialization function and execute it on each of the workers using the clusterCall function:
worker.init <- function() {
  library(inline)
  sigFunc <- signature(x = "numeric", size_x = "numeric")
  code <- '
    double tot = 0;
    for (int k = 0; k < INTEGER(size_x)[0]; k++) {
      tot += REAL(x)[k];
    }
    return ScalarReal(tot);
  '
  assign('cFunc', cxxfunction(sigFunc, code), .GlobalEnv)
  NULL
}

f1 <- function() {
  x <- rnorm(100)
  a <- cFunc(x = x, size_x = as.integer(length(x)))
  return(a)
}

library(foreach)
library(doParallel)
cl <- makePSOCKcluster(3)
clusterCall(cl, worker.init)
registerDoParallel(cl)
res <- foreach(counter = 1:100) %dopar% f1()
Note that you must create the PSOCK cluster object explicitly in order to call clusterCall.
The reason that your example worked on Linux is that the mclapply function is used when you call registerDoParallel without an argument, while on Windows a cluster object is created and the clusterApplyLB function is used. Functions and variables aren't serialized and sent to the workers when using mclapply, so there is no error.
It would be nice if doParallel included support for initializing the workers without the need for using clusterCall, but it doesn't yet.
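A minimal sketch of that distinction (worker counts are illustrative): calling registerDoParallel without a cluster object selects the fork-based mclapply path on Unix, while passing a cluster object forces socket workers on any OS:

library(doParallel)
# fork-based backend (Unix only): workers inherit the master's memory image,
# so compiled objects like cFunc never need to be serialized
registerDoParallel(cores = 2)
# socket-based backend (any OS, and the only option on Windows): objects are
# serialized to the workers, which is where cFunc breaks
cl <- makePSOCKcluster(2)
registerDoParallel(cl)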
The easiest 'workaround' I can think of would be to:
1) Write your code in a separate source file, say cFunc.c,
2) Compile it with R CMD SHLIB,
3) dyn.load the resulting shared library within your foreach call.
For example,
cFunc.c
=======
#include <R.h>
#include <Rinternals.h>

SEXP cFunc(SEXP x, SEXP size_x) {
  double tot = 0;
  for (int k = 0; k < INTEGER(size_x)[0]; ++k) {
    tot += REAL(x)[k];
  }
  return ScalarReal(tot);
}
and
library(foreach)
library(doParallel)
registerDoParallel()
x <- as.numeric(1:100)
size_x <- as.integer(length(x))
res <- foreach(counter = (1:100)) %dopar% {
  dyn.load("cFunc.dll")
  .Call("cFunc", x, size_x)
}
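One portability note (a sketch, not part of the original answer): R CMD SHLIB produces cFunc.dll on Windows and cFunc.so on Linux, and .Platform$dynlib.ext holds the correct extension for the running platform:

dyn.load(paste0("cFunc", .Platform$dynlib.ext))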
Alternatively (and probably better), consider building an actual package with this function exported that you can use.

How to increase buffer size in R-function

I would like to implement a function in R that can increase the size of a buffer in a for loop.
The overall plan is to write a package that uses a test shapefile and a reference shapefile. It should create a buffer around the reference shapefile and increase the buffer's size as long as necessary for it to intersect the whole test shapefile.
To that end, I have already written some code snippets to read the shapefiles and create the first buffer:
require("rgeos")
require("rgdal")
l1 <- readOGR(dsn = "C:/Maps", layer = "osm_ms1")
l2 <- readOGR(dsn = "C:/Maps", layer = "osm_ms2")
proj4string(l2) <- CRS("+init=epsg:31467")  ## DHDN / 3-degree Gauss-Kruger zone 3
l2buffer <- gBuffer(l2, width = 0.001, capStyle = "ROUND")
plot(l2buffer, col = "black")
lines(l2, col = "red")
lines(l1, col = "blue")
Up to this point, everything works fine.
After that, I wanted to transfer this method to a for loop with a buffer for every step:
i <- 0.001
buffergrow <- function(shape) {
  for (k in 1:10) {
    linebuffer[k] <- gBuffer(shape, width = i, capStyle = "ROUND")
    plot(linebuffer[k])
    i <- i + 0.001
  }
}

> buffergrow(l2)
Error in linebuffer[k] <- gBuffer(shape, width = i, capStyle = "ROUND") :
  Object 'linebuffer' not found
As you can see, an error occurs when I call the function buffergrow with l2 as the argument (shape). Does anybody have an idea why this happens? I have already tried some other ideas, but I need some help.
Optionally / additionally: do you have some hints for me regarding the further work on my overall plan?
Best regards,
Stefan
You have to initialize an object before accessing its subelements.
E.g.:
foo <- double(10)
for (i in 1:10) {
  foo[i] <- i
}
# or
linebuffer <- list()
for (i in 1:10) {
  linebuffer[[i]] <- i
}
But you don't need a linebuffer object in your use case.
Try the following instead (note that i must be initialized inside the function, and the shape argument should be used instead of the global l2):
buffergrow <- function(shape) {
  i <- 0.001
  for (k in 1:10) {
    plot(gBuffer(shape, width = i, capStyle = "ROUND"))
    i <- i + 0.001
  }
}
EDIT:
If you need to store the gBuffer results:
buffergrow <- function(shape) {
  i <- 0.001                        # starting width, matching the question
  linebuffer <- vector("list", 10)
  for (k in 1:10) {
    linebuffer[[k]] <- gBuffer(shape, width = i, capStyle = "ROUND")
    plot(linebuffer[[k]])
    i <- i + 0.001
  }
  return(linebuffer)
}
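Regarding the further work on the overall plan, a hedged sketch: grow the buffer until it covers the whole test shape, using rgeos::gContains as the stopping test. The function name, step size, and argument names are illustrative assumptions, not part of the answer above:

buffergrow_until <- function(reference, test, step = 0.001) {
  width <- step
  repeat {
    buf <- gBuffer(reference, width = width, capStyle = "ROUND")
    if (gContains(buf, test)) break  # buffer now covers the test shape
    width <- width + step
  }
  list(buffer = buf, width = width)
}

# e.g.: result <- buffergrow_until(l2, l1)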
