How to implement parallel JAGS on Windows with foreach?

I would like to run JAGS models in parallel on my Windows computer with 4 cores, but I have not been able to figure out why my model will not run. I have searched the web extensively, including these posts:
http://andrewgelman.com/2011/07/23/parallel-jags-rngs/
http://users.soe.ucsc.edu/~draper/eBay-Google-2013-parallel-rjags-example.txt
When I run a simple example (see code below) with %do%, the model runs fine (serially of course). When I use %dopar%, I receive the error:
Error in { : task 1 failed - "Symbol table is empty"
library(rjags)
library(coda)
library(foreach)
library(doParallel)
library(random)
load.module("lecuyer")
### Data generation
y <- rnorm(100)
n <- length(y)
win.data <- list(y=y, n=n)
# Define model
sink("model.txt")
cat("
model {
# Priors
mu ~ dnorm(0, 0.001)
tau <- 1 / (sigma * sigma)
sigma ~ dunif(0, 10)
# Likelihood
for (i in 1:n) {
y[i] ~ dnorm(mu, tau)
}
}
",fill=TRUE)
sink()
inits <- function() {
  list(mu = rnorm(1), sigma = runif(1, 0, 10),
       .RNG.name = "lecuyer::RngStream",
       .RNG.seed = as.numeric(randomNumbers(n = 1, min = 1, max = 1e+06, col = 1)))
}
params <- c('mu','sigma')
cl <- makePSOCKcluster(3)
clusterSetRNGStream(cl)
registerDoParallel(cl)
model.wd <- paste(getwd(), '/model.txt', sep='') # I wondered if the cores were having trouble finding the model.
m <- foreach(i=1:3, .packages=c('rjags','random','coda'), .multicombine=TRUE) %dopar% {
  load.module("lecuyer")
  model.jags <- jags.model(model.wd, win.data, inits=inits, n.chains=1, n.adapt=1000, quiet=TRUE)
  result <- coda.samples(model.jags, params, 1000, thin=5)
  return(result)
}
stopCluster(cl)
# Error in { : task 1 failed - "Symbol table is empty"
sessionInfo()
# R version 3.0.1 (2013-05-16)
# Platform: x86_64-w64-mingw32/x64 (64-bit)
#
# locale:
# [1] LC_COLLATE=English_Canada.1252 LC_CTYPE=English_Canada.1252 LC_MONETARY=English_Canada.1252
# [4] LC_NUMERIC=C LC_TIME=English_Canada.1252
#
# attached base packages:
# [1] parallel stats graphics grDevices utils datasets methods base
#
# other attached packages:
# [1] random_0.2.1 doParallel_1.0.3 iterators_1.0.6 foreach_1.4.1 rjags_3-10 coda_0.16-1
# [7] lattice_0.20-21
#
# loaded via a namespace (and not attached):
# [1] codetools_0.2-8 compiler_3.0.1 grid_3.0.1 tools_3.0.1
More Details:
The problem occurs on a Windows 7 computer with NO admin privileges, but not on a computer WITH admin privileges. The problem occurs with Rgui and Rterm, and with the new rjags package 3-11. The error message occurs within the function jags.model.
The problem appears to stem from a mismatch in writing and reading files to a temporary directory. When I start R, it automatically creates a temporary folder. When I close R, this folder is automatically deleted, unless it contains files.
For example, when I start R it creates this folder:
C:\Users\jesse whittington\AppData\Local\Temp\RtmpoBe1gw.
When I run an rjags model with
m <- jags.model(file='model.txt', data=win.data, inits=inits, n.chains=3, n.adapt=1000, quiet=FALSE)
no files are written to this temporary directory.
When I run 3 chains serially with foreach and %do%, 3 temporary files are written to this folder. These files are 1 KB in size, and when I open them with a text editor they appear blank.
wd <- getwd()
cl <- makePSOCKcluster(3, outfile=paste(wd,'/Out_messages.txt', sep='')) # 3 chains
clusterSetRNGStream(cl)
registerDoParallel(cl)
m <- foreach(i=1:3, .packages=c('rjags','random','coda'), .multicombine=TRUE) %do% {
  load.module("lecuyer")
  result <- jags.model(file='model.txt', data=win.data, inits=inits, n.chains=1, n.adapt=1000, quiet=FALSE)
  return(result)
}
stopCluster(cl)
When I run 3 chains in parallel with foreach and %dopar%, 3 temporary files are written to the folder ..Temp\RtmpoBe1gw. The error messages in the outfile suggest that the function is looking for DIFFERENT files in DIFFERENT temporary directories. When I include a line to create a tempfile directory and name, I see that 3 new temporary folders are created (they are later deleted by stopCluster). jags.model looks in these 3 folders for the temporary files and fails because there is nothing in them. Thus, I suspect the temp files are written to one temporary directory (associated with the parent R session) and that jags.model then fails when trying to open different temp files in the 3 temporary directories created within foreach.
wd <- getwd()
cl <- makePSOCKcluster(3, outfile=paste(wd,'/Out_messages.txt', sep='')) # 3 chains
clusterSetRNGStream(cl)
registerDoParallel(cl)
m <- foreach(i=1:3, .packages=c('rjags','random','coda'), .multicombine=TRUE) %dopar% {
  load.module("lecuyer")
  tmp <- tempfile()
  print(tmp)
  result <- jags.model(file='model.txt', data=win.data, inits=inits, n.chains=1, n.adapt=1000, quiet=FALSE)
  return(result)
}
stopCluster(cl)
From Out_messages.txt
starting worker pid=4396 on localhost:11109 at 08:34:06.430
starting worker pid=6548 on localhost:11109 at 08:34:06.879
starting worker pid=6212 on localhost:11109 at 08:34:07.418
Loading required package: coda
Loading required package: lattice
Loading required package: coda
Loading required package: lattice
Loading required package: coda
Loading required package: lattice
Linked to JAGS 3.3.0
Loaded modules: basemod,bugs
Linked to JAGS 3.3.0
Loaded modules: basemod,bugs
Linked to JAGS 3.3.0
Loaded modules: basemod,bugs
module lecuyer loaded
module lecuyer loaded
module lecuyer loaded
[1] "C:\\Users\\JESSEW~1\\AppData\\Local\\Temp\\RtmpQbPAVC\\file112c8077a0" # Note this is from: tmp <- tempfile()
[1] "C:\\Users\\JESSEW~1\\AppData\\Local\\Temp\\RtmpMPMpcY\\file199489564c6"
[1] "C:\\Users\\JESSEW~1\\AppData\\Local\\Temp\\Rtmpk9vMR5\\file18445f6b2fd4"
Compiling model graph
Compiling model graph
Compiling model graph
Warning messages:
1: In jags.model(file = "model.txt", data = win.data, inits = inits, :
Unused variable "y" in data
2: In jags.model(file = "model.txt", data = win.data, inits = inits, :
Unused variable "n" in data
3: In jags.model(file = "model.txt", data = win.data, inits = inits, :
Failed to open file C:\Users\JESSEW~1\AppData\Local\Temp\RtmpQbPAVC\file112c394b4eef
Nothing to compile
4: In jags.model(file = "model.txt", data = win.data, inits = inits, :
Unused initial value for "mu" in chain 1
5: In jags.model(file = "model.txt", data = win.data, inits = inits, :
Unused initial value for "sigma" in chain 1
6: In jags.model(file = "model.txt", data = win.data, inits = inits, :
Can't initialize. No nodes in graph (Have you compiled the model?)
The folder RtmpQbPAVC is created but the file file112c394b4eef does not exist.
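To check this, one can compare the temporary directory of the parent R session with those reported by the workers. A minimal diagnostic sketch, using only base parallel functions (the cluster size matches the example above):
library(parallel)
cl <- makePSOCKcluster(3)
tempdir()                                            # temp directory of the parent R session
clusterEvalQ(cl, tempdir())                          # each worker reports its own, different Rtmp* folder
clusterEvalQ(cl, file.access(tempdir(), mode = 2))   # 0 = writable, -1 = write permission denied
stopCluster(cl)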

Steve brought this to my attention, but your second example shows that it is not a problem with rjags. I am unable to reproduce the bug in either example using the same setup (Windows 7, R 3.0.1, JAGS 3.0.3, ordinary user without admin access).

Since the errors are caused by writing and reading the model file, I suggest that you bypass that issue by using the "textConnection" function. This can be used to create a file-like object without creating an actual file, thus avoiding the need for temporary files. I modified your example to demonstrate this:
library(rjags)
library(doParallel)
library(random)
load.module("lecuyer")
y <- rnorm(100)
n <- length(y)
win.data <- list(y=y, n=n)
model <- "
model {
  # Priors
  mu ~ dnorm(0, 0.001)
  tau <- 1 / (sigma * sigma)
  sigma ~ dunif(0, 10)
  # Likelihood
  for (i in 1:n) {
    y[i] ~ dnorm(mu, tau)
  }
}"
inits <- function() {
  list(mu = rnorm(1), sigma = runif(1, 0, 10),
       .RNG.name = "lecuyer::RngStream",
       .RNG.seed = as.numeric(randomNumbers(n = 1, min = 1, max = 1e+06, col = 1)))
}
params <- c('mu', 'sigma')
cl <- makePSOCKcluster(3)
clusterSetRNGStream(cl)
registerDoParallel(cl)
m <- foreach(i=1:3, .packages=c('rjags', 'random'),
             .combine='c', .final=mcmc.list) %dopar% {
  load.module("lecuyer")
  model.jags <- jags.model(textConnection(model), win.data, inits=inits,
                           n.chains=1, n.adapt=1000, quiet=TRUE)
  coda.samples(model.jags, params, 1000, thin=5)
}
I also changed the result handling so that the value returned by the foreach loop is an "mcmc.list" object, which is what the "coda.samples" function returns.
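As a quick follow-up, the combined mcmc.list can then be inspected with the usual coda functions; a minimal sketch (remember to stop the cluster when finished):
stopCluster(cl)
summary(m)        # posterior summaries pooled across the 3 chains
gelman.diag(m)    # convergence diagnostic across the chains
plot(m)           # trace and density plots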

I have identified the source of the problem.
I can write and read files to and from a temporary directory when using R normally.
When in parallel, I can write files to the temporary directories, but I do NOT have permission to read files.
The problem occurs both writing and reading text files (using writeLines and readLines) and csv files.
I have since found that if I receive this message: "Error in { : task 1 failed - cannot open the connection", I can rectify the problem by deleting all temporary files in TEMP. For some locked files, I have to shut down and restart the computer before I am able to delete the necessary files. Even so, within the same R session I might receive the error message and then be able to successfully run the program on my next try. The problem likely stems from our government anti-virus software and/or the structure of our remote network access.
Here is an example that writes and reads text files for simplicity.
library(foreach)
library(doParallel)
wd <- getwd()
data <- data.frame(x=1:10, y=1:10)
This works fine.
modfile <- tempfile()
print(modfile)
# "C:\\Users\\JESSEW~1\\AppData\\Local\\Temp\\RtmpsvYfFk\\filef38a272022"
write.csv(data, modfile, row.names=F)
m <- read.csv(modfile)
This does not work:
cl <- makePSOCKcluster(3, outfile=paste(wd,'/Out_messages.txt', sep='')) # 3 chains
clusterSetRNGStream(cl)
registerDoParallel(cl)
m <- foreach(i=1:3) %dopar% {
  modfile <- tempfile()
  write.csv(data, modfile, row.names=F)
  x <- read.csv(modfile)
  return(x)
}
# Error in { : task 1 failed - "cannot open the connection"
stopCluster(cl)
Here is the output from Out_messages.txt. Note the "Permission denied" on the far right.
starting worker pid=6852 on localhost:11611 at 22:09:19.488
starting worker pid=6984 on localhost:11611 at 22:09:19.926
starting worker pid=3384 on localhost:11611 at 22:09:20.441
Warning message:
Warning message:
In file(con, "r") :
cannot open file 'C:\Users\JESSEW~1\AppData\Local\Temp\Rtmp6dEZLP\file1ac44a506032': Permission denied
In file(con, "r") :
cannot open file 'C:\Users\JESSEW~1\AppData\Local\Temp\RtmpuydRvR\file1b48185f2a2d': Permission denied
Warning message:
In file(con, "r") :
cannot open file 'C:\Users\JESSEW~1\AppData\Local\Temp\RtmpAbOIng\filed382ef37d51': Permission denied
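If the worker temporary directories are the culprit (as the "Permission denied" messages suggest), one possible workaround is to avoid tempfile() inside the workers and write per-task files to a directory you control. A sketch, assuming the working directory wd is writable from the workers (the task_ file names are only illustrative):
cl <- makePSOCKcluster(3)
registerDoParallel(cl)
m <- foreach(i=1:3) %dopar% {
  modfile <- file.path(wd, paste0('task_', i, '.csv'))   # per-task file in the writable working directory
  write.csv(data, modfile, row.names=FALSE)
  x <- read.csv(modfile)
  file.remove(modfile)                                   # clean up the per-task file
  x
}
stopCluster(cl)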

Related

How to fix C function R_nc4_get_vara_double returned error in ncdf4 parallel processing in R

I want to download .nc (NetCDF) data through OPeNDAP from remote storage. I use a parallel backend with a foreach %dopar% loop, as follows:
# INPUTS
inputs=commandArgs(trailingOnly = T)
interimpath=as.character(inputs[1])
gcm=as.character(inputs[2])
period=as.character(inputs[3])
var=as.character(inputs[4])
source='MACAV2'
cat('\n\n EXTRACTING DATA FOR',var, gcm, period, '\n\n')
# CHANGING LIBRARY PATHS
.libPaths("/storage/home/htn5098/local_lib/R40") # local library for packages
setwd('/storage/work/h/htn5098/DataAnalysis')
source('./src/Rcodes/CWD_function_package.R') # Calling the function Rscript
# CALLING PACKAGES
library(foreach)
library(doParallel)
library(parallel)
library(filematrix)
# REGISTERING CORES FOR PARALLEL PROCESSING
no_cores <- detectCores()
cl <- makeCluster(no_cores)
registerDoParallel(cl)
invisible(clusterEvalQ(cl,.libPaths("/storage/home/htn5098/local_lib/R40"))) # Really have to import library paths into the workers
invisible(clusterEvalQ(cl, c(library(ncdf4))))
# EXTRACTING DATA FROM THE .NC FILES TO MATRIX FORM
url <- readLines('./data/external/MACAV2_OPENDAP_allvar_allgcm_allperiod.txt')
links <- grep(x = url,pattern = paste0('.*',var,'.*',gcm,'_.*',period), value = T)
start=c(659,93,1) # lon, lat, time
count=c(527,307,-1)
spfile <- read.csv('./data/external/SERC_MACAV2_Elev.csv',header = T)
grids <- sort(unique(spfile$Grid))
clusterExport(cl,list('ncarray2matrix','start','count','grids')) #exporting data into clusters for parallel processing
cat('\nChecking when downloading all grids\n')
# k <- foreach(x = links,.packages = c('ncdf4')) %dopar% {
# nc <- nc_open(x)
# nc.var=ncvar_get(nc,varid=names(nc$var),start=start,count=count)
# return(nc.var)
# nc_close(nc)
# }
k <- foreach(x = links, .packages = c('ncdf4'), .errorhandling = 'pass') %dopar% {
  nc <- nc_open(x)
  print(nc)
  nc.var <- ncvar_get(nc, varid = names(nc$var), start = c(659,93,1), count = c(527,307,-1))
  nc_close(nc)
  return(dim(nc.var))
  Sys.sleep(10)   # note: unreachable, since it comes after return()
}
# k <- parSapply(cl,links,function(x) {
# nc <- nc_open(x)
# nc.var=ncvar_get(nc,varid=names(nc$var),start=start,count=count)
# nc_close(nc)
# return(nc.var)
# })
print(k)
However, I keep getting this error:
<simpleError in ncvar_get_inner(ncid2use, varid2use, nc$var[[li]]$missval, addOffset, scaleFact, start = start, count = count, verbose = verbose, signedbyte = signedbyte, collapse_degen = collapse_degen): C function R_nc4_get_vara_double returned error>
What could be the reason for this problem? Can you recommend a solution for this that is time-efficient (I have to repeat this for about 20 files)?
Thank you.
I had the same error in my code. The problem was not the code itself; it was one of the files that I wanted to read. It had something wrong with it, so R couldn't open it. I identified the file, downloaded it again, and the same code worked perfectly.
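If you suspect a corrupt or unreadable file, a minimal sketch along these lines (serial, wrapping each read in tryCatch, and reusing links, start and count from the question) can identify which OPeNDAP link fails before re-downloading it:
library(ncdf4)
bad <- character(0)
for (x in links) {
  ok <- tryCatch({
    nc <- nc_open(x)
    v  <- ncvar_get(nc, varid = names(nc$var), start = start, count = count)
    nc_close(nc)
    TRUE
  }, error = function(e) FALSE)
  if (!ok) bad <- c(bad, x)   # collect the links that could not be read
}
print(bad)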
I also encountered the same error. For me, restarting the R session did the trick.

getting Error while installing DMwR package

Hi, I am getting this error message while installing the DMwR package from RGui 3.3.1.
Error in read.dcf(file.path(pkgname, "DESCRIPTION"), c("Package", "Type")) :
cannot open the connection
In addition: Warning messages:
1: In unzip(zipname, exdir = dest) : error 1 in extracting from zip file
2: In read.dcf(file.path(pkgname, "DESCRIPTION"), c("Package", "Type")) :
cannot open compressed file 'bitops/DESCRIPTION', probable reason 'No such file or directory'
Approach 1:
The error being reported is an inability to open a connection. On Windows that is often a firewall problem and is covered in the Windows R FAQ. The usual first attempt should be to load internet2.dll. From a console session you can use:
setInternet2(TRUE)
NEWS for R version 3.3.1 Patched (2016-09-13 r71247):
(Windows only) Function setInternet2() has no effect and will be removed in due course. The choice between methods "internal" and "wininet" is now made by the method arguments of url() and download.file(), and their defaults can be set via options. The out-of-the-box default remains "wininet" (as it has been since R 3.2.2).
You are using version 3.3.1, which is why it no longer works.
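On R 3.3.x and later, the equivalent is to select the download method directly via options; a sketch ("wininet" is Windows-only):
# replacement for setInternet2(TRUE) on R >= 3.3
options(download.file.method = "wininet", url.method = "wininet")
install.packages("DMwR")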
Approach 2:
The error suggests that the package requires another package, bitops, which is not available. That package is not in any of DMwR's direct dependencies, but one of the dependencies requires it in turn (in this case, ROCR).
Try installing it directly from the CRAN Windows binary:
install.packages("https://cran.r-project.org/bin/windows/contrib/3.3/bitops_1.0-6.zip", repos = NULL, type = "win.binary")
The package DMwR has abind, zoo, xts, quantmod and ROCR as imports. So, in addition to installing these 5 packages, you must install the DMwR package itself. Install the packages manually, in the following sequence:
install.packages('abind')
install.packages('zoo')
install.packages('xts')
install.packages('quantmod')
install.packages('ROCR')
install.packages("DMwR")
library("DMwR")
Approach 3:
chooseCRANmirror()
Select CRAN mirror from popup list. Then install packages:
install.packages("bitops")
install.packages("DMwR")
Package ‘DMwR’ was removed from the CRAN repository.
Formerly available versions can be obtained from the archive.
https://CRAN.R-project.org/package=DMwR
You can use the function as written in the CRAN package. Copy the following code into a new R script, run it, and save it for future use if you want. Once you run this function, you should be able to use SMOTE the way you have been trying to use it.
# ===================================================
# Creating a SMOTE training sample for classification problems
#
# If called with learner=NULL (the default) is does not
# learn any model, simply returning the SMOTEd data set
#
# NOTE: It does not handle NAs!
#
# Examples:
# ms <- SMOTE(Species ~ .,iris,'setosa',perc.under=400,perc.over=300,
# learner='svm',gamma=0.001,cost=100)
# newds <- SMOTE(Species ~ .,iris,'setosa',perc.under=300,k=3,perc.over=400)
#
# L. Torgo, Feb 2010
# ---------------------------------------------------
SMOTE <- function(form, data,
                  perc.over = 200, k = 5,
                  perc.under = 200,
                  learner = NULL, ...
                  )
  # INPUTS:
  # form        a model formula
  # data        the original training set (with the unbalanced distribution)
  # minCl       the minority class label
  # perc.over/100 is the number of new cases (smoted cases) generated
  #             for each rare case. If perc.over < 100 a single case
  #             is generated uniquely for a randomly selected perc.over
  #             of the rare cases
  # k           is the number of neighbours to consider as the pool from where
  #             the new examples are generated
  # perc.under/100 is the number of "normal" cases that are randomly
  #             selected for each smoted case
  # learner     the learning system to use.
  # ...         any learning parameters to pass to learner
{
  # the column where the target variable is
  tgt <- which(names(data) == as.character(form[[2]]))
  minCl <- levels(data[, tgt])[which.min(table(data[, tgt]))]

  # get the cases of the minority class
  minExs <- which(data[, tgt] == minCl)

  # generate synthetic cases from these minExs
  if (tgt < ncol(data)) {
    cols <- 1:ncol(data)
    cols[c(tgt, ncol(data))] <- cols[c(ncol(data), tgt)]
    data <- data[, cols]
  }
  newExs <- smote.exs(data[minExs, ], ncol(data), perc.over, k)
  if (tgt < ncol(data)) {
    newExs <- newExs[, cols]
    data <- data[, cols]
  }

  # get the undersample of the "majority class" examples
  selMaj <- sample((1:NROW(data))[-minExs],
                   as.integer((perc.under/100)*nrow(newExs)),
                   replace = TRUE)

  # the final data set (the undersample + the rare cases + the smoted exs)
  newdataset <- rbind(data[selMaj, ], data[minExs, ], newExs)

  # learn a model if required
  if (is.null(learner)) return(newdataset)
  else do.call(learner, list(form, newdataset, ...))
}
# ===================================================
# Obtain a set of smoted examples for a set of rare cases.
# L. Torgo, Feb 2010
# ---------------------------------------------------
smote.exs <- function(data, tgt, N, k)
  # INPUTS:
  # data are the rare cases (the minority "class" cases)
  # tgt is the name of the target variable
  # N is the percentage of over-sampling to carry out;
  # and k is the number of nearest neighbours to use for the generation
  # OUTPUTS:
  # The result of the function is a (N/100)*T set of generated
  # examples with rare values on the target
{
  nomatr <- c()
  T <- matrix(nrow = dim(data)[1], ncol = dim(data)[2]-1)
  for (col in seq.int(dim(T)[2]))
    if (class(data[, col]) %in% c('factor', 'character')) {
      T[, col] <- as.integer(data[, col])
      nomatr <- c(nomatr, col)
    } else T[, col] <- data[, col]

  if (N < 100) { # only a percentage of the T cases will be SMOTEd
    nT <- NROW(T)
    idx <- sample(1:nT, as.integer((N/100)*nT))
    T <- T[idx, ]
    N <- 100
  }

  p <- dim(T)[2]
  nT <- dim(T)[1]

  ranges <- apply(T, 2, max) - apply(T, 2, min)

  nexs <- as.integer(N/100) # this is the number of artificial exs generated
                            # for each member of T
  new <- matrix(nrow = nexs*nT, ncol = p)    # the new cases

  for (i in 1:nT) {
    # the k NNs of case T[i,]
    xd <- scale(T, T[i, ], ranges)
    for (a in nomatr) xd[, a] <- xd[, a] == 0
    dd <- drop(xd^2 %*% rep(1, ncol(xd)))
    kNNs <- order(dd)[2:(k+1)]

    for (n in 1:nexs) {
      # select randomly one of the k NNs
      neig <- sample(1:k, 1)

      ex <- vector(length = ncol(T))

      # the attribute values of the generated case
      difs <- T[kNNs[neig], ] - T[i, ]
      new[(i-1)*nexs+n, ] <- T[i, ] + runif(1)*difs
      for (a in nomatr)
        new[(i-1)*nexs+n, a] <- c(T[kNNs[neig], a], T[i, a])[1 + round(runif(1), 0)]
    }
  }

  newCases <- data.frame(new)
  for (a in nomatr)
    newCases[, a] <- factor(newCases[, a], levels = 1:nlevels(data[, a]), labels = levels(data[, a]))

  newCases[, tgt] <- factor(rep(data[1, tgt], nrow(newCases)), levels = levels(data[, tgt]))
  colnames(newCases) <- colnames(data)
  newCases
}
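A minimal usage sketch, adapted from the example in the DMwR documentation (it assumes a two-class factor target, so iris is recoded to make "setosa" the rare class):
data(iris)
d <- iris[, c(1, 2, 5)]
d$Species <- factor(ifelse(d$Species == "setosa", "rare", "common"))
newData <- SMOTE(Species ~ ., d, perc.over = 600, perc.under = 100)
table(newData$Species)   # the rare class has now been over-sampled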
It has been removed from the CRAN repository. There are instructions on how to retrieve it from the archive.
Either follow the link - https://packagemanager.rstudio.com/client/#/repos/2/packages/DMwR
or copy and paste the three lines of code below:
install.packages("devtools")
devtools::install_version('DMwR', '0.4.1')
library("DMwR")
EDIT: this is the error I got while downloading the DMwR package in 2022, but it looks like when the question was posted the error happened for a different reason.
The reason is that the package 'DMwR' was built under R version 3.4.3, so the solution is explained in detail in the accepted answer. In short, just run the script below to get the problem solved!
install.packages('abind')
install.packages('zoo')
install.packages('xts')
install.packages('quantmod')
install.packages('ROCR')
install.packages("DMwR")
library("DMwR")

Parallel processing in R doParallel foreach save data

Progress has been made on getting the parallel processing part working, but saving the vector with the fetch distances is not working properly. The error I get is:
df_Test_Fetch <- data.frame(x_lake_length)
Error in data.frame(x_lake_length) : object 'x_lake_length' not found
write.table(df_Test_Fetch,file="C:/tempTest_Fetch.csv",row.names=TRUE,col.names=TRUE, sep=",")
Error in is.data.frame(x) : object 'df_Test_Fetch' not found
I have tried altering the code below so that the foreach step is output to x_lake_length, but that did not output the vector as I hoped. How can I get the actual results saved to a CSV file? I am running a Windows 8 computer with R x64 3.3.0.
Thank you in advance,
Jen
Here is the full code.
# make sure there is no prexisting data
rm(x_lake_length)
# Libraries ---------------------------------------------------------------
if (!require("pacman")) install.packages("pacman")
pacman::p_load(lakemorpho,rgdal,maptools,sp,doParallel,foreach,
doParallel)
# HPC ---------------------------------------------------------------------
cores_2_use <- detectCores() - 2
cl <- makeCluster(cores_2_use, useXDR = F)
clusterSetRNGStream(cl, 9956)
registerDoParallel(cl, cores_2_use)
# Data --------------------------------------------------------------------
ogrDrivers()
dsn <- system.file("vectors", package = "rgdal")[1]
# the line below is commented out but when I run the script on my data the line below is what I use instead of the one above
# then making the name changes as needed
# dsn<-setwd("J:\\Elodea\\ByHUC6\\")
ogrListLayers(dsn)
ogrInfo(dsn=dsn, layer="trin_inca_pl03")
owd <- getwd()
setwd(dsn)
ogrInfo(dsn="trin_inca_pl03.shp", layer="trin_inca_pl03")
setwd(owd)
x <- readOGR(dsn=dsn, layer="trin_inca_pl03")
summary(x)
# Analysis ----------------------------------------------------------------
myfun <- function(x, i) {
  tmp <- lakeMorphoClass(x[i, ], NULL, NULL, NULL)
  x_lake_length <- vector("numeric", length = nrow(x))
  x_lake_length[i] <- lakeMaxLength(tmp, 200)
  print(i)
  Sys.sleep(0.1)
}
foreach(i = 1:nrow(x),.combine=cbind,.packages=c("lakemorpho","rgdal")) %dopar% (
myfun(x,i)
)
options(digits=10)
df_Test_Fetch <- data.frame(x_lake_length)
write.table(df_Test_Fetch,file="C:/temp/Test_Fetch.csv",row.names=TRUE,col.names=TRUE, sep=",")
print(proc.time())
I think this is what you want, though without understanding the subject matter I can't be 100% sure.
What I did was add a return() to your parallelized function and assign the value returned by the foreach call to x_lake_length. But I'm only guessing that that's what you were trying to do, so please correct me if I'm wrong.
# make sure there is no prexisting data
rm(x_lake_length)
# Libraries ---------------------------------------------------------------
if (!require("pacman")) install.packages("pacman")
pacman::p_load(lakemorpho,rgdal,maptools,sp,doParallel,foreach,
doParallel)
# HPC ---------------------------------------------------------------------
cores_2_use <- detectCores() - 2
cl <- makeCluster(cores_2_use, useXDR = F)
clusterSetRNGStream(cl, 9956)
registerDoParallel(cl, cores_2_use)
# Data --------------------------------------------------------------------
ogrDrivers()
dsn <- system.file("vectors", package = "rgdal")[1]
# the line below is commented out but when I run the script on my data the line below is what I use instead of the one above
# then making the name changes as needed
# dsn<-setwd("J:\\Elodea\\ByHUC6\\")
ogrListLayers(dsn)
ogrInfo(dsn=dsn, layer="trin_inca_pl03")
owd <- getwd()
setwd(dsn)
ogrInfo(dsn="trin_inca_pl03.shp", layer="trin_inca_pl03")
setwd(owd)
x <- readOGR(dsn=dsn, layer="trin_inca_pl03")
summary(x)
# Analysis ----------------------------------------------------------------
myfun <- function(x, i) {
  tmp <- lakeMorphoClass(x[i, ], NULL, NULL, NULL)
  x_lake_length <- vector("numeric", length = nrow(x))
  x_lake_length[i] <- lakeMaxLength(tmp, 200)
  print(i)
  Sys.sleep(0.1)
  return(x_lake_length)
}
x_lake_length <- foreach(i = 1:nrow(x),.combine=cbind,.packages=c("lakemorpho","rgdal")) %dopar% (
myfun(x,i)
)
options(digits=10)
df_Test_Fetch <- data.frame(x_lake_length)
write.table(df_Test_Fetch,file="C:/temp/Test_Fetch.csv",row.names=TRUE,col.names=TRUE, sep=",")
print(proc.time())

R2WinBUGS error in R

I'm trying to duplicate some code and am running into troubles with WinBUGS. The code was written in 2010 and I think that back then, the package was installed with additional files which R is now looking for and can't find (hence the error), but I'm not sure.
R stops when it tries to use bugs.directory (see code), and the error is:
Error in file(con, "rb") : cannot open the connection
In addition: Warning message:
In file(con, "rb") :
cannot open file 'C:/Users/Hiwi/Documents/R/Win-library/3.0/R2winBUGS/System/Rsrc/Registry.odc': No such file or directory
Error in bugs.run(n.burnin, bugs.directory, WINE = WINE, useWINE = useWINE, :
WinBUGS executable does not exist in C:/Users/Hiwi/Documents/R/Win-library/3.0/R2winBUGS
I have the results of the analysis, so if there is another way of conducting a Bayesian analysis for the "rawdata" file (in the 14-day model with the [-3,0] event window), or if someone would PLEASE shed some light on what's wrong with the code, I would be forever grateful.
The code is:
rm(list=ls(all=TRUE))
setwd("C:/Users/Hiwi/Dropbox/Oracle/Oracle CD files/analysis/chapter6_a")
library(foreign)
rawdata <- read.dta("nyt.dta",convert.factors = F)
library(MASS)
summary(glm.nb(rawdata$num_events_14 ~ rawdata$nyt_num))
# WinBUGS code
library("R2WinBUGS")
nb.model <- function(){
  for (i in 1:n){            # loop for all observations
    # stochastic component
    dv[i] ~ dnegbin(p[i], r)
    # link and linear predictor
    p[i] <- r/(r + lambda[i])
    log(lambda[i]) <- b[1] + b[2] * iv[i]
  }
  #
  # prior distributions
  r <- exp(logr)
  logr ~ dnorm(0.0, 0.01)
  b[1] ~ dnorm(0, 0.001)     # prior (please note: second element is 1/variance)
  b[2] ~ dnorm(0, 0.001)     # prior
}
write.model(nb.model, "negativebinomial.bug")
n <- dim(rawdata)[1] # number of observations
winbug.data <- list(dv = rawdata$num_events_14,
                    iv = rawdata$nyt_num,
                    n = n)
winbug.inits <- function(){ list(logr = 0, b = c(2.46, -.37)) }  # starting values from the uniform distribution between -1 and 1
bug.erg <- bugs(data=winbug.data,
inits=winbug.inits,
#inits=NULL,
parameters.to.save = c("b","r"),
model.file="negativebinomial.bug",
n.chains=3, n.iter=10000, n.burnin=5000,
n.thin=1,
codaPkg=T,
debug=F,
#bugs.directory="C:/Users/Hiwi/Documents/R/Win-library/3.0/R2winBUGS/"
bugs.directory="C:/Users/Hiwi/Documents/R/Win-library/3.0/R2winBUGS"
)
tempdir()
setwd(tempdir())
file.rename("codaIndex.txt","simIndex.txt")
file.rename("coda1.txt","sim1.txt")
file.rename("coda2.txt","sim2.txt")
file.rename("coda3.txt","sim3.txt")
posterior <- rbind(read.coda("sim1.txt","simIndex.txt"),read.coda("sim2.txt","simIndex.txt"),read.coda("sim3.txt","simIndex.txt"))
post.df <- as.data.frame(posterior)
summary(post.df)
quantile(post.df[,2],probs=c(.025,.975))
quantile(post.df[,2],probs=c(.05,.95))
quantile(post.df[,2],probs=c(.10,.90))
tempdir()
Difficult to say for sure without sitting at your PC... Maybe it is something to do with R2WinBUGS looking in the wrong directory for WinBUGS.exe? You can point R2WinBUGS to the right place using the bugs.directory argument in the bugs function.
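For example, a sketch assuming WinBUGS 1.4 is installed in its default location (adjust the path to wherever WinBUGS14.exe actually lives on your machine):
bug.erg <- bugs(data = winbug.data,
                inits = winbug.inits,
                parameters.to.save = c("b", "r"),
                model.file = "negativebinomial.bug",
                n.chains = 3, n.iter = 10000, n.burnin = 5000, n.thin = 1,
                codaPkg = TRUE,
                bugs.directory = "C:/Program Files/WinBUGS14")   # the WinBUGS installation, not the R2WinBUGS library folder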
If not, try and install OpenBUGS and give R2OpenBUGS a go.

Error in parallel process using doSNOW

Error in { : task 1 failed - "invalid connection"
Why do I get this error every time I try to use all 4 cores for a parallel process?
Here is the example code:
NumberOfCluster <- 4
cl <- makeCluster(NumberOfCluster)
registerDoSNOW(cl)
fl<- file(file.choose(),"r") # file.choose() is going to locate a file(.tsv)
# of size 8 gb (RAM is 4 GB)
foreach(i=1:3) %dopar% {
  View(name_fil <- read.delim(fl, nrows = 1000000, header = TRUE))
}
You're getting an error because file objects can't be exported to the workers. Instead, you could export the name of the file and open that file on each of the workers:
fname <- file.choose()
fname <- file.choose()
foreach(i=1:3) %dopar% {
  fl <- file(fname, "r")
  View(name_fil <- read.delim(fl, nrows = 1000000, header = TRUE))
}
You may run into problems using the View function next, but this should solve the "invalid connection" error.
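For instance, a sketch that returns each chunk to the master instead of calling View() in the workers, reusing fname from above (the skip arithmetic assumes a single header line, and the chunk boundaries are only illustrative):
chunks <- foreach(i=1:3) %dopar% {
  # skip the header line plus the rows already covered by earlier tasks;
  # column names are lost here, so header = FALSE
  read.delim(fname, nrows = 1000000, skip = 1 + (i - 1) * 1000000, header = FALSE)
}
head(chunks[[1]])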
