How to check if stopCluster (R) worked

When I try to remove a cluster from my workspace with stopCluster, it does not seem to work. Below is the code I am using.
> cl <- makeCluster(3)
> cl
socket cluster with 3 nodes on host ‘localhost’
> stopCluster(cl)
> cl
socket cluster with 3 nodes on host ‘localhost’
Note that cl is still reported as a socket cluster with 3 nodes after I have supposedly removed it. Shouldn't I get an error that object cl is not found? How do I know that my cluster has actually been removed? A related question: if I close R, is the cluster terminated and my computer returned to its normal state of being able to use all of its cores?

You shouldn't get an error that cl is not found until you run rm(cl): stopping a cluster doesn't remove the object from your environment.
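If you also want the object removed from your workspace, do that explicitly:
> rm(cl)
> exists("cl")
[1] FALSE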
Use showConnections to see that no connections are active:
> require(parallel)
Loading required package: parallel
> cl <- makeCluster(3)
> cl
socket cluster with 3 nodes on host ‘localhost’
> showConnections()
description class mode text isopen can read can write
3 "<-localhost:11129" "sockconn" "a+b" "binary" "opened" "yes" "yes"
4 "<-localhost:11129" "sockconn" "a+b" "binary" "opened" "yes" "yes"
5 "<-localhost:11129" "sockconn" "a+b" "binary" "opened" "yes" "yes"
> stopCluster(cl)
> showConnections()
description class mode text isopen can read can write
>
Whether or not your computer is "returned to its normal state" depends on the type of cluster you create. If it's just a simple socket or fork cluster, then gracefully stopping the parent process should cause all the child processes to terminate. If it's a more complicated cluster, it's possible terminating R will not stop all the jobs it started on the nodes.
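If you want that graceful cleanup to happen automatically, one common pattern (a sketch, not part of the original answer) is to tie stopCluster to function exit with on.exit:
library(parallel)

# Sketch: the workers are stopped even if the computation fails midway.
run_parallel <- function(n = 3) {
  cl <- makeCluster(n)
  on.exit(stopCluster(cl), add = TRUE)
  parSapply(cl, 1:10, function(x) x^2)
}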

Unfortunately, the print.SOCKcluster method doesn't tell you whether the cluster object is usable. However, you can find out by printing the elements of the cluster object, which invokes the print.SOCKnode method. For example:
> library(parallel)
> cl <- makeCluster(3)
> for (node in cl) try(print(node))
node of a socket cluster on host ‘localhost’ with pid 29607
node of a socket cluster on host ‘localhost’ with pid 29615
node of a socket cluster on host ‘localhost’ with pid 29623
> stopCluster(cl)
> for (node in cl) try(print(node))
Error in summary.connection(connection) : invalid connection
Error in summary.connection(connection) : invalid connection
Error in summary.connection(connection) : invalid connection
Note that print.SOCKnode actually sends a message via the socket connection in order to get the process ID of the corresponding worker, as seen in the source code:
> parallel:::print.SOCKnode
function (x, ...)
{
    sendCall(x, eval, list(quote(Sys.getpid())))
    pid <- recvResult(x)
    msg <- gettextf("node of a socket cluster on host %s with pid %d",
        sQuote(x[["host"]]), pid)
    cat(msg, "\n", sep = "")
    invisible(x)
}
<bytecode: 0x2f0efc8>
<environment: namespace:parallel>
Thus, if you've called stopCluster on the cluster object, you'll get errors trying to use the socket connections.
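Wrapping that idea in a helper gives you a programmatic check. This is only a sketch: is_cluster_alive is not part of the parallel API, and it relies on the unexported internals sendCall and recvResult shown above.
library(parallel)

# Sketch: TRUE if every node of a socket cluster still responds.
is_cluster_alive <- function(cl) {
  all(vapply(cl, function(node) {
    sent <- try(parallel:::sendCall(node, eval, list(quote(Sys.getpid()))),
                silent = TRUE)
    if (inherits(sent, "try-error")) return(FALSE)
    !inherits(try(parallel:::recvResult(node), silent = TRUE), "try-error")
  }, logical(1)))
}

cl <- makeCluster(3)
is_cluster_alive(cl)  # TRUE
stopCluster(cl)
is_cluster_alive(cl)  # FALSE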

Related

database connection intermittently fails when using dopar

I am trying to access an SQL Server database from R and need to parallelise the process for higher throughput using doSNOW. When setting up the cluster, I first initialise the connection, but for some of the cores in the cluster, database connection fails without explanation.
cl <- makeCluster(10)
registerDoSNOW(cl)
clusterEvalQ(cl, {
  library(RODBC)
  dbhandle <- odbcDriverConnect(%connectionstring%)
})
This code prints a list of the connections and whilst some have been successfully initialised, others have failed (returned -1). This happens randomly and different connections fail each time the code is run.
[[1]]
[1] -1
[[2]]
RODBC Connection 1
Details:
case=nochange
DRIVER=SQL Server
SERVER=redacted
UID=
Trusted_Connection=Yes
WSID=redacted
DATABASE=redacted
[[3]]
[1] -1
[[4]]
RODBC Connection 1
Details:
case=nochange
DRIVER=SQL Server
SERVER=redacted
UID=
Trusted_Connection=Yes
WSID=redacted
DATABASE=redacted
As per the comments, staggering the connection attempts with Sys.sleep(Sys.getpid()/1000) fixes the problem:
clusterEvalQ(cl, {
  Sys.sleep(Sys.getpid()/1000)
  library(RODBC)
  dbhandle <- odbcDriverConnect(%connectionstring%)
})

How to setup workers for parallel processing in R using snowfall and multiple Windows nodes?

I’ve successfully used snowfall to set up a cluster on a single server with 16 processors.
require(snowfall)
if (sfIsRunning() == TRUE) sfStop()
number.of.cpus <- 15
sfInit(parallel = TRUE, cpus = number.of.cpus)
stopifnot( sfCpus() == number.of.cpus )
stopifnot( sfParallel() == TRUE )
# Print the hostname for each cluster member
sayhello <- function()
{
  info <- Sys.info()[c("nodename", "machine")]
  paste("Hello from", info[1], "with CPU type", info[2])
}
names <- sfClusterCall(sayhello)
print(unlist(names))
Now, I am looking for complete instructions on how to move to a distributed model. I have 4 different Windows machines with a total of 16 cores that I would like to use for a 16-node cluster. So far, I understand that I could manually set up a SOCK connection or leverage MPI. While it appears possible, I haven’t found clear and complete directions as to how.
The SOCK route appears to depend on code in a snowlib script. I can generate a stub from the master side with the following code:
winOptions <-
  list(host="172.01.01.03",
       rscript="C:/Program Files/R/R-2.7.1/bin/Rscript.exe",
       snowlib="C:/Rlibs")
cl <- makeCluster(c(rep(list(winOptions), 2)), type = "SOCK", manual = T)
It yields the following:
Manually start worker on 172.01.01.03 with
"C:/Program Files/R/R-2.7.1/bin/Rscript.exe"
C:/Rlibs/snow/RSOCKnode.R
MASTER=Worker02 PORT=11204 OUT=/dev/null SNOWLIB=C:/Rlibs
It feels like a reasonable start. I found code for RSOCKnode.R on GitHub under the snow package:
local({
    master <- "localhost"
    port <- ""
    snowlib <- Sys.getenv("R_SNOW_LIB")
    outfile <- Sys.getenv("R_SNOW_OUTFILE") ##**** defaults to ""; document

    args <- commandArgs()
    pos <- match("--args", args)
    args <- args[-(1 : pos)]
    for (a in args) {
        pos <- regexpr("=", a)
        name <- substr(a, 1, pos - 1)
        value <- substr(a, pos + 1, nchar(a))
        switch(name,
               MASTER = master <- value,
               PORT = port <- value,
               SNOWLIB = snowlib <- value,
               OUT = outfile <- value)
    }

    if (! (snowlib %in% .libPaths()))
        .libPaths(c(snowlib, .libPaths()))
    library(methods) ## because Rscript as of R 2.7.0 doesn't load methods
    library(snow)

    if (port == "") port <- getClusterOption("port")

    sinkWorkerOutput(outfile)
    cat("starting worker for", paste(master, port, sep = ":"), "\n")
    slaveLoop(makeSOCKmaster(master, port))
})
It’s not clear how to actually start a SOCK listener on the workers, unless it is buried in snow::recvData.
Looking into the MPI route, as far as I can tell, Microsoft MPI version 7 is a starting point. However, I could not find a Windows alternative for sfCluster. I was able to start the MPI service, but it does not appear to listen on port 22 and no amount of bashing against it with snowfall::makeCluster has yielded a result. I’ve disabled the firewall and tried testing with makeCluster and directly connecting to the worker from the master with PuTTY.
Is there a comprehensive, step-by-step guide to setting up a snowfall cluster on Windows workers that I’ve missed? I am fond of snowfall::sfClusterApplyLB and would like to continue using that, but if there is an easier solution, I’d be willing to change course. Looking into Rmpi and parallel, I found alternative solutions for the master side of the work, but still little to no specific detail on how to setup workers running Windows.
Due to the nature of the work environment, neither moving to AWS, nor Linux is an option.
Related questions without definitive answers for Windows worker nodes:
How to set up cluster slave nodes (on Windows)
Parallel R on a Windows cluster
Create a cluster of co-workers' Windows 7 PCs for parallel processing in R?
Several HPC infrastructure options were considered: MPICH, Open MPI, and MS MPI. I initially tried MPICH2 but gave up because the latest stable Windows release (1.4.1) dates back to 2013 and has seen no support since. Open MPI does not support Windows, which leaves MS MPI as the only option.
Unfortunately, snowfall does not support MS MPI, so I decided to go with the pbdMPI package, which supports MS MPI by default. pbdMPI implements the SPMD paradigm, in contrast with Rmpi, which uses manager/worker parallelism.
MS MPI installation, configuration, and execution
1. Install MS MPI v.10.1.2 on all machines in the to-be Windows HPC cluster.
2. Create a directory accessible to all nodes, where R scripts / resources will reside, for example \\HeadMachine\SharedDir.
3. Check that the MS MPI Launch Service (MsMpiLaunchSvc) is running on all nodes.
4. Check that MS MPI has the rights to run the R application on all nodes on behalf of the same user, e.g. SharedUser. The user name and the password must be the same for all machines.
5. Check that R is launched on behalf of the SharedUser user.
6. Finally, execute mpiexec with the options described below:
mpiexec.exe -n %1 -machinefile "C:\MachineFileDir\hosts.txt" -pwd SharedUserPassword -wdir "\\HeadMachine\SharedDir" Rscript hello.R
where
-wdir is a network path to the directory with shared resources.
-pwd is the password of the SharedUser user, for example SharedUserPassword.
-machinefile is the path to the hosts.txt text file, for example C:\MachineFileDir\hosts.txt. The hosts.txt file must be readable from the head node at the specified path; it contains the list of IP addresses of the nodes on which the R script is to be run.
As a result of Step 6, MPI will log in as SharedUser with the password SharedUserPassword and execute copies of the R process on each computer listed in the hosts.txt file.
Details
hello.R:
library(pbdMPI, quiet = TRUE)
init()
cat("Hello World from
process",comm.rank(),"of",comm.size(),"!\n")
finalize()
hosts.txt
The hosts.txt file (the MPI machines file) is a text file whose lines contain the network names of the computers on which the R scripts will be launched. On each line, the computer name is followed by a space and (for MS MPI) the number of MPI processes to launch, which usually equals the number of processors on that node.
Sample of hosts.txt with three nodes having 2 processors each:
192.168.0.1 2
192.168.0.2 2
192.168.0.3 2

R Running foreach dopar loop on HPC MPIcluster

I got access to an HPC cluster with a MPI partition.
My problem is that, no matter what I try, my code (which works fine on my PC) doesn't run on the HPC cluster. The code looks like this:
library(tm)
library(qdap)
library(snow)
library(doSNOW)
library(foreach)

cl <- makeCluster(30, type="MPI")
registerDoSNOW(cl)
np <- getDoParWorkers()
np

Base = "./Files1a/"
files = list.files(path=Base, pattern="\\.txt")

for (i in 1:length(files)) {
  # ...some definitions and variable generation...
  text <- foreach(k = 1:10, .combine='c') %do% {
    text = if (file.exists(paste("./Files", k, "a/", files[i], sep=""))) paste(tolower(readLines(paste("./Files", k, "a/", files[i], sep=""))), collapse=" ") else ""
  }

  docs <- Corpus(VectorSource(text))

  for (k in 1:10) {
    ID[k] <- paste(files[i], k, sep="_")
  }
  data <- as.data.frame(docs)
  data[["docs"]] = ID
  rm(docs)
  data <- sentSplit(data, "text")

  frequency = NULL
  cs <- ceiling(length(POLKEY$x) / getDoParWorkers())
  opt <- list(chunkSize=cs)
  frequency <- foreach(j = 2:length(POLKEY$x), .options.mpi=opt, .combine='cbind') %dopar% ...
  write.csv(frequency, file = paste("./Result/output", i, ".csv", sep=""))
  rm(data, frequency)
}
When I run the batch job, the session gets killed at the time limit. I receive the following message after the MPI cluster initialization:
Loading required namespace: Rmpi
--------------------------------------------------------------------------
PMI2 initialized but returned bad values for size and rank.
This is symptomatic of either a failure to use the
"--mpi=pmi2" flag in SLURM, or a borked PMI2 installation.
If running under SLURM, try adding "-mpi=pmi2" to your
srun command line. If that doesn't work, or if you are
not running under SLURM, try removing or renaming the
pmi2.h header file so PMI2 support will not automatically
be built, reconfigure and build OMPI, and then try again
with only PMI1 support enabled.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.
The process that invoked fork was:
Local host: ...
MPI_COMM_WORLD rank: 0
If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
30 slaves are spawned successfully. 0 failed.
Unfortunately, it seems that the loop doesn't complete even a single iteration, as no output is returned.
For the sake of completeness, my batch file:
#!/bin/bash -l
#SBATCH --job-name MyR
#SBATCH --output MyR-%j.out
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=6
#SBATCH --mem=24gb
#SBATCH --time=00:30:00
MyRProgram="$HOME/R/hpc_test2.R"
cd $HOME/R
export R_LIBS_USER=$HOME/R/Libs2
# start R with my R program
module load R
time R --vanilla -f $MyRProgram
Does anybody have a suggestion how to solve the problem? What am I doing wrong?
Thanks in advance for your help!
Your script is an MPI application, so you need to execute it appropriately via Slurm. The Open MPI FAQ has a special section on how to do that:
https://www.open-mpi.org/faq/?category=slurm
The most important point is that your script shouldn't execute R directly, but should execute it via the mpirun command, using something like:
mpirun -np 1 R --vanilla -f $MyRProgram
My guess is that the "PMI2" error is caused by not executing R via mpirun. I don't think the "fork" message indicates a real problem and it happens to me at times. I think it happens because R calls "fork" when initializing, but this has never caused a problem for me. I'm not sure why I only get this message occasionally.
Note that it is very important to tell mpirun to only launch one process since the other processes will be spawned, so you should use the mpirun -np 1 option. If Open MPI was properly built with Slurm support, then Open MPI should know where to launch those processes when they are spawned, but if you don't use -np 1, then all 30 processes launched via mpirun will spawn 30 processes each, causing a huge mess.
Finally, I think you should tell makeCluster to spawn only 29 processes to avoid running a total of 31 MPI processes. Depending on your network configuration, even that much oversubscription can cause problems.
I would create the cluster object as follows:
library(snow)
library(Rmpi)
cl<- makeCluster(mpi.universe.size() - 1, type="MPI")
That's safer and makes it easier to keep your R script and job script in sync with each other.
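Putting the pieces together, here is a minimal sketch of a complete script for this scheme, assuming it is launched with mpirun -np 1 as described above:
library(snow)
library(doSNOW)
library(foreach)
library(Rmpi)

# mpirun launched one process; spawn the remaining slots as workers.
cl <- makeCluster(mpi.universe.size() - 1, type = "MPI")
registerDoSNOW(cl)

results <- foreach(i = 1:100, .combine = c) %dopar% sqrt(i)

stopCluster(cl)
mpi.quit()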

How can I shut down Rserve gracefully?

I have tried many options both in Mac and in Ubuntu.
I read the Rserve documentation
http://rforge.net/Rserve/doc.html
and that for the Rserve and RSclient packages:
http://cran.r-project.org/web/packages/RSclient/RSclient.pdf
http://cran.r-project.org/web/packages/Rserve/Rserve.pdf
I cannot figure out what is the correct workflow for opening/closing a connection within Rserve and for shutting down Rserve 'gracefully'.
For example, on Ubuntu, I installed R from source with ./configure --enable-R-shlib (following the Rserve documentation) and also added the 'control enable' line in /etc/Rserve.conf.
In an Ubuntu terminal:
library(Rserve)
library(RSclient)
Rserve()
c<-RS.connect()
c ## this is an Rserve QAP1 connection
## Trying to shutdown the server
RSshutdown(c)
Error in writeBin(as.integer....): invalid connection
RS.server.shutdown(c)
Error in RS.server.shutdown(c): command failed with status code 0x4e: no control line present (control commands disabled or server shutdown)
I can, however, CLOSE the connection:
RS.close(c)
NULL
c ## Closed Rserve connection
After closing the connection, I also tried the options (also tried with argument 'c', even though the connection is closed):
RS.server.shutdown()
RSshutdown()
So, my questions are:
1- How can I close Rserve gracefully?
2- Can Rserve be used without RSclient?
I also looked at
How to Shutdown Rserve(), running in DEBUG
but the question refers to the debug mode and is also unresolved. (I don't have enough reputation to comment/ask whether the shutdown works in the non-debug mode).
Also looked at:
how to connect to Rserve with an R client
Thanks so much!
Load Rserve and RSclient packages, then connect to the instances.
> library(Rserve)
> library(RSclient)
> Rserve(port = 6311, debug = FALSE)
> Rserve(port = 6312, debug = TRUE)
Starting Rserve...
"C:\..\Rserve.exe" --RS-port 6311
Starting Rserve...
"C:\..\Rserve_d.exe" --RS-port 6312
> rsc <- RSconnect(port = 6311)
> rscd <- RSconnect(port = 6312)
Looks like they're running...
> system('tasklist /FI "IMAGENAME eq Rserve.exe"')
> system('tasklist /FI "IMAGENAME eq Rserve_d.exe"')
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
Rserve.exe 8600 Console 1 39,312 K
Rserve_d.exe 12652 Console 1 39,324 K
Let's shut 'em down.
> RSshutdown(rsc)
> RSshutdown(rscd)
And they're gone...
> system('tasklist /FI "IMAGENAME eq Rserve.exe"')
> system('tasklist /FI "IMAGENAME eq Rserve_d.exe"')
INFO: No tasks are running which match the specified criteria.
Rserve can be used w/o RSclient by starting it with args and/or a config script. Then you can connect to it from some other program (like Tableau) or with your own code. RSclient provides a way to pass commands/data to Rserve from an instance of R.
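For example, a minimal sketch of starting a standalone server from R (--RS-port and --no-save are standard Rserve command-line options; 6311 is the default port):
library(Rserve)
# Clients that speak the Rserve protocol (Tableau, Java, Python, ...) can now
# connect to port 6311; no RSclient needed on the server side.
Rserve(args = "--RS-port 6311 --no-save")
The same thing can be done from a shell with R CMD Rserve --RS-port 6311.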
Hope this helps :)
On a Windows system, if you want to close an RServe instance, you can use the system function in R to close it down.
For example in R:
library(Rserve)
Rserve() # run without any arguments or ports specified
system('tasklist /FI "IMAGENAME eq Rserve.exe"') # run this to see RServe instances and their PIDs
system('TASKKILL /PID {yourPID} /F') # run this to kill off the RServe instance with your selected PID
If you have closed your RServe instance with that PID correctly, the following message will appear:
SUCCESS: The process with PID xxxx has been terminated.
You can check the RServe instance has been closed down by entering
system('tasklist /FI "IMAGENAME eq Rserve.exe"')
again. If there are no RServe instances running any more, you will get the message
INFO: No tasks are running which match the specified criteria.
More help and info on this topic can be seen in this related question.
Note that the 'RSClient' approach mentioned in an earlier answer is tidier and easier than this one, but I put it forward anyway for those who start RServe without knowing how to stop it.
If you are not able to shut it down within R, run the commands below to kill it in a terminal. These commands work on a Mac.
$ ps ax | grep Rserve # get active Rserve sessions
You will see output like the following, where 29155 is the process id of the active Rserve session.
29155 /Users/userid/Library/R/3.5/library/Rserve/libs/Rserve
38562 0:00.00 grep Rserve
Then run
$ kill 29155
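An equivalent one-liner, assuming your system provides pkill (macOS and most Linux distributions do):
$ pkill -f Rserve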

R - Error in Rmpi with snow

I'm trying to execute an MPI cluster over 3 different computers inside a local area network with the following R code:
library(plyr)
library(class)
library(snow)
cl <- makeCluster(spec=c("localhost","ip1","ip2"),master="ip3")
but I'm getting an error:
Error in mpi.comm.spawn(slave = mpitask, slavearg = args, nslaves = count, :
Calloc could not allocate memory (18446744071562067968 of 4 bytes)
Warning messages:
1: In if (nslaves <= 0) stop("Choose a positive number of slaves.") : [...]
2: In mpi.comm.spawn(slave = mpitask, slavearg = args, nslaves = count, :
NAs introduced by coercion
What is this error due to? I couldn't find any relevant topic on this subject.
When calling makeCluster to create an MPI cluster, the spec argument should either be a number or missing, depending on whether you want the workers to be spawned or not. You can't specify the hostnames, as you would when creating a SOCK cluster. And in order to start workers on other machines with an MPI cluster, you have to execute your R script using a command such as mpirun, mpiexec, etc., depending on your MPI installation, and you specify the hosts to use via arguments to mpirun, not to makeCluster.
In your case, you might execute your script with:
$ mpirun -n 1 -H ip3,localhost,ip1,ip2 R --slave -f script.R
Since -n 1 is used, your script executes only on "ip3", not all four hosts, but MPI knows about the other three hosts, and will be able to spawn processes to them.
You would create the MPI cluster in that script with:
cl <- makeCluster(3)
This should cause a worker to be spawned on "localhost", "ip1", and "ip2", with the master process running on "ip3" (at least with Open MPI: I'm not sure about other MPI distributions). I don't believe the "master" option is used with the MPI transport: it's primarily used by the SOCK transport.
You can get lots of information about mpirun from its man page.
You can also try executing the code on the cluster nodes as follows:
Create a file named nodelist and write the machine names in it, one below the other.
Then try the following command in a terminal:
mpirun -np <no. of processes> -machinefile <path to your nodelist file> Rscript <filename.R>
By default it will take the first node as the master and spawn processes to the rest of the nodes, including itself, as slaves.
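As a concrete sketch of those steps (the machine names and file names here are hypothetical):
$ cat nodelist
machine1
machine2
machine3
$ mpirun -np 3 -machinefile ./nodelist Rscript myscript.R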
