How to shut down an open R cluster connection using parallel - r

In the question here, the OP mentioned using kill to stop each individual process. Because I wasn't aware that connections remain open if you push "stop" while running this in parallel in RStudio on Windows 10, I foolishly ran the same thing 4-5 times, so now I have about 15 open connections on my poor 3-core machine eating up all of my CPU. I could restart R, but then I would have to recreate all of my unsaved objects, which would take a good hour, and I'd rather not waste the time. Likewise, the answers in the linked post are great, but all of them are about how to prevent the issue in the future, not how to actually solve it once you have it.
So I'm looking for something like:
# causes the problem
lapply(c('doParallel', 'doSNOW'), library, character.only = TRUE)
n_c <- detectCores() - 1
cl <- makeCluster(n_c)
registerDoSNOW(cl)
stop()
stopCluster(cl) # never reached
# so to close off the connections I want something like (pseudocode):
a <- showConnections()
a$description %>% kill   # "kill" is the hypothetical function I'm after
The issue is very frustrating; any help would be appreciated.

Use
autoStopCluster <- function(cl) {
  stopifnot(inherits(cl, "cluster"))
  env <- new.env()
  env$cluster <- cl
  attr(cl, "gcMe") <- env
  reg.finalizer(env, function(e) {
    message("Finalizing cluster ...")
    message(capture.output(print(e$cluster)))
    try(parallel::stopCluster(e$cluster), silent = FALSE)
    message("Finalizing cluster ... done")
  })
  cl
}
and then set up your cluster as:
cl <- autoStopCluster(makeCluster(n_c))
Old cluster objects no longer reachable will then be automatically stopped when garbage collected. You can trigger the garbage collector by calling gc(). For example, if you call:
cl <- autoStopCluster(makeCluster(n_c))
cl <- autoStopCluster(makeCluster(n_c))
cl <- autoStopCluster(makeCluster(n_c))
cl <- autoStopCluster(makeCluster(n_c))
cl <- autoStopCluster(makeCluster(n_c))
gc()
and watch your OS's process monitor, you'll see lots of workers being launched, but eventually, when the garbage collector runs, only the most recent set of cluster workers remains.
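As a quick check from inside R (a small add-on, not part of the original answer), you can also count the open socket connections before and after triggering the garbage collector:
nrow(showConnections())  # many "sockconn" entries while stale clusters exist
invisible(gc())          # finalizers run and the stale clusters are stopped
nrow(showConnections())  # only the most recent cluster's workers remain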
EDIT 2018-09-05: Added debug output messages to show when the registered finalizer runs, which happens when the garbage collector runs. Remove those message() lines and use silent = TRUE if you want it to be completely silent.
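For reference, a completely silent variant would look like this (same logic as above with the debug output removed; the name autoStopClusterQuiet is just for illustration):
autoStopClusterQuiet <- function(cl) {
  stopifnot(inherits(cl, "cluster"))
  env <- new.env()
  env$cluster <- cl
  attr(cl, "gcMe") <- env
  reg.finalizer(env, function(e) {
    try(parallel::stopCluster(e$cluster), silent = TRUE)
  })
  cl
}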

Related

Rstudio refuse to employ more cores [Not about code]

I would like to do multithreaded calculation in R. Below is my code. It examines the "Quality" column of NGS data and remembers the rows that contain ",".
It worked, but it didn't save any time. Now I have figured out the reason: multiple threads (11 of them) were successfully created, but only 1 actually did any processing. See Update 1. I also tried doMC in place of doParallel; the example code worked, but not when I inserted my own code into the shell. See Update 2.
Forgive me for introducing Update 3: there are occasions when the program runs as planned on a 4-thread computer under Windows, but it's not consistent.
library(microseq)
library(foreach)
library(doParallel)
library(stringr)
cn <- detectCores() - 1  ## i.e. 11
read1 <- readFastq("pairIGK.fastq")
Cluster <- makeCluster(cn, type = "SOCK", methods = FALSE)
registerDoParallel(Cluster)
del_list <- foreach(i = 1:nrow(read1), .inorder = FALSE, .packages = c("stringr")) %dopar% {
  if (str_count(read1$Quality[i], ",") != 0) i
}
stopCluster(Cluster)
del_list <- unlist(del_list)
read1 <- read1[c(-(del_list)), ]
Update 1. I tested the code on the same 12-thread computer but under Linux. I found that basically only one core was working for R. I saw 10 or 11 rsession items in the system monitor, but they were not doing any processing at all.
Update 2. I found a slide deck from Microsoft about multi-threaded calculation with R.
https://blog.revolutionanalytics.com/downloads/Speed%20up%20R%20with%20Parallel%20Programming%20in%20the%20Cloud%20%28useR%20July2018%29.pdf
It highlighted the package doMC with an example that calculates the likelihood of 2 classmates sharing a birthday for varying class sizes. I tested the provided code as-is, and it did in fact use all cores. The code goes:
pbirthdaysim <- function(n) {
  ntests <- 100000
  pop <- 1:365
  anydup <- function(i)
    any(duplicated(sample(pop, n, replace = TRUE)))
  sum(sapply(seq(ntests), anydup)) / ntests
}
library(doMC)
registerDoMC(11)
bdayp <- foreach(n = 1:100) %dopar% pbirthdaysim(n)
It took ~20s to finish on my 12-thread machine, which agrees with the slide.
However, when I inserted my own code into the shell, the same thing happened: multiple threads got created, but only one was actually processing. My code goes like this:
library(microseq)
library(foreach)
library(doParallel)
library(stringr)
library(doMC)
cn <- detectCores() - 1
read1 <- readFastq("pairIGK.fastq")
registerDoMC(cn)
del_list <- foreach(i = 1:nrow(read1), .inorder = FALSE, .packages = c("stringr")) %dopar% {
  if (str_count(read1$Quality[i], ",") != 0) i
}
del_list <- unlist(del_list)
read1 <- read1[c(-(del_list)), ]
Update 3.
I'm really confused and will go on investigating.
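One hedged guess (a sketch, not an answer from the thread; it reuses the objects read1 and cn and the packages loaded above): with one row per foreach task, scheduling overhead can swamp the actual work. Handing each worker a block of rows would look roughly like this:
Cluster <- makeCluster(cn, type = "SOCK", methods = FALSE)
registerDoParallel(Cluster)
row_chunks <- split(seq_len(nrow(read1)), cut(seq_len(nrow(read1)), cn, labels = FALSE))
del_list <- foreach(idx = row_chunks, .combine = c, .packages = "stringr") %dopar% {
  idx[str_count(read1$Quality[idx], ",") != 0]  # one whole block per task
}
stopCluster(Cluster)
read1 <- read1[c(-(del_list)), ]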

Future solutions

I am working with a large data set, which I use for certain calculations. Since it is a huge data set, the machine I am working on takes excessively long to do the job, so I decided to use the future package to distribute the work between several machines and speed up the calculations.
My problem is that through future (using PuTTY and SSH) I can connect to those machines (in parallel), but the work itself is all done by the main one, without any distribution. Maybe you can advise on a solution:
How to make the work run on all machines;
Also, how to check whether the processes are working (I mean some function or anything else that could help verify the functioning of those machines, if such a thing exists).
My code:
library(future)
workers <- c("000.000.0.000", "111.111.1.111")
plan(remote, envir = parent.frame(), workers = workers, myip = "222.222.2.22")
start <- proc.time()
cl <- makeClusterPSOCK(
  c("000.000.0.000", "111.111.1.111"), user = "...",
  rshcmd = c("plink", "-ssh", "-pw", "..."),
  rshopts = c("-i", "V:\\vbulavina\\privatekey.ppk"),
  homogeneous = FALSE)
setwd("V:/vbulavina/r/inversion")
a <- source("fun.r")
f <- future({ source("pasos.r") })
l <- future({ source("pasos2.R") })
time_elapsed_parallel <- proc.time() - start
time_elapsed_parallel
The f and l objects are supposed to be computed in parallel, but the master machine is doing all the work, so I'm a bit confused about whether I can do anything about it.
PS: I tried plan() with remote, multiprocess, multisession, and cluster, and nothing worked.
PS2: my local machine runs Windows and I am trying to connect to Kubuntu and Debian machines (the firewall is off on all of them).
Thanks in advance.
Author of future here. First, make sure you can set up the PSOCK cluster, i.e. connect to the two workers over SSH and run Rscript on them. You do this as:
library(future)
workers <- c("000.000.0.000", "111.111.1.111")
cl <- makeClusterPSOCK(workers, user = "...",
  rshcmd = c("plink", "-ssh", "-pw", "..."),
  rshopts = c("-i", "V:/vbulavina/privatekey.ppk"),
  homogeneous = FALSE)
print(cl)
### socket cluster with 2 nodes on hosts '000.000.0.000', '111.111.1.111'
(If the above makeClusterPSOCK() stalls or doesn't work, add argument verbose = TRUE to get more info - feel free to report back here.)
Next, with the PSOCK cluster set up, tell the future system to parallelize over those two workers:
plan(cluster, workers = cl)
Test that futures are actually resolved remotely, e.g.
f <- future(Sys.info()[["nodename"]])
print(value(f))
### [1] "000.000.0.000"
I leave the remaining part, which also needs adjustments, for now - let's make sure to get the workers up and running first.
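(As an extra sanity check, not part of the original answer, you can also query each cluster node directly before involving futures:)
parallel::clusterCall(cl, function() Sys.info()[["nodename"]])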
Continuing, using source() in parallel processing complicates things, especially when the parallelization is done on different machines. For instance, calling source("my_file.R") on another machine requires that the file my_file.R is available on that machine too. Even if it is, it also complicates things when it comes to the automatic identification of variables that need to be exported to the external machine. A safer approach is to incorporate all the code in the main script. Having said all this, you can try to replace:
f <- future({source("pasos.r")})
l <- future({source("pasos2.R")})
with
futureSource <- function(file, envir = parent.frame(), ...) {
  expr <- parse(file)
  future(expr, substitute = FALSE, envir = envir, ...)
}
f <- futureSource("pasos.r")
l <- futureSource("pasos2.R")
As long as pasos.r and pasos2.R don't call source() internally, this could/should work.
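Retrieving the results then works as for any other future (the variable names here are just for illustration):
res_pasos  <- value(f)
res_pasos2 <- value(l)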
BTW, what version of Windows are you on? Because with an up-to-date Windows 10, you have built-in support for SSH and you no longer need to use PuTTY.
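For illustration only (assuming the built-in OpenSSH client and a private key converted from the .ppk file to OpenSSH format; the key path below is hypothetical), the cluster setup would then simplify to:
cl <- makeClusterPSOCK(workers, user = "...",
  rshopts = c("-i", "V:/vbulavina/privatekey_openssh"),
  homogeneous = FALSE)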
UPDATE 2018-07-31: Continue answer regarding using source() in futures.

Processes become zombies while r parallel session still working

I'm trying to query my DB a large number of times and apply some logic to each query's result set.
I'm using Roracle and %dopar% to do so (BTW, my first try was with RJDBC, but I switched to Roracle because I got "Error reading from connection"; now I no longer get that error, but I have the problem described below).
The problem is that most of the processes die (become zombies) during the parallel session. I monitor this with the top command on my Linux system, with the log file that shows the progress of my parallel loop, and by watching my DB during the session. When I start the program, I see that the workers are loaded and the program progresses at a high pace, but then most of them die, and the program becomes slow (or stops working altogether) with no error message.
Here is some example code of what I'm trying to do:
library(doParallel)
library(Roracle)

temp <- function(i) {
  # Because you can't get access to my DB, it's irrelevant to fill in the
  # following rows (where I put three dots) - but I checked my DB connection
  # and it works fine.
  drv <- ...
  host <- ...
  port <- ...
  sid <- ...
  connect.string <- paste(...)
  conn_oracle <- dbConnect(drv, username = ..., password = ..., dbname = connect.string)
  myData <- dbGetQuery(conn_oracle, sprintf("SELECT '%s' FROM dual", i))
  print(i)
  dbDisconnect(conn_oracle)
  myData  # return the query result so that .combine = 'rbind' has something to bind
}

cl <- makeCluster(10, outfile = "par_log.txt")
registerDoParallel(cl)
output <- foreach(i = 1:100000, .inorder = TRUE, .verbose = TRUE, .combine = 'rbind',
                  .packages = c('Roracle'), .export = c('temp')) %dopar% {
  temp(i)
}
stopCluster(cl)
Any help will be appreciated!
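A hedged sketch (not an answer from the thread; the ... placeholders stand for the same connection details elided above): one thing worth trying is opening a single connection per worker instead of one per iteration, which greatly reduces the connection churn on the database:
cl <- makeCluster(10, outfile = "par_log.txt")
registerDoParallel(cl)
clusterEvalQ(cl, {
  library(Roracle)
  drv <- ...
  connect.string <- paste(...)
  conn_oracle <- dbConnect(drv, username = ..., password = ..., dbname = connect.string)
  NULL
})
output <- foreach(i = 1:100000, .combine = 'rbind', .packages = 'Roracle') %dopar% {
  dbGetQuery(conn_oracle, sprintf("SELECT '%s' FROM dual", i))  # reuses the worker's connection
}
clusterEvalQ(cl, dbDisconnect(conn_oracle))
stopCluster(cl)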

curl memory usage in R for multiple files in parLapply loop

I have a project that's downloading ~20 million PDFs multithreaded on an EC2 instance. I'm most proficient in R, and it's a one-off, so my initial assessment was that the time savings from bash scripting wouldn't be enough to justify the time spent on the learning curve. So I decided just to call curl from within an R script. The instance is a c4.8xlarge running RStudio Server over Ubuntu, with 36 cores and 60 GB of memory.
With any method I've tried, it runs up to the maximum RAM fairly quickly. It runs all right, but I'm concerned that swapping memory is slowing it down. curl_download and curl_fetch_disk work much more quickly than the native download.file function (one PDF every 0.05 seconds versus 0.2), but both run up to maximum memory extremely quickly and then seem to populate the directory with empty files. With the native function I was dealing with the memory problem by suppressing output with copious use of try() and invisible(); that doesn't seem to help at all with the curl package.
I have three related questions if anyone could help me with them.
(1) Is my understanding of how memory is utilized correct, i.e. that needlessly swapping memory would cause the script to slow down?
(2) curl_fetch_disk is supposed to write directly to disk; does anyone have any idea why it would be using so much memory?
(3) Is there a good way to do this in R, or am I just better off learning some bash scripting?
Current method with curl_download
getfile_sweep.fun <- function(url, filename) {
  invisible(
    try(
      curl_download(url, destfile = filename, quiet = TRUE)
    )
  )
}
Previous method with native download.file
getfile_sweep.fun <- function(url, filename) {
  invisible(
    try(
      download.file(url, destfile = filename, quiet = TRUE, method = "curl")
    )
  )
}
parLapply loop
len <- nrow(url_sweep.df)
gc.vec <- unlist(lapply(0:35, function(x) x + seq(from = 100, to = len, by = 1000)))
gc.vec <- gc.vec[order(gc.vec)]
start.time <- Sys.time()
ptm <- proc.time()
cl <- makeCluster(detectCores() - 1, type = "FORK")
invisible(
  parLapply(cl, 1:len, function(x) {
    invisible(
      try(
        getfile_sweep.fun(
          url = url_sweep.df[x, "url"],
          filename = url_sweep.df[x, "filename"]
        )
      )
    )
    if (x %in% gc.vec) {
      gc()
    }
  })
)
stopCluster(cl)
Sweep.time <- proc.time() - ptm
Sample of data -
Sample of url_sweep.df:
https://www.dropbox.com/s/anldby6tcxjwazc/url_sweep_sample.rds?dl=0
Sample of existing.filenames:
https://www.dropbox.com/s/0n0phz4h5925qk6/existing_filenames_sample.rds?dl=0
Notes:
1- I do not have such a powerful system available to me, so I cannot reproduce every issue mentioned.
2- All the comments are summarized here.
3- It was stated that the machine received an upgrade (EBS to provisioned SSD with 6000 IOPS/sec); however, the issue persists.
Possible issues:
A- If memory swapping starts to happen, you are no longer working purely with RAM, and I think R would have a harder and harder time finding available contiguous memory space.
B- The workload, and the time it takes to finish it, compared to the number of cores.
C- The parallel setup, and the fork cluster.
Possible solutions and troubleshooting:
B- Limiting memory usage.
C- Limiting the number of cores.
D- If the code runs fine on a smaller machine like a personal desktop, then the issue is with how the parallel usage is set up, or something with the fork cluster.
Things to still try:
A- In general, running jobs in parallel incurs overhead, and the more cores you have, the more you will see its effect. When you pass a lot of jobs that each take very little time (think less than a second), this results in increasing overhead from constantly pushing jobs. Try limiting the cores to 8, just like your desktop, and try your code: does it run fine? If yes, then increase the workload as you increase the cores available to the program (see the sketch after this list).
Start at the lower end of the spectrum for the number of cores and amount of RAM, increase them as you increase the workload, and see where the drop-off happens.
B- I will post a summary about parallelism in R; this might help you catch something that we have missed.
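A rough sketch of point A above (not the OP's code; the worker count and batching are only examples, and it reuses getfile_sweep.fun and url_sweep.df from the question): give each parallel task a batch of URLs instead of a single file, so the per-task overhead is paid far less often.
n_workers <- 8  # start low, as suggested above
chunks <- split(seq_len(nrow(url_sweep.df)),
                cut(seq_len(nrow(url_sweep.df)), n_workers, labels = FALSE))
cl <- makeCluster(n_workers, type = "FORK")
invisible(parLapply(cl, chunks, function(idx) {
  for (x in idx) {
    try(getfile_sweep.fun(url = url_sweep.df[x, "url"],
                          filename = url_sweep.df[x, "filename"]), silent = TRUE)
  }
  NULL
}))
stopCluster(cl)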
What worked:
Limiting the number of cores fixed the issue. As mentioned by the OP, he also made other changes to the code; however, I do not have access to them.
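In terms of the cluster setup above, that change amounts to something like this (the exact core count is only an example):
cl <- makeCluster(8, type = "FORK")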
You can use the async interface instead. Short example below:
cb_done <- function(resp) {
  filename <- basename(urltools::path(resp$url))
  writeBin(resp$content, filename)
}
pool <- curl::new_pool()
for (u in urls) curl::curl_fetch_multi(u, pool = pool, done = cb_done)
curl::multi_run(pool = pool)
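With this approach a single R process multiplexes all the requests, and the cb_done callback writes each response to disk as soon as it completes.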

%dopar% parallel foreach loop fails to exit when called from inside a function (R)

I have written the following code (running in RStudio for Windows) to read a long list of very large text files into memory using a parallel foreach loop:
library(doParallel)  # also attaches foreach and parallel

open.raw.txt <- function() {
  files <- choose.files(caption = "Select .txt files for import")
  cores <- detectCores() - 2
  registerDoParallel(cores)
  data <- foreach(file.temp = files[1:length(files)], .combine = cbind) %dopar%
    as.numeric(read.table(file.temp)[, 4])
  stopImplicitCluster()
  return(data)
}
Unfortunately, however, the function fails to complete, and debugging shows that it gets stuck at the foreach loop stage. Oddly, Windows Task Manager indicates that I am close to full processor capacity (I have 32 cores, and this should use 30 of them) for around 10 seconds, then it drops back to baseline. However, the loop never completes, indicating that it is doing the work and then getting stuck.
Even more bizarrely, if I remove the 'function' bit and just run each step one-by-one as follows:
files <- choose.files(caption = "Select .txt files for import")
cores <- detectCores() - 2
registerDoParallel(cores)
data <- foreach(file.temp = files[1:length(files)], .combine = cbind) %dopar%
  as.numeric(read.table(file.temp)[, 4])
stopImplicitCluster()
Then it all works fine. What is going on?
Update: I ran the function and then left it for a while (around an hour), and finally it completed. I am not quite sure how to interpret this, given that multiple cores are only used for the first 10 seconds or so. Could the issue be with how the tasks are being shared out? Or maybe memory management? I'm new to parallelism, so I'm not sure how to investigate this.
The problem is that you have multiple processes opening and closing the same file. Usually, when a file is opened by one process, it is locked to other processes, which prevents reading the file in parallel.

Resources