Using MPI on Slurm, is there a way to send messages between separate jobs?

I'm new to using MPI (mpi4py) and Slurm. I need to run about 50000 tasks, so to obey the administrator-set limit of about 1000, I've been running them like this:
sbrunner.sh:
#!/bin/bash
for i in {1..50}
do
sbatch m2slurm.sh $i
sleep 0.1
done
m2slurm.sh:
#!/bin/bash
#SBATCH --job-name=mpi
#SBATCH --output=mpi_50000.out
#SBATCH --time=0:10:00
#SBATCH --ntasks=1000
srun --mpi=pmi2 --output=mpi_50k${1}.out python par.py data_50000.pkl ${1} > ${1}py.out 2> ${1}.err
par.py (irrelevant stuff omitted):
import sys
from mpi4py import MPI

offset = (int(sys.argv[2]) - 1) * 1000
comm = MPI.COMM_WORLD
k = comm.Get_rank()
d = data[k + offset]
# ... do something with d ...
allresults = comm.gather(result, root=0)
comm.Barrier()
if k == 0:
    print(allresults)
Is this a sensible way to get around the limit of 1000 tasks?
Is there a better way to consolidate results? I now have 50 files I have to concatenate manually. Is there some comm_world that can exist between different jobs?

I think you need to make your application divide the work among 1000 tasks (MPI ranks) and consolidate the results afterwards with MPI collective calls, e.g. MPI_Reduce or MPI_Allreduce.
Trying to work around the limit won't help you, as the jobs you started will simply be queued one after another.
Job arrays will give behavior similar to what you did in the batch file you provided, so your application must still be able to process all data items given only N tasks (MPI ranks).
There is no need to poll to make sure all other jobs have finished; take a look at the Slurm job dependency parameter:
https://hpc.nih.gov/docs/job_dependencies.html
Edit:
You can use a job dependency to create a new job that runs after all the other jobs finish; this job collects the results and merges them into one big file. I still believe you are overthinking it; the obvious solution is to make rank 0 (the master) collect all results and save them to disk.
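For example (a sketch, not from the original answer: merge.sh is a hypothetical post-processing script that concatenates the outputs; --parsable and afterok are standard sbatch options), sbrunner.sh could record the job IDs and chain the merge job after all of them:
#!/bin/bash
ids=""
for i in {1..50}
do
    jid=$(sbatch --parsable m2slurm.sh $i)   # --parsable prints just the job ID
    ids="${ids}:${jid}"
done
# run merge.sh only after all 50 jobs have completed successfully
sbatch --dependency=afterok${ids} merge.sh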

This looks like a perfect candidate for job arrays. Each job in an array is identical with the exception of the $SLURM_ARRAY_TASK_ID environment variable, which you can use in the same way that you're using the command-line argument.
(You'll need to check that MaxArraySize is set high enough by your sysadmin. Check the output of scontrol show config | grep MaxArraySize)
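For instance, the two scripts from the question could collapse into a single submission (a sketch adapted from the question's own scripts; the array syntax is standard Slurm, but site defaults may differ):
#!/bin/bash
#SBATCH --job-name=mpi
#SBATCH --array=1-50
#SBATCH --time=0:10:00
#SBATCH --ntasks=1000
srun --mpi=pmi2 --output=mpi_50k${SLURM_ARRAY_TASK_ID}.out python par.py data_50000.pkl ${SLURM_ARRAY_TASK_ID}
A single sbatch of this script then replaces the whole sbrunner.sh loop.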

What do you mean by 50000 tasks?
Do you mean one MPI job with 50000 MPI tasks?
Or do you mean 50000 independent serial programs?
Or do you mean any combination where (number of MPI jobs) * (number of tasks per job) = 50000?
If 1), well, consult your system administrator. Of course you can allocate 50000 slots across multiple Slurm jobs, manually wait until they are all running at the same time, and then mpirun your app outside of Slurm; but this is both ugly and inefficient, and you might get in trouble if it is seen as an attempt to circumvent the system limits.
If 2) or 3), then a job array is a good candidate. And if I understand your app correctly, you will need an extra post-processing step to concatenate/merge all your outputs into a single file.
If you go with 3), you will need to find the sweet spot: generally speaking, 50000 serial programs are more efficient than fifty 1000-task MPI jobs or one 50000-task MPI program, but merging 50000 files is less efficient than merging 50 files (or not merging anything at all).
If you cannot do the post-processing on a frontend node, you can use a job dependency to start it after all the computations have completed.
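A sketch of that last pattern combined with a job array (merge.sh is again a hypothetical post-processing script; a dependency on an array job's ID waits for every element):
jid=$(sbatch --parsable --array=1-50 m2slurm.sh)
sbatch --dependency=afterok:${jid} merge.sh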

Related

Any way to use future.apply parallelization without PBS/TORQUE killing the job?

I frequently use the packages future.apply and future to parallelize tasks in R. This works perfectly well on my local machines. However, if I try to use them on a computer cluster managed by PBS/TORQUE, the job gets killed for violating the resources policy. After reviewing the processes, I noticed that resources_used.mem and resources_used.vmem as reported by qstat are ridiculously high. Is there any way to fix this?
Note: I already know and use the packages batchtools and future.batchtools, but they produce jobs to launch to the queues, which requires me to organize the scripts in a particular way, so I would like to avoid that for this specific example.
I have prepared the following minimal example. As you can see, the code simply allocates a vector with 10^9 elements and then performs, in parallel using future_lapply, some operations on it (here just a trivial check).
library(future.apply)
plan(multicore, workers = 12)
sample <- rnorm(n = 10^9, mean = 10, sd = 10)
print(object.size(sample)/(1024*1024)) # fills ~ 8 gb of RAM
options(future.globals.maxSize=+Inf)
options(future.gc = TRUE)
future_lapply(future.seed = TRUE,
              X = 1:12, function(idx){
  # just do some stuff
  for(i in sample){
    if (i > 0) dummy <- 1
  }
  return(dummy)
})
If run on my local computer (no PBS/TORQUE involved), this works well (meaning no problem with the RAM), assuming 32 GB of RAM are available. However, if run through TORQUE/PBS on a machine that has enough resources, like this:
qsub -I -l mem=60Gb -l nodes=1:ppn=12 -l walltime=72:00:00
the job gets automatically killed for violating the resources policy. I am pretty sure that this has to do with PBS/TORQUE not measuring the resources used correctly, since if I check
qstat -f JOBNAME | grep used
I get:
resources_used.cput = 00:05:29
resources_used.mem = 102597484kb
resources_used.vmem = 213467760kb
resources_used.walltime = 00:02:06
This says the process is using ~102 GB of mem and ~213 GB of vmem. It is not: you can monitor the node with e.g. htop and see that it uses the expected amount of RAM, yet TORQUE/PBS measures much more.

Julia Distributed, redundant iterations appearing

I ran
mpiexec -n $nprocs julia --project myfile.jl
on a cluster, where myfile.jl has the following form
using Distributed; using Dates; using JLD2; using LaTeXStrings
@everywhere begin
using SharedArrays; using QuantumOptics; using LinearAlgebra; using Plots; using Statistics; using DifferentialEquations; using StaticArrays
#Defining some other functions and SharedArrays to be used later e.g.
MySharedArray=SharedArray{SVector{Nt,Float64}}(Np,Np)
end
@sync @distributed for pp in 1:Np^2
for jj in 1:Nj
#do some stuff with local variables
for tt in 1:Nt
#do some stuff with local variables
end
end
MySharedArray[pp]=... #using linear indexing
println("$pp finished")
end
timestr=Dates.format(Dates.now(), "yyyy-mm-dd-HH:MM:SS")
filename="MyName"*timestr
#save filename*".jld2"
#later on, some other small stuff like making and saving a figure. (This does give an error "no method matching heatmap_edges(::Surface{Array{Float64,2}}, ::Symbol)" but I think that this is a technical thing about Plots so not very related to the bigger issue here)
However, when looking at the output, there are a few issues that make me conclude that something is wrong
The "$pp finished" output is repeated many times for each value of pp. It seems that this amount is actually equal to 32=$nprocs
Despite the code not being finished, "MyName" files are generated. It should be one, but I get a dozen of them with different timestr component
EDIT: two more things that I can add
the output of the different "MyName" files is not identical, but this is expected since random numbers are used in the inner loops. There are 28 of them, a number that I don't easily recognize except that its again close to the 32 $nprocs
earlier, I wrote that the walltime was exceeded, but this turns out not to be true. The .o file ends with "BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES ... EXIT CODE :9", pretty shortly after the last output file.
$nprocs is obtained in the pbs script through
#PBS -l select=1:ncpus=32:mpiprocs=32
nprocs=$(cat $PBS_NODEFILE | wc -l)
(Running mpiexec -n $nprocs launches $nprocs independent Julia processes, each of which runs the entire script on its own; that explains both the repeated "$pp finished" lines and the multiple output files.) As pointed out by adamslc on the Julia discourse, the proper way to use Julia on a cluster is to either
start a single session with one core from the job script and add more workers with addprocs() from within the Julia script itself, or
use more specialized Julia packages.
https://discourse.julialang.org/t/julia-distributed-redundant-iterations-appearing/57682/3
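A sketch of the first option (illustrative, not from the discourse thread; it assumes a single 32-core node):
#PBS -l select=1:ncpus=32
# start ONE Julia process; inside myfile.jl, first call
#   using Distributed; addprocs(31)
# so the script body executes exactly once and spawns the 31 workers itself
julia --project myfile.jl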

Opening a new instance of R and sourcing a script within that instance

Background/Motivation:
I am running a bioinformatics pipeline that, if executed linearly from beginning to end, takes several days to finish. Fortunately, some of the tasks don't depend upon each other, so they can be performed individually. For example, Tasks 2, 3, and 4 all depend upon the output from Task 1, but do not need information from each other. Task 5 uses the output of 2, 3, and 4 as input.
I'm trying to write a script that will open new instances of R for each of the three tasks and run them simultaneously. Once all three are complete I can continue with the remaining pipeline.
What I've done in the past, for more linear workflows, is have one "master" script that sources (source()) each task's subscript in turn.
I've scoured SO and google and haven't been able to find a solution for this particular problem. Hopefully you guys can help.
From within R, you can use system() to invoke commands in a terminal, and the macOS open command to open an application or file. For example, the following will open a new terminal instance:
system("open -a Terminal .",wait=FALSE)
Similarly, I can start a new R session by using
system("open -a r .")
What I can't figure out for the life of me is how to set the "input" argument so that it sources one of my scripts. For example, I would expect the following to open a new terminal instance, call r within the new instance, and then source the script.
system("open -a Terminal .",wait=FALSE,input=paste0("r; source(\"/path/to/script/M_01-A.R\",verbose=TRUE,max.deparse.length=Inf)"))
Answering my own question in the event someone else is interested down the road.
After a couple of days of working on this, I think the best way to carry out this workflow is not to limit myself to working just in R. Writing a bash script offers more flexibility and is probably a more direct solution. The following example was suggested to me on another website.
#!/bin/bash
# Run task 1
Rscript Task1.R
# now run the three jobs that use Task1's output
# we can fork these using '&' to run in the background in parallel
Rscript Task2.R &
Rscript Task3.R &
Rscript Task4.R &
# wait until background processes have finished
wait %1 %2 %3
Rscript Task5.R
You might be interested in the future package (I'm the author). It allows you to write your code as:
library("future")
v1 %<-% task1(args_1)
v2 %<-% task2(v1, args_2)
v3 %<-% task3(v1, args_3)
v4 %<-% task4(v1, args_4)
v5 %<-% task5(v2, v3, v4, args_5)
Each of those v %<-% expr statements creates a future based on the R expression expr (and all of its dependencies) and assigns it to a promise v. Only when v is used will it block and wait for the value of v to become available.
How and where these futures are resolved is decided by the user of the above code. For instance, by specifying:
library("future")
plan(multiprocess)
at the top, then the futures (= the different tasks) are resolved in parallel on your local machine. If you use,
plan(cluster, workers = c("n1", "n3", "n3", "n5"))
they're resolved on those four machines (where n3 accepts two concurrent jobs).
This works on all operating systems (including Windows).
If you have access to an HPC cluster with schedulers such as Slurm, SGE, or TORQUE/PBS, you can use the future.BatchJobs package, e.g.
plan(future.BatchJobs::batchjobs_torque)
PS. One reason for creating future was to do large-scale bioinformatics in a parallel / distributed fashion.

MRJob - Limit Number of Task Attempts

In MRJob, how do you limit the number of task attempts (if a task fails)?
I have long running tasks (have increased the timeout, accordingly), but I want the job to end after 2 failed attempts at the same task, rather than 4-5.
I couldn't find anything like this in the docs:
http://mrjob.readthedocs.org/en/latest//en/latest/guides/configs-reference.html
For map jobs, you can set mapreduce.map.maxattempts in Hadoop 2. For reduce jobs, set mapreduce.reduce.maxattempts (source).
The equivalents in Hadoop 1 are: mapred.map.max.attempts and mapred.reduce.max.attempts.
If you are using a conf file in MRJob, you can set this as:
runners:
  emr:
    jobconf:
      mapreduce.map.maxattempts: 2
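The same setting should also work as a command-line switch via mrjob's --jobconf option (a sketch; my_job.py and input.txt are placeholders):
python my_job.py -r emr --jobconf mapreduce.map.maxattempts=2 input.txt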

Parallelizing on a supercomputer and then combining the parallel results (R)

I've got access to a big, powerful cluster. I'm a halfway decent R programmer, but totally new to shell commands (and terminal commands in general, beyond the basic things one needs to use Ubuntu).
I want to use this cluster to run a bunch of parallel processes in R, and then I want to combine them. Specifically, I have a problem analogous to:
my.function <- function(data, otherdata, N){
  mod = lm(y~x, data=data)
  a = predict(mod, newdata = otherdata, se.fit=TRUE)
  b = rnorm(N, a$fit, a$se.fit)
  b
}
r1 = my.function(data, otherdata, N)
r2 = my.function(data, otherdata, N)
r3 = my.function(data, otherdata, N)
r4 = my.function(data, otherdata, N)
...
r1000 = my.function(data, otherdata, N)
results = list(r1, r2, r3, r4, ... r1000)
The above is just a dumb example, but basically I want to do something 1000 times in parallel, and then do something with all of the results from the 1000 processes.
How do I submit 1000 jobs simultaneously to the cluster, and then combine all the results, like in the last line of the code?
Any recommendations for well-written manuals/references for me to go RTFM with would be welcome as well. Unfortunately, the documents that I've found aren't particularly intelligible.
Thanks in advance!
You can combine plyr with the doMC package (a parallel backend for the foreach package) as follows:
require(plyr)
require(doMC)
registerDoMC(20) # for 20 processors
llply(1:1000, function(idx) {
  out <- my.function(.)
}, .parallel = TRUE)
Edit: If you're talking about submitting simultaneous jobs, then don't you have an LSF license? You can then use bsub to submit as many jobs as you need, and it also takes care of load balancing and whatnot!
Edit 2: A small note on load-balancing (example using LSF's bsub):
What you mention is similar to what I wrote here => LSF. You can submit jobs in batches. For example, using LSF you can submit a job to the cluster with bsub like so:
bsub -m <nodes> -q <queue> -n <processors> -o <output.log> \
     -e <error.log> Rscript myscript.R
This will place you in the queue and allocate the requested number of processors; if and when resources are available, your job will start running. You can pause, restart, and suspend your jobs, and much more. qsub is similar in concept. The learning curve may be a bit steep, but it is worth it.
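For the 1000-simultaneous-jobs part specifically, LSF also has job arrays, which avoid 1000 separate bsub calls (a sketch; myscript.R would read the $LSB_JOBINDEX environment variable to pick its chunk of work):
# one submission creates 1000 array elements; %I in the log name expands to the index
bsub -J "myjob[1-1000]" -o out.%I.log Rscript myscript.R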
We wrote a survey paper, State of the Art in Parallel Computing with R, in the Journal of Statistical Software (which is an open journal). You may find it useful as an introduction.
The Message Passing Interface (MPI) does what you want, and it is easy to use. After compiling, you need to run:
mpirun -np [no.of.process] [executable]
You select where to run it with a simple text file with hostname/IP fields, like:
node01 192.168.0.1
node02 192.168.0.2
node03 192.168.0.3
Here are more examples of MPI.
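The host file is then passed to the launcher; with Open MPI, for example (MPICH uses -machinefile instead; file and program names are placeholders):
mpirun -np 12 --hostfile hosts.txt ./my_executable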
