MPI checkpoint usage

I'd like to take advantage of the MPI checkpoint feature to save my job. According to the suggestion at https://wiki.mpich.org/mpich/index.php/Checkpointing
I should be able to send SIGUSR1 to mpiexec (in my case, I send it to mpirun) to trigger a checkpoint. However, when I do so, I don't see any file saved in the checkpoint directory that I specified with -ckpoint-prefix.
Here is my mpirun -info output:
HYDRA build details:
Version: 4.1 Update 1
Release Date: 20130522
Process Manager: pmi
Bootstrap servers available: ssh rsh fork slurm srun ll llspawn.stdio lsf blaunch sge qrsh persist jmi
Resource management kernels available: slurm srun ll llspawn.stdio lsf blaunch sge qrsh pbs
Checkpointing libraries available: blcr
Demux engines available: poll select
My command line is:
mpirun -ckpointlib blcr -ckpoint-prefix /home/user/temp/ckpoint -ckpoint-interval 1800 -np 274 $PROGPATH/myapp
The way I send the signal is kill -s USR1 1900, where 1900 is the pid of mpirun. Whenever I send the signal, the program simply ends, though without a crash. Does anybody have experience with MPI checkpointing?

I think I figured it out: I was sending USR1 to mpirun, but I should send it to mpiexec.hydra instead, even though some online articles say mpirun and mpiexec are the same thing.
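A minimal way to target the right process, assuming a single Hydra launcher is running for this job (the pgrep lookup below is illustrative, not from the original post):

# find the mpiexec.hydra process that mpirun spawned and send it SIGUSR1
pgrep -f mpiexec.hydra
kill -s USR1 "$(pgrep -f mpiexec.hydra)"

After the signal, the checkpoint files should then appear under the directory passed to -ckpoint-prefix.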

Related

"no enough slots" error of running Open MPI on databricks cluster with Linux

I am trying to use MPI to run a C application on a Databricks cluster.
I have downloaded Open MPI from
https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.3.tar.gz
and installed it on the Databricks cluster, where it was built under Ubuntu.
Operating system/version: Linux 4.4.0 Ubuntu
Computer hardware: x86_64
Network type: databricks
I am trying to run the following from a Python notebook on Databricks:
%sh
mpirun --allow-run-as-root -np 20 MY_c_Application
MY_c_Application is written in C and was compiled on the Databricks Linux environment.
My Databricks cluster has 21 nodes, one of which is the driver. Each node has 32 cores.
When I run the above command, I get the following error.
Could you please let me know what could cause this, or am I missing something?
Thanks.
There are not enough slots available in the system to satisfy the 20
slots that were requested by the application:
MY_c_application
Either request fewer slots for your application, or make more slots available for use.
A "slot" is the Open MPI term for an allocatable unit where we can launch a process.
The number of slots available are defined by the environment in which Open MPI processes are run:
1. Hostfile, via "slots=N" clauses (N defaults to number of processor cores if not provided)
2. The --host command line parameter, via a ":N" suffix on the hostname (N defaults to 1 if not provided)
3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
4. If none of a hostfile, the --host command line parameter, or an RM is present, Open MPI defaults to the number of processor cores.
In all the above cases, if you want Open MPI to default to the number of hardware threads instead of the number of processor cores, use the --use-hwthread-cpus option.
Alternatively, you can use the --oversubscribe option to ignore the number of available slots when deciding the number of processes to launch.
UPDATE
After adding a hostfile, the problem is gone.
sudo mpirun --allow-run-as-root -np 25 --hostfile my_hostfile ./MY_C_APP
thanks
Sharing the answer on behalf of the original poster:
After adding a hostfile, the problem was resolved.
sudo mpirun --allow-run-as-root -np 25 --hostfile my_hostfile ./MY_C_APP
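For reference, a hostfile along these lines is what exposes the extra slots to Open MPI; the hostnames and slot counts below are illustrative, not the poster's actual cluster:

$ cat my_hostfile
node01 slots=32
node02 slots=32
node03 slots=32

Equivalently, the hosts can be passed on the command line with a ":N" suffix (mpirun --host node01:32,node02:32 ...), or the slot check can be bypassed entirely with --oversubscribe.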

Slurm and Openmpi: An ORTE daemon has unexpectedly failed after launch and before communicating back to mpirun

I have installed Open MPI and Slurm on two nodes, and I want to use Slurm to run MPI jobs. When I use srun to run non-MPI jobs, everything is OK. However, I get errors when I use salloc to run MPI jobs. The environment and code are as follows.
Env:
slurm 17.02.1-2
mpirun (Open MPI) 2.1.0
test.sh
#!/bin/bash
MACHINEFILE="nodes.$SLURM_JOB_ID"
# Generate Machinefile for mpich such that hosts are in the same
# order as if run via srun
#
srun -l /bin/hostname | sort -n | awk '{print $2}' > $MACHINEFILE
source /home/slurm/allreduce/tf/tf-allreduce/bin/activate
mpirun -np $SLURM_NTASKS -machinefile $MACHINEFILE test
rm $MACHINEFILE
command
salloc -N2 -n2 bash test.sh
ERROR
salloc: Granted job allocation 97
--------------------------------------------------------------------------
An ORTE daemon has unexpectedly failed after launch and before
communicating back to mpirun. This could be caused by a number
of factors, including an inability to create a connection back
to mpirun due to a lack of common network interfaces and/or no
route found between them. Please check network connectivity
(including firewalls and network routing requirements).
--------------------------------------------------------------------------
salloc: Relinquishing job allocation 97
Can anyone help? Thanks.

How to list available resources per node in MPI?

I have access to an MPI cluster. It is a pure, clean LAN cluster: no SLURM or anything else except Open MPI (mpicc, mpirun) is installed. I have sudo rights. The accessible and configured MPI nodes are all listed in /etc/hosts. I can compile and run MPI programs, yet how do I get information about the cluster's capabilities: total cores available, processor info, total memory, and currently running tasks?
Generally, I am looking for an analog of sinfo and squeue that would work in an MPI environment.
total cores available:
total memory:
You can try Portable Hardware Locality (hwloc) to see the hardware topology and get information about the total cores and total memory.
Additionally, you can get information about the CPU using lscpu or cat /proc/cpuinfo.
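If the nodes are already reachable through mpirun, one rough way to collect this per node is to launch one process on each host and run the usual Linux tools remotely. A sketch, assuming Open MPI with passwordless SSH and placeholder hostnames:

# one rank per node: print each node's hostname, core count, and memory
mpirun -H node1,node2,node3 --map-by node -np 3 \
    sh -c 'echo "== $(hostname)"; nproc; free -h | grep Mem'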
currently running tasks:
You can use the monitoring software nmon from IBM (it's free).
The -t option of nmon reports the top running processes (like the top command). You can use nmon in online or offline mode.
The following example is from IBM developerWorks:
nmon -fT -s 30 -c 120
This takes one "snapshot" every 30 seconds until it has collected 120 snapshots; then you can examine the output.
If you run it without -f, you will see the results live.

Submitting Open MPI jobs to SGE

I've installed Open MPI, not in /usr/... but in /commun/data/packages/openmpi/; it was compiled with --with-sge.
I've added a new PE in SGE as described in http://docs.oracle.com/cd/E19080-01/n1.grid.eng6/817-5677/6ml49n2c0/index.html
# /commun/data/packages/openmpi/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.3)
# qconf -sq all.q | grep pe_
pe_list make orte
Without SGE, the program runs without any problem, using several processors.
/commun/data/packages/openmpi/bin/orterun -np 20 ./a.out args
Now I want to submit my program to SGE
In the Open MPI FAQ, I read:
# Allocate a SGE interactive job with 4 slots
# from a parallel environment (PE) named 'orte'
shell$ qsh -pe orte 4
but my output is:
qsh -pe orte 4
Your job 84550 ("INTERACTIVE") has been submitted
waiting for interactive job to be scheduled ...
Could not start interactive job.
I've also tried the mpirun command embedded in a script:
$ cat ompi.sh
#!/bin/sh
/commun/data/packages/openmpi/bin/mpirun \
/path/to/a.out args
but it fails
$ cat ompi.sh.e84552
error: executing task of job 84552 failed: execution daemon on host "node02" didn't accept task
--------------------------------------------------------------------------
A daemon (pid 18327) died unexpectedly with status 1 while attempting
to launch so we are aborting.
There may be more information reported by the environment (see above).
This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--------------------------------------------------------------------------
error: executing task of job 84552 failed: execution daemon on host "node01" didn't accept task
--------------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
How can I fix this?
Answer from the Open MPI mailing list: http://www.open-mpi.org/community/lists/users/2013/02/21360.php
In my case setting "job_is_first_task FALSE" and "control_slaves TRUE" solved the problem.
# qconf -mp mpi1
pe_name mpi1
slots 9
user_lists NONE
xuser_lists NONE
start_proc_args /bin/true
stop_proc_args /bin/true
allocation_rule $fill_up
control_slaves TRUE
job_is_first_task FALSE
urgency_slots min
accounting_summary FALSE
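With a PE configured like that, a batch submission would then look roughly like this (the slot count is illustrative); since this Open MPI was built with --with-sge, mpirun inside ompi.sh picks up the SGE allocation and, with no -np given, starts one process per allocated slot:

$ qsub -pe mpi1 8 ompi.sh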

Error when running an MPI job

I'm trying to run an MPI job on a cluster with Torque and Open MPI 1.3.2 installed, and I always get the following error:
"mpirun was unable to launch the specified application as it could not find an executable:
Executable: -p
Node: compute-101-10.local
while attempting to start process rank 0."
I'm using the following script to do the qsub:
#PBS -N mphello
#PBS -l walltime=0:00:30
#PBS -l nodes=compute-101-10+compute-101-15
cd $PBS_O_WORKDIR
mpirun -npersocket 1 -H compute-101-10,compute-101-15 /home/username/mpi_teste/mphello
Any idea why this happens?
What I want is to run 1 process in each node (compute-101-10 and compute-101-15). What am I getting wrong here?
I've already tried several combinations of the mpirun command, but either the program runs on only one node or it gives me the above error...
Thanks in advance!
The -npersocket option did not exist in Open MPI 1.2.
The diagnostic that Open MPI reported,
mpirun was unable to launch the specified application as it could not
find an executable: Executable: -p
is exactly what mpirun in Open MPI 1.2 would say if called with this option.
Running mpirun --version will show which version of Open MPI is the default on the compute nodes.
The problem is that the -npersocket flag is only supported by Open MPI 1.3.2, and the cluster where I'm running my code only has Open MPI 1.2, which doesn't support that flag.
A possible workaround is to use the -loadbalance flag and specify the nodes where I want the code to run with -H node1,node2,node3,..., like this:
mpirun -loadbalance -H node1,node2,...,nodep -np number_of_processes program_name
That way each node will run number_of_processes/p processes, where p is the number of nodes on which the processes run.
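Put together, the submission script would then look roughly like this; a sketch against the older Open MPI on the compute nodes, not a verified run:

#PBS -N mphello
#PBS -l walltime=0:00:30
#PBS -l nodes=compute-101-10+compute-101-15
cd $PBS_O_WORKDIR
# -loadbalance spreads ranks evenly over the hosts given with -H,
# so -np 2 places one process on each of the two nodes
mpirun -loadbalance -H compute-101-10,compute-101-15 -np 2 /home/username/mpi_teste/mphello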
