Using MPI communication with containerized applications

I've successfully executed a program on my cluster using
mpirun -np 1 ./my_program
I've containerized my_program using Singularity and produced my_program.simg. The container also includes the same version of OpenMPI as the host. I'd like to have nodes in my cluster execute the Singularity container, so I tried:
mpirun -np 1 \
singularity run my_program.simg
However, this fails:
[pn111:395108] PMIX ERROR: NOT-FOUND in file ../../../../../../../openmpi-3.1.3/opal/mca/pmix/pmix2x/pmix/src/server/pmix_server_ops.c at line 1865
[pn111:395108] PMIX ERROR: NOT-FOUND in file ../../../../../../../openmpi-3.1.3/opal/mca/pmix/pmix2x/pmix/src/server/pmix_server_ops.c at line 1865
[pn111:395108] PMIX ERROR: NOT-FOUND in file ../../../../../../../openmpi-3.1.3/opal/mca/pmix/pmix2x/pmix/src/server/pmix_server_ops.c at line 1865
I'm assuming that I need to expose something to the container so it knows how to speak to the MPI universe. Is there a socket I need to bind into the container, or an environment variable I need to set, or something like that?
The Singularity documentation suggests there's nothing extra to do.
This GitHub issue may have good advice, which I will try.
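For what it's worth, the usual suspicion can be checked without guessing: OpenMPI's mpirun hands each rank its PMIx rendezvous information through environment variables (names beginning with PMIX_ and OMPI_), and PMIx also opens a rendezvous socket under a session directory (typically in /tmp), so both environment passthrough and a shared /tmp matter. A plain child process inherits the environment, which the snippet below uses as a stand-in for `singularity run` (PMIX_DEMO_VAR is a made-up placeholder, not a real variable mpirun sets):

```shell
# Stand-in for the env passthrough the container runtime must provide;
# PMIX_DEMO_VAR is a placeholder for the real PMIX_*/OMPI_* variables.
export PMIX_DEMO_VAR=from_mpirun
sh -c 'echo "inside child: PMIX_DEMO_VAR=$PMIX_DEMO_VAR"'
```

On the real cluster, `mpirun -np 1 singularity exec my_program.simg env | grep -E '^(PMIX|OMPI)_'` would show whether those variables actually survive into the container.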

Related

mpirun error of oneAPI with Slurm (and PBS) in old cluster

Recently I installed Intel oneAPI, including the C compiler, Fortran compiler and MPI library, and compiled VASP with it.
Before presenting the question, there are some tricks from the installation of VASP that I need to clarify:
glibc 2.14: the cluster is an old machine with glibc 2.12, whereas oneAPI needs 2.14. So I compiled glibc 2.14 and exported the library path: export LD_LIBRARY_PATH="$HOME/mysoft/glibc214/lib:$LD_LIBRARY_PATH" (~ does not expand inside double quotes, hence $HOME)
ld 2.24: the ld version on the cluster is 2.20, while a higher version is needed, so I installed binutils 2.24.
There is one master computer connected to 30 compute nodes in the cluster. A calculation can be run in 3 ways:
When I run the calculation on the master, it is totally OK.
When I log in to a node manually with the rsh command, the calculation on that node is also no problem.
But usually I submit the calculation script from the master (with Slurm or PBS), which then runs the calculation on a node. In that case, I get the following error message:
[mpiexec#node3.alineos.net] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec#node3.alineos.net] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec#node3.alineos.net] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:1062): error waiting for event
[mpiexec#node3.alineos.net] HYD_print_bstrap_setup_error_message (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:1015): error setting up the bootstrap proxies
[mpiexec#node3.alineos.net] Possible reasons:
[mpiexec#node3.alineos.net] 1. Host is unavailable. Please check that all hosts are available.
[mpiexec#node3.alineos.net] 2. Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.
[mpiexec#node3.alineos.net] 3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
[mpiexec#node3.alineos.net] 4. pbs bootstrap cannot launch processes on remote host. You may try using -bootstrap option to select alternative launcher.
I only encounter this error with oneAPI-compiled binaries, not with those compiled by Intel® Parallel Studio XE. Do you have any idea what causes this error? Your response will be highly appreciated.
Best,
Léon
Could it be a permissions error with the Slurm agent not having the correct permissions or library path?
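One way to test that hypothesis (a sketch, assuming Slurm's sbatch is in use; substitute qsub for PBS): capture the environment and glibc version a batch job actually sees, and diff it against the login shell's.

```shell
# Record the login-shell environment for comparison.
env | sort > login_env.txt
echo "captured login-shell environment in login_env.txt"
# Then, on the cluster (not run here):
#   sbatch -o node_env.txt --wrap='env | sort; ldd --version | head -1'
#   diff login_env.txt node_env.txt | grep -iE 'path|mpi'
```

A missing LD_LIBRARY_PATH entry for the hand-built glibc 2.14, or an older system glibc reported by ldd on the node, would point at exactly the kind of propagation problem suggested above.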

“unable to find the specified executable file” when trying to use mpirun on julia

I am trying to run my Julia code on multiple nodes of a cluster, which uses Moab and Torque as the scheduler and resource manager.
In an interactive session where I requested 3 nodes, I load the julia and openmpi modules and run:
mpirun -np 72 --hostfile $PBS_NODEFILE -display-allocation julia --project=. "./estimation/test.jl"
The mpirun does successfully recognize my 3 nodes since it displays:
====================== ALLOCATED NODES ======================
comp-bc-0383: slots=24 max_slots=0 slots_inuse=0 state=UP
comp-bc-0378: slots=24 max_slots=0 slots_inuse=0 state=UNKNOWN
comp-bc-0372: slots=24 max_slots=0 slots_inuse=0 state=UNKNOWN
=================================================================
However, it then returns an error message:
--------------------------------------------------------------------------
mpirun was unable to find the specified executable file, and therefore
did not launch the job. This error was first reported for process
rank 48; it may have occurred for other processes as well.
NOTE: A common cause for this error is misspelling a mpirun command
line parameter option (remember that mpirun interprets the first
unrecognized command line token as the executable).
Node: comp-bc-0372
Executable: /opt/aci/sw/julia/1.5.3_gcc-4.8.5-ips/bin/julia
--------------------------------------------------------------------------
What could be the cause of this? Is it that the other nodes have trouble accessing julia? (I suspect so, because the code runs as long as -np X with X <= 24, which is the number of slots on one node; as soon as X >= 25, it fails to run.)
Here is a good manual on how to work with modules and mpirun: UsingMPIstacksWithModules
To sum up what is written in the manual:
It should be highlighted that modules are nothing more than a structured way to manage your environment variables; so whatever hurdles there are about modules apply equally well to environment variables.
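That equivalence can be made concrete with a minimal sketch (the install path below is hypothetical): loading a module essentially prepends the package's directories to a handful of environment variables.

```shell
# Roughly what `module load julia` does under the hood (hypothetical paths).
export PATH="/opt/sw/julia/bin:$PATH"
export LD_LIBRARY_PATH="/opt/sw/julia/lib:${LD_LIBRARY_PATH:-}"
echo "$PATH" | cut -d: -f1   # first PATH entry is now the module's bin dir
```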
What you need is to export the environment variables in your mpirun command with -x PATH -x LD_LIBRARY_PATH. To see if this worked you can then run
mpirun -np 72 --hostfile $PBS_NODEFILE -display-allocation -x PATH -x LD_LIBRARY_PATH which julia
Also, you should consider giving the whole path of the file you want to run, i.e. /path/to/estimation/test.jl instead of ./estimation/test.jl, since your working directory is not the same on every node. (In general it is always safer to use whole paths.)
By using whole paths, you should also be able to use /path/to/julia (the output of which julia) instead of just julia; that way you should not need to export the environment variables.
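The point about whole paths can be seen without a cluster: a relative path names a different file from each working directory, and remote ranks need not start in the directory you launched from. A toy demonstration (the /tmp directories are made up for the example):

```shell
# Simulate the launch node's home and a remote rank's working directory.
mkdir -p /tmp/demo_home/estimation /tmp/demo_scratch
echo 'println("hello")' > /tmp/demo_home/estimation/test.jl
cd /tmp/demo_scratch
# The relative path fails from the "remote" working directory...
ls ./estimation/test.jl 2>/dev/null || echo "relative path: not found"
# ...while the absolute path resolves from anywhere.
ls /tmp/demo_home/estimation/test.jl >/dev/null && echo "absolute path: found"
```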

ORTE problem when running MPI on multiple computing nodes

I am trying to run a simple MPI example on a cluster with multiple computing nodes. For now I am just using two test nodes, gpu8 and gpu12.
What I've done includes:
gpu8 and gpu12 have the correct MPI environment (OpenMPI 4.0.1). I can successfully run the MPI example on a single node.
Passwordless login between gpu8 and gpu12 has been set up. Each can ssh to the other node with no issues.
There is a hostfile on each node containing
gpu8
gpu12
The executable files are under the same path.
echo $PATH (on both nodes) gives
/home/user_1/share/local/openmpi-4.0.1/bin:xxxxxx
echo $LD_LIBRARY_PATH (on both nodes) gives
/home/t716/shshi/share/local/openmpi-4.0.1/lib:
The ORTE problem:
I am running mpirun -np 2 --hostfile /home/user_2/hosts /home/user_2/mpi-hello-world/mpi_hello_world. The error output is:
bash: orted: command not found
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
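`bash: orted: command not found` points at the first bullet: the shell spawned on the remote node is non-interactive, and non-interactive shells often source different rc files and so get a different PATH than an interactive login, leaving OpenMPI's bin directory off it. The contrast can be seen locally; the fix lines at the end are the ones the error text itself suggests, with the prefix taken from the question's $PATH:

```shell
# Interactive-style login shell vs. the plain non-interactive shell that
# remote MPI launches get; their rc files, and hence PATH, can differ.
bash -lc 'echo "login shell PATH entries: $(echo $PATH | tr : "\n" | wc -l)"'
bash -c  'echo "plain shell PATH entries: $(echo $PATH | tr : "\n" | wc -l)"'
# Common fixes (as listed in the error message above):
#   mpirun --prefix /home/user_1/share/local/openmpi-4.0.1 ...
#   or rebuild OpenMPI with --enable-orterun-prefix-by-default
```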

dbm error only when submitting python job in Slurm

I am running a Python script on a remote machine. When I run it on the head node of the cluster, it executes with no problem.
But when I use Slurm workload manager:
sbatch --wrap="python mycode.py" -N 1 --cpus-per-task=8 -o mycode.o
Then the code fails with the following error (only showing the end of the error):
.
.
line 91, in open
"available".format(result))
dbm.error: db type is dbm.gnu, but the module is not available
I'm just confused how a code could run fine without submitting through Slurm, but fail when I do use Slurm.
The compute (remote) nodes probably don't have the same software installed as the head node, or you may need to do some configuration steps before running. Check with the administrator of the cluster.
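A quick way to act on that advice (a diagnostic sketch, assuming python3 is on PATH on both node types): probe which dbm backends each node's Python can import. Run it once on the head node and once via sbatch --wrap on a compute node; dbm.gnu showing as missing only on the compute node would explain the error above.

```shell
# Probe the dbm backends available to this Python installation.
python3 - <<'EOF'
import importlib
for name in ("dbm.gnu", "dbm.ndbm", "dbm.dumb"):
    try:
        importlib.import_module(name)
        print(name, "available")
    except ImportError:
        print(name, "MISSING")
EOF
```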

Error in locking authority file for parallelisation with qmake

I work with Rscript on a cluster. qmake (a specialized version of GNU make for the cluster) is used to parallelize jobs across several nodes. But Rscript seems to need to write an .Xauthority file, and this causes an error when all the nodes work at the same time. As a result, my makefile-based pipeline stops after the first group of parallelized tasks and doesn't start the next group, although the results of the first group are fine.
I'm also invoking /usr/bin/xvfb-run (https://en.wikipedia.org/wiki/Xvfb) when running Rscript.
I've already changed the ssh config (ForwardX11 yes) but the problem persists.
I also tried changing the name of the Xauthority file for each job, but it didn't work (option -f in Rscript).
Here is the error which appears at the beginning of the process:
/usr/bin/xauth: error in locking authority file .Xauthority
Here is the error which appears just before the process stops:
/usr/bin/xvfb-run: line 171: kill: (44402) - No such process
qmake: *** [Data/Median/median_B00FTXC.tmp] Error 1
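Since the root cause is several jobs locking the same ~/.Xauthority, one commonly suggested workaround (a sketch; whether it resolves the qmake failure specifically is untested here) is to give each task a private authority file via the XAUTHORITY environment variable, which xauth and X clients consult:

```shell
# Give this job its own authority file so parallel tasks don't contend
# for the lock on ~/.Xauthority.
export XAUTHORITY="$(mktemp /tmp/xauth.XXXXXX)"
echo "using $XAUTHORITY"
# Then run the usual command (my_script.R is a placeholder):
#   /usr/bin/xvfb-run -a Rscript my_script.R
```

xvfb-run's -a flag picks a free display number per invocation, which also avoids clashes when several jobs land on the same node.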
