Internalize creation of MPI processes - mpi

Is there a way to internalize the creation of MPI processes? Instead of specifying the number of processes on the command line ("mpiexec -np 2 ./[PROG]"), I would like the number of processes to be specified internally.
Cheers

Yes. You're looking for MPI_Comm_spawn() from MPI-2, which launches a (possibly different) program with a number of processes that can be specified at runtime, and creates a new intercommunicator which you can use in place of MPI_COMM_WORLD to communicate amongst both the original and the new processes.
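A minimal sketch of that pattern (my own illustration, not from the answer): the program is launched as a single process, decides internally how many workers it wants, spawns copies of itself, and merges parent and children into one intracommunicator. The executable name "./prog" and the worker count are placeholders.

#include <mpi.h>
#include <stdio.h>

#define NUM_WORKERS 2   /* chosen inside the program instead of via mpiexec -np */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Original process: spawn NUM_WORKERS copies of this executable. */
        MPI_Comm intercomm, everyone;
        MPI_Comm_spawn("./prog", MPI_ARGV_NULL, NUM_WORKERS, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
        /* Merge parent and children into a single intracommunicator that
           plays the role MPI_COMM_WORLD would otherwise play. */
        MPI_Intercomm_merge(intercomm, 0, &everyone);
        int size;
        MPI_Comm_size(everyone, &size);
        printf("merged communicator has %d processes\n", size);
    } else {
        /* Spawned worker: merge on this side as well, then do the work. */
        MPI_Comm everyone;
        MPI_Intercomm_merge(parent, 1, &everyone);
        /* ... worker computation using 'everyone' ... */
    }

    MPI_Finalize();
    return 0;
}

Run it as "mpiexec -np 1 ./prog"; the two extra processes are created from inside the program.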

Related

Number of workers and processes in Julia

In the distributed scheme, Julia distinguishes between workers and processes; as far as I understand, the number of processes is the number of workers plus one. If my machine has only 10 cores, should I use 10 workers (julia -p 10 file.jl) or reserve one for the first process? Is the first process lightweight in general? Should I base my computations on the number of workers or on the number of processes? Thanks.
[Adding to what @philipsgabler said.]
Assuming you will be using a @distributed for loop or pmap, process 1 will be responsible for controlling the computation and hence will mostly just be waiting for I/O from the workers. In some scenarios process 1 will be responsible for aggregating the data delivered by the workers. Hence you can reasonably assume that process 1 is lightweight.
On the other hand, if you use @spawnat, then you have strict control over what is going on in the Julia cluster. Even then, I would still recommend using process 1 as the process controlling the cluster rather than putting a big workload on it.
Quoting the docs:
Each process has an associated identifier. The process providing the interactive Julia prompt always has an id equal to 1. The processes used by default for parallel operations are referred to as "workers". When there is only one process, process 1 is considered a worker. Otherwise, workers are considered to be all processes other than process 1. As a result, adding 2 or more processes is required to gain benefits from parallel processing methods like pmap. Adding a single process is beneficial if you just wish to do other things in the main process while a long computation is running on the worker.
The command line option -p auto "launches as many workers as the number of local CPU threads (logical cores)". These workers are in addition to process 1: on my machine, workers() with -p auto returns the eight ids 2 through 9.
All processes are in principle equal, but definitions are only visible on process 1 unless they are explicitly made available on the workers, e.g. with @everywhere (so, if anything, the workers are the more lightweight ones).

Multiple processes launched by mpirun at different times within a single Slurm allocation

I am trying to launch multiple mpirun commands from within a single Slurm allocation, launching at different times and with different core counts.
At the moment the script launches something like:
mpirun -np 2 python parallel_script.py input1
mpirun -np 3 python parallel_script.py input2
Doing this naively leads to all the processes being launched on the same (0-n) cores. This can be fixed for a single node by turning off process binding, but it seems there is no way to extend this to multiple node jobs.
Is there any easy mechanism to tell it to keep track of which cores are in use? Alternatively, is there a way to specify an offset to the MPI mapping, so that a given process actually launches on logical core X rather than logical core 0?
I'm trying to avoid relying too heavily on the scheduler implementation here, so that this remains useful on more than one system.

MPI Spawn: Not enough slots available / All nodes which are allocated for this job are already filled

I am trying to use MPI's Spawn functionality to run subprocesses that also use MPI. I am using MPI 2.x and dynamic process management.
I have a master process (maybe I should say "master program") that runs in python (via mpi4py) that uses MPI to communicate between cores. This master process/program runs on 16 cores, and it will also make MPI_Comm_spawn_multiple calls to C and Fortran programs (which also use MPI). While the C and Fortran processes run, the master python program waits until they are finished.
A little more explicitly, the master python program does two primary things:
1. Uses MPI to do preprocessing for the spawning in step (2). MPI_Barrier is called after this preprocessing to ensure that all ranks have finished their preprocessing before step (2) begins. Note that the preprocessing is distributed across all 16 cores, and at the end of the preprocessing the resulting information is passed back to the root rank (e.g. rank == 0).
2. After the preprocessing, the root rank spawns 4 workers, each of which uses 4 cores (i.e. all 16 cores are needed to run all 4 processes at the same time). This is done via MPI_Comm_spawn_multiple, and these workers use MPI to communicate within their 4 cores. In the master python program, only rank == 0 spawns the C and Fortran subprocesses, and an MPI_Barrier is called after the spawn on all ranks so that all the rank != 0 cores wait until the spawned processes finish before they continue execution. (A C sketch of such a spawn call follows this list.)
3. Repeat (1) and (2) many, many times in a for loop.
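For illustration only (the poster's master program is Python/mpi4py; this is the equivalent call written in C, with the placeholder executable names "./c_worker" and "./fortran_worker"): rank 0 spawns 4 worker jobs of 4 processes each, then all parent ranks wait at a barrier.

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        char *commands[4] = {"./c_worker", "./c_worker",
                             "./fortran_worker", "./fortran_worker"};
        int   maxprocs[4] = {4, 4, 4, 4};          /* 4 processes per worker job */
        MPI_Info infos[4] = {MPI_INFO_NULL, MPI_INFO_NULL,
                             MPI_INFO_NULL, MPI_INFO_NULL};
        MPI_Comm intercomm;
        /* Spawn from rank 0 only, so the call is collective over MPI_COMM_SELF. */
        MPI_Comm_spawn_multiple(4, commands, MPI_ARGVS_NULL, maxprocs, infos,
                                0, MPI_COMM_SELF, &intercomm,
                                MPI_ERRCODES_IGNORE);
        /* ... communicate with the workers over 'intercomm' ... */
    }

    /* All parent ranks wait here, as described in step (2). */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}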
The issue I am having is that if I use mpiexec -np 16 to start the master python program, all the cores are being taken up by the master program and I get the error:
All nodes which are allocated for this job are already filled.
when the program hits the MPI_Comm_spawn_multiple line.
If I use any other value less than 16 for -np, then only some of the cores are allocated and some are available (but I still need all 16), so I get a similar error:
There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application:
/home/username/anaconda/envs/myenvironment/bin/python
Either request fewer slots for your application, or make more slots available
for use.
So it seems like even though I am going to run MPI_Barrier in step (2) to block until the spawned processes finish, MPI still thinks those cores are being used and won't allocate another process on top of them. Is there a way to fix this?
(If the answer is hostfiles, could you please explain them for me? I am not understanding the full idea and how they might be useful here.)
I am the poster of this question. I found out that I can pass -oversubscribe as an argument to mpiexec to avoid these errors, but as Zulan mentioned in his comments, this could be a poor decision.
In addition, I don't know whether the cores are being subscribed the way I want them to be. For example, maybe all 4 C/Fortran processes are being run on the same 4 cores. I don't know how to tell.
Most MPIs have a parameter -usize 123 for the mpiexec program that indicates the size of the "universe", which can be larger than the world communicator. In that case you can spawn extra processes up to the size of the universe. You can query the size of the universe:
int universe_size, *universe_size_attr, uflag;
MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
                  &universe_size_attr, &uflag);
universe_size = *universe_size_attr;
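A compact sketch (my own, building on the snippet above) of how the universe size might drive the number of spawned workers; "worker_program" is a placeholder executable name:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int *usize_attr, uflag;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &usize_attr, &uflag);

    if (uflag && *usize_attr > world_size) {
        int extra = *usize_attr - world_size;   /* free slots in the universe */
        MPI_Comm children;
        MPI_Comm_spawn("worker_program", MPI_ARGV_NULL, extra, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);
        /* ... talk to the children over the 'children' intercommunicator ... */
    } else if (!uflag) {
        printf("This MPI implementation does not set MPI_UNIVERSE_SIZE.\n");
    }

    MPI_Finalize();
    return 0;
}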

Single node, multiple MPI tasks

I need to debug an MPI code for which I only have access to a single node/machine. The problem is that the bug I am looking for only arises when running on more than one node; when running, for example, two MPI tasks on the same node, everything goes fine. I assume that my MPI implementation (MVAPICH2) cleverly treats tasks running on the same node by, for example, replacing network communication with IPC strategies or even memcpy.
So my question is: how can I run two MPI tasks on a single node but make MPI treat them as tasks on different nodes? Is that possible?
You can disable the MVAPICH2 shared memory device by setting the MV2_USE_SHARED_MEM environment variable to 0:
mpiexec ... -env MV2_USE_SHARED_MEM 0 ... ./executable
Make sure that your MVAPICH2 was built with the TCP/IP device, otherwise your ranks won't be able to communicate with shared memory support turned off.

Group MPI tasks by host

I want to easily perform collective communications independently on each machine of my cluster. Let's say I have 4 machines with 8 cores each; my MPI program would run 32 MPI tasks. What I would like is, for a given function:
on each host, only one task performs the computation, while the other tasks do nothing during this computation. In my example, 4 MPI tasks do the computation and the 28 others wait.
once the computation is done, each MPI task on each host performs a collective communication ONLY with local tasks (tasks running on the same host).
Conceptually, I understand I must create one communicator for each host. I searched around and found nothing explicitly doing that. I am not really comfortable with MPI groups and communicators. Here are my two questions:
is MPI_Get_processor_name unique enough for such behaviour?
more generally, do you have a piece of code doing that?
The specification says that MPI_Get_processor_name returns "A unique specifier for the actual (as opposed to virtual) node", so I think you'd be ok with that. I guess you'd do a gather to assemble all the host names and then assign groups of processors to go off and make their communicators; or dup MPI_COMM_WORLD, turn the names into integer hashes, and use mpi_comm_split to partition the set.
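A minimal sketch of the hash-and-split idea just described (my own code, not the answerer's; hash collisions between distinct hostnames are possible in principle, which the gather-and-assign variant avoids):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);

    /* Turn the hostname into a non-negative integer colour. */
    unsigned colour = 5381u;
    for (int i = 0; i < len; ++i)
        colour = colour * 33u + (unsigned char)name[i];
    colour &= 0x7fffffffu;

    /* One communicator per colour, i.e. (barring collisions) per host. */
    MPI_Comm node_comm;
    MPI_Comm_split(MPI_COMM_WORLD, (int)colour, 0, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);
    /* node_rank == 0 on each host can do the per-host computation, and
       collectives on node_comm stay within the host. */

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}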
You could also take the approach janneb suggests and use implementation-specific options to mpirun to ensure that the MPI implementation assigns tasks that way; OpenMPI uses --byslot to generate this ordering; with mpich2 you can use -print-rank-map to see the mapping.
But is this really what you want to do? If the other processes are sitting idle while one processor is working, how is this better than everyone redundantly doing the calculation? (Or is this very memory or I/O intensive, and you're worried about contention?) If you're going to be doing a lot of this -- treating on-node parallelization very different from off-node parallelization -- then you may want to think about hybrid programming models - running one MPI task per node and MPI_spawning subtasks or using OpenMP for on-node communications, both as suggested by HPM.
I don't think (educated thought, not definitive) that you'll be able to do what you want entirely from within your MPI program.
The response of the system to a call to MPI_Get_processor_name is system-dependent; on your system it might return node00, node01, node02, node03 as appropriate, or it might return my_big_computer for whatever processor you are actually running on. The former is more likely, but it is not guaranteed.
One strategy would be to start 32 processes and, if you can determine what node each is running on, partition your communicator into 4 groups, one on each node. This way you can manage inter- and intra-communications yourself as you wish.
Another strategy would be to start 4 processes and pin them to different nodes. How you pin processes to nodes (or processors) will depend on your MPI runtime and any job management system you might have, such as Grid Engine. This will probably involve setting environment variables -- but you don't tell us anything about your run-time system so we can't guess what they might be. You could then have each of the 4 processes dynamically spawn a further 7 (or 8) processes and pin those to the same node as the initial process. To do this, read up on the topic of intercommunicators and your run-time system's documentation.
A third strategy, now it's getting a little crazy, would be to start 4 separate MPI programs (8 processes each), one on each node of your cluster, and to join them as they execute. Read about MPI_Comm_connect and MPI_Open_port for details.
Finally, for extra fun, you might consider hybridising your program, running one MPI process on each node, and have each of those processes execute an OpenMP shared-memory (sub-)program.
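A hedged sketch of that hybrid approach (one MPI process per node, OpenMP threads inside it; compile with something like mpicc -fopenmp):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* MPI_THREAD_FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Placeholder per-node work, parallelised with OpenMP threads. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 1; i <= 1000000; ++i)
        local_sum += 1.0 / i;

    /* Combine the per-node results across the (few) MPI processes. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}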
Typically you can control how your MPI runtime distributes tasks over nodes, e.g. via environment variables. The default tends to be sequential allocation, that is, for your example with 32 tasks distributed over 4 8-core machines you'd have:
machine 1: MPI ranks 0-7
machine 2: MPI ranks 8-15
machine 3: MPI ranks 16-23
machine 4: MPI ranks 24-31
And yes, MPI_Get_processor_name should get you the hostname so you can figure out where the boundaries between hosts are.
The modern MPI-3 answer to this is to call MPI_Comm_split_type with the split type MPI_COMM_TYPE_SHARED, which partitions a communicator into subcommunicators whose processes can share memory, i.e. on a typical cluster one subcommunicator per node.
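A minimal sketch of that approach (assuming one shared-memory domain per node, which is the usual cluster layout):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Split MPI_COMM_WORLD into one communicator per shared-memory domain. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int world_rank, node_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_rank(node_comm, &node_rank);

    /* Let the first rank on each node compute, then broadcast the result
       to the other local ranks only. */
    double result = 0.0;
    if (node_rank == 0)
        result = 42.0;                       /* placeholder for the real work */
    MPI_Bcast(&result, 1, MPI_DOUBLE, 0, node_comm);

    printf("world rank %d, node-local rank %d, result %.1f\n",
           world_rank, node_rank, result);

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}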
