In the distributed scheme, Julia distinguishes between workers and processes; as far as I understand, the number of processes is one more than the number of workers. If my machine has only 10 cores, should I use 10 workers (julia -p 10 file.jl) or reserve 1 core for the first process? Is the first process generally lightweight? For computations, should I go by the number of workers or the number of processes? Thanks.
[Adding to what @philipsgabler said.]
Assuming you will be using a @distributed for loop or pmap, process 1 will be responsible for controlling the computation and hence will mostly just be waiting for I/O from the workers. In some scenarios process 1 will also be responsible for aggregating the data delivered by the workers. Hence you can reasonably assume that process 1 is lightweight.
On the other hand, if you use @spawnat, then you have strict control over what is going on in the Julia cluster. I would still recommend using process 1 as the process controlling the cluster rather than putting a big workload on it.
Quoting the docs:
Each process has an associated identifier. The process providing the interactive Julia prompt always has an id equal to 1. The processes used by default for parallel operations are referred to as "workers". When there is only one process, process 1 is considered a worker. Otherwise, workers are considered to be all processes other than process 1. As a result, adding 2 or more processes is required to gain benefits from parallel processing methods like pmap. Adding a single process is beneficial if you just wish to do other things in the main process while a long computation is running on the worker.
The command line option -p auto "launches as many workers as the number of local CPU threads (logical cores)". Those workers are started in addition to process 1: on my machine, workers() with -p auto returns the eight ids 2-9.
All processes are in principle equal, but definitions will only be visible on process 1 unless explicitly made available elsewhere, e.g. via @everywhere (so, if anything, the workers are the more lightweight ones).
Related
The setup: A single-processor executable and two parallel mpi-based codes that can run on 100s of processors each. All on an HPC cluster that uses a PBS-based job scheduler.
The problem: Using shared-memory communication between the single-processor executable and the parallel codes requires that rank 0 of each parallel code be located physically on the same node as that executable.
The question: Can a PBS script be created that specifies that rank 0 of the two parallel codes must start on a specific node (the same node that the single-processor executable is running on)?
Example:
ExecA -- single processor
ExecB -- 100 processors
ExecC -- 100 processors
I want a situation where ExecA, ExecB (rank 0), and ExecC (rank 0) all start up on the same node. I need this so that the rank 0 processes can communicate with the single-processor executable using a shared-memory paradigm and then broadcast that information out to the rest of their respective MPI processes.
From this post, it does appear that the number of cores a code uses can be controlled from the PBS script. From my reading of the MPI manual, it also appears that, given a hostfile, MPI will go down the hostfile sequentially until it has allocated all the processors that were requested. So theoretically, if I had a hostfile/machinefile whose first entry was the host name of a particular node with 1 processor specified on it, then I believe rank 0 would end up on that node.
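For illustration, an Open MPI-style machinefile along these lines (the node names are made up) would put the shared node first with a single slot, so the first rank allocated should land there:

node001 slots=1
node002 slots=8
node003 slots=8

Whether the scheduler hands your mpirun exactly such a file depends on the site setup, so treat this as a sketch of the idea rather than a guaranteed recipe.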
I know that most cluster job schedulers expose node names that users can use to pin a job to a particular node, but I can't quite determine whether it is generally possible to tell a scheduler "for this parallel job, put the first process on this node and put the rest elsewhere".
I am trying to use MPI's spawn functionality to run subprocesses that also use MPI. I am using MPI 2.x and dynamic process management.
I have a master process (maybe I should say "master program") that runs in python (via mpi4py) that uses MPI to communicate between cores. This master process/program runs on 16 cores, and it will also make MPI_Comm_spawn_multiple calls to C and Fortran programs (which also use MPI). While the C and Fortran processes run, the master python program waits until they are finished.
A little more explicitly, the master python program does two primary things:
Uses MPI to do preprocessing for the spawning in step (2). MPI_Barrier is called after this preprocessing to ensure that all ranks have finished their preprocessing before step (2) begins. Note that the preprocessing is distributed across all 16 cores, and at the end of the preprocessing the resulting information is passed back to the root rank (e.g. rank == 0).
After the preprocessing, the root rank spawns 4 workers, each of which uses 4 cores (i.e. all 16 cores are needed to run all 4 spawned jobs at the same time). This is done via MPI_Comm_spawn_multiple, and these workers use MPI to communicate within their 4 cores. In the master python program, only rank == 0 spawns the C and Fortran subprocesses, and an MPI_Barrier is called on all ranks after the spawn so that all the rank != 0 cores wait until the spawned processes finish before they continue execution. (A sketch of this spawn call appears after this list.)
Repeat (1) and (2) many many times in a for loop.
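For reference, here is a hedged C sketch of the spawn in step (2); the real master is mpi4py and the worker binary names are hypothetical placeholders, so this only illustrates the shape of the MPI_Comm_spawn_multiple call:

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Spawn 4 worker jobs, 4 ranks each (16 cores in total). */
        char    *cmds[4]     = {"./worker0", "./worker1", "./worker2", "./worker3"};
        int      maxprocs[4] = {4, 4, 4, 4};
        MPI_Info infos[4]    = {MPI_INFO_NULL, MPI_INFO_NULL,
                                MPI_INFO_NULL, MPI_INFO_NULL};
        MPI_Comm intercomm;

        /* The call is collective over the spawning communicator; since only
           rank 0 spawns here, MPI_COMM_SELF is used. */
        MPI_Comm_spawn_multiple(4, cmds, MPI_ARGVS_NULL, maxprocs, infos,
                                0, MPI_COMM_SELF, &intercomm,
                                MPI_ERRCODES_IGNORE);
    }

    /* Synchronize the master ranks after the spawn, as described in step (2). */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}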
The issue I am having is that if I use mpiexec -np 16 to start the master python program, all the cores are being taken up by the master program and I get the error:
All nodes which are allocated for this job are already filled.
when the program hits the MPI_Comm_spawn_multiple line.
If I use any other value less than 16 for -np, then only some of the cores are allocated and some are available (but I still need all 16), so I get a similar error:
There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application:
/home/username/anaconda/envs/myenvironment/bin/python
Either request fewer slots for your application, or make more slots available
for use.
So it seems like even though I am going to run MPI_Barrier in step (2) to block until the spawned processes finish, MPI still thinks those cores are being used and won't allocate another process on top of them. Is there a way to fix this?
(If the answer is hostfiles, could you please explain them for me? I am not understanding the full idea and how they might be useful here.)
This is the poster of the question. I found out that I can pass -oversubscribe as an argument to mpiexec to avoid these errors, but as Zulan mentioned in his comments, this could be a poor decision.
In addition, I don't know if the cores are being subscribed like I want them to be. For example, maybe all 4 C/Fortran processes are being run on the same 4 cores. I don't know how to tell.
Most MPIs have a parameter -usize 123 for the mpiexec program that indicates the size of the "universe", which can be larger than the world communicator. In that case you can spawn extra processes up to the size of the universe. You can query the size of the universe:
int universe_size, *universe_size_attr, uflag;
MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
                  &universe_size_attr, &uflag);
if (uflag)                       /* the attribute may not be set by every MPI */
    universe_size = *universe_size_attr;
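Once the universe size is known, a hedged follow-up sketch (the worker binary name is made up, and whether -usize or the allocation determines the universe depends on your MPI implementation) would spawn into the remaining slots:

int world_size, nspawn;
MPI_Comm spawned;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
nspawn = universe_size - world_size;          /* slots left in the universe */
MPI_Comm_spawn("./worker", MPI_ARGV_NULL, nspawn, MPI_INFO_NULL,
               0, MPI_COMM_WORLD, &spawned, MPI_ERRCODES_IGNORE);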
I want to know why a context switch is slow compared to asynchronous operations on the same thread.
Why is it better to run N threads (with N equal to the number of cores), each one processing M clients asynchronously, instead of running M threads? I've been told the reason is the context-switch overhead, but I can't find how slow context switches actually are.
Just to clarify, I will assume that when you say “instead of running M threads” you mean N*M threads (if you run only M threads, each one would need to process N clients to match the same total number of clients, and that would be a similar case).
So the difference between N threads running on N cores, each one processing M clients, and N*M threads running on the same number of cores, is that in the first case you won’t have to create new threads and, as you said, you won’t have context switching. This is an advantage because creating OS threads is relatively expensive: the kernel has to allocate a new stack and its own bookkeeping structures, and so on. Besides, if you have more threads than cores, the OS scheduler will constantly be suspending and resuming them, which is also time-consuming. Every time the scheduler changes the thread assigned to a core it has to save and restore that thread's context, which typically adds a lot of cache misses and consequently more time.
On the other hand, if you have a fixed number of threads, equal to the number of cores (sometimes even N-1 is suggested), you can manage the “tasks” or clients with a user-level scheduler, which may add a little bookkeeping to your program but avoids a lot of OS thread and memory management, making the overall execution faster. Some current parallel APIs such as the .NET Task Parallel Library (TPL), OpenMP, Intel’s Threading Building Blocks, or Cilk embody this model of parallelism, called dynamic multithreading.
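As a minimal C/pthreads sketch of that fixed-thread-pool idea (the thread and client counts, and the trivial handle_client, are made up for illustration; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4       /* roughly the number of cores */
#define NCLIENTS 32      /* total clients to serve */

static int next_client = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_client(int id)            /* placeholder for real work */
{
    printf("client %d handled by one of the pool threads\n", id);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        /* User-level scheduling: grab the next client instead of having the
           OS schedule one thread per client. */
        pthread_mutex_lock(&lock);
        int id = (next_client < NCLIENTS) ? next_client++ : -1;
        pthread_mutex_unlock(&lock);
        if (id < 0) break;                   /* no clients left */
        handle_client(id);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++) pthread_join(t[i], NULL);
    return 0;
}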
Is there a way to internalize the creation of MPI processes? Instead of specifying the number of processes on the command line ("mpiexec -np 2 ./[PROG]"), I would like the number of processes to be specified internally.
Cheers
Yes. You're looking for MPI_Comm_spawn() from MPI-2, which launches a (possibly different) program with a number of processes that can be specified at runtime, and creates a new communicator which you can use in place of MPI_COMM_WORLD to communicate amongst both the original and the new processes.
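A minimal sketch (the worker binary name and the count of 4 are assumptions):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm workers;
    MPI_Init(&argc, &argv);

    /* Started with plain "mpiexec ./prog": the number of extra processes (4
       here) is decided inside the program, not on the command line. */
    MPI_Comm_spawn("./prog_worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);

    /* "workers" is an intercommunicator linking the original process(es) to
       the 4 new ones. */
    MPI_Finalize();
    return 0;
}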
I want to easily perform collective communications independently on each machine of my cluster. Let's say I have 4 machines with 8 cores on each, my MPI program would run 32 MPI tasks. What I would like is, for a given function:
on each host, only one task performs a computation; the other tasks do nothing during this computation. In my example, 4 MPI tasks will do the computation while the 28 others are waiting.
once the computation is done, each MPI task on each host will perform a collective communication ONLY with the local tasks (the tasks running on the same host).
Conceptually, I understand I must create one communicator for each host. I searched around and found nothing explicitly doing that. I am not really comfortable with MPI groups and communicators. Here are my two questions:
is MPI_Get_processor_name unique enough for such behaviour?
more generally, do you have a piece of code doing that?
The specification says that MPI_Get_processor_name returns "A unique specifier for the actual (as opposed to virtual) node", so I think you'd be ok with that. I guess you'd do a gather to assemble all the host names and then assign groups of processors to go off and make their communicators; or dup MPI_COMM_WORLD, turn the names into integer hashes, and use MPI_Comm_split to partition the set.
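A rough sketch of that second approach (the 31-based string hash is simplistic and collisions between different hostnames are not handled, so treat it as illustrative only):

#include <mpi.h>

/* Build one communicator per host by hashing the processor name and using the
   hash as the MPI_Comm_split color. */
MPI_Comm node_comm_from_hostname(void)
{
    char name[MPI_MAX_PROCESSOR_NAME];
    int len, rank;
    unsigned color = 0;
    MPI_Comm nodecomm;

    MPI_Get_processor_name(name, &len);
    for (int i = 0; i < len; i++)
        color = color * 31u + (unsigned char)name[i];    /* crude string hash */
    color &= 0x7fffffff;                                 /* color must be >= 0 */

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_split(MPI_COMM_WORLD, (int)color, rank, &nodecomm);
    return nodecomm;
}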
You could also take the approach janneb suggests and use implementation-specific options to mpirun to ensure that the MPI implementation assigns tasks that way; OpenMPI uses --byslot to generate this ordering; with mpich2 you can use -print-rank-map to see the mapping.
But is this really what you want to do? If the other processes are sitting idle while one processor is working, how is this better than everyone redundantly doing the calculation? (Or is this very memory or I/O intensive, and you're worried about contention?) If you're going to be doing a lot of this -- treating on-node parallelization very different from off-node parallelization -- then you may want to think about hybrid programming models - running one MPI task per node and MPI_spawning subtasks or using OpenMP for on-node communications, both as suggested by HPM.
I don't think (educated thought, not definitive) that you'll be able to do what you want entirely from within your MPI program.
The response of the system to a call to MPI_Get_processor_name is system-dependent; on your system it might return node00, node01, node02, node03 as appropriate, or it might return my_big_computer for whatever processor you are actually running on. The former is more likely, but it is not guaranteed.
One strategy would be to start 32 processes and, if you can determine what node each is running on, partition your communicator into 4 groups, one on each node. This way you can manage inter- and intra-communications yourself as you wish.
Another strategy would be to start 4 processes and pin them to different nodes. How you pin processes to nodes (or processors) will depend on your MPI runtime and any job management system you might have, such as Grid Engine. This will probably involve setting environment variables -- but you don't tell us anything about your run-time system so we can't guess what they might be. You could then have each of the 4 processes dynamically spawn a further 7 (or 8) processes and pin those to the same node as the initial process. To do this, read up on the topic of intercommunicators and your run-time system's documentation.
A third strategy, now it's getting a little crazy, would be to start 4 separate MPI programs (8 processes each), one on each node of your cluster, and to join them as they execute. Read about MPI_Comm_connect and MPI_Open_port for details.
Finally, for extra fun, you might consider hybridising your program, running one MPI process on each node, and have each of those processes execute an OpenMP shared-memory (sub-)program.
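A hedged illustration of that last suggestion (one MPI rank per node, OpenMP threads inside it; compile with your MPI wrapper plus -fopenmp or equivalent):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Ask for FUNNELED support: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        /* On-node work runs here in shared memory ... */
        printf("rank %d, thread %d\n", rank, omp_get_thread_num());
    }

    /* ... while inter-node communication stays in MPI, outside the parallel
       region. */
    MPI_Finalize();
    return 0;
}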
Typically, how tasks are distributed over nodes can be controlled by your MPI runtime environment, e.g. via environment variables. The default tends to be sequential allocation; that is, for your example with 32 tasks distributed over 4 8-core machines you'd have
machine 1: MPI ranks 0-7
machine 2: MPI ranks 8-15
machine 3: MPI ranks 16-23
machine 4: MPI ranks 24-31
And yes, MPI_Get_processor_name should get you the hostname so you can figure out where the boundaries between hosts are.
The modern MPI-3 answer to this is to call MPI_Comm_split_type with the split type MPI_COMM_TYPE_SHARED, which splits the communicator into per-node (shared-memory) sub-communicators.
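For instance, a minimal sketch:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm nodecomm;
    int noderank;

    MPI_Init(&argc, &argv);

    /* Split MPI_COMM_WORLD into one sub-communicator per shared-memory node. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);

    /* For the question above: let noderank == 0 do the per-host computation,
       then share the result within the host via an MPI_Bcast over nodecomm. */
    MPI_Finalize();
    return 0;
}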