MPI: Number of processors?

Following is my MPI code, which I run on a Core i7 CPU (quad core). The problem is that it reports the job is running with only 1 process, when it should be 4.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world! I am %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
I was wondering if the problem is with the MPI library or something else?
Here is the result that it shows me:
Hello world! I am 0 of 1
Additional info:
Windows 7 - Professional x64

Prima facie it looks like you are running the program directly, in which case it starts as a single-process MPI job. Did you try launching it through mpiexec, e.g. with -n 2 or -n 4?

Related

Using PBS batch scripting, is there a way to view the contents of PBS_NODEFILE?

I read in the qsub man page that PBS_NODEFILE is created once mpirun, mpiexec, aprun, whatever, begins to execute. Is there a way to view its contents? I tried the following:
C code catfile.c:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    char command[256];

    MPI_Init(&argc, &argv);
    /* snprintf instead of sprintf so a long path cannot overflow the buffer */
    snprintf(command, sizeof(command), "cat %s", argv[1]);
    printf("%s\n", command);
    system(command);
    MPI_Finalize();
    return 0;
}
In my batch script, I run with
cc -o catfile catfile.c
aprun -n 1 ./catfile $PBS_NODEFILE
Output is
cat /var/spool/PBS/aux/1894.pbs01
cat: /var/spool/PBS/aux/1894.pbs01: No such file or directory
That's all I've got. Is there another way? Thanks.
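One possible explanation, flagged here as an assumption rather than something stated in the post: $PBS_NODEFILE lives on the node where the batch script runs, and the compute nodes that aprun launches on may not see that path, which would match the "No such file or directory" error above. Reading the file from the batch script itself sidesteps this. A minimal shell sketch; the file name and node names below are stand-ins so the snippet can run anywhere:

```shell
# Stand-in for the nodefile PBS creates; in a real job you would use
# "$PBS_NODEFILE" itself rather than writing this file by hand.
NODEFILE=/tmp/fake_nodefile
printf 'nid00010\nnid00011\n' > "$NODEFILE"

# cat-ing the file from the batch script works because the script
# runs on the node where the file actually exists.
cat "$NODEFILE"
```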

Process placement with aprun -- need one process per node

I need to run an MPI code on a Cray system under aprun. For reasons I won't go into, I am being asked to run it such that no node has more than one process. I have been puzzling over the aprun man page and I'm not sure if I've figured this out or not. If I have only two processes, will this command ensure that they run on different nodes? (Let's say there are 32 cores on a node.)
> aprun -n 2 -d 32 --cc depth ./myexec
If anyone is interested, my above command line does work. I tested it with the code:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* MPI_MAX_PROCESSOR_NAME guarantees the buffer is large enough */
    char hname[MPI_MAX_PROCESSOR_NAME];
    int length;

    MPI_Init(&argc, &argv);
    MPI_Get_processor_name(hname, &length);
    printf("Hello world from %s\n", hname);
    MPI_Finalize();
    return 0;
}
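As a quick sanity check on the placement (a sketch added here, not part of the original post; the node names are invented), one can count the distinct host names printed by the ranks:

```shell
# Stand-in for the output of the two ranks; in a real run this text
# would come from the aprun command above.
printf 'Hello world from nid00010\nHello world from nid00011\n' > ranks.out

# Field 4 of each line is the host name; the number of unique values
# is the number of distinct nodes the ranks landed on.
awk '{print $4}' ranks.out | sort -u | wc -l
```

If this prints 2 for two ranks, they ran on different nodes.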

How can I run multiple threads inside of a given MPI process?

I understand that a single MPI job launches many processes, which may run on multiple nodes.
How do I run multiple threads inside a given MPI process using MPI_THREAD_MULTIPLE?
I was unable to find enough information on this topic.
Assuming you're using OpenMP to run multiple threads:
You write the OpenMP code as you would without MPI (this statement is oversimplified).
When MPI comes in, you need to consider how your processes will communicate. MPI does not send messages to individual threads, only to processes. For that reason MPI provides four levels of thread support.
MPI_THREAD_SINGLE: Only one thread will execute.
MPI_THREAD_FUNNELED: Many threads may run, but only the master thread makes MPI calls. The master thread is the one that called MPI_Init_thread.
MPI_THREAD_SERIALIZED: Many threads may run, but only one can make MPI calls at a time.
MPI_THREAD_MULTIPLE: Many threads may run, and all of them can make MPI calls at any time.
You request the level you need at initialization, replacing MPI_Init with:
MPI_Init_thread(&argc, &argv, REQUIRED_LEVEL, &provided)
For example:
MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided)
Through the provided argument, MPI_Init_thread returns the level actually granted. Make sure it is one your code can cope with.
Also, avoid using MPI_Probe and MPI_Iprobe, because they are not thread-safe. Use MPI_Mprobe and MPI_Improbe instead.
Here is a simple 'hello world' example, as @ab2050 asked:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <omp.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int provided;
    int rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided != MPI_THREAD_FUNNELED) {
        fprintf(stderr, "Warning MPI did not provide MPI_THREAD_FUNNELED\n");
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel default(none), \
        shared(rank), \
        shared(ompi_mpi_comm_world), \
        shared(ompi_mpi_int), \
        shared(ompi_mpi_char)
    {
        printf("Hello from thread %d at rank %d parallel region\n",
               omp_get_thread_num(), rank);

        #pragma omp master
        {
            char helloWorld[12];
            if (rank == 0) {
                strcpy(helloWorld, "Hello World");
                MPI_Send(helloWorld, 12, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                printf("Rank %d send: %s\n", rank, helloWorld);
            }
            else {
                MPI_Recv(helloWorld, 12, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("Rank %d received: %s\n", rank, helloWorld);
            }
        }
    }

    MPI_Finalize();
    return 0;
}
You have to run this code with two processes. Because MPI_THREAD_FUNNELED is selected, only the master thread makes MPI calls.
The following variables are listed in the OpenMP data-scoping clauses because GCC 6.1.1 requires them with default(none); older versions such as 4.8 do not require them to be declared:
ompi_mpi_comm_world
ompi_mpi_int
ompi_mpi_char

MPI: Why does my MPICH program fail for a large number of processes?

/* C Example */
#include <mpi.h>
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
int main (int argc, char* argv[])
{
    int rank, size;
    int buffer_length = MPI_MAX_PROCESSOR_NAME;
    char hostname[buffer_length];

    MPI_Init(&argc, &argv);                           /* starts MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);             /* get current process id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);             /* get number of processes */
    MPI_Get_processor_name(hostname, &buffer_length); /* get hostname */
    printf("Hello world from process %d running on %s of %d\n", rank, hostname, size);

    MPI_Finalize();
    return 0;
}
The above program compiles and runs successfully on Ubuntu 12.04 for smaller numbers of processes, but it fails when I try to execute it with thousands of processes. Why is that?
I expected the scheduler to keep the processes in a queue and dispatch them one by one (I am running this code on a single-core machine).
Why does the following error occur for a large number of processes, and how can I resolve it?
root@ubuntu:/home# mpiexec -n 1000 ./hello
[proxy:0:0@ubuntu] HYDU_create_process (./utils/launch/launch.c:26): pipe error (Too many open files)
[proxy:0:0@ubuntu] launch_procs (./pm/pmiserv/pmip_cb.c:751): create process returned error
[proxy:0:0@ubuntu] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:935): launch_procs returned error
[proxy:0:0@ubuntu] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0@ubuntu] main (./pm/pmiserv/pmip.c:226): demux engine error waiting for event
Killed
You are running into the open-file limit on your system; the default on Ubuntu is 1024. You can try raising the limit for your session with the ulimit command.
ulimit -n 2048

MPI_Barrier() does not work on a small cluster

I want to use MPI_Barrier() in my program, but I get some fatal errors.
This is my code:
#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]){
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello, world, I am %d of %d. \n", rank, size);
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();

    return 0;
}
And this is the output:
Hello, world, I am 0 of 2.
Hello, world, I am 1 of 2.
Fatal error in PMPI_Barrier: Other MPI error, error stack:
PMPI_Barrier(425).........: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier_impl(331)....: Failure during collective
MPIR_Barrier_impl(313)....:
MPIR_Barrier_intra(83)....:
dequeue_and_set_error(596): Communication error with rank 0
Any suggestions?
Thanks and Regards!
This generally reflects some sort of configuration error: either the host or username configuration is not consistent across the nodes, or a firewall is blocking some ports. The MPICH2 FAQ discusses some places to look.
