MPI Barrier C++

I want to use MPI (MPICH2) on Windows. I write this call:
MPI_Barrier(MPI_COMM_WORLD);
And I expect it to block all processes until all group members have called it, but that is not what happens. Here is a schematic of my code:
int a;
if(myrank == RootProc)
a = 4;
MPI_Barrier(MPI_COMM_WORLD);
cout << "My Rank = " << myrank << "\ta = " << a << endl;
(With 2 processes:) The root process (0) acts correctly, but the process with rank 1 doesn't know the a variable, so it displays -858993460 instead of 4.
Can anyone help me?
Regards

You're only assigning a in process 0. MPI doesn't share memory, so if you want a in process 1 to get the value 4, you need to call MPI_Send from process 0 and MPI_Recv from process 1.
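For instance, here is a minimal sketch of that fix (assuming RootProc is 0 and the job runs with exactly two processes):

int a = 0;  // initialise on every rank so no process prints garbage
if (myrank == RootProc) {
    a = 4;
    // The root sends the value to rank 1; the tag (0) is arbitrary
    MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
} else if (myrank == 1) {
    // Rank 1 receives the value from the root
    MPI_Recv(&a, 1, MPI_INT, RootProc, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
cout << "My Rank = " << myrank << "\ta = " << a << endl;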

Variable a is not initialized, which is why it displays that number: -858993460 is 0xCCCCCCCC, the pattern MSVC uses to fill uninitialized stack memory in debug builds. In MPI, each process has its own copy of a, so there are two values of a, one of which is uninitialized. You want to write:
int a = 4;
if (myrank == RootProc)
...
Or, alternatively, do an MPI_Send in the root (rank 0) and an MPI_Recv in the slave (rank 1) so the value in the root is also set in the slave.
Note: that code triggers a small alarm in my head, so I need to check something and I'll edit this with more info. Until then, though, the uninitialized value is most certainly a problem for you.
OK, I've checked the facts: your code was not properly indented and I misread the missing {}. The barrier itself looks fine, although the snippet you posted does not do much and is not a very good example of a barrier, because the slave enters it directly whereas the root first sets the variable to 4 and then enters it. To test that the barrier actually works, you probably want some sort of sleep mechanism in one of the processes; that will hold back the other process as well, preventing it from printing the cout until the sleep is over.
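For example, a minimal sketch of such a test (Sleep from <windows.h> is Windows-specific; on POSIX you would use sleep from <unistd.h>):

// Rank 0 stalls before the barrier; rank 1 should not print
// until rank 0 wakes up and enters the barrier as well.
if (myrank == RootProc)
    Sleep(3000);  // milliseconds (Windows API)
MPI_Barrier(MPI_COMM_WORLD);
cout << "Rank " << myrank << " passed the barrier" << endl;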

Blocking is not enough; you have to send data to the other processes (memory is not shared between processes).
To share data across ALL processes use:
int MPI_Bcast(void* buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm )
so in your case:
MPI_Bcast(&a, 1, MPI_INT, 0, MPI_COMM_WORLD);
Here you send one integer, pointed to by &a, from process 0 to all others.
// MPI_Bcast is a send for the root process and a receive for non-root processes
You can also send data to a specific process with:
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
and then receive by:
int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
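Putting it together, a minimal self-contained sketch of the corrected program from the question might look like this (assuming the root is rank 0):

#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    int a = 0;     // initialised on every rank
    if (myrank == 0)
        a = 4;     // only the root knows the real value
    // The broadcast both synchronises the ranks and copies a,
    // so the explicit barrier is no longer needed
    MPI_Bcast(&a, 1, MPI_INT, 0, MPI_COMM_WORLD);
    cout << "My Rank = " << myrank << "\ta = " << a << endl;

    MPI_Finalize();
    return 0;
}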

Related

Unexpected result from MPI isend and irecv

My goal was to send a vector from process 0 to process 1, then send it back from process 1 to process 0.
I have two questions about my implementation:
1- Why does sending back from process 1 to process 0 take longer than the other way around?
The first send-recv takes ~1e-4 seconds in total and the second send-recv takes ~1 second.
2- When I increase the size of the vector, I get the following error. What is the reason for that?
mpirun noticed that process rank 0 with PID 11248 on node server1 exited on signal 11 (Segmentation fault).
My updated C++ code is as follows:
#include <mpi.h>
#include <stdio.h>
#include <iostream>
#include <vector>
#include <boost/timer/timer.hpp>
#include <math.h>

using namespace std;

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);
    MPI_Request request, request2, request3, request4;
    MPI_Status status;

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    srand(world_rank);

    int n = 1e3;
    double *myvector = new double[n];
    if (world_rank == 0) {
        myvector[n-1] = 1;
    }
    MPI_Barrier(MPI_COMM_WORLD);

    if (world_rank == 0) {
        boost::timer::cpu_timer timer;
        MPI_Isend(myvector, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &request);
        boost::timer::cpu_times elapsedTime1 = timer.elapsed();
        cout << " Wallclock time on Process 1:"
             << elapsedTime1.wall / 1e9 << " (sec)" << endl;

        MPI_Irecv(myvector, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &request4);
        MPI_Wait(&request4, &status);
        printf("Test if data is received from node 1: %1.0f\n", myvector[n-1]);
        boost::timer::cpu_times elapsedTime2 = timer.elapsed();
        cout << " Wallclock time on Process 1:"
             << elapsedTime2.wall / 1e9 << " (sec)" << endl;
    } else {
        boost::timer::cpu_timer timer;
        MPI_Irecv(myvector, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &request2);
        MPI_Wait(&request2, &status);
        boost::timer::cpu_times elapsedTime1 = timer.elapsed();
        cout << " Wallclock time on Process 2:"
             << elapsedTime1.wall / 1e9 << " (sec)" << endl;

        printf("Test if data is received from node 0: %1.0f\n", myvector[n-1]);
        myvector[n-1] = 2;
        MPI_Isend(myvector, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &request3);
        boost::timer::cpu_times elapsedTime2 = timer.elapsed();
        cout << " Wallclock time on Process 2:"
             << elapsedTime1.wall / 1e9 << " (sec)" << endl;
    }
    MPI_Finalize();
}
The output is:
Wallclock time on Process 1:2.484e-05 (sec)
Wallclock time on Process 2:0.000125325 (sec)
Test if data is received from node 0: 1
Wallclock time on Process 2:0.000125325 (sec)
Test if data is received from node 1: 2
Wallclock time on Process 1:1.00133 (sec)
Timing discrepancies
First of all, you don't measure the time to send the message. This is why posting the actual code you use for timing is essential.
You measure four times. For the two sends, you only time the call to MPI_Isend. This is the immediate version of the API call; as the name suggests, it returns immediately, so the timing has nothing to do with the actual time for sending the message.
For the receive operations, you measure MPI_Irecv and a corresponding MPI_Wait. This is the time between initiating the receive and the local availability of the message. This is again different from the message transfer time, as it does not consider the time difference between posting the receive and posting the corresponding send. In general, you have to consider the late sender and late receiver cases. Further, even for blocking send operations, local completion does not imply a completed transfer, remote completion, or even initiation.
Timing MPI transfers is difficult.
Checking for completion
There is still the question as to why anything in this code could take an entire second. That is certainly not a sensible time unless your network uses IPoAC. The likely reason is that you do not check for completion of all messages. MPI implementations are often single threaded and can only make progress on communication during the respective API calls. To use immediate messages, you must either periodically call MPI_Test* until the request is finished or complete the request using MPI_Wait*.
I don't know why you chose to use immediate MPI functions in the first place. If you call MPI_Wait right after starting an MPI_Isend/MPI_Irecv, you might as well just call MPI_Send/MPI_Recv. You need immediate functions for concurrent communication and computation, to allow concurrent irregular communication patterns, and to avoid deadlocks in certain situations. If you don't need the immediate functions, use the blocking ones instead.
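As a minimal sketch, the exchange from the question could be rewritten with blocking calls and timed with MPI_Wtime, the usual wall-clock timer for MPI code (this reuses world_rank, n and myvector from the snippet above and, per the caveats above, still measures only local time spans, not the pure transfer time):

double t0 = MPI_Wtime();
if (world_rank == 0) {
    MPI_Send(myvector, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    MPI_Recv(myvector, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else if (world_rank == 1) {
    MPI_Recv(myvector, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    myvector[n-1] = 2;
    MPI_Send(myvector, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
}
double t1 = MPI_Wtime();
cout << "Rank " << world_rank << ": " << (t1 - t0) << " s" << endl;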
Segfault
While I cannot reproduce it, I suspect this is caused by using the same buffer (myvector) for two concurrently running MPI operations. Don't do that. Either use a separate buffer, or make sure the first operation completes. Generally, you are not allowed to touch a buffer in any way after passing it to MPI_Isend/MPI_Irecv until you know the request has completed via MPI_Test*/MPI_Wait*.
P.S.
If you think you need immediate operations to avoid deadlocks while sending and receiving, consider MPI_Sendrecv instead.
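For example, a minimal sketch of a symmetric exchange between two ranks with MPI_Sendrecv (reusing world_rank and n from above; note the separate send and receive buffers, so no in-flight buffer is reused; cleanup omitted):

int partner = (world_rank == 0) ? 1 : 0;
double *sendbuf = new double[n];
double *recvbuf = new double[n];
// Send and receive in a single call; the library orders the two parts
// internally, so this cannot deadlock the way two opposing blocking
// MPI_Send calls can.
MPI_Sendrecv(sendbuf, n, MPI_DOUBLE, partner, 0,
             recvbuf, n, MPI_DOUBLE, partner, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);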

How to programmatically detect the number of cores and run an MPI program using all cores

I do not want to use mpiexec -n 4 ./a.out to run my program on my Core i7 processor (with 4 cores). Instead, I want to run ./a.out, have it detect the number of cores, and fire up MPI to run a process per core.
This SO question and answer MPI Number of processors? led me to use mpiexec.
The reason I want to avoid mpiexec is because my code is destined to be a library inside a larger project I'm working on. The larger project has a GUI and the user will be starting long computations that will call my library, which will in turn use MPI. The integration between the UI and the computation code is not trivial... so launching an external process and communicating via a socket or some other means is not an option. It must be a library call.
Is this possible? How do I do it?
This is quite a nontrivial thing to achieve in general. Also, there is hardly any portable solution that does not depend on some MPI implementation specifics. What follows is a sample solution that works with Open MPI and possibly with other general MPI implementations (MPICH, Intel MPI, etc.). It involves a second executable or a means for the original executable to directly call your library when given some special command-line argument. It goes like this.
Assume the original executable was started simply as ./a.out. When your library function is called, it calls MPI_Init(NULL, NULL), which initialises MPI. Since the executable was not started via mpiexec, it falls back to the so-called singleton MPI initialisation, i.e. it creates an MPI job that consists of a single process. To perform distributed computations, you have to start more MPI processes and that's where things get complicated in the general case.
MPI supports dynamic process management, in which one MPI job can start a second one and communicate with it using intercommunicators. This happens when the first job calls MPI_Comm_spawn or MPI_Comm_spawn_multiple. The first is used to start simple MPI jobs that use the same executable for all MPI ranks, while the second can start jobs that mix different executables. Both need information as to where and how to launch the processes. This comes from the so-called MPI universe, which provides information not only about the started processes, but also about the slots available for dynamically started ones. The universe is constructed by mpiexec or by some other launcher mechanism that takes, e.g., a host file with a list of nodes and the number of slots on each node. In the absence of such information, some MPI implementations (Open MPI included) will simply start the executables on the same node as the original process.
MPI_Comm_spawn[_multiple] has an MPI_Info argument that can be used to supply a list of key-value pairs with implementation-specific information. Open MPI supports the add-hostfile key, which specifies a hostfile to be used when spawning the child job. This is useful for, e.g., allowing the user to specify via the GUI a list of hosts to use for the MPI computation.
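For illustration, a short sketch of passing that key to the spawn call (the file name hosts.txt and the worker count N-1 are made-up examples; MPI_Comm_spawn itself is introduced below):

MPI_Info info;
MPI_Info_create(&info);
// Open MPI-specific key: spawn the children on the hosts listed in this file
MPI_Info_set(info, "add-hostfile", "hosts.txt");
MPI_Comm child_comm;
MPI_Comm_spawn("./worker", MPI_ARGV_NULL, N-1, info, 0,
               MPI_COMM_SELF, &child_comm, MPI_ERRCODES_IGNORE);
MPI_Info_free(&info);

But let's concentrate on the case where no such information is provided and Open MPI simply runs the child job on the same host.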
Assume the worker executable is called worker, or that the original executable can serve as the worker when called with some special command-line option, -worker for example. If you want to perform the computation with N processes in total, you need to launch N-1 workers. This is simple:
(separate executable)
MPI_Comm child_comm;
MPI_Comm_spawn("./worker", MPI_ARGV_NULL, N-1, MPI_INFO_NULL, 0,
               MPI_COMM_SELF, &child_comm, MPI_ERRCODES_IGNORE);
(same executable, with an option)
MPI_Comm child_comm;
char *argv[] = { "-worker", NULL };
MPI_Comm_spawn("./a.out", argv, N-1, MPI_INFO_NULL, 0,
               MPI_COMM_SELF, &child_comm, MPI_ERRCODES_IGNORE);
If everything goes well, child_comm will be set to the handle of an intercommunicator that can be used to communicate with the new job. As intercommunicators are kind of tricky to use and the parent-child job division requires complex program logic, one could simply merge the two sides of the intercommunicator into a "big world" communicator that replaces MPI_COMM_WORLD. On the parent's side:
MPI_Comm bigworld;
MPI_Intercomm_merge(child_comm, 0, &bigworld);
On the child's side:
MPI_Comm parent_comm, bigworld;
MPI_Comm_get_parent(&parent_comm);
MPI_Intercomm_merge(parent_comm, 1, &bigworld);
After the merge is complete, all processes can communicate using bigworld instead of MPI_COMM_WORLD. Note that child jobs do not share their MPI_COMM_WORLD with the parent job.
To put it all together, here is a complete functioning example with two separate program codes.
main.c
#include <stdio.h>
#include <mpi.h>

int main (void)
{
    MPI_Init(NULL, NULL);

    printf("[main] Spawning workers...\n");

    MPI_Comm child_comm;
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &child_comm, MPI_ERRCODES_IGNORE);

    MPI_Comm bigworld;
    MPI_Intercomm_merge(child_comm, 0, &bigworld);

    int size, rank;
    MPI_Comm_rank(bigworld, &rank);
    MPI_Comm_size(bigworld, &size);
    printf("[main] Big world created with %d ranks\n", size);

    // Perform some computation
    int data = 1, result;
    MPI_Bcast(&data, 1, MPI_INT, 0, bigworld);
    data *= (1 + rank);
    MPI_Reduce(&data, &result, 1, MPI_INT, MPI_SUM, 0, bigworld);
    printf("[main] Result = %d\n", result);

    MPI_Barrier(bigworld);

    MPI_Comm_free(&bigworld);
    MPI_Comm_free(&child_comm);

    MPI_Finalize();
    printf("[main] Shutting down\n");
    return 0;
}
worker.c
#include <stdio.h>
#include <mpi.h>

int main (void)
{
    MPI_Init(NULL, NULL);

    MPI_Comm parent_comm;
    MPI_Comm_get_parent(&parent_comm);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("[worker] %d of %d here\n", rank, size);

    MPI_Comm bigworld;
    MPI_Intercomm_merge(parent_comm, 1, &bigworld);

    MPI_Comm_rank(bigworld, &rank);
    MPI_Comm_size(bigworld, &size);
    printf("[worker] %d of %d in big world\n", rank, size);

    // Perform some computation
    int data;
    MPI_Bcast(&data, 1, MPI_INT, 0, bigworld);
    data *= (1 + rank);
    MPI_Reduce(&data, NULL, 1, MPI_INT, MPI_SUM, 0, bigworld);
    printf("[worker] Done\n");

    MPI_Barrier(bigworld);

    MPI_Comm_free(&bigworld);
    MPI_Comm_free(&parent_comm);

    MPI_Finalize();
    return 0;
}
Here is how it works:
$ mpicc -o main main.c
$ mpicc -o worker worker.c
$ ./main
[main] Spawning workers...
[worker] 0 of 2 here
[worker] 1 of 2 here
[worker] 1 of 3 in big world
[worker] 2 of 3 in big world
[main] Big world created with 3 ranks
[worker] Done
[worker] Done
[main] Result = 6
[main] Shutting down
The child job has to use MPI_Comm_get_parent to obtain the intercommunicator to the parent job. When a process is not part of such a child job, the returned value will be MPI_COMM_NULL. This allows for an easy way to implement both the main program and the worker in the same executable. Here is a hybrid example:
#include <stdio.h>
#include <mpi.h>

MPI_Comm bigworld_comm = MPI_COMM_NULL;
MPI_Comm other_comm = MPI_COMM_NULL;

int parlib_init (const char *argv0, int n)
{
    MPI_Init(NULL, NULL);

    MPI_Comm_get_parent(&other_comm);
    if (other_comm == MPI_COMM_NULL)
    {
        printf("[main] Spawning workers...\n");
        MPI_Comm_spawn(argv0, MPI_ARGV_NULL, n-1, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &other_comm, MPI_ERRCODES_IGNORE);
        MPI_Intercomm_merge(other_comm, 0, &bigworld_comm);
        return 0;
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("[worker] %d of %d here\n", rank, size);
    MPI_Intercomm_merge(other_comm, 1, &bigworld_comm);
    return 1;
}

int parlib_dowork (void)
{
    int data = 1, result = -1, size, rank;

    MPI_Comm_rank(bigworld_comm, &rank);
    MPI_Comm_size(bigworld_comm, &size);

    if (rank == 0)
    {
        printf("[main] Doing work with %d processes in total\n", size);
        data = 1;
    }

    MPI_Bcast(&data, 1, MPI_INT, 0, bigworld_comm);
    data *= (1 + rank);
    MPI_Reduce(&data, &result, 1, MPI_INT, MPI_SUM, 0, bigworld_comm);

    return result;
}

void parlib_finalize (void)
{
    MPI_Comm_free(&bigworld_comm);
    MPI_Comm_free(&other_comm);
    MPI_Finalize();
}

int main (int argc, char **argv)
{
    if (parlib_init(argv[0], 4))
    {
        // Worker process
        (void)parlib_dowork();
        printf("[worker] Done\n");
        parlib_finalize();
        return 0;
    }

    // Main process
    // Show GUI, save the world, etc.
    int result = parlib_dowork();
    printf("[main] Result = %d\n", result);

    parlib_finalize();
    printf("[main] Shutting down\n");
    return 0;
}
And here is an example output:
$ mpicc -o hybrid hybrid.c
$ ./hybrid
[main] Spawning workers...
[worker] 0 of 3 here
[worker] 2 of 3 here
[worker] 1 of 3 here
[main] Doing work with 4 processes in total
[worker] Done
[worker] Done
[main] Result = 10
[worker] Done
[main] Shutting down
Some things to keep in mind when designing such parallel libraries:
MPI can only be initialised once. If necessary, call MPI_Initialized to check if MPI has already been initialised.
MPI can only be finalised once. Again, MPI_Finalized is your friend. It can be used in something like an atexit() handler to implement a universal MPI finalisation on program exit.
When used in threaded contexts (usual when GUIs are involved), MPI must be initialised with support for threads. See MPI_Init_thread. A sketch combining these three points follows.
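Here is a minimal sketch of such an initialisation routine (assuming MPI_THREAD_MULTIPLE is the level the GUI threading actually requires; many applications can get by with less):

#include <mpi.h>
#include <cstdlib>

static void finalize_mpi()
{
    int finalized;
    MPI_Finalized(&finalized);
    if (!finalized)
        MPI_Finalize();           // finalise exactly once, at program exit
}

void library_init_mpi()
{
    int initialized;
    MPI_Initialized(&initialized);
    if (!initialized)
    {
        int provided;
        // Ask for full thread support and check what was actually granted
        MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
        {
            // Handle reduced thread support here, e.g. serialise MPI calls
        }
        std::atexit(finalize_mpi);
    }
}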
You can get the number of CPUs by using, for example, this solution, and then start the MPI processes by calling MPI_Comm_spawn. But you will need to have a separate executable file.

MPI_Sendrecv deadlock

Could anyone help me fix the bug in the following simple MPI program? I was trying to use MPI_Sendrecv to send the c value from rank 1 to rank 2, and then print it from rank 2.
But the following code ends in a deadlock.
What is the mistake, and how do I use MPI_Sendrecv correctly in this situation?
#include <stdio.h>
#include "mpi.h"

int main (int argc, char **argv)
{
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hi dear, I am printing from rank %d\n", rank);

    double a, b, c;
    MPI_Status status, status2;

    if (rank == 0)
    {
        a = 10.1;
        MPI_Send(&a, 1, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
    }
    if (rank == 1)
    {
        b = 20.1;
        MPI_Recv(&a, 1, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
        c = a + b;
        printf("\nThe value of c is %f \n", c);
    }

    MPI_Sendrecv(&c, 1, MPI_DOUBLE, 2, 100,
                 &c, 1, MPI_DOUBLE, 1, 100, MPI_COMM_WORLD, &status2);
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 2)
    {
        printf("\n Printing from rank %d, c is %f\n", rank, c);
    }
    MPI_Finalize();
    return 0;
}
When a process calls MPI_Sendrecv, it will always try to execute both the send and the receive part. For instance, if it is process 2, it will not look at the dest (fourth argument), see a "2" and think, "Oh, I don't have to do any sending. I'll just do the receive." Instead, process 2 will see the "2" and think, "Ah, I have to send something to myself." Further, as you have written this code, all processes see the MPI_Sendrecv and think, "Oh, I have to send something to process 2 (the fourth argument) and receive something from process 1 (the ninth argument). So here we go..." The problem is that process 1 isn't getting a command to send anything, so everyone, even process 1, is waiting for process 1 to send something.
MPI_Sendrecv is a very useful function; I find uses for it all the time. But it is intended for sending in a "chain": for instance, 0 sends to 1 while 1 sends to 2 while 2 sends to 0, or whatever. I think in this case you are better off with the usual MPI_Send and MPI_Recv, as sketched below.
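A minimal sketch of that fix, replacing the unguarded MPI_Sendrecv in the code above with a guarded send/receive pair (reusing rank, c and status2 from the question):

// Only rank 1 sends and only rank 2 receives; all other ranks skip this.
if (rank == 1)
    MPI_Send(&c, 1, MPI_DOUBLE, 2, 100, MPI_COMM_WORLD);
if (rank == 2)
    MPI_Recv(&c, 1, MPI_DOUBLE, 1, 100, MPI_COMM_WORLD, &status2);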

C MPI multiple dynamic array passing

I'm trying to ISend() two arrays, arr1 and arr2, and an integer n which is the size of arr1 and arr2. I understood from this post that sending a struct that contains all three is not an option, since n is only known at run time. Obviously, I need n to be received first, since otherwise the receiving process wouldn't know how many elements to receive. What's the most efficient way to achieve this without using the blocking Send()?
Sending the size of the array is redundant (and inefficient), as MPI provides a way to probe for incoming messages without receiving them, which yields just enough information to properly allocate memory. Probing is performed with MPI_PROBE, which looks a lot like MPI_RECV except that it takes no buffer-related arguments. The probe operation returns a status object which can then be queried with MPI_GET_COUNT for the number of elements of a given MPI datatype that can be extracted from the content of the message, so explicitly sending the number of elements becomes unnecessary.
Here is a simple example with two ranks:
if (rank == 0)
{
    MPI_Request req;

    // Send a message to rank 1
    MPI_Isend(arr1, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);

    // Do not forget to complete the request!
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
else if (rank == 1)
{
    MPI_Status status;

    // Wait for a message from rank 0 with tag 0
    MPI_Probe(0, 0, MPI_COMM_WORLD, &status);

    // Find out the number of elements in the message -> size goes to "n"
    MPI_Get_count(&status, MPI_DOUBLE, &n);

    // Allocate memory
    arr1 = malloc(n*sizeof(double));

    // Receive the message, ignoring the status
    MPI_Recv(arr1, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
MPI_PROBE also accepts the wildcard rank MPI_ANY_SOURCE and the wildcard tag MPI_ANY_TAG. One can then consult the corresponding entry in the status structure in order to find out the actual sender rank and the actual message tag.
Probing for the message size works because every message carries a header, called an envelope. The envelope consists of the sender's rank, the receiver's rank, the message tag and the communicator. It also carries information about the total message size. Envelopes are sent as part of the initial handshake between the two communicating processes.
First, allocate full memory (all n elements) for arr1 and arr2 on rank 0, i.e. your front-end processor.
Divide the arrays into parts depending on the number of processors, and determine the element count for each processor.
Send this element count from rank 0 to the other processors.
The second send is for the arrays themselves, i.e. arr1 and arr2.
On the other processors, allocate arr1 and arr2 according to the element count received from the main processor (rank 0). After receiving the element count, receive the two arrays into the allocated memory.
This is a sample C++ implementation, but C will follow the same logic. Also, just interchange Send with Isend.
#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    MPI::Init(argc, argv);
    int rank = MPI::COMM_WORLD.Get_rank();
    int no_of_processors = MPI::COMM_WORLD.Get_size();
    MPI::Status status;

    double *arr1;
    if (rank == 0)
    {
        // Setting some random n
        int n = 10;
        arr1 = new double[n];
        for (int i = 0; i < n; i++)
        {
            arr1[i] = i;
        }

        int part = n / no_of_processors;
        int offset = n % no_of_processors;
        // cout << part << "\t" << offset << endl;

        for (int i = 1; i < no_of_processors; i++)
        {
            int start = i * part;
            int end = start + part - 1;
            if (i == (no_of_processors - 1))
            {
                end += offset;
            }
            // cout << i << " Start: " << start << " END: " << end;

            // Element count
            int e_count = end - start + 1;
            // cout << " e_count: " << e_count << endl;

            // Sending the element count
            MPI::COMM_WORLD.Send(&e_count, 1, MPI::INT, i, 0);
            // Sending arr1
            MPI::COMM_WORLD.Send((arr1 + start), e_count, MPI::DOUBLE, i, 1);
        }
    }
    else
    {
        // Element count
        int e_count;
        // Receiving the element count
        MPI::COMM_WORLD.Recv(&e_count, 1, MPI::INT, 0, 0, status);

        arr1 = new double[e_count];
        // Receiving the first array
        MPI::COMM_WORLD.Recv(arr1, e_count, MPI::DOUBLE, 0, 1, status);

        for (int i = 0; i < e_count; i++)
        {
            cout << arr1[i] << endl;
        }
    }

    delete[] arr1;
    MPI::Finalize();
    return 0;
}
@Histro The point I want to make is that Irecv/Isend are functions whose progress is managed by the MPI library itself. The question you asked depends entirely on the rest of your code, i.e. what you do after the Send/Recv. There are 2 cases:
Master and Worker
You send part of the problem (say arrays) to the workers (all other ranks except 0 = master). The worker does some work (on the arrays) and then returns the results to the master. The master then adds up the results and conveys new work to the workers. Here you want the master to wait for all the workers to return their results (modified arrays), so you cannot use Isend and Irecv; use multiple sends as in my code and the corresponding recvs. If your code goes in this direction, you want to use MPI_Bcast and MPI_Reduce.
Lazy Master
The master divides the work but doesn't care about the results from its workers. Say you want to compute different kinds of patterns for the same data. For example, given the characteristics of the population of some city, you want to calculate how many people are above 18, how many have jobs, and how many work in some company. These results have nothing to do with one another. In this case you don't have to worry about whether the data has been received by the workers or not; the master can continue to execute the rest of the code. This is where it is safe to use Isend/Irecv.

MPI Receive/Gather Dynamic Vector Length

I have an application that stores a vector of structs. These structs hold information about each GPU in a system, like memory and gigaflop/s. There is a different number of GPUs on each system.
I have a program that runs on multiple machines at once, and I need to collect this data. I am very new to MPI but am able to use MPI_Gather() for the most part; however, I would like to know how to gather/receive these dynamically sized vectors.
class MachineData
{
    unsigned long hostMemory;
    long cpuCores;
    int cudaDevices;

public:
    std::vector<NviInfo> nviVec;
    std::vector<AmdInfo> amdVec;
    ...
};

struct AmdInfo
{
    int platformID;
    int deviceID;
    cl_device_id device;
    long gpuMem;
    float sgflops;
    double dgflops;
};
Each machine in a cluster populates its instance of MachineData. I want to gather each of these instances, but I am unsure how to approach gathering nviVec and amdVec since their length varies on each machine.
You can use MPI_GATHERV in combination with MPI_GATHER to accomplish that. MPI_GATHERV is the variable version of MPI_GATHER and it allows the root rank to gather different numbers of elements from each sending process. But in order for the root rank to specify these numbers, it has to know how many elements each rank holds. A simple single-element MPI_GATHER can be used to achieve that first. Something like this:
// To keep things simple: root is fixed to be rank 0 and MPI_COMM_WORLD is used.
// "type" and "datatype" below stand for the element type being sent and the
// corresponding MPI datatype; "vector" / "vectordata" are the local vector
// and a pointer to its data.

// Number of MPI processes and current rank
int size, rank;
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

int *counts = new int[size];
int nelements = (int)vector.size();
// Each process tells the root how many elements it holds
MPI_Gather(&nelements, 1, MPI_INT, counts, 1, MPI_INT, 0, MPI_COMM_WORLD);

// Displacements in the receive buffer for MPI_GATHERV
int *disps = new int[size];
// Displacement for the first chunk of data is 0
for (int i = 0; i < size; i++)
    disps[i] = (i > 0) ? (disps[i-1] + counts[i-1]) : 0;

// Place to hold the gathered data
// Allocate at root only
type *alldata = NULL;
if (rank == 0)
    // disps[size-1] + counts[size-1] == total number of elements
    alldata = new type[disps[size-1] + counts[size-1]];

// Collect everything into the root
MPI_Gatherv(vectordata, nelements, datatype,
            alldata, counts, disps, datatype, 0, MPI_COMM_WORLD);
You should also register an MPI derived datatype (datatype in the code above) for the structures; raw binary sends will work but won't be portable and will not work in heterogeneous setups.
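As a sketch, registering such a datatype for the AmdInfo struct above might look like this (the cl_device_id handle is process-local, so it is shipped only as opaque bytes and is meaningless on the receiving side; <CL/cl.h> is assumed for its definition):

#include <mpi.h>
#include <CL/cl.h>   // for cl_device_id, as used in AmdInfo

MPI_Datatype make_amdinfo_type()
{
    AmdInfo info;
    int lens[6] = { 1, 1, (int)sizeof(cl_device_id), 1, 1, 1 };
    MPI_Datatype types[6] = { MPI_INT, MPI_INT, MPI_BYTE,
                              MPI_LONG, MPI_FLOAT, MPI_DOUBLE };
    MPI_Aint base, disps[6];

    // Compute each field's displacement relative to the struct start
    MPI_Get_address(&info, &base);
    MPI_Get_address(&info.platformID, &disps[0]);
    MPI_Get_address(&info.deviceID,   &disps[1]);
    MPI_Get_address(&info.device,     &disps[2]);
    MPI_Get_address(&info.gpuMem,     &disps[3]);
    MPI_Get_address(&info.sgflops,    &disps[4]);
    MPI_Get_address(&info.dgflops,    &disps[5]);
    for (int i = 0; i < 6; i++)
        disps[i] -= base;

    MPI_Datatype tmp, dtype;
    MPI_Type_create_struct(6, lens, disps, types, &tmp);
    // Resize so arrays of AmdInfo account for trailing padding
    MPI_Type_create_resized(tmp, 0, sizeof(AmdInfo), &dtype);
    MPI_Type_free(&tmp);
    MPI_Type_commit(&dtype);
    return dtype;
}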
