MPI: How to use MPI_Win_allocate_shared properly

I would like to use shared memory between processes. I tried MPI_Win_allocate_shared, but it gives me a strange error when I execute the program:
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT
Here's my source:
# include <stdlib.h>
# include <stdio.h>
# include <time.h>
# include "mpi.h"

int main ( int argc, char *argv[] );
void pt(int t[], int s);

int main ( int argc, char *argv[] )
{
    int rank, size, shared_elem = 0, i;
    MPI_Init ( &argc, &argv );
    MPI_Comm_rank ( MPI_COMM_WORLD, &rank );
    MPI_Comm_size ( MPI_COMM_WORLD, &size );
    MPI_Win win;
    int *shared;
    if (rank == 0) shared_elem = size;
    MPI_Win_allocate_shared(shared_elem*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &shared, &win);
    if(rank==0)
    {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, MPI_MODE_NOCHECK, win);
        for(i = 0; i < size; i++)
        {
            shared[i] = -1;
        }
        MPI_Win_unlock(0,win);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    int *local = (int *)malloc( size * sizeof(int) );
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for(i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("process %d (before): ", rank);
    pt(local,size);
    MPI_Win_unlock(0,win);
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Put(&rank, 1, MPI_INT, 0, rank, 1, MPI_INT, win);
    MPI_Win_unlock(0,win);
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for(i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("process %d (after): ", rank);
    pt(local,size);
    MPI_Win_unlock(0,win);
    MPI_Win_free(&win);
    MPI_Free_mem(shared);
    MPI_Free_mem(local);
    MPI_Finalize ( );
    return 0;
}

void pt(int t[],int s)
{
    int i = 0;
    while(i < s)
    {
        printf("%d ",t[i]);
        i++;
    }
    printf("\n");
}
I get the following result:
process 0 (before): -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
process 0 (after): 0 -1 -1 -1 -1 -1 -1 -1 -1 -1
process 4 (before): 0 -1 -1 -1 -1 -1 -1 -1 -1 -1
process 4 (after): 0 -1 -1 -1 4 -1 -1 -1 -1 -1
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT - process 5
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT - process 6
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT - process 9
Can someone please help me figure out what's going wrong and what that error means? Thanks a lot.

MPI_Win_allocate_shared is a departure from the very abstract nature of MPI. It exposes the underlying memory organisation and allows programs to bypass the expensive (and often confusing) MPI RMA operations and utilise the shared memory directly on systems that provide it. While MPI typically deals with distributed-memory environments in which ranks do not share a physical address space, a typical HPC system nowadays consists of many interconnected shared-memory nodes. Thus, ranks that execute on the same node can attach to a shared memory segment and communicate by sharing data instead of passing messages.
MPI provides a communicator split operation that allows one to create subgroups of ranks such that the ranks in each subgroup are able to share memory:
MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, key, info, &newcomm);
On a typical cluster, this essentially groups the ranks by the nodes they execute on. Once the split is done, a shared-memory window allocation can be executed over the ranks in each newcomm. Note that for a multi-node cluster job this will result in several independent newcomm communicators and thus several shared memory windows. Ranks on one node won't (and shouldn't) be able to see the shared memory windows on other nodes.
In that regard, MPI_Win_allocate_shared is a platform-independent wrapper around the OS-specific mechanisms for shared memory allocation.
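As an illustration, here is a minimal sketch of how the split and a shared window are typically combined (my own example, not the question's code): rank 0 of each node-local communicator allocates the whole block, the other ranks query its base pointer with MPI_Win_shared_query, and every rank then accesses the memory with plain loads and stores, synchronising with MPI_Win_sync and a barrier:
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* Group ranks that can share memory (typically: ranks on the same node). */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int nrank, nsize;
    MPI_Comm_rank(nodecomm, &nrank);
    MPI_Comm_size(nodecomm, &nsize);

    /* Rank 0 of each node communicator allocates the whole block;
       the others allocate 0 bytes and query rank 0's base pointer. */
    MPI_Win win;
    int *shared;
    MPI_Aint winsize = (nrank == 0) ? (MPI_Aint)(nsize * sizeof(int)) : 0;
    MPI_Win_allocate_shared(winsize, sizeof(int), MPI_INFO_NULL,
                            nodecomm, &shared, &win);
    if (nrank != 0) {
        MPI_Aint qsize;
        int qdisp;
        MPI_Win_shared_query(win, 0, &qsize, &qdisp, &shared);
    }

    /* Direct load/store access instead of MPI_Put/MPI_Get. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    shared[nrank] = nrank;     /* each rank writes its own slot */
    MPI_Win_sync(win);         /* make the store visible */
    MPI_Barrier(nodecomm);     /* wait until everyone has written */
    MPI_Win_sync(win);
    if (nrank == 0) {
        for (int i = 0; i < nsize; i++)
            printf("%d ", shared[i]);
        printf("\n");
    }
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}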

There are several problems with this code and its usage. Some of them are mentioned in @Hristo Iliev's answer:
You have to run all the processes on the same node so that MPI_COMM_WORLD is an intranode communicator, or split off a shared-memory communicator with MPI_Comm_split_type(..., MPI_COMM_TYPE_SHARED, ...).
You need to run this code with at least 10 processes, because the loops read 10 window elements.
local should be deallocated with free(), not MPI_Free_mem().
Every rank other than 0 should obtain the pointer to the shared block with MPI_Win_shared_query.
shared should not be freed explicitly (I think this is taken care of by MPI_Win_free).
This is the resulting code:
# include <stdlib.h>
# include <stdio.h>
# include <time.h>
# include "mpi.h"

int main ( int argc, char *argv[] );
void pt(int t[], int s);

int main ( int argc, char *argv[] )
{
    int rank, size, shared_elem = 0, i;
    MPI_Init ( &argc, &argv );
    MPI_Comm_rank ( MPI_COMM_WORLD, &rank );
    MPI_Comm_size ( MPI_COMM_WORLD, &size );
    MPI_Win win;
    int *shared;
    // if (rank == 0) shared_elem = size;
    // MPI_Win_allocate_shared(shared_elem*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &shared, &win);
    if (rank == 0)
    {
        MPI_Win_allocate_shared(size * sizeof(int), sizeof(int), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &shared, &win);
    }
    else
    {
        int disp_unit;
        MPI_Aint ssize;
        MPI_Win_allocate_shared(0, sizeof(int), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &shared, &win);
        MPI_Win_shared_query(win, 0, &ssize, &disp_unit, &shared);
    }
    if(rank==0)
    {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, MPI_MODE_NOCHECK, win);
        for(i = 0; i < size; i++)
        {
            shared[i] = -1;
        }
        MPI_Win_unlock(0,win);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    int *local = (int *)malloc( size * sizeof(int) );
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for(i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("process %d (before): ", rank);
    pt(local,size);
    MPI_Win_unlock(0,win);
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Put(&rank, 1, MPI_INT, 0, rank, 1, MPI_INT, win);
    MPI_Win_unlock(0,win);
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for(i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("process %d (after): ", rank);
    pt(local,size);
    MPI_Win_unlock(0,win);
    MPI_Win_free(&win);
    // MPI_Free_mem(shared);
    free(local);
    // MPI_Free_mem(local);
    MPI_Finalize ( );
    return 0;
}

void pt(int t[],int s)
{
    int i = 0;
    while(i < s)
    {
        printf("%d ",t[i]);
        i++;
    }
    printf("\n");
}

Related

process_vm_readv fails after a certain number of iovec entries in MPI

I'm using process_vm_readv to get data from one process to the other in MPI.
I found that the program starts reading garbage after a certain number of iovec entries (in this case 1024) is passed to process_vm_readv.
I'm not sure what is going on. Did the kernel run out of memory? Or is something wrong in my code?
Or does process_vm_readv have an upper limit on the number of iovec entries?
I generated an iovec pattern (8 bytes out of every 16 bytes) myself.
The program runs until 1 GB is filled with this pattern on both sides.
sbuf and rbuf have each been allocated 1 GB of memory.
And the program runs on a machine with 24 GB+ of memory.
void do_test( int slen, int rlen, int scount, int rcount, void *sbuf, void *rbuf ){
int rank, err;
double timers[REP];
MPI_Win win;
pid_t pid;
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
if( rank == 0 ){
MPI_Win_create( NULL, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win );
int send_iovcnt;
struct iovec *send_iov;
struct iovec *iov = malloc( sizeof(struct iovec) * scount );
for( int p = 0; p < scount; p++ ){
iov[p].iov_base = (char*)rbuf + p * 16;
iov[p].iov_len = 8;
}
MPI_Recv( &pid, sizeof(pid_t), MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
MPI_Recv( &send_iovcnt, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
send_iov = malloc( sizeof(struct iovec) * send_iovcnt );
MPI_Recv( send_iov, sizeof(struct iovec) * send_iovcnt, MPI_BYTE, 1, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
for( int i = 0; i < REP; i++ ){
cache_flush();
timers[i] = MPI_Wtime();
MPI_Win_fence( 0, win );
process_vm_readv( pid, iov, send_iovcnt, send_iov, send_iovcnt, 0 );
MPI_Win_fence( 0, win );
cache_flush();
timers[i] = MPI_Wtime() - timers[i];
}
free(send_iov);
free(iov);
print_result( 8 * scount, REP, timers );
} else if( rank == 1 ){
MPI_Win_create( sbuf, slen, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win );
struct iovec *iov = malloc( sizeof(struct iovec) * rcount );
for( int p = 0; p < rcount; p++ ){
iov[p].iov_base = (char*)sbuf + p * 16;
iov[p].iov_len = 8;
}
pid = getpid();
MPI_Send( &pid, sizeof(pid_t), MPI_BYTE, 0, 0, MPI_COMM_WORLD );
MPI_Send( &rcount, 1, MPI_INT, 0, 1, MPI_COMM_WORLD );
MPI_Send( iov, rcount * sizeof(struct iovec), MPI_BYTE, 0, 2, MPI_COMM_WORLD );
for( int i = 0; i < REP; i++ ){
cache_flush();
MPI_Win_fence( 0, win );
MPI_Win_fence( 0, win );
}
free(iov);
}
Found in the man page of process_vm_readv(2) is the following text:
The values specified in the liovcnt and riovcnt arguments must be less than or equal to IOV_MAX (defined in <limits.h> or accessible via the call sysconf(_SC_IOV_MAX)).
On my Linux system, the value of IOV_MAX (ultimately defined in /usr/include/x86_64-linux-gnu/bits/uio_lim.h) is 1024.
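To stay under that limit, the transfer can be split into batches of at most IOV_MAX entries per call. Here is a minimal sketch of that idea (the helper name chunked_readv and the assumption that the local and remote iovec arrays pair up one-to-one are mine, not from the question):
#define _GNU_SOURCE
#include <limits.h>      /* IOV_MAX */
#include <sys/types.h>   /* pid_t, ssize_t */
#include <sys/uio.h>     /* process_vm_readv, struct iovec */
#include <unistd.h>      /* sysconf */

/* Read 'count' iovec pairs in chunks of at most IOV_MAX entries per call. */
static ssize_t chunked_readv(pid_t pid, struct iovec *local,
                             struct iovec *remote, unsigned long count)
{
    long max = sysconf(_SC_IOV_MAX);
    if (max <= 0)
        max = IOV_MAX;

    ssize_t total = 0;
    unsigned long done = 0;
    while (done < count) {
        unsigned long n = count - done;
        if (n > (unsigned long)max)
            n = (unsigned long)max;
        ssize_t r = process_vm_readv(pid, local + done, n,
                                     remote + done, n, 0);
        if (r < 0)
            return -1;   /* inspect errno for the actual cause */
        total += r;
        done += n;
    }
    return total;
}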

MPI on Sun Grid Engine cluster

I am running MPI applications on a cluster with Sun Grid Engine, using OpenMPI.
Has anyone ever experienced MPI communications that hang while the applications are running?
For example: the process on rank 0 calls MPI_Send to the process on rank 1, and the process on rank 1 calls MPI_Recv from the process on rank 0. The ranks are correct, the tags are correct, yet the communication just does not happen and therefore the application never terminates.
N.B. The applications work on my laptop and other machines, so it is not the case that I have some IDs wrong... Also, these applications work on the cluster the first time, but when I submit the exact same job again the MPI communication hangs as explained above.
If anyone has experienced something similar, any help would be appreciated, thanks.
EDIT:
I have implemented a very simple example of one of the applications:
#include <mpi.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdarg.h>
#include <pthread.h>
#include <sys/time.h>
#define NUM_CELLS 5 //Total number of integers to sort
/* Initial process to create numbers and send to first cell */
void *start(void *data)
{
int i, num;
time_t t;
srand((unsigned) time(&t));
//Create array of random numbers and print
for(i = 0; i < NUM_CELLS; i++)
{
num = rand() % 100;
printf("0 SEND\n");
MPI_Send(&num, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
printf("0 SENT\n");
}
return NULL;
}
/* Process for individual sort cell in sort pump */
void *sort_cell(void *data)
{
int *pos = (int *)data;
int num=2, i;
for(i = 0; i < NUM_CELLS; i++)
{
//Receive num
printf("%d WAIT\n", *pos);
if(*pos == 1)
MPI_Recv(&num, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
else if(*pos == 2)
MPI_Recv(&num, 1, MPI_INT, 1, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
else if(*pos == 3)
MPI_Recv(&num, 1, MPI_INT, 2, 3, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
else if(*pos == 4)
MPI_Recv(&num, 1, MPI_INT, 3, 4, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
else if(*pos == 5)
MPI_Recv(&num, 1, MPI_INT, 4, 5, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("%d RECV\n", *pos);
//Keep larger number and send smaller number to next cell
printf("%d SEND\n", *pos);
if(*pos == 1)
MPI_Send(&num, 1, MPI_INT, 2, 2, MPI_COMM_WORLD);
else if(*pos == 2)
MPI_Send(&num, 1, MPI_INT, 3, 3, MPI_COMM_WORLD);
else if(*pos == 3)
MPI_Send(&num, 1, MPI_INT, 4, 4, MPI_COMM_WORLD);
else if(*pos == 4)
MPI_Send(&num, 1, MPI_INT, 1, 5, MPI_COMM_WORLD);
printf("%d SENT\n", *pos);
}
return NULL;
}
int main(int argc, char **argv)
{
int i;
double elapsedTime;
struct timeval t1, t2;
//Start timer
gettimeofday(&t1, NULL);
int my_rank, provided;
MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
//Stop timer
gettimeofday(&t2, NULL);
elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0; //sec to ms
elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0; //us to ms
printf("Rank %d - Setup time: %f milliseconds\n", my_rank, elapsedTime);
//Start timer
gettimeofday(&t1, NULL);
//Execute processes in parallel according to mapping
if(my_rank == 0)
{
int num_threads = 1;
pthread_t threads[num_threads];
pthread_create(&threads[0], NULL, start, NULL);
for(i = 0; i < num_threads; i++)
(void) pthread_join(threads[i], NULL);
}
if(my_rank == 1)
{
int pos = 1;
int num_threads = 2;
pthread_t threads[num_threads];
pthread_create(&threads[0], NULL, sort_cell, (void *) &pos);
int pos1 = 5;
pthread_create(&threads[1], NULL, sort_cell, (void *) &pos1);
for(i = 0; i < num_threads; i++)
(void) pthread_join(threads[i], NULL);
}
if(my_rank == 2)
{
int pos = 2;
int num_threads = 1;
pthread_t threads[num_threads];
pthread_create(&threads[0], NULL, sort_cell, (void *) &pos);
for(i = 0; i < num_threads; i++)
(void) pthread_join(threads[i], NULL);
}
if(my_rank == 3)
{
int pos = 3;
int num_threads = 1;
pthread_t threads[num_threads];
pthread_create(&threads[0], NULL, sort_cell, (void *) &pos);
for(i = 0; i < num_threads; i++)
(void) pthread_join(threads[i], NULL);
}
if(my_rank == 4)
{
int pos = 4;
int num_threads = 1;
pthread_t threads[num_threads];
pthread_create(&threads[0], NULL, sort_cell, (void *) &pos);
for(i = 0; i < num_threads; i++)
(void) pthread_join(threads[i], NULL);
}
MPI_Barrier(MPI_COMM_WORLD);
//Stop timer
gettimeofday(&t2, NULL);
elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0; //sec to ms
elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0; //us to ms
printf("Rank %d - Execution time: %f milliseconds\n", my_rank, elapsedTime);
MPI_Finalize();
return 0;
}
Depending on how the sort_cells are mapped onto the ranks, it either works or hangs, and I cannot understand why.
The application is as follows:
Sort_cell -> Sort_cell -> Sort_cell -> Sort_cell -> Sort_cell -> Sort_cell
ranks: 0 1 2 3 4 1
(where each Sort_cell is a process run on the specified rank, respectively)
If these are run on ranks which are grouped, i.e. 00111, 01233, 00112, etc., it works, but once I use a mapping with a stray rank (like the above example), i.e. 00110, 01221, 01231, etc., the MPI communication hangs. Any suggestions?

Removing MPI_Bcast()

So I have some code where I am using MPI_Bcast to send information from the root process to all processes, but instead I want my P0 to send chunks of the array to the individual processes.
How do I do this with MPI_Send and MPI_Recv?
I've never used them before and I don't know if I need to loop my MPI_Recv to effectively receive everything or what.
I've put giant caps lock comments in the code where I need to replace my MPI_Bcast(), sorry in advance for the waterfall of code.
Code:
#include "mpi.h"
#include <stdio.h>
#include <math.h>
#define MAXSIZE 10000000
int add(int *A, int low, int high)
{
int res = 0, i;
for(i=low; i<=high; i++)
res += A[i];
return(res);
}
int main(argc,argv)
int argc;
char *argv[];
{
int myid, numprocs, x;
int data[MAXSIZE];
int i, low, high, myres, res;
double elapsed_time;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
if (myid == 0)
{
for(i=0; i<MAXSIZE; i++)
data[i]=1;
}
/* start the timer */
elapsed_time = -MPI_Wtime();
//THIS IS WHERE I GET CONFUSED ABOUT MPI_SEND AND MPI_RECV!!!
MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);
x = MAXSIZE/numprocs;
low = myid * x;
high = low + x - 1;
if (myid == numprocs - 1)
high = MAXSIZE-1;
myres = add(data, low, high);
printf("I got %d from %d\n", myres, myid);
MPI_Reduce(&myres, &res, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
/* stop the timer*/
elapsed_time += MPI_Wtime();
if (myid == 0)
printf("The sum is %d, time taken = %f.\n", res,elapsed_time);
MPI_Barrier(MPI_COMM_WORLD);
printf("The sum is %d at process %d.\n", res,myid);
MPI_Finalize();
return 0;
}
You need MPI_Scatter. A good intro is here: http://mpitutorial.com/tutorials/mpi-scatter-gather-and-allgather/
I think in your code it could look like this:
int elements_per_proc = MAXSIZE/numprocs;
// Create a buffer that will hold this rank's chunk of the global array (needs <stdlib.h> for malloc)
int *data_chunk = malloc(sizeof(int) * elements_per_proc);
MPI_Scatter(data, elements_per_proc, MPI_INT, data_chunk,
            elements_per_proc, MPI_INT, 0, MPI_COMM_WORLD);
If you really want to use MPI_Send and MPI_Recv, then you can use something like this:
int x = MAXSIZE / numprocs;
int *procData = malloc(sizeof(int) * x);   /* the question's code is C, so malloc instead of new */
MPI_Status status;
if (myid == 0) {
    for (int i = 1; i < numprocs; i++) {
        MPI_Send(data + i*x, x, MPI_INT, i, 0, MPI_COMM_WORLD);
    }
} else {
    MPI_Recv(procData, x, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
}
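With the MPI_Scatter approach the local summation changes as well, since each rank now indexes its chunk from 0. A short sketch, reusing the add() helper and the variables from the question (this assumes MAXSIZE is divisible by numprocs; otherwise MPI_Scatterv would be needed for the remainder):
/* After the MPI_Scatter above, each rank sums only its own chunk. */
myres = add(data_chunk, 0, elements_per_proc - 1);
MPI_Reduce(&myres, &res, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
free(data_chunk);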

MPI Receive from many processes

Here is my code:
if (rank != 0) {
// send the number of processed pixels
rc = MPI_Send(&pixeli, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
// send the coordinates where the processing started
rc = MPI_Send(&first_line, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
rc = MPI_Send(&first_col, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
for (i = 0; i < pixeli; i++) {
rc = MPI_Send(&results[i], 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
}
}
else {
for (i = 1; i < numtasks; i++) {
rc = MPI_Recv(&received_pixels, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
results_recv = (int*) calloc (received_pixels, sizeof(int));
rc = MPI_Recv(&start_line_recv, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
rc = MPI_Recv(&start_col_recv, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
for (j = 0; j < received_pixels; j++) {
rc = MPI_Recv(&results_recv[j], 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
}
free(results_recv);
}
If I run this with 2 processes it is OK, because one will send and the other one will receive.
If I run this with 4 processes I receive the following error messages:
Fatal error in MPI_Recv: Other MPI error, error stack:
MPI_Recv(186)...........................: MPI_Recv(buf=0xbff05324, count=1, MPI_INT, src=1, tag=1, MPI_COMM_WORLD, status=0xbff053ec) failed
MPIDI_CH3I_Progress(461)................:
MPID_nem_handle_pkt(636)................:
MPIDI_CH3_PktHandler_EagerShortSend(308): Failed to allocate memory for an unexpected message. 261895 unexpected messages queued.
What should I do to fix this?
These lines:
for (i = 0; i < pixeli; i++) {
rc = MPI_Send(&results[i], 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
}
and the corresponding MPI_Recvs look like they're essentially reimplementing MPI_Gather. Using the MPI_Gather call with the count set to pixeli instead of 1 may allow the implementation to schedule the sends and receives more efficiently, but more importantly, it will probably drastically cut down on the total number of send/receive pairs needed to complete the whole batch of communication. You could do something similar by removing the for loop and doing:
rc = MPI_Send(results, pixeli, MPI_INT, 0, tag, MPI_COMM_WORLD);
but again, using the built-in MPI_Gather would be the preferred way of doing it.
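For illustration, a gather-based version could look roughly like the sketch below (names follow the question; it assumes every rank contributes the same number of pixels, and note that with MPI_Gather rank 0 contributes its own block as well; with varying counts per rank you would need MPI_Gatherv plus a preliminary gather of the counts):
int *all_results = NULL;
if (rank == 0)
    all_results = malloc(numtasks * pixeli * sizeof(int));   /* needs <stdlib.h> */

MPI_Gather(results, pixeli, MPI_INT,       /* every rank sends its block      */
           all_results, pixeli, MPI_INT,   /* root receives pixeli ints/rank  */
           0, MPI_COMM_WORLD);

/* On rank 0, all_results[i*pixeli + j] is pixel j computed by rank i. */
if (rank == 0)
    free(all_results);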
The shortest answer is to tell you to use synchronous communication, that is, MPI_Ssend() instead of MPI_Send().
The trouble is that you send too many messages, which get buffered (I guess... though I thought MPI_Send() was blocking...). Memory consumption goes up until failure... Synchronous sends avoid buffering, but they do not reduce the number of messages and may be slower.
You can reduce the number of messages and improve performance by sending many pixels at once: see the second (count) argument of MPI_Send() or MPI_Recv().
Sending a single buffer of 3 ints [pixeli, first_line, first_col] would also reduce the number of messages.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"
int main(int argc,char *argv[])
{
int rank, size;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
int pixeli=1000000;
int received_pixels;
int first_line,first_col,start_line_recv,start_col_recv;
int tag=0;
int results[pixeli];
int i,j;
for(i=0;i<pixeli;i++){
results[i]=rank*pixeli+i;
}
int* results_recv;
int rc;
MPI_Status Stat;
if (rank != 0) {
// send the number of processed pixels
rc = MPI_Ssend(&pixeli, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
// send the coordinates where the processing started
rc = MPI_Ssend(&first_line, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
rc = MPI_Ssend(&first_col, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
MPI_Send(&results[0], pixeli, MPI_INT, 0, tag, MPI_COMM_WORLD);
//for (i = 0; i < pixeli; i++) {
// rc = MPI_Send(&results[i], 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
//}
}
else {
for (i = 1; i < size; i++) {
rc = MPI_Recv(&received_pixels, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
results_recv = (int*) calloc (received_pixels, sizeof(int));
rc = MPI_Recv(&start_line_recv, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
rc = MPI_Recv(&start_col_recv, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
MPI_Recv(&results_recv[0], received_pixels, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
//for (j = 0; j < received_pixels; j++) {
// rc = MPI_Recv(&results_recv[j], 1, MPI_INT, i, tag, MPI_COMM_WORLD, &Stat);
//printf("proc %d %d\n",rank,results_recv[j]);
//}
free(results_recv);
}
}
MPI_Finalize();
return 0;
}
Bye,
Francis

difficulty with MPI_Gather function

I have a local array (named lvotes) on each of the processors (assume 3 processors), and the first element of each stores a value, i.e.:
P0 : 4
P1 : 6
P2 : 7
Now, using MPI_Gather, I want to gather them all on P0, so it will look like:
P0 : 4, 6, 7
I used gather this way:
MPI_Gather(lvotes, P, MPI_INT, lvotes, 1, MPI_INT, 0, MPI_COMM_WORLD);
But I get problems. It's my first time coding in MPI. I could use any suggestions.
Thanks
This is a common issue with people using the gather/scatter collectives for the first time; in both the send and receive counts you specify the count of items to send to or receive from each process. So although it's true that you'll be, in total, getting (say) P items, if P is the number of processors, that's not what you specify to the gather operation; you specify you are sending a count of 1, and receiving a count of 1 (from each process). Like so:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>
int main ( int argc, char **argv ) {
int rank;
int size;
int lvotes;
int *gvotes;
MPI_Init ( &argc, &argv );
MPI_Comm_rank ( MPI_COMM_WORLD, &rank );
MPI_Comm_size ( MPI_COMM_WORLD, &size );
if (rank == 0)
gvotes = malloc(size * sizeof(int) );
/* everyone sets their first lvotes element */
lvotes = rank+4;
/* Gather to process 0 */
MPI_Gather(&lvotes, 1, MPI_INT, /* send 1 int from lvotes... */
           gvotes, 1, MPI_INT,  /* gather 1 int from each process into gvotes */
           0, MPI_COMM_WORLD);  /* ... to root process 0 */
printf("P%d: %d\n", rank, lvotes);
if (rank == 0) {
printf("P%d: Gathered ", rank);
for (int i=0; i<size; i++)
printf("%d ", gvotes[i]);
printf("\n");
}
if (rank == 0)
free(gvotes);
MPI_Finalize();
return 0;
}
Running gives
$ mpirun -np 3 ./gather
P1: 5
P2: 6
P0: 4
P0: Gathered 4 5 6
