Behavior of MPI_Send and MPI_Recv

Why do these lines of code:
if (my_rank != 0) {
    sprintf(msg, "Hello from %d of %d...", my_rank, comm_sz);
    if (my_rank == 2) {
        sleep(2);
        sprintf(msg, "Hello from %d of %d, I have slept 2 seconds...", my_rank, comm_sz);
    }
    MPI_Send(msg, strlen(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
}
else {
    printf("Hello from the chosen Master %d\n", my_rank);
    for (i = 1; i < comm_sz; i++) {
        MPI_Recv(msg, MAX_STRING, MPI_CHAR, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("%s\n", msg);
    }
}
give this result?
Hello from the chosen Master 0
Hello from 1 of 5...
Hello from 2 of 5, I have slept 2 seconds...
Hello from 3 of 5... have slept 2 seconds...
Hello from 4 of 5... have slept 2 seconds...
Doesn't each process have its own copy of 'msg'?

strlen() does not include the null terminator, so the terminator is never sent to the master. The shorter message received from rank 3 therefore does not overwrite the tail of the longer string already sitting in the master's buffer, and that tail is still printed. You should use strlen(msg) + 1 as the send count.
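A minimal sketch of the fix on the sender side, using the question's variables:

    /* also send the terminating '\0', so the receive buffer is properly
       terminated even when a shorter message arrives after a longer one */
    MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);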

Related

When using MPI_Isend() and MPI_Irecv(), should the pair use the same request? If the message is already an array, how should it be passed?

In this code I am trying to broadcast using non-blocking send and receive as a practice. I have multiple questions and issues.
1. Should I pair Isend() and Irecv() so that they use the same request?
2. When the message is an array, how should it be passed? In this case, message or &message?
3. Why can I not run this code with fewer or more than 8 processes? If a rank does not exist, shouldn't the corresponding piece of code simply be skipped?
4. The snippet at the bottom is there in order to print the total time once, but the Waitall() does not work, and I do not understand why.
5. When passing arrays longer than 2^12 elements I get a segmentation fault, although I have checked the limits of Isend() and Irecv() and they are supposed to handle even larger messages.
6. I used long double to record the time. Is this a common or good practice? When I used smaller types like float or double I would get NaN.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int i, rank, size, ready;
    long int N = pow(2, 10);
    float* message = (float *)malloc(sizeof(float *) * N + 1);
    long double start, end;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    //MPI_Request* request = (MPI_Request *)malloc(sizeof(MPI_Request *) * size);
    MPI_Request request[size-1];
    /* Stage I: -np 8 */
    if (rank == 0) {
        for (i = 0; i < N; i++) {
            message[i] = N*rand();
            message[i] /= rand();
        }
        start = MPI_Wtime();
        MPI_Isend(&message, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &request[0]);
        MPI_Isend(&message, N, MPI_FLOAT, 2, 0, MPI_COMM_WORLD, &request[1]);
        MPI_Isend(&message, N, MPI_FLOAT, 4, 0, MPI_COMM_WORLD, &request[3]);
        printf("Processor root-rank %d- sent the message...\n", rank);
    }
    if (rank == 1) {
        MPI_Irecv(&message, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &request[0]);
        MPI_Wait(&request[0], MPI_STATUS_IGNORE);
        printf("Processor rank 1 received the message.\n");
        MPI_Isend(&message, N, MPI_FLOAT, 3, 0, MPI_COMM_WORLD, &request[2]);
        MPI_Isend(&message, N, MPI_FLOAT, 5, 0, MPI_COMM_WORLD, &request[4]);
    }
    if (rank == 2) {
        MPI_Irecv(&message, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &request[1]);
        MPI_Wait(&request[1], MPI_STATUS_IGNORE);
        printf("Processor rank 2 received the message.\n");
        MPI_Isend(&message, N, MPI_FLOAT, 6, 0, MPI_COMM_WORLD, &request[5]);
    }
    if (rank == 3) {
        MPI_Irecv(&message, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &request[2]);
        MPI_Wait(&request[2], MPI_STATUS_IGNORE);
        printf("Processor rank 3 received the message.\n");
        MPI_Isend(&message, N, MPI_FLOAT, 7, 0, MPI_COMM_WORLD, &request[6]);
    }
    if (rank == 4) {
        MPI_Irecv(&message, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &request[3]);
        MPI_Wait(&request[3], MPI_STATUS_IGNORE);
        printf("Processor rank 4 received the message.\n");
    }
    if (rank == 5) {
        MPI_Irecv(&message, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &request[4]);
        MPI_Wait(&request[4], MPI_STATUS_IGNORE);
        printf("Processor rank 5 received the message.\n");
    }
    if (rank == 6) {
        MPI_Irecv(&message, N, MPI_FLOAT, 2, 0, MPI_COMM_WORLD, &request[5]);
        MPI_Wait(&request[5], MPI_STATUS_IGNORE);
        printf("Processor rank 6 received the message.\n");
    }
    if (rank == 7) {
        MPI_Irecv(&message, N, MPI_FLOAT, 3, 0, MPI_COMM_WORLD, &request[6]);
        MPI_Wait(&request[6], MPI_STATUS_IGNORE);
        printf("Processor rank 7 received the message.\n");
    }
    /*MPI_Testall(size-1, request, &ready, MPI_STATUS_IGNORE);*/
    /*if (ready) {*/
    end = MPI_Wtime();
    printf("Total Time: %Lf\n", end - start);
    /*}*/
    MPI_Finalize();
}
1. Each MPI task runs in its own address space, so there is no correlation between request[1] on rank 0 and request[1] on rank 2. That means you do not have to "pair" the requests. That being said, if you think "pairing" the requests improves the readability of your code, you might want to do so even if this is not required.
2. The buffer parameter of MPI_Isend() and MPI_Irecv() is a pointer to the start of the data, which is message (and not &message) here.
3. If you run with, say, 2 MPI tasks, MPI_Send(..., dest=2, ...) on rank 0 will fail because 2 is an invalid rank in the MPI_COMM_WORLD communicator.
4. Many requests are uninitialized when MPI_Waitall() (well, MPI_Testall() here) is invoked. One option is to first initialize all of them to MPI_REQUEST_NULL.
5. Using &message results in memory corruption, and that likely explains the crash.
6. From the MPI standard, the prototype is double MPI_Wtime(), so you'd rather use double here (the NaN values likely come from the memory corruption described above).
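A minimal sketch of those fixes, reusing the question's variable names; only the changed lines are shown, and the request array size of 7 simply matches the fixed 8-rank layout:

    float *message = malloc(sizeof(float) * N);   /* sizeof(float), not sizeof(float *) */
    double start, end;                            /* MPI_Wtime() returns double */
    MPI_Request request[7];
    for (i = 0; i < 7; i++)
        request[i] = MPI_REQUEST_NULL;            /* so Testall/Waitall only sees valid requests */

    /* pass the buffer itself, not its address; same change for every Isend/Irecv */
    MPI_Isend(message, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &request[0]);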

Incorrect buffer size in MPI

I am getting the following error output while executing MPI_Recv:
MPI_Recv(buf=0x000000D62C56FC60, count=1, MPI_INT, src=3, tag=0, MPI_COMM_WORLD, status=0x0000000000000001) failed
Message truncated; 8 bytes received but buffer size is 4
My function needs to find the number of the row that has the maximum element at position ind.
My function's code is found below:
int find_row(Matr matr, int ind)
{
    int max = ind;
    for (int i = ind + 1 + CurP; i < N; i += Pnum)
        if (matr[i][ind] > matr[max][ind])
            max = i;
    int ans = max;
    if (CurP != 0)
    {
        MPI_Send(&max, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD);
    }
    else
    {
        MPI_Barrier(MPI_COMM_WORLD);
        for (int i = 1; i < Pnum; i++)
        {
            MPI_Recv(&max, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("max %d %Lf! Process %d;\n", max, matr[max][ind], i);
            fflush(stdout);
            if (matr[max][ind] > matr[ans][ind])
                ans = max;
        }
    }
    return ans;
}
Matr is the following type definition: typedef vector<vector<long double> >& Matr;
CurP and Pnum are initialized in the following way:
MPI_Comm_size(MPI_COMM_WORLD, &Pnum);
MPI_Comm_rank(MPI_COMM_WORLD, &CurP);
Please help me solve this issue. Thanks!
It was my mistake. In another part of my code I was executing MPI_Bcast from only some of the processes, not all of them.
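For reference (this is not part of the original code): MPI_Bcast, like every collective, must be called by all ranks of the communicator with a matching amount of data. A minimal sketch of the usual pattern, with an illustrative buffer name:

    double denom[120];
    /* every rank makes the same call; only the root's buffer contents matter on entry */
    MPI_Bcast(denom, 120, MPI_DOUBLE, 0, MPI_COMM_WORLD);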

MPI: How to use MPI_Win_allocate_shared properly

I would like to use a shared memory between processes. I tried MPI_Win_allocate_shared but it gives me a strange error when I execute the program:
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT
Here's my source:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include "mpi.h"

int main(int argc, char *argv[]);
void pt(int t[], int s);

int main(int argc, char *argv[])
{
    int rank, size, shared_elem = 0, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Win win;
    int *shared;
    if (rank == 0) shared_elem = size;
    MPI_Win_allocate_shared(shared_elem*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &shared, &win);
    if (rank == 0)
    {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, MPI_MODE_NOCHECK, win);
        for (i = 0; i < size; i++)
        {
            shared[i] = -1;
        }
        MPI_Win_unlock(0, win);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    int *local = (int *)malloc(size * sizeof(int));
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for (i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("processus %d (avant): ", rank);
    pt(local, size);
    MPI_Win_unlock(0, win);
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Put(&rank, 1, MPI_INT, 0, rank, 1, MPI_INT, win);
    MPI_Win_unlock(0, win);
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for (i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("processus %d (apres): ", rank);
    pt(local, size);
    MPI_Win_unlock(0, win);
    MPI_Win_free(&win);
    MPI_Free_mem(shared);
    MPI_Free_mem(local);
    MPI_Finalize();
    return 0;
}

void pt(int t[], int s)
{
    int i = 0;
    while (i < s)
    {
        printf("%d ", t[i]);
        i++;
    }
    printf("\n");
}
I get the following result:
processus 0 (avant): -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
processus 0 (apres): 0 -1 -1 -1 -1 -1 -1 -1 -1 -1
processus 4 (avant): 0 -1 -1 -1 -1 -1 -1 -1 -1 -1
processus 4 (apres): 0 -1 -1 -1 4 -1 -1 -1 -1 -1
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT - process 5
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT - process 6
Assertion failed in file ./src/mpid/ch3/include/mpid_rma_shm.h at line 592: local_target_rank >= 0
internal ABORT - process 9
Can someone please help me figure out what's going wrong & what that error means ? Thanks a lot.
MPI_Win_allocate_shared is a departure from the very abstract nature of MPI. It exposes the underlying memory organisation and allows the programs to bypass the expensive (and often confusing) MPI RMA operations and utilise the shared memory directly on systems that have such. While MPI typically deals with distributed-memory environments where ranks do not share the physical memory address space, a typical HPC system nowadays consists of many interconnected shared-memory nodes. Thus, it is possible for ranks that execute on the same node to attach to shared memory segments and communicate by sharing data instead of message passing.
MPI provides a communicator split operation that allows one to create subgroups of ranks such that the ranks in each subgroup are able to share memory:
MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, key, info, &newcomm);
On a typical cluster, this essentially groups the ranks by the nodes they execute on. Once the split is done, a shared-memory window allocation can be executed over the ranks in each newcomm. Note that for a multi-node cluster job this will result in several independent newcomm communicators and thus several shared memory windows. Ranks on one node won't (and shouldn't) be able to see the shared memory windows on other nodes.
In that regard, MPI_Win_allocate_shared is a platform-independent wrapper around the OS-specific mechanisms for shared memory allocation.
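A minimal sketch of that pattern: split MPI_COMM_WORLD by node, let node rank 0 own the segment, and have every rank query the base address of that segment (the sizes and variable names here are illustrative):

    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int noderank;
    MPI_Comm_rank(nodecomm, &noderank);

    MPI_Win win;
    int *base;   /* pointer into this rank's own (possibly zero-sized) segment */
    MPI_Aint local_size = (noderank == 0) ? 100 * sizeof(int) : 0;
    MPI_Win_allocate_shared(local_size, sizeof(int), MPI_INFO_NULL,
                            nodecomm, &base, &win);

    /* all ranks obtain a pointer to the segment owned by node rank 0 */
    MPI_Aint seg_size;
    int disp_unit;
    int *shared;
    MPI_Win_shared_query(win, 0, &seg_size, &disp_unit, &shared);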
There are several problems with this code and its usage. Some of these are mentioned in @Hristo Iliev's answer.
1. You have to run all the processes on the same node so that you have an intra-node communicator, or use the "split by shared-memory type" communicator described above.
2. You need to run this code with at least 10 processes.
3. local should be deallocated with free().
4. You should get the shared pointer from MPI_Win_shared_query.
5. shared does need to be deallocated, but I think this is taken care of by MPI_Win_free, so do not call MPI_Free_mem on it yourself.
This is the resulting code:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include "mpi.h"

int main(int argc, char *argv[]);
void pt(int t[], int s);

int main(int argc, char *argv[])
{
    int rank, size, shared_elem = 0, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Win win;
    int *shared;
    // if (rank == 0) shared_elem = size;
    // MPI_Win_allocate_shared(shared_elem*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &shared, &win);
    if (rank == 0)
    {
        /* the window size is given in bytes */
        MPI_Win_allocate_shared(size * sizeof(int), sizeof(int), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &shared, &win);
    }
    else
    {
        int disp_unit;
        MPI_Aint ssize;
        MPI_Win_allocate_shared(0, sizeof(int), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &shared, &win);
        MPI_Win_shared_query(win, 0, &ssize, &disp_unit, &shared);
    }
    if (rank == 0)
    {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, MPI_MODE_NOCHECK, win);
        for (i = 0; i < size; i++)
        {
            shared[i] = -1;
        }
        MPI_Win_unlock(0, win);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    int *local = (int *)malloc(size * sizeof(int));
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for (i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("processus %d (avant): ", rank);
    pt(local, size);
    MPI_Win_unlock(0, win);
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Put(&rank, 1, MPI_INT, 0, rank, 1, MPI_INT, win);
    MPI_Win_unlock(0, win);
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    for (i = 0; i < 10; i++)
    {
        MPI_Get(&(local[i]), 1, MPI_INT, 0, i, 1, MPI_INT, win);
    }
    printf("processus %d (apres): ", rank);
    pt(local, size);
    MPI_Win_unlock(0, win);
    MPI_Win_free(&win);
    // MPI_Free_mem(shared);
    free(local);
    // MPI_Free_mem(local);
    MPI_Finalize();
    return 0;
}

void pt(int t[], int s)
{
    int i = 0;
    while (i < s)
    {
        printf("%d ", t[i]);
        i++;
    }
    printf("\n");
}

How to wait till all processes receive particular data using MPI blocking functions?

I am distributing the adjacency matrix row by row to the processes using the MPI_Send and MPI_Recv functions. I used MPI_Barrier but the program gets stuck! How do I wait till all the processes get their part of the matrix?
/* Distribute the adjacency matrix */
if (my_rank == 0) {
    int row_start_l, row_end_l, dest_l;
    for (dest_l = 1; dest_l < comm_size; dest_l++) {
        row_start_l = dest_l*(num_nodes/comm_size);
        if (dest_l != (comm_size - 1)) {
            row_end_l = (dest_l + 1)*(num_nodes/comm_size) - 1;
        }
        else {
            row_end_l = num_nodes - 1;
        }
        for (i = row_start_l; i <= row_end_l; i++) {
            MPI_Send(&g.matrix[i][1], 1, MPI_INT, dest_l, TAG_AM_DATA, MPI_COMM_WORLD);
            // Send Adjacency matrix to appropriate destinations. You can first send the appropriate size
            MPI_Send(&g.matrix[i], (g.matrix[i][1])+2, MPI_INT, dest_l, TAG_AM_DATA, MPI_COMM_WORLD);
        }
    }
    for (j = 0; j < num_nodes; ) {
        for (k = 0; k < 120; k++) {
            if (j >= num_nodes) {
                break;
            }
            sendrecv_buffer_double[k] = g.column_denominator[j];
            j++;
        }
        MPI_Bcast(&k, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Bcast(&sendrecv_buffer_double[0], k, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }
    cnt++;
}
else {
    int recv_count;
    int recvd;
    cnt++;
    adj_matrix = (int **)malloc(my_num_rows*sizeof(int*));
    for (i = 0; i < my_num_rows; i++) {
        MPI_Recv(&recv_count, 1, MPI_INT, 0, TAG_AM_DATA, MPI_COMM_WORLD, &status);
        adj_matrix[i] = (int *)malloc((2 + recv_count)*sizeof(int));
        // Receive adjacency matrix from root.
        MPI_Recv(&adj_matrix[i], 2+recv_count, MPI_INT, 0, TAG_AM_DATA, MPI_COMM_WORLD, &status);
    }
    recvd = 0;
    while (recvd < num_nodes) {
        MPI_Bcast(&recv_count, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Bcast(&g.column_denominator[recvd], recv_count, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        recvd += recv_count;
    }
}
// Wait till all the processes have their assigned parts of the matrix.
MPI_Barrier(MPI_COMM_WORLD); // I am getting the error here
Error message:
Process 0 reading the input file.. Number of nodes = 100 Process 0 done reading..
Fatal error in PMPI_Barrier: Message truncated, error stack:
PMPI_Barrier(426)...................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier_impl(308)..............:
MPIR_Bcast_impl(1369)...............:
MPIR_Bcast_intra(1199)..............:
MPIR_Bcast_binomial(149)............:
MPIC_Recv(109)......................:
MPIDI_CH3U_Request_unpack_uebuf(605): Message truncated; 8 bytes received but buffer size is 1
I'm not too sure what your "adjacency matrix" looks like and how it must be distributed, but I guess this is a job for MPI_Scatter() rather than a series of MPI_Bcast()...
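A hedged sketch of what the row distribution could look like with a collective instead of per-row sends, assuming the matrix is flattened into a contiguous num_nodes x num_nodes int array on the root and block-partitioned by rows (flat_matrix, counts, displs and my_part are illustrative names, not from the question):

    /* counts/displs are in units of int elements; the last rank takes the remainder */
    int *counts = malloc(comm_size * sizeof(int));
    int *displs = malloc(comm_size * sizeof(int));
    int base = num_nodes / comm_size;
    for (int r = 0; r < comm_size; r++) {
        int rows = (r == comm_size - 1) ? num_nodes - r * base : base;
        counts[r] = rows * num_nodes;
        displs[r] = r * base * num_nodes;
    }
    int *my_part = malloc(counts[my_rank] * sizeof(int));
    /* one collective call replaces the per-row MPI_Send/MPI_Recv loops */
    MPI_Scatterv(flat_matrix, counts, displs, MPI_INT,
                 my_part, counts[my_rank], MPI_INT,
                 0, MPI_COMM_WORLD);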

LU Decomposition MPI

This is an MPI code for LU decomposition.
I have used the following strategy: there is a master (rank 0) and the others are slaves. The master sends rows to each slave. Since each slave might receive more than one row, I store all the received rows in a buffer and then perform the LU decomposition step on it. After doing that I send the buffer back to the master. The master does not do any computation; it just sends and receives.
for (i = 0; i < n; i++)
    map[i] = i % (numProcs-1) + 1;

for (i = 0; i < n-1; i++)
{
    if (rank == 0)
    {
        status = pivot(LU, i, n);
        for (j = 0; j < n; j++)
            row1[j] = LU[n*i+j];
    }
    MPI_Bcast(&status, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (status == -1)
        return -1;
    MPI_Bcast(row1, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    int tag1 = 1, tag2 = 2, tag3 = 3, tag4 = 4;
    if (rank == 0)
    {
        int pno, start, index, l, rowsReceived = 0;
        MPI_Request req;
        MPI_Status stat;
        for (j = i+1; j < n; j++)
            MPI_Isend(&LU[n*j], n, MPI_DOUBLE, map[j], map[j], MPI_COMM_WORLD, &req);
        if (i >= n-(numProcs-1))
            cnt++;
        for (j = 0; j < numProcs-1-cnt; j++)
        {
            MPI_Recv(&pno, 1, MPI_INT, MPI_ANY_SOURCE, tag2, MPI_COMM_WORLD, &stat);
            //printf("1. Recv from %d and j : %d and i : %d\n", pno, j, i);
            MPI_Recv(&rowsReceived, 1, MPI_INT, pno, tag3, MPI_COMM_WORLD, &stat);
            MPI_Recv(rowFinal, n*rowsReceived, MPI_DOUBLE, pno, tag4, MPI_COMM_WORLD, &stat);
            /* Will not go more than numProcs anyways */
            for (k = i+1; k < n; k++)
            {
                if (map[k] == pno)
                {
                    start = k;
                    break;
                }
            }
            for (k = 0; k < rowsReceived; k++)
            {
                index = start + k*(numProcs-1);
                for (l = 0; l < n; l++)
                    LU[n*index+l] = rowFinal[n*k+l];
            }
        }
    }
    else
    {
        int rowsReceived = 0;
        MPI_Status stat, stats[3];
        MPI_Request reqs[3];
        for (j = i+1; j < n; j++)
            if (map[j] == rank)
                rowsReceived += 1;
        for (j = 0; j < rowsReceived; j++)
        {
            MPI_Recv(&rowFinal[n*j], n, MPI_DOUBLE, 0, rank, MPI_COMM_WORLD, &stat);
        }
        for (j = 0; j < rowsReceived; j++)
        {
            double factor = rowFinal[n*j+i]/row1[i];
            for (k = i+1; k < n; k++)
                rowFinal[n*j+k] -= (row1[k]*factor);
            rowFinal[n*j+i] = factor;
        }
        if (rowsReceived != 0)
        {
            //printf("Data sent from %d iteration : %d\n", rank, i);
            MPI_Isend(&rank, 1, MPI_INT, 0, tag2, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(&rowsReceived, 1, MPI_INT, 0, tag3, MPI_COMM_WORLD, &reqs[1]);
            MPI_Isend(rowFinal, n*rowsReceived, MPI_DOUBLE, 0, tag4, MPI_COMM_WORLD, &reqs[2]);
        }
        //MPI_Waitall(3, reqs, stats);
    }
}
The problem I am facing is that sometimes the program hangs. My guess is that the sends and receives are not being matched, but I am not able to figure out where the problem lies.
I ran test cases on matrices of size 1000x1000, 2000x2000, 3000x3000, 5000x5000 and 7000x7000. Presently the code hangs for 7000x7000. Could someone please help me out?
Things to note:
map implements the mapping scheme, i.e. which row goes to which slave.
rowsReceived tells each slave the number of rows it will receive. I don't need to recalculate that every iteration, but I will fix that later.
row1 is the buffer in which the active row is stored.
rowFinal is the buffer for the rows being received and modified.
cnt is not important and can be ignored. For that, the check for rowsReceived != 0 needs to be removed.
It looks like you are never completing your nonblocking operations. You have a number of calls to MPI_Isend throughout the code, but you never do an MPI_Wait or MPI_Test (or one of the similar completion calls). Without that completion call, those nonblocking operations are not guaranteed to ever complete.
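A hedged sketch of the missing completion calls, reusing the question's variables (re-enabling the commented-out MPI_Waitall on the slave side, and keeping one request per outstanding send on the master side instead of overwriting a single req; sendReqs and nSends are illustrative names):

    /* Slave side: complete the three Isends before reusing rank,
       rowsReceived or rowFinal in the next iteration. */
    if (rowsReceived != 0)
        MPI_Waitall(3, reqs, stats);

    /* Master side: keep an array of requests and complete them all. */
    MPI_Request *sendReqs = malloc((n - i - 1) * sizeof(MPI_Request));
    int nSends = 0;
    for (j = i+1; j < n; j++)
        MPI_Isend(&LU[n*j], n, MPI_DOUBLE, map[j], map[j], MPI_COMM_WORLD, &sendReqs[nSends++]);
    /* ... receive the updated rows as before ... */
    MPI_Waitall(nSends, sendReqs, MPI_STATUSES_IGNORE);
    free(sendReqs);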

Resources