Receiving data in slaves with MPI_Comm_spawn and MPI_Scatter in C

I'm trying to implement the following scenario using MPI_Comm_spawn and MPI_Scatter:
1- The master spawns 2 processes with a job.
2- It scatters an array to those spawned processes.
3- The spawned processes receive their part of the array, sort it, then send it back.
4- The master receives the sorted parts of the array.
I'd like to know how to do step 2. So far I've tried it with sends and receives, which work fine, but I want to do it with the scatter function.
Edit: Here's what I'd like to do in the master code; I'm missing the part in the slaves where I receive the scattered array.
/*Master Here*/
MPI_Comm_spawn(slave, MPI_ARGV_NULL, 2, MPI_INFO_NULL,0, MPI_COMM_WORLD, &inter_comm, array_of_errcodes);
printf("MASTER Sending a message to slaves \n");
MPI_Send(message, 50, MPI_CHAR, 0, tag, inter_comm);
MPI_Scatter(array, 10, MPI_INT, &array_r, 10, MPI_INT, MPI_ROOT, inter_comm);
Thanks.

master.c
#include "mpi.h"
int main(int argc, char *argv[])
{
int n_spawns = 2;
MPI_Comm intercomm;
MPI_Init(&argc, &argv);
MPI_Comm_spawn("worker_program", MPI_ARGV_NULL, n_spawns, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
int sendbuf[2] = {3, 5};
int recvbuf; // redundant for master.
MPI_Scatter(sendbuf, 1, MPI_INT, &recvbuf, 1, MPI_INT, MPI_ROOT, intercomm);
MPI_Finalize();
return 0;
}
worker.c
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
MPI_Init(&argc, &argv);
MPI_Comm intercomm;
MPI_Comm_get_parent(&intercomm);
int sendbuf[2]; // redundant for worker.
int recvbuf;
MPI_Scatter(sendbuf, 1, MPI_INT, &recvbuf, 1, MPI_INT, 0, intercomm);
printf("recvbuf = %d\n", recvbuf);
MPI_Finalize();
return 0;
}
Command line
mpicc master.c -o master_program
mpicc worker.c -o worker_program
mpirun -n 1 master_program
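If you also want steps 3 and 4 from the question (each worker sorts its chunk and the master collects the sorted chunks), the same intercommunicator can be reused with MPI_Gather. Below is a rough sketch of the worker side only; the file name, the chunk size of 10 ints (taken from the question's fragment) and the qsort comparator are illustrative, not part of the original code.
worker_sort.c
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 10 // assumed chunk size, matching the 10 ints per process in the question

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    MPI_Comm intercomm;
    MPI_Comm_get_parent(&intercomm);

    int chunk[CHUNK];

    // Step 2: receive this worker's part of the array (root 0 is the master in the remote group).
    MPI_Scatter(NULL, 0, MPI_INT, chunk, CHUNK, MPI_INT, 0, intercomm);

    // Step 3: sort the local chunk.
    qsort(chunk, CHUNK, sizeof(int), cmp_int);

    // Step 4: send the sorted chunk back; the master gathers it on its side.
    MPI_Gather(chunk, CHUNK, MPI_INT, NULL, 0, MPI_INT, 0, intercomm);

    MPI_Finalize();
    return 0;
}
On the master side the matching calls would be MPI_Scatter(array, 10, MPI_INT, NULL, 0, MPI_INT, MPI_ROOT, intercomm) followed by MPI_Gather(NULL, 0, MPI_INT, sorted, 10, MPI_INT, MPI_ROOT, intercomm), with array holding 20 ints and sorted receiving the two sorted halves.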

Related

MPI_Test returning true flags for requests despite never sending?

I have some code from which, for testing purposes, I removed all sends, so it only has non-blocking receives. You can imagine my surprise when, using MPI_Test, the flags indicated that some of the requests had actually completed. My code is set up on a Cartesian grid, with a small replica below, although this doesn't reproduce the error:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> // for sleep
#include <mpi.h>

void test(int pos);

MPI_Comm comm_cart;

int main(int argc, char *argv[])
{
    int i, j;
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* code for mpi cartesian grid topology */
    int dim[1];
    dim[0] = 2;
    int periods[1];
    periods[0] = 0;
    int reorder = 1;
    int coords[1];
    MPI_Cart_create(MPI_COMM_WORLD, 1, dim, periods, reorder, &comm_cart);
    MPI_Cart_coords(comm_cart, rank, 1, coords); // maxdims must match the 1-D topology and the size of coords

    test(coords[0]);

    MPI_Finalize();
    return (0);
}

void test(int pos)
{
    float placeholder[4];
    int other = (pos + 1) % 2;
    MPI_Request reqs[8];
    int flags[4];

    for (int iter = 0; iter < 20; iter++) {
        // Test requests from previous time cycle
        for (int i = 0; i < 4; i++) {
            if (iter == 0) break;
            MPI_Test(&reqs[0], &flags[0], MPI_STATUS_IGNORE);
            printf("Flag: %d\n", flags[0]);
        }
        MPI_Irecv(&placeholder[0], 1, MPI_FLOAT, other, 0, comm_cart, &reqs[0]);
    }
}
Any help would be appreciated.
The issue is with MPI_Test and MPI_PROC_NULL. Quite often when using MPI_Cart_shift you end up with MPI_PROC_NULL neighbours, because if you're on the edge of the grid, a neighbouring cell simply doesn't exist in some directions.
I couldn't find this behaviour documented at first, but it is in fact what the MPI standard specifies for null processes: a receive from MPI_PROC_NULL succeeds and returns as soon as possible. So when you do an MPI_Irecv with an MPI_PROC_NULL source, it completes instantly, and when tested using MPI_Test the flag returns true for a completed request. Example code below:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int t;
    int flag;
    MPI_Request req;

    MPI_Irecv(&t, 1, MPI_INT, MPI_PROC_NULL, 0, MPI_COMM_WORLD, &req);
    MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
    printf("Flag: %d\n", flag);

    MPI_Finalize();
    return (0);
}
Which returns the following when run:
Flag: 1
Flag: 1
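To tie this back to the grid code: the sketch below (not from the original post; all names are illustrative) shows how MPI_Cart_shift returns MPI_PROC_NULL for the missing neighbours at the edge of a non-periodic 1-D grid, so receives posted against those neighbours complete immediately while receives from real neighbours stay pending.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Non-periodic 1-D grid with one cell per process.
    int dim[1], periods[1] = {0}, reorder = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &dim[0]);
    MPI_Comm comm_cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dim, periods, reorder, &comm_cart);

    // Left/right neighbours; at the ends of the grid these come back as MPI_PROC_NULL.
    int left, right;
    MPI_Cart_shift(comm_cart, 0, 1, &left, &right);

    float buf[2];
    MPI_Request reqs[2];
    MPI_Irecv(&buf[0], 1, MPI_FLOAT, left, 0, comm_cart, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_FLOAT, right, 0, comm_cart, &reqs[1]);

    int flags[2];
    MPI_Test(&reqs[0], &flags[0], MPI_STATUS_IGNORE);
    MPI_Test(&reqs[1], &flags[1], MPI_STATUS_IGNORE);

    // On the boundary ranks one neighbour is MPI_PROC_NULL, so that flag is already 1.
    printf("rank %d: left=%d right=%d flags=%d,%d\n", rank, left, right, flags[0], flags[1]);

    // Cancel and complete any receives that are still pending so MPI_Finalize can return.
    for (int i = 0; i < 2; i++) {
        if (!flags[i]) {
            MPI_Cancel(&reqs[i]);
            MPI_Wait(&reqs[i], MPI_STATUS_IGNORE);
        }
    }

    MPI_Finalize();
    return 0;
}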

The system cannot find the path specified in MPI

I am writing a sample MPI program in which one process sends an integer to another process.
This is my source code
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Find out rank, size
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int number;
    if (world_rank == 0) {
        number = -1;
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (world_rank == 1) {
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received number %d from process 0\n", number);
    }

    MPI_Finalize();
    return 0;
}
And this is the error I get when I run mpiexec in the Windows command line:
ERROR: Error reported: failed to set work directory to 'D:\study_documents\Thesis\Nam 4\Demo\Sample Codes\MPI_HelloWorld\Debug' on DESKTOP-EKN1RD3
Error (3) The system cannot find the path specified.

Sending data to a given communicator

I wonder if it is possible to send data to a third-party communicator by knowing only its integer value.
In other words, I would like to send an MPI_Comm (as an int) to processes in an unrelated communicator in order to establish communication between them.
For this purpose, I have written test code that tries to transfer the MPI_Comm.
parent.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    MPI_Comm children;
    int err[world_size], msg;
    MPI_Comm_spawn("./children", NULL, 2, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &children, err);

    if (world_rank == 0) {
        MPI_Send(&children, 1, MPI_INT, 0, 0, children);
        MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, children, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return (0);
}
children.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    int comm, msg = 123;
    if (world_rank == 0) {
        MPI_Recv(&comm, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, parent, MPI_STATUS_IGNORE);
        MPI_Comm children = (MPI_Comm) comm;
        MPI_Send(&msg, 1, MPI_INT, 0, 0, children);
    }

    MPI_Finalize();
    return (0);
}
Of course, the code does not work due to:
Fatal error in MPI_Send: Invalid communicator, error stack
Is there any way to establish that connection?
PS: This is an example, and I know that if I used "parent" comm in the send of "children.c", it would work. But my intention is to send data to a 3rd party communicator with only its integer id.
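For reference, an MPI_Comm is a local, opaque handle, so its integer value has no meaning in another process; the usual way to link two previously unrelated groups is the port mechanism (MPI_Open_port / MPI_Comm_accept on one side, MPI_Comm_connect on the other). Below is a minimal sketch, assuming the port name is passed out of band (here the client simply takes it as a command-line argument); file names and the message content are illustrative.
server.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Open a port and print its name so it can be handed to the client out of band.
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("port: %s\n", port_name);

    // Block until a client connects; the result is an intercommunicator.
    MPI_Comm client;
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

    int msg;
    MPI_Recv(&msg, 1, MPI_INT, 0, 0, client, MPI_STATUS_IGNORE);
    printf("server received %d\n", msg);

    MPI_Comm_disconnect(&client);
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}
client.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // argv[1] is the port name printed by the server.
    MPI_Comm server;
    MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);

    int msg = 123;
    MPI_Send(&msg, 1, MPI_INT, 0, 0, server);

    MPI_Comm_disconnect(&server);
    MPI_Finalize();
    return 0;
}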

MPI_Waitall hangs

I have this MPI program which hangs without completing. Any idea where it is going wrong? Maybe I am missing something, but I cannot think of a possible issue with the code. Changing the order of the send and receive doesn't help either. (But I am guessing any order would do, given the non-blocking nature of the calls.)
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    int p = 2;
    int myrank;
    double in_buf[1];
    double out_buf[1];
    MPI_Comm comm = MPI_COMM_WORLD;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(comm, &myrank);
    MPI_Comm_size(comm, &p);

    MPI_Request requests[2];
    MPI_Status statuses[2];

    if (myrank == 0) {
        MPI_Isend(out_buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &requests[0]);
        MPI_Irecv(in_buf, 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &requests[1]);
    } else {
        MPI_Irecv(in_buf, 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &requests[0]);
        MPI_Isend(out_buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &requests[1]);
    }

    MPI_Waitall(2, requests, statuses);
    printf("Done...\n");
}
Right off the bat it looks like your tags and ranks are mismatched.
You post the Isend with tag=0 but the matching Irecv with tag=1, so they can never match. The ranks are off too: rank 0 sends to destination 0 (itself) and rank 1 receives from source 1 (itself), so neither receive has a matching send.
I assume you launch two processes as well, right? The int p = 2 doesn't do anything useful, since MPI_Comm_size overwrites it.
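A possible corrected version, as a sketch (each rank talks to its partner using the same tag on both ends; run with exactly two processes):
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int myrank;
    double in_buf[1] = {0.0};
    double out_buf[1] = {42.0};
    MPI_Request requests[2];
    MPI_Status statuses[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    int other = 1 - myrank; // partner rank, assuming exactly two processes

    // Send to the partner and receive from the partner, using the same tag on both ends.
    MPI_Isend(out_buf, 1, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &requests[0]);
    MPI_Irecv(in_buf, 1, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &requests[1]);

    MPI_Waitall(2, requests, statuses);
    printf("Done...\n");

    MPI_Finalize();
    return 0;
}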

Is MPI_Alltoall used correctly?

I intend to achieve a simple task using MPI collective communication but being new to MPI, I have found the collective routines somewhat non-intuitive. I have 4 slaves, each of which must read a unique string and send the string to all the other slaves.
I looked into MPI_Bcast, MPI_Scatter, and MPI_Alltoall. I settled on MPI_Alltoall, but the program ends with bad termination.
The program is:
#include <stdio.h>
#include <string.h>
#include "mpi.h"

void createSlavesCommunicator(MPI_Comm *SLAVES_WORLD);

int main(int argc, char *argv[])
{
    int my_rank, num_workers;
    MPI_Comm SLAVES_WORLD;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_workers);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    createSlavesCommunicator(&SLAVES_WORLD);

    char send_msg[20], recv_buf[20];
    sprintf(send_msg, "test string %d", my_rank);

    MPI_Alltoall(send_msg, strlen(send_msg), MPI_CHAR, recv_buf, 20, MPI_CHAR, MPI_COMM_WORLD);
    printf("slave %d recvd message %s\n", my_rank, recv_buf);

    MPI_Finalize();
    return 0;
}

void createSlavesCommunicator(MPI_Comm *SLAVES_WORLD)
{
    MPI_Group SLAVES_GROUP, MPI_COMM_GROUP;
    int ranks_to_excl[1];
    ranks_to_excl[0] = 0;

    MPI_Comm_group(MPI_COMM_WORLD, &MPI_COMM_GROUP);
    MPI_Group_excl(MPI_COMM_GROUP, 1, ranks_to_excl, &SLAVES_GROUP);
    MPI_Comm_create(MPI_COMM_WORLD, SLAVES_GROUP, SLAVES_WORLD);
}
MPI_Alltoall() sends messages from everyone to everyone. The input and output buffers need to be much larger than 20 chars if each process sends 20 chars to every other process. Starting from your code, here is how MPI_Alltoall() works:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int my_rank, num_workers;
    MPI_Comm SLAVES_WORLD;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_workers);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    //createSlavesCommunicator(&SLAVES_WORLD);

    char send_msg[20*num_workers], recv_buf[20*num_workers];
    int i;
    for (i = 0; i < num_workers; i++) {
        sprintf(&send_msg[i*20], "test from %d to %d", my_rank, i);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    //MPI_Alltoall(send_msg, strlen(send_msg), MPI_CHAR, recv_buf, 20, MPI_CHAR, MPI_COMM_WORLD);
    MPI_Alltoall(send_msg, 20, MPI_CHAR, recv_buf, 20, MPI_CHAR, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);

    for (i = 0; i < num_workers; i++) {
        printf("slave %d recvd message %s\n", my_rank, &recv_buf[20*i]);
    }

    MPI_Finalize();
    return 0;
}
Looking at your question, it seems that MPI_Allgather() is the function that would do the trick...
"The block of data sent from the jth process is received by every process and placed in the jth block of the buffer recvbuf. "
http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Allgather.html
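For completeness, here is a sketch of the same exchange using MPI_Allgather(), where every rank contributes one fixed-size 20-char block and receives one block from every rank (the buffer sizes and message text are illustrative):
#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int my_rank, num_workers;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_workers);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // One 20-char block per rank: the send buffer holds this rank's block,
    // the receive buffer holds one block from every rank.
    char send_msg[20];
    char recv_buf[20 * num_workers];
    snprintf(send_msg, sizeof(send_msg), "test string %d", my_rank);

    MPI_Allgather(send_msg, 20, MPI_CHAR, recv_buf, 20, MPI_CHAR, MPI_COMM_WORLD);

    for (int i = 0; i < num_workers; i++)
        printf("slave %d recvd message %s\n", my_rank, &recv_buf[20 * i]);

    MPI_Finalize();
    return 0;
}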
Bye,
Francis
