MPI send and receive in a nested loop

I have a nested loop, and from inside it I call MPI_Send to send a specific value to a receiver; the receiver then takes that data and sends further MPI messages on to another set of CPUs. I used something like the code below, but there seems to be a problem in the receive and I can't see where I went wrong: the machine goes into an infinite loop somewhere.
I am trying to make it work like this:
master CPU >> send to other CPUs >> send to slave CPUs
.
.
.
int currentCombinationsCount;
int mp;

if (rank == 0)
{
    for (int pr = 0; pr < combinationsSegmentSize; pr++)
    {
        int CblockBegin = CombinationsSegementsBegin[pr];
        int CblockEnd   = CombinationsSegementsEnd[pr];
        currentCombinationsCount = numOfCombinationsEachLoop[pr];
        prossessNum = 1; // specify which processor we are sending to

        // now substitute and send to the main processors
        for (mp = CblockBegin; mp <= CblockEnd; mp++)
        {
            MPI_Send(&mp, 1, MPI_INT, prossessNum, TAG, MPI_COMM_WORLD);
            prossessNum++;
        }
    } // this loop goes through all the specified blocks for the combinations
} // end of rank 0
else if (rank > currentCombinationsCount)
{
    // here I want to put other receives that will take values from the else below
}
else
{
    MPI_Recv(&mp, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &stat);
    // the code gets stuck here in an infinite loop
}

You've only initialised currentCombinationsCount within the if (rank == 0) branch, so all other procs see an uninitialised variable. That is undefined behaviour, and the outcome depends on your compiler: your program may crash, the value may happen to be 0, or it may be some arbitrary value.
If you're lucky, the value may be set to 0, in which case your branching reduces to:
if (rank == 0) { /* rank == 0 will enter this */ }
else if (rank > 0) { /* all other procs enter this */ }
else { /* never entered! Recvs are never called to match the sends */ }
You therefore end up with sends that are not matched by any receives. Since MPI_Send is potentially blocking, the sending proc may stall indefinitely. With procs blocking on sends, it can certainly look as though "...the machine goes to infinite loop somewhere...".
If currentCombinationsCount holds an arbitrary value instead of 0, then the rank != 0 procs will enter arbitrary branches (with a higher chance of all entering the final else). You can then end up with the second set of receives never being called, resulting in the same issue as above.
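One way to remove the ambiguity is to make sure every rank agrees on currentCombinationsCount before branching on it, for example by broadcasting it from rank 0. A minimal sketch only; the exact placement relative to your outer pr loop is up to you, and numOfCombinationsEachLoop, pr, TAG and stat are taken from your snippet:

int currentCombinationsCount = 0;
if (rank == 0)
    currentCombinationsCount = numOfCombinationsEachLoop[pr];   /* as in your outer loop */

/* every rank now sees the same value before the if/else below */
MPI_Bcast(&currentCombinationsCount, 1, MPI_INT, 0, MPI_COMM_WORLD);

if (rank == 0) {
    /* ... sends as before ... */
} else if (rank > currentCombinationsCount) {
    /* ranks that get no work this round */
} else {
    MPI_Recv(&mp, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &stat);
}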

Related

I want to check timeout response

With general TCP communication there is usually a way to check whether the received data arrives within a certain time, but there is no CAPL API for this, so I want to add that logic myself (this code is written in a network node, not a test module).
To explain the code below: if there is no error, the data is read from gTcpRxBuffer.
I want to add the time-related logic to this part.
long TcpRecv(dword socket)
{
  int result = 0;

  result = TcpReceive(socket, gTcpRxBuffer, elcount(gTcpRxBuffer));
  if (0 != result)
  {
    gIpLastErr = IpGetLastSocketError(socket);
    if (WSA_IO_PENDING != gIpLastErr)
    {
      IpGetLastSocketErrorAsString(socket, gIpLastErrStr, elcount(gIpLastErrStr));
      writelineex(0, 2, "TcpReceive error (%d): %s", gIpLastErr, gIpLastErrStr);
    }
  }
  else
  {
    sysGetVariableString(sysvar::TCPIP::TcpData, gTcpRxBuffer, elcount(gTcpRxBuffer));
  }
  return result;
}

Begin Transmission and Receiving Byte using I2C, PSOC

I'm new to the PSoC board and I'm trying to read the x, y, z values from a digital compass, but I'm having a problem beginning the transmission with the compass itself.
I found an Arduino tutorial online here, but since PSoC doesn't have that library I can't duplicate the code.
I was also reading the HMC5883L datasheet here; I'm supposed to write bytes to the compass and obtain the values, but I was unable to receive anything. All the values I received are zero, which might be caused by reading from the wrong address.
Hoping for your answer soon.
PSoC is sorta tricky when you are first starting out with it. You need to carefully read the documentation of both the device you want to talk to and the I2C module itself.
The datasheet for the device you linked states this on page 18:
All bus transactions begin with the master device issuing the start sequence followed by the slave address byte. The address byte contains the slave address; the upper 7 bits (bits 7-1), and the Least Significant bit (LSb). The LSb of the address byte designates if the operation is a read (LSb=1) or a write (LSb=0). At the 9th clock pulse, the receiving slave device will issue the ACK (or NACK). Following these bus events, the master will send data bytes for a write operation, or the slave will clock out data with a read operation. All bus transactions are terminated with the master issuing a stop sequence.
If you use the I2C_MasterWriteBuf function, it wraps all that stuff the HMC's datasheet states above. The start command, dealing with that ack, the data handling, etc. The only thing you need to specify is how to transmit it.
If you refer to PSoC's I2C module datasheet, the MasterWriteBuf function takes in the device address, a pointer to the data you want to send, how many bytes you want to send, and a "mode". The docs list what the various transfer modes do:
I2C_MODE_COMPLETE_XFER Perform complete transfer from Start to Stop.
I2C_MODE_REPEAT_START Send Repeat Start instead of Start.
I2C_MODE_NO_STOP Execute transfer without a Stop
The I2C_MODE_COMPLETE_XFER transfer will send the start and stop commands for you, if I'm not mistaken.
You can also "bit-bang" this if you want, by calling directly on I2C_MasterSendStart, WriteByte, SendStop, etc., but it's just easier to call on their WriteBuf functions.
Pretty much you need to write your code like follows:
// fill in your data or pass in the buffer of data you want to write
// if this is contained in a function call. I'm basing this off of HMC's docs
uint8 writeBuffer[3];
uint8 readBuffer[6];

writeBuffer[0] = 0x3C;
writeBuffer[1] = 0x00;
writeBuffer[2] = 0x70;
I2C_MasterWriteBuf(HMC_SLAVE_ADDRESS, writeBuffer, 3, I2C_MODE_COMPLETE_XFER);
while ((I2C_MasterStatus() & I2C_MSTAT_WR_CMPLT) == 0u)
{
    // wait for operation to finish
}

writeBuffer[1] = 0x01;
writeBuffer[2] = 0xA0;
I2C_MasterWriteBuf(HMC_SLAVE_ADDRESS, writeBuffer, 3, I2C_MODE_COMPLETE_XFER);
// wait for operation to finish

writeBuffer[1] = 0x02;
writeBuffer[2] = 0x00;
I2C_MasterWriteBuf(HMC_SLAVE_ADDRESS, writeBuffer, 3, I2C_MODE_COMPLETE_XFER);
// wait for operation to finish

CyDelay(6); // docs state 6ms delay before you can start looping around to read

for (;;)
{
    writeBuffer[0] = 0x3D;
    writeBuffer[1] = 0x06;
    I2C_MasterWriteBuf(HMC_SLAVE_ADDRESS, writeBuffer, 2, I2C_MODE_COMPLETE_XFER);
    // wait for operation to finish

    // Docs don't state any different sort of bus transactions for reads.
    // I'm assuming it'll be the same as a write
    I2C_MasterReadBuf(HMC_SLAVE_ADDRESS, readBuffer, 6, I2C_MODE_COMPLETE_XFER);
    // wait for operation to finish, wait on I2C_MSTAT_RD_CMPLT instead of WR_CMPLT

    // You should have something in readBuffer to work with
    CyDelay(67); // docs state to wait 67ms before reading again
}
I just sorta wrote that off the top of my head. I have no idea if that'll work or not, but I think that should be a good place to start and try. They have I2C example projects to look at also I think.
Another thing to look at, so the WriteBuf function doesn't just seem like some magical command: if you right-click on the MasterWriteBuf function and click on "Find Definition" (after you build the project), it'll show you what it's doing.
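For completeness, here is roughly what the lower-level route mentioned above (I2C_MasterSendStart / WriteByte / SendStop) could look like for a single register write. This is only a sketch: the exact function names, signatures and constants (e.g. I2C_WRITE_XFER_MODE) depend on your I2C component instance name and version, so check them against the generated API header before using them.

uint8 status;

status = I2C_MasterSendStart(HMC_SLAVE_ADDRESS, I2C_WRITE_XFER_MODE); // start + address + write bit
if (status == I2C_MSTR_NO_ERROR)
{
    I2C_MasterWriteByte(0x00); // register pointer, e.g. Configuration Register A
    I2C_MasterWriteByte(0x70); // value to write
}
I2C_MasterSendStop();          // always release the bus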
Following are samples for I2C read and write operations on PSoC.
Simple write operation:
// Dummy data values to write
uint8 writebuffer[3];
writebuffer[0] = 0x23;
writebuffer[1] = 0xEF;
writebuffer[2] = 0x0F;
uint8 I2C_MasterWrite(uint8 slaveAddr, uint8 nbytes)
{
    uint8 volatile status;

    status = I2C_MasterClearStatus();
    if (!(status & I2C_MSTAT_ERR_XFER))
    {
        status = I2C_MasterWriteBuf(slaveAddr, writebuffer, nbytes, I2C_MODE_COMPLETE_XFER);
        if (status == I2C_MSTR_NO_ERROR)
        {
            /* wait for write complete and no error */
            do
            {
                status = I2C_MasterStatus();
            } while ((status & (I2C_MSTAT_WR_CMPLT | I2C_MSTAT_ERR_XFER)) == 0u);
        }
        else
        {
            /* translate from I2C_MasterWriteBuf() error output to
             * I2C_MasterStatus() error output */
            status = I2C_MSTAT_ERR_XFER;
        }
    }
    return status;
}
Read Operation:
void I2C_MasterRead(uint8 slaveaddress, uint8 nbytes)
{
    uint8 volatile status;

    status = I2C_MasterClearStatus();
    if (!(status & I2C_MSTAT_ERR_XFER))
    {
        /* Then do the read */
        status = I2C_MasterClearStatus();
        if (!(status & I2C_MSTAT_ERR_XFER))
        {
            status = I2C_MasterReadBuf(slaveaddress, readbuffer, nbytes, I2C_MODE_COMPLETE_XFER);
            if (status == I2C_MSTR_NO_ERROR)
            {
                /* wait for reading complete and no error */
                do
                {
                    status = I2C_MasterStatus();
                } while ((status & (I2C_MSTAT_RD_CMPLT | I2C_MSTAT_ERR_XFER)) == 0u);

                if (!(status & I2C_MSTAT_ERR_XFER))
                {
                    /* Decrement all RW bytes in the EZI2C buffer, by different values */
                    for (uint8 i = 0u; i < nbytes; i++)
                    {
                        readbuffer[i] -= (i + 1);
                    }
                }
            }
            else
            {
                /* translate from I2C_MasterReadBuf() error output to
                 * I2C_MasterStatus() error output */
                status = I2C_MSTAT_ERR_XFER;
            }
        }
    }

    if (status & I2C_MSTAT_ERR_XFER)
    {
        /* add error handler code here */
    }
}

What is the right way to "notify" processors without blocking?

Suppose I have a very large array of things and I have to do some operation on all these things.
In case the operation fails for one element, I want to stop the work across the whole array [the work is distributed across a number of processors].
I want to achieve this while keeping the number of sent/received messages to a minimum.
Also, I don't want to block processors if there is no need to.
How can I do it using MPI?
This seems to be a common question with no easy answer. Both other answers have scalability issues. The ring-communication approach has linear communication cost, while in the one-sided MPI_Win solution a single process will be hammered with memory requests from all workers. This may be fine for a low number of ranks, but poses issues as the rank count increases.
Non-blocking collectives can provide a more scalable solution. The main idea is to post an MPI_Ibarrier on all ranks except one designated root. This root listens for point-to-point stop messages via MPI_Irecv and completes the MPI_Ibarrier once it receives one.
The tricky part is that there are four different cases, "{root, non-root} x {found, not-found}", that need to be handled. Also, it can happen that multiple ranks send a stop message, requiring an unknown number of matching receives on the root. That can be solved with an additional reduction that counts the number of ranks that sent a stop request.
Here is an example of how this can look in C:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

const int iter_max = 10000;
const int difficulty = 20000;

int find_stuff()
{
    int num_iters = rand() % iter_max;
    for (int i = 0; i < num_iters; i++) {
        if (rand() % difficulty == 0) {
            return 1;
        }
    }
    return 0;
}

const int stop_tag = 42;
const int root = 0;

int forward_stop(MPI_Request* root_recv_stop, MPI_Request* all_recv_stop, int found_count)
{
    int flag;
    MPI_Status status;
    if (found_count == 0) {
        MPI_Test(root_recv_stop, &flag, &status);
    } else {
        // If we find something on the root, we actually wait until we receive our own message.
        MPI_Wait(root_recv_stop, &status);
        flag = 1;
    }
    if (flag) {
        printf("Forwarding stop signal from %d\n", status.MPI_SOURCE);
        MPI_Ibarrier(MPI_COMM_WORLD, all_recv_stop);
        MPI_Wait(all_recv_stop, MPI_STATUS_IGNORE);
        // We must post some additional receives if multiple ranks found something at the same time
        MPI_Reduce(MPI_IN_PLACE, &found_count, 1, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
        for (found_count--; found_count > 0; found_count--) {
            MPI_Recv(NULL, 0, MPI_CHAR, MPI_ANY_SOURCE, stop_tag, MPI_COMM_WORLD, &status);
            printf("Additional stop from: %d\n", status.MPI_SOURCE);
        }
        return 1;
    }
    return 0;
}

int main()
{
    MPI_Init(NULL, NULL);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    srand(rank);

    MPI_Request root_recv_stop;
    MPI_Request all_recv_stop;
    if (rank == root) {
        MPI_Irecv(NULL, 0, MPI_CHAR, MPI_ANY_SOURCE, stop_tag, MPI_COMM_WORLD, &root_recv_stop);
    } else {
        // You may want to use an extra communicator here, to avoid messing with other barriers
        MPI_Ibarrier(MPI_COMM_WORLD, &all_recv_stop);
    }

    while (1) {
        int found = find_stuff();
        if (found) {
            printf("Rank %d found something.\n", rank);
            // Note: We cannot post this as blocking, otherwise there is a deadlock with the reduce
            MPI_Request req;
            MPI_Isend(NULL, 0, MPI_CHAR, root, stop_tag, MPI_COMM_WORLD, &req);
            if (rank != root) {
                // We know that we are going to receive our own stop signal.
                // This avoids running another useless iteration
                MPI_Wait(&all_recv_stop, MPI_STATUS_IGNORE);
                MPI_Reduce(&found, NULL, 1, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
                MPI_Wait(&req, MPI_STATUS_IGNORE);
                break;
            }
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }
        if (rank == root) {
            if (forward_stop(&root_recv_stop, &all_recv_stop, found)) {
                break;
            }
        } else {
            int stop_signal;
            MPI_Test(&all_recv_stop, &stop_signal, MPI_STATUS_IGNORE);
            if (stop_signal)
            {
                MPI_Reduce(&found, NULL, 1, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
                printf("Rank %d stopping after receiving signal.\n", rank);
                break;
            }
        }
    }
    MPI_Finalize();
}
While this is not the simplest code, it should:
Introduce no additional blocking
Scale with the implementation of a barrier (usually O(log N))
Have a worst-case latency, from one rank finding something to all ranks stopping, of 2 * loop time (+ 1 p2p + 1 barrier + 1 reduction).
If many/all ranks find a solution at the same time, it still works but may be less efficient.
A possible strategy to derive this global stop condition in a non-blocking fashion is to rely on MPI_Test.
Scenario
Consider that each process posts an asynchronous receive of type MPI_INT from its left rank with a given tag, so as to build a ring. Then start your computation. If a rank encounters the stop condition, it sends its own rank to its right rank. In the meantime, each rank uses MPI_Test during the computation to check for completion of the MPI_Irecv; if it has completed, the rank enters a branch, first waiting for the message and then transitively propagating the received rank to the right, except if the right rank is equal to the payload of the message (so as not to loop).
Once this is done, you should have all processes in that branch, ready to trigger an arbitrary recovery operation.
Complexity
A ring topology was retained as it minimizes the number of messages (at most n-1); however, it increases the propagation time. Other topologies could be retained with more messages but a lower propagation time, for example a tree with n·ln(n) complexity.
Implementation
Something like this.
int rank, size;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

int left_rank  = (rank == 0) ? (size - 1) : (rank - 1);
int right_rank = (rank == (size - 1)) ? 0 : (rank + 1) % size;

int stop_cond_rank;
MPI_Request stop_cond_request;
int stop_cond = 0;

/* Post the receive from the left neighbour once, before entering the loop,
 * so that the same request is tested on every iteration */
MPI_Irecv(&stop_cond_rank, 1, MPI_INT, left_rank, 123, MPI_COMM_WORLD, &stop_cond_request);

while (1)
{
    /* Compute here and set stop condition accordingly */
    if (stop_cond)
    {
        /* Cancel the left recv */
        MPI_Cancel(&stop_cond_request);
        if (rank != right_rank)
            MPI_Send(&rank, 1, MPI_INT, right_rank, 123, MPI_COMM_WORLD);
        break;
    }

    int did_recv = 0;
    MPI_Test(&stop_cond_request, &did_recv, MPI_STATUS_IGNORE);
    if (did_recv)
    {
        stop_cond = 1;
        MPI_Wait(&stop_cond_request, MPI_STATUS_IGNORE);
        if (right_rank != stop_cond_rank)
            MPI_Send(&stop_cond_rank, 1, MPI_INT, right_rank, 123, MPI_COMM_WORLD);
        break;
    }
}

if (stop_cond)
{
    /* Handle the stop condition */
}
else
{
    /* Cleanup */
    MPI_Cancel(&stop_cond_request);
}
That is a question I've asked myself a few times without finding any completely satisfactory answer... The only thing I thought of (besides MPI_Abort(), which does it but is a bit extreme) is to create an MPI_Win storing a flag that will be raised by whichever process faces the problem, and checked regularly by all processes to see if they can go on processing. This is done using non-blocking calls, the same way as described in this answer.
The main weaknesses of this are:
This depends on the processes willingly checking the status of the flag. There is no immediate interruption of their work to notify them.
The frequency of this checking must be adjusted by hand. You have to find the trade-off between the time you waste processing data while there's no need to because something happened elsewhere, and the time it takes to check the status...
In the end, what we would need is a way of defining a callback action triggered by an MPI call such as MPI_Abort() (basically replacing the abort action by something else). I don't think this exists, but maybe I overlooked it.
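To make the idea more concrete, here is a rough sketch of what such a flag window could look like. The window layout and the choice of passive-target synchronisation are assumptions on my part, not a tested implementation:

int stop_flag = 0;
MPI_Win win;

/* every rank exposes one int; by convention rank 0's copy is the one everybody checks */
MPI_Win_create(&stop_flag, sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

/* a rank that hits the error raises the flag on rank 0 */
int one = 1;
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
MPI_Put(&one, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
MPI_Win_unlock(0, win);

/* every rank polls the flag between chunks of work */
int current = 0;
MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
MPI_Get(&current, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
MPI_Win_unlock(0, win);
if (current) {
    /* stop processing */
}

MPI_Win_free(&win);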

MPI Send/Recv millions of messages

I have this loop over NT (millions of iterations) for procs greater than 0. Messages of 120 bytes are sent to proc 0 for each iteration and proc 0 receives them (I have the same loop over NT for proc 0).
I want proc 0 to receive them ordered so I can store them in array nhdr1.
The problem is that proc 0 does not receive the messages properly, and I often have 0 values in array nhdr.
How can I modify the code so that the messages are received in the same order as they were sent?
[...]
if (rank == 0) {
    nhdr  = malloc((unsigned long)15 * sizeof(*nhdr));
    nhdr1 = malloc((unsigned long)NN * 15 * sizeof(*nhdr1));

    itr = 0;
    jnode = 1;
    for (l = 0; l < NT; l++) {
        MPI_Recv(nhdr, 15, MPI_LONG, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (l == status.MPI_TAG) {
            for (i = 0; i < nkeys; i++)
                nhdr1[itr*15 + i] = nhdr[i];
        }
        itr++;
        if (itr == NN) {
            ipos = (unsigned long)(jnode - 1) * NN * 15 * sizeof(*nhdr1);
            fseek(ismfh, ipos, SEEK_SET);
            nwrite += fwrite(nhdr1, sizeof(*nhdr1), NN*15, ismfh);
            itr = 0;
            jnode++;
        }
    }
    free(nhdr);
    free(nhdr1);
} else {
    nhdr = malloc(15 * sizeof(*nhdr));
    irecmin = (rank - 1) * NN + 1;
    irecmax = rank * NN;
    for (l = 0; l < NT; l++) {
        if (jrec[l] >= irecmin && jrec[l] <= irecmax) {
            indx1 = (unsigned long)(jrec[l] - irecmin) * 15;
            for (i = 0; i < 15; i++)
                nhdr[i] = nhdr1[indx1 + i]; // nhdr1 is allocated before for rank>0!
            MPI_Send(nhdr, 15, MPI_LONG, 0, l, MPI_COMM_WORLD);
        }
    }
    free(nhdr);
}
There is no way to guarantee that your messages will arrive on rank 0 in the same order they were sent from different ranks. For example, consider a scenario like this (S1 means send message 1):
rank 0 ----------------
rank 1 ---S1------S3---
rank 2 ------S2------S4
There is no guarantee that the messages will arrive at rank 0 in the order S1, S2, S3, S4. The only guarantee made by MPI is that the messages from each rank that are sent on the same communicator with the same tag (which you are doing) will arrive in the same order they were sent. This means that the resulting order could be:
S1, S2, S3, S4
Or it could be:
S1, S3, S2, S4
or:
S2, S1, S3, S4
...and so on.
For most applications, this doesn't really matter. The ordering that's important is the logical ordering, not the real time ordering. You might take another look at your application and make sure you can't relax your requirements a bit.
What do you mean by "messages are received in the same order as they were sent"?
In the code now, the messages ARE received in (roughly) the order that they are actually sent... but that order has nothing to do with the rank numbers, or really anything else. See @Wesley Bland's response for more on this.
If you mean "receive the messages in rank order"... then there are a few options.
First, a collective like MPI_Gather or MPI_Gatherv would be an "obvious" choice to ensure that the data is ordered by the rank that produced it. This only works if each rank does the same number of iterations, and those iterations stay roughly sync'd.
Second, you could remove the MPI_ANY_SOURCE and post a set of MPI_Irecv's with the buffers supplied "in order". When a message arrives, it will be in the correct buffer location "automatically". For each message that is received, a new MPI_Irecv could be posted with the correct recv buffer location supplied. Any un-matched MPI_Irecv's would need to be canceled at the end of the job.
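A minimal sketch of the first (collective) option, assuming every rank contributes exactly one fixed-size record of 15 longs per step; the variable names are illustrative, not taken from the question:

long record[15];          /* each rank fills in its own record for this step */
long *all = NULL;         /* only significant on rank 0 */
if (rank == 0)
    all = malloc((size_t)size * 15 * sizeof(*all));

/* rank 0 receives the records already ordered by rank:
   all[r*15 .. r*15+14] holds the record from rank r */
MPI_Gather(record, 15, MPI_LONG, all, 15, MPI_LONG, 0, MPI_COMM_WORLD);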
Keeping in mind that:
messages from a given rank are received in the order they were sent, and
messages carry the originating processor rank in the status structure (status.MPI_SOURCE) returned by MPI_Recv(),
you can use these two pieces of information to properly place the received data into nhdr1.
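As a rough sketch of that idea, the receive loop on rank 0 could derive the destination slot from the status fields instead of assuming an arrival order. This assumes rank 0 also has access to jrec[], and it reuses NN and the tag convention from the question:

MPI_Status status;
MPI_Recv(nhdr, 15, MPI_LONG, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

int  src  = status.MPI_SOURCE;                      /* which worker sent this record */
long l    = status.MPI_TAG;                         /* iteration index the worker used as its tag */
long slot = jrec[l] - ((long)(src - 1) * NN + 1);   /* position of this record within the worker's block */

for (i = 0; i < 15; i++)
    nhdr1[slot * 15 + i] = nhdr[i];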

C MPI multiple dynamic array passing

I'm trying to Isend() two arrays, arr1 and arr2, plus an integer n which is the size of arr1 and arr2. I understood from this post that sending a struct that contains all three is not an option, since n is only known at run time. Obviously, I need n to be received first, since otherwise the receiving process wouldn't know how many elements to receive. What's the most efficient way to achieve this without using the blocking Send()?
Sending the size of the array is redundant (and inefficient), as MPI provides a way to probe for incoming messages without receiving them, which provides just enough information to properly allocate memory. Probing is performed with MPI_PROBE, which looks a lot like MPI_RECV, except that it takes no buffer-related arguments. The probe operation returns a status object which can then be queried with MPI_GET_COUNT for the number of elements of a given MPI datatype that can be extracted from the content of the message, so explicitly sending the number of elements becomes redundant.
Here is a simple example with two ranks:
if (rank == 0)
{
    MPI_Request req;

    // Send a message to rank 1
    MPI_Isend(arr1, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    // Do not forget to complete the request!
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
else if (rank == 1)
{
    MPI_Status status;

    // Wait for a message from rank 0 with tag 0
    MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
    // Find out the number of elements in the message -> size goes to "n"
    MPI_Get_count(&status, MPI_DOUBLE, &n);
    // Allocate memory
    arr1 = malloc(n * sizeof(double));
    // Receive the message, ignore the status
    MPI_Recv(arr1, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
MPI_PROBE also accepts the wildcard rank MPI_ANY_SOURCE and the wildcard tag MPI_ANY_TAG. One can then consult the corresponding entry in the status structure in order to find out the actual sender rank and the actual message tag.
Probing for the message size works because every message carries a header, called the envelope. The envelope consists of the sender's rank, the receiver's rank, the message tag, and the communicator. It also carries information about the total message size. Envelopes are sent as part of the initial handshake between the two communicating processes.
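Since the question is about avoiding blocking, it may also be worth mentioning MPI_Iprobe, the non-blocking counterpart of MPI_Probe: it only sets a flag once a matching message has arrived, so the receiver can keep doing other work between polls. A minimal sketch along the lines of the example above:

int flag = 0;
MPI_Status status;

/* returns immediately; flag is set once a matching message is available */
MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &status);
if (flag)
{
    MPI_Get_count(&status, MPI_DOUBLE, &n);
    arr1 = malloc(n * sizeof(double));
    MPI_Recv(arr1, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
else
{
    /* no message yet - do some useful work and probe again later */
}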
Firstly, you need to allocate memory (the full n elements) for arr1 and arr2 on rank 0, i.e. your front-end processor.
Divide the array into parts depending on the number of processors. Determine the element count for each processor.
Send this element count to the other processors from rank 0.
The second send is for the arrays themselves, i.e. arr1 and arr2.
On the other processors, allocate arr1 and arr2 according to the element count received from the main processor, i.e. rank 0. After receiving the element count, receive the two arrays into the allocated memory.
This is a sample C++ implementation, but C follows the same logic. Also, just interchange Send with Isend.
#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);
    int rank = MPI::COMM_WORLD.Get_rank();
    int no_of_processors = MPI::COMM_WORLD.Get_size();
    MPI::Status status;

    double *arr1;
    if (rank == 0)
    {
        // Setting some random n
        int n = 10;
        arr1 = new double[n];
        for (int i = 0; i < n; i++)
        {
            arr1[i] = i;
        }

        int part = n / no_of_processors;
        int offset = n % no_of_processors;
        // cout << part << "\t" << offset << endl;

        for (int i = 1; i < no_of_processors; i++)
        {
            int start = i * part;
            int end = start + part - 1;
            if (i == (no_of_processors - 1))
            {
                end += offset;
            }
            // cout << i << " Start: " << start << " END: " << end;

            // Element count
            int e_count = end - start + 1;
            // cout << " e_count: " << e_count << endl;

            // Sending the element count
            MPI::COMM_WORLD.Send(
                &e_count,
                1,
                MPI::INT,
                i,
                0
            );

            // Sending arr1
            MPI::COMM_WORLD.Send(
                (arr1 + start),
                e_count,
                MPI::DOUBLE,
                i,
                1
            );
        }
    }
    else
    {
        // Element count
        int e_count;

        // Receiving the element count
        MPI::COMM_WORLD.Recv(
            &e_count,
            1,
            MPI::INT,
            0,
            0,
            status
        );

        arr1 = new double[e_count];

        // Receiving the first array
        MPI::COMM_WORLD.Recv(
            arr1,
            e_count,
            MPI::DOUBLE,
            0,
            1,
            status
        );

        for (int i = 0; i < e_count; i++)
        {
            cout << arr1[i] << endl;
        }
    }

    // if(rank == 0)
    delete[] arr1;

    MPI::Finalize();
    return 0;
}
@Histro The point I want to make is that Irecv/Isend are functions handled internally by the MPI library. The question you asked depends completely on the rest of your code, i.e. what you do after the Send/Recv. There are two cases:
Master and Worker
You send part of the problem (say arrays) to the workers (all other ranks except 0 = master). The workers do some work (on the arrays) and then return the results to the master. The master then adds up the results and conveys new work to the workers. Here, you would want the master to wait for all the workers to return their results (modified arrays). So you cannot use Isend and Irecv, but a multiple send as used in my code and a corresponding recv. If your code goes in this direction, you want to use MPI_Bcast and MPI_Reduce.
Lazy Master
The master divides the work but doesn't care about the results from its workers. Say you want to compute several different kinds of patterns on the same data: given the characteristics of the population of some city, you want to calculate things like how many people are above 18, how many have jobs, how many of them work at some company. These results don't have anything to do with one another. In this case you don't have to worry about whether the data has been received by the workers or not; the master can continue to execute the rest of the code. This is where it is safe to use Isend/Irecv, as in the sketch below.
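A toy sketch of that lazy-master pattern; chunk, chunk_len and nworkers are illustrative names, not from the original post. The master fires off the work with MPI_Isend, keeps going, and only completes the requests before shutting down:

MPI_Request reqs[nworkers];              /* one request per outstanding send */
int nreqs = 0;

for (int w = 1; w <= nworkers; w++) {
    /* hand chunk w to worker w and move on without waiting for delivery */
    MPI_Isend(chunk[w], chunk_len[w], MPI_DOUBLE, w, 0, MPI_COMM_WORLD, &reqs[nreqs++]);
}

/* ... the master continues with unrelated work here ... */

/* complete all outstanding sends before tearing things down */
MPI_Waitall(nreqs, reqs, MPI_STATUSES_IGNORE);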
