I'm trying to do the Monte Carlo problem using MPI, where we generate x random numbers between 0 and 1 and then send n-length chunks to each processor. I'm using the scatter function, but my code doesn't run right: it compiles, but it doesn't ask for the input. I also don't understand how MPI "loops" by itself without loops. Can someone explain this, and what is wrong with my code?
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
#include "mpi.h"
main(int argc, char* argv[]) {
    int my_rank;           /* rank of process */
    int p;                 /* number of processes */
    int source;            /* rank of sender */
    int dest;              /* rank of receiver */
    int tag = 0;           /* tag for messages */
    char message[100];     /* storage for message */
    MPI_Status status;     /* return status for receive */
    double *total_xr, *p_xr, total_size_xr, p_size_xr;

    /* Start up MPI */
    MPI_Init(&argc, &argv);
    /* Find out process rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    /* Find out number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    double temp;
    int i, partial_sum, x, total_sum, ratio_p, area;
    total_size_xr = 0;
    partial_sum = 0;

    if (my_rank == 0) {
        while (total_size_xr <= 0) {
            printf("How many random numbers should each process get?: ");
            scanf("%f", &p_size_xr);
        }
        total_size_xr = p * p_size_xr;
        total_xr = malloc(total_size_xr * sizeof(double));
        // the generator produces numbers between 0 and 1
        srand(time(NULL));
        for (i = 0; i < total_size_xr; i++) {
            temp = 2.0 * rand() / (RAND_MAX + 1.0) - 1.0;
            // make sure each number stays within the boundary of 0 and 1
            // and doesn't go over into the negatives
            while (temp < 0.0) {
                temp = 2.0 * rand() / (RAND_MAX + 1.0) - 1.0;
            }
            // array of all the random numbers to be scattered to the processors
            total_xr[i] = temp;
        }
    }
    else {
        // this is the buffer each processor uses to hold its own numbers to add
        p_xr = malloc(p_size_xr * sizeof(double));
        printf("\n\narray set\n\n");
        // scatter xr to the processors
        MPI_Scatter(total_xr, total_size_xr, MPI_DOUBLE, p_xr, p_size_xr, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        // in each processor the partial sum is calculated from xr using the formula sqrt(1 - x*x)
        for (i = 0; i < p_size_xr; i++) {
            x = p_xr[i];
            temp = sqrt(1 - (x * x));
            partial_sum = partial_sum + temp;
        }
        //}
        // send the partial sums to the master processor (processor 0), add them,
        // and place the result in total_sum
        MPI_Reduce(&partial_sum, &total_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        // once we have all of the sums, multiply the total sum by 1/N,
        // N being the number of processors; area should then contain the value of pi
        ratio_p = 1 / p;
        area = total_sum * ratio_p;
        printf("\n\nThe area under the curve of f(x) = sqrt(1-x*x), between 0 and 1, is %f\n\n", area);

    /* Shut down MPI */
    MPI_Finalize();
} /* main */
In general, it's not a good idea to rely on STDIN/STDOUT in an MPI program. The MPI implementation could place rank 0 on some node other than the one from which you're launching your jobs; in that case you have to worry about forwarding stdin correctly. While this will work most of the time, it's usually best avoided.
A better way to do things is to take your user input from a file the application can read, or via command-line arguments. Those will be much more portable.
I'm not sure what you mean by MPI looping by itself without loops. Maybe you can clarify that comment if you still need an answer there.
I get strange behavior from my simple MPI program. I spent time trying to find an answer myself, but I can't. I read some questions here, like OpenMPI MPI_Barrier problems, MPI_SEND stops working after MPI_BARRIER, and Using MPI_Bcast for MPI communication. I also read the MPI tutorial on mpitutorial.
My program just modifies an array that was broadcast from the root process, then gathers the modified arrays into one array and prints them.
So the problem is that when I use the code listed below with MPI_Barrier(MPI_COMM_WORLD) uncommented, I get an error.
#include "mpi/mpi.h"

#define N 4

void transform_row(int* row, const int k) {
    for (int i = 0; i < N; ++i) {
        row[i] *= k;
    }
}

const int root = 0;

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, ranksize;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ranksize);

    if (rank == root) {
        int* arr = new int[N];
        for (int i = 0; i < N; ++i) {
            arr[i] = i * i + 1;
        }
        MPI_Bcast(arr, N, MPI_INT, root, MPI_COMM_WORLD);
    }

    int* arr = new int[N];
    MPI_Bcast(arr, N, MPI_INT, root, MPI_COMM_WORLD);
    //MPI_Barrier(MPI_COMM_WORLD);
    transform_row(arr, rank * 100);

    int* transformed = new int[N * ranksize];
    MPI_Gather(arr, N, MPI_INT, transformed, N, MPI_INT, root, MPI_COMM_WORLD);

    if (rank == root) {
        for (int i = 0; i < ranksize; ++i) {
            for (int j = 0; j < N; j++) {
                printf("%i ", transformed[i * N + j]);
            }
            printf("\n");
        }
    }
    MPI_Finalize();
    return 0;
}
The error occurs when the number of processes is > 1. The error:
Fatal error in PMPI_Barrier: Message truncated, error stack:
PMPI_Barrier(425)...................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier_impl(332)..............: Failure during collective
MPIR_Barrier_impl(327)..............:
MPIR_Barrier(292)...................:
MPIR_Barrier_intra(150).............:
barrier_smp_intra(111)..............:
MPIR_Bcast_impl(1452)...............:
MPIR_Bcast(1476)....................:
MPIR_Bcast_intra(1287)..............:
MPIR_Bcast_binomial(239)............:
MPIC_Recv(353)......................:
MPIDI_CH3U_Request_unpack_uebuf(568): Message truncated; 16 bytes received but buffer size is 1
I understand that some problem with a buffer exists, but when I use MPI_Buffer_attach to attach a big buffer to MPI, it doesn't help.
It seems I need to increase this buffer, but I don't know how to do that.
XXXXXX#XXXXXXXXX:~/test_mpi$ mpirun --version
HYDRA build details:
Version: 3.2
Release Date: Wed Nov 11 22:06:48 CST 2015
So help me please.
One issue is that MPI_Bcast() is invoked twice by the root rank, but only once by the other ranks. The root rank then broadcasts an uninitialized arr on the second call.
MPI_Barrier() might only hide the problem, but it cannot fix it.
Also, note that if N is "large enough", then the second MPI_Bcast() invoked by root rank will likely hang.
Here is how you can revamp the init/broadcast phase to fix these issues.
int* arr = new int[N];
if (rank == root) {
    for (int i = 0; i < N; ++i) {
        arr[i] = i * i + 1;
    }
}
MPI_Bcast(arr, N, MPI_INT, root, MPI_COMM_WORLD);
Note that in this case you can simply initialize arr on all the ranks, so you do not even need to broadcast the array.
As a side note, MPI programs typically
#include <mpi.h>
and then use mpicc for compilation/linking
(this is a wrapper that invokes the real compiler after setting the include/library paths and adding the MPI libs).
I am wondering if anyone can offer an explanation.
I'll start with the code:
/*
Barrier implemented using tournament-style coding
*/
// Constraints: Number of processes must be a power of 2, e.g.
// 2,4,8,16,32,64,128,etc.
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>
void mybarrier(MPI_Comm);
// global debug bool
int verbose = 1;
int main(int argc, char * argv[]) {
    int rank;
    int size;
    int i;
    int sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int check = size;

    // check to make sure the number of processes is a power of 2
    if (rank == 0){
        while(check > 1){
            if (check % 2 == 0){
                check /= 2;
            } else {
                printf("ERROR: The number of processes must be a power of 2!\n");
                MPI_Abort(MPI_COMM_WORLD, 1);
                return 1;
            }
        }
    }

    // simple task, with barrier in the middle
    for (i = 0; i < 500; i++){
        sum ++;
    }

    mybarrier(MPI_COMM_WORLD);

    for (i = 0; i < 500; i++){
        sum ++;
    }

    if (verbose){
        printf("process %d arrived at finalize\n", rank);
    }

    MPI_Finalize();
    return 0;
}

void mybarrier(MPI_Comm comm){
    // MPI variables
    int rank;
    int size;
    int * data;
    MPI_Status * status;

    // Loop variables
    int i;
    int a;
    int skip;
    int complete = 0;
    int currentCycle = 1;

    // Initialize MPI vars
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    // step 1, gathering
    while (!complete){
        skip = currentCycle * 2;
        // if currentCycle divides rank evenly, then it is a target
        if ((rank % currentCycle) == 0){
            // if skip divides rank evenly, then it needs to receive
            if ((rank % skip) == 0){
                MPI_Recv(data, 0, MPI_INT, rank + currentCycle, 99, comm, status);
                if (verbose){
                    printf("1: %d from %d\n", rank, rank + currentCycle);
                }
            // otherwise, it needs to send. Once sent, the process is done
            } else {
                if (verbose){
                    printf("1: %d to %d\n", rank, rank - currentCycle);
                }
                MPI_Send(data, 0, MPI_INT, rank - currentCycle, 99, comm);
                complete = 1;
            }
        }

        currentCycle *= 2;

        // main process will never send, so this code will allow it to complete
        if (currentCycle >= size){
            complete = 1;
        }
    }

    complete = 0;
    currentCycle = size / 2;

    // step 2, scattering
    while (!complete){
        // if currentCycle is 1, then this is the last loop
        if (currentCycle == 1){
            complete = 1;
        }

        skip = currentCycle * 2;

        // if currentCycle divides rank evenly then it is a target
        if ((rank % currentCycle) == 0){
            // if skip divides rank evenly, then it needs to send
            if ((rank % skip) == 0){
                if (verbose){
                    printf("2: %d to %d\n", rank, rank + currentCycle);
                }
                MPI_Send(data, 0, MPI_INT, rank + currentCycle, 99, comm);
            // otherwise, it needs to receive
            } else {
                if (verbose){
                    printf("2: %d waiting for %d\n", rank, rank - currentCycle);
                }
                MPI_Recv(data, 0, MPI_INT, rank - currentCycle, 99, comm, status);
                if (verbose){
                    printf("2: %d from %d\n", rank, rank - currentCycle);
                }
            }
        }
        currentCycle /= 2;
    }
}
Expected behavior
The code is to increment a sum to 500, wait for all other processes to reach that point using blocking MPI_Send and MPI_Recv calls, and then increment sum to 1000.
Observed behavior on cluster
Cluster behaves as expected
Anomalous behavior observed on my machine
All processes in the main function are reported as being 99, which I have linked specifically to the tag used by the second while loop of mybarrier.
In addition
My first draft was written with for loops, and with that one, the program executes as expected on the cluster as well, but on my machine execution never finishes, even though all processes call MPI_Finalize (but none move beyond it).
MPI Versions
My machine is running OpenRTE 2.0.2
The cluster is running OpenRTE 1.6.3
The questions
I have observed that my machine seems to behave unexpectedly all of the time, while the cluster executes normally. This is true of other MPI code I have written as well. Were there major changes between 1.6.3 and 2.0.2 that I'm not aware of?
At any rate, I'm baffled, and I was wondering if anyone could offer some explanation as to why my machine seems to not run MPI correctly. I hope I have provided enough details, but if not, I will be happy to provide whatever additional information you require.
There is a problem with your code; maybe that's what's causing the weird behavior you are seeing.
You are passing the MPI_Recv routines a status object that hasn't been allocated. In fact, that pointer is not even initialized, so if it happens not to be NULL, MPI_Recv will end up writing to some arbitrary location in memory, causing undefined behavior. The correct form is the following:
MPI_Status status;
...
MPI_Recv(..., &status);
Or if you want to use the heap:
MPI_Status *status = malloc(sizeof(MPI_Status));
...
MPI_Recv(..., status);
...
free(status);
Also, since you are not using the value returned by the receive, you should use MPI_STATUS_IGNORE instead:
MPI_Recv(..., MPI_STATUS_IGNORE);
I am writing an MPI program and the MPI_Bcast function is very slow on one particular machine I am using. In order to narrow down the problem, I have the following two test programs. The first does many MPI_Send/MPI_Recv operations from process 0 to the others:
#include <stdlib.h>
#include <stdio.h>
#include <mpi.h>
#define N 1000000000
int main(int argc, char** argv) {
    int rank, size;

    /* initialize MPI */
    MPI_Init(&argc, &argv);

    /* get the rank (process id) and size (number of processes) */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* have process 0 do many sends */
    if (rank == 0) {
        int i, j;
        for (i = 0; i < N; i++) {
            for (j = 1; j < size; j++) {
                if (MPI_Send(&i, 1, MPI_INT, j, 0, MPI_COMM_WORLD) != MPI_SUCCESS) {
                    printf("Error!\n");
                    exit(0);
                }
            }
        }
    }
    /* have the rest receive that many values */
    else {
        int i;
        for (i = 0; i < N; i++) {
            int value;
            if (MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE) != MPI_SUCCESS) {
                printf("Error!\n");
                exit(0);
            }
        }
    }

    /* quit MPI */
    MPI_Finalize();
    return 0;
}
This program runs in only 2.7 seconds or so with 4 processes.
This next program does exactly the same thing, except it uses MPI_Bcast to send the values from process 0 to the other processes:
#include <stdlib.h>
#include <stdio.h>
#include <mpi.h>
#define N 1000000000
int main(int argc, char** argv) {
    int rank, size;

    /* initialize MPI */
    MPI_Init(&argc, &argv);

    /* get the rank (process id) and size (number of processes) */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* have process 0 do many broadcasts */
    if (rank == 0) {
        int i, j;
        for (i = 0; i < N; i++) {
            if (MPI_Bcast(&i, 1, MPI_INT, 0, MPI_COMM_WORLD) != MPI_SUCCESS) {
                printf("FAIL\n");
                exit(0);
            }
        }
    }
    /* have the rest receive that many values */
    else {
        int i;
        for (i = 0; i < N; i++) {
            if (MPI_Bcast(&i, 1, MPI_INT, 0, MPI_COMM_WORLD) != MPI_SUCCESS) {
                printf("FAIL\n");
                exit(0);
            }
        }
    }

    /* quit MPI */
    MPI_Finalize();
    return 0;
}
Both programs have the same value for N, and neither returns an error from its communication calls. The second program should be at least a little faster. But it is not: it is much slower, at roughly 34 seconds, around 12X slower!
This problem only manifests itself on one machine, but not others even though they are running the same operating system (Ubuntu) and don't have drastically different hardware. Also, I'm using OpenMPI on both.
I'm really pulling my hair out, does anyone have an idea?
Thanks for reading!
A couple of observations.
The MPI_Bcast is receiving the result into the "&i" buffer. The MPI_Recv is receiving the result into "&value". Is there some reason that decision was made?
The Send/Recv model will naturally synchronize. The MPI_Send calls are blocking and serialized. The matching MPI_Recv should always be ready when the MPI_Send is called.
In general, collectives tend to have larger advantages as the job size scales up.
I compiled and ran the programs using IBM Platform MPI. I lowered the N value by 100x to 10 Million, to speed up the testing. I changed the MPI_Bcast to receive the result in a "&value" buffer rather than into the "&i" buffer. I ran each case three times, and averaged the times. The times are the "real" value returned by "time" (this was necessary as the ranks were running remotely from the mpirun command).
With 4 ranks over shared memory, the Send/Recv model took 6.5 seconds, the Bcast model took 7.6 seconds.
With 32 ranks (8/node x 4 nodes, FDR InfiniBand), the Send/Recv model took 79 seconds, the Bcast model took 22 seconds.
With 128 ranks (16/node x 8 nodes, FDR Infiniband), the Send/Recv model took 134 seconds, the Bcast model took 44 seconds.
Given these timings AFTER the reduction in the N value by 100x to 10000000, I am going to suggest that the "2.7 second" time was a no-op. Double check that some actual work was done.
I am working on the reader-writer problem. Algorithm-wise, I believe the solution is OK. The only problem I am facing is opening multiple reader/writer windows using xterm. When I run the program it goes into an infinite loop and crashes the whole system. It also opens multiple xterm windows. It might be silly and simple, but I just don't seem to be able to figure out why. I've been thinking about this since yesterday. How do I fix this problem? The suspected area of conflict is highlighted with ** comments...
#include <unistd.h> /* Symbolic Constants */
#include <sys/types.h> /* Primitive System Data Types */
#include <errno.h> /* Errors */
#include <stdio.h> /* Input/Output */
#include <stdlib.h> /* General Utilities */
#include <pthread.h> /* POSIX Threads */
#include <string.h> /* String handling */
#include <semaphore.h> /* Semaphore */
// Global variables
int rc = 0;
int wc = 0;
sem_t m1, m2, m3, w, r; // Semaphores

int reader() {
    sem_wait(&m3);
    sem_wait(&r);
    sem_wait(&m1);
    rc++;
    if(rc == 1) sem_wait(&w);
    sem_post(&m1);
    sem_post(&r);
    sem_post(&m3);

    system("xterm -e ./read");
    //execlp("xterm", "-e", "./ahor2r", NULL);

    sem_wait(&m1);
    rc--;
    if(rc == 0) sem_post(&w);
    sem_post(&m1);
    return 0;
}

int writer() {
    sem_wait(&m2);
    wc++;
    if(wc == 1) sem_wait(&r);
    sem_post(&m2);

    sem_wait(&w);
    //system("xterm -e ./write"); //writing is performed
    execlp("xterm", "-e", "./ahor2w", NULL);
    sem_post(&w);

    sem_wait(&m2);
    wc--;
    if(wc == 0) sem_post(&r);
    sem_post(&m2);
    return 0;
}

int main() {
    int ch;
    sem_init(&m1, 0, 1);
    sem_init(&m2, 0, 1);
    sem_init(&m3, 0, 1);
    sem_init(&w, 0, 1);
    sem_init(&r, 0, 1);

    /*****************************************************************************
    **********infinite loop*******************************************************/
    while(1) {
        printf("\n\nEnter your option\n\n1> Create Reader\n2> Create Writer\n3> Exit\n\t");
        scanf("%d", &ch);
        if(ch == 1)
            switch(fork()) {
                case -1:
                    perror("Cannot fork a new reader process\n");
                    break;
                case 0:
                    reader();
            }
        else if (ch == 2)
            switch(fork()) {
                case -1:
                    perror("Cannot fork a new writer process\n");
                    break;
                case 0:
                    writer();
            }
        else if (ch == 3) {
            sem_destroy(&m1);
            sem_destroy(&m2);
            sem_destroy(&m3);
            sem_destroy(&w);
            sem_destroy(&r);
            return 0;
        }
        else printf("INVALID OPTION - no action taken\n");
    }
    /*****************************************************************************
    *****************************************************************************/
    return 0;
}
You have two big errors:
You don't create your semaphores in shared memory. So each process gets its own copy of the semaphores, which makes no sense.
You don't create process-shared semaphores. The 0 in sem_init means that the semaphores will not be shared between processes. But fork creates new processes.
If pshared has the value 0, then the semaphore is shared between the threads of a process[.] If pshared is nonzero, then the semaphore is shared between processes, and should be located in a region of shared memory (see shm_open(3), mmap(2), and shmget(2)).
I am trying to use MPI to sort digits. After the sorting by the different processors, I want to use MPI_Gather to collect and then print all the sorted numbers, but this is not working. Any help will be appreciated. Below is my code.
#include <stdio.h>
#include <time.h>
#include <math.h>
#include <stdlib.h>
#include <mpi.h> /* Include MPI's header file */
/* The IncOrder function that is called by qsort is defined as follows */
int IncOrder(const void *e1, const void *e2)
{
    return (*((int *)e1) - *((int *)e2));
}

void CompareSplit(int nlocal, int *elmnts, int *relmnts, int *wspace, int keepsmall);

int main(int argc, char *argv[]){
    int n;            /* The total number of elements to be sorted */
    int npes;         /* The total number of processes */
    int myrank;       /* The rank of the calling process */
    int nlocal;       /* The local number of elements */
    int *elmnts;      /* The array that stores the local elements */
    int *relmnts;     /* The array that stores the received elements */
    int oddrank;      /* The rank of the partner process during odd-phase communication */
    int evenrank;     /* The rank of the partner process during even-phase communication */
    int *wspace;      /* Working space during the compare-split operation */
    int i;
    MPI_Status status;

    /* Initialize MPI and get system information */
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    n = 30000; //atoi(argv[1]);
    nlocal = n/npes; /* Compute the number of elements to be stored locally. */

    /* Allocate memory for the various arrays */
    elmnts = (int *)malloc(nlocal*sizeof(int));
    relmnts = (int *)malloc(nlocal*sizeof(int));
    wspace = (int *)malloc(nlocal*sizeof(int));

    /* Fill in the elmnts array with random elements */
    srand(time(NULL));
    for (i=0; i<nlocal; i++) {
        elmnts[i] = rand()%100+1;
        printf("\n%d:", elmnts[i]); // print generated random numbers
    }

    /* Sort the local elements using the built-in quicksort routine */
    qsort(elmnts, nlocal, sizeof(int), IncOrder);

    /* Determine the ranks of the processes that myrank needs to communicate with
       during the odd and even phases of the algorithm */
    if (myrank%2 == 0) {
        oddrank = myrank-1;
        evenrank = myrank+1;
    } else {
        oddrank = myrank+1;
        evenrank = myrank-1;
    }

    /* Set the ranks of the processes at the ends of the linear ordering */
    if (oddrank == -1 || oddrank == npes)
        oddrank = MPI_PROC_NULL;
    if (evenrank == -1 || evenrank == npes)
        evenrank = MPI_PROC_NULL;

    /* Get into the main loop of the odd-even sorting algorithm */
    for (i=0; i<npes-1; i++) {
        if (i%2 == 1) /* Odd phase */
            MPI_Sendrecv(elmnts, nlocal, MPI_INT, oddrank, 1, relmnts,
                         nlocal, MPI_INT, oddrank, 1, MPI_COMM_WORLD, &status);
        else /* Even phase */
            MPI_Sendrecv(elmnts, nlocal, MPI_INT, evenrank, 1, relmnts,
                         nlocal, MPI_INT, evenrank, 1, MPI_COMM_WORLD, &status);
        CompareSplit(nlocal, elmnts, relmnts, wspace, myrank < status.MPI_SOURCE);
    }

    MPI_Gather(elmnts, nlocal, MPI_INT, relmnts, nlocal, MPI_INT, 0, MPI_COMM_WORLD);

    /* The master host displays the sorted array */
    //int len = sizeof(elmnts)/sizeof(int);
    if(myrank == 0) {
        printf("\nSorted array :\n");
        int j;
        for (j=0; j<n; j++) {
            printf("relmnts[%d] = %d\n", j, relmnts[j]);
        }
        printf("\n");
        //printf("sorted in %f s\n\n", ((double)clock() - start) / CLOCKS_PER_SEC);
    }

    free(elmnts); free(relmnts); free(wspace);
    MPI_Finalize();
}

/* This is the CompareSplit function */
void CompareSplit(int nlocal, int *elmnts, int *relmnts, int *wspace, int keepsmall){
    int i, j, k;
    for (i=0; i<nlocal; i++)
        wspace[i] = elmnts[i]; /* Copy the elmnts array into the wspace array */

    if (keepsmall) { /* Keep the nlocal smaller elements */
        for (i=j=k=0; k<nlocal; k++) {
            if (j == nlocal || (i < nlocal && wspace[i] < relmnts[j]))
                elmnts[k] = wspace[i++];
            else
                elmnts[k] = relmnts[j++];
        }
    } else { /* Keep the nlocal larger elements */
        for (i=k=nlocal-1, j=nlocal-1; k>=0; k--) {
            if (j == 0 || (i >= 0 && wspace[i] >= relmnts[j]))
                elmnts[k] = wspace[i--];
            else
                elmnts[k] = relmnts[j--];
        }
    }
}
If I understand your code, you've gathered the separately-sorted sub-lists back onto one process into the array relmnts, and then printed them in order of occurrence. But I can't see where you've done anything about sorting relmnts. (I often don't understand other people's code, so if I have misunderstood, stop reading now.)
You seem to be hoping that the gather will mysteriously merge the sorted sub-lists into a single sorted list for you. It ain't going to happen! You will need to merge the elements from the sorted sub-lists yourself, possibly after gathering them back to one process, or possibly by doing some sort of 'cascading gather'.
By this I mean, suppose that you had 32 processes, and 32 sub-lists, then you would merge the sub-lists from process 1 and process 2 onto process 1, 3 and 4 onto 3, ..., 31 and 32 onto 31. Then you would merge from process 1 and 3 onto 1, .... After 5 steps you'd have the whole list merged, in sorted order, on process 1 (I'm a Fortran programmer, I start counting at 1, I should have written 'the process with rank 0' etc).
Incidentally, the example you put in your comment to your own question may be misleading: it sort of looks like you gathered 3 sub-lists, each of 4 elements, and rammed them together. But there are no elements in sub-list 1 which are smaller than any of the elements in sub-list 2, that sort of thing. How did that happen if the original list was unsorted?