What I am still not certain about is what happens with the root process in MPI_Scatter / MPI_Scatterv.
If I divide an array as I try to in my code, do I need to include the root process in the number of receivers (hence making sendcounts of size nproc), or is it excluded?
In my example code for matrix multiplication, one of the processes still runs into aberrant behaviour and terminates the program prematurely:
void readMatrix();
double StartTime;
int rank, nproc, proc;
//double matrix_A[N_ROWS][N_COLS];
double **matrix_A;
//double matrix_B[N_ROWS][N_COLS];
double **matrix_B;
//double matrix_C[N_ROWS][N_COLS];
double **matrix_C;
int low_bound = 0; //low bound of the number of rows of each process
int upper_bound = 0; //upper bound of the number of rows of [A] of each process
int portion = 0; //portion of the number of rows of [A] of each process
int main (int argc, char *argv[]) {
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nproc);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
matrix_A = (double **)malloc(N_ROWS * sizeof(double*));
for(int i = 0; i < N_ROWS; i++) matrix_A[i] = (double *)malloc(N_COLS * sizeof(double));
matrix_B = (double **)malloc(N_ROWS * sizeof(double*));
for(int i = 0; i < N_ROWS; i++) matrix_B[i] = (double *)malloc(N_COLS * sizeof(double));
matrix_C = (double **)malloc(N_ROWS * sizeof(double*));
for(int i = 0; i < N_ROWS; i++) matrix_C[i] = (double *)malloc(N_COLS * sizeof(double));
int *counts = new int[nproc](); // array to hold number of items to be sent to each process
// -------------------> If we have more than one process, we can distribute the work through scatterv
if (nproc > 1) {
// -------------------> Process 0 initializes matrices and scatters the portions of the [A] Matrix
if (rank==0) {
readMatrix();
}
StartTime = MPI_Wtime();
int counter = 0;
for (int proc = 0; proc < nproc; proc++) {
counts[proc] = N_ROWS / nproc ;
counter += N_ROWS / nproc ;
}
counter = N_ROWS - counter;
counts[nproc-1] = counter;
//set bounds for each process
low_bound = rank*(N_ROWS/nproc);
portion = counts[rank];
upper_bound = low_bound + portion;
printf("I am process %i and my lower bound is %i and my portion is %i and my upper bound is %i \n",rank,low_bound, portion,upper_bound);
//scatter the work among the processes
int *displs = new int[nproc]();
displs[0] = 0;
for (int proc = 1; proc < nproc; proc++) displs[proc] = displs[proc-1] + (N_ROWS/nproc);
MPI_Scatterv(matrix_A, counts, displs, MPI_DOUBLE, &matrix_A[low_bound][0], portion, MPI_DOUBLE, 0, MPI_COMM_WORLD);
//broadcast [B] to all the slaves
MPI_Bcast(&matrix_B, N_ROWS*N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);
// -------------------> Everybody does their work
for (int i = low_bound; i < upper_bound; i++) {//iterate through a given set of rows of [A]
for (int j = 0; j < N_COLS; j++) {//iterate through columns of [B]
for (int k = 0; k < N_ROWS; k++) {//iterate through rows of [B]
matrix_C[i][j] += (matrix_A[i][k] * matrix_B[k][j]);
}
}
}
// -------------------> Process 0 gathers the work
MPI_Gatherv(&matrix_C[low_bound][0],portion,MPI_DOUBLE,matrix_C,counts,displs,MPI_DOUBLE,0,MPI_COMM_WORLD);
}
...
The root process also takes part on the receiving side. If you are not interested in that, just set sendcounts[root] = 0.
See MPI_Scatterv for the specifics of which values you have to pass.
However, be careful with what you are doing. I strongly suggest that you change your allocation and store the matrix as a one-dimensional array, using a single malloc like this:
double* matrix = (double*) malloc( N_ROWS * N_COLS * sizeof(double) );
If you still want to use a two-dimensional array, you may need to define an MPI derived datatype.
The datatype you are passing is not valid if you want to send more than one row in a single MPI transfer.
With MPI_DOUBLE you are telling MPI that the buffer contains a contiguous array of count MPI_DOUBLE values.
Since you are allocating the two-dimensional array with multiple malloc calls, your data is not contiguous.
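For illustration, here is a minimal sketch of how the scatter could look with the contiguous allocation, reusing N_ROWS, N_COLS, rank and nproc from the question; giving the leftover rows to the last rank is just one possible choice:
// Sketch only: contiguous matrix plus MPI_Scatterv with the root included as a receiver.
double *A = (double*) malloc(N_ROWS * N_COLS * sizeof(double)); // full matrix, filled by rank 0
int *counts = (int*) malloc(nproc * sizeof(int));               // one entry per rank, root included
int *displs = (int*) malloc(nproc * sizeof(int));
int rows_per_proc = N_ROWS / nproc;
for (int p = 0; p < nproc; p++) {
    int rows = (p == nproc - 1) ? N_ROWS - rows_per_proc * (nproc - 1) : rows_per_proc;
    counts[p] = rows * N_COLS;              // counts are in MPI_DOUBLE elements, not rows
    displs[p] = p * rows_per_proc * N_COLS; // offsets in elements from the start of A
}
double *my_rows = (double*) malloc(counts[rank] * sizeof(double)); // receive buffer on every rank
MPI_Scatterv(A, counts, displs, MPI_DOUBLE,
             my_rows, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);
// If you do not want the root to receive anything, set counts[0] = 0 instead
// (rank 0 can then work on its rows directly in A).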
If I have more than one work-item executing some kernel code, do I need more events to track the execution time of each work-item?
I am getting some strange results: 1 work-item takes about 4 seconds to execute, and 100 work-items also take about 4 seconds. I can't see how this is possible, since my Nvidia GeForce GT 525M only has 2 compute units, each with 48 processing elements. This leads me to believe that the event I pass as an argument to clEnqueueNDRangeKernel tracks only one work-item. Is that true, and if so, how can I get it to track all the work-items?
This is what the Khronos user guide says about the event argument in clEnqueueNDRangeKernel:
event returns an event object that identifies this particular kernel execution instance
What is the meaning of "this particular kernel execution instance"? Isn't that a single work-item?
EDIT:
Relevant host code:
static const size_t numberOfWorkItems = 48;
const size_t globalWorkSize[] = { numberOfWorkItems, 0, 0 };
cl_event events;
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, globalWorkSize, NULL, 0, NULL, &events);
ret = clEnqueueReadBuffer(command_queue, memobj, CL_TRUE, 0, sizeof(cl_mem), val, 0, NULL, NULL);
clWaitForEvents(1, &events);
cl_ulong time_start;
cl_ulong time_end;
clGetEventProfilingInfo(events, CL_PROFILING_COMMAND_QUEUED, sizeof(cl_ulong), &time_start, NULL);
clGetEventProfilingInfo(events, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &time_end, NULL);
double nanoSeconds = (double) (time_end - time_start);
printf("OpenCl Execution time is: %f milliseconds \n",nanoSeconds / 1000000.0);
printf("Result: %lu\n", val[0]);
Kernel code:
kernel void parallel_operation(__global ulong *val) {
size_t i = get_global_id(0);
int n = 48;
local unsigned int result[48];
for (int z = 0; z < n; z++) {
result[z] = 0;
}
// here comes the long operation
for (ulong k = 0; k < 2000; k++) {
for (ulong j = 0; j < 10000; j++) {
result[i] += (j * 3) % 5;
}
}
barrier(CLK_LOCAL_MEM_FENCE);
if (i == 0) {
for (int z = 1; z < n; z++) {
result[0] += result[z];
}
*val = result[0];
}
}
You are measuring the execution time of the entire kernel invocation, in other words the time between when the first work-item starts and when the last work-item finishes. To my knowledge there is no way to measure the execution time of a single work-item in OpenCL.
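As a side note, the snippet in the question measures from CL_PROFILING_COMMAND_QUEUED, which also counts the time the command spends waiting in the queue. Here is a sketch of measuring only the device execution time, reusing the events handle from the question; it assumes the queue was created with profiling enabled, e.g. via clCreateCommandQueue(context, device_id, CL_QUEUE_PROFILING_ENABLE, &ret), where context and device_id come from the rest of the host code:
// Sketch only: query START/END instead of QUEUED/END to exclude queueing time.
cl_ulong time_start = 0, time_end = 0;
clWaitForEvents(1, &events); // the event must be complete before its profiling info is valid
clGetEventProfilingInfo(events, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &time_start, NULL);
clGetEventProfilingInfo(events, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &time_end, NULL);
printf("Device execution time: %f milliseconds\n", (double)(time_end - time_start) / 1000000.0);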
I am very new to OpenCL and am going through the Altera OpenCL examples.
In their matrix multiplication example, they have used the concept of blocks, where the dimensions of the input matrices are multiples of the block size. Here's the code:
void matrixMult( // Input and output matrices
__global float *restrict C,
__global float *A,
__global float *B,
// Widths of matrices.
int A_width, int B_width)
{
// Local storage for a block of input matrices A and B
__local float A_local[BLOCK_SIZE][BLOCK_SIZE];
__local float B_local[BLOCK_SIZE][BLOCK_SIZE];
// Block index
int block_x = get_group_id(0);
int block_y = get_group_id(1);
// Local ID index (offset within a block)
int local_x = get_local_id(0);
int local_y = get_local_id(1);
// Compute loop bounds
int a_start = A_width * BLOCK_SIZE * block_y;
int a_end = a_start + A_width - 1;
int b_start = BLOCK_SIZE * block_x;
float running_sum = 0.0f;
for (int a = a_start, b = b_start; a <= a_end; a += BLOCK_SIZE, b += (BLOCK_SIZE * B_width))
{
A_local[local_y][local_x] = A[a + A_width * local_y + local_x];
B_local[local_x][local_y] = B[b + B_width * local_y + local_x];
#pragma unroll
for (int k = 0; k < BLOCK_SIZE; ++k)
{
running_sum += A_local[local_y][k] * B_local[local_x][k];
}
}
// Store result in matrix C
C[get_global_id(1) * get_global_size(0) + get_global_id(0)] = running_sum;
}
Assume the block size is 2; then block_x and block_y are both 0, and local_x and local_y are both 0 (for the first work-item).
Then A_local[0][0] would be A[0] and B_local[0][0] would be B[0].
A_local and B_local are 4 elements each.
In that case, how do A_local and B_local get the other elements of the block in that iteration?
Also, would separate threads/cores be assigned to each (local_x, local_y)?
There is definitely a barrier missing in your code sample. The outer for loop as you have it will only produce correct results if all work items are executing instructions in lockstep fashion, thus guaranteeing the local memory is populated before the for k loop.
Maybe this is the case for Altera and other FPGAs, but this is not correct for CPUs and GPUs.
You should add barrier(CLK_LOCAL_MEM_FENCE); if you are getting unexpected results, or want to be compatible with other types of hardware. A second barrier after the inner k loop is also needed so that no work-item starts loading the next tile while others are still reading the current one:
float running_sum = 0.0f;
for (int a = a_start, b = b_start; a <= a_end; a += BLOCK_SIZE, b += (BLOCK_SIZE * B_width))
{
    A_local[local_y][local_x] = A[a + A_width * local_y + local_x];
    B_local[local_x][local_y] = B[b + B_width * local_y + local_x];
    barrier(CLK_LOCAL_MEM_FENCE); // wait until the whole tile has been loaded
    #pragma unroll
    for (int k = 0; k < BLOCK_SIZE; ++k)
    {
        running_sum += A_local[local_y][k] * B_local[local_x][k];
    }
    barrier(CLK_LOCAL_MEM_FENCE); // wait until everyone has finished reading before the next tile is loaded
}
A_local and B_local are both shared by all work items of the work group, so all their elements are loaded in parallel (by all work items of the work group) at each step of the encompassing for loop.
Then each work item uses some of the loaded values (not necessarily the values the work item loaded itself) to do its share of the computation.
And finally, the work item stores its individual result into the global output matrix.
It is a classical tiled implementation of a matrix-matrix multiplication. However, I'm really surprised not to see any sort of call to a memory synchronisation function, such as work_group_barrier(CLK_LOCAL_MEM_FENCE) between the load of A_local and B_local and their use in the k loop... But I might very well have overlooked something here.
I implemented a reduce kernel in OpenCL to sum up all entries in the input vector of size N. For easier testing, I initialize the input vector with 1.0f, so the result should be N. But it is not!
Here is my reduce-kernel:
kernel void reduce(global float* input, global float* output, const unsigned int N, local float* cache)
{
const uint local_id = get_local_id(0);
const uint global_id = get_global_id(0);
const uint local_size = get_local_size(0);
cache[local_id] = (global_id < N) ? input[global_id] : 0.0f;
barrier(CLK_LOCAL_MEM_FENCE);
for (unsigned int s = local_size >> 1; s > 0; s >>= 1) {
if (local_id < s) {
cache[local_id] += cache[local_id + s];
}
barrier(CLK_LOCAL_MEM_FENCE);
}
if (local_id == 0) output[local_size] = cache[0];
}
And here is the OpenCL setup:
const uint N = 8196;
cl_float a[N];
cl_float b[N];
for (uint i=0; i<N; i++) {
a[i] = 1.0f;
b[i] = 0.0f;
}
cl::Buffer inputBuffer(context, CL_MEM_WRITE_ONLY, sizeof(cl_float)*N);
cl::Buffer resultBuffer(context, CL_MEM_READ_ONLY, sizeof(cl_float)*N);
queue.enqueueWriteBuffer(inputBuffer, CL_TRUE, 0, sizeof(cl_float)*N, a);
queue.enqueueWriteBuffer(resultBuffer, CL_TRUE, 0, sizeof(cl_float)*N, b);
cl::Kernel addVectorKernel = cl::Kernel(program, "reduce");
size_t localSize = addVectorKernel.getWorkGroupInfo<CL_KERNEL_WORK_GROUP_SIZE>(device); // e.g. => 512
size_t globalSize = roundUp(localSize, N); // rounds up to a multiple of localSize
addVectorKernel.setArg(0, inputBuffer);
addVectorKernel.setArg(1, resultBuffer);
addVectorKernel.setArg(2, N);
addVectorKernel.setArg(3, (sizeof(cl_float) * localSize), NULL);
queue.enqueueNDRangeKernel(
addVectorKernel,
cl::NullRange,
cl::NDRange(globalSize),
cl::NDRange(localSize)
);
queue.finish(); // wait for ending
queue.enqueueReadBuffer(resultBuffer, CL_TRUE, 0, sizeof(cl_float)*N, b); // e.g. => 1024
The result depends on the workgroup size. What am I doing wrong? Is it the kernel itself or is it the settings for OpenCL?
You should be using the group's id when writing the sum back to global memory.
if (local_id == 0) output[local_size] = cache[0];
That line will write to output[512] repeatedly. You need each work group to write to a dedicated location in the output.
kernel void reduce(global float* input, global float* output, const unsigned int N, local float* cache)
{
const uint local_id = get_local_id(0);
const uint global_id = get_global_id(0);
const uint group_id = get_group_id(0);
const uint local_size = get_local_size(0);
cache[local_id] = (global_id < N) ? input[global_id] : 0.0f;
barrier(CLK_LOCAL_MEM_FENCE);
for (unsigned int s = local_size >> 1; s > 0; s >>= 1) {
if (local_id < s) {
cache[local_id] += cache[local_id + s];
}
barrier(CLK_LOCAL_MEM_FENCE);
}
if (local_id == 0) output[group_id] = cache[0];
}
Then you need to sum the values from the output on the host. Note that 'b' in the host code does not need to hold N elements. Only one element for each work group will be used.
//replace (globalSize/localSize) with the pre-calculated/known number of work groups
for (i=1; i<(globalSize/localSize); i++) {
b[0] += b[i];
}
Now b[0] is your grand total.
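For example, here is a sketch of the host-side read-back using the names from the question's host code (queue, resultBuffer, globalSize, localSize, b); only one float per work group needs to be transferred:
// Sketch only: read back one partial sum per work group, then finish on the host.
size_t numGroups = globalSize / localSize; // globalSize was rounded up to a multiple of localSize
queue.enqueueReadBuffer(resultBuffer, CL_TRUE, 0, sizeof(cl_float) * numGroups, b);
for (size_t g = 1; g < numGroups; g++) b[0] += b[g]; // b[0] now holds the grand total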
In the reduction for loop, you need this:
for(unsigned int s = localSize >> 1; s > 0; s >>= 1)
You are shifting one more bit than you should when initializing s.
After that's fixed, let's look at what your kernel is doing. The host code executes it with globalSize of 8192 and localSize of 512, which results in 16 work groups. Inside the kernel you first sum the data from the two consecutive memory locations at index 2*global_id. For work group with id 15, work item 0, that will be at index 15*512*2 = 15,360 and 15,361, which is outside the boundaries of your input array. I am surprised you don't get a crash. At the same time, this explains why you have double the values that you expect.
To fix it, you can do this:
cache[localID] = input[globalID];
Or specify a global size that's half of the number of the current one.
In OpenCL, if I want to add two N-dimensional vectors, the global work size (globalSize) should satisfy globalSize = ceil(N/localSize) * localSize, where localSize is the local work group size. Is this correct? If N = 1000 and localSize = 128, should globalSize be 1024? Can we always set globalSize to some multiple of localSize that is larger than needed?
I tried many times and it worked well for 1-dimension problems.
However, when it comes to 2D problems, for example multiplying two matrices of dimensions m*n and n*p (the result matrix is of order m*p), things get more complicated.
The max work group size on my device is 128, so I set localSize [2] = {16,8} and
globalSize [2] = {ceil(m/16)*16,ceil(p/8)*8}.
It is similar to the 1-dimension case but the result is wrong!
If I set localSize [2] = {1,128} and change the globalSize accordingly, I can get the correct result. So where is the problem? Can anyone tell me why?
In addition, I found the indices where the matrix elements are wrong.
It seems that the result is wrong at (i,j) where i*p + j = n * some constant (n = 1,2,3...)
Why?
Here is my kernel function:
kernel void mmult(const int Mdim, const int Ndim, const int Pdim,
global float *A, global float *B, global float *C)
{
int i = get_global_id(1);
int j = get_global_id(0);
if(i < 0 || j < 0 || i > Mdim || j > Pdim) return;
else
{
float tmp = 0;
for(int k = 0; k < Ndim; k++)
tmp += A[i*Ndim+k] * B[k*Pdim+j];
C[i*Pdim + j] = tmp;
}
}
And then it is the host program:
#define __NO_STD_VECTOR // Use cl::vector instead of STL version
#define __CL_ENABLE_EXCEPTIONS
#include <CL/cl.hpp>
#include <utility>
#include <iostream>
#include <fstream>
#include <string>
#include <cmath>
using namespace cl;
int main()
{
// Create the two input matrices
int m = 1000;
int n = 1000;
int p = 1000;
float *A = new float[m*n];
float *B = new float[n*p];
for(int i = 0; i < m*n; i++)
{
A[i] = i;
}
for(int i = 0; i < n*p; i++)
{
B[i] = i;
}
try
{
// Get available platforms
vector<Platform> platforms;
Platform::get(&platforms);
// Select the default platform and create a context using this platform and the GPU
cl_context_properties cps[3] =
{
CL_CONTEXT_PLATFORM,
(cl_context_properties)(platforms[0])(),
0
};
Context context( CL_DEVICE_TYPE_GPU, cps);
// Get a list of devices on this platform
vector<Device> devices = context.getInfo<CL_CONTEXT_DEVICES>();
// Create a command queue and use the first device
CommandQueue queue = CommandQueue(context, devices[0]);
// Read source file
std::ifstream sourceFile("mmul.cl");
std::string sourceCode(
std::istreambuf_iterator<char>(sourceFile),
(std::istreambuf_iterator<char>()));
Program::Sources source(1, std::make_pair(sourceCode.c_str(), sourceCode.length()+1));
// Make program of the source code in the context
Program program = Program(context, source);
// Build program for these specific devices
program.build(devices);
// Make kernel
Kernel kernel(program, "mmult");
// Create memory buffers
Buffer bufferA = Buffer(context, CL_MEM_READ_ONLY, m*n * sizeof(float));
Buffer bufferB = Buffer(context, CL_MEM_READ_ONLY, p*n * sizeof(float));
Buffer bufferC = Buffer(context, CL_MEM_WRITE_ONLY, m*p * sizeof(float));
// Copy lists A and B to the memory buffers
queue.enqueueWriteBuffer(bufferA, CL_TRUE, 0, m * n * sizeof(float), A);
queue.enqueueWriteBuffer(bufferB, CL_TRUE, 0, p * n * sizeof(float), B);
// Set arguments to kernel
kernel.setArg(0, m);
kernel.setArg(1, n);
kernel.setArg(2, p);
kernel.setArg(3, bufferA);
kernel.setArg(4, bufferB);
kernel.setArg(5, bufferC);
// Run the kernel on specific ND range
NDRange global((ceil((float)(p)/16))*16,(ceil((float)(m)/8))*8);
NDRange local(16,8);
queue.enqueueNDRangeKernel(kernel, NullRange, global, local);
// Read buffer C into a local list
float *C = new float[m*p];
queue.enqueueReadBuffer(bufferC, CL_TRUE, 0, m*p * sizeof(float), C);
// check the correctness of the result
float *c = new float[m*p];
for(int i = 0; i < m; i++)
for(int j = 0; j < p; j++)
{
float z = 0.0;
for(int k = 0; k < n; k++)
{
z += A[i*n+k] * B[k*p+j];
}
c[i*p+j] = z;
}
for(int i = 0; i < m*p; i++)
{
if(fabs(c[i]-C[i])>0.001)
std::cout<<i<<" "<<c[i]<<" "<<C[i]<<std::endl;
}
delete []A;
delete []B;
delete []C;
}
catch(Error error)
{
std::cout << error.what() << "(" << error.err() << ")" << std::endl;
}
return 0;
}
Your bounds checking code inside your OpenCL kernel is incorrect. Instead of this:
if(i < 0 || j < 0 || i > Mdim || j > Pdim) return;
You should have this:
if(i < 0 || j < 0 || i >= Mdim || j >= Pdim) return;
Let's assume that you have a float matrix of size 1000x1000:
const int size = 1000;
// Whatever
float* myMatrix = (float*)calloc(size * size, sizeof(*myMatrix));
Determine the size of the Local Group first:
size_t localSize[] = {16, 8};
Then determine how many Local Groups you need (note that plain integer division truncates, so round up explicitly):
size_t numLocalGroups[] = {(size + localSize[0] - 1) / localSize[0], (size + localSize[1] - 1) / localSize[1]};
Finally, determine the NDRange size:
size_t globalSize[] = {localSize[0] * numLocalGroups[0], localSize[1] * numLocalGroups[1]};
Don't forget to handle out-of-bounds accesses in the right-most Local Groups.
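Putting this together with the bounds check from the previous answer, here is a sketch of the launch (C API; queue and kernel are assumed to be a valid cl_command_queue and cl_kernel from the rest of the host code):
// Sketch only: launch the rounded-up NDRange; the extra work-items in the
// right-most groups return early thanks to the i >= Mdim || j >= Pdim check.
cl_int err = clEnqueueNDRangeKernel(queue, kernel,
                                    2,          // 2-dimensional NDRange
                                    NULL,       // no global offset
                                    globalSize, // e.g. {1008, 1000} for size = 1000, localSize = {16, 8}
                                    localSize,
                                    0, NULL, NULL);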
So, my assignment is to write a spell check program and then parallelize it using OpenMPI. My approach was to load the words from a text file into an array called dict[], which serves as my dictionary. Next, I get input from the user, and the program is supposed to go through the dictionary array and check whether the current word is within the threshold percentage; if it is, print it out (but only a certain number of words should be printed). My problem is that my suggestions[] array doesn't fill up the way I need it to and ends up with a lot of blank spots, whereas I thought that, the way I wrote it, it would only be filled when a word is within the threshold, so there shouldn't be any blanks until no more words are being added. I think it's close to being finished but I can't figure this part out. Any help is appreciated.
#include <stdio.h>
#include <mpi.h>
#include <string.h>
#include <stdlib.h>
#define SIZE 30
#define max(x,y) (((x) > (y)) ? (x) : (y))
char *dict[50000];
char *suggestions[50000];
char enterWord[50];
char *myWord;
int wordsToPrint = 20;
int threshold = 40;
int i;
int words_added = 0;
int levenshtein(const char *word1, int len1, const char *word2, int len2){
int matrix[len1 + 1][len2 + 1];
int a;
for(a=0; a<= len1; a++){
matrix[a][0] = a;
}
for(a=0;a<=len2;a++){
matrix[0][a] = a;
}
for(a = 1; a <= len1; a++){
int j;
char c1;
c1 = word1[a-1];
for(j = 1; j <= len2; j++){
char c2;
c2 = word2[j-1];
if(c1 == c2){
matrix[a][j] = matrix[a-1][j-1];
}
else{
int delete, insert, substitute, minimum;
delete = matrix[a-1][j] +1;
insert = matrix[a][j-1] +1;
substitute = matrix[a-1][j-1] +1;
minimum = delete;
if(insert < minimum){
minimum = insert;
}
if(substitute < minimum){
minimum = substitute;
}
matrix[a][j] = minimum;
}//else
}//for
}//for
return matrix[len1][len2];
}//levenshtein
void prompt(){
printf("Enter word to search for: \n");
scanf("%s", &enterWord);
}
int p0_compute_output(int num_processes, char *word1){
int totalNumber = 0;
int k = 0;
int chunk = 50000 / num_processes;
for(i = 0; i < chunk; i++){
int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
if(thresholdPercentage < threshold){
suggestions[totalNumber] = dict[i];
totalNumber = totalNumber + 1;
}
}//for
return totalNumber;
}//p0_compute_output
void p0_receive_output(int next_addition){
int num_to_add;
MPI_Comm comm;
MPI_Status status;
MPI_Recv(&num_to_add,1,MPI_INT,MPI_ANY_SOURCE, MPI_ANY_TAG,MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("--%d\n", num_to_add);
suggestions[next_addition] = dict[num_to_add];
next_addition = next_addition + 1;
}
void compute_output(int num_processes, int me, char *word1){
int chunk = 0;
int last_chunk = 0;
MPI_Comm comm;
if(50000 % num_processes == 0){
chunk = 50000 / num_processes;
last_chunk = chunk;
int start = me * chunk;
int end = me * chunk + chunk;
for(i = start; i < end;i++){
int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
if(thresholdPercentage < threshold){
int number_to_send = i;
MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
}
}
}
else{
chunk = 50000 / num_processes;
last_chunk = 50000 - ((num_processes - 1) * chunk);
if(me != num_processes){
int start = me * chunk;
int end = me * chunk + chunk;
for(i = start; i < end; i++){
int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
if(thresholdPercentage < threshold){
int number_to_send = i;
MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
}//if
}//for
}//if me != num_processes
else{
int start = me * chunk;
int end = 50000 - start;
for(i = start; i < end; i++){
int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
if(thresholdPercentage < threshold){
int number_to_send = i;
MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
}
}
}//me == num_processes
}//BIG else
return;
}//COMPUTE OUTPUT
void set_data(){
prompt();
MPI_Bcast(&enterWord,20 ,MPI_CHAR, 0, MPI_COMM_WORLD);
}//p0_send_inpui
//--------------------------MAIN-----------------------------//
main(int argc, char **argv){
int ierr, num_procs, my_id, loop;
FILE *myFile;
loop = 0;
for(i=0;i<50000;i++){
suggestions[i] = calloc(SIZE, sizeof(char));
}
ierr = MPI_Init(NULL, NULL);
ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
printf("Check in from %d of %d processors\n", my_id, num_procs);
set_data();
myWord = enterWord;
myFile = fopen("words", "r");
if(myFile != NULL){
for(i=0;i<50000;i++){
dict[i] = calloc(SIZE, sizeof(char));
fscanf(myFile, "%s", dict[i]);
}//for
fclose(myFile);
}//read word list into dictionary
else printf("File not found");
if(my_id == 0){
words_added = p0_compute_output(num_procs, enterWord);
printf("words added so far: %d\n", words_added);
p0_receive_output(words_added);
printf("Threshold: %d\nWords To print: %d\n%s\n", threshold, wordsToPrint, myWord);
ierr = MPI_Finalize();
}
else{
printf("my word %s*\n", enterWord);
compute_output(num_procs, my_id, enterWord);
// printf("Process %d terminating...\n", my_id);
ierr = MPI_Finalize();
}
for(i=0;i<wordsToPrint;i++){
printf("*%s\n", suggestions[i]);
}//print suggestions
return (0);
}//END MAIN
Here are a few problems I see with what you're doing:
- prompt() should only be called by rank 0.
- The dictionary file should be read only by rank 0, then the array broadcast out to the other ranks.
  - Alternatively, have rank 1 read the file while rank 0 is waiting for input, and broadcast the input and the dictionary afterwards.
- You're making the compute_output step overly complex. You can merge p0_compute_output and compute_output into one routine.
  - Store an array of indices into dict in each rank.
  - This array will not be the same size in every rank, so the simplest approach is to send from each rank a single integer indicating the size of the array, then send the array with that size (the receiving rank must know how much data to expect); see the sketch after this list. You could also use the sizes for MPI_Gatherv, but I expect this is more than you're wanting to do right now.
  - Once you have a single array of indices in rank 0, use it to fill suggestions.
- Save the MPI_Finalize call until immediately before the return call.
- For the final printf loop, only rank 0 should be printing. I suspect this is causing a large part of the "incorrect" result: as you have it, all ranks print suggestions, but it is only filled in rank 0, so the others will all print blank entries.
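Here is a minimal sketch of that count-then-data exchange, assuming each non-root rank has collected its matching dictionary indices into a local my_indices array of length num_found (both names are illustrative, not from the original code), while dict, suggestions, words_added, my_id and num_procs are the ones from the question:
// Sketch only: non-root ranks report a count, then the indices themselves;
// rank 0 receives both and translates the indices back into words.
if (my_id != 0) {
    MPI_Send(&num_found, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    MPI_Send(my_indices, num_found, MPI_INT, 0, 1, MPI_COMM_WORLD);
} else {
    int total = words_added; // rank 0's own matches are already in suggestions
    for (int src = 1; src < num_procs; src++) {
        int incoming;
        MPI_Recv(&incoming, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        int *buf = (int*) malloc(incoming * sizeof(int));
        MPI_Recv(buf, incoming, MPI_INT, src, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int k = 0; k < incoming; k++)
            suggestions[total++] = dict[buf[k]]; // indices back into words
        free(buf);
    }
}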
Try some of these changes, especially the last one, and see if that helps.