OpenCL: wrong values when reading from multiple GPUs

I have a kernel function that only writes a number to a __global int* c.
To be specific, it looks like this:
__kernel void Add1(__global int* c)
{
    *c = 3;
}
and in the host code I have allocated memory for the c value:
cl_mem bufferC[deviceNumber]; // deviceNumber = 8
for (int i = 0; i < deviceNumber; i++) {
    bufferC[i] = clCreateBuffer(context[i], CL_MEM_WRITE_ONLY, sizeof(cl_int) * global_size, NULL, &error);
}
for (int i = 0; i < deviceNumber; i++) {
    error = clSetKernelArg(kernel[i], 0, sizeof(cl_mem), (void*)&bufferC[i]);
}
for (int i = 0; i < deviceNumber; i++) {
    error = clEnqueueReadBuffer(commandQueue[i], bufferC[i], CL_TRUE, 0, sizeof(cl_int) * global_size, &c[i], 0, NULL, NULL);
}
and I print it like:
for (size_t i = 0; i < deviceNumber; ++i)
{
    std::cout << "delta = " << c[i] << std::endl;
}
and output:
delta = 3
delta = 11165
delta = -1329524360
delta = 11165
delta = 0
delta = 0
delta = -1329520352
delta = 11165
So the first value is OK, but the rest is garbage. Do you know what mistake I made?
Of course this is only partial code, but I think I pasted all the lines concerning that c value. The global size is set to 1.

Well, my mistake was that I created a number of contexts, but as the argument I passed one device instead of an array of them. I found it by printing the error codes in the program - try doing that if you run into problems! Cheers
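For anyone hitting the same thing, here is a minimal sketch of the pattern, assuming one context, one queue and one buffer per device; the device[] array and the error-handling style are illustrative, not the asker's exact code:
// Sketch: one context/queue/buffer per device, with every error code checked.
for (int i = 0; i < deviceNumber; i++) {
    cl_int error;
    // Pass exactly ONE device here, not the whole device array.
    context[i] = clCreateContext(NULL, 1, &device[i], NULL, NULL, &error);
    if (error != CL_SUCCESS) { printf("clCreateContext[%d] failed: %d\n", i, error); return 1; }
    commandQueue[i] = clCreateCommandQueue(context[i], device[i], 0, &error);
    if (error != CL_SUCCESS) { printf("clCreateCommandQueue[%d] failed: %d\n", i, error); return 1; }
    bufferC[i] = clCreateBuffer(context[i], CL_MEM_WRITE_ONLY,
                                sizeof(cl_int) * global_size, NULL, &error);
    if (error != CL_SUCCESS) { printf("clCreateBuffer[%d] failed: %d\n", i, error); return 1; }
}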

Related

Do I need more events when timing multiple work-items?

If I have more than one work-item to execute some kernel code, do I need to have more events to track the execution time for each work-item?
I am getting some strange results: 1 work-item takes about 4 seconds to execute, and 100 work-items also take about 4 seconds. I can't see how this could be possible, since my Nvidia GeForce GT 525M only has 2 compute units, each with 48 processing elements. This leads me to believe that the event I passed as an argument to clEnqueueNDRangeKernel tracks only one work-item. Is that true, and if so, how can I get it to track all the work-items?
This is what the Khronos user guide says about the event argument in clEnqueueNDRangeKernel:
event returns an event object that identifies this particular kernel execution instance
What is the meaning of "this particular kernel execution instance"? Isn't that a single work-item?
EDIT:
Relevant host code:
static const size_t numberOfWorkItems = 48;
const size_t globalWorkSize[] = { numberOfWorkItems, 0, 0 };
cl_event events;
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, globalWorkSize, NULL, 0, NULL, &events);
ret = clEnqueueReadBuffer(command_queue, memobj, CL_TRUE, 0, sizeof(cl_mem), val, 0, NULL, NULL);
clWaitForEvents(1, &events);
cl_ulong time_start;
cl_ulong time_end;
clGetEventProfilingInfo(events, CL_PROFILING_COMMAND_QUEUED, sizeof(cl_ulong), &time_start, NULL);
clGetEventProfilingInfo(events, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &time_end, NULL);
double nanoSeconds = (double) (time_end - time_start);
printf("OpenCl Execution time is: %f milliseconds \n",nanoSeconds / 1000000.0);
printf("Result: %lu\n", val[0]);
Kernel code:
kernel void parallel_operation(__global ulong *val) {
    size_t i = get_global_id(0);
    int n = 48;
    local unsigned int result[48];
    for (int z = 0; z < n; z++) {
        result[z] = 0;
    }
    // here comes the long operation
    for (ulong k = 0; k < 2000; k++) {
        for (ulong j = 0; j < 10000; j++) {
            result[i] += (j * 3) % 5;
        }
    }
    barrier(CLK_LOCAL_MEM_FENCE);
    if (i == 0) {
        for (int z = 1; z < n; z++) {
            result[0] += result[z];
        }
        *val = result[0];
    }
}
You are measuring the execution time of your entire kernel invocation - in other words, the time between the first work-item starting and the last work-item finishing. To my knowledge there is no way to measure the execution time of a single work-item in OpenCL.
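If what you actually want is the time for the whole kernel (all work-items together), the usual approach is to create the command queue with profiling enabled and take the difference between CL_PROFILING_COMMAND_START and CL_PROFILING_COMMAND_END (COMMAND_QUEUED also includes the time the command sat in the queue). A minimal sketch, assuming the context, device, kernel and globalWorkSize already exist:
// Sketch: whole-kernel timing via event profiling; error checking omitted.
cl_int err;
cl_command_queue queue = clCreateCommandQueue(context, device, CL_QUEUE_PROFILING_ENABLE, &err);
cl_event kernel_event;
err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, globalWorkSize, NULL, 0, NULL, &kernel_event);
clWaitForEvents(1, &kernel_event);   // make sure the kernel has finished before reading the timestamps
cl_ulong start = 0, end = 0;
clGetEventProfilingInfo(kernel_event, CL_PROFILING_COMMAND_START, sizeof(start), &start, NULL);
clGetEventProfilingInfo(kernel_event, CL_PROFILING_COMMAND_END,   sizeof(end),   &end,   NULL);
printf("Kernel time: %f ms\n", (double)(end - start) / 1000000.0);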

MPI Scatterv: How to deal with the root process?

The thing I am still not too certain about is what happens with the root process in MPI_Scatter / MPI_Scatterv.
If I divide an array as I do in my code, do I need to include the root process in the number of receivers (hence making sendcounts of size nproc), or is it excluded?
In my example code for matrix multiplication, I still get an error where one of the processes runs into aberrant behaviour and terminates the program prematurely:
void readMatrix();

double StartTime;
int rank, nproc, proc;
//double matrix_A[N_ROWS][N_COLS];
double **matrix_A;
//double matrix_B[N_ROWS][N_COLS];
double **matrix_B;
//double matrix_C[N_ROWS][N_COLS];
double **matrix_C;
int low_bound = 0;   //low bound of the number of rows of each process
int upper_bound = 0; //upper bound of the number of rows of [A] of each process
int portion = 0;     //portion of the number of rows of [A] of each process

int main (int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    matrix_A = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_A[i] = (double *)malloc(N_COLS * sizeof(double));
    matrix_B = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_B[i] = (double *)malloc(N_COLS * sizeof(double));
    matrix_C = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_C[i] = (double *)malloc(N_COLS * sizeof(double));
    int *counts = new int[nproc](); // array to hold number of items to be sent to each process
    // -------------------> If we have more than one process, we can distribute the work through scatterv
    if (nproc > 1) {
        // -------------------> Process 0 initalizes matrices and scatters the portions of the [A] Matrix
        if (rank==0) {
            readMatrix();
        }
        StartTime = MPI_Wtime();
        int counter = 0;
        for (int proc = 0; proc < nproc; proc++) {
            counts[proc] = N_ROWS / nproc;
            counter += N_ROWS / nproc;
        }
        counter = N_ROWS - counter;
        counts[nproc-1] = counter;
        //set bounds for each process
        low_bound = rank*(N_ROWS/nproc);
        portion = counts[rank];
        upper_bound = low_bound + portion;
        printf("I am process %i and my lower bound is %i and my portion is %i and my upper bound is %i \n", rank, low_bound, portion, upper_bound);
        //scatter the work among the processes
        int *displs = new int[nproc]();
        displs[0] = 0;
        for (int proc = 1; proc < nproc; proc++) displs[proc] = displs[proc-1] + (N_ROWS/nproc);
        MPI_Scatterv(matrix_A, counts, displs, MPI_DOUBLE, &matrix_A[low_bound][0], portion, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        //broadcast [B] to all the slaves
        MPI_Bcast(&matrix_B, N_ROWS*N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        // -------------------> Everybody does their work
        for (int i = low_bound; i < upper_bound; i++) {  //iterate through a given set of rows of [A]
            for (int j = 0; j < N_COLS; j++) {           //iterate through columns of [B]
                for (int k = 0; k < N_ROWS; k++) {       //iterate through rows of [B]
                    matrix_C[i][j] += (matrix_A[i][k] * matrix_B[k][j]);
                }
            }
        }
        // -------------------> Process 0 gathers the work
        MPI_Gatherv(&matrix_C[low_bound][0], portion, MPI_DOUBLE, matrix_C, counts, displs, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }
...
The root process also takes part on the receiving side. If you are not interested in that, just set sendcounts[root] = 0.
See MPI_Scatterv for specific information on which values you have to pass exactly.
However, take care of what you are doing. I strongly suggest that you change the way you allocate your matrix to a one-dimensional array, using a single malloc like this:
double* matrix = (double*) malloc( N_ROWS * N_COLS * sizeof(double) );
If you still want to use a two-dimensional array, then you may need to define your own MPI derived datatype.
The datatype you are passing is not valid if you want to send more than one row in a single MPI transfer.
With MPI_DOUBLE you are telling MPI that the buffer contains a contiguous array of count MPI_DOUBLE values.
Since you are allocating the two-dimensional array with multiple malloc calls, your data is not contiguous.
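As an illustration, a sketch of the contiguous layout with MPI_Scatterv and MPI_Bcast could look like the following. This is not your exact code: A_local, sendcounts and displs are names introduced here, readMatrix() is assumed to fill the flat arrays, and the last rank simply receives the remainder rows as well.
// Sketch: flat 1-D allocations so MPI sees contiguous buffers; element (i,j) = A[i*N_COLS + j].
double *A       = NULL;                                          // full matrix, root only
double *A_local = malloc((size_t)portion * N_COLS * sizeof(double));
double *B       = malloc((size_t)N_ROWS * N_COLS * sizeof(double));
if (rank == 0) {
    A = malloc((size_t)N_ROWS * N_COLS * sizeof(double));
    readMatrix();                                                // assumed to fill A and B
}
int *sendcounts = malloc(nproc * sizeof(int));
int *displs     = malloc(nproc * sizeof(int));
for (int p = 0; p < nproc; p++) {
    int rows = (p == nproc - 1) ? N_ROWS - (nproc - 1) * (N_ROWS / nproc)
                                : N_ROWS / nproc;
    sendcounts[p] = rows * N_COLS;                               // counts in doubles, not rows
    displs[p]     = p * (N_ROWS / nproc) * N_COLS;
}
// The root scatters to everybody, itself included; it receives into A_local like the others.
MPI_Scatterv(A, sendcounts, displs, MPI_DOUBLE,
             A_local, sendcounts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);
MPI_Bcast(B, N_ROWS * N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);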

Swap memory buffers effectively in OpenCL: implementation

I have faced the same problem as here: How to effectively swap OpenCL memory buffers?. My first implementation was the same as described in that question: at each cycle it writes/reads the memory buffers to/from the device. As pointed out, this introduces useless read/write buffer overhead. The code below (with the memory overhead) works fine:
//THIS WORKS!!!
f0_mem = clCreateBuffer(
        context,
        CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
        sizeof (int)*(capacity + 1),
        NULL,
        &err);
f1_mem = (..."the same as above"...);
m_d_mem = clCreateBuffer(..., CL_MEM_WRITE_ONLY | CL_MEM_ALLOC_HOST_PTR, sizeof (int)*capacity,...);
for (int k = 0; k < numelem; k++) {
    sumK = sumK - weight[k];
    cmax = 0;
    cmax = max(capacity - sumK, weight[k]);
    total_elements = (size_t) (capacity - cmax + 1);
    if (k % 2 == 0) {
        //clEnqueueWriteBuffer of cl_mem buffers
        writeBufferToDevice(f0_mem, f1_mem, f0, f1);
        setKernelArgs(f0_mem, f1_mem, weight[k], value[k], (int) total_elements);
    } else {
        //clEnqueueWriteBuffer of cl_mem buffers
        writeBufferToDevice(f1_mem, f0_mem, f1, f0);
        setKernelArgs(f1_mem, f0_mem, weight[k], value[k], (int) total_elements);
    }
    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, global_work_items, NULL, 0, NULL, NULL);
    //clEnqueueReadBuffer of cl_mem buffers
    readBufferFromDevice(f0_mem, f1_mem, m_d_mem, f0, f1, m_d);
    memcpy(M + k*capacity, m_d, sizeof (int)*capacity);
}
EDIT: My kernel:
void kernel knapsack(global int *input_f, global int *output_f, global int *m_d, int cmax, int weightk, int pk, int maxelem){
    int c = get_global_id(0) + cmax;
    if(get_global_id(0) < maxelem){
        if(input_f[c] < input_f[c - weightk] + pk){
            output_f[c] = input_f[c - weightk] + pk;
            m_d[c-1] = 1;
        }
        else{
            output_f[c] = input_f[c];
        }
    }
}
After that I tried to implement the two suggested solutions:
simply swapping the arguments with setKernelArgs(...)
creating two kernels
For the first one, this is my code:
//ARGUMENTS SWAP
f0_mem = ...
f1_mem = ...
m_d_mem = ...
//clEnqueueWriteBuffer occurs here
writeBufferToDevice( (cl_mem&) f0_mem, (cl_mem&) f1_mem, (cl_mem&) m_d_mem, (int*) f0, (int*) f1, (int*) m_d);
for (int k = 0; k < numelem; k++) {
    /*
    The same code block
    */
    if (k % 2 == 0) {
        setKernelArgs(f0_mem, f1_mem, weight[k], value[k], (int) total_elements);
    } else {
        setKernelArgs(f1_mem, f0_mem, weight[k], value[k], (int) total_elements);
    }
    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, global_work_items, NULL, 0, NULL, NULL);
    err = clEnqueueReadBuffer(queue, m_d_mem, CL_TRUE, 0, sizeof (int)*capacity, m_d, 0, NULL, NULL);
    memcpy(M + k*capacity, m_d, sizeof (int)*capacity);
}
The second solution is implemented in this way:
//TWO KERNELS
f0_mem = ...
f1_mem = ...
m_d_mem = ...
//clEnqueueWriteBuffer occurs here
writeBufferToDevice( (cl_mem&) f0_mem, (cl_mem&) f1_mem, (cl_mem&) m_d_mem, (int*) f0, (int*) f1, (int*) m_d);
for (int k = 0; k < numelem; k++) {
    /*
    The same code block
    */
    if (k % 2 == 0) {
        setKernelArgs(f0_mem, f1_mem, weight[k], value[k], (int) total_elements);
        clEnqueueNDRangeKernel(queue, kernel0, 1, NULL, global_work_items, NULL, 0, NULL, NULL);
    } else {
        setKernelArgs(kernel1, f1_mem, f0_mem, weight[k], value[k], (int) total_elements);
        clEnqueueNDRangeKernel(queue, kernel1, 1, NULL, global_work_items, NULL, 0, NULL, NULL);
    }
    clEnqueueReadBuffer(queue, m_d_mem, CL_TRUE, 0, sizeof (int)*capacity, m_d, 0, NULL, NULL);
    memcpy(M + k*capacity, m_d, sizeof (int)*capacity);
}
Neither of the two solutions works for me (it seems to me that no swapping occurs at all!). What am I doing wrong?
Sub-question: in the last two solutions, is it possible to have the memory buffers filled with zeroes without calling writeBufferToDevice(f0_mem, f1_mem, m_d_mem, ...) before the for cycle?
This work is based on this article:
Solving knapsack problems on GPU by V. Boyer, D. El Baz, M. Elkihel
related work: Accelerating the knapsack problem on GPUs by Bharath Suri
Both attempted solutions look correct to me, but there may be some dependency between iterations - you would have to post your kernel to check.
Your original version probably works because you write and read the buffers every iteration, which is slower, so there is enough time for everything to synchronize.
You can try adding clFinish(command); after each OpenCL API call to see if that makes a difference.
Apart from that, there is a third solution you could try: swapping the pointers inside the kernel. You would need to move your loop from the CPU to the GPU.
inline void swap_pointers(__global double **A, __global double **B)
{
    __global double *tmp = *A;
    *A = *B;
    *B = tmp;
}

__kernel void my_kernel(
    __global double *pA,
    __global double *pB,
    ...
    )
{
    for (int k = 0; k < numelem; k++)
    {
        // some stuff here
        swap_pointers(&pA, &pB);
        barrier(CLK_GLOBAL_MEM_FENCE | CLK_LOCAL_MEM_FENCE);
    }
}
Then read everything in one go on the host (m_d_mem must be big enough to store data from all iterations):
clEnqueueReadBuffer(queue, m_d_mem, CL_TRUE, 0, sizeof (int)*capacity*numelem, m_d, 0, NULL, NULL);
Solution:
At each cycle, after copying m_d to M, m_d has to be reset and written back to the m_d_mem buffer object with Knapsack::writeBuffer_m_d_ToDevice():
ksack.readBuffer_m_d_FromDevice();
memcpy(M + k*capacity, m_d, sizeof (int)*capacity);
ksack.writeBuffer_m_d_ToDevice();//resets m_d_mem
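Regarding the sub-question: if the platform supports OpenCL 1.2, the buffers could probably be zeroed directly on the device with clEnqueueFillBuffer instead of uploading zeroed host arrays. A sketch, using the buffer sizes from the creation code above and not tested against this particular setup:
// Sketch (OpenCL >= 1.2): fill f0_mem, f1_mem and m_d_mem with zeroes on the device.
cl_int zero = 0;
clEnqueueFillBuffer(queue, f0_mem,  &zero, sizeof(zero), 0, sizeof(int) * (capacity + 1), 0, NULL, NULL);
clEnqueueFillBuffer(queue, f1_mem,  &zero, sizeof(zero), 0, sizeof(int) * (capacity + 1), 0, NULL, NULL);
clEnqueueFillBuffer(queue, m_d_mem, &zero, sizeof(zero), 0, sizeof(int) * capacity,       0, NULL, NULL);
clFinish(queue);   // make sure the fills complete before the first kernel launch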

OpenCL clEnqueueNDRangeKernel how to set work group size correctly

In OpenCL, if I want to add two N-dimensional vectors, the global work size (globalSize) should satisfy globalSize = ceil(N/localSize) * localSize, where localSize is the local work-group size. Is this correct? If N = 1000 and localSize = 128, should globalSize be 1024? Can we always set globalSize to some multiple of localSize that is larger than needed?
I tried many times and it worked well for 1-dimensional problems.
However, when it comes to 2D problems, for example multiplying two matrices of dimensions m*n and n*p so that the result matrix is of size m*p, things get more complicated.
The max work-group size on my device is 128, so I set localSize[2] = {16,8} and
globalSize[2] = {ceil(m/16)*16, ceil(p/8)*8}.
It is similar to the 1-dimensional case, but the result is wrong!
If I set localSize[2] = {1,128} and change globalSize accordingly, I get the correct result. So where is the problem? Can anyone tell me why?
In addition, I found the indices where the matrix elements are wrong.
It seems that the result is wrong at (i,j) where i*p + j = n * c for some constant c = 1, 2, 3, ...
Why?
Here is my kernel function:
kernel void mmult(const int Mdim, const int Ndim, const int Pdim,
                  global float *A, global float *B, global float *C)
{
    int i = get_global_id(1);
    int j = get_global_id(0);
    if(i < 0 || j < 0 || i > Mdim || j > Pdim) return;
    else
    {
        float tmp = 0;
        for(int k = 0; k < Ndim; k++)
            tmp += A[i*Ndim+k] * B[k*Pdim+j];
        C[i*Pdim + j] = tmp;
    }
}
And here is the host program:
#define __NO_STD_VECTOR // Use cl::vector instead of STL version
#define __CL_ENABLE_EXCEPTIONS
#include <CL/cl.hpp>
#include <utility>
#include <iostream>
#include <fstream>
#include <string>
#include <cmath>

using namespace cl;

int main()
{
    // Create the two input matrices
    int m = 1000;
    int n = 1000;
    int p = 1000;
    float *A = new float[m*n];
    float *B = new float[n*p];
    for(int i = 0; i < m*n; i++)
    {
        A[i] = i;
    }
    for(int i = 0; i < n*p; i++)
    {
        B[i] = i;
    }

    try
    {
        // Get available platforms
        vector<Platform> platforms;
        Platform::get(&platforms);

        // Select the default platform and create a context using this platform and the GPU
        cl_context_properties cps[3] =
        {
            CL_CONTEXT_PLATFORM,
            (cl_context_properties)(platforms[0])(),
            0
        };
        Context context( CL_DEVICE_TYPE_GPU, cps);

        // Get a list of devices on this platform
        vector<Device> devices = context.getInfo<CL_CONTEXT_DEVICES>();

        // Create a command queue and use the first device
        CommandQueue queue = CommandQueue(context, devices[0]);

        // Read source file
        std::ifstream sourceFile("mmul.cl");
        std::string sourceCode(
            std::istreambuf_iterator<char>(sourceFile),
            (std::istreambuf_iterator<char>()));
        Program::Sources source(1, std::make_pair(sourceCode.c_str(), sourceCode.length()+1));

        // Make program of the source code in the context
        Program program = Program(context, source);

        // Build program for these specific devices
        program.build(devices);

        // Make kernel
        Kernel kernel(program, "mmult");

        // Create memory buffers
        Buffer bufferA = Buffer(context, CL_MEM_READ_ONLY, m*n * sizeof(float));
        Buffer bufferB = Buffer(context, CL_MEM_READ_ONLY, p*n * sizeof(float));
        Buffer bufferC = Buffer(context, CL_MEM_WRITE_ONLY, m*p * sizeof(float));

        // Copy lists A and B to the memory buffers
        queue.enqueueWriteBuffer(bufferA, CL_TRUE, 0, m * n * sizeof(float), A);
        queue.enqueueWriteBuffer(bufferB, CL_TRUE, 0, p * n * sizeof(float), B);

        // Set arguments to kernel
        kernel.setArg(0, m);
        kernel.setArg(1, n);
        kernel.setArg(2, p);
        kernel.setArg(3, bufferA);
        kernel.setArg(4, bufferB);
        kernel.setArg(5, bufferC);

        // Run the kernel on specific ND range
        NDRange global((ceil((float)(p)/16))*16, (ceil((float)(m)/8))*8);
        NDRange local(16,8);
        queue.enqueueNDRangeKernel(kernel, NullRange, global, local);

        // Read buffer C into a local list
        float *C = new float[m*p];
        queue.enqueueReadBuffer(bufferC, CL_TRUE, 0, m*p * sizeof(float), C);

        // check the correctness of the result
        float *c = new float[m*p];
        for(int i = 0; i < m; i++)
            for(int j = 0; j < p; j++)
            {
                float z = 0.0;
                for(int k = 0; k < n; k++)
                {
                    z += A[i*n+k] * B[k*p+j];
                }
                c[i*p+j] = z;
            }
        for(int i = 0; i < m*p; i++)
        {
            if(fabs(c[i]-C[i])>0.001)
                std::cout<<i<<" "<<c[i]<<" "<<C[i]<<std::endl;
        }

        delete []A;
        delete []B;
        delete []C;
    }
    catch(Error error)
    {
        std::cout << error.what() << "(" << error.err() << ")" << std::endl;
    }
    return 0;
}
Your bounds checking code inside your OpenCL kernel is incorrect. Instead of this:
if(i < 0 || j < 0 || i > Mdim || j > Pdim) return;
You should have this:
if(i < 0 || j < 0 || i >= Mdim || j >= Pdim) return;
Let's assume that you have a float matrix of size 1000x1000:
const int size = 1000;
// Whatever
float* myMatrix = (float*)calloc(size * size, sizeof(*myMatrix));
Determine the size of the local work-group first:
size_t localSize[] = {16, 8};
Then determine how many local groups you need (round up with integer arithmetic, since a plain size/localSize would truncate):
size_t numLocalGroups[] = {(size + localSize[0] - 1) / localSize[0], (size + localSize[1] - 1) / localSize[1]};
Finally, determine the NDRange (global) size:
size_t globalSize[] = {localSize[0] * numLocalGroups[0], localSize[1] * numLocalGroups[1]};
Don't forget to handle out-of-bounds access in the right-most local groups.
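Putting the two answers together, a compact sketch of the usual pattern (names are illustrative): round the global size up on the host with integer arithmetic and guard the padded work-items in the kernel.
// Host side: integer round-up to a multiple of the local size.
size_t localSize[2]  = {16, 8};
size_t globalSize[2] = {
    ((size_t)p + localSize[0] - 1) / localSize[0] * localSize[0],   // maps to get_global_id(0), columns
    ((size_t)m + localSize[1] - 1) / localSize[1] * localSize[1]    // maps to get_global_id(1), rows
};
// Kernel side: skip the padded work-items.
//     if (i >= Mdim || j >= Pdim) return;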

Spellcheck program using MPI

So, my assignment is to write a spell-check program and then parallelize it using Open MPI. My approach was to load the words from a text file into an array called dict[], which is used as my dictionary. Next, I get input from the user, and the program is supposed to go through the dictionary array and check whether the current word is within the threshold percentage; if it is, print it out. But I'm only supposed to print out a certain number of words.
My problem is that my suggestions[] array doesn't seem to fill up the way I need it to, and it gets a lot of blank spots in it. The way I wrote it, it should only be filled when a word is within the threshold, so it shouldn't get any blanks until no more words are being added. I think it's close to being finished, but I can't seem to figure this part out. Any help is appreciated.
#include <stdio.h>
#include <mpi.h>
#include <string.h>
#include <stdlib.h>

#define SIZE 30
#define max(x,y) (((x) > (y)) ? (x) : (y))

char *dict[50000];
char *suggestions[50000];
char enterWord[50];
char *myWord;
int wordsToPrint = 20;
int threshold = 40;
int i;
int words_added = 0;

int levenshtein(const char *word1, int len1, const char *word2, int len2){
    int matrix[len1 + 1][len2 + 1];
    int a;
    for(a = 0; a <= len1; a++){
        matrix[a][0] = a;
    }
    for(a = 0; a <= len2; a++){
        matrix[0][a] = a;
    }
    for(a = 1; a <= len1; a++){
        int j;
        char c1;
        c1 = word1[a-1];
        for(j = 1; j <= len2; j++){
            char c2;
            c2 = word2[j-1];
            if(c1 == c2){
                matrix[a][j] = matrix[a-1][j-1];
            }
            else{
                int delete, insert, substitute, minimum;
                delete = matrix[a-1][j] + 1;
                insert = matrix[a][j-1] + 1;
                substitute = matrix[a-1][j-1] + 1;
                minimum = delete;
                if(insert < minimum){
                    minimum = insert;
                }
                if(substitute < minimum){
                    minimum = substitute;
                }
                matrix[a][j] = minimum;
            }//else
        }//for
    }//for
    return matrix[len1][len2];
}//levenshtein

void prompt(){
    printf("Enter word to search for: \n");
    scanf("%s", &enterWord);
}

int p0_compute_output(int num_processes, char *word1){
    int totalNumber = 0;
    int k = 0;
    int chunk = 50000 / num_processes;
    for(i = 0; i < chunk; i++){
        int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
        int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
        if(thresholdPercentage < threshold){
            suggestions[totalNumber] = dict[i];
            totalNumber = totalNumber + 1;
        }
    }//for
    return totalNumber;
}//p0_compute_output

void p0_receive_output(int next_addition){
    int num_to_add;
    MPI_Comm comm;
    MPI_Status status;
    MPI_Recv(&num_to_add, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("--%d\n", num_to_add);
    suggestions[next_addition] = dict[num_to_add];
    next_addition = next_addition + 1;
}

void compute_output(int num_processes, int me, char *word1){
    int chunk = 0;
    int last_chunk = 0;
    MPI_Comm comm;
    if(50000 % num_processes == 0){
        chunk = 50000 / num_processes;
        last_chunk = chunk;
        int start = me * chunk;
        int end = me * chunk + chunk;
        for(i = start; i < end; i++){
            int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
            int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
            if(thresholdPercentage < threshold){
                int number_to_send = i;
                MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
            }
        }
    }
    else{
        chunk = 50000 / num_processes;
        last_chunk = 50000 - ((num_processes - 1) * chunk);
        if(me != num_processes){
            int start = me * chunk;
            int end = me * chunk + chunk;
            for(i = start; i < end; i++){
                int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
                int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
                if(thresholdPercentage < threshold){
                    int number_to_send = i;
                    MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
                }//if
            }//for
        }//if me != num_processes
        else{
            int start = me * chunk;
            int end = 50000 - start;
            for(i = start; i < end; i++){
                int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
                int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
                if(thresholdPercentage < threshold){
                    int number_to_send = i;
                    MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
                }
            }
        }//me == num_processes
    }//BIG else
    return;
}//COMPUTE OUTPUT

void set_data(){
    prompt();
    MPI_Bcast(&enterWord, 20, MPI_CHAR, 0, MPI_COMM_WORLD);
}//p0_send_input

//--------------------------MAIN-----------------------------//
int main(int argc, char **argv){
    int ierr, num_procs, my_id, loop;
    FILE *myFile;
    loop = 0;
    for(i = 0; i < 50000; i++){
        suggestions[i] = calloc(SIZE, sizeof(char));
    }
    ierr = MPI_Init(NULL, NULL);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
    printf("Check in from %d of %d processors\n", my_id, num_procs);
    set_data();
    myWord = enterWord;
    myFile = fopen("words", "r");
    if(myFile != NULL){
        for(i = 0; i < 50000; i++){
            dict[i] = calloc(SIZE, sizeof(char));
            fscanf(myFile, "%s", dict[i]);
        }//for
        fclose(myFile);
    }//read word list into dictionary
    else printf("File not found");
    if(my_id == 0){
        words_added = p0_compute_output(num_procs, enterWord);
        printf("words added so far: %d\n", words_added);
        p0_receive_output(words_added);
        printf("Threshold: %d\nWords To print: %d\n%s\n", threshold, wordsToPrint, myWord);
        ierr = MPI_Finalize();
    }
    else{
        printf("my word %s*\n", enterWord);
        compute_output(num_procs, my_id, enterWord);
        // printf("Process %d terminating...\n", my_id);
        ierr = MPI_Finalize();
    }
    for(i = 0; i < wordsToPrint; i++){
        printf("*%s\n", suggestions[i]);
    }//print suggestions
    return (0);
}//END MAIN
Here are a few problems I see with what you're doing:
prompt() should only be called by rank 0.
The dictionary file should be read only by rank 0, which then broadcasts the array to the other ranks.
Alternatively, have rank 1 read the file while rank 0 is waiting for input, and broadcast the input and the dictionary afterwards.
You're making the compute_output step overly complex. You can merge p0_compute_output and compute_output into one routine.
Store an array of indices into dict in each rank.
This array will not be the same size in every rank, so the simplest way to handle it is to have each rank send a single integer indicating the size of its array, then send the array itself; the receiving rank must know how much data to expect (see the sketch after this answer). You could also use the sizes for MPI_Gatherv, but I expect that is more than you want to do right now.
Once you have a single array of indices in rank 0, use it to fill suggestions.
Save the MPI_Finalize call until immediately before the return statement.
Only rank 0 should execute the final printf loop. I suspect this is causing a large part of the "incorrect" result: as you have it, all ranks print suggestions, but it is only filled in rank 0, so the others will all print blank entries.
Try some of these changes, especially the last one, and see if that helps.
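A minimal sketch of the count-then-indices exchange mentioned in the list above (local_count, local_indices and recv_indices are illustrative names, not variables from the question):
// Each worker rank reports how many dictionary indices matched, then the indices themselves.
if (my_id != 0) {
    MPI_Send(&local_count, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    MPI_Send(local_indices, local_count, MPI_INT, 0, 1, MPI_COMM_WORLD);
} else {
    int *recv_indices = malloc(50000 * sizeof(int));
    for (int src = 1; src < num_procs; src++) {
        int count = 0;
        MPI_Recv(&count, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(recv_indices, count, MPI_INT, src, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int j = 0; j < count; j++) {
            suggestions[words_added++] = dict[recv_indices[j]];   // only rank 0 fills suggestions
        }
    }
    free(recv_indices);
}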
