Recursive functions - argument value not updating after some value - recursion

I have written a recursive function that takes a pointer to an integer as an argument. The integer is incremented inside the function, but I am facing a strange issue: after a certain value it never gets updated, even when I check the value at that address directly.
Below is the code:
void computeWait(long long int begin, long long int begin2, long long int w,
                 int* current, int limit)
{
    long long int next = 0, arrival = 0;
    long long int next1 = 0, service = 0;
    long long int wait = 0;
    static long long int Ta = 0;
    static long long int Ts = 0;
    static long long int W = 0;
    while(*current < limit)
    {
        next = (16807 * begin) % m;            /* m and EDRN() are defined elsewhere */
        arrival = -200 * log((double)next/m);
        next1 = (16807 * begin2) % m;
        service = -100 * log(EDRN((double)next1));
        wait = max(0, (w + service - arrival));
        Ta = Ta + arrival;
        Ts = Ts + service;
        W = W + wait;
        *current = *current + 1;
        computeWait(next, next1, wait, current, limit);
    }
    printf("\n\nTotal arrival %lld Total service %lld Total wait %lld\n",
           Ta/limit, Ts/limit, W/limit);
}
int main(int argc, char* argv[])
{
    int currentValue = 0; // iteration counter, incremented inside computeWait
    int end = 1000000;
    computeWait(1, 46831694, 0, &currentValue, end);
    return 0;
}
After 103917, the value no longer gets updated, and the program fails with a memory protection fault.
Please let me know where I am going wrong; it seems like it should be trivial to fix.
Thanks,
Neha.

Well, I did not really try to understand the code, but I saw some big numbers and the word recursion, so I would guess it's a stack overflow because your recursion is too deep. Every pass through the while loop makes another recursive call before *current reaches limit, so the call depth grows toward limit = 1000000; a default-sized stack typically runs out after on the order of 100k frames, which lines up with the failure around 103917.
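Since the while loop already advances *current, the recursive call is redundant: each iteration can feed its results into the next one instead. A minimal iterative sketch (assuming m, EDRN(), and max() are defined in the rest of your program, and <math.h>/<stdio.h> are included):

void computeWait(long long int begin, long long int begin2, long long int w,
                 int* current, int limit)
{
    long long int Ta = 0, Ts = 0, W = 0;   /* no recursion, so no statics needed */
    while (*current < limit)
    {
        long long int next = (16807 * begin) % m;
        long long int arrival = -200 * log((double)next / m);
        long long int next1 = (16807 * begin2) % m;
        long long int service = -100 * log(EDRN((double)next1));
        long long int wait = max(0, (w + service - arrival));
        Ta += arrival;
        Ts += service;
        W += wait;
        *current = *current + 1;
        /* feed this iteration's outputs into the next one instead of recursing */
        begin = next;
        begin2 = next1;
        w = wait;
    }
    printf("\n\nTotal arrival %lld Total service %lld Total wait %lld\n",
           Ta / limit, Ts / limit, W / limit);
}

This version uses constant stack space no matter how large limit is.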

Related

Arduino custom-made function is not working

I wrote my own Arduino function to measure the heart rate, but it does not work properly when it executes. To calculate the heart rate I have to do the following calculation:
heart_Rate = 60000/period;
But I realized that the period value keeps accumulating, and as a result the heart rate keeps dropping. When I test the same logic without a separate function (directly inside void loop), it works perfectly.
This is my Arduino code:
int H_val = 0;

void setup() {
  Serial.begin(9600);
}

float HeartRate() {
  int threshold = 750;
  int raw_ecg = 0;
  int E_input = 0;
  float period = 0;
  unsigned long p_time = 0;
  unsigned long c_time = 0;
  int H_rate;
  int oldvalue = 0;
  oldvalue = raw_ecg;
  raw_ecg = 0;
  raw_ecg = analogRead(A0);
  if (oldvalue < threshold && raw_ecg >= threshold) {
    p_time = c_time;
    c_time = millis();
    period = c_time - p_time;
  }
  if (period <= 0) {
    int H_rate = 0;
  } else {
    int H_rate = 60000 / period;
    return H_rate;
  }
  delay(2);
}

void loop() {
  H_val = HeartRate();
  Serial.println(H_val);
}
How do I prevent the period from accumulating?
Every time you call the function, the local variables get initialized again. In your code this means that
p_time = c_time; // c_time is still 0 here
c_time = millis();
period = c_time - p_time;
Hence period will keep growing, because millis() keeps increasing.
If you declare the variables as static, your problem will be solved:
static unsigned long p_time = 0;
static unsigned long c_time = 0;
By doing that, the variables keep existing (and keep their values) between function calls while keeping their local scope.
The reason it works inside loop() is that you never leave the loop, so the variables never get re-initialized.
Edit: it's enough to declare c_time as static, since you assign its value to p_time anyway. But there are various ways to shorten the code.
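Putting it together, a minimal sketch of the fixed function; note that raw_ecg and period also have to persist between calls (which goes slightly beyond the snippet above) so that oldvalue really holds the previous sample and non-beat calls return the last computed rate:

float HeartRate() {
  const int threshold = 750;
  static int raw_ecg = 0;            // previous sample survives between calls
  static unsigned long c_time = 0;   // time of the last detected beat
  static float period = 0;           // last measured beat-to-beat interval
  int oldvalue = raw_ecg;
  raw_ecg = analogRead(A0);
  if (oldvalue < threshold && raw_ecg >= threshold) { // rising edge = new beat
    unsigned long p_time = c_time;
    c_time = millis();
    period = c_time - p_time;
  }
  if (period <= 0) return 0;
  return 60000.0 / period;           // beats per minute
}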
It can also be a problem of data type. Changing those variables to float or int may solve the problem:
float H_val;

float HeartRate() {
  float H_rate;
  return H_rate;
}

void loop() {
  H_val = HeartRate();
}

Kernel hangs when using OpenCL pipes

I am trying to write an OpenCL kernel that uses OpenCL pipes. The kernel code is given below.
uint tid = get_global_id(0);
uint numWorkItems = get_global_size(0);
int i;
int rid;
int temp = 0, temp1 = 0;
int val;
int szgr = get_local_size(0);
int lid = get_local_id(0);
for(i = tid + start_index; i < rLen; i = i + numWorkItems){
    temp = 0;
    val = input[i];
    temp = hashTable[val - 1];
    if(temp){
        temp1 = projection[val - 1];
    }
    reserve_id_t rid1 = work_group_reserve_write_pipe(c0, szgr);
    while(is_valid_reserve_id(rid1) == false){
        rid1 = work_group_reserve_write_pipe(c0, szgr);
    }
    if(is_valid_reserve_id(rid1))
    {
        write_pipe(c0, rid1, lid, &temp);
        work_group_commit_write_pipe(c0, rid1);
    }
    reserve_id_t rid2 = work_group_reserve_write_pipe(c1, szgr);
    while(is_valid_reserve_id(rid2) == false){
        rid2 = work_group_reserve_write_pipe(c1, szgr);
    }
    if(is_valid_reserve_id(rid2))
    {
        write_pipe(c1, rid2, lid, &temp1);
        work_group_commit_write_pipe(c1, rid2);
    }
}
But work_group_reserve_write_pipe always fails, and because of this the kernel hangs at the while loop. If I remove the while loop, the code doesn't hang, but the write to the pipe doesn't happen either. Can someone tell me why this is happening?
The pipe is declared as a __write_only pipe.
About work_group_reserve_write_pipe, the spec says:

    This built-in function must be encountered by all work-items in a
    work-group executing the kernel with the same argument values;
    otherwise the behavior is undefined.

Your loop starts from tid + start_index, so after some iterations some work-items no longer reach this call while others still do. The retry while loop around the reservation triggers the same undefined behaviour.
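One way to satisfy that requirement is to keep the loop trip count uniform across the work-group, e.g. with work_group_any, so every work-item reaches the reservation with the same arguments. An untested sketch of the loop (inactive work-items write a zero pad; the pipe must be large enough, or be drained by a concurrently running consumer, since there is no retry here):

for (i = tid + start_index; work_group_any(i < rLen); i = i + numWorkItems) {
    int active = (i < rLen);
    temp = 0; temp1 = 0;
    if (active) {
        val = input[i];
        temp = hashTable[val - 1];
        if (temp) temp1 = projection[val - 1];
    }
    /* every work-item in the group executes the reservation, active or not */
    reserve_id_t rid1 = work_group_reserve_write_pipe(c0, szgr);
    if (is_valid_reserve_id(rid1)) {
        write_pipe(c0, rid1, lid, &temp);
        work_group_commit_write_pipe(c0, rid1);
    }
    reserve_id_t rid2 = work_group_reserve_write_pipe(c1, szgr);
    if (is_valid_reserve_id(rid2)) {
        write_pipe(c1, rid2, lid, &temp1);
        work_group_commit_write_pipe(c1, rid2);
    }
}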

MPI Scatterv : How to deal with the root process?

The thing I am still not certain about is what happens with the root process in MPI_Scatter / MPI_Scatterv.
If I divide an array as I do in my code, do I need to include the root process in the number of receivers (hence making sendcounts of size nproc), or is it excluded?
In my example code for matrix multiplication, one of the processes still runs into aberrant behaviour and terminates the program prematurely:
void readMatrix();

double StartTime;
int rank, nproc, proc;
//double matrix_A[N_ROWS][N_COLS];
double **matrix_A;
//double matrix_B[N_ROWS][N_COLS];
double **matrix_B;
//double matrix_C[N_ROWS][N_COLS];
double **matrix_C;
int low_bound = 0;   //low bound of the number of rows of each process
int upper_bound = 0; //upper bound of the number of rows of [A] of each process
int portion = 0;     //portion of the number of rows of [A] of each process

int main (int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    matrix_A = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_A[i] = (double *)malloc(N_COLS * sizeof(double));
    matrix_B = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_B[i] = (double *)malloc(N_COLS * sizeof(double));
    matrix_C = (double **)malloc(N_ROWS * sizeof(double*));
    for(int i = 0; i < N_ROWS; i++) matrix_C[i] = (double *)malloc(N_COLS * sizeof(double));
    int *counts = new int[nproc](); // array to hold number of items to be sent to each process
    // -------------------> If we have more than one process, we can distribute the work through scatterv
    if (nproc > 1) {
        // -------------------> Process 0 initializes matrices and scatters the portions of the [A] Matrix
        if (rank==0) {
            readMatrix();
        }
        StartTime = MPI_Wtime();
        int counter = 0;
        for (int proc = 0; proc < nproc; proc++) {
            counts[proc] = N_ROWS / nproc;
            counter += N_ROWS / nproc;
        }
        counter = N_ROWS - counter;
        counts[nproc-1] = counter;
        //set bounds for each process
        low_bound = rank*(N_ROWS/nproc);
        portion = counts[rank];
        upper_bound = low_bound + portion;
        printf("I am process %i and my lower bound is %i and my portion is %i and my upper bound is %i \n", rank, low_bound, portion, upper_bound);
        //scatter the work among the processes
        int *displs = new int[nproc]();
        displs[0] = 0;
        for (int proc = 1; proc < nproc; proc++) displs[proc] = displs[proc-1] + (N_ROWS/nproc);
        MPI_Scatterv(matrix_A, counts, displs, MPI_DOUBLE, &matrix_A[low_bound][0], portion, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        //broadcast [B] to all the slaves
        MPI_Bcast(&matrix_B, N_ROWS*N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        // -------------------> Everybody does their work
        for (int i = low_bound; i < upper_bound; i++) {   //iterate through a given set of rows of [A]
            for (int j = 0; j < N_COLS; j++) {            //iterate through columns of [B]
                for (int k = 0; k < N_ROWS; k++) {        //iterate through rows of [B]
                    matrix_C[i][j] += (matrix_A[i][k] * matrix_B[k][j]);
                }
            }
        }
        // -------------------> Process 0 gathers the work
        MPI_Gatherv(&matrix_C[low_bound][0], portion, MPI_DOUBLE, matrix_C, counts, displs, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }
    ...
The root process also takes part on the receiving side. If you are not interested in that, just set sendcounts[root] = 0.
See the documentation of MPI_Scatterv for the exact values you have to pass.
However, take care with what you are doing. I strongly suggest that you change the allocation of your matrix to a one-dimensional array, using a single malloc like this:
double* matrix = (double*) malloc( N_ROWS * N_COLS * sizeof(double) );
If you still want to use a two-dimensional array, you may need to define your type as an MPI derived datatype.
The datatype you are passing is not valid if you want to send more than one row in a single MPI transfer: with MPI_DOUBLE you are telling MPI that the buffer contains a contiguous array of count MPI_DOUBLE values, but since you allocate the two-dimensional array with multiple malloc calls, your data is not contiguous.
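For example, a sketch using the question's names (N_ROWS, N_COLS, nproc, low_bound, portion): contiguous storage with row pointers on top so matrix_A[i][j] still works, and counts/displacements expressed in MPI_DOUBLE elements rather than rows:

double *A_data = (double*) malloc((size_t)N_ROWS * N_COLS * sizeof(double));
double **matrix_A = (double**) malloc(N_ROWS * sizeof(double*));
for (int r = 0; r < N_ROWS; r++)
    matrix_A[r] = A_data + (size_t)r * N_COLS;    /* matrix_A[i][j] still works */

for (int p = 0; p < nproc; p++) {
    int rows = N_ROWS / nproc;
    if (p == nproc - 1) rows += N_ROWS % nproc;   /* last rank takes the remainder */
    counts[p] = rows * N_COLS;                    /* elements, not rows */
    displs[p] = p * (N_ROWS / nproc) * N_COLS;
}
/* the root already holds its own block inside the send buffer, so it can pass
   MPI_IN_PLACE as recvbuf to avoid overlapping send/receive buffers */
MPI_Scatterv(A_data, counts, displs, MPI_DOUBLE,
             (rank == 0) ? MPI_IN_PLACE : (void*)&matrix_A[low_bound][0],
             portion * N_COLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);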

OpenCL: Inserting local atomic_inc to reduction kernel

I am trying to include a local atomic, similar to the one described by DarkZeros here, within a working reduction kernel. The kernel finds the largest value within a set of points; the aim of the local atomic is to let me filter selected point_ids into an output array without any gaps.
At present, when I use the local atomic to increment the index of an addition to a local array, the kernel runs but produces a wrong overall highest point. If the atomic line is commented out, a correct result returns.
What is going on here, and how do I fix it?
Simplified kernel code:
__kernel void reduce(__global const float4* dataSet, __global const int* input, const unsigned int items, //points and index
                     __global int* output, __local float4* shared, const unsigned int n,                  //finding highest
                     __global int* filtered, __global const float2* tri_input, const unsigned int pass,   //finding filtered
                     __global int* global_count                                                           //global count
                     ){
    //set everything up
    const unsigned int group_id = get_global_id(0) / get_local_size(0);
    const unsigned int local_id = get_local_id(0);
    const unsigned int group_size = items;
    const unsigned int group_stride = 2 * group_size;
    const int local_stride = group_stride * group_size;
    __local float4 *zeroIt = &shared[local_id];
    zeroIt->x = 0; zeroIt->y = 0; zeroIt->z = 0; zeroIt->w = 0;
    volatile __local int local_count_set_1;
    volatile __local int global_val_set_1;
    volatile __local int filter_local[64];
    if(local_id==0){
        local_count_set_1 = 0;
        global_val_set_1 = -1;
    }
    barrier(CLK_LOCAL_MEM_FENCE);
    int i = group_id * group_stride + local_id;
    while (i < n){
        //load up a pair of points using the index to locate them within a massive dataSet
        int ia = input[i];
        float4 a = dataSet[ia-1];
        int ib = input[i + group_size];
        float4 b = dataSet[ib-1];
        //on the first pass kernel increment a local count
        if(pass == 0){
            filter_local[atomic_inc(&local_count_set_1)] = 1; //including this line causes an erroneous highest point result
            //filter_local[local_id] = 1;                     //but including this line does not
            //atomic_inc(&local_count_set_1);                 //and neither does this one
        }
        //find the highest of the pair
        float4 result;
        if(a.z>b.z) result = a;
        else result = b;
        //load up the previous highest result locally
        float4 s = shared[local_id];
        //if the previous highest beat this, stick, else twist
        if(s.z>result.z){ result = s; }
        shared[local_id] = result;
        i += local_stride;
    }
    barrier(CLK_LOCAL_MEM_FENCE);
    if (group_size >= 512){
        if (local_id < 256) {
            __local float4 *a = &shared[local_id];
            __local float4 *b = &shared[local_id+256];
            if(b->z>a->z){ shared[local_id] = shared[local_id+256]; }
        }}
    //repeat barrier ops in increments down to group_size>=2 - this filters the highest result in shared
    //finally, return the filtered highest result of shared to the global level
    barrier(CLK_LOCAL_MEM_FENCE);
    if(local_id == 0){
        __local float4 *v = &shared[0];
        int send = v->w;
        output[group_id] = send+1;
    }}
[UPDATE]: When the atomic_inc line is included, the 'wrong' highest point is always a point near the end of the test dataset. I'm guessing this means the atomic_inc is affecting a later comparison, but I'm not sure exactly what or where yet.
[UPDATE]: Edited the code to simplify/clarify/update with debugging tweaks. Still not working, and it is driving me loopy.
Total face-palm moment. In the setup phase of the kernel there are the lines:
if(local_id==0){
    local_count_set_1 = 0;
    global_val_set_1 = -1;
}
barrier(CLK_LOCAL_MEM_FENCE);
When these are split and the reset of local_count_set_1 is moved inside the while loop, the error does not occur, i.e.:
if(local_id==0) global_val_set_1 = -1;
barrier(CLK_LOCAL_MEM_FENCE);
while (i < n){
    if(local_id==0) local_count_set_1 = 0;
    barrier(CLK_LOCAL_MEM_FENCE);
    ....
    if(pass == 0){
        filter_local[atomic_inc(&local_count_set_1)] = 1;
    }
    ....
This makes sense: without the per-iteration reset, the counter keeps growing across loop iterations, so the atomic_inc index can run past the end of filter_local[64] and clobber neighbouring local memory (plausibly the shared[] array holding the running maxima), which would explain the wrong highest point coming from late in the dataset.
I'm hoping this fixes the issue // will update if not.
Aaaand that's a weekend I'll never get back.

Spellcheck program using MPI

So, my assignment is to write a spell check program and then parallelize it using OpenMPI. My approach is to load the words from a text file into an array called dict[], which serves as my dictionary. Next, I get input from the user, and the program is supposed to go through the dictionary array and check whether each word is within the threshold percentage; if it is, print it out. I'm only supposed to print a certain number of words. My problem is that my suggestions[] array doesn't fill up the way I need it to: it ends up with a lot of blank entries, whereas the way I wrote it, it should only be filled when a word is within the threshold, so there shouldn't be any blanks until no more words are being added. I think it's close to finished, but I can't figure this part out. Any help is appreciated.
#include <stdio.h>
#include <mpi.h>
#include <string.h>
#include <stdlib.h>
#define SIZE 30
#define max(x,y) (((x) > (y)) ? (x) : (y))

char *dict[50000];
char *suggestions[50000];
char enterWord[50];
char *myWord;
int wordsToPrint = 20;
int threshold = 40;
int i;
int words_added = 0;

int levenshtein(const char *word1, int len1, const char *word2, int len2){
    int matrix[len1 + 1][len2 + 1];
    int a;
    for(a = 0; a <= len1; a++){
        matrix[a][0] = a;
    }
    for(a = 0; a <= len2; a++){
        matrix[0][a] = a;
    }
    for(a = 1; a <= len1; a++){
        int j;
        char c1;
        c1 = word1[a-1];
        for(j = 1; j <= len2; j++){
            char c2;
            c2 = word2[j-1];
            if(c1 == c2){
                matrix[a][j] = matrix[a-1][j-1];
            }
            else{
                int delete, insert, substitute, minimum;
                delete = matrix[a-1][j] + 1;
                insert = matrix[a][j-1] + 1;
                substitute = matrix[a-1][j-1] + 1;
                minimum = delete;
                if(insert < minimum){
                    minimum = insert;
                }
                if(substitute < minimum){
                    minimum = substitute;
                }
                matrix[a][j] = minimum;
            }//else
        }//for
    }//for
    return matrix[len1][len2];
}//levenshtein

void prompt(){
    printf("Enter word to search for: \n");
    scanf("%s", enterWord);   /* enterWord is already a char array, no & needed */
}

int p0_compute_output(int num_processes, char *word1){
    int totalNumber = 0;
    int k = 0;
    int chunk = 50000 / num_processes;
    for(i = 0; i < chunk; i++){
        int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
        int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
        if(thresholdPercentage < threshold){
            suggestions[totalNumber] = dict[i];
            totalNumber = totalNumber + 1;
        }
    }//for
    return totalNumber;
}//p0_compute_output

void p0_receive_output(int next_addition){
    int num_to_add;
    MPI_Comm comm;
    MPI_Status status;
    MPI_Recv(&num_to_add, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("--%d\n", num_to_add);
    suggestions[next_addition] = dict[num_to_add];
    next_addition = next_addition + 1;
}

void compute_output(int num_processes, int me, char *word1){
    int chunk = 0;
    int last_chunk = 0;
    MPI_Comm comm;
    if(50000 % num_processes == 0){
        chunk = 50000 / num_processes;
        last_chunk = chunk;
        int start = me * chunk;
        int end = me * chunk + chunk;
        for(i = start; i < end; i++){
            int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
            int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
            if(thresholdPercentage < threshold){
                int number_to_send = i;
                MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
            }
        }
    }
    else{
        chunk = 50000 / num_processes;
        last_chunk = 50000 - ((num_processes - 1) * chunk);
        if(me != num_processes){
            int start = me * chunk;
            int end = me * chunk + chunk;
            for(i = start; i < end; i++){
                int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
                int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
                if(thresholdPercentage < threshold){
                    int number_to_send = i;
                    MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
                }//if
            }//for
        }//if me != num_processes
        else{
            int start = me * chunk;
            int end = 50000 - start;
            for(i = start; i < end; i++){
                int minedits = levenshtein(word1, strlen(word1), dict[i], strlen(dict[i]));
                int thresholdPercentage = (100 * minedits) / max(strlen(word1), strlen(dict[i]));
                if(thresholdPercentage < threshold){
                    int number_to_send = i;
                    MPI_Send(&number_to_send, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
                }
            }
        }//me == num_processes
    }//BIG else
    return;
}//COMPUTE OUTPUT

void set_data(){
    prompt();
    MPI_Bcast(&enterWord, 20, MPI_CHAR, 0, MPI_COMM_WORLD);
}//p0_send_input

//--------------------------MAIN-----------------------------//
int main(int argc, char **argv){
    int ierr, num_procs, my_id, loop;
    FILE *myFile;
    loop = 0;
    for(i = 0; i < 50000; i++){
        suggestions[i] = calloc(SIZE, sizeof(char));
    }
    ierr = MPI_Init(NULL, NULL);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
    printf("Check in from %d of %d processors\n", my_id, num_procs);
    set_data();
    myWord = enterWord;
    myFile = fopen("words", "r");
    if(myFile != NULL){
        for(i = 0; i < 50000; i++){
            dict[i] = calloc(SIZE, sizeof(char));
            fscanf(myFile, "%s", dict[i]);
        }//for
        fclose(myFile);
    }//read word list into dictionary
    else printf("File not found");
    if(my_id == 0){
        words_added = p0_compute_output(num_procs, enterWord);
        printf("words added so far: %d\n", words_added);
        p0_receive_output(words_added);
        printf("Threshold: %d\nWords To print: %d\n%s\n", threshold, wordsToPrint, myWord);
        ierr = MPI_Finalize();
    }
    else{
        printf("my word %s*\n", enterWord);
        compute_output(num_procs, my_id, enterWord);
        // printf("Process %d terminating...\n", my_id);
        ierr = MPI_Finalize();
    }
    for(i = 0; i < wordsToPrint; i++){
        printf("*%s\n", suggestions[i]);
    }//print suggestions
    return (0);
}//END MAIN
Here are a few problems I see with what you're doing:
- prompt() should only be called by rank 0.
- The dictionary file should be read only by rank 0, then the array broadcast out to the other ranks. Alternatively, have rank 1 read the file while rank 0 is waiting for input, and broadcast input and dictionary afterwards.
- You're making the compute_output step overly complex. You can merge p0_compute_output and compute_output into one routine:
  - Store an array of indices into dict in each rank.
  - This array will not be the same size in every rank, so the simplest way to handle it is to send from each rank a single integer indicating the size of the array, then send the array with this size (the receiving rank must know how much data to expect); a sketch of this pattern follows after this list. You could also use the sizes for MPI_Gatherv, but I expect that is more than you want to do right now.
  - Once you have a single array of indices in rank 0, use it to fill suggestions.
- Save the MPI_Finalize call until immediately before the return call.
- For the final printf, only rank 0 should be printing. I suspect this is causing a large part of the "incorrect" result: as you have it, all ranks print suggestions, but the array is only filled in rank 0, so the others will all be printing blank entries.
Try some of these changes, especially the last one, and see if that helps.
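For the size-then-data exchange suggested above, a sketch (the names matches and idx are illustrative, not from the original code):

/* worker ranks: collect matching dict indices, then send count followed by data */
int matches[50000];   /* worst case: every word in this rank's chunk matches */
int count = 0;
/* ... while scanning this rank's chunk: matches[count++] = i; ... */
MPI_Send(&count, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
MPI_Send(matches, count, MPI_INT, 0, 1, MPI_COMM_WORLD);

/* rank 0: receive each worker's count, then exactly that many indices */
for (int src = 1; src < num_procs; src++) {
    int n;
    MPI_Recv(&n, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    int *idx = malloc(n * sizeof(int));
    MPI_Recv(idx, n, MPI_INT, src, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    for (int k = 0; k < n; k++)
        suggestions[words_added++] = dict[idx[k]];
    free(idx);
}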
