Is the Fibonacci algorithm without recursion linear? - recursion

I know that the Fibonacci algorithm can be programmed without recursion like this:
int fibo(int n){
    if(n <= 1){
        return n;
    }
    int fibo = 1;      // fib(2)
    int fiboPrev = 1;  // fib(1)
    for(int i = 2; i < n; ++i){
        int temp = fibo;
        fibo += fiboPrev;   // advance to the next Fibonacci number
        fiboPrev = temp;
    }
    return fibo;
}
and also that the recursive Fibonacci has a complexity of approximately O(2^n); but from what I see, the non-recursive algorithm is O(n), so it seems way more efficient. Is my reasoning correct, or is there any hidden complexity in the non-recursive solution?

Evaluate the complexity of the implementation on its own. In this case, the complexity with respect to the input n is determined by the for loop, whose number of iterations is directly proportional to n. Therefore, the complexity is O(n) - linear.
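As a side note (not part of the original answer), the exponential cost of the naive recursive version comes from recomputing the same subproblems, not from recursion itself. Here is a minimal memoized sketch, with a hypothetical helper name fiboMemo, that also runs in O(n):
static int fiboMemo(int n, int[] memo) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];   // already computed
    memo[n] = fiboMemo(n - 1, memo) + fiboMemo(n - 2, memo);
    return memo[n];
}
// usage: fiboMemo(10, new int[11]) returns 55, with only O(n) recursive calls in total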

Related

Best way to calculate powers of 2 in Java

Given an integer n, find 2^n. Here are two methods I know:
Method 1
int a = 1;
for(int i = 0; i < n; ++i)
    a = a << 1;   // double a on each iteration, so a == 2^n after the loop
Method 2
int a = (int) Math.pow(2, n);   // Math.pow returns a double, so a cast is needed
Given how fast bitshifts are, I was wondering which method would be faster. Also, how does Math.pow() work and why do people generally say it is slow?
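For reference (not from the original thread), a single shift expression avoids the loop entirely; this is a minimal sketch assuming n is in the range 0..30 so the result fits in a positive int:
int a = 1 << n;                 // 2^n for 0 <= n <= 30; an int overflows beyond that
int b = (int) Math.pow(2, n);   // Math.pow works on doubles, which is part of why it is generally slower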

The efficiency of prime factoring algorithms

I'm trying to understand what the issue is with creating an efficient prime factorisation algorithm. Specifically, the research I've done so far says that no algorithm has yet been discovered which can find the prime factors of a number in O(n^2) time. However, the obvious algorithm to me is something like (pseudocode)
static int method(int number, ArrayList<Integer> listOfPrimes)
{
    int x = 0;
    for (int i : listOfPrimes)
    {
        for (int j : listOfPrimes)
        {
            if (i * j == number)   // comparison, not assignment
            {
                x = i * j;
            }
        }
    }
    return x;
}
I think this method is O(n^2), where n is the size of the list. Clearly my understanding of this issue is flawed, or there wouldn't be such a fuss about prime factorisation. Where am I going wrong?
As alluded to by @dmuir, your "n" is not the correct "n". Otherwise, a trivial O(n) algorithm to factor would be:
static int factor(int n) {
    for (int i = 2; i < n; i++) {
        if (n % i == 0) {
            System.out.println("found factor: " + i);
            return i;
        }
    }
    return n;   // no smaller factor found: n is prime
}
For factoring, the size of the input is measured in digits, so "n" should be the number of digits or bits in the number. The best known algorithms have a complicated complexity that requires some number theory to understand, but it is "greater than" polynomial time yet "less than" exponential time, where the quoted phrases can be made formal. For this reason the complexity is sometimes referred to as "subexponential".
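To make the bit-length point concrete, here is an illustrative sketch (not from the original answer): even trial division stopped at sqrt(n) takes about 2^(b/2) steps for a b-bit number, which is exponential in the size of the input even though it looks polynomial in the value.
// Trial division up to sqrt(n): roughly 2^(b/2) iterations for a b-bit number n
static int smallestFactor(int n) {
    for (int i = 2; (long) i * i <= n; i++) {
        if (n % i == 0) return i;
    }
    return n;   // no factor found: n is prime
}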
A more efficient way would be
static int method(int number, ArrayList<Integer> listOfPrimes)
{
    int x = 0;
    for (int i : listOfPrimes)
    {
        // divide out each prime as many times as it divides the number
        while (number % i == 0) {
            number /= i;
            x++;
        }
    }
    return x;
}
This returns the count of prime factors of the number (with multiplicity). The complexity would be
O(n + number_of_prime_factors)
where n is the length of listOfPrimes.
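An illustrative call (the prime list here is just an example, and the snippet assumes the usual java.util imports):
// 360 = 2 * 2 * 2 * 3 * 3 * 5, so this returns 6
int count = method(360, new ArrayList<>(Arrays.asList(2, 3, 5, 7)));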

Fibonacci series using alternate approach is not working

I have written a simple Fibonacci series using recursion, as below. The program is based on the formula fib(n) = fib(n-1) + fib(n-2).
Can we write a program that takes a value of n and computes the Fibonacci series using the formula fib(n+2) = fib(n) + fib(n+1)?
public class FibonacciClass{
    public static void main(String[] argv){
        for (int index=0; index < 7; index++){
            System.out.println("The Fibonacci series for the number "+index+" is " + fib(index));
        }
    }
    private static int fib(int n){
        if (n == 0) return 0;
        if (n <= 2) return 1;
        return (fib(n-1) + fib(n-2));
    }
}
If we can solve the Fibonacci series using this recursion, please let me know how you would write the program for it.
Hmm, this sounds like you're trying to get an answer to a homework problem, but it looks like you have legitimate reputation, so:
Define
gib(n) = fib(n+2).
Use this to substitute for fib(n) and fib(n+1):
gib(n-2) = fib((n-2)+2) = fib(n)
gib(n-1) = fib((n-1)+2) = fib(n+1)
So the original equation becomes
fib(n+2)= fib(n)+fib(n+1) --> gib(n) = gib(n-2) + gib(n-1)
And we can recurse on this. We must make similar substitutions (n for n+2) in the code:
static unsigned int gib(int n)
{
    if (n <= -2) return 0;   // gib(-2) = fib(0) = 0
    if (n == -1) return 1;   // gib(-1) = fib(1) = 1
    return gib(n - 2) + gib(n - 1);
}
I didn't handle negative inputs that produce negative Fibonacci numbers (your code breaks on them too), which is why it really should return "unsigned int". To modify it for negatives, see here.
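A quick sanity check of the index shift (illustrative only, assuming gib is ported to plain Java with an int return type): gib(n) should always equal fib(n + 2).
// gib(-2) == fib(0) == 0,  gib(-1) == fib(1) == 1,  gib(3) == fib(5) == 5
for (int n = -2; n <= 5; n++) {
    System.out.println("gib(" + n + ") = " + gib(n) + ", fib(" + (n + 2) + ") = " + fib(n + 2));
}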

Parallelizing recursive function through MPI?

Can we parallelize a recursive function using MPI?
I am trying to parallelize the quicksort function, but I don't know whether it works with MPI because it is recursive. I also want to know where I should put the parallel region.
// quickSort.c
#include <stdio.h>

void quickSort(int[], int, int);
int partition(int[], int, int);

int main(void)
{
    int a[] = { 7, 12, 1, -2, 0, 15, 4, 11, 9 };
    int i;
    printf("\n\nUnsorted array is: ");
    for(i = 0; i < 9; ++i)
        printf(" %d ", a[i]);
    quickSort(a, 0, 8);
    printf("\n\nSorted array is: ");
    for(i = 0; i < 9; ++i)
        printf(" %d ", a[i]);
    return 0;
}

void quickSort(int a[], int l, int r)
{
    int j;
    if( l < r )
    {
        // divide and conquer
        j = partition(a, l, r);
        quickSort(a, l, j-1);
        quickSort(a, j+1, r);
    }
}

int partition(int a[], int l, int r)
{
    int pivot, i, j, t;
    pivot = a[l];
    i = l; j = r+1;
    while(1)
    {
        do ++i; while( i <= r && a[i] <= pivot );   // check the bound before reading a[i]
        do --j; while( a[j] > pivot );
        if( i >= j ) break;
        t = a[i]; a[i] = a[j]; a[j] = t;
    }
    // put the pivot into its final position
    t = a[l]; a[l] = a[j]; a[j] = t;
    return j;
}
I would also really appreciate it if there is a simpler implementation of quicksort.
Well, technically you can, but I'm afraid this would be efficient only on an SMP system. And does the array fit on a single node? If not, then you cannot even perform the first pass of a quicksort.
If you really need to sort an array on a parallel system using MPI, you might want to consider using merge sort instead (of course, you can still use quicksort for the individual blocks at each node before you begin merging the blocks).
If you still want to use quicksort but are confused by the recursive version, here is a sketch of a non-recursive algorithm which can hopefully be parallelized a bit more easily, although it is essentially the same:
#include <stack>
#include <utility>

std::stack<std::pair<int, int> > unsorted;
unsorted.push(std::make_pair(0, size-1));
while (!unsorted.empty()) {
    std::pair<int, int> u = unsorted.top();
    unsorted.pop();
    int m = partition(A, u.first, u.second);
    // here you can send one of the intervals to another node instead of
    // pushing it onto the stack, so it would be processed in parallel
    if (m+1 < u.second) unsorted.push(std::make_pair(m+1, u.second));
    if (u.first < m-1) unsorted.push(std::make_pair(u.first, m-1));
}
Theoretically "anything" can be parallelized using MPI, but remember that MPI isn't doing any parallelization itself. It's just providing the communication layer between processes. As long as all of your sends and receives (or collective calls) match up, it's a correct program for the most part. That being said, it may not be the most efficient thing to use MPI, depending on your algorithm. If you are going to be sorting lots and lots of data (more than can fit in the memory of one node) then it could be efficient to use MPI (you probably want to take a look at the RMA chapter in that case) or some other higher level library that might make things even simpler for this type of application (UPC, Co-array Fortran, SHMEM, etc.).

Time complexity (in big-O notation) of the following recursive code?

What is the Big-O time complexity ( O ) of the following recursive code?
public static int abc(int n) {
    if (n <= 2) {
        return n;
    }
    int sum = 0;
    for (int j = 1; j < n; j *= 2) {
        sum += j;
    }
    for (int k = n; k > 1; k /= 2) {
        sum += k;
    }
    return abc(n - 1) + sum;
}
My answer is O(n log(n)). Is it correct?
From where I'm sitting, I think the runtime is O(n log n). Here's why.
You are making n calls to the function, and the work inside each call depends on n: the two loops together run about 2 * log2(n) iterations to increment the sum.
So you have n calls, each doing O(log n) work, which gives O(n log n) overall. In the worst case n is extremely large, but the growth rate doesn't change. The best case is n <= 2, where only one operation is done (the looping would not occur).
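To see the bound concretely (this is an illustrative sketch, not part of the original answer), the recurrence is T(n) = T(n-1) + O(log n), which sums to O(n log n). Adding a simple counter to the function makes the growth visible:
static long ops = 0;   // counts loop iterations across all recursive calls

static int abcCounted(int n) {
    if (n <= 2) return n;
    int sum = 0;
    for (int j = 1; j < n; j *= 2) { sum += j; ops++; }
    for (int k = n; k > 1; k /= 2) { sum += k; ops++; }
    return abcCounted(n - 1) + sum;
}
// ops grows roughly like 2 * n * log2(n); doubling n slightly more than doubles the count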
