How do I calculate the time complexity of this recursive function, which either divides the input by three, or divides it by three and then adds the input value? - recursion

I am having difficulties determining the time complexity of the code below:
int func(int n) { // n > 0
    if (n < 2) {
        return 1;
    } else if (n % 2 == 0) {
        return func(n / 3);
    } else {
        return func(n / 3) + n;
    }
}
I have attempted to approach this question using the Master Theorem, so I have tried to break it down into:
n = size of input
a = number of sub-problems in the recursion = 3
n/b = size of each sub-problem
f(n) = cost of the work done outside the recursive call
However, I am struggling to understand how to determine the size of each sub-problem and f(n), the cost of the work done outside the recursive call. At the moment I am just assuming that we take the greater time complexity of the if/else branches, so the time complexity would be O(log n).
Also, does the '+ n' in the else statement affect the time complexity of this function?
Any help to understand this would be greatly appreciated!
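A sketch of how the pieces can line up, assuming the code above: each invocation makes only one recursive call (whichever branch is taken), on an input one third the size, and the remaining work (the comparison, the modulo test, the single '+ n' addition) is constant time, so

T(n) = T(n/3) + O(1)

That is, a = 1, b = 3, f(n) = O(1). Since n^(log_3 1) = n^0 = 1 matches f(n), the Master Theorem gives T(n) = Θ(log n). In particular, the '+ n' is one addition per call, not n extra units of work, so it does not change the growth rate.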

Related

How much different is dynamic programming from recursion

Mostly I have heard that if you can write recursive code, you can convert it to dynamic programming code, but why would you need to do that? And how do you convert recursive code to DP?
In dynamic programming there are 2 approaches, top-down and bottom-up.
Let's take the Fibonacci sequence as an example:
f(0) = 0
f(1) = 1
f(x) = f(x-1) + f(x-2) for x > 1
The top-down approach:
It uses recursion + memoization (storing the already-calculated states to avoid recalculation):
int memo[1000]; // initialized to zeroes
int f(int x) {
    if (x == 0 || x == 1) return x; // base cases: f(0) = 0, f(1) = 1
    if (memo[x] != 0) return memo[x]; // avoid recalculating a known state
    memo[x] = f(x - 1) + f(x - 2); // store the result
    return memo[x];
}
As you can see, to calculate the value f(x) we have to break it down into f(x-1) and f(x-2); this is why it is called top-down.
The bottom-up approach:
It uses loops (for, while, ...) rather than recursion and stores the values inside an array:
int memo[1000];
int bottom_up(int x) {
    memo[0] = 0; // f(0) = 0
    memo[1] = 1; // f(1) = 1
    for (int i = 2; i <= x; i++)
        memo[i] = memo[i - 1] + memo[i - 2];
    return memo[x];
}
As you can see, we calculate the values of the Fibonacci sequence starting from the smaller values up to the bigger ones, which is why it is called bottom-up.
Converting the code from recursion to loops is what is meant by converting recursive code into iterative code.
Recursive code calls itself many times, and each of those calls gets its own frame on the call stack, so the iterative approach is usually preferred: it is better for both memory use and performance.
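As a quick sanity check of the two versions above (with the base cases f(0) = 0 and f(1) = 1), both should agree; for example:

f(10)         // == 55
bottom_up(10) // == 55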

complexity on recursive Big-O

I have a Computer Science midterm tomorrow and I need help determining the complexity of these recursive functions. I know how to solve simple cases, but I am still learning how to solve these harder ones. Any help would be much appreciated and would greatly help my studies. Thank you!
function F(n)
    if n == 0
        return 1
    else
        return F(n-1) * n

function UniqueElements(A[0..n-1])
    for i = 0 to n-2 do
        for j = i+1 to n-1 do
            if A[i] == A[j]
                return false
    return true

function BinRec(n)
    if n == 1
        return 1
    else
        return BinRec(floor(n/2)) + 1
For hands-on learning, you can plug the functions into a program and test their worst-case performance.
When trying to calculate O by hand, here are some things to remember:
Constant offsets and factors introduced by +, -, *, and / can be ignored, so 1 to n+5 and 1 to 5n are both considered equivalent to 1 to n.
Also, only the highest order of growth counts: in O(2^n + n^2 + n), the 2^n term grows the fastest, so the whole expression is equivalent to O(2^n).
With recursive functions, look at how many recursive calls each invocation makes (the split count) and how deep the recursion goes (the depth, often proportional to the input size). When there is branching, the total work is roughly split_count^depth; with a single recursive call per invocation, it is simply proportional to the depth.
With loops, each nested loop multiplies with the one it's in, and sequential loops add. So a 1-to-n loop containing another 1-to-n loop, followed by a separate 1-to-n loop, is (n * n) + n = n^2 + n, and keeping only the highest-growth term gives n^2.
PRACTICE! You will need to practice to get the hang of the gotchas of growth rates and how control flow interacts (so do online practice quizzes). Here are the three functions, instrumented with a call/comparison counter you can run:
function F(n) {
    count++ // count each call
    if (n == 0)
        return 1
    else
        return F(n-1) * n
}
function UniqueElements(A) {
    for (var i = 0; i <= A.length-2; i++) {
        for (var j = i+1; j <= A.length-1; j++) {
            count++ // count each comparison
            if (A[i] == A[j]) {
                return false
            }
        }
    }
    return true
}
function BinRec(n) {
    count++ // count each call
    if (n == 1)
        return 1
    else
        return BinRec(Math.floor(n/2)) + 1
}
var count = 0;
console.log(F(10));
console.log(count);
count = 0;
console.log(UniqueElements([1,2,3,5]));
console.log(count);
count = 0;
console.log(BinRec(40));
console.log(count);
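Running the snippet above (with the counters placed as shown), the counts line up with the analysis: F(10) prints 3628800 and then a count of 11, one call per decrement, so O(n); UniqueElements([1,2,3,5]) prints true and a count of 6 comparisons, one per pair, about n(n-1)/2, so O(n^2); and BinRec(40) prints 6 and a count of 6, one call per halving, so O(log n).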

Dynamic programming problems using iteration

I have spent a lot of time learning to implement/visualize dynamic programming problems using iteration, but I find it very hard to understand: I can implement the same thing using recursion with memoization, but it is slow compared to iteration.
Can someone explain it using an example of a hard problem or some basic concepts? For example matrix chain multiplication, longest palindromic subsequence, and others. I can understand the recursive process and then memoize the overlapping sub-problems for efficiency, but I can't understand how to do the same using iteration.
Thanks!
Dynamic programming is all about solving the sub-problems in order to solve the bigger one. The difference between the recursive approach and the iterative approach is that the former is top-down and the latter is bottom-up. In other words, using recursion, you start from the big problem you are trying to solve and chop it into slightly smaller sub-problems, repeating the process until you reach sub-problems small enough to solve directly. This has the advantage that you only solve the sub-problems that are actually needed, using memoization to remember the results as you go. The bottom-up approach first solves all the sub-problems, using tabulation to remember the results. As long as it doesn't do extra work solving sub-problems that are never needed, it is generally the better approach.
For a simpler example, let's look at the Fibonacci sequence. Say we'd like to compute F(101). When doing it recursively, we will start with our big problem - F(101). For that, we notice that we need to compute F(99) and F(100). Then, for F(99) we need F(97) and F(98). We continue until we reach the smallest solvable sub-problem, which is F(1), and memoize the results. When doing it iteratively, we start from the smallest sub-problem, F(1) and continue all the way up, keeping the results in a table (so essentially it's just a simple for loop from 1 to 101 in this case).
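A minimal sketch of that loop (in C++, to match the code below; note that a 64-bit integer overflows past F(92), so actually computing F(101) would need a big-integer type; the shape of the loop is the point here):

#include <vector>

long long fib(int n) {
    std::vector<long long> f(n + 1, 0); // table of results; f[0] = 0
    if (n > 0) f[1] = 1;                // base cases
    for (int i = 2; i <= n; ++i)        // smallest sub-problems first
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}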
Let's take a look at the matrix chain multiplication problem, which you requested. We'll start with a naive recursive implementation, then recursive DP, and finally iterative DP. It's going to be implemented in a C/C++ soup, but you should be able to follow along even if you are not very familiar with them.
#include <limits>

/* Solve the problem recursively (naive)
   p - matrix dimensions
   n - size of p
   i..j - state (sub-problem): range of parenthesis */
int solve_rn(int p[], int n, int i, int j) {
    // A matrix multiplied by itself needs no operations
    if (i == j) return 0;
    // A minimal solution for this sub-problem; we
    // initialize it with the maximal possible value
    int min = std::numeric_limits<int>::max();
    // Recursively solve all the sub-problems
    for (int k = i; k < j; ++k) {
        int tmp = solve_rn(p, n, i, k) + solve_rn(p, n, k + 1, j) + p[i - 1] * p[k] * p[j];
        if (tmp < min) min = tmp;
    }
    // Return solution for this sub-problem
    return min;
}
To compute the result, we start with the big problem:
solve_rn(p, n, 1, n - 1)
The key to DP is to remember all the solutions to the sub-problems instead of forgetting them (the naive version above re-solves the same i..j states over and over, exponentially many times), so we don't need to recompute them. It's trivial to make a few adjustments to the above code to achieve that:
/* Solve the problem recursively (DP)
   p - matrix dimensions
   n - size of p
   i..j - state (sub-problem): range of parenthesis */
const int MAXN = 100;

int solve_r(int p[], int n, int i, int j) {
    /* We need to remember the results for state i..j.
       This can be done in a matrix, which we call dp,
       such that dp[i][j] is the best solution for the
       state i..j. We initialize everything to 0 first.
       The static keyword here is just a C/C++ thing for keeping
       the matrix between function calls; you can also either
       make it global or pass it as a parameter each time.
       MAXN is here because an array declared like this must
       have a constant size in C/C++. I set it to 100 above,
       but you can do it some other way if you don't like it. */
    static int dp[MAXN][MAXN] = {{0}};
    /* A matrix multiplied by itself takes 0 operations, so we
       can just return 0. Also, if we already computed the result
       for this state, just return that. */
    if (i == j) return 0;
    else if (dp[i][j] != 0) return dp[i][j];
    // A minimal solution for this sub-problem; we
    // initialize it with the maximal possible value
    dp[i][j] = std::numeric_limits<int>::max();
    // Recursively solve all the sub-problems
    for (int k = i; k < j; ++k) {
        int tmp = solve_r(p, n, i, k) + solve_r(p, n, k + 1, j) + p[i - 1] * p[k] * p[j];
        if (tmp < dp[i][j]) dp[i][j] = tmp;
    }
    // Return solution for this sub-problem
    return dp[i][j];
}
We start with the big problem as well:
solve_r(p, n, 1, n - 1)
Since each state is now computed at most once, this runs in O(n^3) time using O(n^2) memory.
The iterative solution simply iterates over all the states, instead of starting from the top:
/* Solve the problem iteratively
   p - matrix dimensions
   n - size of p
   We don't need to pass state, because we iterate the states. */
int solve_i(int p[], int n) {
    // But we do need our table, just like before
    static int dp[MAXN][MAXN];
    // Multiplying a matrix by itself needs no operations
    for (int i = 1; i < n; ++i)
        dp[i][i] = 0;
    // L represents the length of the chain. We go from smallest to
    // biggest. L is capitalized to distinguish the letter l from the number 1.
    for (int L = 2; L < n; ++L) {
        // This double loop goes through all the states of the current
        // chain length. (There are n - 1 matrices, so the last valid
        // state is i..n-1, hence the bound i <= n - L.)
        for (int i = 1; i <= n - L; ++i) {
            int j = i + L - 1;
            dp[i][j] = std::numeric_limits<int>::max();
            for (int k = i; k <= j - 1; ++k) {
                int tmp = dp[i][k] + dp[k+1][j] + p[i-1] * p[k] * p[j];
                if (tmp < dp[i][j])
                    dp[i][j] = tmp;
            }
        }
    }
    // Return the result of the biggest problem
    return dp[1][n-1];
}
To compute the result, just call it:
solve_i(p, n)
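For example, with hypothetical dimensions, say three matrices A (10x30), B (30x5), and C (5x60), p has n = 4 entries:

int p[] = {10, 30, 5, 60};
int best = solve_i(p, 4); // best == 4500: (A x B) x C costs 10*30*5 + 10*5*60 = 4500

The other order, A x (B x C), would cost 30*5*60 + 10*30*60 = 27000, so the table correctly picks the cheaper parenthesization.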
Explanation of the loop counters in the last example:
Let's say we need to optimize the multiplication of 4 matrices: A B C D. We are doing an iterative approach, so we will first compute the chains with the length of two: (A B) C D, A (B C) D, and A B (C D). Then the chains of three: (A B C) D, and A (B C D). Finally, the full chain of four is combined from those results. That is what L, i and j are for.
L represents the chain length. It goes from 2 up to the total number of matrices (4 in this case); in the code this is the loop for (int L = 2; L < n; ++L), since n is the size of p, which is one more than the number of matrices.
i and j represent the starting and ending position of the chain. In case L = 2, i goes from 1 to 3, and j goes from 2 to 4:
(A B) C D    A (B C) D    A B (C D)
 ^ ^            ^ ^            ^ ^
 i j            i j            i j
In case L = 3, i goes from 1 to 2, and j goes from 3 to 4:
(A B C) D    A (B C D)
 ^   ^          ^   ^
 i   j          i   j
So generally, i goes from 1 to m - L + 1, and j is i + L - 1, where m is the number of matrices (in the code, m = n - 1, which is why the loop condition is i <= n - L).
Now, let's continue with the algorithm, assuming that we are at the step where we have (A B C) D. We now need to take into account the sub-problems, which are already calculated: ((A B) C) D and (A (B C)) D. That is what k is for: it goes through all the split positions between i and j and combines the already-computed sub-problems.
I hope I helped.
The problem with recursion is the high number of stack frames that need to be pushed and popped. This can quickly become the bottleneck.
The Fibonacci series can be calculated with iterative DP or with recursion plus memoization. If we calculate F(100) with iterative DP, all we need is an array of roughly 100 ints, e.g. int[101], and that is essentially all the memory we use. We fill in the entries of the array, pre-filling f[0] and f[1] with the base values, and each later value just depends on the previous two.
If we use a recursive solution, we start at fib(100) and work down. Every method call from 100 down to 0 is pushed onto the stack AND checked against the memo. These operations add up, and iteration suffers from neither of them: in the bottom-up version we already know all of the previous answers are valid. The bigger impact is probably the stack frames, and given a large enough input you may get a StackOverflowException for something that was trivial with an iterative DP approach.

How to calculate the complexity of time and memory

I have the following code
public int X(int n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    else
        return (X(n - 1) + X(n - 2));
}
I want to calculate the time and memory complexity of this code.
My code starts with a constant-time check, if (n == 0) return 0;, so that part takes some constant time, call it c. So each call costs either c, or c plus the cost of the two recursive calls, which is the part I can't calculate.
Can anyone help me in this?
To calculate the value of X(n), you calculate X(n-1) and X(n-2).
So the cost satisfies T(n) = T(n-1) + T(n-2), with
T(0) = 1
T(1) = 1
which is exponential: O(2^n).
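A quick sketch of why this is exponential: since T(n-1) >= T(n-2), we have

T(n) = T(n-1) + T(n-2) >= 2 T(n-2) >= 4 T(n-4) >= ... >= 2^(n/2)

so T(n) grows at least as fast as (sqrt 2)^n, while O(2^n) is the easy matching upper bound, since each call spawns at most two more.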
If you want detailed proof of how it will be O(2^n), check here.
Space complexity is linear.
(To be precise: counting the stack space taken by the recursion, it is O(n), because the chain of pending calls is at most n deep.)

Recursion optimization?

Why does the recursive Fibonacci procedure take so long to run?
This is in OCaml:
let rec fib n = if n<2 then n else fib (n-1) + fib (n-2);;
This is in Mathematica:
Fib[n_] := If[n < 2, n, Fib[n - 1] + Fib[n - 2]]
This is in Java:
public static BigInteger fib(long n) {
    if (n < 2) {
        return BigInteger.valueOf(n);
    } else {
        return fib(n - 1).add(fib(n - 2));
    }
}
For n = 100 it runs for a very long time because, I guess, it traverses a call tree with on the order of 2^100 nodes.
Yet there are only 100 numbers to generate, so it could get by with just 100 memory cells and about 100 addition steps.
So the execution could be optimized.
What is this problem about, and how is it solved? Since the solution is not implemented in Mathematica, perhaps it doesn't exist. What research has been done on this matter?
This is a classic example used to show the value of memoization. So that's one approach to making it go faster.
(If you just want to calculate Fibonacci numbers quickly, it is of course extremely easy to rewrite the function to get the answer very fast: start from 0 and work up to n, passing the previous 2 Fibonacci numbers along each time.)
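For instance, a linear-time version along those lines might look like this (a sketch in C++ to match the other examples in this thread; fib_iter is a hypothetical name, and 64-bit integers overflow past fib(92), so n = 100 would still need a big-integer type such as the Java BigInteger above):

long long fib_iter(int n) {
    long long a = 0, b = 1; // invariant: a = fib(i), b = fib(i + 1)
    for (int i = 0; i < n; ++i) {
        long long next = a + b; // slide the window forward
        a = b;
        b = next;
    }
    return a; // a = fib(n)
}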
I think the way to go is memoization, as in the answer by JeffreyScofield.
Define:
Fib2[n_] := Fib2[n] = If[n < 2, n, Fib2[n - 1] + Fib2[n - 2]]
Check:
Fib[30] // AbsoluteTiming
(* {9.202920, 832040} *)
Fib2[30] // AbsoluteTiming
(* {0., 832040} *)
Fib2[100] // AbsoluteTiming
(* {0.001000, 354224848179261915075} *)
A Fibonacci computation, even for n = 100, should not take much time to run, provided each value is computed only once: an iterative (or memoized) version executes in O(n) time, because each value is just the sum of the previous two, computed in constant time. The naive recursive version, however, recomputes the same values exponentially many times, which is why it blows up. Approximately how long does it take to compute?
