What's the time complexity of the following two functions?
int fun1(int n) {
    if (n > 0)
        return (2 * fun1(n - 1) + 1);
    else
        return 1;
}
int fun2(int n) {
    if (n > 0)
        return (fun2(n - 1) + fun2(n - 1) + 1);
    else
        return 1;
}
Obviously for fun2 we write the recurrence as T(n) = 2*T(n-1) + 1, but how do we write the recurrence for fun1?
Just from a quick look at the code (I may be wrong): fun1 has O(n) time complexity (linear), while fun2 has O(2^n) time complexity (exponential).
When you imagine the levels of recursion, each depth level doubles the number of recursive calls. So, for n == 10, there is one call of fun2(10), then 2 calls of fun2(9), 4 calls of fun2(8), 8 calls of fun2(7), 16 for 6, 32 for 5, 64 for 4, 128 for 3, 256 for 2, 512 for 1, and 1024 calls of fun2(0). The last ones just return 1.
This is a nice example that you should always think twice when implementing functions like that using recursion. A simple fix (the 2*fun2(n-1) instead of fun2(n-1) + fun2(n-1)) makes it O(n).
This also explains why Fibonacci numbers should not be implemented using naive recursion. Frankly, a simple loop without any recursion is much better in that case.
So, the equation for calculating the time complexity should contain 2^something + something. ;)
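For instance, a minimal Python sketch that just counts the calls each variant makes (the helper names count_fun1 and count_fun2 are mine, purely for illustration):

def count_fun1(n):
    # fun1 style: one recursive call per level -> n + 1 calls in total
    if n > 0:
        return 1 + count_fun1(n - 1)
    return 1

def count_fun2(n):
    # fun2 style: two recursive calls per level -> 2^(n+1) - 1 calls in total
    if n > 0:
        return 1 + count_fun2(n - 1) + count_fun2(n - 1)
    return 1

print(count_fun1(10))   # 11
print(count_fun2(10))   # 2047 = 1 + 2 + 4 + ... + 1024, as counted above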
You're correct about fun2.
For fun1, think about your usual rules of math, disregard time complexity.
2*fun1(n-1) = fun1(n-1) + fun1(n-1), unless the rules of multiplication can be redefined, such as in modern analysis (I believe that's the branch of mathematics it's taught in; it's been a while since I was in that class :) )
So with the distributive rule, fun1 is effectively the same as fun2, thus having the same time complexity.
# The base case basically draws a segment.
import turtle

def fractal(order, length):
    if order == 0:
        turtle.forward(length)
    else:
        l = length / 3
        fractal(order - 1, l)
        turtle.left(60)
        fractal(order - 1, l)
        turtle.right(120)
        fractal(order - 1, l)
        turtle.left(60)
        fractal(order - 1, l)

def snowflake(order, length):
    fractal(order, length)
    turtle.right(120)
    fractal(order, length)
    turtle.right(120)
    fractal(order, length)

turtle.speed(0)   # set the drawing speed before drawing so it takes effect
snowflake(3, 300)
turtle.done()
This is a recursive function that traces a fractal-shaped snowflake.
The complexity depends on order.
However, I can't figure it out when there are so many recursive calls happening at every order.
Although the function might look complicated, it is worth noting that the execution of fractal only depends on order. So complexity-wise, it can be reduced to just:
def fractal(order):
    if order == 0:
        pass  # do O(1) work
    else:
        fractal(order - 1)
        fractal(order - 1)
        fractal(order - 1)
        fractal(order - 1)
i.e. 4 recursive calls with order - 1 (the original fractal makes four recursive calls per level); the time complexity recurrence is then very simple:
T(n) = 4 * T(n - 1) + O(1)
T(0) = O(1)
which easily works out to be O(4^n).
snowflake has 3 identical calls to fractal, so it is also O(4^n).
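If you want to sanity-check that bound, a small counting sketch (my own helper, not part of the original code) mirrors the reduced structure:

def count_calls(order):
    # Mirrors fractal(): 4 recursive calls per level, O(1) work at order 0.
    if order == 0:
        return 1
    return 1 + 4 * count_calls(order - 1)

for order in range(5):
    print(order, count_calls(order))
# Prints 1, 5, 21, 85, 341, i.e. (4^(order+1) - 1) / 3, which is Theta(4^order).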
int recursiveFunc(int n) {
    if (n == 1) return 0;
    for (int i = 2; i < n; i++)
        if (n % i == 0) return i + recursiveFunc(n / i);
    return n;
}
I know that Complexity = (length of the tree from the root node to a leaf node) * (number of leaf nodes), but I'm having a hard time coming up with an equation.
This one is tricky, because the runtime is highly dependent on what number you provide in as input in a way that most recursive functions are not.
For starters, notice that the way that this recursion works, it takes in a number and then either
returns without making any further calls if the number is prime, or
finds the smallest proper factor of the number and recursively calls itself on the number divided by that factor.
This means that in one case, the function, called on a number n, will do Θ(n) work and make no calls (which happens if the number is prime), and in the other case will do Θ(d) work and then make a recursive call on the number n / d, which happens if n is composite and d is its smallest proper divisor.
One useful fact we'll use to analyze this function is that given a composite number n, the smallest proper factor d of n is never any greater than √n. If it were, then we would have n = df for some other factor f, and since d is the smallest proper divisor, we'd have f ≥ d > √n, so df > √n · √n = n, which would be impossible.
With that in mind, we can argue that the worst-case runtime of this function is O(n), and in fact that happens when n is prime. Here's how to see this. Imagine the worst-case amount of time this function can take if it ends up making a recursive call. In that case, the function will do at most Θ(√n) work (let's assume the smallest divisor is as large as possible), then recursively call itself on a number of size at most n / 2 (the absolute largest number we could get as the recursive argument). Under the pessimistic assumption that we do the maximum work possible at every level, we'd get this recurrence relation:
T(n) = T(n / 2) + √n
This solves, by the Master Theorem, to Θ(√n), which is less work than what we'd do if we had a prime number as an input.
But what happens if, instead, we do the maximum amount of work possible for some number of iterations (which, by the fact above, means the divisor found at each step is about the square root of the current number), and then end up with a prime number and stop? In that case, using the iteration method, we'd see that the work done would be
n^(1/2) + n^(1/4) + ... + n^(1/2^k),
which would happen if we stopped after k iterations. In this case, notice that this expression is maximized when we pick k to be as small as possible - which would correspond to stopping as soon as possible, which happens if we pick a prime number for n.
So in this sense, the worst-case runtime of this function is Θ(n), which happens for n being a prime number, with composite numbers terminating much faster than this.
So how fast can this function be? Well, imagine, for example, that we have a number of the form p^k, where p is some prime number. In that case, this function will do Θ(p) work to discover p as a prime factor, then recursively call itself on the number p^(k-1). If you think about what this will look like, this function will end up doing Θ(p) work Θ(k) times for a total runtime of Θ(p · k). And since n = p^k, we'd have k = log_p n, so the runtime would be Θ(p log_p n). That's minimized at either p = 2 or p = 3, and in either case gives us a runtime of Θ(log n) in this case.
I strongly suspect that's the best case here, though I'm not entirely sure. But what this does mean is that
the worst-case runtime is definitely Θ(n), occurring at prime numbers, and
the best-case runtime is O(log n), which I'm fairly certain is a tight bound, but I'm not 100% sure how to prove it.
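One way to see these two regimes empirically is to translate the function to Python and count loop iterations (the counter is my own instrumentation, not part of the original function):

def recursive_func(n, counter):
    # Same logic as the C function; counter[0] tracks loop iterations.
    if n == 1:
        return 0
    for i in range(2, n):
        counter[0] += 1
        if n % i == 0:
            return i + recursive_func(n // i, counter)
    return n

for n in (1009, 1024):          # 1009 is prime, 1024 = 2^10
    c = [0]
    recursive_func(n, c)
    print(n, c[0])
# The prime input costs about n iterations; the power of two costs only about log2(n).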
I'm trying to figure out an equation. It is f(n) = f(n-1) + 3n^2 - n. I also have the values to use for f(1), f(2), f(3). How would I go about solving this?
You would usually use recursion but, whether you do that or an iterative solution, you're missing (or simply haven't shown us) a vital piece of information: the terminating condition, such as f(1) = 1.
With that extra piece of information, you could code up a recursive solution relatively easily, such as the following pseudo-code:
define f(n):
    if n == 1:
        return 1
    return f(n-1) + (3 * n * n) - n
As an aside, that's not actually Fibonacci, which is the specific 1, 1, 2, 3, 5, 8, 13, ... sequence.
It can be said to be Fibonacci-like but it's actually more efficient to do this one recursively since it only involves one self-referential call per level whereas Fibonacci needs two:
define f(n):
    if n <= 2:
        return 1
    return f(n-2) + f(n-1)
And if you're one of those paranoid types who doesn't like recursion (and I'll admit freely it can have its problems in the real world of limited stack depths), you could opt for the iterative version.
define f(n):
    if n == 1:
        return 1
    parent = 1
    for num = 2 to n inclusive:
        result = parent + (3 * num * num) - num
        parent = result
    return result
If you ask this question on a programming site such as Stack Overflow, you can expect to get code as an answer.
On the other hand, if you are looking for a closed formula for f(n), then you should direct your question to a specialised StackExchange site such as Computer Science.
Note: what you are looking for is called the repertoire method. It can be used to solve your problem (the closed formula is very simple).
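For instance, assuming a base case of f(1) = 1 (your actual base case may differ), summing the recurrence term by term already gives a closed formula:

f(n) = f(1) + sum_{k=2..n} (3k^2 - k)
     = f(1) + 3*(n(n+1)(2n+1)/6 - 1) - (n(n+1)/2 - 1)
     = n^2 * (n+1) + f(1) - 2

A quick check: with f(1) = 1 this gives f(2) = 11 and f(3) = 35, matching the recurrence.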
I was asked to use dynamic programming to solve a problem. I have mixed notes on what constitutes dynamic programming. I believe it requires a "bottom-up" approach, where smallest problems are solved first.
One thing I have contradicting information on, is whether something can be dynamic programming if the same subproblems are solved more than once, as is often the case in recursion.
For instance, for Fibonacci, I can have a recursive algorithm:
RecursiveFibonacci(n)
    if (n = 1 or n = 2)
        return 1
    else
        return RecursiveFibonacci(n-1) + RecursiveFibonacci(n-2)
In this situation, the same sub-problems may be solved over and over again. Does this mean it is not dynamic programming? That is, if I wanted dynamic programming, would I have to avoid re-solving subproblems, for example by using an array of length n and storing the solution to each subproblem (the first entries of the array would be 1, 1, 2, 3, 5, 8, 13, 21)?
Fibonacci(n)
    F[1] = 1
    F[2] = 1
    for i = 3 to n
        F[i] = F[i-1] + F[i-2]
    return F[n]
Dynamic programs can usually be succinctly described with recursive formulas.
But if you implement them with simple recursive computer programs, these are often inefficient for exactly the reason you raise: the same computation is repeated. Fibonacci is an example of repeated computation, though it is not a dynamic program.
There are two approaches to avoiding the repetition.
Memoization. The idea here is to cache the answer computed for each set of arguments to the recursive function and return the cached value when it exists.
Bottom-up table. Here you "unwind" the recursion so that results at levels less than i are combined into the result at level i. This is usually depicted as filling in a table, where the levels are rows.
One of these methods is implied for any DP algorithm. If computations are repeated, the algorithm isn't a DP. So the answer to your question is "yes."
So an example... Let's try the problem of making change of c cents given you have coins with values v_1, v_2, ... v_n, using a minimum number of coins.
Let N(c) be the minimum number of coins needed to make c cents. Then one recursive formulation is
N(c) = 1 + min_{i = 1..n} N(c - v_i)
The base cases are N(0)=0 and N(k)=inf for k<0.
To memoize this requires just a hash table mapping c to N(c).
In this case the "table" has only one dimension, which is easy to fill in. Say we have coins with values 1, 3, 5, then the N table starts with
N(0) = 0, the initial condition.
N(1) = 1 + min(N(1-1), N(1-3), N(1-5)) = 1 + min(0, inf, inf) = 1
N(2) = 1 + min(N(2-1), N(2-3), N(2-5)) = 1 + min(1, inf, inf) = 2
N(3) = 1 + min(N(3-1), N(3-3), N(3-5)) = 1 + min(2, 0, inf) = 1
You get the idea. You can always compute N(c) from N(d), d < c in this manner.
In this case, you need only remember the last 5 values because that's the biggest coin value. Most DPs are similar. Only a few rows of the table are needed to get the next one.
The table is k-dimensional for k independent variables in the recursive expression.
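Here is a minimal Python sketch of both approaches for this coin-change recurrence (the names n_memo and n_table are mine; the coin values are just the example from above, and inf stands for "no solution"):

from functools import lru_cache

COINS = (1, 3, 5)

# Top-down: memoize N(c) exactly as the recursive formula states.
@lru_cache(maxsize=None)
def n_memo(c):
    if c == 0:
        return 0
    if c < 0:
        return float("inf")
    return 1 + min(n_memo(c - v) for v in COINS)

# Bottom-up: fill the one-dimensional table N[0..c] in increasing order.
def n_table(c):
    N = [0] + [float("inf")] * c
    for amount in range(1, c + 1):
        for v in COINS:
            if v <= amount:
                N[amount] = min(N[amount], 1 + N[amount - v])
    return N[c]

print(n_memo(7), n_table(7))   # both print 3 (e.g. 5 + 1 + 1)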
We think of a dynamic programming approach to a problem if it has
overlapping subproblems
optimal substructure
In very simple words we can say dynamic programming has two faces: the top-down and the bottom-up approach.
In your case, if you are talking about the recursive version, that is the top-down approach.
In the top-down approach, we write a recursive (brute-force) solution and memoize the results, so that we can reuse a result when the same subproblem arrives again; so it is brute force + memoization. We can achieve that brute-force approach with a simple recursive relation.
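For the Fibonacci example above, the top-down version is literally the recursive function plus a cache; a minimal sketch (the dict-based memo is just one way to do it):

def fib(n, memo={}):
    # Top-down DP: same recursion as RecursiveFibonacci, but each n is computed once.
    if n in memo:
        return memo[n]
    if n <= 2:
        return 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))   # 12586269025, reached with O(n) calls instead of O(2^n)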
EDIT
So it seems I "underestimated" what varying-length numbers meant. I didn't even think about situations where the operands are 100 digits long. In that case, my proposed algorithm is definitely not efficient. I'd probably need an implementation whose complexity depends on the number of digits in each operand as opposed to its numerical value, right?
As suggested below, I will look into the Karatsuba algorithm...
Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.
I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:
a * b = (a/2) * (2b)           if a is even
a * b = ((a-1)/2) * (2b) + b   if a is odd
My pseudocode is:
rpa(x, y){
    if x is 1
        return y
    if x is even
        return rpa(x/2, 2y)
    if x is odd
        return rpa((x-1)/2, 2y) + y
}
I have 3 questions:
Is this efficient for arbitrary-length numbers? I implemented it in C and tried numbers of varying lengths. The run-time was near-instant in all cases, so it's hard to tell empirically...
Can I apply the Master Theorem to understand the complexity...?
a = # subproblems in recursion = 1 (max 1 recursive call across all states)
n / b = size of each subproblem = n / 2 -> b = 2 (the value is halved at each step)
f(n^d) = work done outside recursive calls = 1 -> d = 0 (the addition when a is odd)
a = 1, b^d = 1, a = b^d -> complexity is in n^d*log(n) = log(n)
this makes sense logically since we are halving the problem at each step, right?
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
Many thanks in advance
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
This actually changes everything about the problem (and makes your algorithm incorrect).
It means that 1234 is provided as the digits 1, 2, 3, 4 and you cannot operate directly on the whole number. You need to analyze your algorithm in terms of the number of additions, multiplications, and divisions.
You should expect a division to be a bit more expensive than a multiplication, and a multiplication to be a lot more expensive than an addition. So a good algorithm tries to reduce the number of divisions and multiplications.
Check out the Karatsuba algorithm (but don't just copy it; that's not what your teacher wants); it is one of the fastest for this kind of specification.
Regarding 3): Native integers are limited in how large (or small) the numbers they can represent are (32- or 64-bit integers, for example). To represent arbitrary-length numbers you can choose strings, because then you are not really limited in this way. The problem is then, of course, that your arithmetic units are not really made to add strings ;-)
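To make that concrete, here is a small sketch (my own, with an assumed helper name add_strings) of adding two non-negative numbers given as decimal digit strings; every digit costs one elementary step, which is why the analysis is done in terms of digit operations:

def add_strings(a, b):
    # Grade-school addition on digit strings, right to left with a carry.
    result, carry = [], 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        d = carry
        if i >= 0:
            d += int(a[i])
            i -= 1
        if j >= 0:
            d += int(b[j])
            j -= 1
        result.append(str(d % 10))
        carry = d // 10
    return "".join(reversed(result))

print(add_strings("1234", "987"))   # "2221"

Multiplication can be built the same way (schoolbook multiplication is O(d^2) in the number of digits d), which is exactly what Karatsuba improves on.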