What is the complexity of a recursive snowflake function? - recursion

import turtle

def fractal(order, length):
    # The base case basically draws a segment.
    if order == 0:
        turtle.forward(length)
    else:
        l = length / 3
        fractal(order - 1, l)
        turtle.left(60)
        fractal(order - 1, l)
        turtle.right(120)
        fractal(order - 1, l)
        turtle.left(60)
        fractal(order - 1, l)

def snowflake(order, length):
    fractal(order, length)
    turtle.right(120)
    fractal(order, length)
    turtle.right(120)
    fractal(order, length)

turtle.speed(0)   # set the drawing speed before drawing
snowflake(3, 300)
turtle.done()
This is a recursive function that traces a fractal-shaped snowflake.
The complexity depends on order, but I can't work it out with so many recursive calls happening at every level.

Although the function might look complicated, it is worth noting that the execution of fractal only depends on order. So, complexity-wise, it can be reduced to just:
def fractal(order):
    if order == 0:
        pass  # do O(1) work
    else:
        fractal(order - 1)
        fractal(order - 1)
        fractal(order - 1)
        fractal(order - 1)
i.e. four recursive calls with order - 1; the time-complexity recurrence is then very simple:
T(n) = 4 * T(n - 1) + O(1)
T(0) = O(1)
which easily works out to be O(4^n).
snowflake just makes 3 identical calls to fractal, so it is also O(4^n).
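If you want to sanity-check that bound, here is a small sketch (a hypothetical helper, not part of the original program) that counts base-case segments instead of drawing them:
def count_segments(order):
    if order == 0:
        return 1                      # one forward() call in the base case
    # four recursive calls, mirroring fractal()
    return 4 * count_segments(order - 1)

for order in range(6):
    # snowflake makes 3 top-level calls to fractal
    print(order, 3 * count_segments(order))
The output grows as 3, 12, 48, 192, ..., i.e. 3 * 4^order segments, matching the O(4^n) analysis.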

Related

Time complexity of this recursive block

int recursiveFunc(int n) {
    if (n == 1) return 0;
    for (int i = 2; i < n; i++)
        if (n % i == 0) return i + recursiveFunc(n / i);
    return n;
}
I know complexity = (length of the tree from root node to leaf node) * (number of leaf nodes), but I'm having a hard time coming up with an equation.
This one is tricky, because the runtime is highly dependent on what number you provide as input, in a way that most recursive functions are not.
For starters, notice that the way this recursion works, it takes in a number and then either
returns without making any further calls, if the number is prime, or
finds the smallest proper factor of the number and recursively calls itself on the number divided by that factor.
This means that in one case the function, called on a number n, will do Θ(n) work and make no calls (which happens if the number is prime), and in the other case it will do Θ(d) work and then make a recursive call on the number n / d, where d is the smallest divisor of n (which happens if n is composite).
One useful fact we'll use to analyze this function is that given a composite number n, the smallest factor d of n is never any greater than √n. If it were, then we would have that n = df for some other factor f, and since d is the smallest proper divisor, we'd have that f ≥ d, so df > √n · √n = n, which would be impossible.
With that in mind, we can argue that the worst-case runtime of this function is O(n), and that it is attained when n is prime. Here's how to see this. Consider the worst-case amount of time this function can take if it ends up making a recursive call. In that case, the function will do at most Θ(√n) work (assuming the smallest divisor is as large as possible), then recursively calls itself on a number of size at most n / 2 (the absolute largest number we could get as part of the recursive call). Under the pessimistic assumption that we do the maximum work possible, we'd get this recurrence relation:
T(n) = T(n / 2) + √n
This solves, by the Master Theorem, to Θ(√n), which is less work than what we'd do if we had a prime number as an input.
But what happens if, instead, we do the maximum amount of work possible for some number of iterations, and then end up with a prime number and stop? In that case, using the iteration method, we'd see that the work done would be
√n + √(n/2) + √(n/4) + ... + n / 2^k,
which would happen if we stopped after k iterations. In this case, notice that this expression is maximized when we pick k to be as small as possible - which would correspond to stopping as soon as possible, which happens if we pick a prime number for n.
So in this sense, the worst-case runtime of this function is Θ(n), which happens for n being a prime number, with composite numbers terminating much faster than this.
So how fast can this function be? Well, imagine, for example, that we have a number of the form p^k, where p is some prime number. In that case, this function will do Θ(p) work to discover p as a prime factor, then recursively call itself on the number p^(k-1). If you think about what this will look like, this function will end up doing Θ(p) work Θ(k) times, for a total runtime of Θ(p * k). And since n = p^k, we'd have k = log_p n, so the runtime would be Θ(p * log_p n). That's minimized at either p = 2 or p = 3, and in either case gives us a runtime of Θ(log n) in this case.
I strongly suspect that's the best case here, though I'm not entirely sure. But what this does mean is that
the worst-case runtime is definitely Θ(n), occurring at prime numbers, and
the best-case runtime is O(log n), which I'm fairly certain is a tight bound, but I'm not 100% sure how to prove it.
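To see this behaviour concretely, here is a small Python port of the function (an illustrative sketch, not the original C), with a counter added purely to measure how many loop iterations are performed, since that is where essentially all of the work happens:
def recursive_func(n, counter):
    if n == 1:
        return 0
    for i in range(2, n):
        counter[0] += 1               # one unit of work per loop iteration
        if n % i == 0:
            return i + recursive_func(n // i, counter)
    return n

for n in [97, 2**10, 3**7, 96]:       # a prime, two prime powers, a smooth composite
    work = [0]
    recursive_func(n, work)
    print(n, work[0])
The prime input does roughly n iterations, while the prime-power and smooth inputs stay logarithmic in n, matching the analysis above.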

White-box and Black-box testing of recursive functions

I learned white-box and black-box testing in terms of iterative functions. Now I need to do white-box and black-box testing of several recursive functions (in F#). Take the following recursive algorithm for gcd:
gcd (m, n)
  if (m % n) = 0 then
    n
  else
    gcd n (m % n)
For the white-box test: how exactly do I go about covering the different branches of the algorithm? Naively one could say there are two branches, but when the function is called more than once the number of possible paths obviously increases. Should I test with arguments that result in different numbers of recursive calls, or how exactly do I determine which values to test with?
Black-box: I get the general idea of black-box testing. We should look at possible values we might want to call the function with, without having knowledge of its inner workings. In this case I am just not sure which values we might want to call it with. One way could be to start with two values m and n for which gcd = 1, then do the same for values m and n for which gcd = 2, and so on up to gcd = k for some arbitrary number k. Is this how one is supposed to go about this?
First of all, I don't think there is one single established definition of how to do white-box and black-box testing of recursive functions, but here is how I interpret it.
White-box testing. We want to test the function based on its inner working. In case of recursive functions, I think this means that we want to test that the recursive calls it makes are the ones we would expect. One way to do this is to log all recursive calls. A simple implementation of gcd that does this adds a parameter to keep a log and returns it with the result:
let rec gcd log m n =
  let log = (m, n)::log
  if (m % n) = 0 then List.rev log, n
  else gcd log n (m % n)
Now, for some two parameters, say 54 and 22, you can do the calculation by hand, decide what the parameters of the recursive calls should be and write a test for that:
let log, res = gcd [] 54 22
log |> shouldEqual [ (54, 22); (22, 10); (10, 2) ]
Black-box testing. Here, we assume we do not know how exactly the function works, so we cannot test its internals. All we can do is to test it using a number of inputs. It is probably a good idea to think of corner-case or tricky inputs because those are the ones that could cause problems. Given a simple implementation:
let rec gcd m n =
  if (m % n) = 0 then n
  else gcd n (m % n)
I would probably write tests for the following:
// A random case where one of the numbers is the result
gcd 100 50 |> shouldEqual 50
gcd 50 100 |> shouldEqual 50
// A random case where the only common divisor is 1
gcd 13 123 |> shouldEqual 1
gcd 123 13 |> shouldEqual 1
// The following are problematic and I'm not sure what the right behaviour is
gcd 0 0      // This probably should not be allowed (it divides by zero)
gcd 10 (-5)  // This returns -5, but I'm not sure that's what we want
Random testing.
You could also use random testing (which is a form of black box testing) to generate multiple test cases automatically. There are at least two random tests I can think of:
Generate two random numbers, a and b and check that gcd a b = gcd b a. This is testing only a very basic property, but it can cover quite a lot of cases.
Pick a random number a and a couple of primes p1, p2, .... Then split the primes into two groups and produce a*p1*p3*p5 and a*p2*p4*p6. Write a test that checks that the GCD of the two numbers is a.
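Here is a minimal sketch of those two random tests, written in Python rather than F# purely for illustration (the gcd definition below just mirrors the recursive algorithm above; names and bounds are arbitrary):
import random

def gcd(m, n):
    return n if m % n == 0 else gcd(n, m % n)

# Property 1: gcd is symmetric in its arguments
for _ in range(1000):
    a, b = random.randint(1, 10**6), random.randint(1, 10**6)
    assert gcd(a, b) == gcd(b, a)

# Property 2: build two numbers that share exactly the factor a
a, p = 42, [101, 103, 107, 109]
assert gcd(a * p[0] * p[2], a * p[1] * p[3]) == a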

Fibonacci sequence in solving an equation

I'm trying to figure out an equation: f(n) = f(n-1) + 3n^2 - n. I also have the values to use as f(1), f(2), f(3). How would I go about solving this?
You would usually use recursion but, whether you do that or an iterative solution, you're missing (or simply haven't shown us) a vital bit of information, the terminating condition such as f(1) = 1 (for example).
With that extra piece of information, you could code up a recursive solution relatively easily, such as the following pseudo-code:
define f(n):
    if n == 1:
        return 1
    return f(n-1) + (3 * n * n) - n
As an aside, that's not actually Fibonacci, which is the specific 1, 1, 2, 3, 5, 8, 13, ... sequence.
It can be said to be Fibonacci-like but it's actually more efficient to do this one recursively since it only involves one self-referential call per level whereas Fibonacci needs two:
define f(n):
    if n <= 2:
        return 1
    return f(n-2) + f(n-1)
And if you're one of those paranoid types who doesn't like recursion (and I'll admit freely it can have its problems in the real world of limited stack depths), you could opt for the iterative version.
define f(n):
    if n == 1:
        return 1
    parent = 1
    for num = 2 to n inclusive:
        result = parent + (3 * num * num) - num
        parent = result
    return result
If you ask this question on a programming site such as Stack Overflow, you can expect to get code as an answer.
On the other hand, if you are looking for a closed formula for f(n), then you should direct your question to a specialised StackExchange site such as Computer Science.
Note: what you are looking for is called the repertoire method. It can be used to solve your problem (the closed formula is very simple).
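For reference, with the terminating condition f(1) = 1 used in the pseudo-code above, unrolling the recurrence (f(n) = f(1) + the sum of 3k^2 - k for k = 2..n) gives the closed formula f(n) = n^3 + n^2 - 1. A throwaway Python sketch to check that numerically:
def f(n):
    return 1 if n == 1 else f(n - 1) + 3 * n * n - n

for n in range(1, 20):
    assert f(n) == n**3 + n**2 - 1    # closed form, assuming f(1) = 1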

Can recursion be dynamic programming?

I was asked to use dynamic programming to solve a problem. I have mixed notes on what constitutes dynamic programming. I believe it requires a "bottom-up" approach, where smallest problems are solved first.
One thing I have contradicting information on, is whether something can be dynamic programming if the same subproblems are solved more than once, as is often the case in recursion.
For instance, for Fibonacci I can have a recursive algorithm:
RecursiveFibonacci(n)
    if (n = 1 or n = 2)
        return 1
    else
        return RecursiveFibonacci(n-1) + RecursiveFibonacci(n-2)
In this situation, the same sub-problems may be solved over and over again. Does this mean it is not dynamic programming? That is, if I wanted dynamic programming, would I have to avoid re-solving subproblems, for example by using an array of length n and storing the solution to each subproblem (the first entries of the array being 1, 1, 2, 3, 5, 8, 13, 21)?
Fibonacci(n)
    F[1] = 1
    F[2] = 1
    for i = 3 to n
        F[i] = F[i-1] + F[i-2]
    return F[n]
Dynamic programs can usually be succinctly described with recursive formulas.
But if you implement them with simple recursive computer programs, these are often inefficient for exactly the reason you raise: the same computation is repeated. Fibonacci is an example of such repeated computation, though it is not a dynamic program.
There are two approaches to avoiding the repetition.
Memoization. The idea here is to cache the answer computed for each set of arguments to the recursive function and return the cached value when it exists.
Bottom-up table. Here you "unwind" the recursion so that results at levels less than i are combined to the result at level i. This is usually depicted as filling in a table, where the levels are rows.
One of these methods is implied for any DP algorithm. If computations are repeated, the algorithm isn't a DP. So the answer to your question is "yes": to make it dynamic programming, you store the subproblem results (in a table or a cache) rather than re-solving them.
So an example... Let's try the problem of making change of c cents given you have coins with values v_1, v_2, ... v_n, using a minimum number of coins.
Let N(c) be the minimum number of coins needed to make c cents. Then one recursive formulation is
N(c) = 1 + min_{i = 1..n} N(c - v_i)
The base cases are N(0)=0 and N(k)=inf for k<0.
To memoize this requires just a hash table mapping c to N(c).
In this case the "table" has only one dimension, which is easy to fill in. Say we have coins with values 1, 3, 5, then the N table starts with
N(0) = 0, the initial condition.
N(1) = 1 + min(N(1-1), N(1-3), N(1-5)) = 1 + min(0, inf, inf) = 1
N(2) = 1 + min(N(2-1), N(2-3), N(2-5)) = 1 + min(1, inf, inf) = 2
N(3) = 1 + min(N(3-1), N(3-3), N(3-5)) = 1 + min(2, 0, inf) = 1
You get the idea. You can always compute N(c) from N(d), d < c in this manner.
In this case, you need only remember the last 5 values because that's the biggest coin value. Most DPs are similar. Only a few rows of the table are needed to get the next one.
The table is k-dimensional for k independent variables in the recursive expression.
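As a concrete sketch of that bottom-up table fill (Python, using the same example coin values 1, 3, 5; the function name is made up for illustration):
def min_coins(c, coins=(1, 3, 5)):
    INF = float("inf")
    N = [0] + [INF] * c                            # N[0] = 0, the initial condition
    for amount in range(1, c + 1):
        for v in coins:
            if v <= amount and N[amount - v] + 1 < N[amount]:
                N[amount] = N[amount - v] + 1      # N(c) = 1 + min over usable coins
    return N[c]

print([min_coins(c) for c in range(9)])            # [0, 1, 2, 1, 2, 1, 2, 3, 2]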
We think of a dynamic programming approach to a problem if it has
overlapping subproblems
optimal substructure
In very simple words, dynamic programming has two faces: the top-down and the bottom-up approach.
In your case, the recursion corresponds to the top-down approach.
In the top-down approach, we write a recursive (brute-force) solution and memoize the results, so that when a similar subproblem arrives we reuse the stored result; it is brute force + memoization. We can express that brute-force part with a simple recurrence relation, as in the sketch below.
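As a minimal sketch of that top-down approach for the Fibonacci example from the question (Python; using functools.lru_cache as the memo is just one implementation choice):
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)       # each subproblem is computed once, then cached

print(fib(50))                           # 12586269025, with O(n) calls instead of O(2^n)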

How do I use Master theorem to describe recursion?

Recently I have been studying recursion; how to write it, analyze it, etc. I have thought for a while that recurrence and recursion were the same thing, but some problems on recent homework assignments and quizzes have me thinking there are slight differences, that 'recurrence' is the way to describe a recursive program or function.
This has all been very Greek to me until recently, when I realized that there is something called the 'master theorem' used to write the 'recurrence' for problems or programs. I've been reading through the wikipedia page, but, as usual, things are worded in such a way that I don't really understand what it's talking about. I learn much better with examples.
So, a few questions:
Let's say you are given this recurrence:
r(n) = 2*r(n-2) + r(n-1)
r(1) = r(2) = 1
Is this, in fact, in the form of the master theorem? If so, in words, what is it saying? If you were to write a small program or a tree of recursion based on this recurrence, what would that look like? Should I just try substituting numbers in, seeing a pattern, then writing pseudocode that could recursively create that pattern, or, since this may be in the form of the master theorem, is there a more straightforward, mathematical approach?
Now, lets say you were asked to find the recurrence, T(n), for the number of additions performed by the program created from the previous recurrence. I can see that the base case would probably be T(1) = T(2) = 0, but I'm not sure where to go from there.
Basically, I am asking how to go from a given recurrence to code, and the opposite. Since this looks like the master theorem, I'm wondering if there is a straightforward and mathematical way of going about it.
EDIT: Okay, I've looked through some of my past assignments to find another example of where I'm asked 'to find the recurrence', which is the part of this question I'm having the most trouble with.
Recurrence that describes in the best way the number of addition operations in the following program fragment (when called with l == 1 and r == n):
int example(int A[], int l, int r) {
    if (l == r)
        return 2;
    return A[l] + example(A, l+1, r);
}
A few years ago, Mohamad Akra and Louay Bazzi proved a result that generalizes the Master method -- it's almost always better. You really shouldn't be using the Master Theorem anymore...
See, for example, this writeup: http://courses.csail.mit.edu/6.046/spring04/handouts/akrabazzi.pdf
Basically, get your recurrence to look like equation 1 in the paper, pick off the coefficients, and integrate the expression in Theorem 1.
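For instance (a standard illustration, not taken from the handout): for T(n) = 2*T(n/2) + n, the coefficients are a1 = 2, b1 = 1/2 and g(n) = n. Solving 2*(1/2)^p = 1 gives p = 1, and the integral of g(u)/u^(p+1) = 1/u from 1 to n is ln n, so T(n) = Θ(n^p * (1 + ln n)) = Θ(n log n), the same result the Master Theorem gives for this recurrence.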
Zachary:
Let's say you are given this recurrence:
r(n) = 2*r(n-2) + r(n-1); r(1) = r(2) = 1
Is this, in fact, in the form of the master theorem? If so, in words, what is it saying?
I think what your recurrence relation is saying is that, for the function r with parameter n (representing the total number of data sets you're inputting), whatever you get at the n-th position is the result at the (n-1)-th position plus twice the result at the (n-2)-th position, with no non-recursive work being done. When you try to solve a recurrence relation, you're trying to express it in a way that doesn't involve recursion.
However, I don't think that this is in the correct form for the Master Theorem. Your statement is a "second-order linear recurrence relation with constant coefficients". Apparently, according to my old Discrete Math textbook, that's the form you need to have in order to solve the recurrence relation.
Here's the form that they give:
r(n) = a*r(n-1) + b*r(n-2) + f(n)
where 'a' and 'b' are some constants and f(n) is some function of n. In your statement, a = 1, b = 2, and f(n) = 0. Whenever f(n) is equal to zero, the recurrence relation is known as "homogeneous". So, your expression is homogeneous.
I don't think that you can solve a homogeneous recurrence relation like this using the Master Theorem: the Master Theorem applies to divide-and-conquer recurrences of the form T(n) = a*T(n/b) + f(n), and none of its cases allow for f(n) = 0, because n-to-the-power-of-anything can't equal zero. I could be wrong, because I'm not really an expert at this, but I don't think it's possible to solve a homogeneous recurrence relation with the Master Theorem.
I think the way to solve a homogeneous recurrence relation is to follow these steps:
1) Form the characteristic equation, which is something of the form:
x^k - c[1]*x^(k-1) - c[2]*x^(k-2) - ... - c[k-1]*x - c[k] = 0
If you've only got 2 recursive terms in your homogeneous recurrence relation, then you only need to turn your equation into the quadratic equation
x^2 - a*x - b = 0
This is because a recurrence relation of the form
r(n) = a*r(n-1) + b*r(n-2)
can be rewritten as
r(n) - a*r(n-1) - b*r(n-2) = 0
2) After your recurrence relation is rewritten as a characteristic equation, next find the roots (x[1] and x[2]) of the characteristic equation.
3) With your roots, your solution will now be one of the two forms:
if x[1] != x[2]
    c[1]*x[1]^n + c[2]*x[2]^n
else
    c[1]*x[1]^n + n*c[2]*x[2]^n
for when n>2.
4) With the new form of your recursive solution, you use the initial conditions (r(1) and r(2)) to find c[1] and c[2]
Going with your example here's what we get:
1)
r(n) = 1*r(n-1) + 2*r(n-2)
=> x^2 - x - 2 = 0
2) Solving for x with the quadratic formula:
x = (1 +- sqrt((-1)^2 - 4(1)(-2))) / 2(1) = (1 +- 3) / 2
x[1] = (1 + 3)/2 = 2
x[2] = (1 - 3)/2 = -1
3) Since x[1] != x[2], your solution has the form:
c[1]*(x[1])^n + c[2]*(x[2])^n = c[1]*2^n + c[2]*(-1)^n
4) Now, use your initial conditions to find the two constants c[1] and c[2]:
c[1]*2^1 + c[2]*(-1)^1 = 1   =>   2*c[1] - c[2] = 1
c[1]*2^2 + c[2]*(-1)^2 = 1   =>   4*c[1] + c[2] = 1
You can solve this small linear system by hand or by row reduction on the augmented matrix
[ 2 -1 | 1 ]
[ 4  1 | 1 ]
Adding the two equations gives 6*c[1] = 2, so c[1] = 1/3, c[2] = -1/3, and therefore
r(n) = (2^n - (-1)^n) / 3
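A quick numeric check of that closed form against the original recurrence (a throwaway sketch; r below is just the recurrence coded directly in Python):
def r(n):
    return 1 if n <= 2 else 2 * r(n - 2) + r(n - 1)

for n in range(1, 15):
    assert r(n) == (2**n - (-1)**n) // 3     # closed form derived above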
Zachary:
Recurrence that describes in the best way the number of addition operations in the following program fragment (when called with l == 1 and r == n):
int example(int A[], int l, int r) {
    if (l == r)
        return 2;
    return A[l] + example(A, l+1, r);
}
Here are the per-step costs for the given code. Writing n = r - l + 1 for the number of elements in the range l..r, when r > l:
int example(int A[], int l, int r) {
    if (l == r)                          // 1 comparison
        return 2;
    return A[l] + example(A, l+1, r);    // 1 addition, plus a call on n - 1 elements
}
Total: T(n) = T(n - 1) + O(1). When r == l, T(1) = O(1), because the if-statement and the return each require one step. Counting only additions, A(n) = A(n - 1) + 1 with A(1) = 0, so the fragment performs n - 1 additions when called with l == 1 and r == n.
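To sanity-check that count, here is a Python port of the fragment (an illustrative sketch, not the original C), with a counter added just to tally the additions actually performed:
def example(A, l, r, counter):
    if l == r:
        return 2
    counter[0] += 1                          # the single A[l] + ... addition at this level
    return A[l] + example(A, l + 1, r, counter)

n = 10
additions = [0]
example(list(range(n + 1)), 1, n, additions)
print(additions[0])                          # 9, i.e. n - 1 additions for l == 1, r == n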
Your method, written in code using a recursive function, would look like this:
function r(int n)
{
    if (n == 2) return 1;
    if (n == 1) return 1;
    return 2 * r(n-2) + r(n-1); // I guess we're assuming n > 2
}
I'm not sure what "recurrence" is, but a recursive function is simply one that calls itself.
Recursive functions need an escape clause (some non-recursive case - for example, "if n==1 return 1") to prevent a Stack Overflow error (i.e., the function gets called so much that the interpreter runs out of memory or other resources)
A simple program that would implement that would look like:
public int r(int input) {
    if (input == 1 || input == 2) {
        return 1;
    } else {
        return 2 * r(input - 2) + r(input - 1);
    }
}
You would also need to make sure that the input is not going to cause an infinite recursion, for example, if the input at the beginning was less than 1. If this is not a valid case, then return an error, if it is valid, then return the appropriate value.
"I'm not exactly sure what 'recurrence' is either"
The definition of a "recurrence relation" is a sequence of numbers "whose domain is some infinite set of integers and whose range is a set of real numbers", with the additional condition that the function describing this sequence "defines one member of the sequence in terms of a previous one."
And, the objective behind solving them, I think, is to go from a recursive definition to one that isn't. Say if you had T(0) = 2 and T(n) = 2 + T(n-1) for all n>0, you'd have to go from the expression "T(n) = 2 + T(n-1)" to one like "2n+2".
sources:
1) "Discrete Mathematics with Graph Theory - Second Edition", by Edgar G. Goodair and Michael M. Parmenter
2) "Computer Algorithms C++," by Ellis Horowitz, Sartaj Sahni, and Sanguthevar Rajasekaran.
