What is the difference between dynamic programming and recursion?
I have gone through many articles on GeeksforGeeks, TutorialsPoint, and Wikipedia, but it seems to me that both are the same.
Can you please explain to me, using the Fibonacci series as an example, the difference between dynamic programming and recursion?
Calculating terms in the Fibonacci sequence is very easy, since in fact you only need to remember fib(n-2) and fib(n-1) in order to calculate fib(n). Because it is so easy, any algorithm is going to be extremely simple, so this example blurs the nuances between different dynamic programming paradigms. That being said, the Wikipedia page you mentioned has a nice explanation of the Fibonacci case: https://en.wikipedia.org/wiki/Dynamic_programming#Fibonacci_sequence
A function is called recursive if it calls itself during its execution.
A dynamic programming algorithm might be implemented with or without using recursion.
The core of dynamic programming is exploiting the two following facts to write an algorithm:
A solution to a problem can be broken into solutions to subproblems;
When an optimal solution S to a problem P is broken into solutions s1, s2, ... to subproblems p1, p2, ..., then s1, s2, ... are all optimal solutions to their respective subproblems.
Note that these two facts are not true of all problems. A problem only lends itself to dynamic programming if those two facts apply to it.
A simple example is finding the shortest path from point A to point B: if a shortest path from A to B goes through point C, then the two halves it is made of, from A to C and from C to B, are also shortest paths.
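Written as a recurrence, the idea is roughly the following (a sketch; this only becomes a well-defined dynamic program once the subproblems are ordered, for example by restricting which intermediate points are allowed):
shortest(A, B) = min( cost of the direct edge A-B, min over intermediate points C of [ shortest(A, C) + shortest(C, B) ] )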
In most situations, you could make recursive calls to solve the subproblems. But a "naive" recursive approach can easily result in an exponential algorithm, because of the cascading "in order to solve this problem, I need to solve these two (or more) subproblems" that might quickly escalate the number of problems you have to solve. Here is an example with fibonacci:
fib(5) = fib(4) + fib(3)
    fib(4) = fib(3) + fib(2)
        fib(3) = fib(2) + fib(1)
            fib(2) = fib(1) + fib(0)
                fib(1) = 1
                fib(0) = 0
            fib(1) = 1
        fib(2) = fib(1) + fib(0)
            fib(1) = 1
            fib(0) = 0
    fib(3) = fib(2) + fib(1)
        fib(2) = fib(1) + fib(0)
            fib(1) = 1
            fib(0) = 0
        fib(1) = 1
Here we had to evaluate 15 calls to fib just to find fib(5). But notice that there are only 6 different terms in total. Surely we could be more efficient by avoiding repeating the same calculations again and again.
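For reference, the naive recursion that produces this cascade might look like the following C sketch (illustrative code, not part of the original answer):
int fib(int n) {
    if (n < 2) return n;               // fib(0) = 0, fib(1) = 1
    return fib(n - 2) + fib(n - 1);    // recomputes the same subproblems over and over
}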
To avoid this, dynamic programming algorithms most often amount to filling an array with the solutions to the subproblems. Once you've identified the list of subproblems and the array, there might not be much incentive to go "top-down" with recursive calls that start with the largest problem and successively break it down into smaller subproblems. Instead, you can fill the array "bottom-up" starting from the most trivial problems and then using those to solve the more complex problems, until you've made it up to the problem you originally wanted to solve. In the case of the fibonacci sequence, you might end up with the following code:
int fib(int n) {
    if (n < 2) return n;        // fib(0) = 0, fib(1) = 1
    int f[n + 1];               // table of solutions to the subproblems
    f[0] = 0;
    f[1] = 1;
    for (int k = 2; k <= n; k++)
        f[k] = f[k-2] + f[k-1];
    return f[n];
}
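For comparison, the same table can also be filled "top-down" with memoized recursive calls. A minimal sketch (the names MAXN, memo and fib_memo are illustrative, not from the original answer):
#define MAXN 1000                     // illustrative fixed table size

int memo[MAXN + 1];                   // zero-initialized; 0 means "not computed yet",
                                      // which is safe because fib(n) >= 1 for n >= 2

int fib_memo(int n) {
    if (n < 2) return n;              // fib(0) = 0, fib(1) = 1
    if (memo[n] != 0) return memo[n]; // reuse a previously computed subproblem
    return memo[n] = fib_memo(n - 2) + fib_memo(n - 1);
}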
However, in the case of the fibonacci sequence, you only need to remember the last two terms at any time, so instead of filling a full array with all terms from fib(0) to fib(n), you can simply keep two variables (or a size-2 array) with the previous two results. Arguably this is still dynamic programming, although it ends up simply being a loop that calculates the terms of the sequence in order and it's hard to see anything "dynamic" about it.
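A minimal sketch of that two-variable version (again illustrative rather than code from the original answer):
int fib(int n) {
    if (n < 2) return n;
    int prev = 0, curr = 1;           // fib(0) and fib(1)
    for (int k = 2; k <= n; k++) {
        int next = prev + curr;       // fib(k) = fib(k-2) + fib(k-1)
        prev = curr;
        curr = next;
    }
    return curr;
}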
Related
I am having trouble simplifying the time complexity of this recursive algorithm for finding the Power-Set of a given Input Set. I'm not entirely sure if what I have so far is correct, either.
It's described at the bottom of the page in this link: http://www.ecst.csuchico.edu/~akeuneke/foo/csci356/notes/ch1/solutions/recursionSol.html
By considering each step taken by the function for an arbitrarily chosen Input Set of size 4 and then translating that to an Input Set of size n, I came to the result that the time complexity in terms of Big-O notation for this algorithm is: O(2^n * n^n)
Is this correct? And is there a specific way to approach finding the time-complexity of recursive functions?
The run-time is actually O(n*2^n). The simple explanation is that this is an asymptotically optimal algorithm insofar as the total work it does is dominated by creating the subsets which feature directly in the final output of the algorithm, with the total length of the output generated being O(n*2^n). We can also analyze an annotated implementation of the pseudo-code (in JavaScript) to show this complexity more rigorously:
function powerSet(S) {
    if (S.length == 0) return [[]]                         // O(1)
    let e = S.pop()                                        // O(1)
    let pSetWithoutE = powerSet(S);                        // T(n-1)
    let pSet = pSetWithoutE                                // O(1)
    pSet.push(...pSetWithoutE.map(set => set.concat(e)))   // O(2*|T(n-1)| + ||T(n-1)||)
    return pSet;                                           // O(1)
}

// print example:
console.log('{');
for (let subset of powerSet([1,2,3])) console.log(`\t{`, subset.join(', '), `}`);
console.log('}')
Where T(n-1) represents the run-time of the recursive call on n-1 elements, |T(n-1)| represents the number of subsets in the power-set returned by the recursive call, and ||T(n-1)|| represents the total number of elements across all subsets returned by the recursive call.
The line with complexity represented in these terms corresponds to the second bullet point of step 2 of the pseudocode: returning the union of the power-set without element e, and that same power-set with every subset s unioned with e:
(1) ∪ (2), where (2) = { s ∪ {e} : s in (1) }
This union is implemented in terms of push and concat operations. The push does the union of (1) with (2) in |T(n-1)| time as |T(n-1)| new subsets are being unioned into the power-set. The map of concat operations is responsible for generating (2) by appending e to every element of pSetWithoutE in |T(n-1)| + ||T(n-1)|| time. This second complexity corresponds to there being ||T(n-1)|| elements across the |T(n-1)| subsets of pSetWithoutE (by definition), and each of those subsets being increased in size by 1.
We can then represent the run-time on input size n in these terms as:
T(n) = T(n-1) + 2|T(n-1)| + ||T(n-1)|| + 1; T(0) = 1
It can be proven via induction that:
|T(n)| = 2^n
||T(n)|| = n*2^(n-1)
which yields:
T(n) = T(n-1) + 2*2^(n-1) + (n-1)*2^(n-2) + 1; T(0) = 1
When you solve this recurrence relation analytically, you get:
T(n) = n + 2^n + (n/2)*2^n = O(n*2^n)
which matches the expected complexity for an optimal power-set generation algorithm. The solution of the recurrence relation can also be understood intuitively:
Each of n iterations does O(1) work outside of generating new subsets of the power-set, hence the n term in the final expression.
In terms of the work done in generating every subset of the power-set, each subset is pushed once after it is generated through concat. There are 2^n subsets pushed, producing the 2^n term. Each of these subsets has an average length of n/2, giving a combined length of (n/2)*2^n, which corresponds to the complexity of all concat operations. Hence, the total time is given by n + 2^n + (n/2)*2^n.
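For completeness, a sketch of the unrolling that produces this closed form, using the standard sums Σ_{k=1..n} 2^k = 2^(n+1) - 2 and Σ_{k=1..n} (k-1)*2^(k-2) = (n-2)*2^(n-1) + 1:
T(n) = T(0) + Σ_{k=1..n} [ 2*2^(k-1) + (k-1)*2^(k-2) + 1 ]
     = 1 + (2^(n+1) - 2) + ((n-2)*2^(n-1) + 1) + n
     = n + 2^n + (n/2)*2^n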
Get the closed form of these equations if possible. Then determine which one would be faster than the other.
f(n) = 0.25f(n/3)+ f(n/10) + logn, f(1) = 1
g(n) = n + log(n-1)^2 + 1
For these equations, do I have to expand the recursions and try to discover patterns within them? I really don't know how to compute a closed form intuitively.
Short answer: g(n) > f(n) asymptotically.
Long answer: g is not even recursive, so you can see immediately that g(n) = Θ(n).
For f, assuming f is nondecreasing, you can bound f(n) <= 1.25*f(n/3) + log n,
which, by the master theorem (case 1), is O(n^(log_3 1.25)) ≈ O(n^0.2), far below the linear growth of g.
I was practising with an old AI exam, came across one challenging question, and need help from some experts...
A is the initial state and G is a goal state. Costs are shown on the edges and heuristic "H" values are shown on each circle. The IDA* limit is 7.
We want to search this graph with IDA*. What is the order of visiting these nodes? (Children are selected in alphabetical order, and when they are otherwise equal, the node that was generated first is selected first.)
Solution is A,B,D,C,D,G.
My question is: how is this calculated, and how can we say this heuristic is admissible and consistent?
My question is: how is this calculated, and how can we say this heuristic is admissible and consistent?
Let's first start with the definitions of admissible and consistent heuristics:
An admissible heuristic never overestimates the cost of reaching the goal, i.e. the cost estimated to reach the goal is not greater than the cost of the shortest path from that node to the goal node in the graph.
You can easily see that for all nodes n in the graph the estimate h(n) is always smaller than or equal to the cost of the real shortest path. For example, h(B) = 0 <= 6 (B->F->G).
Let c(n, n') denote the cost of an optimal path in the graph from a node n
to another node n'. A heuristic estimate function h(n) is consistent when
h(n) <= c(n, n') + h(n') for all nodes n, n' in the graph. Another way of seeing the property of consistency is monotonicity. Consistent heuristic functions are also called monotone functions, because the estimated total cost of a partial solution (f = g + h) is monotonically non-decreasing along the best path to the goal. Thus, we can notice that your heuristic function is not consistent:
h(A) <= c(A, B) + h(B) -> 6 <= 2 + 0, which does not hold.
Let me use an analogy to explain it in a less mathematical way.
You are going for a run with your friend. At certain points you ask your friend how long it will take to finish the run. He is a very optimistic guy, and he always gives you a time smaller than what you could actually achieve, even if you ran at your top speed the rest of the way.
However, he is not very consistent in his estimations. At a point A he told you it would take at least an hour more of running, and after 30 minutes of running you ask him again at a point B. Now he tells you that it is at least 5 more minutes from there. The estimate at point B, combined with the 30 minutes you have already run, tells you less than the estimate at point A did, and therefore your heuristic friend is inconsistent.
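In the notation of the definition above: here h(A) = 60 minutes, c(A, B) = 30 minutes and h(B) = 5 minutes, so consistency would require h(A) <= c(A, B) + h(B), i.e. 60 <= 30 + 5, which does not hold.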
Regarding the execution of IDA*, I copy-paste the pseudocode of the algorithm (which I haven't tested) from Wikipedia:
node              current node
g                 the cost to reach current node
f                 estimated cost of the cheapest path (root..node..goal)
h(node)           estimated cost of the cheapest path (node..goal)
cost(node, succ)  step cost function
is_goal(node)     goal test
successors(node)  node expanding function

procedure ida_star(root)
    bound := h(root)
    loop
        t := search(root, 0, bound)
        if t = FOUND then return bound
        if t = ∞ then return NOT_FOUND
        bound := t
    end loop
end procedure

function search(node, g, bound)
    f := g + h(node)
    if f > bound then return f
    if is_goal(node) then return FOUND
    min := ∞
    for succ in successors(node) do
        t := search(succ, g + cost(node, succ), bound)
        if t = FOUND then return FOUND
        if t < min then min := t
    end for
    return min
end function
Following the execution for your example is straightforward. First we set the bound (or threshold) to the value of the heuristic function for the start node. We explore the graph in a depth-first manner, pruning the branches whose f-value is greater than the bound. For example, f(F) = g(F) + h(F) = 4 + 4 = 8 > bound = 6.
The nodes are explored in the following order: A, B, D, C, D, G. In the first iteration of the algorithm the nodes A, B and D are explored, and we run out of options whose f-value stays within the bound.
The bound is then updated to the smallest f-value that exceeded it, and in the second iteration the nodes C, D and G are explored. Once we reach the goal node with an estimate (7) less than the bound (8), we have the optimal shortest path.
I am trying to perform asymptotic analysis on the following recursive function for efficiently computing a power of a number. I am having trouble determining the recurrence equation because there are different cases for when the power is odd and when it is even, and I am unsure how to handle this situation. I understand that the running time is Θ(log n), so any advice on how to proceed to this result would be appreciated.
Recursive-Power(x, n):
    if n == 1
        return x
    if n is even
        y = Recursive-Power(x, n/2)
        return y*y
    else
        y = Recursive-Power(x, (n-1)/2)
        return y*y*x
In either case (n even or odd), the following recurrence holds:
T(n) = T(floor(n/2)) + Θ(1)
where floor(x) is the biggest integer not greater than x.
Since the floor has no influence on the asymptotic result, the recurrence is informally written as:
T(n) = T(n/2) + Θ(1)
You have guessed the asymptotic bound correctly. The result can be proved using the substitution method or the master theorem; it is left as an exercise for you.
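As a hint, unrolling the simplified recurrence already reveals the pattern (a sketch, writing the Θ(1) term as a constant c and ignoring floors):
T(n) = T(n/2) + c = T(n/4) + 2c = ... = T(1) + c*log2(n) = Θ(log n),
since n can only be halved about log2(n) times before it reaches 1.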
I've been working through a recent Computer Science homework involving recursion and big-O notation. I believe I understand this pretty well (certainly not perfectly, though!), but there is one question in particular that is giving me the most problems. The odd thing is that, looking at it, it seems to be the simplest one on the homework.
Provide the best rate of growth using the big-Oh notation for the solution to the following recurrence?
T(1) = 2
T(n) = 2T(n - 1) + 1 for n>1
And the choices are:
O(n log n)
O(n^2)
O(2^n)
O(n^n)
I understand that big O works as an upper bound, describing the largest amount of computation, or the longest running time, that a program or process will take. I feel like this particular recursion should be O(n), since, at most, the recursion only occurs once for each value of n. Since O(n) isn't among the choices, the answer must be either the closest bound above it, O(n log n), or one of the other, worse options.
So, my question is: Why isn't this O(n)?
There are a couple of different ways to solve recurrences: substitution, recursion tree, and the master theorem. The master theorem won't work in this case, because the recurrence doesn't fit the master theorem form.
You could use the other two methods, but the easiest way for this problem is to solve it iteratively.
T(n) = 2T(n-1) + 1
T(n) = 4T(n-2) + 2 + 1
T(n) = 8T(n-3) + 4 + 2 + 1
T(n) = ...
See the pattern?
T(n) = 2^(n-1)⋅T(1) + 2^(n-2) + 2^(n-3) + ... + 1
T(n) = 2^(n-1)⋅2 + 2^(n-2) + 2^(n-3) + ... + 1
T(n) = 2^n + 2^(n-2) + 2^(n-3) + ... + 1
Therefore, the tightest bound is Θ(2^n).
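As a quick sanity check, that sum collapses to 2^n + (2^(n-1) - 1) = 3*2^(n-1) - 1, which for n = 1, 2, 3, 4, 5 gives 2, 5, 11, 23, 47, exactly the values produced by the recurrence.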
I think you have misunderstood the question a bit. It does not ask you how long it would take to solve the recurrence. It is asking what the big-O (the asymptotic bound) of the solution itself is.
What you have to do is to come up with a closed form solution, i.e. the non-recursive formula for T(n), and then determine what the big-O of that expression is.
The question is asking for the big-Oh notation for the solution to the recurrence, not the cost of calculation the recurrence.
Put another way: the recurrence produces:
1 -> 2
2 -> 5
3 -> 11
4 -> 23
5 -> 47
What big-Oh notation best describes the sequence 2, 5, 11, 23, 47, ...?
The correct way to solve that is to solve the recurrence equations.
I think this will be exponential. Each increment to n makes the value roughly twice as large:
T(2) = 2 * T(1) + 1 = 5
T(3) = 2 * T(2) + 1 = 11
...
T(x) would be the running time of the following program (for example):
def fn(x):
    if (x == 1):
        return # a constant time
    # do the calculation for x - 1 twice
    fn(x - 1)
    fn(x - 1)
I think this will be exponential. Each increment to n brings twice as much calculation.
No, it doesn't. Quite on the contrary:
Consider that for n iterations, we get running time R. Then for n + 1 iterations we'll get exactly R + 1.
Thus, the growth rate is constant and the overall runtime is indeed O(n).
However, I think Dima's assumption about the question is right although his solution is overly complicated:
What you have to do is to come up with a closed form solution, i.e. the non-recursive formula for T(n), and then determine what the big-O of that expression is.
It's sufficient to examine the relative sizes of T(n) and T(n + 1) and determine the relative growth rate: the value obviously doubles at each step, which directly gives the exponential asymptotic growth.
First off, all four answers are worse than O(n)... O(n*log n) is more complex than plain old O(n). What's bigger: 8 or 8 * 3, 16 or 16 * 4, etc...
On to the actual question. The value can obviously be computed in constant time if you're not doing recursion, using the closed form
( T(n) = 2^(n - 1) + 2^(n) - 1 ), so that's not what they're asking.
And as you can see, if we write the recursive code:
int T( int N )
{
    if (N == 1) return 2;
    return( 2*T(N-1) + 1);
}
It's obviously O(n).
So it appears to be a badly worded question, and they are probably asking you for the growth of the function itself, not the complexity of the code. That's 2^n. Now go do the rest of your homework... and study up on O(n * log n).
Computing a closed form solution to the recursion is easy.
By inspection, you guess that the solution is
T(n) = 3*2^(n-1) - 1
Then you prove by induction that this is indeed a solution. Base case:
T(1) = 3*2^0 - 1 = 3 - 1 = 2. OK.
Induction:
Suppose T(n) = 3*2^(n-1) - 1. Then
T(n+1) = 2*T(n) + 1 = 2*(3*2^(n-1) - 1) + 1 = 3*2^n - 2 + 1 = 3*2^((n+1)-1) - 1. OK.
where the first equality stems from the recurrence definition,
and the second from the inductive hypothesis. QED.
3*2^(n-1) - 1 is clearly Θ(2^n), hence the right answer is the third option, O(2^n).
To the folks that answered O(n): I couldn't agree more with Dima. The problem does not ask for the tightest upper bound on the computational complexity of an algorithm that computes T(n) (which would now be O(1), since its closed form has been provided). The problem asks for the tightest upper bound on T(n) itself, and that is the exponential one.