When we talk about divide and conquer, we always use recursion. I already know that divide and conquer is an algorithm design technique, but I have one question:
Are all divide and conquer algorithms recursive? Or, to put it another way, is the idea of divide and conquer employed in all recursion?
I'm confused.
If I understand your problem correctly...
Are all Divide & Conquer algorithms recursive in nature? Yes, by definition!
There are three steps to applying divide and conquer in practice:
Divide the problem into one or more subproblems.
Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.
Combine the solutions to the subproblems into the solution for the original problem.
But if you are concerned with the implementation part, then recursion (though more elegant and simple) is not the only way.
Binary search is a well-known instance of the divide-and-conquer paradigm, and here is an iterative implementation of the algorithm:
// binary search for x in sorted array A[1 .. N]
min := 1;
max := N;
repeat
    mid := (min + max) div 2;
    if x > A[mid] then
        min := mid + 1
    else
        max := mid - 1;
until (A[mid] = x) or (min > max);
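For completeness, here is the same idea transcribed into runnable C (a sketch of my own; 0-indexed, with illustrative names):

/* Iterative binary search for x in the sorted array a[0 .. n-1];
 * returns the index of x, or -1 if it is absent. No recursion involved. */
int binary_search(const int *a, int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* written this way to avoid overflow of lo + hi */
        if (a[mid] == x) return mid;
        if (a[mid] < x)  lo = mid + 1;  /* x can only be in the right half */
        else             hi = mid - 1;  /* x can only be in the left half  */
    }
    return -1;                          /* range is empty: x is not present */
}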
I read recently in Steven Skiena's "The Algorithm Design Manual" that the harmonic sum is O(lg n). I understand how this is possible mathematically, but when I saw the recursive solution to it, which is the following:
public static double harmonic(int n) {
    if (n == 1) {
        return 1.0;                          // base case: H(1) = 1
    } else {
        return (1.0 / n) + harmonic(n - 1);  // H(n) = 1/n + H(n-1)
    }
}
I cannot see how this is O(lg n); I see it as O(n). Is the code above really O(lg n)? If so, how? If not, can someone refer me to an example where this is the case?
I believe the confusion here is that big-O notation can be used not only for running times, but for the growth of functions in general.
As kaya3 comments, the return value of the function above is O(lg n), which means it grows asymptotically as fast as lg n as you increase n. This is quite likely what the book is claiming. It is a mathematical property of the harmonic series.
The running time of the function, however, is O(n). This is what you are claiming, and it is perfectly true (assuming constant-time arithmetic).
If you are still confused about how big-O notation can be used for purposes other than the running time of algorithms, I recommend you check its definition.
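If it helps, here is a quick side-by-side demonstration, sketched in C (the iterative form and the names are mine): the value H(n) tracks ln n, while computing it costs n additions.

#include <stdio.h>
#include <math.h>

/* Iterative harmonic sum: n additions, so the running time is O(n),
 * while the returned value grows like ln(n), i.e. O(lg n). */
double harmonic(int n) {
    double h = 0.0;
    for (int k = 1; k <= n; k++)
        h += 1.0 / k;
    return h;
}

int main(void) {
    for (int n = 10; n <= 100000; n *= 10)
        printf("n = %6d  H(n) = %f  ln(n) = %f\n", n, harmonic(n), log(n));
    return 0;
}

Each tenfold increase in n adds roughly ln 10 ≈ 2.3 to H(n): logarithmic growth of the value, linear growth of the work.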
I am struggling with a recursive doubling problem I was assigned. I understand that recursive doubling breaks up a bigger problem into smaller sub-problems so that the computation may be parallelized, but I don't think it is doable with this question.
Exercise 1.4. The operation
for (i) {
x[i+1] = a[i]*x[i] + b[i];
}
cannot be handled by a pipeline because there is a dependency between
input of one iteration of the operation and the output of the
previous. However, you can transform the loop into one that is
mathematically equivalent, and potentially more efficient to compute.
Derive an expression that computes x[i+2] from x[i] without involving
x[i+1]. This is known as recursive doubling. Assume you have plenty of
temporary storage. You can now perform the calculation by
• Doing some preliminary calculations;
• Computing x[i],x[i+2],x[i+4],..., and from these,
• Compute the missing terms x[i+1],x[i+3],....
Analyze the efficiency of this scheme by giving formulas for T0(n) and
Ts(n). Can you think of an argument why the preliminary calculations
may be of lesser importance in some circumstances?
So I understand the expression for x[2] would be: x[2] = a[1]*(a[0]*x[0] + b[0]) + b[1]
but what I do not understand is (a) how this relates to recursive doubling, and (b) how this would achieve any speedup if the result of the previous calculation is still needed.
The central concept is that once you can compute x[i+2] in terms of x[i], a[i], and b[i], you can then split into two threads:
Start with x[0] and compute the even-numbered terms.
Compute x[1] from x[0], then compute the odd-numbered terms.
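A minimal sequential sketch of that transformation in C (the helper names A2 and B2 are mine; x has n+1 entries, a and b have n, and n >= 1 is assumed):

#include <stdlib.h>

/* Substituting x[i+1] = a[i]*x[i] + b[i] into x[i+2] = a[i+1]*x[i+1] + b[i+1]
 * gives: x[i+2] = (a[i+1]*a[i]) * x[i] + (a[i+1]*b[i] + b[i+1]). */
void recursive_doubling(const double *a, const double *b, double *x, int n)
{
    double *A2 = malloc((size_t)n * sizeof *A2);   /* stride-2 multipliers */
    double *B2 = malloc((size_t)n * sizeof *B2);   /* stride-2 offsets     */

    /* Preliminary calculations: independent of x, fully parallelizable. */
    for (int i = 0; i + 2 <= n; i++) {
        A2[i] = a[i + 1] * a[i];
        B2[i] = a[i + 1] * b[i] + b[i + 1];
    }

    /* Seed the odd chain, then run the two chains; they no longer depend
     * on each other, so each loop could go to its own thread. */
    x[1] = a[0] * x[0] + b[0];
    for (int i = 0; i + 2 <= n; i += 2) x[i + 2] = A2[i] * x[i] + B2[i];
    for (int i = 1; i + 2 <= n; i += 2) x[i + 2] = A2[i] * x[i] + B2[i];

    free(A2);
    free(B2);
}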
In fact, if you have good insight into your parallelization overhead, you can generate a Fibonacci tree of processes, a new one starting each time a previous thread gets going nicely.
First off, apologies if there is a better way to format math equations, I could not find anything, but alas, the expressions are pretty short.
As part of an assigned problem I have to produce some code in C that will evaluate x^n/n! for an arbitrary x and for n ∈ {1..10, 50, 100}.
I can always brute force it with a large-number library, but I am wondering if someone with better math skills than mine can suggest a better algorithm than something that is O(n!)...
I understand that I can split the numerator into x^(n/2) * x^(n/2) for even values of n, and x * x^((n-1)/2) * x^((n-1)/2) for odd values of n. And that I can further change that into a logarithm base x of n/2.
But I am stuck for multiple reasons:
1 - I do not think that computationally any of these changes actually make a lot of difference since they are not really helping me reduce the large number multiplications I have to perform, or their overall number.
2 - Even as I think of n! as 1*2*3*...*(n-1)*n, I still cannot rationalize a good way to simplify the overall equation.
3 - I have looked at Karatsuba's algorithm for multiplications, and although it is a possibility, it seems a bit complex for an intro to programming problem.
So I am wondering if you guys can think of any middle ground. I prefer explanations to straight answers if you have the time :)
Cheers,
My advice is to compute all the terms of the summation (put them in an array), and then sum them up in reverse order (i.e., smallest to largest) -- that reduces rounding error a little bit.
Note that you can compute the k-th term from the preceding one by multiplying by x/k -- you do not need to ever compute x^n or n! directly (this is important).
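In C, that recurrence looks something like this (a sketch; the function name is mine): the k-th term t(k) = t(k-1) * x/k starts from t(0) = 1, so t(n) = x^n/n! is reached in n multiplications with no huge intermediates.

#include <stdio.h>

/* Returns x^n / n! via the recurrence t(k) = t(k-1) * x / k, t(0) = 1;
 * x^n and n! are never formed explicitly, so nothing overflows early. */
double term(double x, int n)
{
    double t = 1.0;
    for (int k = 1; k <= n; k++)
        t *= x / k;
    return t;
}

int main(void)
{
    printf("x = 2.0, n =  10: %g\n", term(2.0, 10));
    printf("x = 2.0, n = 100: %g\n", term(2.0, 100));
    return 0;
}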
Find the big-O running time for each of these functions:
T(n) = T(n - 2) + n²
Our Answers: O(n²), O(n³)
T(n) = 3T(n/2) + n
Our Answers: O(n log n), O(n^(log₂ 3))
T(n) = 2T(n/3) + n
Our Answers: O(n log₃ n), O(n)
T(n) = 2T(n/2) + n³
Our Answers: O(n³ log₂ n), O(n³)
So we're having trouble deciding on the right answers for each of the questions.
We all got different results and would like an outside opinion on what the running time would be.
Thanks in advance.
A bit of clarification:
The functions in the question appear to be running time functions, as hinted by their T() name and their n parameter. A more subtle hint is the fact that they are all recursive, and recursive functions are a common occurrence when one produces a function to describe the running time of an algorithm (even when the algorithm itself isn't formally using recursion). Indeed, recursive formulas are a rather inconvenient form, and that is why we use Big-O notation to better summarize the behavior of an algorithm.
A running time function is a parametrized mathematical expression which allows computing a [sometimes approximate] relative value for the running time of an algorithm, given specific values for the parameters. As is the case here, running time functions typically have a single parameter, often named n, corresponding to the total number of items the algorithm is expected to work with (e.g. for a search algorithm it could be the total number of records in a database, for a sort algorithm the number of entries in the unsorted list, and for a path-finding algorithm the number of nodes in the graph). In some cases a running time function may have multiple arguments; for example, the performance of an algorithm performing some transformation on a graph may be bound to both the total number of nodes and the total number of edges, or the average number of connections between two nodes, etc.
The task at hand (for what appears to be homework, hence my partial answer) is therefore to find a Big-O expression that qualifies the upper bound of each of these running time functions, whatever the underlying algorithm they may correspond to. The task is not that of finding and qualifying an algorithm to produce the results of the functions (this second possibility is also a very common type of exercise in algorithms classes of a CS curriculum, but it is apparently not what is required here).
The problem is therefore more one of mathematics than of computer science per se. Basically, one needs to find the asymptotic growth rate of each of these functions as n approaches infinity.
This note from Prof. Jeff Erickson at the University of Illinois Urbana-Champaign provides a good intro to solving recurrences.
Although there are a few shortcuts to solving recurrences, particularly if one has a good command of calculus, a generic approach is to guess the answer and then prove it by induction. Tools like Excel, a few snippets in a programming language such as Python, or MATLAB or Sage, can be useful to produce tables of the first few hundred values (or beyond), along with values such as n², n³, and n!, as well as ratios of the terms of the function; these tables often provide enough insight to find the closed form of the function.
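For instance, a short C snippet (the base values T(0) = T(1) = 0 are my assumption) is enough to tabulate function a) against a guess:

#include <stdio.h>

/* Tabulates T(n) = T(n-2) + n^2 against the guessed closed form n^3/6;
 * the ratio column drifting toward 1 supports the guess. */
int main(void)
{
    long long T[101] = {0};   /* T[0] = T[1] = 0 */
    for (int n = 2; n <= 100; n++)
        T[n] = T[n - 2] + (long long)n * n;
    for (int n = 10; n <= 100; n += 10)
        printf("n = %3d  T(n) = %8lld  n^3/6 = %10.1f  ratio = %.4f\n",
               n, T[n], n * n * n / 6.0, T[n] / (n * n * n / 6.0));
    return 0;
}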
A few hints regarding the answers listed in the question:
Function a)
O(n^2) is for sure wrong:
a quick inspection of the first few values of the sequence shows that n^2 falls increasingly far below T(n).
O(n^3), on the other hand, does bound T(n) as n grows large. A closer look shows that O(n^3) is indeed the order of the Big-O notation for this function, and that n^3/6 is a more precise estimate: it differs from T(n) [for bigger values of n, and as n tends towards infinity] only by lower-order terms, a minute fraction of the coarser n^3 bound.
One can confirm that n^3/6 is the right leading term by induction:
T(n) = T(n-2) + n^2              // (1) by definition
T(n) ~= n^3 / 6                  // (2) our "guess"
T(n) ~= ((n - 2)^3 / 6) + n^2    // by substituting (2) for T(n-2) in (1)
     = (n^3 - 6n^2 + 12n - 8) / 6 + 6n^2 / 6
     = (n^3 + 12n - 8) / 6
     = n^3/6 + 2n - 4/3
    ~= n^3/6                     // as n grows towards infinity, the 2n and 4/3 terms
                                 // become insignificant next to n^3/6, so the guess
                                 // reproduces its own leading term, QED
I wonder if the technique of divide and conquer always divides a problem into subproblems of the same type. By "same type", I mean one can implement it using a function with recursion. Can divide and conquer always be implemented by recursion?
Thanks!
"Always" is a scary word, but I can't think of a divide-and-conquer situation in which you couldn't use recursion. It is by definition that divide-and-conquer creates subproblems of the same form as the initial problem - these subproblems are continually broken down until some base case is reached, and the number of divisions correlates with the size of the input. Recursion is a natural choice for this kind of problem.
See the Wikipedia article for more good information.
A Divide-and-conquer algorithm is by definition one that can be solved by recursion. So the answer is yes.
Usually, yes! Merge sort is a well-known example of this.
Yes. In the divide and conquer technique, we divide the given bigger problem into smaller sub-problems. These smaller sub-problems must be similar to the bigger problem, except that they are smaller in size.
For example, the problem of sorting an array of size N is no different from the problem of sorting an array of size N/2, except that the latter is smaller than the former.
If the smaller sub-problem is not similar to the bigger one, then the divide and conquer technique cannot be used to solve the bigger problem. In other words, a given problem can be solved with divide and conquer only if it can be divided into smaller sub-problems which are similar to the bigger problem. A code sketch follows below.
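To see this in code, here is a minimal top-down merge sort sketch in C (the names are mine): sorting a range of size N recurses into two subproblems of exactly the same type, each of size N/2.

#include <stdlib.h>
#include <string.h>

/* Merge the sorted halves a[lo..mid) and a[mid..hi) using tmp as scratch. */
static void merge(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof *a);
}

static void sort_range(int *a, int *tmp, int lo, int hi)
{
    if (hi - lo < 2) return;         /* base case: 0 or 1 element    */
    int mid = lo + (hi - lo) / 2;
    sort_range(a, tmp, lo, mid);     /* same problem, half the size  */
    sort_range(a, tmp, mid, hi);     /* same problem, half the size  */
    merge(a, tmp, lo, mid, hi);      /* combine the two solutions    */
}

void merge_sort(int *a, int n)
{
    int *tmp = malloc((size_t)n * sizeof *tmp);
    sort_range(a, tmp, 0, n);
    free(tmp);
}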
Examining the merge sort algorithm is enough to answer this question. After understanding the implementation of merge sort with divide and conquer (and recursion), you will see how difficult it would be to build it without recursion.
Actually, the most important thing here is the complexity of the algorithm, which is expressed with big-O notation: O(n log n) for merge sort.
For the merge sort example there is another version, called bottom-up merge sort. It is a simple, non-recursive version of it.
It is about 10% slower than recursive, top-down merge sort on typical systems. You can refer to the following link for more information; it is explained well in the third lecture.
https://www.coursera.org/learn/introduction-to-algorithms#
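To make "non-recursive merge sort" concrete, here is a minimal bottom-up sketch in C (my own illustration, not taken from the lecture): instead of dividing top-down by recursion, it merges runs of width 1, 2, 4, ... in a pair of plain loops.

#include <stdlib.h>
#include <string.h>

/* Merge the sorted runs a[lo..mid) and a[mid..hi) using tmp as scratch. */
static void merge(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof *a);
}

void bottom_up_merge_sort(int *a, int n)
{
    int *tmp = malloc((size_t)n * sizeof *tmp);
    for (int width = 1; width < n; width *= 2)            /* run length per pass */
        for (int lo = 0; lo < n - width; lo += 2 * width) {
            int mid = lo + width;
            int hi  = (lo + 2 * width < n) ? lo + 2 * width : n;
            merge(a, tmp, lo, mid, hi);
        }
    free(tmp);
}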
Recursion is a programming method where you define a function in terms of itself. The function generally calls itself with slightly modified parameters (in order to converge).
Divide and conquer, on the other hand, proceeds in three steps:
Divide the problem into two or more smaller subproblems.
Conquer the subproblems by solving them (recursively).
Combine the solutions to the subproblems into the solution for the original problem.
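As a small illustration, sketched in C (0-indexed; the name is mine), a recursive binary search calls itself on a half-sized subrange until it hits the base case of an empty range:

/* Recursive binary search for x in the sorted range a[lo .. hi-1];
 * returns the index of x, or -1 if it is absent. */
int bsearch_rec(const int *a, int lo, int hi, int x)
{
    if (lo >= hi) return -1;                                 /* base case  */
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == x) return mid;
    if (a[mid] < x)  return bsearch_rec(a, mid + 1, hi, x);  /* right half */
    return bsearch_rec(a, lo, mid, x);                       /* left half  */
}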
Yes, all divide and conquer algorithms can be implemented using recursion.
A typical Divide and Conquer algorithm solves a problem using the following three steps.
Divide: Break the given problem into sub-problems of same type.
Conquer: Recursively solve these sub-problems
Combine: Appropriately combine the answers
Following are some standard algorithms that are Divide and Conquer algorithms.
1) Binary search,
2) Quick Sort,
3) Merge Sort,
4) Strassen’s Algorithm
Imagine P is a problem of size n and S is its solution. If P is large enough to be divided into sub-problems, say k of them: P1, P2, P3, ..., Pk, then there will be k solutions, S1, S2, S3, ..., Sk, one for each sub-problem. Now, if we combine the solutions of the sub-problems together, we get the result S. In the divide and conquer strategy, whatever the main problem is, all sub-problems must be of the same type: for example, if P is sorting, then P1, P2, ..., Pk must be sorting problems too. This is how it is recursive in nature. So, divide and conquer will be recursive.