Can every recursive problem be solved by dynamic programming?

I'm a newbie to DSA and I just realized that all DP problems have recursive solutions. I was wondering: is there always a DP solution for every recursive problem? If so, why don't we give up on divide and conquer and just use DP?

Related

Dynamic programming: Tabular vs memoization

Is the time complexity of the dynamic programming tabular approach and the recursion-with-memoization approach the same? For example, in the Knapsack problem the tabular approach takes O(N*W), where N is the number of items and W is the knapsack capacity. But what is the time complexity of the memoization approach?
Memoization is a method used to solve dynamic programming (DP) problems recursively in an efficient manner. DP abstracts away from the specific implementation, which may be either recursive or iterative (with loops and a table). Therefore, if used appropriately, the time complexity is the same, i.e. O(N*W) in the knapsack problem over the integers.
This is the terminology we used in the introduction to CS and algorithm design courses at BGU (I was a T.A. in both, if it matters), but there might be other terminologies I'm unaware of.
I hope this was helpful. Good luck!
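For instance, here is a minimal top-down sketch in Python (the function names are illustrative, not from the original question): each (i, w) state is computed at most once, so there are only O(N*W) distinct calls.

from functools import lru_cache

def knapsack_memo(weights, values, capacity):
    n = len(weights)

    @lru_cache(maxsize=None)  # cache: each (i, w) subproblem is solved once
    def best(i, w):
        if i == n or w == 0:
            return 0
        result = best(i + 1, w)              # skip item i
        if weights[i] <= w:                  # take item i if it fits
            result = max(result, values[i] + best(i + 1, w - weights[i]))
        return result

    return best(0, capacity)

For example, knapsack_memo([2, 3, 4], [3, 4, 5], 5) returns 7 (taking the items of weight 2 and 3).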
Is the time complexity of the dynamic programming tabular approach and the recursion-with-memoization approach the same?
Yes, they have the same time complexity of O(N*W), where N is the number of items and W is the capacity. However, if the original problem requires all subproblems to be solved, as in the case of the Knapsack problem, tabulation usually outperforms memoization by a constant factor. This is because tabulation has no overhead for recursion and can use a preallocated array rather than, say, a hash map.
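As a rough illustration (a sketch, not a benchmark), the same problem in bottom-up form fills a preallocated (N+1) x (W+1) table with two plain loops and no function-call overhead:

def knapsack_tab(weights, values, capacity):
    n = len(weights)
    # Preallocated table: table[i][w] = best value using the first i items
    # with remaining capacity w.
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            table[i][w] = table[i - 1][w]        # skip item i-1
            if weights[i - 1] <= w:              # or take item i-1 if it fits
                table[i][w] = max(table[i][w],
                                  values[i - 1] + table[i - 1][w - weights[i - 1]])
    return table[n][capacity]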
What is the difference between tabulation and memoization?
When you solve a dynamic programming problem using tabulation (generally iterative) you solve the problem "bottom up", i.e., by solving all related sub-problems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the "top" / original problem is then computed.
If you apply memoization (generally recursive) to solve the problem, you do it by maintaining a map of already solved sub-problems. You do it "top down" in the sense that you solve the "top" problem first (which typically recurses down to solve the sub-problems).
Which is better, memoization or tabulation?
If we don't need to solve all the sub-problems and are just looking for the optimal solution, memoization is better.
If we do need to solve all the sub-problems, that means we are going to make numerous recursive calls, which may exhaust the stack space, and there tabulation is better.
The caveat is that memoization is generally more intuitive to implement, especially when we don't know in advance which sub-problems we will need, whereas tabulation requires us to know the base cases (the "bottom") in advance in order to build our way up.
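A minimal Python sketch of the two directions on Fibonacci (chosen only because it is the smallest example; the names are illustrative):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n):
    # Top-down: start at the original problem and recurse into subproblems.
    return n if n < 2 else fib_top_down(n - 1) + fib_top_down(n - 2)

def fib_bottom_up(n):
    # Bottom-up: solve the smallest subproblems first and build upward.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a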
Useful Resources:
What is Dynamic Programming? Memoization and Tabulation
Tabulation vs Memoization

Multiobjective Constrained Combinatorial Optimization in R

This is quite a general question, but I have not been able to find a solution so far.
I am trying to solve a problem of combinatorial optimization in which I have several objective functions to optimize, as well as several constraints to impose. I am thus trying to find some software (an R package preferably) that can solve this problem.
I have explored several options, but none of them seems to be useful for my purpose: lpSolveAPI is aimed at linear programming only, which is not my case; mco can minimize a multidimensional objective function, but does not seem to be able to manage the binary (i.e. decision) variables needed for combinatorial problems; adagio and CEGO can deal with combinatorial optimization problems, but as far as I can see they can only optimize a single unidimensional function.
Is there any other package I am not aware of that can handle this type of problem? Or might any of the aforementioned be useful, though I am missing the way to the functionality I need?
Thank you so much in advance. It is really being a nightmare trying to figure this out.

Efficient and fast way of solving linear programming problems in R

I am working with a very large dataset, typically dealing with a few million combinations.
I want to solve the assignment problem (maximise the sum).
I tried solving it on a small test set using adagio::assignment and clue::solve_LSAP.
I wasn't able to successfully install the "lpSolve" package on my system; it threw a segmentation fault.
I wanted to know which of these is faster, or whether there is another method that does it faster.
Thanks.
An LP formulation is not a good way to solve the assignment problem, whichever library you use. You have to use the Hungarian algorithm, and it looks like solve_LSAP does exactly that.
No need to try anything else IMHO.
EDIT: An efficient implementation of the Hungarian method should be O(n^3), which is extremely fast for an exact optimization algorithm. If solve_LSAP is not fast enough for your problem (assuming it is implemented correctly), it is very unlikely that any exact method will work.
You will have to use some sort of heuristic to approximate the solution.
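The question is about R, but as an illustration of how fast exact LSAP solvers of this class are, here is a minimal Python sketch using scipy.optimize.linear_sum_assignment (scipy is my substitution, not something from the question, and the random profit matrix is placeholder data):

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
profit = rng.random((2000, 2000))   # placeholder data: 4 million entries

# Exact solution of the assignment problem, maximizing the total profit.
rows, cols = linear_sum_assignment(profit, maximize=True)
print(profit[rows, cols].sum())

This solves a 2000 x 2000 instance exactly in seconds on ordinary hardware; solve_LSAP in R should be in the same ballpark.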

Recursive vs. Iterative algorithms

I'm implementing the Euclidean algorithm for finding the GCD (Greatest Common Divisor) of two integers.
Two sample implementations are given: Recursive and Iterative.
http://en.wikipedia.org/wiki/Euclidean_algorithm#Implementations
My Question:
In school I remember my professors talking about recursive functions like they were all the rage, but I have one doubt: compared to an iterative version, don't recursive algorithms take up more stack space and therefore much more memory? Also, because calling a function incurs some overhead, aren't recursive algorithms slower than their iterative counterparts?
It depends entirely on the language. If your language supports tail-call optimization (a lot do nowadays) then the two versions will run at equal speed. If it does not, then the recursive version will be slower and take more (precious) stack space.
It all depends on the language and compiler. Current computers aren't really geared towards efficient recursion, but some compilers can optimize some cases of recursion to run just as efficiently as a loop (essentially, it becomes a loop in the machine code). Then again, some compilers can't.
Recursion is perhaps more beautiful in a mathematical sense, but if you feel more comfortable with iteration, just use it.
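To make the comparison concrete, here are the two Wikipedia-style implementations as a Python sketch (CPython, notably, does not perform tail-call optimization):

def gcd_recursive(a, b):
    # Each call adds a stack frame; without tail-call optimization the
    # recursive version pays call overhead on every step.
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # Same algorithm, constant stack space.
    while b != 0:
        a, b = b, a % b
    return a

That said, for GCD the recursion depth is only O(log min(a, b)), so in practice the difference here is small.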

Most important speed issues

I am participating in Al Zimmermann's Programming Contest.
http://www.azspcs.net/Contest/SonOfDarts
I have written a recursive algorithm but it takes a long time to run. I was wondering what the most important things to consider are regarding the speed of recursive algorithms. I have made most of the properties global so they don't get allocated on every recursive call. Is there anything else I can do to speed up my program without changing the algorithm?
It depends on the details of your algorithm. If it is tail recursive you could transform it to an iterative algorithm fairly easily.
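A minimal sketch of that transformation in Python (a hypothetical example, not the asker's code): a tail-recursive function's final call can be rewritten as a loop that updates the variables the call would have passed.

def total(xs, acc=0):
    # Tail-recursive: the recursive call is the last thing that happens.
    if not xs:
        return acc
    return total(xs[1:], acc + xs[0])

def total_iter(xs, acc=0):
    # Mechanical rewrite: replace the call with a loop over the same state.
    while xs:
        xs, acc = xs[1:], acc + xs[0]
    return acc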
Recursion is generally slower than iteration, because of the overhead of function calls and stack allocation. It is often easier to implement a recursive function for complex algorithms, but if possible, an iterative version will be faster.
What language are you using for writing your program? Some languages like Haskell are tailor-made for recursive algorithms while others like Python are not.
How much time is being spent within each function call versus the number of recursive calls out of the function? Too little code executed within the function itself, relative to the call overhead, will certainly hurt performance.
Variables on the stack are usually much faster than global variables. Consider passing them from function to function rather than making them global.
Unfortunately there isn't enough context in the question to provide a better answer.
Recursive algorithms can also be designed in such a way that they are tail recursive. In that situation, a compiler that supports tail-recursion optimization will produce much faster code.
There are probably a lot of overlapping subproblems in your algorithm, and you may not be saving the intermediate results for each subproblem. If you do, your program should be fast enough.
EDIT:
I just gave the darts question some thought and felt that recursion may not be a good approach to the solution. I did a quick experiment in SQL Server with the sample given in the question:
create table regions (score int)
insert into regions values (0)
insert into regions values (1)
insert into regions values (2)
insert into regions values (4)
insert into regions values (7)
insert into regions values (11)
create table results (score int)
insert into results
select distinct (s1.score+s2.score+s3.score)
from regions s1, regions s2, regions s3
select * from results
The script clearly reveals a possible solution that can be easily implemented in an imperative programming style, without taking any recursive approach.
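For example, the same enumeration fits in a few lines of Python (the 0 row in the SQL plays the role of a missed or unthrown dart, so three addends also cover one- and two-dart totals):

regions = [0, 1, 2, 4, 7, 11]
scores = {a + b + c for a in regions for b in regions for c in regions}
print(sorted(scores))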
Don't assume the problem is with the recursion, or anything else, a priori. Just profile it: find out what's biggest, fix that, and move on to the next thing. I'm not saying it won't turn out that recursion is the big deal at some point. It's just that chances are very good there are bigger problems you can fix first.
If you can submit compiled code for Intel platforms, then:
Arranging memory content to favor the CPU cache wins over the best classical algorithms in any area. Make sure to feed Intel VTune performance analyzer output into your linker options to keep the bodies of related functions close together in code memory.
