How can I implement recursion with the help of a heap, and is it a good idea? As far as I know, the heap is a wonderful data structure that can do many things other data structures cannot, and very efficiently. So why don't we use a heap for implementing recursion?
Related
I'm running some performance-sensitive code and looking to improve speed. I use vnormdiff and findmax a lot and wondered whether these are the most efficient functions available. Any thoughts greatly appreciated.
Whenever you encounter a performance problem, it's good to look at your problem from two angles. First, is my overall algorithm the best it can be? If you're using an O(N^2) algorithm but an O(N) is available, that could make an enormous difference. It sounds like you're examining neighbors, so some of the more refined nearest-neighbor algorithms (which depend on dimensionality) might be of assistance.
Second, no discussion about optimization can really get started without profiling information. There's documentation on Julia's profiler here, and a graphical tool for inspecting it here.
I am learning data structures and algorithms. I found it especially difficult to understand recursion.
So I have the following questions, though they are not related to any specific code.
When I implement methods, when/where should I consider recursion?
In general coding convention, should I prefer recursion over simple iteration if they are both feasible?
How can I learn to recognize the most common forms of recursion so that I can think of them when I need them? What is the best way to learn this? (Any related book or website?) Is there any pattern?
I know the question may sound unconstructive if you find recursion simple and natural.
But for me it doesn't align well with my intuition. I do appreciate any help.
1
Very often, recursive solutions to problems are smaller when the data can be seen as self-similar. E.g., if you have a binary tree and you want to get the sum of all the leaf nodes, you define sum-tree like this: if it's a leaf node, its sum is its value; if it's not a leaf node, its sum is the sum of both sub-trees.
Here's a Scheme implementation of that description:
(define (sum-tree tree)
  (if (leaf? tree)
      (node-value tree)
      (+ (sum-tree (node-left tree))
         (sum-tree (node-right tree)))))
Or the same in Java, defined as a method in the Node class.
public int sum()
{
    if ( isLeaf() )
        return value;
    else
        return left.sum() + right.sum();
}
An iterative solution to this would be longer and harder to read. In this case you should prefer recursion.
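For comparison, here is a rough sketch of what an iterative version could look like, using an explicit stack (this is my own illustration, assuming the same Node class with isLeaf(), value, left, and right); note how much more bookkeeping it needs:

// Hypothetical iterative counterpart of Node.sum(), sketched for comparison.
public int sumIterative()
{
    int total = 0;
    java.util.Deque<Node> stack = new java.util.ArrayDeque<>();
    stack.push(this);
    while (!stack.isEmpty())
    {
        Node current = stack.pop();
        if (current.isLeaf())
            total += current.value;       // leaf: add its value
        else
        {
            stack.push(current.left);     // inner node: visit both children
            stack.push(current.right);
        }
    }
    return total;
}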
2
It depends. If you are programming in Python or Java, you should not, since they don't have tail-call elimination. With Scheme, however, it's the only way to go. If your language supports tail-call elimination, you should pick recursion when it makes for clearer code.
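To illustrate the point (a minimal sketch of my own, not from the answer above): a tail-recursive factorial and the loop it translates into. Scheme would run the first form in constant stack space; Java still allocates a frame per call, so the loop is the safer form for large n.

// Hypothetical example: tail-recursive factorial vs. its loop form.
// Java does NOT eliminate tail calls, so the recursive version still grows
// the call stack; a Scheme compiler would reuse the same frame.
static long factorial(long n, long acc)    // call as factorial(n, 1)
{
    if (n <= 1)
        return acc;                        // base case: accumulator holds the result
    return factorial(n - 1, acc * n);      // tail call: nothing left to do afterwards
}

static long factorialLoop(long n)
{
    long acc = 1;
    while (n > 1)
    {
        acc *= n;                          // same accumulator update as the tail call
        n -= 1;
    }
    return acc;
}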
3
Learn by doing. You need to write some algorithms that use recursion as a tool. Use paper to trace the call stack if you are unsure of the flow. Learning some Scheme or a similar functional language might help you a lot.
Recursion can be used when you are repeating the same thing over and over. For example, when you are traversing a tree, you can use a recursive method to go to the left or right child.
I would go for the one that is easier to read. Generally, simple iteration will be faster as it does not have any overhead (recursion has some overhead, and can cause a stack overflow if the levels are too deep, while simple iteration won't). But in some cases, writing a recursive function is a lot easier than writing the iterative equivalent.
I would rather see the problem first and then decide whether I need recursion to solve it, not vice versa. Any algorithm book should be good enough. Perhaps you can start by reading http://en.wikipedia.org/wiki/Recursion. There is a simple example of recursion there, which I think you will also be able to implement using simple iteration.
At first, wrapping my head around recursion was hard for me as well. I learned recursion in school with Java, and I found that I would often use recursion over iterators, as iterators were annoying to write in Java. However, when I learned Ruby, I found myself writing recursive methods less and less. Then I learned Elixir and Erlang and found myself writing a lot of recursive functions. My point? Some tools lend themselves to a certain style of writing.
Now, to answer your questions: since you're just starting to learn recursion, I would suggest diving deep into it and trying to get comfortable by writing recursive solutions as much as you can.
Certain tasks are much better off with recursion (e.g. the Fibonacci sequence, traversing trees, etc.). For others, you're better off writing a simple loop. However, note that you can write any recursive method with a loop (a rough sketch of this follows below). It might get tricky on certain occasions, though.
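As an illustration of that last point (my own sketch, not part of the original answer): the naive recursive Fibonacci next to its loop rewrite.

// Naive recursive Fibonacci: short and close to the definition, but it
// recomputes subproblems and does exponential work for large n.
static long fib(int n)
{
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}

// The same sequence as a simple loop: linear time, constant space.
static long fibLoop(int n)
{
    long a = 0, b = 1;
    for (int i = 0; i < n; i++)
    {
        long next = a + b;
        a = b;
        b = next;
    }
    return a;
}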
All in all, recursion is actually a pretty cool concept once you get the hang of it.
Take a look at this question that relates to recursion: Erlang exercise, creating lists
I'd go for a study of some well-known recursive algorithms. For instance, you could try to implement a factorial computation, or to compute all the path lengths in a tree.
By doing that you'll (hopefully) see how the recursive approach helps to simplify the code, and why it is a good approach in these particular cases. This could give you some ideas for future applications :)
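As a starting point for the tree exercise, here is a minimal sketch of my own (the TreeNode class and its fields are assumptions, not part of the answer) that computes the length of the longest root-to-leaf path recursively:

// Hypothetical node type: left/right are null when a child is absent.
class TreeNode
{
    int value;
    TreeNode left, right;
}

// Length of the longest root-to-leaf path, counted in edges.
static int maxDepth(TreeNode node)
{
    if (node == null)
        return -1;                            // an empty subtree contributes no edge
    return 1 + Math.max(maxDepth(node.left),
                        maxDepth(node.right));
}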
I'm learning Data Structures & Algorithms now.
My lecture notes have an implementation of a binary search tree using a recursive method. That is an elegant way, but my question is: in real-life code, should I implement a binary search tree recursively? Won't it generate a lot of call-stack usage if the tree has a large height/depth?
I understand that recursion is a key concept for understanding lots of data structures, but would you choose to use recursion in real-life code?
A tree is recursive by nature. Each node of a tree represents a subtree, and each child of each node represents a subtree of that subtree, so recursion is the best bet, especially in practice where other people might have to edit and maintain your code.
Now, IF depth becomes a problem for your call stack, then I'm afraid there are deeper problems with your data structure (either it's monstrously huge, or it's very unbalanced).
"I understand that recursive is a key concept to understand lots of
data structure, but will you choose to use recursive in real life
code?"
After first learning about recursion, I felt the same way. However, having worked in the software industry for over a year now, I can say that I have used the concept of recursion to solve several problems. There are often times when recursion is cleaner, easier to understand/read, and just downright better. And to emphasize a point in the previous answer, a tree is a recursive data structure. IMO, there is no other way to traverse a BST :)
Many times, the compiler can optimize your code to avoid creating a new stack frame for each recursive call (look up tail-call optimization, for example). Of course, it all depends on the algorithm and on your data structure. If the tree is reasonably balanced, I don't think a recursive algorithm should cause any problems.
It's true that recursion is intuitive and elegant and that it produces code that is clear and concise. It's also correct that some methods, such as quicksort, DFS, etc., are really hard to implement iteratively.
But in practice, recursive implementations are almost always going to be slower than their iterative counterparts because of all the function calls. (To really understand the performance hit, I suggest you look at how much bookkeeping the generated assembly has to do for a single function call.)
The optimizations we talk about are not applicable to every recursive method in general, and many compilers and interpreters don't even support them.
So, in summary: if you are writing something performance-critical, such as a data structure, then stay away from recursion (or use it only if you are sure your compiler/interpreter has you covered).
PS: CLRS (Introduction to Algorithms, page 290, last line) suggests that the iterative search procedure for a BST is faster than the recursive one.
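For concreteness, here is a minimal sketch of my own (the BstNode class and method names are assumptions) showing the two search procedures that comparison refers to:

// Hypothetical BST node.
class BstNode
{
    int key;
    BstNode left, right;
}

// Recursive search: one stack frame per level of the tree.
static BstNode searchRecursive(BstNode node, int key)
{
    if (node == null || node.key == key)
        return node;
    return (key < node.key)
            ? searchRecursive(node.left, key)
            : searchRecursive(node.right, key);
}

// Iterative search: same logic, constant stack usage.
static BstNode searchIterative(BstNode node, int key)
{
    while (node != null && node.key != key)
        node = (key < node.key) ? node.left : node.right;
    return node;
}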
Excuse my ignorance: I'm not a computer engineer but have my roots in biology. I have become a great fan of pre-allocating objects (kudos to SO and The R Inferno by Patrick Burns) and would like to improve my coding habits. In light of this, I've been thinking about writing more efficient functions and have the following question.
Is there any benefit to removing variables that will be overwritten at the start of the next loop iteration, or is this just a waste of time? For the sake of argument, let's assume that the sizes of the old and new variables are very similar or identical.
I think it will really depend on the specifics of the case. In some circumstances, when the object is large, it might be a good idea to rm() it, especially if it is not needed and there are lots of other things to do before it gets overwritten. But then again, it's not impossible to conceive of circumstances where that strategy might be costly in terms of computation time.
The only way to know whether it would really be worthwhile is to try both ways and check with system.time().
No. Automatic garbage collection will take care of this just fine.
I am trying to work through the examples on trees given here: http://cslibrary.stanford.edu/110/BinaryTrees.html
These examples all solve problems via recursion. I wonder if we can provide an iterative solution for each of them; in other words, can we always be sure that a problem which can be solved by recursion will also have an iterative solution in general? If not, what example can we give of a problem which can be solved only by recursion, or only by iteration?
The only difference between iteration and recursion on a computer is whether you use the built-in stack or a user-defined stack. So they are equivalent.
In my experience, most recursive solutions can indeed be rewritten iteratively.
It is also a good technique to have, as recursive solutions may have too large an overhead in memory and CPU consumption.
Since recursion uses an implicit stack on which it stores information about each call, you can always implement that stack yourself and avoid the recursive calls. So yes, every recursive solution can be transformed into an iterative one.
Read this question for a proof.
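As a small illustration of replacing the implicit call stack with your own (my own sketch; the TreeNode type and method names are assumptions), here is a pre-order tree traversal written once with recursive calls and once with an explicit java.util.Deque:

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical node type for the sketch.
class TreeNode
{
    int value;
    TreeNode left, right;
}

class Traversal
{
    // Recursive pre-order traversal: the call stack remembers where to resume.
    static void visitRecursive(TreeNode node)
    {
        if (node == null)
            return;
        System.out.println(node.value);
        visitRecursive(node.left);
        visitRecursive(node.right);
    }

    // The same traversal with an explicit stack instead of recursive calls.
    static void visitIterative(TreeNode root)
    {
        Deque<TreeNode> stack = new ArrayDeque<>();
        if (root != null)
            stack.push(root);
        while (!stack.isEmpty())
        {
            TreeNode node = stack.pop();
            System.out.println(node.value);
            if (node.right != null)
                stack.push(node.right);   // push right first so left is visited first
            if (node.left != null)
                stack.push(node.left);
        }
    }
}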
Recursion and iteration are two tools that, at a very fundamental level, do the same thing: execute a repeated operation over a defined set of values. They are interchangeable in the sense that there is no problem that can be solved by only one of them. That does not mean, however, that one cannot be better suited than the other.
Recursion has the advantage that it can continue without a known end: you don't have to know the depth of the work in advance. A perfect example of this is a tuned and threaded quicksort.
You can't spawn additional loops, but you can spawn new threads via recursion.
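As a rough sketch of that idea (entirely my own; the class name, threshold, and structure are assumptions, not a reference implementation), Java's fork/join framework lets each recursive quicksort call be submitted as a task that may run on another thread:

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Hypothetical "threaded quicksort": every recursive call becomes a task.
class ParallelQuickSort extends RecursiveAction
{
    private static final int SEQUENTIAL_THRESHOLD = 1_000;  // made-up cutoff
    private final int[] data;
    private final int low, high;

    ParallelQuickSort(int[] data, int low, int high)
    {
        this.data = data;
        this.low = low;
        this.high = high;
    }

    @Override
    protected void compute()
    {
        if (high - low < SEQUENTIAL_THRESHOLD)
        {
            Arrays.sort(data, low, high + 1);   // small ranges: sort sequentially
            return;
        }
        int pivotIndex = partition(data, low, high);
        // Recurse by forking: the recursion decides how many tasks get created.
        invokeAll(new ParallelQuickSort(data, low, pivotIndex - 1),
                  new ParallelQuickSort(data, pivotIndex + 1, high));
    }

    // Classic Lomuto partition around the last element.
    private static int partition(int[] a, int low, int high)
    {
        int pivot = a[high];
        int i = low - 1;
        for (int j = low; j < high; j++)
        {
            if (a[j] <= pivot)
            {
                i++;
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            }
        }
        int tmp = a[i + 1]; a[i + 1] = a[high]; a[high] = tmp;
        return i + 1;
    }

    public static void main(String[] args)
    {
        int[] numbers = new java.util.Random(42).ints(100_000, 0, 1_000_000).toArray();
        ForkJoinPool.commonPool().invoke(new ParallelQuickSort(numbers, 0, numbers.length - 1));
        System.out.println("first: " + numbers[0] + ", last: " + numbers[numbers.length - 1]);
    }
}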
As an "old guy," I fall back to my memory of learning that recursive descent parsers are easier to write, but that stack-based, iterative parsers perform better. Here's an article that seems to support that idea with metrics:
http://www.texttoolkit.com/index.php?option=com_content&view=article&catid=35%3Atechnology&id=60%3Abeyond-recursive-descent&Itemid=55
One thing to note is the author's mention of overrunning the call stack with recursive descent. An iterative, stack-based implementation can be much more efficient with resources.