Recursion: when to use it? [closed]

So we just finished the topic of recursion in school and I am still wondering "why?".
I feel like I have just learned a hell of a lot about math in a programming way, with the sole purpose of passing an exam later on and then never touching it again.
So what I want to know is: when do you use it? I can only find people saying "when you want to call a function within itself", but why would you do that?

Recursion is the foundation of computation: every possible program can be expressed as a recursive function (in the lambda calculus). Hence, understanding recursion gives you a deeper understanding of the principles of computation.
Second, recursion is also a tool for understanding at the meta level: lots of proofs over the natural numbers follow a pattern called "natural induction", which is a special case of structural induction; that in turn allows you to understand properties of very complex systems in a relatively simple way.
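Spelled out, the natural induction principle referred to here is the standard one: if a property holds at zero and is preserved by the successor, it holds for every natural number.

```latex
% Natural (mathematical) induction over the natural numbers:
\[
  \bigl( P(0) \;\land\; \forall n \in \mathbb{N}.\; P(n) \Rightarrow P(n+1) \bigr)
  \;\Longrightarrow\; \forall n \in \mathbb{N}.\; P(n)
\]
```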
Finally, it also helps you write good (i.e. readable) algorithms: whenever there is data to store/handle in a repetitive calculation (i.e. more than incrementing a counter), you can use a recursive function to implicitly manage a stack for you. This is also often very efficient, since most systems provide a machine stack.
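As a small illustration of that last point, here is a minimal Python sketch (the function name and data are my own, purely for illustration) that sums the numbers in an arbitrarily nested list. The machine call stack implicitly remembers where we are in each sub-list, so no explicit stack data structure is needed:

```python
def nested_sum(values):
    """Sum every number in a list that may contain further nested lists.

    The call stack keeps track of our position in each sub-list,
    so no explicit stack data structure is needed.
    """
    total = 0
    for item in values:
        if isinstance(item, list):
            total += nested_sum(item)   # recurse into the sub-list
        else:
            total += item
    return total


print(nested_sum([1, [2, 3], [4, [5, 6]]]))  # -> 21
```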

Related

What's the difference between normal recursive solutions and Dynamic Programming related recursive solutions? [closed]

Can anyone please help me with this: how is DP's iterative approach different from the recursive approach?
Dynamic Programming and Recursion aren't necessarily opposites. What you're thinking of is Memoization vs. Dynamic Programming.
Dynamic Programming is the approach to a problem that reduces duplicate computations as much as possible. This usually means taking a bottom-up approach - i.e. you calculate answers to smaller-scale problems first and then use those answers to calculate higher-order problems. Iterative approaches are usually used for Dynamic Programming since they feel natural (although you can do it recursively too).
Memoization is the top-down approach to a problem and is usually done through recursion because it is more natural. In this case, you start with a higher order problem and make recursive calls for lower order problems in order to solve it.
In both cases you use a data structure to store the values computed so far.
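To make the contrast concrete, here is a minimal Python sketch (Fibonacci is used purely as an illustration, not something from the question) showing the same problem solved top-down with memoization and bottom-up with an iterative DP table; in both cases a data structure stores the values computed so far:

```python
from functools import lru_cache

# Top-down (memoization): recursive calls, with a cache so each
# subproblem is solved only once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (dynamic programming): solve the smallest subproblems first
# and build upward iteratively, storing answers in a table.
def fib_dp(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30), fib_dp(30))  # both print 832040
```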

Mathematical notation or Pseudocode? [closed]

At the moment I am wondering about the best way to explain an algorithm intuitively.
I have tried to read some pseudocode and, wow, it can be complex in some cases (especially for math applications, even more so than the formulas themselves or plain code in PHP, C++ or Python). So I have been thinking: how about describing algorithms in mathematical notation, in a way that both a mathematician and a web developer could understand?
Do you think it is a good idea? (Assuming all of the grammar, structure, symbols and modelling involved are well explained, and the result is compact.)
Example:
Binary Search
It could even help to simplify complexity analysis if a mathematical analysis is done, I think.
It depends on the algorithm. For me, I know I would never have grasped the concept of trees without a visual drawing. For the concept of nodes, on the other hand, a drawing is good, but actually seeing the data structure written down is better.
It varies from student to student. I personally find that binary search example about the worst kind of example, but I'm sure someone with a math background might understand it better.
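Since binary search is the example under discussion, here is a plain-code version of it as a Python sketch (my own, not the questioner's mathematical notation); the invariant stated in the docstring is the kind of statement a mathematical write-up would make explicit:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    Loop invariant: if target is present at all, its index lies in [lo, hi].
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1        # target can only be in the upper half
        else:
            hi = mid - 1        # target can only be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3
```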

Is recursion a Bad Practice in general? [closed]

I always thought that things which make your code hard to follow while being avoidable are considered bad practice. Recursion seems to be one of those things (not to mention the problems it can cause, like stack overflows), so I was a bit surprised when we had to learn to work seriously with it during a programming course (because it occasionally results in shorter code). And it seems to me that the professors disagree about it too...
People have already asked this for specific languages (like Python, where a comment compared it to a goto statement), and mostly for specific problems, but I am interested in the general case:
Is recursion considered bad practice in modern programming? When should I avoid it, and are there any circumstances where I can't?
Related questions I found:
Is recursion a feature in and of itself? (does not discuss whether it is good or bad)
Is recursion ever faster than looping? (the answer describes how recursion can be an improvement in functional languages but is expensive in others), plus other questions discussing performance
Recursive programming is not bad practice. It is a tool in your toolbox, and like any tool, bad things happen when it's the only tool you use, or when it's used outside its proper context.
When do you use recursion? It's good when you have tree-shaped data that you need to apply the same logic to. For example: you have a tree of string elements and you want to calculate the total length of all strings down one branch. You'd define a method that calculates the length of an element, checks whether the element has a sibling, and if it does, calls itself with the sibling - and the process starts over.
The most common pitfall is a stack overflow: the list is so large that it can't be handled all at once. So you implement checks to make sure this never happens, such as breaking the list down into manageable pieces, or tracking the number of levels you traverse in your function and bailing out once a limit is exceeded.
You should avoid it when your data is not tree-shaped. The only time I can think of where you actually can't avoid recursion is in programming languages that do not support looping (such as Haskell).
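Here is a hedged Python sketch of both points, with made-up names: a recursive walk that totals the string lengths in a small tree, plus an explicit depth limit as one simple way to guard against blowing the stack:

```python
MAX_DEPTH = 1000  # arbitrary guard; pick a limit suited to your data


class Node:
    """A toy tree node holding a string and any number of children."""
    def __init__(self, text, children=()):
        self.text = text
        self.children = list(children)


def total_length(node, depth=0):
    """Sum the lengths of all strings in this node and its descendants."""
    if depth > MAX_DEPTH:
        raise RecursionError("tree deeper than expected; bailing out")
    return len(node.text) + sum(total_length(c, depth + 1)
                                for c in node.children)


tree = Node("root", [Node("left", [Node("leaf")]), Node("right")])
print(total_length(tree))  # 4 + 4 + 4 + 5 = 17
```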

Explain how the functional programming model differs from the procedural or object orientated models [closed]

Can anyone explain how the functional programming model differs from the procedural or object-oriented models?
I cannot come up with a good answer myself.
In my opinion FP is about pure functions (that is, functions in the mathematical sense) - which implies referential transparency and, if you continue the thought, immutable data.
This is the biggest difference I see: you don't mutate data - and most other aspects either follow directly from this, or from cool type systems (which are not necessary for a language to be called functional), or from the academic nature of these languages.
But of course there is far more to it, and you can read papers, entire books, or just Wikipedia about it.
Please note that you can dispute the "pure" property, and then things get a lot more fuzzy... which should not surprise you, as most functional languages in wide use allow for mutation (Clojure, Scala, F#, OCaml, ...) and there are not many pure ones.
In this case the biggest difference might be the way you abstract things with higher-order functions (at the very least, functions should be first-class citizens - meaning you can pass them around and use them as values).
But overall this question is really opinion-based and will very likely be closed as too broad or something - maybe you should ask about details instead of the big picture.
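As a small illustration of that contrast (Python here, since the thread mentions several languages, and the names are mine): a pure, higher-order style that returns new data, next to a procedural style that mutates a list in place:

```python
# Functional style: pure functions, no mutation, functions passed as values.
def apply_discount(rate):
    return lambda price: price * (1 - rate)   # returns a new function

prices = (100.0, 250.0, 40.0)
discounted = tuple(map(apply_discount(0.1), prices))  # original tuple untouched

# Procedural style: the same result achieved by mutating a list in place.
mutable_prices = [100.0, 250.0, 40.0]
for i in range(len(mutable_prices)):
    mutable_prices[i] *= 0.9

print(discounted, mutable_prices)
```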

Optimizing an SBCL Application Program for Speed [closed]

I've just finished and tested the core of a Common Lisp application and want to optimize it for speed now. It works with SBCL and makes use of CLOS.
Could someone outline the way to optimize my code for speed?
Where will I have to start? Will I just have to provide some global declaration or will I have to blow up my code with type information for each binding? Is there a way to find out which parts of my code could be compiled better with further type information?
The program makes heavy use of a single one-dimensional array (indices 0..119) in which it shifts CLOS instances around.
Thank you in advance!
It's not great to optimize in a vacuum, because there's no limit to the ugliness you can introduce to make things some fraction of a percent faster.
If it's not fast enough, it's helpful to define what success means so you know when to stop.
With that in mind, a good first pass is to run your project under the profiler (sb-sprof) to get an idea of where the time is spent. If it's in generic arithmetic, it can help to judiciously use modular arithmetic in inner loops. If it's in CLOS stuff, it might possibly help to switch to structures for key bits of data. Whatever is most wasteful will tell you where to spend your optimization effort.
I think it could be helpful if, after profiling, you post a followup question along the lines of "A lot of my program's time is spent in <foo>, how do I make it faster?"
