Is there any way to do example based programming? [closed] - functional-programming

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
Is it possible to automatically generate a program/function by writing a certain number of examples which show the before and after? How many examples would be needed to ensure correctness and a lack of holes? What is the name of such an automatic process?

It's called program synthesis, or perhaps guessing. Your question reminds me of the 2013 ICFP programming contest, where you were supposed to guess programs in a very simple functional language, and all you had was the output for test inputs of your choosing.
So imagine you wanted to make the identity function:
(define-by-example identity
  (0 -> 0)
  (1 -> 1))
Is there something missing? What if I told you to make a function that returns its argument for 0 and 1 only? How would you express that differently from the example above? And what if it's a nonlinear transformation? E.g., how many examples does it take to pin down a polynomial? (A polynomial of degree n is only determined by n+1 points.)
I guess we're back to something like this:
(define-by-example fib
  (0 -> 0)
  (1 -> 1)
  (n -> (+ (fib (- n 1)) (fib (- n 2)))))
But many languages have exactly this. Even Scheme's dirty cousin Racket has it:
#!racket
(define (fib n)
  (match n
    [0 0]
    [1 1]
    [n (+ (fib (- n 1)) (fib (- n 2)))]))
This is the Hello World of Haskell, so we'd better write that next :-)
-- haskell
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)
Now there are smart people out there who try to find the best way of transferring the idea down to a syntax that is clear to both the computer and us. I'm sure the essence of "by example" is good, but there needs to be a way of explaining the correlations too, and I cannot think of a better way to do that than maths, or code where maths doesn't apply.

This is a subdomain of machine learning called supervised learning. The most popular method for solving such problems is neural networks; another is genetic programming.
In the general case it is not possible to guarantee correctness, no matter how large the training set (the number of examples) is. But in a typical application this is not required: the training is considered successful if a high enough percentage of the results is correct.
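To make the supervised idea concrete, here is a minimal sketch of my own (a toy illustration, not from the answer): fit y = a*x + b to input/output examples by gradient descent on the mean squared error. The learning rate and iteration count are arbitrary choices.

-- Toy supervised learning: fit y = a*x + b to examples by gradient descent.
step :: [(Double, Double)] -> (Double, Double) -> (Double, Double)
step examples (a, b) = (a - rate * da, b - rate * db)
  where
    rate = 0.01
    n  = fromIntegral (length examples)
    da = (2 / n) * sum [ (a * x + b - y) * x | (x, y) <- examples ]
    db = (2 / n) * sum [ a * x + b - y     | (x, y) <- examples ]

fit :: [(Double, Double)] -> (Double, Double)
fit examples = iterate (step examples) (0, 0) !! 10000

-- fit [(0, 1), (1, 3), (2, 5)] approaches (2, 1), i.e. y = 2x + 1,
-- but no finite set of examples can guarantee that this is "the" function.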

Is it possible to automatically generate a program/function by writing a certain amount of examples which show the before and after?
I'd say yes, but ...
... it's not exactly that easy, and certainly not that generally applicable. There are basically two "approaches" that I know of:
a) Try to generate a program
Just pack together program code (most probably instructions from some instruction set) until you get the desired results. This can be done by brute force, though it's hardly possible to synthesize non-trivial functions that way, let alone complete programs, or with techniques from your favorite artificial intelligence tool set: hill climbing, simulated annealing, genetic programming and the whole rest.
The only reference that I know of is the "Superoptimizer" by Henry Massalin, which tries to generate a function that computes the same as a given function, just with fewer instructions.
It's probably better to use some higher level representation of "computation" than assembler instructions (probably the Lisp AST?), but that's just my guess.
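As a toy illustration of the brute-force variant, here is a sketch under my own assumptions (a tiny expression DSL over one integer variable, nothing from the answer itself): enumerate expressions by size and return the first one consistent with all examples.

-- Enumerate arithmetic expressions over one variable, smallest first,
-- and return the first one that matches all input/output examples.
data Expr = X | Lit Int | Add Expr Expr | Mul Expr Expr
  deriving Show

eval :: Expr -> Int -> Int
eval X         x = x
eval (Lit n)   _ = n
eval (Add a b) x = eval a x + eval b x
eval (Mul a b) x = eval a x * eval b x

-- all expressions of exactly the given size
exprs :: Int -> [Expr]
exprs 1 = X : map Lit [0, 1, 2]
exprs n = [ op a b | k  <- [1 .. n - 2]
                   , a  <- exprs k
                   , b  <- exprs (n - 1 - k)
                   , op <- [Add, Mul] ]

synthesize :: [(Int, Int)] -> Expr
synthesize examples =
  head [ e | n <- [1 ..], e <- exprs n
           , all (\(i, o) -> eval e i == o) examples ]

-- synthesize [(0, 0), (1, 2), (2, 4)] finds Add X X (i.e. 2*x);
-- note how badly this scales, hence the AI techniques mentioned above.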
b) Write a meta program that "somehow" learns to behave like the desired program
This is actually very common nowadays in the form of neural networks, e.g. for image or voice recognition. Here you don't try to get a program that does what you want (e.g. "recognize birds in arbitrary images") but rather write a general program that is capable of learning to "behave differently" and train that in order to behave as you want.
Note that there's a lot of effort going into understanding what these meta programs actually do after they have learned their intended function. Just taking the first relevant result from a Google search already makes for an interesting read.

Related

Filter a range without using an intermediate list

I have written a function is-prime that verifies whether a given number is a prime number or not, and returns t or nil accordingly.
(is-prime 2) ; => T
(is-prime 3) ; => T
(is-prime 4) ; => NIL
So far, so good. Now I would like to generate a list of prime numbers between min and max, so I would like to come up with a function that takes those two values as parameters and returns a list of all prime numbers:
(defun get-primes (min max)
...)
Now this is where I am currently stuck. What I could do, of course, is create a list with the range of all numbers from min to max and run remove-if-not on it.
Anyway, this means that I first have to create a potentially huge list with lots of numbers that I throw away anyway. So I would like to do it the other way around: build up a list that contains only the numbers between min and max that actually are prime according to the is-prime predicate.
How can I do this in a functional way, i.e. without using loop? My current approach (with loop) looks like this:
(defun get-primes (min max)
  (loop for guess from min to max
        when (is-prime guess)
          collect guess))
Maybe this is a totally dumb question, but I guess I don't see the forest for the trees.
Any hints?
Common Lisp does not favor pure functional programming approaches. The language is based on a more pragmatic view of an underlying machine: no TCO, stacks are limited, various resources are limited (the number of arguments allowed, etc.), and mutation is possible. That does not sound very motivating to a Haskell enthusiast, but Common Lisp was developed to write Lisp applications, not to advance FP research and development. Evaluation in Common Lisp (as usual in Lisp) is strict, not lazy, and the default data structures are not lazy either, though there are lazy libraries. Common Lisp also does not provide standard features like continuations or coroutines, which might be useful in this case.
The default Common Lisp approach is non-functional and uses some kind of iteration construct: DO, LOOP or the ITERATE library, like in your example. Common Lisp users find that perfectly fine. Some, like me, think that Iterate is the best looping construct ever invented. ;-) The advantage is that it is relatively easy to create efficient code, even though the macros have a large implementation.
There are various ways to do it differently:
Lazy streams. Reading SICP helps; see the part on streams. This can easily be done in Common Lisp, too. Note that Common Lisp uses the word stream for I/O streams, which is something different.
Use a generator function next-prime. Generate this function from the initial range and call it until you have all the primes you are interested in.
Here is a simple example of a generator function, generating even numbers from some start value:
(defun make-next-even-fn (start)
  (let ((current (- start
                    (if (evenp start) 2 1))))
    (lambda ()
      (incf current 2))))
CL-USER 14 > (let ((next-even-fn (make-next-even-fn 13)))
               (flet ((get-next-even ()
                        (funcall next-even-fn)))
                 (print (get-next-even))
                 (print (get-next-even))
                 (print (get-next-even))
                 (print (get-next-even))
                 (list (get-next-even)
                       (get-next-even)
                       (get-next-even))))
14
16
18
20
(22 24 26)
Defining the next-prime generator is left as an exercise. ;-)
Use some kind of more sophisticated lazy data structure. For Common Lisp there is Series, which was an early iteration proposal, also published in CLtL2: Series. The new book Common Lisp Recipes by Edi Weitz (math prof in Hamburg) has a Series example for computing primes; see chapter 7.15. You can download the source code for the book and find the example there.
There are simpler variants of Series, for example pipes.
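For comparison, here is how the lazy-stream idea from the first bullet looks in Haskell, where the default list type is already lazy (a minimal sketch; the deliberately naive isPrime stands in for the question's is-prime):

-- Thanks to laziness, filter consumes [lo .. hi] element by element;
-- no intermediate list of all the candidates is ever fully materialized.
getPrimes :: Integer -> Integer -> [Integer]
getPrimes lo hi = filter isPrime [lo .. hi]

-- a naive primality test, just to keep the sketch self-contained
isPrime :: Integer -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. isqrt n]
  where isqrt = floor . sqrt . (fromIntegral :: Integer -> Double)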

Prolog Accumulators. Are they really a "different" concept?

I am learning Prolog under my Artificial Intelligence Lab, from the source Learn Prolog Now!.
In the 5th Chapter we come to learn about Accumulators. And as an example, these two code snippets are given.
To Find the Length of a List
without accumulators:
len([],0).
len([_|T],N) :- len(T,X), N is X+1.
with accumulators:
accLen([_|T],A,L) :- Anew is A+1, accLen(T,Anew,L).
accLen([],A,A).
I am unable to understand how the two snippets are conceptually different. What exactly is the accumulator doing differently, and what are the benefits?
Accumulators sound like intermediate variables. (Correct me if I am wrong.) I have already used them in my programs up till now, so is it really that big a concept?
When you give something a name, it suddenly becomes more real than it used to be. Discussing something can now be done by simply using the name of the concept. Without getting any more philosophical, no, there is nothing special about accumulators, but they are useful.
In practice, going through a list without an accumulator:
foo([]).
foo([H|T]) :-
    foo(T).
The head of the list is left behind, and cannot be accessed by the recursive call. At each level of recursion you only see what is left of the list.
Using an accumulator:
bar([], _Acc).
bar([H|T], Acc) :-
    bar(T, [H|Acc]).
At every recursive step, you have the remaining list and all the elements you have gone through. In your accLen/3 example, you only keep the count, not the actual elements, as this is all you need.
Some predicates (like accLen/3) can be made tail-recursive with accumulators: you don't need to wait for the end of your input (exhausting all elements of the list) to do the actual work; instead you do it incrementally as you consume the input. Prolog doesn't have to leave values on the stack and can do tail-call optimization for you.
Search algorithms that need to know the "path so far" (or any algorithm that needs to have a state) use a more general form of the same technique, by providing an "intermediate result" to the recursive call. A run-length encoder, for example, could be defined as:
rle([], []).
rle([First|Rest], Encoded) :-
    rle_1(Rest, First, 1, Encoded).

rle_1([], Last, N, [Last-N]).
rle_1([H|T], Prev, N, Encoded) :-
    (   dif(H, Prev)
    ->  Encoded = [Prev-N|Rest],
        rle_1(T, H, 1, Rest)
    ;   succ(N, N1),
        rle_1(T, H, N1, Encoded)
    ).
Hope that helps.
TL;DR: yes, they are.
Imagine you are to go from a city A on the left to a city B on the right, and you want to know the distance between the two in advance. How are you to achieve this?
A mathematician in such a position employs magic known as structural recursion. He says to himself, what if I send a copy of myself one step closer towards the city B, and ask it for its distance to the city? I will then add 1 to its result after receiving it from my copy, since I have sent it one step closer towards the city, and will know my answer without having moved an inch! And of course, if I am already at the city gates, I won't send any copies of me anywhere, since I'll know that the distance is 0 - without having moved an inch!
And how do I know that my copy-of-me will succeed? Simply because he will follow the same exact rules, while starting from a point closer to our destination. Whatever value my answer will be, his will be one less, and only a finite number of copies of us will be called into action - because the distance between the cities is finite. So the total operation is certain to complete in a finite amount of time and I will get my answer. Because getting your answer after an infinite time has passed, is not getting it at all - ever.
And now, having found out his answer in advance, our cautious magician mathematician is ready to embark on his safe (now!) journey.
But of course that wasn't magic at all - it was all a dirty trick! He didn't find out the answer in advance out of thin air; he sent out a whole stack of others to find it for him. The grueling work had to be done after all, he just pretended not to be aware of it. The distance was traveled. Moreover, the distance back had to be traveled too, for each copy to tell its result to its master, the result being actually created on the way back from the destination. All this before our fake magician had ever started walking himself. How's that for a team effort. For him it could seem like a sweet deal. But overall...
So that's how the magician mathematician thinks. But his dual, the brave traveler, just goes on a journey and counts his steps along the way, adding 1 to the current steps counter on each step, before the rest of his actual journey. There's no pretense anymore. The journey may be finite, or it may be infinite - he has no way of knowing upfront. But at each point along his route, and hence when / if he arrives at the city B gates too, he will know his distance traveled so far. And he certainly won't have to go back all the way to the beginning of the road to tell himself the result.
And that's the difference between the structural recursion of the first, and the tail recursion with accumulator / tail recursion modulo cons / corecursion employed by the second. The knowledge of the first is built on the way back from the goal; that of the second, on the way forth from the starting point, towards the goal. The journey is the destination.
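The two travelers, side by side in Haskell (a sketch of the same len/accLen pair from the question):

-- the magician: the result is assembled on the way back from the base case,
-- so a stack of pending (1 +)s builds up first
len :: [a] -> Int
len []      = 0
len (_ : t) = 1 + len t

-- the traveler: the count travels forward in an accumulator; the recursive
-- call is in tail position, so nothing is left waiting on the stack
accLen :: [a] -> Int
accLen = go 0
  where
    go acc []      = acc
    go acc (_ : t) = go (acc + 1) t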
see also:
Technical Report TR19: Unwinding Structured Recursions into Iterations. Daniel P. Friedman and David S. Wise (Dec 1974).
What are the practical implications of all this, you ask? Why, imagine our friend the magician mathematician needs to boil some eggs. He has a pot; a faucet; a hot plate; and some eggs. What is he to do?
Well, it's easy - he'll just put eggs into the pot, add some water from the faucet into it and will put it on the hot plate.
And what if he's already given a pot with eggs and water in it? Why, it's even easier for him - he'll just take the eggs out, pour out the water, and end up with the problem he already knows how to solve! Pure magic, isn't it!
Before we laugh at the poor chap, we mustn't forget the tale of the centipede. Sometimes ignorance is bliss. But when the required knowledge is simple and "one-dimensional" like the distance here, it'd be a crime to pretend to have no memory at all.
Accumulators are intermediate variables, and are an important (read: basic) topic in Prolog, because they allow reversing the information flow of some fundamental algorithms, with important consequences for the efficiency of the program.
Take reversing a list as an example:
nrev([],[]).
nrev([H|T], R) :- nrev(T, S), append(S, [H], R).

rev(L, R) :- rev(L, [], R).
rev([], R, R).
rev([H|T], C, R) :- rev(T, [H|C], R).
nrev/2 (naive reverse) is O(N^2), where N is the list length, while rev/2 is O(N).

Pure functional bottom up tree algorithm

Say I wanted to write an algorithm working on an immutable tree data structure that has a list of leaves as its input. It needs to return a new tree with changes made to the old tree going upwards from those leaves.
My problem is that there seems to be no way to do this purely functionally without reconstructing the entire tree, checking at each leaf whether it is in the list, because you always need to return a complete new tree as the result of an operation and you can't mutate the existing tree.
Is this a basic problem in functional programming that only can be avoided by using a better suited algorithm or am I missing something?
Edit: I not only want to avoid recreating the entire tree; the functional algorithm should also have the same time complexity as the mutating variant.
The most promising thing I have seen so far (which admittedly is not very much...) is the Zipper data structure: it basically keeps a separate structure, a reverse path from the node to the root, and does local edits on this separate structure.
It can do multiple local edits, most of which are constant time, and write them back to the tree (reconstructing the path to the root, which are the only nodes that need to change) all in one go.
The Zipper is a standard library in Clojure (see the heading Zippers - Functional Tree Editing).
And there's the original paper by Huet with an implementation in OCaml.
Disclaimer: I have been programming for a long time, but only started functional programming a couple of weeks ago, and had never even heard of the problem of functional editing of trees until last week, so there may very well be other solutions I'm unaware of.
Still, it looks like the Zipper does most of what one could wish for. If there are other alternatives at O(log n) or below, I'd like to hear them.
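To make the idea concrete, here is a minimal binary-tree zipper sketch in Haskell (names and structure are my own, loosely after Huet's paper; the Clojure/OCaml versions linked above are more general):

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- a crumb records everything at one level except the subtree we descended into
data Crumb a = WentLeft a (Tree a) | WentRight a (Tree a)

-- the focused subtree plus the reverse path back to the root
type Zipper a = (Tree a, [Crumb a])

-- navigation (partial in this sketch: no handling of Leaf or the root)
left, right, up :: Zipper a -> Zipper a
left  (Node l x r, bs) = (l, WentLeft  x r : bs)
right (Node l x r, bs) = (r, WentRight x l : bs)
up (t, WentLeft  x r : bs) = (Node t x r, bs)
up (t, WentRight x l : bs) = (Node l x t, bs)

-- editing the focus is O(1); only the path back to the root is rebuilt
-- when you later zip back up with repeated applications of up
modify :: (Tree a -> Tree a) -> Zipper a -> Zipper a
modify f (t, bs) = (f t, bs)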
You may enjoy reading
http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!248.entry
This depends on your functional programming language. For instance in Haskell, which is a lazy functional programming language, results are calculated at the last moment, when they are actually needed.
In your example the assumption is that because your function creates a new tree, the whole tree must be processed, whereas in reality the function is just passed on to the next function and only executed when necessary.
A good example of lazy evaluation is the Sieve of Eratosthenes in Haskell, which creates the prime numbers by eliminating the multiples of the current number from the list of numbers. Note that the list of numbers is infinite. Taken from here:
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
I recently wrote an algorithm that does exactly what you described - https://medium.com/hibob-engineering/from-list-to-immutable-hierarchy-tree-with-scala-c9e16a63cb89
It works in two phases:
Sort the list of nodes by their depth in the hierarchy
Construct the tree from the bottom up
Some caveats:
No node mutation; the result is an immutable tree
The complexity is O(n)
Cyclic references in the incoming list are ignored

Real-world examples of recursion [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 9 years ago.
What are real-world problems where a recursive approach is the natural solution besides depth-first search (DFS)?
(I don't consider Tower of Hanoi, Fibonacci number, or factorial real-world problems. They are a bit contrived in my mind.)
A real world example of recursion
How about anything involving a directory structure in the file system. Recursively finding files, deleting files, creating directories, etc.
Here is a Java implementation that recursively prints out the content of a directory and its sub-directories.
import java.io.File;
public class DirectoryContentAnalyserOne implements DirectoryContentAnalyser {
private static StringBuilder indentation = new StringBuilder();
public static void main (String args [] ){
// Here you pass the path to the directory to be scanned
getDirectoryContent("C:\\DirOne\\DirTwo\\AndSoOn");
}
private static void getDirectoryContent(String filePath) {
File currentDirOrFile = new File(filePath);
if ( !currentDirOrFile.exists() ){
return;
}
else if ( currentDirOrFile.isFile() ){
System.out.println(indentation + currentDirOrFile.getName());
return;
}
else{
System.out.println("\n" + indentation + "|_" +currentDirOrFile.getName());
indentation.append(" ");
for ( String currentFileOrDirName : currentDirOrFile.list()){
getPrivateDirectoryContent(currentDirOrFile + "\\" + currentFileOrDirName);
}
if (indentation.length() - 3 > 3 ){
indentation.delete(indentation.length() - 3, indentation.length());
}
}
}
}
There are lots of mathy examples here, but you wanted a real world example, so with a bit of thinking, this is possibly the best I can offer:
You find a person who has contracted a given contagious infection, which is non-fatal and fixes itself quickly (type A), except for one in five people (we'll call these type B) who become permanently infected with it, show no symptoms and merely act as spreaders.
This creates quite annoying waves of havoc whenever a type B infects a multitude of type As.
Your task is to track down all the type Bs and immunise them to stop the backbone of the disease. Unfortunately, though, you can't administer a nationwide cure to all, because the type As are deadly allergic to the cure that works for type B.
The way you would do this would be social discovery: given an infected person (type A), choose all their contacts in the last week, marking each contact on a heap. When you test that a person is infected, add them to the "follow up" queue. When a person is a type B, add them to the "follow up" queue at the head (because you want to stop this fast).
After processing a given person, select the person from the front of the queue and apply immunization if needed. Get all their contacts previously unvisited, and then test to see if they're infected.
Repeat until the queue of infected people becomes empty, and then wait for another outbreak.
(OK, this is a bit iterative, but it's an iterative way of solving a recursive problem - in this case, a breadth-first traversal of a population base trying to discover likely paths to problems - and besides, iterative solutions are often faster and more effective, and I compulsively remove recursion everywhere so much it's become instinctive... dammit!)
Quicksort, merge sort, and most other N-log N sorts.
Matt Dillard's example is good. More generally, any walking of a tree can generally be handled by recursion very easily. For instance, compiling parse trees, walking over XML or HTML, etc.
Recursion is often used in implementations of the Backtracking algorithm. For a "real-world" application of this, how about a Sudoku solver?
Recursion is appropriate whenever a problem can be solved by dividing it into sub-problems, that can use the same algorithm for solving them. Algorithms on trees and sorted lists are a natural fit. Many problems in computational geometry (and 3D games) can be solved recursively using binary space partitioning (BSP) trees, fat subdivisions, or other ways of dividing the world into sub-parts.
Recursion is also appropriate when you are trying to guarantee the correctness of an algorithm. Given a function that takes immutable inputs and returns a result that is a combination of recursive and non-recursive calls on the inputs, it's usually easy to prove the function is correct (or not) using mathematical induction. It's often intractable to do this with an iterative function or with inputs that may mutate. This can be useful when dealing with financial calculations and other applications where correctness is very important.
Surely many compilers out there use recursion heavily. Computer languages are inherently recursive themselves (i.e., you can embed 'if' statements inside other 'if' statements, etc.).
Disabling/setting read-only for all children controls in a container control. I needed to do this because some of the children controls were containers themselves.
public static void SetReadOnly(Control ctrl, bool readOnly)
{
    // set the control read only
    SetControlReadOnly(ctrl, readOnly);

    if (ctrl.Controls != null && ctrl.Controls.Count > 0)
    {
        // recursively loop through all child controls
        foreach (Control c in ctrl.Controls)
            SetReadOnly(c, readOnly);
    }
}
People often sort stacks of documents using a recursive method. For example, imagine you are sorting 100 documents with names on them. First place documents into piles by the first letter, then sort each pile.
Looking up words in the dictionary is often performed by a binary-search-like technique, which is recursive.
In organizations, bosses often give commands to department heads, who in turn give commands to managers, and so on.
Famous Eval/Apply cycle from SICP
(source: mit.edu)
Here is the definition of eval:
(define (eval exp env)
  (cond ((self-evaluating? exp) exp)
        ((variable? exp) (lookup-variable-value exp env))
        ((quoted? exp) (text-of-quotation exp))
        ((assignment? exp) (eval-assignment exp env))
        ((definition? exp) (eval-definition exp env))
        ((if? exp) (eval-if exp env))
        ((lambda? exp)
         (make-procedure (lambda-parameters exp)
                         (lambda-body exp)
                         env))
        ((begin? exp)
         (eval-sequence (begin-actions exp) env))
        ((cond? exp) (eval (cond->if exp) env))
        ((application? exp)
         (apply (eval (operator exp) env)
                (list-of-values (operands exp) env)))
        (else
         (error "Unknown expression type - EVAL" exp))))
Here is the definition of apply:
(define (apply procedure arguments)
  (cond ((primitive-procedure? procedure)
         (apply-primitive-procedure procedure arguments))
        ((compound-procedure? procedure)
         (eval-sequence
          (procedure-body procedure)
          (extend-environment
           (procedure-parameters procedure)
           arguments
           (procedure-environment procedure))))
        (else
         (error
          "Unknown procedure type - APPLY" procedure))))
Here is the definition of eval-sequence:
(define (eval-sequence exps env)
  (cond ((last-exp? exps) (eval (first-exp exps) env))
        (else (eval (first-exp exps) env)
              (eval-sequence (rest-exps exps) env))))
eval -> apply -> eval-sequence -> eval
Recursion is used in things like BSP trees for collision detection in game development (and other similar areas).
Real world requirement I got recently:
Requirement A: Implement this feature after thoroughly understanding Requirement A.
Recursion is applied to problems (situations) where you can break the problem up (reduce it) into smaller parts, and each part looks similar to the original problem.
Good examples of where things that contain smaller parts similar to itself are:
tree structure (a branch is like a tree)
lists (part of a list is still a list)
containers (Russian dolls)
sequences (part of a sequence looks like the next)
groups of objects (a subgroup is a still a group of objects)
Recursion is a technique of breaking the problem down into smaller and smaller pieces, until one of those pieces becomes small enough to be a piece of cake. Of course, after you break them up, you then have to "stitch" the results back together in the right order to form a total solution to your original problem.
Some recursive sorting algorithms, tree-walking algorithms, map/reduce algorithms, divide-and-conquer are all examples of this technique.
In computer programming, most stack-based call-return languages already have the capabilities built in for recursion:
break the problem down into smaller pieces ==> call itself on a smaller subset of the original data,
keep track of how the pieces are divided ==> the call stack,
stitch the results back together ==> stack-based return
Feedback loops in a hierarchical organization.
Top boss tells top executives to collect feedback from everyone in the company.
Each executive gathers his/her direct reports and tells them to gather feedback from their direct reports.
And on down the line.
People with no direct reports -- the leaf nodes in the tree -- give their feedback.
The feedback travels back up the tree with each manager adding his/her own feedback.
Eventually all the feedback makes it back up to the top boss.
This is the natural solution because the recursive method allows filtering at each level -- the collating of duplicates and the removal of offensive feedback. The top boss could send a global email and have each employee report feedback directly back to him/her, but there are the "you can't handle the truth" and the "you're fired" problems, so recursion works best here.
Parsers and compilers may be written in a recursive-descent style. This is not the best way to do it, as tools like lex/yacc generate faster and more efficient parsers, but it is conceptually simple and easy to implement, so it remains common.
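A toy recursive-descent fragment (my own sketch, not from the answer): the grammar rule expr ::= digit ('+' expr)? maps directly onto a function that calls itself wherever the grammar refers to itself.

import Data.Char (digitToInt, isDigit)

-- parse expressions like "1+2+3"; returns the value and the unconsumed input
expr :: String -> Maybe (Int, String)
expr (c : rest)
  | isDigit c =
      case rest of
        '+' : rest' -> do
          (v, rest'') <- expr rest'          -- the recursion mirrors the grammar
          Just (digitToInt c + v, rest'')
        _           -> Just (digitToInt c, rest)
expr _ = Nothing

-- expr "1+2+3" == Just (6, "")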
I have a system that uses pure tail recursion in a few places to simulate a state machine.
Some great examples of recursion are found in functional programming languages. In functional programming languages (Erlang, Haskell, ML/OCaml/F#, etc.), it's very common to have any list processing use recursion.
When dealing with lists in typical imperative OOP-style languages, it's very common to see lists implemented as linked lists ([item1 -> item2 -> item3 -> item4]). However, in some functional programming languages, you find that lists themselves are implemented recursively, where the "head" of the list points to the first item in the list, and the "tail" points to a list containing the rest of the items ([item1 -> [item2 -> [item3 -> [item4 -> []]]]]). It's pretty creative in my opinion.
This handling of lists, when combined with pattern matching, is VERY powerful. Let's say I want to sum a list of numbers:
let rec Sum numbers =
    match numbers with
    | [] -> 0
    | head :: tail -> head + Sum tail
This essentially says "if we were called with an empty list, return 0" (allowing us to break the recursion), else return the value of head + the value of Sum called with the remaining items (hence, our recursion).
For example, I might have a list of URLs; I then break apart the URLs each URL links to, and then reduce the total number of links to/from all URLs to generate "values" for a page (an approach that Google takes with PageRank and that you can find defined in the original MapReduce paper). You can do this to generate word counts in a document also. And many, many, many other things as well.
You can extend this functional pattern to any type of MapReduce code where you take a list of something, transform it, and return something else (whether another list, or some zip command on the list).
XML, or traversing anything that is a tree. Although, to be honest, I pretty much never use recursion in my job.
A "real-world" problem solved by recursion would be nesting dolls. Your function is OpenDoll().
Given a stack of them, you would recursively open the dolls, calling OpenDoll() if you will, until you've reached the inner-most doll.
Parsing an XML file.
Efficient search in multi-dimensional spaces. E. g. quad-trees in 2D, oct-trees in 3D, kd-trees, etc.
Hierarchical clustering.
Come to think of it, traversing any hierarchical structure naturally lends itself to recursion.
Template metaprogramming in C++, where there are no loops and recursion is the only way.
Suppose you are building a CMS for a website, where your pages are in a tree structure, with say the root being the home-page.
Suppose also your {user|client|customer|boss} requests that you place a breadcrumb trail on every page to show where you are in the tree.
For any given page n, you may want to walk up to the parent of n, and its parent, and so on, recursively, to build a list of nodes back up to the root of the page tree.
Of course, you're hitting the db several times per page in that example, so you may want to use some SQL aliasing where you look up page-table as a, and page-table again as b, and join a.id with b.parent so you make the database do the recursive joins. It's been a while, so my syntax is probably not helpful.
Then again, you may just want to only calculate this once and store it with the page record, only updating it if you move the page. That'd probably be more efficient.
Anyway, that's my $.02
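For the walk itself, here is an in-memory sketch (a hypothetical page table held in a Map, rather than the database the answer describes; all names are my own):

import qualified Data.Map as Map

-- hypothetical page table: page id -> (title, parent id); Nothing marks the root
type Pages = Map.Map Int (String, Maybe Int)

-- recursively walk from page n up through its parents, returning the
-- breadcrumb trail root-first
breadcrumb :: Pages -> Int -> [String]
breadcrumb pages n =
  case Map.lookup n pages of
    Nothing              -> []
    Just (title, parent) -> maybe [] (breadcrumb pages) parent ++ [title]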
You have an organization tree that is N levels deep. Several of the nodes are checked, and you want to expand out to only those nodes that have been checked.
This is something that I actually coded.
It's nice and easy with recursion.
In my job we have a system with a generic data structure that can be described as a tree. That means that recursion is a very effective technique to work with the data.
Solving it without recursion would require a lot of unnecessary code. The problem with recursion is that it is not easy to follow what happens; you really have to concentrate when following the flow of execution. But when it works, the code is elegant and effective.
Calculations for finance/physics, such as compound averages.
Parsing a tree of controls in Windows Forms or WebForms (.NET Windows Forms / ASP.NET).
The best example I know is quicksort; it is a lot simpler with recursion. Take a look at:
shop.oreilly.com/product/9780596510046.do
www.amazon.com/Beautiful-Code-Leading-Programmers-Practice/dp/0596510047
(Click on the first subtitle under the chapter 3: "The most beautiful code I ever wrote").
Phone and cable companies maintain a model of their wiring topology, which in effect is a large network or graph. Recursion is one way to traverse this model when you want to find all parent or all child elements.
Since recursion is expensive from a processing and memory perspective, this step is commonly performed only when the topology is changed, and the result is stored in a modified pre-ordered list format.
Inductive reasoning, the process of concept-formation, is recursive in nature. Your brain does it all the time, in the real world.
Ditto the comment about compilers. The abstract syntax tree nodes naturally lend themselves to recursion. All recursive data structures (linked lists, trees, graphs, etc.) are also more easily handled with recursion. I do think that most of us don't get to use recursion a lot once we are out of school because of the types of real-world problems, but it's good to be aware of it as an option.

What is recursion and when should I use it?

Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
One of the topics that seems to come up regularly on mailing lists and online discussions is the merits (or lack thereof) of doing a Computer Science Degree. An argument that seems to come up time and again for the negative party is that they have been coding for some number of years and they have never used recursion.
So the question is:
What is recursion?
When would I use recursion?
Why don't people use recursion?
There are a number of good explanations of recursion in this thread; this answer is about why you shouldn't use it in most languages.* In the majority of major imperative language implementations (i.e. every major implementation of C, C++, Basic, Python, Ruby, Java, and C#) iteration is vastly preferable to recursion.
To see why, walk through the steps that the above languages use to call a function:
space is carved out on the stack for the function's arguments and local variables
the function's arguments are copied into this new space
control jumps to the function
the function's code runs
the function's result is copied into a return value
the stack is rewound to its previous position
control jumps back to where the function was called
Doing all of these steps takes time, usually a little bit more than it takes to iterate through a loop. However, the real problem is in step #1. When many programs start, they allocate a single chunk of memory for their stack, and when they run out of that memory (often, but not always due to recursion), the program crashes due to a stack overflow.
So in these languages recursion is slower and it makes you vulnerable to crashing. There are still some arguments for using it though. In general, code written recursively is shorter and a bit more elegant, once you know how to read it.
There is a technique that language implementers can use called tail call optimization which can eliminate some classes of stack overflow. Put succinctly: if a function's return expression is simply the result of a function call, then you don't need to add a new level onto the stack, you can reuse the current one for the function being called. Regrettably, few imperative language-implementations have tail-call optimization built in.
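In code, the distinction looks like this (a Haskell sketch of my own; in a strict language the first form runs in constant stack space, and GHC achieves the same effect once the accumulator is kept strict):

-- tail call: the return value IS the recursive call, so the current
-- stack frame can be reused - effectively a jump, not a call
sumTo :: Int -> Int -> Int
sumTo acc 0 = acc
sumTo acc n = sumTo (acc + n) (n - 1)

-- NOT a tail call: after the recursive call returns, the (n +) still
-- has to run, so each level must keep its frame alive
sumTo' :: Int -> Int
sumTo' 0 = 0
sumTo' n = n + sumTo' (n - 1)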
* I love recursion. My favorite static language doesn't use loops at all, recursion is the only way to do something repeatedly. I just don't think that recursion is generally a good idea in languages that aren't tuned for it.
** By the way Mario, the typical name for your ArrangeString function is "join", and I'd be surprised if your language of choice doesn't already have an implementation of it.
Simple english example of recursion.
A child couldn't sleep, so her mother told her a story about a little frog,
who couldn't sleep, so the frog's mother told her a story about a little bear,
who couldn't sleep, so the bear's mother told her a story about a little weasel...
who fell asleep.
...and the little bear fell asleep;
...and the little frog fell asleep;
...and the child fell asleep.
In the most basic computer science sense, recursion is a function that calls itself. Say you have a linked list structure:
struct Node {
    Node* next;
};
And you want to find out how long a linked list is; you can do this with recursion:
int length(const Node* list) {
    if (list == nullptr) {
        return 0;   // empty list: the base case
    }
    return 1 + length(list->next);
}
(This could of course be done with a for loop as well, but is useful as an illustration of the concept)
Whenever a function calls itself, creating a loop, then that's recursion. As with anything there are good uses and bad uses for recursion.
The most simple example is tail recursion where the very last line of the function is a call to itself:
int FloorByTen(int num)
{
    if (num % 10 == 0)
        return num;
    else
        return FloorByTen(num - 1);
}
However, this is a lame, almost pointless example because it can easily be replaced by more efficient iteration. After all, recursion suffers from function call overhead, which in the example above could be substantial compared to the operation inside the function itself.
So the whole reason to do recursion rather than iteration should be to take advantage of the call stack to do some clever stuff. For example, if you call a function multiple times with different parameters inside the same loop then that's a way to accomplish branching. A classic example is the Sierpinski triangle.
You can draw one of those very simply with recursion, where the call stack branches in 3 directions:
private void BuildVertices(double x, double y, double len)
{
    if (len > 0.002)
    {
        mesh.Positions.Add(new Point3D(x, y + len, -len));
        mesh.Positions.Add(new Point3D(x - len, y - len, -len));
        mesh.Positions.Add(new Point3D(x + len, y - len, -len));
        len *= 0.5;
        BuildVertices(x, y + len, len);
        BuildVertices(x - len, y - len, len);
        BuildVertices(x + len, y - len, len);
    }
}
If you attempt to do the same thing with iteration I think you'll find it takes a lot more code to accomplish.
Other common use cases might include traversing hierarchies, e.g. website crawlers, directory comparisons, etc.
Conclusion
In practical terms, recursion makes the most sense whenever you need iterative branching.
Recursion is a method of solving problems based on the divide and conquer mentality.
The basic idea is that you take the original problem and divide it into smaller (more easily solved) instances of itself, solve those smaller instances (usually by using the same algorithm again) and then reassemble them into the final solution.
The canonical example is a routine to generate the Factorial of n. The Factorial of n is calculated by multiplying all of the numbers between 1 and n. An iterative solution in C# looks like this:
public int Fact(int n)
{
    int fact = 1;
    for (int i = 2; i <= n; i++)
    {
        fact = fact * i;
    }
    return fact;
}
There's nothing surprising about the iterative solution and it should make sense to anyone familiar with C#.
The recursive solution is found by recognising that the nth Factorial is n * Fact(n-1). Or to put it another way, if you know what a particular Factorial number is you can calculate the next one. Here is the recursive solution in C#:
public int FactRec(int n)
{
    if (n < 2)
    {
        return 1;
    }
    return n * FactRec(n - 1);
}
The first part of this function is known as a Base Case (or sometimes Guard Clause) and is what prevents the algorithm from running forever. It just returns the value 1 whenever the function is called with a value of 1 or less. The second part is more interesting and is known as the Recursive Step. Here we call the same method with a slightly modified parameter (we decrement it by 1) and then multiply the result with our copy of n.
When first encountered this can be kind of confusing so it's instructive to examine how it works when run. Imagine that we call FactRec(5). We enter the routine, are not picked up by the base case and so we end up like this:
// In FactRec(5)
return 5 * FactRec( 5 - 1 );
// which is
return 5 * FactRec(4);
If we re-enter the method with the parameter 4 we are again not stopped by the guard clause and so we end up at:
// In FactRec(4)
return 4 * FactRec(3);
If we substitute this return value into the return value above we get
// In FactRec(5)
return 5 * (4 * FactRec(3));
This should give you a clue as to how the final solution is arrived at so we'll fast track and show each step on the way down:
return 5 * (4 * FactRec(3));
return 5 * (4 * (3 * FactRec(2)));
return 5 * (4 * (3 * (2 * FactRec(1))));
return 5 * (4 * (3 * (2 * (1))));
That final substitution happens when the base case is triggered. At this point we have a simple algebraic formula to solve, which equates directly to the definition of factorials in the first place.
It's instructive to note that every call into the method results in either a base case being triggered or a call to the same method where the parameters are closer to a base case (often called a recursive call). If this is not the case then the method will run forever.
Recursion is solving a problem with a function that calls itself. A good example of this is a factorial function. Factorial is a math problem where factorial of 5, for example, is 5 * 4 * 3 * 2 * 1. This function solves this in C# for positive integers (not tested - there may be a bug).
public int Factorial(int n)
{
    if (n <= 1)
        return 1;

    return n * Factorial(n - 1);
}
Recursion refers to a method which solves a problem by solving a smaller version of the problem and then using that result plus some other computation to formulate the answer to the original problem. Often times, in the process of solving the smaller version, the method will solve a yet smaller version of the problem, and so on, until it reaches a "base case" which is trivial to solve.
For instance, to calculate a factorial for the number X, one can represent it as X times the factorial of X-1. Thus, the method "recurses" to find the factorial of X-1, and then multiplies whatever it got by X to give a final answer. Of course, to find the factorial of X-1, it'll first calculate the factorial of X-2, and so on. The base case would be when X is 0 or 1, in which case it knows to return 1 since 0! = 1! = 1.
Consider an old, well known problem:
In mathematics, the greatest common divisor (gcd) … of two or more non-zero integers, is the largest positive integer that divides the numbers without a remainder.
The definition of gcd is surprisingly simple:

gcd(m, 0) = m
gcd(m, n) = gcd(n, m mod n)

where mod is the modulo operator (that is, the remainder after integer division).
In English, this definition says the greatest common divisor of any number and zero is that number, and the greatest common divisor of two numbers m and n is the greatest common divisor of n and the remainder after dividing m by n.
If you'd like to know why this works, see the Wikipedia article on the Euclidean algorithm.
Let's compute gcd(10, 8) as an example. Each step is equal to the one just before it:
gcd(10, 8)
gcd(8, 10 mod 8)
gcd(8, 2)
gcd(2, 8 mod 2)
gcd(2, 0)
2
In the first step, 8 does not equal zero, so the second part of the definition applies. 10 mod 8 = 2 because 8 goes into 10 once with a remainder of 2. At step 3, the second part applies again, but this time 8 mod 2 = 0 because 2 divides 8 with no remainder. At step 5, the second argument is 0, so the answer is 2.
Did you notice that gcd appears on both the left and right sides of the equals sign? A mathematician would say this definition is recursive because the expression you're defining recurs inside its definition.
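Transcribed into code, the recursive definition is just two lines (a sketch; it's named myGcd here only because Haskell's Prelude already exports a gcd):

-- the two cases of the definition, in the same order as above
myGcd :: Integer -> Integer -> Integer
myGcd m 0 = m
myGcd m n = myGcd n (m `mod` n)

-- myGcd 10 8 == 2, following exactly the steps traced above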
Recursive definitions tend to be elegant. For example, a recursive definition for the sum of a list is
sum l =
    if empty(l)
        return 0
    else
        return head(l) + sum(tail(l))
where head is the first element in a list and tail is the rest of the list. Note that sum recurs inside its definition at the end.
Maybe you'd prefer the maximum value in a list instead:
max l =
    if empty(l)
        error
    elsif length(l) = 1
        return head(l)
    else
        tailmax = max(tail(l))
        if head(l) > tailmax
            return head(l)
        else
            return tailmax
You might define multiplication of non-negative integers recursively to turn it into a series of additions:
a * b =
    if b = 0
        return 0
    else
        return a + (a * (b - 1))
If that bit about transforming multiplication into a series of additions doesn't make sense, try expanding a few simple examples to see how it works.
Merge sort has a lovely recursive definition:
sort(l) =
    if empty(l) or length(l) = 1
        return l
    else
        (left, right) = split l
        return merge(sort(left), sort(right))
Recursive definitions are all around if you know what to look for. Notice how all of these definitions have very simple base cases, e.g., gcd(m, 0) = m. The recursive cases whittle away at the problem to get down to the easy answers.
With this understanding, you can now appreciate the other algorithms in Wikipedia's article on recursion!
A function that calls itself
When a function can be (easily) decomposed into a simple operation plus the same function on some smaller portion of the problem. I should say, rather, that this makes it a good candidate for recursion.
They do!
The canonical example is the factorial which looks like:
int fact(int a)
{
    if (a == 1)
        return 1;
    return a * fact(a - 1);
}
In general, recursion isn't necessarily fast (function call overhead tends to be high because recursive functions tend to be small, see above) and can suffer from some problems (stack overflow anyone?). Some say they tend to be hard to get 'right' in non-trivial cases but I don't really buy into that. In some situations, recursion makes the most sense and is the most elegant and clear way to write a particular function. It should be noted that some languages favor recursive solutions and optimize them much more (LISP comes to mind).
A recursive function is one which calls itself. The most common reason I've found to use it is traversing a tree structure. For example, if I have a TreeView with checkboxes (think installation of a new program, "choose features to install" page), I might want a "check all" button which would be something like this (pseudocode):
function cmdCheckAllClick {
    checkRecursively(TreeView1.RootNode);
}

function checkRecursively(Node n) {
    n.Checked = True;
    foreach ( n.Children as child ) {
        checkRecursively(child);
    }
}
So you can see that checkRecursively first checks the node it is passed, then calls itself for each of that node's children.
You do need to be a bit careful with recursion. If you get into an infinite recursive loop, you will get a Stack Overflow exception :)
I can't think of a reason why people shouldn't use it, when appropriate. It is useful in some circumstances, and not in others.
I think that because it's an interesting technique, some coders perhaps end up using it more often than they should, without real justification. This has given recursion a bad name in some circles.
Recursion is an expression directly or indirectly referencing itself.
Consider recursive acronyms as a simple example:
GNU stands for GNU's Not Unix
PHP stands for PHP: Hypertext Preprocessor
YAML stands for YAML Ain't Markup Language
WINE stands for Wine Is Not an Emulator
VISA stands for Visa International Service Association
More examples on Wikipedia
Recursion works best with what I like to call "fractal problems", where you're dealing with a big thing that's made of smaller versions of that big thing, each of which is an even smaller version of the big thing, and so on. If you ever have to traverse or search through something like a tree or nested identical structures, you've got a problem that might be a good candidate for recursion.
People avoid recursion for a number of reasons:
Most people (myself included) cut their programming teeth on procedural or object-oriented programming as opposed to functional programming. To such people, the iterative approach (typically using loops) feels more natural.
Those of us who cut our programming teeth on procedural or object-oriented programming have often been told to avoid recursion because it's error prone.
We're often told that recursion is slow. Calling and returning from a routine repeatedly involves a lot of stack pushing and popping, which is slower than looping. I think some languages handle this better than others, and those languages are most likely not those where the dominant paradigm is procedural or object-oriented.
For at least a couple of programming languages I've used, I remember hearing recommendations not to use recursion if it gets beyond a certain depth because its stack isn't that deep.
A recursive statement is one in which you define the process of what to do next as a combination of the inputs and what you have already done.
For example, take factorial:
factorial(6) = 6*5*4*3*2*1
But it's easy to see factorial(6) also is:
6 * factorial(5) = 6*(5*4*3*2*1).
So generally:
factorial(n) = n*factorial(n-1)
Of course, the tricky thing about recursion is that if you want to define things in terms of what you have already done, there needs to be some place to start.
In this example, we just make a special case by defining factorial(1) = 1.
Now we see it from the bottom up:
factorial(6) = 6*factorial(5)
             = 6*5*factorial(4)
             = 6*5*4*factorial(3)
             = 6*5*4*3*factorial(2)
             = 6*5*4*3*2*factorial(1)
             = 6*5*4*3*2*1
Since we defined factorial(1) = 1, we reach the "bottom".
Generally speaking, recursive procedures have two parts:
1) The recursive part, which defines some procedure in terms of new inputs combined with what you've "already done" via the same procedure. (i.e. factorial(n) = n*factorial(n-1))
2) A base part, which makes sure that the process doesn't repeat forever by giving it some place to start (i.e. factorial(1) = 1)
It can be a bit confusing to get your head around at first, but just look at a bunch of examples and it should all come together. If you want a much deeper understanding of the concept, study mathematical induction. Also, be aware that some languages optimize for recursive calls while others do not. It's pretty easy to make insanely slow recursive functions if you're not careful, but there are also techniques to make them performant in most cases.
Hope this helps...
I like this definition:
In recursion, a routine solves a small part of a problem itself, divides the problem into smaller pieces, and then calls itself to solve each of the smaller pieces.
I also like Steve McConnell's discussion of recursion in Code Complete, where he criticises the examples used in computer science books on recursion:
Don't use recursion for factorials or Fibonacci numbers
One problem with computer-science textbooks is that they present silly examples of recursion. The typical examples are computing a factorial or computing a Fibonacci sequence. Recursion is a powerful tool, and it's really dumb to use it in either of those cases. If a programmer who worked for me used recursion to compute a factorial, I'd hire someone else.
I thought this was a very interesting point to raise and may be a reason why recursion is often misunderstood.
EDIT:
This was not a dig at Dav's answer - I had not seen that reply when I posted this
1.)
A method is recursive if it can call itself; either directly:
void f() {
    ... f() ...
}
or indirectly:
void f() {
    ... g() ...
}

void g() {
    ... f() ...
}
2.) When to use recursion
Q: Does using recursion usually make your code faster?
A: No.
Q: Does using recursion usually use less memory?
A: No.
Q: Then why use recursion?
A: It sometimes makes your code much simpler!
3.) People use recursion only when it is very complex to write iterative code. For example, tree traversal techniques like preorder and postorder can be made both iterative and recursive. But usually we use the recursive version because of its simplicity.
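For instance, the recursive preorder traversal just mentioned is only a few lines (a Haskell sketch of my own), while the iterative version needs an explicit stack:

data Tree a = Leaf | Node a (Tree a) (Tree a)

-- visit the root, then the left subtree, then the right subtree
preorder :: Tree a -> [a]
preorder Leaf         = []
preorder (Node x l r) = x : preorder l ++ preorder r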
Here's a simple example: how many elements are in a set? (There are better ways to count things, but this is a nice simple recursive example.)
First, we need two rules:
if the set is empty, the count of items in the set is zero (duh!).
if the set is not empty, the count is one plus the number of items in the set after one item is removed.
Suppose you have a set like this: [x x x]. Let's count how many items there are.
The set is [x x x], which is not empty, so we apply rule 2: the number of items is one plus the number of items in [x x] (i.e. we removed an item).
The set is [x x], so we apply rule 2 again: one + the number of items in [x].
The set is [x], which still matches rule 2: one + the number of items in [].
Now the set is [], which matches rule 1: the count is zero!
Now that we know the answer in step 4 (0), we can solve step 3 (1 + 0)
Likewise, now that we know the answer in step 3 (1), we can solve step 2 (1 + 1)
And finally now that we know the answer in step 2 (2), we can solve step 1 (1 + 2) and get the count of items in [x x x], which is 3. Hooray!
We can represent this as:
count of [x x x] = 1 + count of [x x]
                 = 1 + (1 + count of [x])
                 = 1 + (1 + (1 + count of []))
                 = 1 + (1 + (1 + 0))
                 = 1 + (1 + 1)
                 = 1 + 2
                 = 3
When applying a recursive solution, you usually have at least 2 rules:
the basis, the simple case which states what happens when you have "used up" all of your data. This is usually some variation of "if you are out of data to process, your answer is X"
the recursive rule, which states what happens if you still have data. This is usually some kind of rule that says "do something to make your data set smaller, and reapply your rules to the smaller data set."
If we translate the above to pseudocode, we get:
numberOfItems(set)
    if set is empty
        return 0
    else
        remove 1 item from set
        return 1 + numberOfItems(set)
There are a lot more useful examples (traversing a tree, for example), which I'm sure other people will cover.
Well, that's a pretty decent definition you have. And Wikipedia has a good definition too. So I'll add another (probably worse) definition for you.
When people refer to "recursion", they're usually talking about a function they've written which calls itself repeatedly until it is done with its work. Recursion can be helpful when traversing hierarchies in data structures.
An example: A recursive definition of a staircase is:
A staircase consists of:
- a single step and a staircase (recursion)
- or only a single step (termination)
To recurse on a solved problem: do nothing, you're done.
To recurse on an open problem: do the next step, then recurse on the rest.
In plain English:
Assume you can do 3 things:
Take one apple
Write down tally marks
Count tally marks
You have a lot of apples in front of you on a table and you want to know how many apples there are.
start
Is the table empty?
yes: Count the tally marks and cheer like it's your birthday!
no: Take 1 apple and put it aside
Write down a tally mark
goto start
The process of repeating the same thing till you are done is called recursion.
I hope this is the "plain english" answer you are looking for!
A recursive function is a function that contains a call to itself. A recursive struct is a struct that contains an instance of itself. You can combine the two as a recursive class. The key part of a recursive item is that it contains an instance/call of itself.
Consider two mirrors facing each other. We've seen the neat infinity effect they make. Each reflection is an instance of a mirror, which is contained within another instance of a mirror, etc. The mirror containing a reflection of itself is recursion.
A binary search tree is a good programming example of recursion. The structure is recursive with each Node containing 2 instances of a Node. Functions to work on a binary search tree are also recursive.
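A minimal sketch of that in Haskell (the names are my own; note how the recursive type and the recursive functions mirror each other):

-- the type is recursive: each node contains two more trees
data Tree = Empty | Node Tree Int Tree

insert :: Int -> Tree -> Tree
insert x Empty = Node Empty x Empty
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r
  | x > v     = Node l v (insert x r)
  | otherwise = t            -- already present

member :: Int -> Tree -> Bool
member _ Empty = False
member x (Node l v r)
  | x < v     = member x l
  | x > v     = member x r
  | otherwise = True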
This is an old question, but I want to add an answer from a logistical point of view (i.e. not from the point of view of algorithm correctness or performance).
I use Java for work, and Java doesn't support nested functions. As such, if I want to do recursion, I might have to define an external function (which exists only because my code bumps against Java's bureaucratic rules), or I might have to refactor the code altogether (which I really hate to do).
Thus, I often avoid recursion and use stack operations instead, because recursion itself is essentially a stack operation.
You want to use it anytime you have a tree structure. It is very useful in reading XML.
Recursion as it applies to programming is basically calling a function from inside its own definition (inside itself), with different parameters so as to accomplish a task.
"If I have a hammer, make everything look like a nail."
Recursion is a problem-solving strategy for huge problems, where at every step just, "turn 2 small things into one bigger thing," each time with the same hammer.
Example
Suppose your desk is covered with a disorganized mess of 1024 papers. How do you make one neat, clean stack of papers from the mess, using recursion?
Divide: Spread all the sheets out, so you have just one sheet in each "stack".
Conquer:
Go around, putting each sheet on top of one other sheet. You now have stacks of 2.
Go around, putting each 2-stack on top of another 2-stack. You now have stacks of 4.
Go around, putting each 4-stack on top of another 4-stack. You now have stacks of 8.
... on and on ...
You now have one huge stack of 1024 sheets!
Notice that this is pretty intuitive, aside from counting everything (which isn't strictly necessary). You might not go all the way down to 1-sheet stacks, in reality, but you could and it would still work. The important part is the hammer: With your arms, you can always put one stack on top of the other to make a bigger stack, and it doesn't matter (within reason) how big either stack is.
Recursion is the process where a method calls itself in order to perform a certain task. It reduces redundancy of code. Most recursive functions or methods must have a condition to break the recursive call, i.e. stop it from calling itself if a condition is met - this prevents the creation of an infinite loop. Not all functions are suited to being used recursively.
Hey, sorry if my opinion agrees with someone; I'm just trying to explain recursion in plain English.
Suppose you have three managers - Jack, John and Morgan.
Jack manages 2 programmers, John 3, and Morgan 5.
You are going to give every manager $300 and want to know what it would cost.
The answer is obvious - but what if 2 of Morgan's employees are also managers?
HERE comes the recursion.
You start from the top of the hierarchy. The total cost is $0.
You start with Jack,
then check if he has any managers as employees. If any of them are, check if they have any managers as employees, and so on. Add $300 to the total cost every time you find a manager.
When you are finished with Jack, go to John, his employees, and then to Morgan.
You never know how many cycles you'll go through before getting an answer, though you do know how many managers you have and how much budget you can spend.
Recursion is a tree, with branches and leaves, called parents and children respectively.
When you use a recursive algorithm, you are more or less consciously building a tree from the data.
In plain English, recursion means repeating something again and again.
In programming, one example is calling a function within itself.
Look at the following example of calculating the factorial of a number:

public int fact(int n)
{
    if (n == 0) return 1;
    else return n * fact(n - 1);
}
Any algorithm exhibits structural recursion on a datatype if it basically consists of a switch statement with a case for each case of the datatype.
for example, when you are working on a type

tree = null
     | leaf(value: integer)
     | node(left: tree, right: tree)
a structural recursive algorithm has the form

function computeSomething(x : tree) =
    if x is null: base case
    if x is leaf: do something with x.value
    if x is node: do something with x.left,
                  do something with x.right,
                  combine the results

This is really the most obvious way to write any algorithm that works on a data structure.
now, when you look at the integers (well, the natural numbers) as defined using the Peano axioms

integer = 0 | succ(integer)

you see that a structural recursive algorithm on integers looks like this:

function computeSomething(x : integer) =
    if x is 0 : base case
    if x is succ(prev) : do something with prev
The too-well-known factorial function is about the most trivial example of this form.
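To spell that out, here is a Haskell sketch with an explicit Peano type (my own illustration, since the built-in integers hide their succ structure):

data Nat = Zero | Succ Nat

toInt :: Nat -> Integer
toInt Zero     = 0
toInt (Succ p) = 1 + toInt p

-- one case per constructor: structural recursion on Nat
factorial :: Nat -> Integer
factorial Zero     = 1
factorial (Succ p) = toInt (Succ p) * factorial p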
A function that calls itself or uses its own definition.
