Why, after pressing semicolon, is the program back in deep recursion? - recursion

I'm trying to understand the semicolon functionality.
I have this code:
del(X, [X|Rest], Rest).
del(X, [Y|Tail], [Y|Rest]) :-
    del(X, Tail, Rest).

permutation([], []).
permutation(L, [X|P]) :-
    del(X, L, L1),
    permutation(L1, P).
It's a simple predicate that generates all permutations of a given list.
I used the built-in graphical debugger in SWI-Prolog because I wanted to understand how it works, and I do understand the first case, which returns the list given as the argument. Here is the diagram I made for better understanding.
But I don't get it for the next solution. When I press the semicolon, it doesn't start at the place where it ended; instead it starts in some deep recursion where L=[] (as in step 9). I don't get it: didn't the recursion end earlier? It had to come back out of the recursion to return the answer, and after the semicolon it's deep in the recursion again.
Could someone clarify that for me? Thanks in advance.

One analogy that I find useful in demystifying Prolog is that backtracking is like nested loops: when the innermost loop's variables' values are all found, the looping is suspended, the variables' values are reported, and then the looping is resumed.
As an example, let's write a simple generate-and-test program to find all pairs of natural numbers above 0 that sum up to a prime number. Let's assume is_prime/1 is already given to us.
We write this in Prolog as
above(0, N), between(1, N, M), Sum is M+N, is_prime(Sum).
We write this in an imperative pseudocode as
for N from 1 step 1:
    for M from 1 step 1 until N:
        Sum := M+N
        if is_prime(Sum):
            report_to_user_and_ask(Sum)
Now when report_to_user_and_ask is called, it prints Sum out and asks the user whether to abort or to continue. The loops are not exited; on the contrary, they are just suspended. Thus all the loop variables' values that got us this far -- and there may be more tests up the loops chain that sometimes succeed and sometimes fail -- are preserved, i.e. the computation state is preserved, and the computation is ready to be resumed from that point, if the user presses ;.
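To actually run the Prolog goal above, we need definitions for above/2 and is_prime/1, which the example assumes as given. A minimal naive sketch, purely for illustration (these stub definitions are mine, not part of the original answer):

% Naive stubs, only to make the goal runnable; not meant to be efficient.
above(X, N) :- N0 is X + 1, nat_from(N0, N).

nat_from(N, N).
nat_from(N0, N) :- N1 is N0 + 1, nat_from(N1, N).

is_prime(2).
is_prime(P) :-
    P > 2,
    P mod 2 =:= 1,
    \+ ( between(3, P, F), F * F =< P, P mod F =:= 0 ).

With these stubs, the query ?- above(0, N), between(1, N, M), Sum is M+N, is_prime(Sum). reports N = 1, M = 1, Sum = 2 first, and each ; resumes the suspended loops exactly as described.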
I first saw this in Peter Norvig's AI book's implementation of Prolog in Common Lisp. He used mapping (Common Lisp's mapcan, which is concatMap in Haskell or flatMap in many other languages) as a looping construct though, and it took me years to see that nested loops are what it is really all about.
A conjunction of goals is expressed as the nesting of the loops; a disjunction of goals is expressed as the alternatives to loop through.
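A tiny toplevel illustration of that correspondence:

?- ( X = 1 ; X = 2 ), ( Y = a ; Y = b ).
X = 1, Y = a ;
X = 1, Y = b ;
X = 2, Y = a ;
X = 2, Y = b.

The conjunction of the two disjunctions enumerates its answers exactly like two nested loops of two iterations each.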
A further twist is that the nested loops' structure isn't fixed from the outset. It is fluid: the nested loops of a given loop can be created depending on the current state of that loop, i.e. depending on the current alternative being explored there; the loops are written as we go. In (most of the) languages where such dynamic creation of nested loops is impossible, it can be encoded with recursion, i.e. function invocation, inside the loops. (Here's one example, with some pseudocode.)
If we keep all such loops (created for each of the alternatives) in memory even after they are finished, what we get is the AND-OR tree (mentioned in the other answer), created while the search space is being explored and the solutions are found.
(Non-coincidentally, this fluidity is also the essence of "monad": nondeterminism is modeled by the list monad, and the essential operation of the list monad is the flatMap operation which we saw above. With a fluid structure of loops it is "Monad"; with a fixed structure, "Applicative Functor"; simple loops with no structure (no nesting at all): simply "Functor" (the concepts used in Haskell and the like). This helps demystify those, too.)
So, the proper slogan could be Backtracking is like Nested Loops, either fixed, known from the outset, or dynamically-created as we go. It's a bit longer though. :)
Here's also a Prolog example, which "as if creates the code to be run first (N nested loops for a given value of N), and then runs it." (There's even a whole dedicated tag for it on SO, too, it turns out, recursive-backtracking.)
And here's one in Scheme ("creates nested loops with the solution being accessible in the innermost loop's body"), and a C++ example ("create n nested loops at run-time, in effect enumerating the binary encoding of 2^n, and print the sums out from the innermost loop").

There is a big difference between recursion in functional/imperative programming languages and Prolog (and it really became clear to me only in the last 2 weeks or so):
In functional/imperative programming, you recurse down a call chain, then come back up, unwinding the stack, then output the result. It's over.
In Prolog, you recurse down an AND-OR tree (really, a tree of alternating AND and OR nodes), selecting a predicate to call on an OR node (the "choicepoint"), from left to right, and calling every predicate in turn on an AND node, also from left to right. An acceptable tree has exactly one predicate returning TRUE under each OR node, and all predicates returning TRUE under each AND node. Once an acceptable tree has been constructed, by the very search procedure, we are (i.e. the "search cursor" is) on a rightmost bottommost node.
Success in constructing an acceptable tree also means a solution to the query entered at the Prolog Toplevel (the REPL) has been found: The variable values are output, but the tree is kept (unless there are no choicepoints).
And this is also important: all variables are global in the sense that if a variable X has been passed all the way down the call chain from predicate to predicate to the rightmost bottommost node, then constrained at the last possible moment by unifying it with 2 for example, X = 2, then the Prolog Toplevel is aware of that without further ado: nothing needs to be passed back up the call chain.
If you now press ;, the search doesn't restart at the top of the tree, but at the bottom, i.e. at the current cursor position: the nearest parent OR node is asked for more solutions. This may entail a lot of search until a new acceptable tree has been constructed, and we are at a new rightmost bottommost node. The new variable values are output and you may again enter ;.
This process cycles until no acceptable tree can be constructed any longer, upon which false is output.
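Going back to the question's permutation/2, this is exactly what happens there. After the first answer, the cursor sits at the deepest del/3 choicepoint, so pressing ; resumes the search deep inside the recursion rather than at the top:

?- permutation([1,2,3], P).
P = [1, 2, 3] ;
P = [1, 3, 2] ;
P = [2, 1, 3] ;
P = [2, 3, 1] ;
P = [3, 1, 2] ;
P = [3, 2, 1] ;
false.

Note how the last elements vary fastest: the innermost choicepoints are retried first.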
Note that having this AND-OR tree as an inspectable and modifiable data structure at runtime allows some magical tricks to be deployed.
There is bound to be a lot of power in debugging tools which record this tree to help the user who gets the dreaded sphinxian false from a Prolog program that is supposed to work. There are now time-traveling debuggers for functional and imperative languages, after all...

Related

Struggling with building an intuition for recursion

Though I have studied recursion and am able to understand some recursive programs, I am still not able to obtain a solution using recursion as intuitively as I do using iteration. Is there any course or track available for building an intuition for recursion? How can one master the concept of recursion?
If you want to gain a thorough understanding of how recursion works, I highly recommend that you start with understanding mathematical induction, as the two are very closely related, if not arguably identical.
Recursion is a way of breaking down seemingly complicated problems into smaller bits. Consider the trivial example of the factorial function.
def factorial(n):
    if n < 2:
        return 1
    return n * factorial(n - 1)
To calculate factorial(100), for example, all you need is to calculate factorial(99) and multiply the result by 100. This follows from the familiar definition of the factorial: n! = n * (n-1)! for n >= 1.
Here are some tips for coming up with a recursive solution:
Assume you know the result returned by the immediately preceding recursive call (e.g. in calculating factorial(100), assume you already know the value of factorial(99). How do you go from there?)
Consider the base case (i.e. when should the recursion come to a halt?)
The first bullet point might seem rather abstract, but all it means is this: a large portion of the work has already been done. How do you go from there to complete the task? In the case of the factorial, factorial(99) constituted this large portion of work. In many cases, you will find that identifying this portion of work simply amounts to examining the argument to the function (e.g. n in factorial), and assuming that you already have the answer to func(n - 1).
Here's another example for concreteness. Let's say we want to reverse a string without using built-in functions. Using recursion, we might assume that string[:-1], the substring up to (but not including) the very last character, has already been reversed. Then all that is needed is to put the last remaining character at the front. Using this inspiration, we might come up with the following recursive solution:
def my_reverse(string):
    if not string:  # base case: empty string
        return string  # return empty string, nothing to reverse
    return string[-1] + my_reverse(string[:-1])
With all of this said, recursion is built on mathematical induction, and the two are inseparable ideas. In fact, one can easily prove that recursive algorithms work using induction. I highly recommend that you check out this lecture.
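To sketch how such a proof goes (my wording, not taken from the lecture), take my_reverse above. Claim: my_reverse(s) returns the reverse of s for every string s. Base case: for the empty string, my_reverse returns it unchanged, and the reverse of the empty string is the empty string. Inductive step: assume the claim holds for every string of length n. For s of length n+1, my_reverse returns s[-1] + my_reverse(s[:-1]); by the hypothesis, my_reverse(s[:-1]) is the reverse of the first n characters, and putting the last character in front of that is exactly the reverse of s.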

Recursive thinking

I would like to ask if it is really necessary to track every recursive call when writing it, because I am having trouble when the recursive call is inside a loop, or inside multiple nested for loops. I just get lost when I am trying to understand what is happening.
Do you have any advice on how to approach recursive problems and how to visualize them? I have already read a lot about it but I haven't found a satisfying answer yet. I understand, for example, how the factorial or the Fibonacci recursion works. I get lost, for example, when I am trying to print all combinations of the numbers 1 to 5 of length 3, or all the solutions to the n-queens problem.
I had a similar problem; try drawing a tree-like structure that keeps track of each recursive call, where a node is a function and every child node of that node is a recursive call made from that function.
Everyone may have a different mental approach towards modeling a recursive problem. If you can solve the n-queens problem in a non-recursive way, then you are just fine. It is certainly helpful to grasp the concept of recursion to break down a problem, though. If you are up for the mental exercise, then I suggest a textbook on PROLOG. It is fun and very much teaches recursion from the very beginning.
Attempting a bit of a brain dump on n-queens. It goes like "how would I do it manually", by trial and error. For n-queens, I propose that you call it 8-queens in your mind as a start, just to make it look more familiar and intuitive. "n" is not an iterator here but specifies the problem size.
you reckon that n-queens has a self-similarity, which is that you place single queens on a board - that is your candidate recursive routine
for a given board you have a routine to test whether the latest queen added is in conflict with the previously placed ones
for a given board you have a routine to find a position for the queen that you have not tested yet, if that test is not successful for the current position
you print out all queen positions if the queen you just placed was the nth (last) queen
otherwise (if the current queen was validly placed) you position an additional queen
The above is your program. Your routine will pass a list of positions of earlier queens. The first invocation is with an empty list.
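A minimal Prolog sketch of exactly that plan (the predicate names are mine; queens are placed one per column, so a partial board is just the list of row numbers of the earlier queens, most recent first):

% queens(N, Qs): place N queens on an N x N board;
% Qs lists the row of each queen, latest placement first.
queens(N, Qs) :- place(N, N, [], Qs).

place(0, _, Placed, Placed).
place(K, N, Placed, Qs) :-
    K > 0,
    between(1, N, Row),           % find a position not yet tested...
    no_conflict(Row, Placed, 1),  % ...and test it against earlier queens
    K1 is K - 1,
    place(K1, N, [Row|Placed], Qs).

% no_conflict(Row, Placed, D): a queen in Row, D columns away from
% the nearest placed queen, attacks none of the queens in Placed.
no_conflict(_, [], _).
no_conflict(Row, [Q|Qs], D) :-
    Row =\= Q,                    % not the same row
    abs(Row - Q) =\= D,           % not the same diagonal
    D1 is D + 1,
    no_conflict(Row, Qs, D1).

?- queens(8, Qs). then prints one placement, and each ; backtracks into the recursion for the next one.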

prolog recursion

I am making a function that will give me a list of all possible elements. In each iteration it finds an answer, but after the recursion I am only getting the last answer back. How can I make it give back every single answer?
Thank you.
The problem is that I am trying to find all possible distributions of a list into other lists. The code:
addIn(_, [], Result, Result).
addIn(C, [Element|Rest], [F|R], Result) :-
    member(Members, [F|R]),
    sumlist(Members, Sum),
    sumlist([Element], ElementLength),
    Cap is Sum + ElementLength,
    (   Cap =< Ca,
        append([Element], Members, New)....
By calling test I am getting back the whole list of possible answers. Now if I try something that will fail, like
bp(3,11,[8,2,4,6,1,8,4],Answer).
it just goes into an infinite loop. Moreover, if I change the
bp(NB,C,OL,A) :-
    addIn(C,OL,[[],[],[]],A);
    bp(NB,C,_,A).
to AND instead of OR, I get the error:
ERROR: is/2: Arguments are not sufficiently instantiated
Appreciate the help.
Thanks a lot @hardmath
It sounds like you are trying to write your own version of findall/3, perhaps limited to a special case of an underlying goal. Doing it generally (constructing a list of all solutions to a given goal) in a user-defined Prolog predicate is not possible without resorting to side-effects with assert/retract.
However a number of useful special cases can be implemented without such "tricks". So it would be helpful to know what predicate defines your "all possible elements". [It may also be helpful to state which Prolog implementation you are using, if only so that responses may include links to documentation for that version.]
One important special case is where the "universe" of potential candidates already exists as a list. In that case we are really asking to find the sublist of "all possible elements" that satisfy a particular goal.
findSublist([], _, []).
findSublist([H|T], Goal, [H|S]) :-
    call(Goal, H),   % apply Goal to H (the portable form of Goal(H))
    !,
    findSublist(T, Goal, S).
findSublist([_|T], Goal, S) :-
    findSublist(T, Goal, S).
Many Prologs will allow you to pass the name of a predicate Goal around as an atom and apply it via call/2 as above, but if you have a specific goal in mind, you can leave out the middle argument and just hardcode your particular condition into the middle clause of a similar implementation.
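For example, with a made-up helper even_number/1 as the Goal (my illustration, not part of the original answer):

even_number(X) :- 0 =:= X mod 2.

?- findSublist([1,2,3,4,5,6], even_number, S).
S = [2, 4, 6].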
Added in response to code posted:
I think I have a glimmer of what you are trying to do. It's hard to grasp because you are not going about it in the right way. Your predicate bp/4 has a single recursive clause, variously attempted using either AND or OR syntax to relate a call to addIn/4 to a call to bp/4 itself.
Apparently you expect that wrapping bp/4 around addIn/4 in this way will somehow cause addIn/4 to accumulate or iterate over its solutions. It won't. It might help you to see this if we analyze what happens to the arguments of bp/4.
You are calling the formal arguments bp(NB,C,OL,A) with simple integers bound to NB and C, with a list of integers bound to OL, and with A as an unbound "output" Answer. Note that nothing is ever done with the value NB, as it is not passed to addIn/4 and is passed unchanged to the recursive call to bp/4.
Based on the variable names used by addIn/4 and supporting predicate insert/4, my guess is that NB was intended to mean "number of bins". For one thing you set NB = 3 in your test/0 clause, and later you "hardcode" three empty lists in the third argument in calling addIn/4. Whatever Answer you get from bp/4 comes from what addIn/4 is able to do with its first two arguments passed in, C and OL, from bp/4. As we noted, C is an integer and OL a list of integers (at least in the way test/0 calls bp/4).
So let's try to state just what addIn/4 is supposed to do with those arguments. Superficially addIn/4 seems to be structured for self-recursion in a sensible way. Its first clause is a simple termination condition: when the second argument becomes an empty list, unify the third and fourth arguments, and that gives the "answer" A to its caller.
The second clause for addIn/4 seems to coordinate with that approach. As written it takes the "head" Element off the list in the second argument and tries to find a "bin" in the third argument that Element can be inserted into while keeping the sum of that bin under the "cap" given by C. If everything goes well, eventually all the numbers from OL get assigned to a bin, all the bins have totals under the cap C, and the answer A gets passed back to the caller. The way addIn/4 is written leaves a lot of room for improvement just in basic clarity, but it may be doing what you need it to do.
Which brings us back to the question of how you should collect the answers produced by addIn/4. Perhaps you are happy to print them out one at a time. Perhaps you meant to collect all the solutions produced by addIn/4 into a single list. To finish up the exercise I'll need you to clarify what you really want to do with the Answers from addIn/4.
Let's say you want to print them all out and then stop, with a special case being to print nothing if the arguments being passed in don't allow a solution. Then you'd probably want something of this nature:
newtest :-
    addIn(12, [7, 3, 5, 4, 6, 4, 5, 2], [[],[],[]], Answer),
    format("Answer = ~w\n", [Answer]),
    fail.
newtest.
This is a standard way of getting predicate addIn/4 to try all possible solutions, and then stop with the "fall-through" success of the second clause of newtest/0.
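If instead you want the solutions collected into a single list rather than printed, the built-in findall/3 mentioned at the start does exactly that:

?- findall(A, addIn(12, [7, 3, 5, 4, 6, 4, 5, 2], [[],[],[]], A), Answers).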
(Added) Suggestions about coding addIn/4:
It will make the code more readable and maintainable if the variable names are clear. I'd suggest using Cap instead of C as the first argument to addIn/4 and BinSum when you take the sum of items assigned to a "bin". Likewise Bin would be better where you used Members. In the third argument to addIn/4 (in the head of the second clause) you don't need an explicit list structure [F|R] since you never refer to either part F or R by itself. So there I'd use Bins.
Some of your predicate calls accomplish nothing that you cannot do more easily. For example, your second call to sumlist/2 involves a list with one item, so the sum is just the same as that item, i.e. ElementLength is the same as Element. Here you could replace both calls to sumlist/2 with one such call:
sumlist([Element|Bin],BinSum)
and then do your test comparing BinSum with Cap. Similarly your call to append/3 just adjoins the single item Element to the front of the list (I'm calling) Bin, so you could just replace what you have called New with [Element|Bin].
You have used an extra pair of parentheses around the last four subgoals (in the second clause for addIn/4). Since AND is implied for all the subgoals of this clause, using the extra pair of parentheses is unnecessary.
The code for insert/4 isn't shown, but it could be a source of some unintended backtracking in special cases. The better approach would be to make the first call (currently to member/2) your only point of indeterminacy, i.e. when you choose one of the bins, do it by replacing it with a free variable that gets unified with [Element|Bin] at the next-to-last step.
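In SWI-Prolog (and other systems with library(lists)) that replacement can be phrased with select/4, which picks an element of a list and substitutes another term in its place. A sketch, with choose_bin/3 being my name for it:

% The single choicepoint: pick some Bin out of Bins and
% replace it, in the same position, by [Element|Bin].
choose_bin(Element, Bins, NewBins) :-
    select(Bin, Bins, [Element|Bin], NewBins).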

Pure functional bottom up tree algorithm

Say I wanted to write an algorithm working on an immutable tree data structure that has a list of leaves as its input. It needs to return a new tree with changes made to the old tree going upwards from those leaves.
My problem is that there seems to be no way to do this purely functionally without reconstructing the entire tree, checking at each leaf whether it is in the list, because you always need to return a complete new tree as the result of an operation and you can't mutate the existing tree.
Is this a basic problem in functional programming that can only be avoided by using a better-suited algorithm, or am I missing something?
Edit: Not only do I want to avoid recreating the entire tree; the functional algorithm should also have the same time complexity as the mutating variant.
The most promising thing I have seen so far (and admittedly I have not been looking for very long) is the Zipper data structure: it basically keeps a separate structure, a reversed path from the node to the root, and does local edits on this separate structure.
It can do multiple local edits, most of which are constant time, and write them back to the tree (reconstructing the path to the root, which contains the only nodes that need to change) all in one go.
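To make that concrete on the simplest possible structure, a list, here is a sketch in Prolog, for continuity with the rest of this page (the names are mine; a tree zipper works the same way, with richer path entries):

% zip(Before, Focus, After): Before holds the already-passed
% elements in reverse, so moving and editing are O(1).
zipper_of_list([X|Xs], zip([], X, Xs)).

right(zip(B, X, [Y|A]), zip([X|B], Y, A)).   % move the focus right
edit(zip(B, _, A), X, zip(B, X, A)).         % replace the focus

% Rebuilding reconstructs only the stored path, nothing else.
list_of_zipper(zip(B, X, A), L) :-
    reverse(B, Front),
    append(Front, [X|A], L).

?- zipper_of_list([a,b,c], Z0), right(Z0, Z1), edit(Z1, x, Z2), list_of_zipper(Z2, L). gives L = [a, x, c] after two O(1) steps and one final rebuild.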
The Zipper is part of the standard library in Clojure (see the heading Zippers - Functional Tree Editing).
And there's the original paper by Huet with an implementation in OCaml.
Disclaimer: I have been programming for a long time, but only started functional programming a couple of weeks ago, and had never even heard of the problem of functional editing of trees until last week, so there may very well be other solutions I'm unaware of.
Still, it looks like the Zipper does most of what one could wish for. If there are other alternatives at O(log n) or below, I'd like to hear them.
You may enjoy reading
http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!248.entry
This depends on your functional programming language. For instance Haskell, which is a lazy functional programming language, calculates results at the last moment, i.e. when they are actually needed.
In your example the assumption is that because your function creates a new tree, the whole tree must be processed, whereas in reality the function is just passed on to the next function and only executed when necessary.
A good example of lazy evaluation is the Sieve of Eratosthenes in Haskell, which creates the prime numbers by eliminating the multiples of the current number from the list of numbers. Note that the list of numbers is infinite. Taken from here:
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
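Evaluating take 10 primes yields [2,3,5,7,11,13,17,19,23,29]: only as much of the infinite list is computed as the caller actually demands.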
I recently wrote an algorithm that does exactly what you described - https://medium.com/hibob-engineering/from-list-to-immutable-hierarchy-tree-with-scala-c9e16a63cb89
It works in 2 phases:
Sort the list of nodes by their depth in the hierarchy
Construct the tree from the bottom up
Some caveats:
No node mutation; the result is an immutable tree
The complexity is O(n)
It ignores cyclic references in the incoming list

Real-world examples of recursion [closed]

What are real-world problems where a recursive approach is the natural solution besides depth-first search (DFS)?
(I don't consider Tower of Hanoi, Fibonacci number, or factorial real-world problems. They are a bit contrived in my mind.)
A real-world example of recursion: how about anything involving a directory structure in the file system? Recursively finding files, deleting files, creating directories, and so on.
Here is a Java implementation that recursively prints out the content of a directory and its sub-directories.
import java.io.File;

public class DirectoryContentAnalyserOne {

    private static StringBuilder indentation = new StringBuilder();

    public static void main(String[] args) {
        // Here you pass the path to the directory to be scanned
        getDirectoryContent("C:\\DirOne\\DirTwo\\AndSoOn");
    }

    private static void getDirectoryContent(String filePath) {
        File currentDirOrFile = new File(filePath);

        if (!currentDirOrFile.exists()) {
            return;
        } else if (currentDirOrFile.isFile()) {
            System.out.println(indentation + currentDirOrFile.getName());
            return;
        } else {
            System.out.println("\n" + indentation + "|_" + currentDirOrFile.getName());
            indentation.append("   "); // indent three spaces per directory level
            for (String currentFileOrDirName : currentDirOrFile.list()) {
                // recursive call for every entry in this directory
                getDirectoryContent(currentDirOrFile + "\\" + currentFileOrDirName);
            }
            if (indentation.length() >= 3) {
                indentation.delete(indentation.length() - 3, indentation.length());
            }
        }
    }
}
There are lots of mathy examples here, but you wanted a real world example, so with a bit of thinking, this is possibly the best I can offer:
You find a person who has contracted a given contagious infection, which is non-fatal and fixes itself quickly (type A), except for one in 5 people (we'll call these type B) who become permanently infected with it, show no symptoms, and merely act as spreaders.
This creates quite annoying waves of havoc whenever a type B infects a multitude of type As.
Your task is to track down all the type Bs and immunise them to stop the backbone of the disease. Unfortunately, though, you can't administer a nationwide cure to all, because the people who are type As are deadly allergic to the cure that works for type B.
The way you would do this would be social discovery: given an infected person (type A), choose all their contacts in the last week, marking each contact on a heap. When you test that a person is infected, add them to the "follow up" queue. When a person is a type B, add them to the "follow up" queue at the head (because you want to stop this fast).
After processing a given person, select the person from the front of the queue and apply immunization if needed. Get all their contacts previously unvisited, and then test to see if they're infected.
Repeat until the queue of infected people becomes empty, and then wait for another outbreak.
(OK, this is a bit iterative, but it's an iterative way of solving a recursive problem, in this case breadth-first traversal of a population base trying to discover likely paths to problems. And besides, iterative solutions are often faster and more effective, and I compulsively remove recursion everywhere so much it's become instinctive... dammit!)
Quicksort, merge sort, and most other N-log N sorts.
Matt Dillard's example is good. More generally, any walking of a tree can generally be handled by recursion very easily. For instance, compiling parse trees, walking over XML or HTML, etc.
Recursion is often used in implementations of the Backtracking algorithm. For a "real-world" application of this, how about a Sudoku solver?
Recursion is appropriate whenever a problem can be solved by dividing it into sub-problems, that can use the same algorithm for solving them. Algorithms on trees and sorted lists are a natural fit. Many problems in computational geometry (and 3D games) can be solved recursively using binary space partitioning (BSP) trees, fat subdivisions, or other ways of dividing the world into sub-parts.
Recursion is also appropriate when you are trying to guarantee the correctness of an algorithm. Given a function that takes immutable inputs and returns a result that is a combination of recursive and non-recursive calls on the inputs, it's usually easy to prove the function is correct (or not) using mathematical induction. It's often intractable to do this with an iterative function or with inputs that may mutate. This can be useful when dealing with financial calculations and other applications where correctness is very important.
Surely many compilers out there use recursion heavily. Computer languages are inherently recursive themselves (i.e., you can embed 'if' statements inside other 'if' statements, etc.).
Disabling/setting read-only for all child controls in a container control. I needed to do this because some of the child controls were containers themselves.
public static void SetReadOnly(Control ctrl, bool readOnly)
{
    // set the control itself read-only
    SetControlReadOnly(ctrl, readOnly);

    if (ctrl.Controls != null && ctrl.Controls.Count > 0)
    {
        // recursively loop through all child controls
        foreach (Control c in ctrl.Controls)
            SetReadOnly(c, readOnly);
    }
}
People often sort stacks of documents using a recursive method. For example, imagine you are sorting 100 documents with names on them. First place documents into piles by the first letter, then sort each pile.
Looking up words in the dictionary is often performed by a binary-search-like technique, which is recursive.
In organizations, bosses often give commands to department heads, who in turn give commands to managers, and so on.
Famous Eval/Apply cycle from SICP
[Diagram of the eval-apply cycle; source: mit.edu]
Here is the definition of eval:
(define (eval exp env)
  (cond ((self-evaluating? exp) exp)
        ((variable? exp) (lookup-variable-value exp env))
        ((quoted? exp) (text-of-quotation exp))
        ((assignment? exp) (eval-assignment exp env))
        ((definition? exp) (eval-definition exp env))
        ((if? exp) (eval-if exp env))
        ((lambda? exp)
         (make-procedure (lambda-parameters exp)
                         (lambda-body exp)
                         env))
        ((begin? exp)
         (eval-sequence (begin-actions exp) env))
        ((cond? exp) (eval (cond->if exp) env))
        ((application? exp)
         (apply (eval (operator exp) env)
                (list-of-values (operands exp) env)))
        (else
         (error "Unknown expression type - EVAL" exp))))
Here is the definition of apply:
(define (apply procedure arguments)
  (cond ((primitive-procedure? procedure)
         (apply-primitive-procedure procedure arguments))
        ((compound-procedure? procedure)
         (eval-sequence
          (procedure-body procedure)
          (extend-environment
           (procedure-parameters procedure)
           arguments
           (procedure-environment procedure))))
        (else
         (error "Unknown procedure type - APPLY" procedure))))
Here is the definition of eval-sequence:
(define (eval-sequence exps env)
  (cond ((last-exp? exps) (eval (first-exp exps) env))
        (else (eval (first-exp exps) env)
              (eval-sequence (rest-exps exps) env))))
eval -> apply -> eval-sequence -> eval
Recursion is used in things like BSP trees for collision detection in game development (and other similar areas).
Real world requirement I got recently:
Requirement A: Implement this feature after thoroughly understanding Requirement A.
Recursion is applied to problems (situations) where you can break them up (reduce them) into smaller parts, and each part looks similar to the original problem.
Good examples of where things that contain smaller parts similar to itself are:
tree structure (a branch is like a tree)
lists (part of a list is still a list)
containers (Russian dolls)
sequences (part of a sequence looks like the next)
groups of objects (a subgroup is still a group of objects)
Recursion is a technique to keep breaking the problem down into smaller and smaller pieces, until one of those pieces becomes small enough to be a piece of cake. Of course, after you break them up, you then have to "stitch" the results back together in the right order to form a total solution of your original problem.
Some recursive sorting algorithms, tree-walking algorithms, map/reduce algorithms, divide-and-conquer are all examples of this technique.
In computer programming, most stack-based call-return languages already have the capabilities built in for recursion, i.e.:
break the problem down into smaller pieces ==> call itself on a smaller subset of the original data
keep track of how the pieces are divided ==> the call stack
stitch the results back together ==> stack-based return
Feedback loops in a hierarchical organization.
Top boss tells top executives to collect feedback from everyone in the company.
Each executive gathers his/her direct reports and tells them to gather feedback from their direct reports.
And on down the line.
People with no direct reports -- the leaf nodes in the tree -- give their feedback.
The feedback travels back up the tree with each manager adding his/her own feedback.
Eventually all the feedback makes it back up to the top boss.
This is the natural solution because the recursive method allows filtering at each level -- the collating of duplicates and the removal of offensive feedback. The top boss could send a global email and have each employee report feedback directly back to him/her, but there are the "you can't handle the truth" and the "you're fired" problems, so recursion works best here.
Parsers and compilers may be written in a recursive-descent method. Not the best way to do it, as tools like lex/yacc generate faster and more efficient parsers, but conceptually simple and easy to implement, so they remain common.
I have a system that uses pure tail recursion in a few places to simulate a state machine.
Some great examples of recursion are found in functional programming languages. In functional programming languages (Erlang, Haskell, ML/OCaml/F#, etc.), it's very common to have any list processing use recursion.
When dealing with lists in typical imperative OOP-style languages, it's very common to see lists implemented as linked lists ([item1 -> item2 -> item3 -> item4]). However, in some functional programming languages, you find that lists themselves are implemented recursively, where the "head" of the list points to the first item in the list, and the "tail" points to a list containing the rest of the items ([item1 -> [item2 -> [item3 -> [item4 -> []]]]]). It's pretty creative in my opinion.
This handling of lists, when combined with pattern matching, is VERY powerful. Let's say I want to sum a list of numbers:
let rec Sum numbers =
    match numbers with
    | [] -> 0
    | head :: tail -> head + Sum tail
This essentially says "if we were called with an empty list, return 0" (allowing us to break the recursion), else return the value of head + the value of Sum called with the remaining items (hence, our recursion).
For example, I might have a list of URLs; I then break apart the URLs that each URL links to, and then I reduce the total number of links to/from all URLs to generate "values" for a page (an approach that Google takes with PageRank, and that you can find defined in the original MapReduce paper). You can do this to generate word counts in a document, too. And many, many other things as well.
You can extend this functional pattern to any type of MapReduce code where you take a list of something, transform it, and return something else (whether another list, or some zip operation over the list).
XML, or traversing anything that is a tree. Although, to be honest, I pretty much never use recursion in my job.
A "real-world" problem solved by recursion would be nesting dolls. Your function is OpenDoll().
Given a stack of them, you would recursively open the dolls, calling OpenDoll() if you will, until you've reached the innermost doll.
Parsing an XML file.
Efficient search in multi-dimensional spaces, e.g. quadtrees in 2D, octrees in 3D, k-d trees, etc.
Hierarchical clustering.
Come to think of it, traversing any hierarchical structure naturally lends itself to recursion.
Template metaprogramming in C++, where there are no loops and recursion is the only way.
Suppose you are building a CMS for a website, where your pages are in a tree structure, with say the root being the home-page.
Suppose also your {user|client|customer|boss} requests that you place a breadcrumb trail on every page to show where you are in the tree.
For any given page n, you may want to walk up to the parent of n, and its parent, and so on, recursively building a list of nodes back up to the root of the page tree.
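For continuity with the rest of this page, here is that walk sketched in Prolog rather than in SQL (the parent/2 facts are hypothetical stand-ins for the page table):

% parent(Child, Parent): a hypothetical page hierarchy.
parent(contact, about).
parent(about, home).

% breadcrumbs(Page, Trail): the path from Page up to the root.
breadcrumbs(Page, [Page]) :-
    \+ parent(Page, _).          % the root has no parent
breadcrumbs(Page, [Page|Trail]) :-
    parent(Page, Up),
    breadcrumbs(Up, Trail).

?- breadcrumbs(contact, T). gives T = [contact, about, home], which you would reverse for display.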
Of course, you're hitting the db several times per page in that example, so you may want to use some SQL aliasing where you look up the page table as a, and the page table again as b, and join a.id with b.parent, so you make the database do the recursive joins. It's been a while, so my syntax is probably not helpful.
Then again, you may just want to only calculate this once and store it with the page record, only updating it if you move the page. That'd probably be more efficient.
Anyway, that's my $.02
You have an organization tree that is N levels deep. Several of the nodes are checked, and you want to expand out only those nodes that have been checked.
This is something that I actually coded.
It's nice and easy with recursion.
In my job we have a system with a generic data structure that can be described as a tree. That means that recursion is a very effective technique to work with the data.
Solving it without recursion would require a lot of unnecessary code. The problem with recursion is that it is not easy to follow what happens. You really have to concentrate when following the flow of execution. But when it works the code is elegant and effective.
Calculations for finance/physics, such as compound averages.
Parsing a tree of controls in Windows Forms or WebForms (.NET Windows Forms / ASP.NET).
The best example I know is quicksort; it is a lot simpler with recursion. Take a look at:
shop.oreilly.com/product/9780596510046.do
www.amazon.com/Beautiful-Code-Leading-Programmers-Practice/dp/0596510047
(Click on the first subtitle under the chapter 3: "The most beautiful code I ever wrote").
Phone and cable companies maintain a model of their wiring topology, which in effect is a large network or graph. Recursion is one way to traverse this model when you want to find all parent or all child elements.
Since recursion is expensive from a processing and memory perspective, this step is commonly only performed when the topology is changed and the result is stored in a modified pre-ordered list format.
Inductive reasoning, the process of concept-formation, is recursive in nature. Your brain does it all the time, in the real world.
Ditto the comment about compilers. The abstract syntax tree nodes naturally lend themselves to recursion. All recursive data structures (linked lists, trees, graphs, etc.) are also more easily handled with recursion. I do think that most of us don't get to use recursion a lot once we are out of school because of the types of real-world problems, but it's good to be aware of it as an option.

Resources