Difference between recursion and iteration

What is the difference? Are these the same? If not, can someone please give me an example?
From Merriam-Webster:
Iteration - 1 : the action or a process of iterating or repeating: as a : a procedure in which repetition of a sequence of operations yields results successively closer to a desired result b : the repetition of a sequence of computer instructions a specified number of times or until a condition is met
Recursion - 3 : a computer programming technique involving the use of a procedure, subroutine, function, or algorithm that calls itself one or more times until a specified condition is met at which time the rest of each repetition is processed from the last one called to the first

We can distinguish (as is done in SICP) recursive and iterative procedures from recursive and iterative processes. The former are as your definition describes, where recursion is basically the same as mathematical recursion: a recursive procedure is defined in terms of itself. An iterative procedure repeats a block of code with a loop statement. A recursive process, however, is one that takes non-constant (e.g. O(n) or O(lg(n)) space) to execute, while an iterative process takes O(1) (constant) space.
For mathematical examples, the Fibonacci numbers are defined recursively: F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1.
Sigma notation is analogous to iteration: the sum Σ_{i=a}^{b} f(i) is built up term by term, as is Pi notation for products. Similar to how some (mathematical) recursive formulae can be rewritten as iterative ones, some (but not all) recursive processes have iterative equivalents. All recursive procedures can be transformed into iterative ones by keeping track of partial results in your own data structure, rather than using the function call stack.
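A minimal sketch of this distinction, in Python rather than SICP's Scheme (the function names are illustrative): both definitions are recursive procedures, but the first generates a recursive process and the second an iterative one.

def factorial_recursive_process(n):
    # The multiplication happens after the recursive call returns,
    # so one deferred operation piles up per call: O(n) space.
    if n == 0:
        return 1
    return n * factorial_recursive_process(n - 1)

def factorial_iterative_process(n, acc=1):
    # All state is carried in the arguments; nothing is deferred.
    # With tail-call elimination this runs in O(1) space (CPython
    # does not eliminate tail calls, so its frames still stack up).
    if n == 0:
        return acc
    return factorial_iterative_process(n - 1, acc * n)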

One form can be converted to the other, with one notable restriction: many "popular" languages (C, Java, standard Python) do not support TCO/TCE (tail-call optimization/tail-call elimination), so each time a method calls itself recursively it pushes extra data onto the stack.
So in C and Java, iteration is idiomatic, whereas in Scheme or Haskell, recursion is idiomatic.
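As a rough illustration of that restriction (a hypothetical Python example, since CPython is one of the no-TCE implementations mentioned): the recursive version hits the interpreter's recursion limit where the loop does not.

def count_down_recursive(n):
    # Each self-call adds a frame to the call stack.
    if n == 0:
        return 0
    return count_down_recursive(n - 1)

def count_down_iterative(n):
    # Constant stack usage, however large n is.
    while n > 0:
        n -= 1
    return 0

print(count_down_iterative(1_000_000))   # fine
try:
    count_down_recursive(1_000_000)       # exceeds the default limit (about 1000 frames)
except RecursionError:
    print("RecursionError: no tail-call elimination here")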

As per the definitions you mentioned, these two are very different. In iteration there is no self-calling, but in recursion a function calls itself.
For example, here is an iterative algorithm for calculating a factorial:
fact = 1
for count = 1 to n
    fact = fact * count
end for
And the recursive version
function factorial(n)
    if (n == 1) return 1
    else
        return n * factorial(n - 1)
    end if
end function
Generally
Recursive code is more succinct but uses a larger amount of memory. Sometimes recursion can be converted to iteration using dynamic programming.
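A small sketch of that last point (illustrative Python, not part of the original answer): the succinct recursive Fibonacci recomputes subproblems and grows the call stack, while a dynamic-programming rewrite iterates from the base cases upward.

def fib_recursive(n):
    # Succinct, but exponential time and O(n) stack depth.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Dynamic-programming rewrite: build up from the base cases,
    # keeping only the two most recent results.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a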

Here's a Lisp function for finding the length of a list. It is recursive:
(defun recursive-list-length (L)
  "A recursive implementation of list-length."
  (if (null L)
      0
      (1+ (recursive-list-length (rest L)))))
It reads: "the length of a list is either 0 if that list is empty, or 1 plus the length of the sub-list starting with the second element."
And this is an implementation of strlen - the C function finding the length of a nul-terminated char* string. It is iterative:
size_t strlen(const char *s)
{
    size_t n;

    n = 0;
    while (*s++)
        n++;
    return(n);
}
Your goal is to repeat some operation. Using iteration, you employ an explicit loop (like the while loop in the strlen code). Using recursion, your function calls itself with (usually) a smaller argument, and so on until a boundary condition ((null L) in the code above) is met. This also repeats the operation, but without an explicit loop.

Recursion:
E.g., take the Fibonacci series. To get any Fibonacci number we have to know the two before it, so we perform the same operation on every smaller number, each call in turn invoking the same method:
fib(5) = fib(4) + fib(3)
fib(4) = fib(3) + fib(2)
.
.
i.e. reusing the method fib.
Iteration is looping, like adding 1+1+1+1+1 (iteratively adding) to get 5, or computing 3*3*3*3*3 (iteratively multiplying) to get 3^5.

For a good example of the above, consider recursive v. iterative procedures for depth-first search. It can be done using language features via recursive function calls, or in an iterative loop using a stack, but the process is inherently recursive.
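A sketch of that comparison in Python (graph, start and visited are illustrative names; the graph is an adjacency-list dict): both versions perform the same inherently recursive process, but the second makes the recursion's implicit stack explicit.

def dfs_recursive(graph, node, visited=None):
    # The call stack remembers where to resume after each neighbour.
    if visited is None:
        visited = set()
    visited.add(node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs_recursive(graph, neighbour, visited)
    return visited

def dfs_iterative(graph, start):
    # Same traversal, but the pending nodes live in an explicit stack.
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            stack.extend(n for n in graph[node] if n not in visited)
    return visited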

For the difference between recursive and non-recursive implementations: recursive implementations are a bit easier to verify for correctness; non-recursive implementations are a bit more efficient. (Algorithms, 4th Edition)

Related

Can memoization be used with an iterative solution in dynamic programming?

For example, the Fibonacci sequence can be solved with memoization when using recursion. But can solving Fibonacci iteratively (stack + while loop) also take advantage of memoization?
Of course ... start at the base case F(0) and F(1), and compute values. Keep them all in an array, indexed by the functional subscript. When you get an input argument greater than your current array extent, compute more values. When you get one within the current bounds, simply return that value from the array.
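A minimal sketch of that scheme in Python (fib and memo are illustrative names): the cache is an array indexed by n, extended with a plain while loop whenever a larger argument arrives.

memo = [0, 1]  # memo[i] holds F(i); seeded with the base cases F(0) and F(1)

def fib(n):
    # Extend the table iteratively if n is beyond the current extent.
    while len(memo) <= n:
        memo.append(memo[-1] + memo[-2])
    # Anything already within bounds is just looked up.
    return memo[n]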

In functional languages, how is the compiler able to translate non-tail recursion into loops to avoid stack overflows (if at all)?

I've been recently learning about functional languages and how many don't include for loops. While I don't personally view recursion as more difficult than a for loop (and often easier to reason about), I realized that many examples of recursion aren't tail recursive and therefore cannot use simple tail-recursion optimization to avoid stack overflows. According to this question, all iterative loops can be translated into recursion, and those iterative loops can be transformed into tail recursion, so it confuses me when the answers on a question like this suggest that you have to explicitly manage the translation of your recursion into tail recursion yourself if you want to avoid stack overflows. It seems like it should be possible for a compiler to do all the translation from either recursion to tail recursion, or from recursion straight to an iterative loop, without stack overflows.
Are functional compilers able to avoid stack overflows in more general recursive cases? Are you really forced to transform your recursive code in order to avoid stack overflows yourself? If they aren't able to perform general recursive stack-safe compilation, why aren't they?
Any recursive function can be converted into a tail recursive one.
For instance, consider the transition function of a Turing machine, that is, the mapping from a configuration to the next one. To simulate the Turing machine you just need to iterate the transition function until you reach a final state, which is easily expressed in tail-recursive form. Similarly, a compiler typically translates a recursive program into an iterative one simply by adding a stack of activation records.
You can also give a translation into tail-recursive form using continuation-passing style (CPS). As a classical example, consider the Fibonacci function. This can be expressed in CPS style in the following way, where the second parameter is the continuation (essentially, a callback function):
def fibc(n, cont):
    if n <= 1:
        return cont(n)
    return fibc(n - 1, lambda a: fibc(n - 2, lambda b: cont(a + b)))
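For instance, fibc(10, lambda x: x) evaluates to 55: the identity continuation simply hands back the final Fibonacci value.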
Again, you are simulating the recursion stack using a dynamic data structure: in this case, lambda abstractions.
The use of dynamic structures (lists, stacks, functions, etc.) in all the previous examples is essential. That is to say, in order to simulate a generic recursive function iteratively, you cannot avoid dynamic memory allocation, and hence you cannot avoid stack overflow, in general.
So, memory consumption is not only related to the iterative/recursive nature of the program. On the other hand, if you prevent dynamic memory allocation, your programs are essentially finite state machines, with limited computational capabilities (more interesting would be to parametrise memory according to the size of the inputs).
In general, in the same way as you cannot predict termination, you cannot predict unbounded memory consumption of your program: working with a Turing-complete language, at compile time you cannot avoid divergence, and you cannot avoid stack overflow.
Tail Call Optimization:
The natural way to handle arguments and calls is to do the cleanup when exiting, that is, when returning.
For tail calls to work you need to alter this so that the tail call inherits the current frame. Instead of making a new frame, the call reuses the current one, so that the next call returns to the current function's caller instead of to this function (which, being a tail call, would only clean up and return anyway).
Thus TCO is all about cleaning up before the last call.
Continuation Passing Style - make tail calls out of everything
A compiler can change the code so that it only performs primitive operations and passes the results to continuations. Thus the stack usage gets moved onto the heap, since the computation to be continued is packaged as a function.
An example is:
function hypotenuse(k1, k2) {
    return sqrt(add(square(k1), square(k2)))
}
becomes
function hypotenuse(k, k1, k2) {
    (function (sk1) {
        (function (sk2) {
            (function (ar) {
                k(sqrt(ar));
            })(add(sk1, sk2));
        })(square(k2));
    })(square(k1));
}
Notice every function has exactly one call now and the order of evaluation is set.
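For example, hypotenuse(k, 3, 4) no longer returns the result; it finishes by calling the continuation k with 5 (assuming square, add and sqrt are the ordinary math operations).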
According to this question, all iterative loops can be translated into recursion
"Translated" might be a bit of a stretch. The proof that for every iterative loop there is an equivalent recursive program is trivial if you understand Turing completeness: since a Turing machine can be implemented using strictly iterative structures and strictly recursive structures, every program that can be expressed in an iterative language can be expressed in a recursive language, and vice-versa. This means that for every iterative loop there is an equivalent recursive construct (and the other way around). However, that doesn't mean we have some automated way of transforming one into the other.
and those iterative loops can be transformed into tail recursion
Tail recursion can perhaps be easily transformed into an iterative loop, and the other way around. But not all recursion is tail recursion. Here's an example. Suppose we have some binary tree. It consists of nodes. Each node can have a left and a right child and a value. If a node has no children, then isLeaf returns true for it. We'll assume there's some function max that returns the maximum of two values, and if one of the values is null it returns the other one. Now we want to define a function that finds the maximum value among all the leaf nodes. Here it is in some pseudo-code I cooked up.
findmax(node) {
    if (node == null) {
        return null
    }
    if (node.isLeaf) {
        return node.value
    } else {
        return max(findmax(node.left), findmax(node.right))
    }
}
There are two recursive calls in findmax, so we can't optimize for tail recursion. We need the results of both before we can supply them to the max function and determine the result of the call for the current node.
Now, there may be a way of getting the same result, using recursion and only a single tail-recursive call. It is functionally equivalent, but it is a different algorithm. Compilers can do a lot of transformations to create a functionally equivalent program with lots of optimizations, but they're not quite clever enough to create functionally equivalent algorithms.
Even the transformation of a function that only calls itself recursively once into a tail-recursive version would be far from trivial. Such an adaptation usually employs some argument passed into the recursive invocation that is used as an "accumulator" for the current results.
Look at the following naive implementation for calculating the factorial of a number (e.g. fact(5) = 5*4*3*2*1):
fact(number) {
    if (number == 1) {
        return 1
    } else {
        return number * fact(number - 1)
    }
}
It's not tail-recursive. But it can be made so in this way:
fact(number, acc) {
    if (number == 1) {
        return acc
    } else {
        return fact(number - 1, number * acc)
    }
}
// Helper function
fact(number) {
    return fact(number, 1)
}
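To see the accumulator at work: fact(4) becomes fact(4, 1) → fact(3, 4) → fact(2, 12) → fact(1, 24), which returns 24; each multiplication is done before the recursive call, so nothing is left pending on the stack.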
This requires an interpretation of what is being done. Recognizing the case for stuff like this is easy enough, but what if you call a function instead of a multiplication? How will the compiler know that for the initial call the accumulator must be 1 and not, say, 0? How do you translate this program?
recsub(number) {
    if (number == 1) {
        return 1
    } else {
        return number - recsub(number - 1)
    }
}
This is as yet outside the scope of the sort of compiler we have now, and may in fact always be.
Maybe it would be interesting to ask this on the computer science Stack Exchange to see if they know of some papers or proofs that investigate this more in-depth.

How to count the digits of a given number in Scheme, recursively and iteratively?

I know how to calculate the sum of the digits of a number:
(define (sum-of-digits x)
  (if (= x 0)
      0
      (+ (modulo x 10)
         (sum-of-digits (/ (- x (modulo x 10))
                           10)))))
But I just don't have a clue how to count the digits. I also don't know how to do it as a linear iterative process.
Thanks!!
You are very close to the answer.
In order to figure out how to change sum-of-digits into count-of-digits, try writing some test cases. A test case must include an example of calling the function, and also the expected result.
As a side note, this is an example of generative recursion, and you shouldn't be tackling it until you've done a bunch of problems like "add the numbers in a list", "count the elements in a list", etc.
Some hints regarding each of your questions:
For counting digits, you don't need to add the current digit (as is the case in your code). Just add 1.
There are several strategies for transforming a recursive solution (like yours) into a tail recursion (one that generates a linear iterative process); see the sketch after these hints. Here's a short list:
Add an extra parameter to the function to hold the result accumulated so far.
Pass the initial value for the accumulator the first time you call the procedure; typically this is the same value that you'd have returned at the base case in a "normal" (non-tail-recursive) recursion.
Return the accumulator at the base case of the recursion.
At the recursive step, update the accumulated result with a new value and pass it to the recursive call.
And the most important: when the time comes to make the recursive call, make sure to call it as the last expression, with no "additional work" to be performed.
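A minimal sketch of these hints in Python rather than Scheme (count_digits and acc are illustrative names; a Scheme version has exactly the same shape), mirroring the structure of sum-of-digits above:

def count_digits(x, acc=0):
    # acc starts at 0, the value the plain recursion would return at its base case.
    if x == 0:
        return acc                         # base case: hand back the accumulated count
    return count_digits(x // 10, acc + 1)  # add 1 for the current digit, recurse in tail position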

Is there any limit to recursion in lisp?

I enjoy using recursion whenever I can; it seems like a much more natural way to loop over something than actual loops. I was wondering if there is any limit to recursion in Lisp? Like there is in Python, where it freaks out after about 1000 calls?
Could you use it for say, a game loop?
Testing it out now, simple counting recursive function. Now at >7000000!
Thanks a lot
First, you should understand what a tail call is about.
Tail calls are calls that do not consume stack.
Now you need to recognize when you are consuming stack.
Let's take the factorial example:
(defun factorial (n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))
This is the non-tail-recursive implementation of factorial.
Why? Because in addition to the return from factorial, there is a pending computation:
(* n ..)
So you are stacking n each time you call factorial.
Now let's write the tail recursive factorial:
(defun factorial-opt (n &key (result 1))
  (if (= n 1)
      result
      (factorial-opt (- n 1) :result (* result n))))
Here, the result is passed as an argument to the function.
So you're also consuming stack, but the difference is that the stack size stays constant.
Thus, the compiler can optimize it by using only registers and leaving the stack empty.
factorial-opt is then faster, but less readable.
factorial is limited by the size of the stack while factorial-opt is not.
So you should learn to recognize tail-recursive functions in order to know whether the recursion is limited.
There might be some compiler technique to transform a non-tail recursive function into a tail recursive one. Maybe someone could point out some link here.
Scheme mandates tail call optimization, and some CL implementations offer it as well. However, CL does not mandate it.
Note that for tail call optimization to work, you need to make sure that you don't have to return. E.g. a naive implementation of Fibonacci where there is a need to return and add to another recursive call, will not be tail call optimized and as a result you will run out of stack space.

Are there problems that cannot be written using tail recursion?

Tail recursion is an important performance optimisation strategy in functional languages because it allows recursive calls to consume constant stack space (rather than O(n)).
Are there any problems that simply cannot be written in a tail-recursive style, or is it always possible to convert a naively-recursive function into a tail-recursive one?
If so, one day might functional compilers and interpreters be intelligent enough to perform the conversion automatically?
Yes, actually you can take some code and convert every function call—and every return—into a tail call. What you end up with is called continuation-passing style, or CPS.
For example, here's a function containing two recursive calls:
(define (count-tree t)
  (if (pair? t)
      (+ (count-tree (car t)) (count-tree (cdr t)))
      1))
And here's how it would look if you converted this function to continuation-passing style:
(define (count-tree-cps t ctn)
  (if (pair? t)
      (count-tree-cps (car t)
                      (lambda (L) (count-tree-cps (cdr t)
                                                  (lambda (R) (ctn (+ L R))))))
      (ctn 1)))
The extra argument, ctn, is a procedure which count-tree-cps tail-calls instead of returning. (sdcvvc's answer says that you can't do everything in O(1) space, and that is correct; here each continuation is a closure which takes up some memory.)
I didn't transform the calls to car or cdr or + into tail-calls. That could be done as well, but I assume those leaf calls would actually be inlined.
Now for the fun part. Chicken Scheme actually does this conversion on all code it compiles. Procedures compiled by Chicken never return. There's a classic paper explaining why Chicken Scheme does this, written in 1994 before Chicken was implemented: CONS should not cons its arguments, Part II: Cheney on the M.T.A.
Surprisingly enough, continuation-passing style is fairly common in JavaScript. You can use it to do long-running computation, avoiding the browser's "slow script" popup. And it's attractive for asynchronous APIs. jQuery.get (a simple wrapper around XMLHttpRequest) is clearly in continuation-passing style; the last argument is a function.
It's true but not useful to observe that any collection of mutually recursive functions can be turned into a tail-recursive function. This observation is on a par with the old chestnut from the 1960s that control-flow constructs could be eliminated because every program could be written as a loop with a case statement nested inside.
What's useful to know is that many functions which are not obviously tail-recursive can be converted to tail-recursive form by the addition of accumulating parameters. (An extreme version of this transformation is the transformation to continuation-passing style (CPS), but most programmers find the output of the CPS transform difficult to read.)
Here's an example of a function that is "recursive" (actually it's just iterating) but not tail-recursive:
factorial n = if n == 0 then 1 else n * factorial (n-1)
In this case the multiply happens after the recursive call.
We can create a version that is tail-recursive by putting the product in an accumulating parameter:
factorial n = f n 1
  where f n product = if n == 0 then product else f (n-1) (n * product)
The inner function f is tail-recursive and compiles into a tight loop.
I find the following distinctions useful:
In an iterative or recursive program, you solve a problem of size n by
first solving one subproblem of size n-1. Computing the factorial function
falls into this category, and it can be done either iteratively or
recursively. (This idea generalizes, e.g., to the Fibonacci function, where
you need both n-1 and n-2 to solve n.)
In a recursive program, you solve a problem of size n by first solving two
subproblems of size n/2. Or, more generally, you solve a problem of size n
by first solving a subproblem of size k and one of size n-k, where 1 < k < n. Quicksort and mergesort are two examples of this kind of problem, which
can easily be programmed recursively, but is not so easy to program
iteratively or using only tail recursion. (You essentially have to simulate recursion using an explicit
stack.)
In dynamic programming, you solve a problem of size n by first solving all
subproblems of all sizes k, where k<n. Finding the shortest route from one
point to another on the London Underground is an example of this kind of
problem. (The London Underground is a multiply-connected graph, and you
solve the problem by first finding all points for which the shortest path
is 1 stop, then for which the shortest path is 2 stops, etc etc.)
Only the first kind of program has a simple transformation into tail-recursive form.
Any recursive algorithm can be rewritten as an iterative algorithm (perhaps requiring a stack or list) and iterative algorithms can always be rewritten as tail-recursive algorithms, so I think it's true that any recursive solution can somehow be converted to a tail-recursive solution.
(In comments, Pascal Cuoq points out that any algorithm can be converted to continuation-passing style.)
Note that just because something is tail-recursive doesn't mean that its memory usage is constant. It just means that the call-return stack doesn't grow.
You can't do everything in O(1) space (space hierarchy theorem). If you insist on using tail recursion, then you can store the call stack as one of the arguments. Obviously this doesn't change anything; somewhere internally, there is a call stack, you're simply making it explicitly visible.
If so, one day might functional compilers and interpreters be intelligent enough to perform the conversion automatically?
Such conversion will not decrease space complexity.
As Pascal Cuoq commented, another way is to use CPS; all calls are tail recursive then.
I don't think something like tak could be implemented using only tail calls. (not allowing continuations)
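As a sketch of the "store the call stack as one of the arguments" idea above (illustrative Python; trees are nested (left, right) tuples with numbers as leaves): every call is a tail call, yet the explicit stack argument grows just as the implicit call stack would.

def count_leaves(stack, acc=0):
    # stack holds the subtrees still to visit; acc holds the leaves counted so far.
    if not stack:
        return acc
    node, rest = stack[-1], stack[:-1]
    if isinstance(node, tuple):               # internal node: push both children
        return count_leaves(rest + list(node), acc)
    return count_leaves(rest, acc + 1)        # leaf: count it and continue

print(count_leaves([((1, 2), (3, (4, 5)))]))  # prints 5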
