What is the use of a recursive call in programming? - recursion

Where is recursion applied in industrial programming? I do understand the notion that it is a function that calls itself, but my question is: what is its major use in programming?

You can encounter recursive calls in many situations; the common ones are:
to traverse data structures that are recursive in nature (trees, graphs), as in the sketch below
to retry the same function in case of errors
for numeric calculations where the recursive notation brings clarity (if performance is critical, it's pretty common to turn them into loops, unless you use a language that optimizes tail calls)
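For instance, a minimal JavaScript sketch of the first case (the { value, children } node shape is just an assumption for illustration):

// Walking a tree whose shape mirrors the recursion: each call handles
// one node and recurses into its children.
function sumTree(node) {
  if (node === null) return 0;           // base case: empty subtree
  let total = node.value;
  for (const child of node.children) {
    total += sumTree(child);             // one recursive call per subtree
  }
  return total;
}

const tree = { value: 1, children: [{ value: 2, children: [] }, { value: 3, children: [] }] };
console.log(sumTree(tree)); // 6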

Related

Are there any modern languages not supporting recursion?

There is a lot of talk about loop-less (functional) programming languages, but I haven't heard of any modern recursion-less programming languages. I know COBOL doesn't (or didn't) support recursion, and I imagine the same is true of FORTRAN (or at the very least it would behave oddly), but are there any modern languages?
There are several reasons I am wondering about this...
From what I understand, for every recursive algorithm there exists an iterative implementation that is at least as performant (you can simply "simulate" recursion iteratively by managing the stack within the loop). So there is a) no performance gain and b) no capability gain from using recursion.
Recursion can have worse performance than iteration, as with naive recursive implementations of Fibonacci numbers. People like to point out that recursion can be just as fast when compilers optimize tail calls, but from my understanding tail-recursive algorithms are always trivial to convert to loops. So there is c) potential performance loss from recursion.
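To make that concrete, here is the Fibonacci case sketched in JavaScript (illustrative only):

// Naive recursive Fibonacci: exponential time, because fib(n-1) and
// fib(n-2) recompute the same subproblems over and over.
function fibRecursive(n) {
  if (n < 2) return n;
  return fibRecursive(n - 1) + fibRecursive(n - 2);
}

// The obvious iterative version: linear time, constant space.
function fibIterative(n) {
  let a = 0, b = 1;
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];
  }
  return a;
}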
Academics seem to have a thing for recursion, but then again most of them can't/don't really write production code. C++ (or object orientation in general) is living proof of how academic ideas can make their way into the business world while being mostly impractical. I wonder if recursion is just another example of this. (I know this is an opinionated topic, but the majority of accomplished software engineers I have heard talk agree on this at least to some extent.)
I have written recursive algorithms in production systems before, but I can't think of a time when I really benefited from using recursion. Recursive algorithms d) may or may not provide code readability advantages in some cases. I am not sure about this...
Most (all?) modern operating systems rely on specified calling conventions involving a call stack. So if you want to call any external library from within your program, your programming language/compiler has to comply with the system's calling convention. This makes it an easy step to adopt recursion in programming languages/compilers. However, if you created a recursion-less language you wouldn't need a call stack at all, which would have interesting effects in terms of memory management...
So I am not hating on recursion... I am just wondering whether the recursion-less language experiment has been done in the recent past and what the results were, as it seems like a practical experiment to me.

Which is more general: recursion or iteration?

Is recursion more general than iteration?
To me, iteration means repetitive control using language constructs other than subroutine calls (like loop constructs and/or explicit gotos), whereas recursion means repetitive control obtained using subroutine calls. Which of the two is more general?
Voted to close as likely to prompt opinion-based responses; my opinion-based response is: recursion is more general because:
it's simply a use case of function calls, which a language will already have; and
recursion captures both a direct one-to-one pattern of repetition and more complicated patterns, such as tree traversal, divide and conquer, etc.
Conversely, iteration tends to be a specific language-level construct allowing only a direct linear traversal.
In so far as it is not opinion-based, the most reasonable answer is that neither recursion nor iteration is more general than the other. A language can be Turing complete with recursion but no iteration (minimal Lisps are like that) and a language can also be Turing complete with iteration but no recursion (earlier versions of Fortran didn't support recursion). Both recursion and iteration are widely used. Iteration is probably more commonly used since for every person who learns programming with something like Lisp or Haskell there are probably a dozen who learn programming with things like Java or Visual Basic -- but I don't think that "most commonly used" is a good synonym for "general".

Can every recursive process be transformed into an iterative process?

I was reading the book Structure and Interpretation of Computer Programs, which draws a distinction between a recursive procedure and a recursive process, and similarly between an iterative procedure and an iterative process. So a recursive procedure could still generate an iterative process.
My question is: given a procedure which generates a recursive process, can you always write another procedure that achieves the same result but generates an iterative process?
The specific problem that I was trying to solve was to write a procedure which does an in-order traversal of a binary search tree but generates an iterative process. I know how you can use a stack to get an iterative procedure for this problem. However, that still generates a recursive process (correct me if I am wrong here).
Thanks,
Abhinav.
Some tasks are truly impossible to solve with linear iterative processes (e.g. tree recursion, which is impossible to convert to tail recursion). You either have to use the stack built into your platform, or re-create it yourself within the language (usually a much less efficient and uglier solution).
So if you define 'recursion' as 'using a stack to store different invocations of the same code', then yes, recursion sometimes is absolutely required.
If you define 'recursion' as 'a function in my language (eventually) calling itself', then you can get by without explicit recursion by re-implementing recursiveness yourself, as described above. This is only useful if your language doesn't provide recursive procedures, doesn't provide enough stack space, or has similar limitations. (For instance, early versions of Fortran didn't have recursive procedures. Of course, they also didn't have the dynamic data structures that you would need to simulate them! Personally, I have never come across an actual example where implementing pseudo-recursion was the right solution.)
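For the specific problem in the question, here is a sketch of the explicit-stack version in JavaScript (assuming nodes shaped { value, left, right }, with null for a missing child):

// In-order traversal of a binary search tree with no recursive calls:
// the array `stack` plays the role of the call stack.
function inOrder(root) {
  const stack = [];
  const out = [];
  let node = root;
  while (node !== null || stack.length > 0) {
    while (node !== null) {   // dive left, remembering the ancestors
      stack.push(node);
      node = node.left;
    }
    node = stack.pop();       // visit the leftmost unvisited node
    out.push(node.value);
    node = node.right;        // then continue in its right subtree
  }
  return out;
}

The procedure is iterative, but as the question observes, the explicit stack still carries the deferred state that characterizes a recursive process.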
Read this former SO post:
Design patterns for converting recursive algorithms to iterative ones
There are a lot of good answers there which may help you further.
Any tail-recursive process can be transformed into an iterative one.
But not all recursive processes can be transformed into iterative ones.
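As a sketch of the first claim in JavaScript: a tail-recursive definition and the loop it mechanically becomes.

// Tail-recursive factorial: the recursive call is the last thing the
// function does, so no work is pending when it returns.
function factTail(n, acc = 1) {
  if (n <= 1) return acc;
  return factTail(n - 1, n * acc);
}

// The same iterative process as a loop: the accumulator becomes a
// mutable local variable.
function factLoop(n) {
  let acc = 1;
  while (n > 1) {
    acc *= n;
    n -= 1;
  }
  return acc;
}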

What are the core concepts in functional programming?

In object-oriented programming, we might say the core concepts are:
encapsulation
inheritance
polymorphism
What would that be in functional programming?
There's no community consensus on what the essential concepts in functional programming are. In Why Functional Programming Matters (PDF), John Hughes argues that they are higher-order functions and lazy evaluation. In Wearing the Hair Shirt: A Retrospective on Haskell, Simon Peyton Jones says the real essential is not laziness but purity. Richard Bird would agree. But there's a whole crowd of Scheme and ML programmers who are perfectly happy to write programs with side effects.
As someone who has practiced and taught functional programming for twenty years, I can give you a few ideas that are widely believed to be at the core of functional programming:
Nested, first-class functions with proper lexical scoping are at the core. This means you can create an anonymous function at run time, whose free variables may be parameters or local variables of an enclosing function, and you get a value you can return, put into data structures, and so on. (This is the most important form of higher-order functions, but some higher-order functions (like qsort!) can be written in C, which is not a functional language.)
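A small JavaScript sketch of that idea (the names are invented for illustration):

// makeAdder returns an anonymous function whose free variable `n` is a
// parameter of the enclosing call; the closure keeps it alive after
// makeAdder returns.
function makeAdder(n) {
  return (x) => x + n;
}

const addFive = makeAdder(5); // a function value: storable, passable
console.log(addFive(3));      // 8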
Means of composing functions with other functions to solve problems. Nobody does this better than John Hughes.
Many functional programmers believe purity (freedom from effects, including mutation, I/O, and exceptions) is at the core of functional programming. Many functional programmers do not.
Polymorphism, whether it is enforced by the compiler or not, is a core value of functional programmers. Confusingly, C++ programmers call this concept "generic programming." When polymorphism is enforced by the compiler it is generally a variant of Hindley-Milner, but the more powerful System F is also a powerful basis for functional languages. And with languages like Scheme, Erlang, and Lua, you can do functional programming without a static type system.
Finally, a large majority of functional programmers believe in the value of inductively defined data types, sometimes called "recursive types". In languages with static type systems these are generally known as "algebraic data types", but you will find inductively defined data types even in material written for beginning Scheme programmers. Inductively defined types usually ship with a language feature called pattern matching, which supports a very general form of case analysis. Often the compiler can tell you if you have forgotten a case. I wouldn't want to program without this language feature (a luxury once sampled becomes a necessity).
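JavaScript has no built-in algebraic data types or pattern matching, but a tagged-object encoding can sketch the idea of case analysis over an inductively defined type (the shape example is hypothetical):

// A shape is either a circle or a rectangle; area() is a case analysis
// on the tag, with the default branch standing in for the compiler's
// "you forgot a case" warning.
const circle = (radius) => ({ tag: 'circle', radius });
const rect = (w, h) => ({ tag: 'rect', w, h });

function area(shape) {
  switch (shape.tag) {
    case 'circle': return Math.PI * shape.radius ** 2;
    case 'rect': return shape.w * shape.h;
    default: throw new Error('non-exhaustive case analysis');
  }
}

console.log(area(circle(1)));  // 3.141592653589793
console.log(area(rect(2, 3))); // 6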
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Functional programming has its roots in the lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as embellishments to the lambda calculus. - Wikipedia
In a nutshell,
Lambda Calculus
Higher Order Functions
Immutability
No side-effects
Not directly an answer to your question, but I'd like to point out that "object-oriented" and functional programming aren't necessarily at odds. The "core concepts" you cite have more general counterparts which apply just as well to functional programming.
Encapsulation, more generally, is modularisation. All purely functional languages that I know of support modular programming. You might say that those languages implement encapsulation better than the typical "OO" variety, since side-effects break encapsulation, and pure functions have no side-effects.
Inheritance, more generally, is logical implication, which is what a function represents. The canonical subclass -> superclass relation is a kind of implicit function. In functional languages, this is expressed with type classes or implicits (I consider implicits to be the more general of these two).
Polymorphism in the "OO" school is achieved by means of subtyping (inheritance). There is a more general kind of polymorphism known as parametric polymorphism (a.k.a. generics), which you will find to be supported by pure-functional programming languages. Additionally, some support "higher kinds", or higher-order generics (a.k.a. type constructor polymorphism).
What I'm trying to say is that your "core concepts of OO" aren't specific to OO in any way. I, for one, would argue that there aren't any core concepts of OO, in fact.
Let me repeat the answer I gave at one discussion in the Bangalore Functional Programming group:
A functional program consists only of functions. Functions compute values from their inputs. We can contrast this with imperative programming, where as the program executes, the values of mutable locations change. In other words, in C or Java, a variable called X refers to a location whose value can change. But in functional programming, X is the name of a value (not a location). Anywhere that X is in scope, it has the same value (i.e., it is referentially transparent). In FP, functions are also values. They can be passed as arguments to other functions. This is known as higher-order functional programming. Higher-order functions let us model an amazing variety of patterns. For instance, look at the map function in Lisp. It represents a pattern where the programmer needs to do 'something' to every element of a list. That 'something' is encoded as a function and passed as an argument to map.
As we saw, the most notable feature of FP is its freedom from side effects. If a function does anything more than compute a value from its input, then it is causing a side effect. Such functions are not allowed in pure FP. It is easy to test side-effect-free functions: there is no global state to set up before running the test and no global state to check after running it. Each function can be tested independently just by providing its input and examining the return value. This makes it easy to write automated tests. Another advantage of side-effect freedom is that it gives you better control over parallelism.
Many FP languages treat recursion and iteration correctly. They do this by supporting something called tail recursion: if a function calls itself, and that call is the last thing it does, the current stack frame is removed right away. In other words, if a function calls itself tail-recursively 1000 times, it does not grow the stack 1000 frames deep. This makes special looping constructs unnecessary in these languages.
Lambda calculus is the most boiled-down version of an FP language. Higher-level FP languages like Haskell get compiled to lambda calculus. It has only three syntactic constructs, but it is still expressive enough to represent any abstraction or algorithm.
My opinion is that FP should be viewed as a meta-paradigm. We can write programs in any style, including OOP, using the simple functional abstractions provided by the lambda calculus.
Thanks,
-- Vijay
Original discussion link: http://groups.google.co.in/group/bangalore-fp/browse_thread/thread/4c2cfa7985d7eab3
Abstraction, the process of making a function by parameterizing over some part of an expression.
Application, the process of evaluating a function by replacing its parameters with specific values.
At some level, that's all there is to it.
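A two-line JavaScript sketch of exactly those two operations:

// Abstraction: parameterize the expression x * x + 1 over x.
const f = (x) => x * x + 1;

// Application: replace the parameter with a specific value.
console.log(f(3)); // 10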
Though the question is older, I thought of sharing my view as a reference.
The core concept in FP is the "FUNCTION".
FP gives you a KISS (Keep It Simple, Sxxxxx) programming paradigm (once you get the FP ideas, you will literally start hating the OO paradigm).
Here is my simple comparison of FP with OO design patterns. It's my perspective on FP, so please correct me if there is any discrepancy from the actual.

Defining point of functional programming

I can enumerate many features of functional programming, but when my friend asked me, "Could you define functional programming for me?", I couldn't.
I would say that the defining point of pure functional programming is that all computation is done in functions with no side effects. That is, functions take inputs and return values, but do not change any hidden state. In this paradigm, functions more closely model their mathematical cousins.
This was nailed down for me when I started playing with Erlang, a language with a write-once stack. However, it should be clarified that there is a difference between a programming paradigm and a programming language. Languages that are generally referred to as functional provide a number of features that encourage or enforce the functional paradigm (e.g., Erlang with its write-once stack, higher-order functions, closures, etc.). However, the functional programming paradigm can be applied in many languages (with varying degrees of pain).
A lot of the definitions so far have emphasized purity, but there are many languages that are considered functional that are not at all pure (e.g., ML, Scheme). I think the key properties that make a language "functional" are:
Higher-order functions. Functions are a built-in datatype no different from integers and booleans. Anonymous functions are easy to create and idiomatic (e.g., lambdas).
Everything is an expression. In imperative languages, a distinction is made between statements, which mutate state and affect control flow, and expressions, which yield values. In functional languages (even impure functional languages), expression evaluation is the fundamental unit of execution.
Given these two properties, you naturally get the behavior we think of as functional (e.g., expressing computations in terms of folds and maps). Eliminating mutable state is a way to make things even more functional.
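A small JavaScript sketch of both properties together (the function name is just for the example):

// Functions are values, and the whole computation is one expression
// built from map and reduce, not a sequence of state-changing statements.
const sumOfSquares = (xs) => xs.map((x) => x * x).reduce((a, b) => a + b, 0);
console.log(sumOfSquares([1, 2, 3])); // 14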
From Wikipedia:
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the imperative programming style that emphasizes changes in state.
Using a functional approach gives the following benefits:
Concurrent programming is much easier in functional languages.
Functions in pure FP can never cause side effects - this makes unit testing much easier.
Hot Code Deployment in production environments is much easier.
Functional languages can be reasoned about mathematically.
Lazy evaluation provides potential for performance optimizations.
More expressive - closures, pattern matching, advanced type systems etc. allow programmers to 'say what they mean' more readily.
Brevity - for some classes of program a functional solution is significantly more concise.
There is a great article with more detail here.
Being able to enumerate the features is more useful than trying to define the term itself, as people will use the term "functional programming" in a variety of contexts with many shades of meaning across a continuum, whereas the individual features have individually crisper definitions that are more universally agreed upon.
Below are the features that come to mind. Most people use the term "functional programming" to refer to some subset of those features (the most common/important ones being "purity" and "higher-order functions").
FP features:
Purity (a.k.a. immutability, eschewing side-effects, referential transparency)
Higher-order functions (e.g. pass a function as a parameter, return it as a result, define anonymous function on the fly as a lambda expression)
Laziness (a.k.a. non-strict evaluation, most useful/usable when coupled with purity)
Algebraic data types and pattern matching
Closures
Currying / partial application
Parametric polymorphism (a.k.a. generics)
Recursion (more prominent as a result of purity)
Programming with expressions rather than statements (again, from purity)
...
The more features from the above list you are using, the more likely someone will label what you are doing "functional programming" (and the first two features--purity and higher-order functions--are probably worth the most extra bonus points towards your "FP score").
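As a sketch of two entries from the list, currying and closures, in JavaScript:

// A curried function takes its arguments one at a time...
const add = (a) => (b) => a + b;

// ...so partial application is just calling it with fewer arguments;
// the closure remembers `a`.
const increment = add(1);
console.log(increment(41)); // 42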
I have to add that functional programming tends to abstract the control structures of your program as well as the domain - e.g., you no longer write a 'for loop' over some list of things; you 'map' it with some function to produce the output.
I think functional programming is a state of mind, as well as the definitions given above.
There are two separate definitions:
The older definition (first-class functions) has been given by Chris Conway.
The newer definition (avoiding side effects like mutation) has been given by John Stauffer. This is more generally known as purely functional programming.
This is a source of much confusion...
It's like drawing a picture by using vectors instead of bitmaps - tell the painter how to change the picture instead of what the picture looks like at each step.
It's application of functions as opposed to changing the state.
I think John Stauffer mostly has the definition. I would also add that you need to be able to pass functions around. Essentially you need higher-order functions, meaning you can pass functions around easily (although passing blocks is good enough).
For example, a very popular functional call is map. It is basically equivalent to the following (sketched here as runnable JavaScript; the parameter is named fn rather than function, since function is a reserved word):

function map(fn, list) {
  const outList = [];        // some empty list
  for (const item of list) {
    outList.push(fn(item));  // apply fn to each item of the list
  }
  return outList;
}

so that code is expressed as map(fn, list). The revolutionary concept is that fn is itself a function. JavaScript is a great example of a language with higher-order functions. Basically, functions can be treated like a variable and passed into functions or returned from functions. C++ and C have function pointers which can be used similarly. .NET delegates can also be used similarly.
then you can think of all sorts of cool abstractions...
Do you have functions like AddItemsInList, MultiplyItemsInList, etc.?
Each function takes (List) and returns a single result
You could create (note, many languages do not allow you to pass + around as a function but it seems the clearest way to express the concept)....
AggregateItemsInList(List, combinefunction, StepFunction)
Increment functions work on indexes... it would be better to make them work on the list itself, using list operations like next (and, for incTwo, next next if it exists)....
function incNormal(x) {
return x + 1
}
function incTwo(x) {
return x + 2
}
AggregateItemsInList(List, +, incNormal)
Want to do every other item?
AggregateItemsInList(List, +, incTwo)
Want to multiply?
AggregateItemsInList(List, *, incNormal)
Want to add exam scores together?
function AddScores (studenta, studentb) {
return studenta.score + studentb.score
}
AggregateItemsInList(ListOfStudents, AddScores, incNormal)
Higher-order functions are a very powerful abstraction. Instead of having to write custom methods for numbers, strings, students, etc., you have one aggregate method that loops through a list of anything, and you just have to create the combining operation for each data type.
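Since AggregateItemsInList is never defined above, here is one plausible runnable reading of it in JavaScript, with + and * spelled as functions as the earlier note anticipates (it reuses incNormal and incTwo from above):

// Combine items starting from the first one, letting stepFunction decide
// how the index advances (every item, every other item, ...).
function AggregateItemsInList(list, combineFunction, stepFunction) {
  let result = list[0];
  for (let i = stepFunction(0); i < list.length; i = stepFunction(i)) {
    result = combineFunction(result, list[i]);
  }
  return result;
}

const add = (a, b) => a + b;      // '+' as a function
const multiply = (a, b) => a * b; // '*' as a function

console.log(AggregateItemsInList([1, 2, 3, 4], add, incNormal));      // 10
console.log(AggregateItemsInList([1, 2, 3, 4], add, incTwo));         // 4 (1 + 3)
console.log(AggregateItemsInList([1, 2, 3, 4], multiply, incNormal)); // 24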

Resources