Related
A basic cons cell sticks together two arbitrary things and is the basic unit that allows the construction of linked lists and arbitrary data objects. Question: is there a reason to stick with this simplistic language design decision (for instance, in all Lisp families)?
Why not use fixed-length arrays for this purpose (or some nested stacks)? I can't foresee any problems with that, and there would be clear advantages: more densely packed memory, less pointer chasing, and fewer "dead-weight" cons cells just to define the hierarchy of the data.
You have titled your question “Functional Programming: why pair as a basic constructed unit?”, but this title does not correctly reflect the fact that many important and well-known functional languages (e.g. Haskell, F#, Scala, SML, Clojure, etc.) have either algebraic data types or a different collection of data structures, in which the pair is just one of several kinds of constructors, if it is available at all. The situation is similar for other multi-paradigm languages that have support for functional programming, like C++, Java, Objective-C, Swift, etc.
In all these cases the pair, if present, is exactly as “basic” as an array, a record, a list, or any other kind of data constructor.
What is left is the family of Lisp languages, notably Common Lisp and Scheme, which, besides having a rich set of data structures like those cited in the comment of Rainer Joswig, use the pair for an important task: as the basic data constructor to represent programs.
The fact that Lisp code is an s-expression (that is, a list of lists and atoms) has fundamental consequences, the most notable of all being the rise of macro systems, which allow programmers to easily create new syntax, or even new domain-specific languages.
Renzo's answer about other structures in functional programming is spot on. Functional programming is about aligning programming with logic and mathematics, where expressions denote values, and there's no such thing as a side effect. (Of course, in practice, we need side effects for I/O, etc.) Functional programming doesn't require the singly linked list as a fundamental construct.
Lists are one of the things that make Lisps lispy, though.
One of the reasons that pairs are so common in the Lisp family of languages may be that ordered pairs are very easy to implement in the lambda calculus that Lisps are inspired by. (I say "inspired by" rather than "based on" because after the syntax and the use of lambda to denote anonymous functions, there's plenty of difference, and it's best not to assume that things about one apply to the other.) See the answer to Use of lambda for cons/car/cdr definition in SICP for a quick lesson in how cons, car, and cdr can be implemented using nothing but lexical closures.
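That linked answer uses Scheme; here is a minimal Haskell sketch of the same Church-style encoding (the names cons/car/cdr are chosen to echo the Lisp operators, purely for illustration):

-- A pair is represented as a closure that remembers x and y
-- and hands them to whatever selector function it is given.
cons :: a -> b -> ((a -> b -> r) -> r)
cons x y = \select -> select x y

-- car asks the pair for its first component...
car :: ((a -> b -> a) -> a) -> a
car pair = pair (\x _ -> x)

-- ...and cdr for its second.
cdr :: ((a -> b -> b) -> b) -> b
cdr pair = pair (\_ y -> y)

-- car (cons 1 2) == 1,  cdr (cons 1 2) == 2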
I understand very clearly the difference between functional and imperative programming techniques. But there's a widespread tendency to talk of "functional languages", and this really confuses me.
Of course some languages like Haskell are more hospitable to functional programming than other languages like C. But even the former does I/O (it just keeps it in a ghetto). And you can write functional programs in C (it's just absurdly harder). So maybe it's just a matter of degree.
Still, even as a matter of degree, what does it mean when someone calls Scheme a "functional language"? Most Scheme code I see is imperative. Is it just that Scheme makes it easy to write in a functional style if you want to? So too do Lua and Python. Are they "functional languages" too?
I'm (really) not trying to be a language cop. If this is just a loose way of talking, that's fine. I'm just trying to figure out whether it does have some definite meaning (even if it's a matter-of-degree meaning) that I'm not seeing.
Among people who study programming languages for a living, "functional programming language" is a pretty weakly bound term. There is a strong consensus that:
Any language that calls itself functional must support first-class, nested functions with lexical scoping rules.
A significant minority also reserve the term "functional language" for languages which are:
Pure (or side-effect-free, referentially transparent)
as in languages like Agda, Clean, Coq, and Haskell.
Beyond that, what's considered a functional programming language is often a matter of intent, that is, whether its designers want it to be called "functional".
Perl and Smalltalk are examples of languages that support first-class functions but whose designers don't call them functional. Objective Caml is an example of a language that is called functional even though it has a full object system with inheritance and everything.
Languages that are called "functional" will tend to have features like the following (taken from Defining point of functional programming):
Anonymous functions (lambda expressions)
Recursion (more prominent as a result of purity)
Programming with expressions rather than statements (again, from purity)
Closures
Currying / partial application
Lazy evaluation
Algebraic data types and pattern matching
Parametric polymorphism (a.k.a. generics)
The more a particular programming language has syntax and constructs tailored to making the various programming features listed above easy/painless to express & implement, the more likely someone will label it a "functional language".
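As a rough illustration of a few of these features in one place, here is a minimal Haskell sketch (the names are invented for the example):

-- Currying / partial application: add 1 is itself a function
add :: Int -> Int -> Int
add x y = x + y

increment :: Int -> Int
increment = add 1

-- A higher-order function with parametric polymorphism:
-- it works for any element type a
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

-- An anonymous function (lambda expression) passed as an argument
doubled :: [Int]
doubled = map (\x -> 2 * x) [1, 2, 3]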
I would say that a functional language is any language that allows functional programming without undue pain.
I like @Randolpho's answer. With regard to features, I might cite the list here:
Defining point of functional programming
namely
Purity (a.k.a. immutability, eschewing side-effects, referential transparency)
Higher-order functions (e.g. pass a function as a parameter, return it as a result, define an anonymous function on the fly as a lambda expression)
Laziness (a.k.a. non-strict evaluation, most useful/usable when coupled with purity)
Algebraic data types and pattern matching
Closures
Currying / partial application
Parametric polymorphism (a.k.a. generics)
Recursion (more prominent as a result of purity)
Programming with expressions rather than statements (again, from purity)
The more a particular programming language has syntax and constructs tailored to making the various FP features listed above easy/painless to express & implement, the more likely someone will label it a "functional language".
Jane Street's Brian Hurt wrote a very good article on this a while back. The basic definition he arrived at is that a functional programming language is a language that models the lambda calculus. Think about what languages are widely agreed to be functional and you'll see that this is a very practical definition.
Lisp was a primitive attempt to model the lambda calculus, so it fits this definition — though since most implementations don't stick very closely to the ideas of lambda calculus, they're generally considered to be mixed-paradigm or at best weakly functional.
This is also why a lot of people bristle at languages like Python being called functional. Python's general philosophy is unrelated to lambda calculus — it doesn't encourage this way of thinking at all — so it's not a functional language. It's a Turing machine with first-class functions. You can do functional-style programming in Python, yes, but the language does not have its roots in the same math that functional languages do. (Incidentally, Guido van Rossum himself agrees with this description of the language.)
A language (and platform) that promotes functional programming as a means of fully leveraging the capabilities of said platform.
A language that makes it a lot harder to create functions with side effects than without. The same goes for mutable versus immutable data structures.
I think the same question can be asked about "OOP languages". After all, you can write object oriented programs in C (and it's not uncommon to do so). But C doesn't have any built-in language constructs to enable OOP. You have to do OOP "by hand" without much help from the compiler. That's why it's usually not considered an OOP language. I think this distinction can be applied to "functional languages", too: For example, it's not uncommon to write functional code in C++ (think about STL functions like std::count_if or std::transform). But C++ (for now) lacks built-in language features that enable functional programming, like lambdas. (Let's ignore boost::lambda for the sake of the argument.)
So, to answer your question, I'd say that although it's possible to write functional programs in each of these languages:
C is not a functional language (no built-in functional language constructs)
Scheme, Python and friends have functional constructs, so they're functional languages. But they also have imperative and OOP constructs, so they're usually referred to as "multi-paradigm" languages.
You can do functional-style programming in any language. I try to as much as possible.
Python and LINQ both promote functional-style programming.
A pure functional language like Haskell requires you to do all your computations using mathematical functions: functions that do not modify anything; they just return values.
In addition, functional languages typically allow you to write higher-order functions: functions that take functions as arguments and/or return functions as results.
Haskell, for one, has different types for functions with side effects and those without.
That's a pretty good discriminating property for being a 100% functional language, at least IMHO.
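A minimal Haskell sketch of that distinction (illustrative names):

-- A pure function: the type promises no side effects can occur
square :: Int -> Int
square x = x * x

-- An effectful computation: the IO in the type makes the side effect visible
printSquare :: Int -> IO ()
printSquare x = print (square x)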
I wrote a (pretty long) analysis once on why the term 'functional programming language' is meaningless, which also tries to explain why, for instance, 'functions' in Haskell are completely different from 'functions' in Lisp or Python: http://blog.nihilarchitect.net/archives/289/on-functional-programming/
Things like 'map' or 'filter' are, for a large part, also implementable in C, for instance.
I read somewhere that Rich Hickey said:
"I think continuations might be neat in theory, but not in practice"
I am not familiar with Clojure.
1. Does Clojure have continuations?
2. If no, don't you need continuations? I have seen a lot of good examples, especially from this guy. What is the alternative?
3. If yes, is there a documentation?
When talking about continuations, you’ll have to distinguish between two different kinds of them:
First-class continuations – continuation support that is deeply integrated into the language (as in Scheme or Ruby). Clojure does not support first-class continuations.
Continuation-passing-style (CPS) – CPS is just a style of coding and any language supporting anonymous functions will allow this style (which applies to Clojure too).
Examples:
-- Standard function
double :: Int -> Int
double x = 2 * x

-- CPS function: we pass the continuation explicitly
doubleCPS :: Int -> (Int -> res) -> res
doubleCPS x cont = cont (2 * x)

-- Calls
main :: IO ()
main = do
  print (double 2)
  -- CPS call: continue execution with the specified anonymous function
  doubleCPS 2 (\res -> print res)
Read about continuations on Wikipedia.
I don’t think that continuations are necessary for a good language, but especially first-class continuations and CPS in functional languages like Haskell can be quite useful (intelligent backtracking example).
I've written a Clojure port of cl-cont which adds continuations to Common Lisp.
https://github.com/swannodette/delimc
Abstract Continuations
Continuations are an abstract notion used to describe control-flow semantics. In this sense, they both exist and don't exist (remember, they're abstract) in any language that offers control operators (as any Turing-complete language must), in the same way that numbers both exist (as abstract entities) and don't exist (as tangible entities).
Continuations describe control effects such as function call/return, exception handling, and even gotos. A well-founded language will, among other things, be designed with abstractions that are built on continuations (e.g., exceptions). (That is to say, a well-founded language will consist of control operators that were designed with continuations in mind. It is, of course, perfectly reasonable for a language to expose continuations as the only control abstraction, allowing users to build their own abstractions on top.)
First Class Continuations
If the notion of a continuation is reified as a first-class object in a language, then we have a tool upon which all kinds of control effects can be built. For example, if a language has first-class continuations, but not exceptions, we can construct exceptions on top of continuations.
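As a sketch of that construction, here in Haskell using the standard Control.Monad.Cont (safeDiv and the error message are just illustrative):

import Control.Monad (when)
import Control.Monad.Cont (callCC, runCont)

-- An exception-like early exit built from a captured continuation:
-- invoking 'throw' abandons the rest of the computation.
safeDiv :: Int -> Int -> Either String Int
safeDiv x y = flip runCont id $ callCC $ \throw -> do
  when (y == 0) $ throw (Left "division by zero")
  return (Right (x `div` y))

-- safeDiv 10 2 == Right 5;  safeDiv 1 0 == Left "division by zero"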
Problems with First-Class Continuations
While first-class continuations are a powerful and useful tool in many cases, there are also some drawbacks to exposing them in a language:
Different abstractions built on top of continuations may result in unexpected / unintuitive behavior when composed. For example, a finally block might be skipped if I use a continuation to abort a computation.
If the current continuation may be requested at any time, then the language run-time must be structured so that it is possible to produce some data-structure representation of the current continuation at any time. This places some degree of burden on the run-time for a feature which, for better or worse, is often considered "exotic". If the language is hosted (such as Clojure is hosted on the JVM), then that representation must be able to fit within the framework provided by the hosting platform. There may also be other features a language would like to maintain (e.g., C interop) which restrict the solution space. Issues such as these increase the potential of an "impedance mismatch", and can severely complicate development of a performant solution.
Adding First-Class Continuations to a Language
Through metaprogramming, it is possible to add support for first-class continuations to a language. Generally, this approach involves transforming code to continuation-passing style (CPS), in which the current continuation is passed around as an explicit argument to each function.
For example, David Nolen's delimc library implements delimited continuations of portions of a Clojure program through a series of macro transforms. In a similar vein, I have authored pulley.cps, which is a macro compiler that transforms code into CPS, along with a run-time library to support more core Clojure features (such as exception handling) as well as interop with native Clojure code.
One issue with this approach is how you handle the boundary between native (Clojure) code and transformed (CPS) code. Specifically, since you can't capture the continuation of native code, you need to either disallow (or somehow restrict) interop with the base language or place a burden on the user of ensuring the context will allow any continuation they wish to capture to actually be captured.
pulley.cps tends towards the latter, although some attempts have been made to allow the user to manage this. For instance, it is possible to disallow CPS code to call into native code. In addition, a mechanism is provided to supply CPS versions of existing native functions.
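To make the CPS transform itself concrete, here is a factorial hand-converted to CPS (written in Haskell as a sketch; a macro compiler like those above mechanizes exactly this kind of rewrite for Clojure):

-- Direct style
fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

-- After CPS conversion: every call receives an explicit continuation k
factCPS :: Int -> (Int -> r) -> r
factCPS 0 k = k 1
factCPS n k = factCPS (n - 1) (\r -> k (n * r))

-- factCPS 5 id == 120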
In a language with a sufficiently strong type system (such as Haskell), it is possible to use the type system to encapsulate computations which might use control operations (i.e., continuations) from functionally pure code.
Summary
We now have the information necessary to directly answer your three questions:
Clojure does not support first-class continuations due to practical considerations.
All languages are built on continuations in the theoretical sense, but few languages expose continuations as first-class objects. However, it is possible to add continuations to any language via, e.g., transformation into CPS.
Check out the documentation for delimc and/or pulley.cps.
Is continuation a necessary feature in a language?
No. Plenty of languages don't have continuations.
If no, don't you need continuations? I have seen a lot of good examples especially from this guy. What is the alternative?
A call stack
A common use of continuations is in the implementation of control structures for: returning from a function, breaking from a loop, exception handling, etc. Most languages (like Java, C++, etc.) provide these features as part of the core language. Some languages don't (e.g. Scheme). Instead, these languages expose continuations as first-class objects and let the programmer define new control structures. Thus Scheme should be looked upon as a programming language toolkit, not a complete language in itself.
In Clojure, we almost never need to use continuations directly, because almost all the control structures are provided by the language/VM combination. Still, first-class continuations can be a powerful tool in the hands of a competent programmer. Especially in Scheme, continuations are better than their counterparts in other languages (like the setjmp/longjmp pair in C). This article has more details on this.
BTW, it will be interesting to know how Rich Hickey justifies his opinion about continuations. Any links for that?
Clojure (or rather clojure.contrib.monads) has a continuation monad; here's an article that describes its usage and motivation.
Well... Clojure's -> implements what you are after... but with a macro instead.
In object-oriented programming, we might say the core concepts are:
encapsulation
inheritance
polymorphism
What would that be in functional programming?
There's no community consensus on the essential concepts of functional programming. In Why Functional Programming Matters (PDF), John Hughes argues that they are higher-order functions and lazy evaluation. In Wearing the Hair Shirt: A Retrospective on Haskell, Simon Peyton Jones says the real essential is not laziness but purity. Richard Bird would agree. But there's a whole crowd of Scheme and ML programmers who are perfectly happy to write programs with side effects.
As someone who has practiced and taught functional programming for twenty years, I can give you a few ideas that are widely believed to be at the core of functional programming:
Nested, first-class functions with proper lexical scoping are at the core. This means you can create an anonymous function at run time, whose free variables may be parameters or local variables of an enclosing function, and you get a value you can return, put into data structures, and so on. (This is the most important form of higher-order functions, but some higher-order functions (like qsort!) can be written in C, which is not a functional language.)
Means of composing functions with other functions to solve problems. Nobody does this better than John Hughes.
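For instance, a small Haskell sketch of that compositional style (the function name is invented for the example):

import Data.List (nub)

-- Solve a problem by gluing small functions together:
-- count the distinct words in a string
countDistinctWords :: String -> Int
countDistinctWords = length . nub . words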
Many functional programmers believe purity (freedom from effects, including mutation, I/O, and exceptions) is at the core of functional programming. Many functional programmers do not.
Polymorphism, whether it is enforced by the compiler or not, is a core value of functional programmers. Confusingly, C++ programmers call this concept "generic programming." When polymorphism is enforced by the compiler it is generally a variant of Hindley-Milner, but the more powerful System F is also a viable basis for functional languages. And with languages like Scheme, Erlang, and Lua, you can do functional programming without a static type system.
Finally, a large majority of functional programmers believe in the value of inductively defined data types, sometimes called "recursive types". In languages with static type systems these are generally known as "algebraic data types", but you will find inductively defined data types even in material written for beginning Scheme programmers. Inductively defined types usually ship with a language feature called pattern matching, which supports a very general form of case analysis. Often the compiler can tell you if you have forgotten a case. I wouldn't want to program without this language feature (a luxury once sampled becomes a necessity).
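For example, a minimal sketch of an inductively defined type with pattern matching (Haskell syntax; the Shape type is invented for illustration):

-- An algebraic data type with two constructors
data Shape = Circle Double
           | Rect Double Double

-- Case analysis by pattern matching; had we forgotten a case,
-- the compiler (with warnings enabled) would tell us.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h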
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Functional programming has its roots in the lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as embellishments to the lambda calculus. - Wikipedia
In a nutshell,
Lambda Calculus
Higher Order Functions
Immutability
No side-effects
Not directly an answer to your question, but I'd like to point out that "object-oriented" and functional programming aren't necessarily at odds. The "core concepts" you cite have more general counterparts which apply just as well to functional programming.
Encapsulation, more generally, is modularisation. All purely functional languages that I know of support modular programming. You might say that those languages implement encapsulation better than the typical "OO" variety, since side-effects break encapsulation, and pure functions have no side-effects.
Inheritance, more generally, is logical implication, which is what a function represents. The canonical subclass -> superclass relation is a kind of implicit function. In functional languages, this is expressed with type classes or implicits (I consider implicits to be the more general of these two).
Polymorphism in the "OO" school is achieved by means of subtyping (inheritance). There is a more general kind of polymorphism known as parametric polymorphism (a.k.a. generics), which you will find to be supported by pure-functional programming languages. Additionally, some support "higher kinds", or higher-order generics (a.k.a. type constructor polymorphism).
What I'm trying to say is that your "core concepts of OO" aren't specific to OO in any way. I, for one, would argue that there aren't any core concepts of OO, in fact.
Let me repeat the answer I gave at one discussion in the Bangalore Functional Programming group:
A functional program consists only of functions. Functions compute values from their inputs. We can contrast this with imperative programming, where, as the program executes, the values of mutable locations change. In other words, in C or Java, a variable called X refers to a location whose value changes. But in functional programming, X is the name of a value (not a location). Anywhere X is in scope, it has the same value (i.e., it is referentially transparent). In FP, functions are also values. They can be passed as arguments to other functions. This is known as higher-order functional programming. Higher-order functions let us model an amazing variety of patterns. For instance, look at the map function in Lisp. It represents a pattern where the programmer needs to do 'something' to every element of a list. That 'something' is encoded as a function and passed as an argument to map.
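A sketch of that map pattern, in Haskell syntax rather than Lisp:

-- map abstracts "do 'something' to every element of a list";
-- the 'something' arrives as a function argument
squares :: [Int] -> [Int]
squares xs = map (\x -> x * x) xs

-- squares [1, 2, 3] == [1, 4, 9]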
As we saw, the most notable feature of FP is its freedom from side effects. If a function does anything more than compute a value from its input, it is causing a side effect. Such functions are not allowed in pure FP. Side-effect-free functions are easy to test: there is no global state to set up before running the test and no global state to check afterwards. Each function can be tested independently, just by providing its input and examining the return value. This makes it easy to write automated tests. Another advantage of side-effect freedom is that it gives you better control over parallelism.
Many FP languages make recursion as cheap as iteration. They do this by supporting something called tail recursion: if a function calls itself as the last thing it does, the current stack frame is reused right away. In other words, if a function calls itself tail-recursively 1000 times, the stack does not grow 1000 frames deep. This makes special looping constructs unnecessary in these languages.
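A sketch of such a tail-recursive loop, here in Haskell (the bang pattern keeps the accumulator strict so thunks don't pile up):

{-# LANGUAGE BangPatterns #-}

-- The recursive call is the last thing sumTo does, so the current
-- stack frame can be reused: this is a loop in disguise.
sumTo :: Int -> Int -> Int
sumTo 0 !acc = acc
sumTo n !acc = sumTo (n - 1) (acc + n)

-- sumTo 1000000 0 runs in constant stack space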
The Lambda Calculus is the most boiled-down version of an FP language. Higher-level FP languages like Haskell get compiled down to something very close to the Lambda Calculus. It has only three syntactic constructs, yet it is expressive enough to represent any abstraction or algorithm.
My opinion is that FP should be viewed as a meta-paradigm: we can write programs in any style, including OOP, using the simple functional abstractions the Lambda Calculus provides.
-- Vijay
Original discussion link: http://groups.google.co.in/group/bangalore-fp/browse_thread/thread/4c2cfa7985d7eab3
Abstraction, the process of making a function by parameterizing over some part of an expression.
Application, the process of evaluating a function by replacing its parameters with specific values.
At some level, that's all there is to it.
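In Haskell notation, a minimal sketch of both:

-- Abstraction: parameterize the expression x + 3 over x
addThree :: Int -> Int
addThree = \x -> x + 3

-- Application: replace the parameter with a specific value
result :: Int
result = addThree 4   -- evaluates to 7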
Though the question is old, I thought I'd share my view for reference.
The core concept in FP is the "function".
FP gives you a KISS (Keep It Simple) programming paradigm (once you get the FP ideas, you will start hating the OO paradigm).
Here is my simple comparison of FP with OO design patterns. It's my way of seeing FP; please correct me if there is any discrepancy with the actual concepts.
By logic programming I mean the sub-paradigm of declarative programming languages. Don't confuse this question with "What problems can you solve with if-then-else?"
A language like Prolog is very fascinating, and it's worth learning for the sake of learning, but I have to wonder what class of real-world problems is best expressed and solved by such a language. Are there better languages? Does logic programming exist by another name in more trendy programming languages? Is the cynical version of the answer a variant of the Python Paradox?
Prototyping.
Prolog is dynamic and has been for 50 years. The compiler is liberal, the syntax minimalist, and "doing stuff" is easy, fun, and efficient. SWI-Prolog has a built-in tracer (debugger!) and even a graphical tracer. You can change the code on the fly using make/0, you can dynamically load modules, add a few lines of code without leaving the interpreter, or edit the file you're currently running on the fly with edit/1. Do you think you've found a problem with the foobar/2 predicate?
?- edit(foobar).
And as soon as you leave the editor, that thing is going to be re-compiled. Sure, Eclipse does the same thing for Java, but Java isn't exactly a prototyping language.
Apart from the pure prototyping stuff, Prolog is incredibly well suited to translating a piece of logic into code. So automated provers and that type of thing can easily be written in Prolog.
The first Erlang interpreter was written in Prolog - and for a reason, since Prolog is very well suited to parsing and to encoding the logic you find in parse trees. In fact, Prolog comes with a built-in parser! No, not a library: it's in the syntax, namely DCGs.
Prolog is used a lot in NLP, particularly in syntax and computational semantics.
But, Prolog is underused and underappreciated. Unfortunately, it seems to bear an academic or "unusable for any real purpose" stigma. But it can be put to very good use in many real-world applications involving facts and the computation of relations between facts. It is not very well suited for number crunching, but CS is not only about number crunching.
Since Prolog = Syntactic Unification + Backward chaining + REPL,
most places where syntactic unification is used are also good fits for Prolog.
Uses of syntactic unification include:
AST transformations
Type Inference
Term rewriting
Theorem proving
Natural language processing
Pattern matching
Combinatorial test case generation
Extract sub structures from structured data such as an XML document
Symbolic computation i.e. calculus
Deductive databases
Expert systems
Artificial Intelligence
Parsing
Query languages
Constraint Logic Programming (CLP)
Many very good and well-suited use cases of logic programming have already been mentioned. I would like to complement the existing list with several tasks from an extremely important application area of logic programming:
Logic programming blends seamlessly, more seamlessly than other paradigms, with constraints, resulting in a framework called Constraint Logic Programming.
This leads to dedicated constraint solvers for different domains, such as:
CLP(FD) for integers
CLP(B) for Booleans
CLP(Q) for rational numbers
CLP(R) for floating point numbers.
These dedicated constraint solvers lead to several important use cases of logic programming that have not yet been mentioned, some of which I show below.
When choosing a Prolog system, the power and performance of its constraint solvers are often among the deciding factors, especially for commercial users.
CLP(FD) — Reasoning over integers
In practice, CLP(FD) is one of the most important applications of logic programming, and is used to solve tasks in the following areas, among others:
scheduling
resource allocation
planning
combinatorial optimization
See clpfd for more information and several examples.
CLP(B) — Boolean constraints
CLP(B) is often used in connection with:
SAT solving
circuit verification
combinatorial counting
See clpb.
CLP(Q) — Rational numbers
CLP(Q) is used to solve important classes of problems arising in Operations Research:
linear programming
integer linear programming
mixed integer linear programming
See clpq.
One of the things Prolog gives you for free is a backtracking search algorithm -- you could implement it yourself, but if your problem is best solved by having that algorithm available, then it's nice to use it.
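For a feel of what that free backtracking buys you, here is a rough Haskell analogue using the list monad (Prolog's search also does unification, so this is only the flavor, not the full power):

-- Enumerate all (x, y) pairs of dice values summing to 7;
-- the empty list acts as failure and forces the search to back up.
pairsSummingTo7 :: [(Int, Int)]
pairsSummingTo7 = do
  x <- [1 .. 6]
  y <- [1 .. 6]
  if x + y == 7 then return (x, y) else []

-- pairsSummingTo7 == [(1,6),(2,5),(3,4),(4,3),(5,2),(6,1)]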
The two things I've seen it be good at are mathematical proofs and natural language understanding.
Prolog is ideal for non-numeric problems. This article gives a few examples of some applications of Prolog and it might help you understand the type of problems that it might solve.
Prolog is great at solving puzzles and the like. That said, in the domain of puzzle solving it makes easy/medium puzzle-solving easier and complicated puzzle-solving harder. Still, writing solvers for grid puzzles such as Hexiom, Sudoku, or Nurikabe is not especially tough.
One simple answer is "build systems". The language used to build Makefiles (at least, the part to describe dependencies) is essentially a logic programming language, although not really a "pure" logic programming language.
Yes, Prolog has been around since 1972. It was invented by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses. Alain was a French computer scientist and professor at Aix-Marseille University from 1970 to 1995.
And Alain invented it to analyse Natural Language. Several successful prototypes were created by him and his "followers".
His own system, Orbis, understood questions in English and French about the solar system. See his personal site.
Warren and Pereira's system Chat80 answered questions about world geography.
Today, IBM Watson is a contemporary QA system based on logic, with a huge dose of statistics about real-world phrases.
So you can imagine that's where its strength is.
Retired in 2006, he remained active until he died in 2017. He was named Chevalier de la Legion d’Honneur by the French government in 1986.