What is the difference between the definitions of referential transparency and deterministic function? - functional-programming

Reference transparency (Wikipedia):
An expression is said to be referentially transparent if it can be
replaced with its value without changing the behavior of a program (in
other words, yielding a program that has the same effects and output
on the same input).
And also (Learn you some Erlang):
Functions always returning the same result for the same parameter is
called referential transparency
Deterministic function (MSDN):
Deterministic functions always return the same result any time they
are called with a specific set of input values.
When we talk about deterministic functions, do we mean referential transparency? And when we talk about referential transparency, do we mean deterministic functions?

Expressions can be more complex than a simple function call, so "referential transparency" applies to a larger class of entities than "deterministic". As applied to functions they are basically the same, in that a function application is referentially transparent if and only if it is deterministic. An expression built up out of deterministic functions will be referentially transparent, though it is also possible for an expression to be referentially transparent even though some of its ingredients are non-deterministic (0*rand() for a silly example, although there are less silly examples where random seeds are used to get to a deterministic answer).
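A minimal JavaScript sketch of that 0*rand() point, with Math.random() standing in for rand():

// Math.random() is non-deterministic, so the call on its own is not
// referentially transparent...
// ...yet the enclosing expression always denotes 0, so it can be replaced
// by the literal 0 anywhere without changing the program's behavior:
const alwaysZero = 0 * Math.random();
console.log(alwaysZero); // prints 0, every time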

Note that your definition of referential transparency specifically mentions "the same effects and output on the same input". If your expression includes effects, such as I/O, then it can be referentially transparent without being deterministic (both as defined above).
On the other hand, pure functional programming focuses on functions without effects, which can be reliably deterministic. Deterministic functions are necessarily referentially transparent, but the reverse is not true.
Consider making the distinction between "effects", which are the important and necessary business of writing programs in the first place, and "side effects" where effects may occur that are not apparent to the caller. This opacity, which is the opposite of referential transparency, is what makes it hard to reason about the code you are calling.
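A small JavaScript sketch of that distinction (the names below are mine, purely for illustration): the first function's effect on its result is fully visible at the call site; the second has a side effect the caller cannot see:

// Effect apparent to the caller: everything the function depends on is
// passed in, and everything it produces is returned.
const appendLine = (log, line) => log + line + "\n";

// Side effect opaque to the caller: the call site looks pure, but each
// call silently mutates state outside the function.
let counter = 0;
const nextId = () => ++counter;

console.log(appendLine("a\n", "b")); // same input, same output, always
console.log(nextId(), nextId());     // 1 2 - opaque, hard to reason about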

Related

The benefit of referential transparency in FP

I was learning functional programming and came across the term referential transparency.
After some research on that, I found out that RT is useful
When we want to make our code easier to reason about and read since our function is predictable AND
When our function is predictable, it will be of great help to the JIT compiler, allowing it to replace the function with its return value (does it replace the function with its value as long as the function is hot?).
Are both the above statements true?
Referential transparency means that a function with certain parameters will always return the same result as long as the input parameters are the same; in other words, it does not have side effects.
Of course, one of the benefits of this is that the code is easier to reason about, because the same execution will return the same values, so you can replace the call to the function with the result it returns.
I suppose this feature is used by many compilers to speed up execution by making that substitution, but this depends on the language and the compiler used to translate to byte code; it has little to do with functional programming per se.
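As a concrete JavaScript picture of the substitution being described - whether a particular JIT actually performs it is engine-specific, so take this as the semantic idea rather than a guarantee:

const square = (x) => x * x;

// square is referentially transparent, so every occurrence of square(7)
// denotes the same value...
const a = square(7) + square(7);
// ...which means a compiler (or a human reader) may rewrite it as:
const b = 49 + 49;
console.log(a === b); // true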

Is Date.now referentially transparent?

Is DateTime.Now or Date.now referentially transparent?
This is one of the controversial topics in a functional programming article on Qiita.
First of all, we must be very careful, since "referential transparency" is a tricky word/concept in a sense, and a prominent discussion exists in
What is referential transparency?
The questioner states:
What does the term referential transparency mean? I've heard it described as "it means you can replace equals with equals" but this seems like an inadequate explanation.
A very typical explanation, but one that typically leads us to misunderstanding, is as follows (the #2 answer of the above page, by #Brian R. Bondy):
Referential transparency, a term commonly used in functional programming, means that given a function and an input value, you will always receive the same output. That is to say there is no external state used in the function.
A typical claim that I have always heard and thought wrong goes like this:
In a programming language, Date.now always returns a different value that corresponds to the current time, and according to
given a function and an input value, you will always receive the same output.
therefore, Date.now is not referentially transparent!
I know some (functional) programmers firmly believe the above claim is trustworthy; however, the #1 and #3 answers by #Uday Reddy explain as follows:
Any talk of "referential transparency" without understanding the distinction between L-values, R-values and other complex objects that populate the imperative programmer's conceptual universe is fundamentally mistaken.
The functional programmers' idea of referential transparency seems to differ from the standard notion in three ways:
Whereas the philosophers/logicians use terms like "reference", "denotation", "designatum" and "bedeutung" (Frege's German term), functional programmers use the term "value". (This is not entirely their doing. I notice that Landin, Strachey and their descendants also used the term "value" to talk about reference/denotation. It may be just a terminological simplification that Landin and Strachey introduced, but it seems to make a big difference when used in a naive way.)
Functional programmers seem to believe that these "values" exist within the programming language, not outside. In doing this, they differ from both the philosophers and the programming language semanticists.
They seem to believe that these "values" are supposed to be obtained by evaluation.
Come to think of it, "external state" is also a tricky word/concept.
Referential transparency, a term commonly used in functional programming, means that given a function and an input value, you will always receive the same output. That is to say there is no external state used in the function.
Is "current time" "external state" or "external value"?
If we call "current time" is "external state", how about "mouse event"??
"mouse event" is not a state that should be managed by programming context, it's rather an external event.
given a function and an input value, you will always receive the same output.
So, we can understand as follows:
"current time" is neither "input value" nor "external value" nor "external state" and Date.now always returns the same output corresponds to the on-going event "current time".
If one still insists or want to call "current time" as a "value", again,
Functional programmers seem to believe that these "values" exist within the programming language, not outside. In doing this, they differ from both the philosophers and the programming language semanticists.
The value of "current time" never exists within the programming language, but only outside, and the value of "current time" outside obviously updates via not programming context but the real-world's time-flow.
Therefore, I understand Date.now is referential transparent.
I'd like to read your ideas. Thanks.
EDIT1
In
What is (functional) reactive programming?
Conal Elliott #Conal also explains functional-reactive-programming (FRP).
He is one of the earliest developers of FRP, and explains it like this:
FRP is about - “datatypes that represent a value ‘over time’ “
Dynamic/evolving values (i.e., values “over time”) are first class values in themselves.
In this FRP perspective,
Date can be seen as a first-class value "over time" that is an immutable object on the time axis.
.now is a property/function to address "the current time" within Date
Therefore Date.now returns an immutable and referentially transparent value that represents our "current time".
EDIT2
(in JavaScript)
referentially intransparent function
let a = 1;
let f = () => (a);
// the input of f: none
// the output of f: depends on a, which depends on a context outside f
referentially transparent function
let t = Date.now();
let f = (Date) => (Date.now());
Although the Date value resides in our physical world, Date can be seen as an immutable FRP first-class value "over time".
Since the Date referred to from any programming context is identical, we may implicitly omit Date as an input value and simply write:
let f = () => (Date.now());
EDIT3
Actually, I emailed Conal Elliott #Conal, who is one of the earliest developers of FRP.
He kindly replied and informed me there's a similar question here.
How can a time function exist in functional programming?
The questioner states:
So my question is: can a time function (which returns the current time) exist in functional programming?
If yes, then how can it exist? Does it not violate the principle of functional programming? It particularly violates referential transparency, which is one of the properties of functional programming (if I correctly understand it).
Or if no, then how can one know the current time in functional programming?
and, the answer by Conal Elliott #Conal in stackoverflow:
Yes, it's possible for a pure function to return the time, if it's given that time as a parameter. Different time argument, different time result. Then form other functions of time as well and combine them with a simple vocabulary of function(-of-time)-transforming (higher-order) functions. Since the approach is stateless, time here can be continuous (resolution-independent) rather than discrete, greatly boosting modularity. This intuition is the basis of Functional Reactive Programming (FRP).
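A minimal JavaScript sketch of that idea (the names seconds and mapB are mine): a value "over time" is just a pure function of time, and higher-order combinators transform such functions:

// A "behavior" is a pure function from a time (in ms) to a value.
const seconds = (t) => t / 1000;

// A combinator that transforms behaviors rather than plain values.
const mapB = (f) => (behavior) => (t) => f(behavior(t));
const doubled = mapB((x) => x * 2)(seconds);

// Different time argument, different result - still a pure function.
console.log(doubled(1000)); // 2
console.log(doubled(3000)); // 6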
Edit4
My appreciation for the answer by #Roman Sausarnes.
Please allow me to introduce my perspective on functional programming and FRP.
First of all, I think programming is fundamentally all about mathematics, and functional programming pursues that aspect. On the other hand, imperative programming is a way to describe the steps of machine operation, which is not necessarily mathematics.
Pure functional programming like Haskell has some difficulty handling "state" or IO, and I think the whole problem comes from "time".
"state" or "time" is a pretty subjective entity for us humans. We naturally believe "time" is flowing or passing, and "state" is changing; that is Naïve realism.
I think Naïve realism about "time" is a fundamental hazard and the reason for all the confusion in the programming community, and very few discuss this aspect. In modern physics, or even in Newtonian physics, we treat time in a purely mathematical way, so if we view our world the way physics does, nothing should be difficult about treating our world with purely mathematical functional programming.
So, I view our world/universe as immutable, like a pre-recorded DVD, and only our subjective view as mutable, including "time" or "state".
In programming, the only connection between the immutable universe and our mutable subjective experience is the "event". Pure functional programming languages such as Haskell basically lack this view; some insightful researchers, including Conal Elliott, have pursued FRP, but the majority still think the FRP method is minor or hard to use, and many of them treat mutable state as a matter of course.
Naturally, FRP is the only smart solution, and especially Conal Elliott, as a founder, applied this philosophical perspective and declared - first-class values "over time". Perhaps, unfortunately, many programmers would not understand what he really meant, since they are trapped by Naïve realism and find it difficult to view "time" as a philosophically, or physically, immutable entity.
So, if they discuss "pure functional" or "referential transparency" for the advantage of mathematical integrity/consistency, then to me, "Date.now" is naturally referentially transparent within pure functional programming, simply because "Date.now" accesses a certain point of the immutable time-line of the immutable universe.
So what about referential transparency in denotational semantics, as #Reddy or #Roman Sausarnes discusses?
I view referential transparency in FP, especially in the Haskell community, as being all about mathematical integrity/consistency.
Sure, maybe I could follow the updated definition of "referential transparency" by the Haskell community, and practically, we judge code to be mathematically inconsistent if we judge it not referentially transparent, correct?
Actually, again,
How can a time function exist in functional programming?
A programmer questioned as follows:
So my question is: can a time function (which returns the current time) exist in functional programming?
If yes, then how can it exist? Does it not violate the principle of functional programming? It particularly violates referential transparency, which is one of the properties of functional programming (if I correctly understand it).
Or if no, then how can one know the current time in functional programming?
Consensus
violate the principle of functional programming
= violates referential transparency which is one of the property of functional programming
= Mathematically inconsistent!!
This is our common perception, correct?
In this question, many answered that a "function returning the current time" is not referentially transparent, especially under the definition of "referential transparency" used by the Haskell community, and many mentioned that it's about mathematical consistency.
However, only a few answered that a "function returning the current time" is referentially transparent, and one of those answers argues from the FRP perspective, by Conal Elliott #Conal.
IMO, FRP, a perspective that handles a time-stream as a first-class immutable value "over time", is the correct manner, in line with mathematical principles such as physics, as I mentioned above.
Then how come "Date.now"/"function returning the current time" became referentially intransparent in the Haskell context?
Well, the only explanation I can think of is that the updated definition of "referential transparency" by the Haskell community is somewhat wrong.
Event-driven & Mathematical integrity/consistency
I mentioned that, in programming, the only connection between the immutable universe and our mutable subjective experience is the "event", or "Event-driven".
Functional Programming is evaluated in an event-driven manner; Imperative Programming, on the other hand, is evaluated by the steps/routines of machine operation described in the code.
"Date.now" depends on "event", and in principle, "event" is unknown to the context of the code.
So, does event-driven destroy mathematical integrity/consistency? Absolutely not.
Mapping Syntax to Meaning - indexical(index finger)
C.S. Peirce introduced the term ‘indexical’ to suggest the idea of pointing (as in ‘index finger’). ⟦I⟧ ,[[here]],[[now]] ,etc.. 
Probably this is mathematically identical concept of "Monad", "functor" things in Haskell. In denotational semantics even in Haskell, [[now]] as the 'index finger' is clear.
Indexical (index finger) is subjective, and so is Event-driven
⟦I⟧, ⟦here⟧, ⟦now⟧, etc. are subjective, and again, in programming, the only connection between the immutable objective universe and our mutable subjective experience is the "event", or "Event-driven".
Therefore, as long as ⟦now⟧ is bound to an event declaration in "Event-driven" Programming, the subjective (context-dependent) mathematical inconsistency never occurs, I think.
Edit5
#Bergi gave me an excellent comment:
Yes, Date.now, the external value, is referentially transparent. It always means "current time".
But Date.now() is not, it's a function call returning different numbers depending on external state. The problem with the referentially transparent "concept of current time" is that we cannot compute anything with it.
#KenOKABE: Seems to be the same case as Date.now().
The problem is that it does not mean the current time at the same time, but at different times - a program takes time to execute, and this is what makes it impure.
Sure, we could devise a referentially transparent Date.now function/getter that always returns the time of the start of the program (as if a program execution was immediate), but that's not how Date.now()/Date.Now work. They depend on the execution state of the program. – Bergi
I think we need to discuss it.
Date.now, the external value, is referentially transparent.
⟦Date.now⟧ is, as I mentioned in #Edit4, an indexical (index finger) that is subjective, but as long as it remains in the indexical domain (without an execution/evaluation), it is referentially transparent; that we agreed on.
However, #Bergi suggests that Date.now() (with an execution/evaluation) returns "different values" at different times, and so is no longer referentially transparent. That we have not agreed on.
I think the problem he has shown surely exists, but only in Imperative Programming:
console.log(Date.now()); //some numeric for 2016/05/18 xx:xx:xx ....
console.log(Date.now()); //different numeric for 2016/05/18 xx:xx:xx ....
In this case, Date.now() is not referentially transparent, I agree.
However, in the Functional Programming/Declarative Programming paradigm, we would never write code like the above. We must write this:
const f = () => (Date.now());
and this f is evaluated in some "event-driven" context. That is how functional-programming code behaves.
Yes, this code is identical to
const f = Date.now;
Therefore, in the Functional Programming/Declarative Programming paradigm, Date.now or Date.now() (with an execution/evaluation) never has a problem returning "different values" at different times.
So, again, as I mentioned in EDIT4, as long as ⟦now⟧ is bound to an event declaration in "Event-driven" Programming, the subjective (context-dependent) mathematical inconsistency never occurs, I think.
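A sketch of what "bound to an event declaration" could look like in JavaScript (setInterval is chosen only for illustration): f is never compared against an eager second evaluation in the program text; each evaluation is supplied by the event context:

const f = () => Date.now();

// f is applied only inside the event context; the code itself never
// juxtaposes two eager calls the way the imperative example above does.
const timer = setInterval(() => console.log(f()), 1000);
setTimeout(() => clearInterval(timer), 5000); // stop after ~5 ticks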
Okay, I'm going to take a stab at this. I'm not an expert on this stuff, but I've spent some time thinking about #UdayReddy's answers to this question that you linked to, and I think I've got my head wrapped around it.
Referential Transparency in Analytic Philosophy
I think you have to start where Mr. Reddy did in his answer to the other question. Mr. Reddy wrote:
The term "referent" is used in analytical philosophy to talk about the thing that an expression refers to. It is roughly the same as what we mean by "meaning" or "denotation" in programming language semantics.
Note the use of the word "denotation". Programming languages have a syntax, or grammar, but they also have a semantics, or meaning. Denotational semantics is the practice of translating a language's syntax to its mathematical meaning.
Denotational semantics, as far as I can tell, is not widely understood even though it is one of the most powerful tools around for understanding, designing, and reasoning about computer programs. I gotta spend a little time on it to lay the foundation for the answer to your question.
Denotational Semantics: Mapping Syntax to Meaning
The idea behind denotational semantics is that every syntactical element in a computer language has a corresponding mathematical meaning, or semantics. Denotational semantics is the explicit mapping between syntax and semantics. Take the syntactic numeral 1. You can map it to its mathematical meaning, which is just the mathematical number 1. The semantic function might look like this:
syntax
↓
⟦1⟧ ∷ One
↑
semantics
Sometimes the double-square brackets are used to stand for "meaning", and in this case the number 1 on the semantic side is spelled out as One. Those are just tools for indicating when we are talking about semantics and when we are talking about syntax. You can read that function to mean, "The meaning of the syntactic symbol 1 is the number One."
The example that I used above looks trivial. Of course 1 means One. What else would it mean? It doesn't have to, however. You could do this:
⟦1⟧ ∷ Four
That would be dumb, and no-one would use such a dumb language, but it would be a valid language all the same. But the point is that denotational semantics allows us to be explicit about the mathematical meaning of the programs that we write. Here is a denotation for a function that squares the integer x using lambda notation:
⟦square x⟧ ∷ λx → x²
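To make the mapping concrete, here is a toy denotation function in JavaScript (the mini-language and the name denote are mine): it sends syntactic expressions to the mathematical values they mean:

// Syntax: a tiny expression language represented as plain data.
// Semantics: ordinary numbers and arithmetic.
const denote = (expr) => {
  switch (expr.tag) {
    case "lit":    return expr.value;                      // ⟦1⟧ = One
    case "add":    return denote(expr.l) + denote(expr.r); // ⟦a + b⟧
    case "square": return denote(expr.e) ** 2;             // ⟦square x⟧ = x²
  }
};

console.log(denote({ tag: "square", e: { tag: "lit", value: 3 } })); // 9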
Now we can move on and talk about referential transparency.
Referential Transparency is About Meaning
Allow me to piggyback on Mr. Uday's answer again. He writes:
A context in a sentence is "referentially transparent" if replacing a term in that context by another term that refers to the same entity doesn't alter the meaning.
Compare that to the answer you get when you ask the average programmer what referential transparency means. They usually say something like the answer you quoted above:
Referential transparency, a term commonly used in functional programming, means that given a function and an input value, you will always receive the same output. That is to say there is no external state used in the function.
That answer defines referential transparency in terms of values and side effects, but it totally ignores meaning.
Here is a function that under the second definition is not referentially transparent:
var x = 0
func changeX() -> Int {
    x += 1
    return x
}
It reads some external state, mutates it, and then returns the value. It takes no input, returns a different value every time you call it, and it relies on external state. Meh. Big deal.
Given a correct denotational semantics, it is still referentially transparent.
Why? Because you could replace it with another expression with the same semantic meaning.
Now, the semantics of that function is much more confusing. I don't know how to define it. It has something to do with state transformations: given a state s and a function that produces a new state s', the denotation might look something like this, though I have no idea if this is mathematically correct:
⟦changeX⟧ ∷ λs → (s → s')
Is that right? I don't have a clue. Strachey figured out the denotational semantics for imperative languages, but it is complicated and I don't understand it yet. By establishing the denotational semantics, however, he established that imperative languages are every bit as referentially transparent as functional languages. Why? Because the mathematical meaning can be precisely described. And once you know the precise mathematical meaning of something, you can replace it with any other term that has the same meaning. So even though I don't know what the true semantics of the changeX function is, I know that if I had another term with the same semantic meaning, I could swap one out for the other.
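One concrete (and equally hedged) reading of that denotation in JavaScript: model changeX as a pure function from a state to a pair of new state and result, and thread the state explicitly:

// ⟦changeX⟧ read as a pure state transformer: s → (s', result).
const changeX = (s) => [s + 1, s + 1];

// Threading the state by hand recovers the imperative behavior with no
// hidden mutation - two calls, two results, one pure function:
const [s1, r1] = changeX(0);  // s1 = 1, r1 = 1
const [s2, r2] = changeX(s1); // s2 = 2, r2 = 2
console.log(r1, r2); // 1 2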
So What About Date.now?
I don't know anything about that function. I'm not even sure what language it is from, though I suspect it may be Javascript. But who cares. What is its denotational semantics? What does it mean? What could you insert in its place without changing the meaning of your program?
The ugly truth is, most of us don't have a clue! Denotational semantics isn't that widely used to begin with, and the denotational semantics of imperative programming languages is really complicated (at least for me - if you find it easy, I'd love to have you explain it to me). Take any imperative program consisting of more than about 20 lines of non-trivial code and tell me what its mathematical meaning is. I challenge you.
By contrast, the denotational semantics of Haskell is pretty straightforward. I have very little knowledge of Haskell. I've never done any coding in it beyond messing around in ghci, but what makes it so powerful is that the syntax tracks the semantics more closely than any other language that I know of. Being a pure functional language, its semantics are right there on the surface of the syntax. The syntax is defined by the mathematical concepts that define the meaning.
In fact, the syntax and semantics are so closely related that functional programmers have begun to conflate the two. (I humbly submit this opinion and await the backlash.) That is why you get definitions of referential transparency from FPers that talk about values instead of meaning. In a language like Haskell, the two are almost indistinguishable. Since there is no mutable state and every function is a pure function, all you have to do is look at the value that is produced when the function is evaluated and you've basically determined its meaning.
It may also be that the new-age FPer's explanation of referential transparency is, in a way, more useful than the one that I summarized above. And that cannot be ignored. After all, if what I wrote above is correct then everything that has a denotational semantics is referentially transparent. There is no such thing as a non-referentially transparent function, because every function has a mathematical meaning (though it may be obscure and hard to define) and you could always replace it with another term with the same meaning. What good is that?
Well, it's good for one reason. It lets us know that we don't know jack about the mathematics behind what we do. Like I said above, I haven't a clue what the denotational semantics of Date.now is or what it means in a mathematical sense. Is it referentially transparent? Yeah, I'm sure that it is, since it could be replaced by another function with the same semantics. But I have no idea how to evaluate the semantics of that function, and therefore its referential transparency is of no use to me as a programmer.
So if there's one thing I've learned out of all of this, it is to focus a lot less on whether or not something meets some definition of "referential transparency" and a lot more on trying to make programs out of small, mathematically composable parts that have precise semantic meanings that even I can understand.
Is Date.now referentially transparent?
Here's a referentially-transparent random-number generator:
...so if Date.now were defined in similar fashion, e.g.:
int Date.now()
{
    return 314159265358979; // think of it as
                            // a prototype...
}
then it would also be referentially transparent (but not very useful :-).

What is the difference between a combinator and a higher order function?

I have always thought the definition of both of these were functions that take other functions as arguments. I understand the domain of each is different, but what are their defining characteristics?
Well, let me try to kind of derive their defining characteristics from their different domains ;)
First of all, in their usual context combinators are higher order functions. But as it turns out, context is an important thing to keep in mind when talking about the differences between these two terms:
Higher Order Functions
When we think of higher order functions, the first thing usually mentioned is "oh, they (also) take at least one function as an argument" (thinking of fold, etc)... as if they were something special because of that. Which - depending on context - they are.
Typical context: functional programming, Haskell, any other (usually typed) language where functions are first-class citizens (like when LINQ made C# even more awesome)
Focus: let the caller specify/customize some functionality of this function
Combinators
Combinators are somewhat special functions; primitive ones do not even mind what they are given as arguments (the argument type often does not matter at all, so passing functions as arguments is not a problem at all). So can the identity combinator also be called a "higher order function"? Formally: no, it does not need a function as an argument! But hold on... in which context would you ever encounter/use combinators (like I, K, etc.) instead of just implementing the desired functionality "directly"? Answer: well, in a purely functional context!
This is not a law or anything, but I really cannot think of a situation where you would see actual combinators in a context where you suddenly pass pointers, hash-tables, etc. to a combinator... again, you can do that, but in such scenarios there should really be a better way than using combinators.
So based on this "weak" law of common sense - that you will work with combinators only in a purely functional context - they inherently are higher order functions. What else would you have available to pass as arguments? ;)
Combining combinators (by application only, of course - if you take it seriously) always gives new combinators, which are therefore also higher order functions, again. Primitive combinators usually just represent some basic behaviour or operation (think of the S, K, I, Y combinators; a sketch follows below) that you want to apply to something without using abstractions. But of course the definition of combinators does not limit them to that purpose!
Typical context: (untyped) lambda calculus, combinatory logic (surprise)
Focus: (structurally) combine existing combinators/"building blocks" to something new (e.g. using the Y-combinator to "add recursion" to something that is not recursive, yet)
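For concreteness, here are the classic primitive combinators written as curried JavaScript functions (a sketch in combinatory-logic style):

const I = (a) => a;                        // identity
const K = (a) => (b) => a;                 // constant
const S = (f) => (g) => (x) => f(x)(g(x)); // substitution/application

// Combining combinators by application yields new combinators:
// S K K behaves exactly like I.
console.log(S(K)(K)("anything")); // "anything"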
Summary
Yes, as you can see, it might be more of a contextual/philosophical thing, or about what you want to express: I would never call the K combinator (definition: K = \a -> \b -> a) a "higher order function" - although it is very likely that you will never see K being called with anything other than functions, therefore "making" it a higher order function.
I hope this sort of answered your question - formally they certainly are not the same, but their defining characteristics are pretty similar - personally, I think of combinators as functions used as higher order functions in their typical context (which is usually somewhere between special and weird).
EDIT: I have adjusted my answer a little bit since - as it turned out - it was slightly "biased" by personal experience/impression. :) To get an even better idea about correctly distinguishing combinators from HOFs, read the comments below!
EDIT2: Taking a look at the HaskellWiki also gives a technical definition for combinators that is pretty far away from HOFs!

Map/Reduce: any theoretical foundation beyond "howto"?

For a while I was thinking that you just need a map to a monoid, and then reduce would do reduction according to the monoid's multiplication.
First, this is not exactly how monoids work, and second, this is not exactly how map/reduce works in practice.
Namely, take the ubiquitous "count" example. If there's nothing to count, any map/reduce engine will return an empty dataset, not a neutral element. Bummer.
Besides, in a monoid, an operation is defined for two elements. We can easily extend it to finite sequences, or, due to associativity, to finite ordered sets. But there's no way to extend it to arbitrary "collections" unless we actually have a σ-algebra.
So, what's the theory? I tried to figure it out, but I could not; and I tried to go Google it but found nothing.
I think the right way to think about map-reduce is not as a computational paradigm in its own right, but rather as a control flow construct similar to a while loop. You can view while as a program constructor with two arguments, a predicate function and an arbitrary program. Similarly, the map-reduce construct has two arguments named map and reduce, each functions. So analogously to while, the useful questions to ask are about proving correctness of constructed programs relative to given preconditions and postconditions. And as usual, those questions involve (a) termination and run-time performance and (b) maintenance of invariants.
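A JavaScript sketch of that analogy (the names whileC and mapReduce are mine): both are program constructors parameterized by functions, and the interesting correctness questions attach to the arguments you plug in:

// while as a program constructor: a predicate plus a body yields a program
// (here, a state transformer).
const whileC = (pred, body) => (state) => {
  while (pred(state)) state = body(state);
  return state;
};

// map-reduce as a program constructor: a map function plus a binary reduce
// function (and a unit for the empty case, addressing the "count" gripe).
const mapReduce = (mapF, reduceF, unit) => (items) =>
  items.map(mapF).reduce(reduceF, unit);

const countWords = mapReduce((doc) => doc.split(/\s+/).length, (a, b) => a + b, 0);
console.log(countWords(["a b c", "d e"]));          // 5
console.log(countWords([]));                        // 0 - the neutral element
console.log(whileC((n) => n < 5, (n) => n + 1)(0)); // 5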

Defining point of functional programming

I can enumerate many features of functional programming, but when my friend asked me "Could you define functional programming for me?" I couldn't.
I would say that the defining point of pure functional programming is that all computation is done in functions with no side effects. That is, functions take inputs and return values, but do not change any hidden state. In this paradigm, functions more closely model their mathematical cousins.
This was nailed down for me when I started playing with Erlang, a language with a write-once stack. However, it should be clarified that there is a difference between a programming paradigm and a programming language. Languages that are generally referred to as functional provide a number of features that encourage or enforce the functional paradigm (e.g., Erlang with its write-once stack, higher order functions, closures, etc.). However, the functional programming paradigm can be applied in many languages (with varying degrees of pain).
A lot of the definitions so far have emphasized purity, but there are many languages that are considered functional that are not at all pure (e.g., ML, Scheme). I think the key properties that make a language "functional" are:
Higher-order functions. Functions are a built-in datatype no different from integers and booleans. Anonymous functions are easy to create and idiomatic (e.g., lambdas).
Everything is an expression. In imperative languages, a distinction is made between statements, which mutate state and affect control flow, and expressions, which yield values. In functional languages (even impure functional languages), expression evaluation is the fundamental unit of execution.
Given these two properties, you naturally get the behavior we think of as functional (e.g., expressing computations in terms of folds and maps). Eliminating mutable state is a way to make things even more functional.
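A short JavaScript illustration of those two properties at work: the computation is one expression built from map and a fold, with no statements mutating state:

const xs = [1, 2, 3, 4];

// Higher-order functions + everything-is-an-expression:
const sumOfSquares = xs.map((x) => x * x).reduce((acc, x) => acc + x, 0);
console.log(sumOfSquares); // 30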
From wikipedia:
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the imperative programming style that emphasizes changes in state.
Using a functional approach gives the following benefits:
Concurrent programming is much easier in functional languages.
Functions in FP can never cause side effects - this makes unit testing much easier.
Hot Code Deployment in production environments is much easier.
Functional languages can be reasoned about mathematically.
Lazy evaluation provides potential for performance optimizations.
More expressive - closures, pattern matching, advanced type systems etc. allow programmers to 'say what they mean' more readily.
Brevity - for some classes of program a functional solution is significantly more concise.
There is a great article with more detail here.
Being able to enumerate the features is more useful than trying to define the term itself, as people will use the term "functional programming" in a variety of contexts with many shades of meaning across a continuum, whereas the individual features have individually crisper definitions that are more universally agreed upon.
Below are the features that come to mind. Most people use the term "functional programming" to refer to some subset of those features (the most common/important ones being "purity" and "higher-order functions").
FP features:
Purity (a.k.a. immutability, eschewing side-effects, referential transparency)
Higher-order functions (e.g. pass a function as a parameter, return it as a result, define anonymous function on the fly as a lambda expression)
Laziness (a.k.a. non-strict evaluation, most useful/usable when coupled with purity)
Algebraic data types and pattern matching
Closures
Currying / partial application
Parametric polymorphism (a.k.a. generics)
Recursion (more prominent as a result of purity)
Programming with expressions rather than statements (again, from purity)
...
The more features from the above list you are using, the more likely someone will label what you are doing "functional programming" (and the first two features--purity and higher-order functions--are probably worth the most extra bonus points towards your "FP score").
I have to add that functional programming tends to also abstract the control structures of your program as well as the domain - e.g., you no longer do a 'for loop' on some list of things; rather, you 'map' it with some function to produce the output.
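A quick JavaScript contrast of the two styles that answer describes:

const names = ["ada", "grace"];

// Imperative: spell out the control structure yourself.
const upper1 = [];
for (let i = 0; i < names.length; i++) upper1.push(names[i].toUpperCase());

// Functional: abstract the control structure away with map.
const upper2 = names.map((n) => n.toUpperCase());
console.log(upper1, upper2); // ["ADA","GRACE"] ["ADA","GRACE"]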
I think functional programming is a state of mind as well as the definition given above.
There are two separate definitions:
The older definition (first-class functions) has been given by Chris Conway.
The newer definition (avoiding side effects like mutation) has been given by John Stauffer. This is more generally known as purely functional programming.
This is a source of much confusion...
It's like drawing a picture by using vectors instead of bitmaps - tell the painter how to change the picture instead of what the picture looks like at each step.
It's application of functions as opposed to changing the state.
I think John Stauffer mostly has the definition. I would also add that you need to be able to pass functions around. Essentially you need higher-order functions, meaning you can pass functions around easily (although passing blocks is good enough).
For example, a very popular functional call is map. It is basically equivalent to (here as runnable JavaScript rather than pseudocode):
function map(fn, list) {
  const outList = [];          // some empty list
  for (const item of list) {   // foreach item in list
    outList.push(fn(item));
  }
  return outList;
}
so that code is expressed as map(fn, list). The revolutionary concept is that fn is itself a function. JavaScript is a great example of a language with higher-order functions. Basically, functions can be treated like a variable and passed into functions or returned from functions. C++ and C have function pointers, which can be used similarly. .NET delegates can also be used similarly.
then you can think of all sorts of cool abstractions...
Do you have a function AddItemsInList, MultiplyItemsInList, etc..?
Each function takes (List) and returns a single result
You could create (note, many languages do not allow you to pass + around as a function but it seems the clearest way to express the concept)....
AggregateItemsInList(List, combinefunction, StepFunction)
Increment functions work on indexes... better would be to make them work on the list using list operations like next (and, for incTwo, next next if it exists)...
function incNormal(x) {
  return x + 1
}

function incTwo(x) {
  return x + 2
}
AggregateItemsInList(List, +, incNormal)
Want to do every other item?
AggregateItemsInList(List, +, incTwo)
Want to multiply?
AggregateItemsInList(List, *, incNormal)
Want to add exam scores together?
function AddScores (studenta, studentb) {
  return studenta.score + studentb.score
}
AggregateItemsInList(ListOfStudents, AddScores, incNormal)
Higher-order functions are a very powerful abstraction. Instead of having to write custom methods for numbers, strings, students, etc., you have one aggregate method that loops through a list of anything, and you just have to create the addition operation for each data type.
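A runnable JavaScript sketch of the answer's AggregateItemsInList idea (since + cannot be passed around directly, a small add function stands in for it; the step function walks the indexes as described):

function AggregateItemsInList(list, combine, step) {
  let result = list[0];
  for (let i = step(0); i < list.length; i = step(i)) {
    result = combine(result, list[i]);
  }
  return result;
}

const add = (a, b) => a + b;
const incNormal = (x) => x + 1;
const incTwo = (x) => x + 2;

console.log(AggregateItemsInList([1, 2, 3, 4], add, incNormal));             // 10
console.log(AggregateItemsInList([1, 2, 3, 4], add, incTwo));                // 4 (every other item)
console.log(AggregateItemsInList([1, 2, 3, 4], (a, b) => a * b, incNormal)); // 24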
