What is the difference between procedural programming and functional programming?

I've read the Wikipedia articles for both procedural programming and functional programming, but I'm still slightly confused. Could someone boil it down to the core?

A functional language (ideally) allows you to write a mathematical function, i.e. a function that takes n arguments and returns a value. If the program is executed, this function is logically evaluated as needed.1
A procedural language, on the other hand, performs a series of sequential steps. (There's a way of transforming sequential logic into functional logic called continuation passing style.)
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined; which means that uncertain values like user input or random values are hard to model in purely functional languages.
1 Like everything else in this answer, that's a generalisation. This property, evaluating a computation when its result is needed rather than sequentially where it's called, is known as "laziness". Not all functional languages are universally lazy, nor is laziness restricted to functional programming. Rather, the description given here provides a "mental framework" for thinking about different programming styles, which are not distinct and opposite categories but rather fluid ideas.

Basically the two styles are like Yin and Yang. One is organized, while the other is chaotic. There are situations where functional programming is the obvious choice, and other situations where procedural programming is the better choice. This is why there are at least two languages that have recently come out with a new version that embraces both programming styles (Perl 6 and D 2).
# Procedural:

- The output of a routine does not always have a direct correlation with the input.
- Everything is done in a specific order.
- Execution of a routine may have side effects.
- Tends to emphasize implementing solutions in a linear fashion.
## Perl 6

    sub factorial ( UInt:D $n is copy ) returns UInt {
        # modify "outside" state
        state $call-count++;
        # in this case it is rather pointless as
        # it can't even be accessed from outside

        my $result = 1;

        loop ( ; $n > 0 ; $n-- ){
            $result *= $n;
        }

        return $result;
    }
## D 2

    int factorial( int n ){
        int result = 1;

        for( ; n > 0 ; n-- ){
            result *= n;
        }

        return result;
    }
# Functional:

- Often recursive.
- Always returns the same output for a given input.
- Order of evaluation is usually undefined.
- Must be stateless, i.e. no operation can have side effects.
- Good fit for parallel execution.
- Tends to emphasize a divide-and-conquer approach.
- May have the feature of lazy evaluation.
## Haskell

(copied from Wikipedia):

    fac :: Integer -> Integer
    fac 0 = 1
    fac n | n > 0 = n * fac (n-1)

or in one line:

    fac n = if n > 0 then n * fac (n-1) else 1
##Perl 6 ##
proto sub factorial ( UInt:D $n ) returns UInt {*}
multi sub factorial ( 0 ) { 1 }
multi sub factorial ( $n ) { $n * samewith $n-1 } # { $n * factorial $n-1 }
## D 2

    pure int factorial( invariant int n ){
        if( n <= 1 ){
            return 1;
        }else{
            return n * factorial( n-1 );
        }
    }
# Side note:
Factorial is actually a common example to show how easy it is to create new operators in Perl 6 the same way you would create a subroutine. This feature is so ingrained into Perl 6 that most operators in the Rakudo implementation are defined this way. It also allows you to add your own multi candidates to existing operators.
    sub postfix:< ! > ( UInt:D $n --> UInt )
      is tighter(&infix:<*>)
    { [*] 2 .. $n }

    say 5!; # 120␤
This example also shows range creation (2..$n) and the list reduction meta-operator ([ OPERATOR ] LIST) combined with the numeric infix multiplication operator (*).
It also shows that you can put --> UInt in the signature instead of returns UInt after it.
(You can get away with starting the range at 2, as the multiply "operator" returns 1 when called without any arguments.)

I've never seen this definition given elsewhere, but I think this sums up the differences given here fairly well:
Functional programming focuses on expressions
Procedural programming focuses on statements
Expressions have values. A functional program is an expression whose value is a sequence of instructions for the computer to carry out.
Statements don't have values and instead modify the state of some conceptual machine.
In a purely functional language there would be no statements, in the sense that there's no way to manipulate state (they might still have a syntactic construct named "statement", but unless it manipulates state I wouldn't call it a statement in this sense). In a purely procedural language there would be no expressions, everything would be an instruction which manipulates the state of the machine.
Haskell would be an example of a purely functional language because there is no way to manipulate state. Machine code would be an example of a purely procedural language because everything in a program is a statement which manipulates the state of the registers and memory of the machine.
The confusing part is that the vast majority of programming languages contain both expressions and statements, allowing you to mix paradigms. Languages can be classified as more functional or more procedural based on how much they encourage the use of statements vs expressions.
For example, C would be more functional than COBOL because a function call is an expression, whereas calling a sub-program in COBOL is a statement (that manipulates the state of shared variables and doesn't return a value). Python would be more functional than C because it allows you to express conditional logic as an expression, using short-circuit evaluation (test and path1 or path2, as opposed to if statements). Scheme would be more functional than Python because everything in Scheme is an expression.
You can still write in a functional style in a language which encourages the procedural paradigm and vice versa. It's just harder and/or more awkward to write in a paradigm which isn't encouraged by the language.
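To make the expression/statement distinction concrete, here is a minimal Haskell sketch (my example, not the answerer's): the conditional is itself an expression with a value, so it can sit inside a larger expression, which a C-style if statement cannot.

    -- if/then/else is an expression: it denotes a value
    describe :: Int -> String
    describe n = (if even n then "even" else "odd") ++ " number"

    main :: IO ()
    main = putStrLn (describe 7)  -- prints "odd number"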

Functional Programming

    num = 1

    def function_to_add_one(num):
        num += 1
        return num

    function_to_add_one(num)
    function_to_add_one(num)
    function_to_add_one(num)
    function_to_add_one(num)
    function_to_add_one(num)

    # Final Output: 2
Procedural Programming

    num = 1

    def procedure_to_add_one():
        global num
        num += 1
        return num

    procedure_to_add_one()
    procedure_to_add_one()
    procedure_to_add_one()
    procedure_to_add_one()
    procedure_to_add_one()

    # Final Output: 6
function_to_add_one is a function
procedure_to_add_one is a procedure
Even if you run the function five times, every time it will return 2.
If you run the procedure five times, at the end of the fifth run it will give you 6.
DISCLAIMER: Obviously this is a hyper-simplified view of reality. This answer just gives a taste of "functions" as opposed to "procedures". Nothing more. Once you have tasted this superficial yet penetrating intuition, start exploring the two paradigms, and you will start to see the difference quite clearly.
Helps my students, hope it helps you too.

In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the procedural programming style that emphasizes changes in state.

I believe that procedural/functional/object-oriented programming are about how to approach a problem.
The first style plans everything in steps, and solves the problem by implementing one step (a procedure) at a time. Functional programming, on the other hand, emphasizes the divide-and-conquer approach: the problem is divided into sub-problems, each sub-problem is solved (by creating a function to solve it), and the results are combined to create the answer for the whole problem. Lastly, object-oriented programming mimics the real world by creating a mini-world inside the computer with many objects, each of which has (somewhat) unique characteristics and interacts with the others. From those interactions the result emerges.
Each style of programming has its own advantages and weaknesses. Hence, doing something such as "pure programming" (i.e. purely procedural - no one does this, by the way, which is kind of weird - or purely functional or purely object-oriented) is very difficult, if not impossible, except for some elementary problems specially designed to demonstrate the advantage of a programming style (hence, we call those who like pureness "weenies" :D).
Then, from those styles, we have programming languages that are designed and optimized for each style. For example, Assembly is all about procedural. Okay, most early languages are procedural, not only Asm; so are C, Pascal (and Fortran, I heard). Then we have the famous Java in the object-oriented school (actually, Java and C# are also in a class called "money-oriented", but that is a subject for another discussion). Smalltalk is also object-oriented. In the functional school, we have the "nearly functional" (some consider them impure) Lisp family and ML family, and the "purely functional" Haskell, Erlang, etc. By the way, there are many general languages such as Perl, Python and Ruby.

To expand on Konrad's comment:
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined;
Because of this, functional code is generally easier to parallelize. Since there are (generally) no side effects of the functions, and they (generally) just act on their arguments, a lot of concurrency issues go away.
Functional programming is also used when you need to be capable of proving your code is correct. This is much harder to do with procedural programming (not easy with functional, but still easier).
Disclaimer: I haven't used functional programming in years, and only recently started looking at it again, so I might not be completely correct here. :)

One thing I hadn't seen really emphasized here is that modern functional languages such as Haskell rely more on first-class functions for flow control than on explicit recursion. You don't need to define factorial recursively in Haskell, as was done above. I think something like
fac n = foldr (*) 1 [1..n]
is a perfectly idiomatic construction, and much closer in spirit to using a loop than to using explicit recursion.
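To put that in context, a small sketch (Prelude functions only, my own examples) of other loop-shaped computations written with folds, maps and filters rather than explicit recursion:

    sumSquares :: Integer -> Integer
    sumSquares n = foldr (+) 0 (map (^2) [1..n])

    countMultiples :: Integer -> Integer -> Int
    countMultiples d n = length (filter (\x -> x `mod` d == 0) [1..n])

    main :: IO ()
    main = do
      print (sumSquares 3)         -- 1 + 4 + 9 = 14
      print (countMultiples 3 10)  -- 3, 6, 9 -> 3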

Functional programming is identical to procedural programming in which global variables are not used.

Procedural languages tend to keep track of state (using variables) and tend to execute as a sequence of steps. Purely functional languages don't keep track of state, use immutable values, and tend to execute as a series of dependencies. In many cases the status of the call stack will hold the information that would be equivalent to that which would be stored in state variables in procedural code.
Recursion is a classic example of functional style programming.

Konrad said:
As a consequence, a purely functional program always yields the same value for an input,
and the order of evaluation is not well-defined; which means that uncertain values like
user input or random values are hard to model in purely functional languages.
The order of evaluation in a purely functional program may be hard(er) to reason about (especially with laziness) or even unimportant but I think that saying it is not well defined makes it sound like you can't tell if your program is going to work at all!
Perhaps a better explanation would be that control flow in functional programs is based on when the value of a function's arguments is needed. The Good Thing about this is that in well-written programs, state becomes explicit: each function lists its inputs as parameters instead of arbitrarily munging global state. So on some level, it is easier to reason about order of evaluation with respect to one function at a time. Each function can ignore the rest of the universe and focus on what it needs to do. When combined, functions are guaranteed to work the same[1] as they would in isolation.
... uncertain values like user input or random values are hard to model in purely
functional languages.
The solution to the input problem in purely functional programs is to embed an imperative language as a DSL using a sufficiently powerful abstraction. In imperative (or non-pure functional) languages this is not needed because you can "cheat" and pass state implicitly and order of evaluation is explicit (whether you like it or not). Because of this "cheating" and forced evaluation of all parameters to every function, in imperative languages 1) you lose the ability to create your own control flow mechanisms (without macros), 2) code isn't inherently thread safe and/or parallelizable by default, 3) and implementing something like undo (time travel) takes careful work (imperative programmer must store a recipe for getting the old value(s) back!), whereas pure functional programming buys you all these things—and a few more I may have forgotten—"for free".
I hope this doesn't sound like zealotry, I just wanted to add some perspective. Imperative programming and especially mixed paradigm programming in powerful languages like C# 3.0 are still totally effective ways to get things done and there is no silver bullet.
[1] ... except possibly with respect to memory usage (cf. foldl and foldl' in Haskell).
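To make that footnote concrete, here is a small sketch of the foldl/foldl' distinction (standard Haskell, Data.List):

    import Data.List (foldl')

    -- foldl builds a long chain of unevaluated thunks before summing;
    -- foldl' forces the accumulator at every step and runs in constant space.
    lazySum, strictSum :: [Integer] -> Integer
    lazySum   = foldl  (+) 0
    strictSum = foldl' (+) 0

    main :: IO ()
    main = print (strictSum [1..1000000])  -- 500000500000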

To expand on Konrad's comment:
and the order of evaluation is not
well-defined
Some functional languages have what is called lazy evaluation, which means a function is not executed until the value is needed. Until that time, the function itself is what is passed around.
Procedural languages are step 1, step 2, step 3... If in step 2 you say add 2 + 2, it does it right then. With lazy evaluation you would say add 2 + 2, but if the result is never used, it never does the addition.
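A minimal Haskell sketch of that behaviour (the first binding is deliberately never used, so it is never computed):

    main :: IO ()
    main = do
      let unused = 2 + 2            -- bound, but only as an unevaluated thunk
      let firstFive = take 5 [1..]  -- even an infinite list is fine if only part is demanded
      print firstFive               -- [1,2,3,4,5]; unused is never evaluated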

If you have a chance, I would recommend getting a copy of Lisp/Scheme and doing some projects in it. Most of the ideas that have lately become bandwagons were expressed in Lisp decades ago: functional programming, continuations (as closures), garbage collection, even XML.
So that would be a good way to get a head start on all these current ideas, and a few more besides, like symbolic computation.
You should know what functional programming is good for, and what it isn't good for. It isn't good for everything. Some problems are best expressed in terms of side effects, where the same question gives different answers depending on when it is asked.

@Creighton:
In Haskell there is a library function called product:

    product list = foldr (*) 1 list

or simply:

    product = foldr (*) 1

so the "idiomatic" factorial

    fac n = foldr (*) 1 [1..n]

would simply be

    fac n = product [1..n]

Procedural programming divides sequences of statements and conditional constructs into separate blocks called procedures that are parameterized over arguments that are (non-functional) values.
Functional programming is the same except that functions are first-class values, so they can be passed as arguments to other functions and returned as results from function calls.
Note that functional programming is a generalization of procedural programming in this interpretation. However, a minority interpret "functional programming" to mean side-effect-free which is quite different but irrelevant for all major functional languages except Haskell.
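A minimal Haskell sketch of that first-class-values point (both names are made up for illustration): twice takes a function as an argument, and addN returns one as its result.

    twice :: (a -> a) -> a -> a
    twice f = f . f

    addN :: Int -> (Int -> Int)   -- returns a function
    addN n = \x -> x + n

    main :: IO ()
    main = print (twice (addN 3) 10)  -- 16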

None of the answers here show idiomatic functional programming. The recursive factorial answer is great for representing recursion in FP, but the majority of code is not recursive, so I don't think that answer is fully representative.
Say you have an array of strings, where each string represents an integer like "5" or "-200". You want to check this input array of strings against an internal test case (using integer comparison). Both solutions are shown below.
Procedural
    arr_equal(a : [Int], b : [Str]) -> Bool {
        if(a.len != b.len) {
            return false;
        }

        bool ret = true;
        for( int i = 0; i < a.len /* Optimized with && ret */; i++ ) {
            int a_int = a[i];
            int b_int = parseInt(b[i]);
            ret &= a_int == b_int;
        }
        return ret;
    }
Functional
    eq = i, j => i == j       # This is usually a built-in
    toInt = i => parseInt(i)  # Of course, parseInt === toInt here, but this is for visualization

    arr_equal(a : [Int], b : [Str]) -> Bool =
        zip(a, b.map(toInt))  # Combines into [Int, Int]
            .map(eq)
            .reduce(true, (i, j) => i && j)  # Start with true, and continuously && it with each value
While pure functional languages are generally research languages (the real world likes its side effects), real-world procedural languages will use the much simpler functional syntax when appropriate.
This is usually implemented with an external library like Lodash, or available built in with newer languages like Rust. The heavy lifting of functional programming is done with functions/concepts like map, filter, reduce, currying, and partial, the last three of which you can look up for further understanding.
Addendum
In order to be used in the wild, the compiler will normally have to work out how to convert the functional version into the procedural version internally, as function-call overhead is too high. Recursive cases such as the factorial shown will use tricks such as tail calls to remove O(n) memory usage. The fact that there are no side effects allows functional compilers to implement the && ret optimization even when the .reduce is done last. Using Lodash in JS obviously does not allow for any optimization, so it is a hit to performance (which isn't usually a concern with web development). Languages like Rust will optimize internally (and have functions such as try_fold to assist the && ret optimization).
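For comparison, a sketch of the functional version above as runnable Haskell (arrEqual is a hypothetical name, and read is assumed to succeed on every string; note that `and` short-circuits on the first False, much like the && ret optimization discussed):

    arrEqual :: [Int] -> [String] -> Bool
    arrEqual a b =
        length a == length b
            && and (zipWith (==) a (map read b))

    main :: IO ()
    main = print (arrEqual [5, -200] ["5", "-200"])  -- True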

To understand the difference, one needs to understand that the "godfather" paradigm of both procedural and functional programming is imperative programming.
Basically, procedural programming is merely a way of structuring imperative programs in which the primary method of abstraction is the "procedure" (or "function" in some programming languages). Even object-oriented programming is just another way of structuring an imperative program, where the state is encapsulated in objects, becoming an object with a "current state", plus this object has a set of functions, methods, and other stuff that let you, the programmer, manipulate or update the state.
Now, in regards to functional programming, the gist of its approach is that it identifies what values to take and how these values should be transferred (so there is no state and no mutable data, as it takes functions as first-class values and passes them as parameters to other functions).
P.S.: understanding what every programming paradigm is used for should clarify the differences between all of them.
P.P.S.: At the end of the day, programming paradigms are just different approaches to solving problems.
P.P.P.S.: this Quora answer has a great explanation.

Related

Functional vs Procedural programming [duplicate]


What's the difference between calling a function and functional programming

I'm referring to this article https://codeburst.io/a-beginner-friendly-intro-to-functional-programming-4f69aa109569. I've been confused about how functional programming differs from procedural programming for a while and want to understand it.
    function getOdds2(arr){
        return arr.filter(num => num % 2 !== 0)
    }
"we simply define the outcome we want to see by using a method called filter, and allow the machine to take care of all the steps in between. This is a more declarative approach."
They say they define the outcome we want to see. How is that any different from procedural programming, where you call a function (a subroutine) and get the output you want to see? How is this "declarative approach" any different from just calling a function? The only difference I see between functional and procedural programming is the style.
Procedural programming uses procedures (i.e. calls functions, aka subroutines) and potentially (refers to, and/or changes in place aka "mutates") lots of global variables and/or data structures, i.e. "state".
Functional programming seeks to have all the relevant state in the function's arguments and return values, to minimize if not outright forbid the state external to the functions used (called) in a program.
So, it seeks to have few (or none at all) global variables and data structures, just have functions (subroutines) passing data along until the final one produces the result.
Both are calling functions / subroutines, yes. That's just structured programming, though.
"Pure" functional programming doesn't want its functions to have (and maintain) any internal state (between the calls) either. That way the programming functions / subroutines / procedures become much like mathematical functions, always producing the same output for the same input(s). The operations of such functions become reliable, allowing for the more declarative style of programming. (when it might do something different each time it's called, there's no name we can give it, no concept to call it for).
With functional programming, everything revolves around functions. In the example you gave above, you are defining and executing a function without even knowing it.
You're passing a lambda (anonymous function) to the filter function. With functional programming, that's really the point - defining everything in terms of functions, passing functions as arguments to functions, etc.
So yes, it is about style. It's about ease of expressing your ideas in code and being more terse.
Take this for example:
    function getOdds(array) {
        var odds = [];
        for(var i = 0; i < array.length; i++) {
            if(isOdd(array[i])) odds.push(array[i]);
        }
        return odds;
    }

    function isOdd(n) {
        return n % 2 != 0;
    }
it gets the job done but it's really verbose. Now compare it to:
    function getOdds(array) {
        return array.filter(n => n % 2 != 0);
    }
Here you're defining the isOdd function anonymously as a lambda and passing it directly to the filter function. Saves a lot of keystrokes.
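(For comparison, the same function in Haskell, where odd is already a named predicate in the Prelude, is terser still:)

    getOdds :: [Int] -> [Int]
    getOdds = filter odd

    main :: IO ()
    main = print (getOdds [1,2,3,4,5])  -- [1,3,5]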

Higher-order functions in VHDL or Verilog

I've been programming in more functional-style languages and have gotten to appreciate things like tuples, and higher-order functions such as maps, and folds/aggregates. Do either VHDL or Verilog have any of these kinds of constructs?
For example, is there any way to do even simple things like

    divByThreeCount = count (\x -> x `mod` 3 == 0) myArray

or

    myArray2 = map (\x -> x `mod` 3) myArray

or even better yet, let me define my own higher-level constructs recursively, in either of these languages?
I think you're right that there is a clash between the imperative style of HDLs and the more functional aspects of combinatorial circuits. Describing parallel circuits with languages which are linear in nature is odd, but I think a full blown functional HDL would be a disaster.
The thing with functional languages (I find) is that it's very easy to write code which takes huge resources, either in time, memory, or processing power. That's their power; they can express complexity quite succinctly, but they do so at the expense of resource management. Put that in an HDL and I think you'll have a lot of large, power hungry designs when synthesised.
Take your map example:
    myArray2 = map (\x -> x `mod` 3) myArray
How would you like that to synthesize? To me you've described a modulo operator per element of the array. Ignoring the fact that modulo isn't cheap, was that what you intended, and how would you change it if it wasn't? If I start breaking that function up in some way so that I can say "instantiate a single operator and use it multiple times" I lost a lot of the power of functional languages.
...and then we've got retained state. Retained state is everywhere in hardware. You need it. You certainly wouldn't use a purely functional language.
That said, don't throw away your functional design patterns. Combinatorial processes (VHDL) and "always blocks" (Verilog) can be viewed as functions that apply themselves to the data presented at their input. Pipelines can be viewed as chains of functions. Often the way you structure a design looks functional, and can share a lot with the "Actor" design pattern that's popular in Erlang.
So is there stuff to learn from functional programming? Certainly. Do I wish VHDL and Verilog took more from functional languages? Sometimes. The trouble is functional languages get too high level too quickly. If I can't distinguish between "use one instance of f() many times" and "use many instances of f()" then it doesn't do what a Hardware Description Language must do... describe hardware.
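To illustrate the "pipelines as chains of functions" remark, a minimal Haskell sketch with made-up stages; the point is only that a pipeline reads as function composition:

    stage1, stage2, stage3 :: Int -> Int  -- hypothetical pipeline stages
    stage1 = (+ 1)
    stage2 = (* 2)
    stage3 = subtract 3

    pipeline :: Int -> Int
    pipeline = stage3 . stage2 . stage1

    main :: IO ()
    main = print (pipeline 10)  -- ((10 + 1) * 2) - 3 = 19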
Have a look at http://clash-lang.org for an example of a higher-order language that is transpiled into VHDL/Verilog. It allows functions to be passed as arguments, currying etc., even a limited set of recursive data structures (Vec). As you would expect the pure moore function instantiates a stateful Moore machine given step/output functions and a start state. It has this signature:
moore :: (s -> i -> s) -> (s -> o) -> s -> Signal i -> Signal o
Signals model the time-evolved values in sequential logic.
BlueSpec is used by a small company known by only a few people: Intel.
It's an extension of SystemVerilog (SV) called BSV.
But it's NOT open; I don't think you can try, use, or even learn it without paying BlueSpec.com BIG money.
There's also Lava, which is used by Xilinx, but I don't know if you can use it directly.
Note: in VHDL (and maybe Verilog too), functions CAN'T have a time delay (you can't ask a function to wait for a clk event).
VHDL has a standard library for complex-number calculations.
And in VHDL you can change or overload a system function like add ("+") and define your own implementation. You can declare vector or matrix add, sub, mul, or div (+ - * /) functions, use generic (any) sizes and recursive declarations; most synthesizers will "understand you" and do what you've asked.

understanding referential transparency

Generally, I have a headache because something is wrong with my reasoning:

1. For 1 set of arguments, a referentially transparent function will always return 1 set of output values.
2. That means that such a function could be represented as a truth table (a table where 1 set of output parameters is specified for 1 set of arguments).
3. That makes the logic behind such functions combinational (as opposed to sequential).
4. That means that with a pure functional language (one that has only referentially transparent functions) it is possible to describe only combinational logic.

The last statement is derived from this reasoning, but it's obviously false; that means there is an error in the reasoning. [Question: where is the error in this reasoning?]
UPD2: You guys are saying lots of interesting stuff, but not answering my question. I have defined it more explicitly now; sorry for messing up the question definition!
Question: where is the error in this reasoning?
A referentially transparent function might require an infinite truth table to represent its behavior. You will be hard pressed to design an infinite circuit in combinational logic.
Another error: the behavior of sequential logic can be represented purely functionally as a function from states to states. The fact that in the implementation these states occur sequentially in time does not prevent one from defining a purely referentially transparent function which describes how state evolves over time.
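A minimal Haskell sketch of that second point, with a hypothetical counter: the "sequential" behaviour is just a pure state-to-state function, iterated.

    newtype Counter = Counter Int deriving Show

    step :: Counter -> Counter       -- a pure function from states to states
    step (Counter n) = Counter (n + 1)

    after :: Int -> Counter -> Counter
    after k s = iterate step s !! k  -- "time" is just repeated application

    main :: IO ()
    main = print (after 5 (Counter 0))  -- Counter 5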
Edit: Although I apparently missed the bullseye on the actual question, I think my answer is pretty good, so I'm keeping it :-) (see below).
I guess a more concise way to phrase the question might be: can a purely functional language compute anything an imperative one can?
First of all, suppose you took an imperative language like C and made it so you can't alter variables after defining them. E.g.:
    int i;
    for (i = 0;  // okay, that's one assignment
         i < 10; // just looking, that's all
         i++)    // BUZZZ! Sorry, can't do that!
Well, there goes your for loop. Do we get to keep our while loop?
    while (i < 10)
Sure, but it's not very useful. i can't change, so it's either going to run forever or not run at all.
How about recursion? Yes, you get to keep recursion, and it's still plenty useful:
    int sum(int *items, unsigned int count)
    {
        if (count) {
            // count the first item and sum the rest
            return *items + sum(items + 1, count - 1);
        } else {
            // no items
            return 0;
        }
    }
Now, with functions, we don't alter state, but variables can, well, vary. Once a variable passes into our function, it's locked in. However, we can call the function again (recursion), and it's like getting a brand new set of variables (the old ones stay the same). Although there are multiple instances of items and count, sum((int[]){1,2,3}, 3) will always evaluate to 6, so you can replace that expression with 6 if you like.
Can we still do anything we want? I'm not 100% sure, but I think the answer is "yes". You certainly can if you have closures, though.
You have it right. The idea is, once a variable is defined, it can't be redefined. A referentially transparent expression, given the same variables, always yields the same result value.
I recommend looking into Haskell, a purely functional language. Haskell doesn't have an "assignment" operator, strictly speaking. For instance:
    my_sum numbers = ??? where
        i = 0
        total = 0
Here, you can't write a "for loop" that increments i and total as it goes along. All is not lost, though. Just use recursion to keep getting new values of i and total:
    my_sum numbers = f 0 0 where
        f i total =
            if i < length numbers
                then f i' total'
                else total
            where
                i' = i + 1
                total' = total + (numbers !! i)
(Note that this is a stupid way to sum a list in Haskell, but it demonstrates a method of coping with single assignment.)
Now, consider this highly imperative-looking code:
    main = do
        a <- readLn
        b <- readLn
        print (a + b)
It's actually syntactic sugar for:
    main =
        readLn >>= (\a ->
        readLn >>= (\b ->
        print (a + b)))
The idea is, instead of main being a function consisting of a list of statements, main is an IO action that Haskell executes, and actions are defined and chained together with bind operations. Also, an action that does nothing, yielding an arbitrary value, can be defined with the return function.
Note that bind and return aren't specific to actions. They can be used with any type that calls itself a Monad to do all sorts of funky things.
To clarify, consider readLn. readLn is an action that, if executed, would read a line from standard input and yield its parsed value. To do something with that value, we can't store it in a variable because that would violate referential transparency:
    a = readLn
If this were allowed, a's value would depend on the world and would be different every time we called readLn, meaning readLn wouldn't be referentially transparent.
Instead, we bind the readLn action to a function that deals with the action, yielding a new action, like so:
    readLn >>= (\x -> print (x + 1))
The result of this expression is an action value. If Haskell got off the couch and performed this action, it would read an integer, increment it, and print it. By binding the result of an action to a function that does something with the result, we get to keep referential transparency while playing around in the world of state.
As far as I understand it, referential transparency just means: A given function will always yield the same result when invoked with the same arguments. So, the mathematical functions you learned about in school are referentially transparent.
A language you could check out in order to learn how things are done in a purely functional language would be Haskell. There are ways to use "updateable storage possibilities" like the Reader Monad, and the State Monad for example. If you're interested in purely functional data structures, Okasaki might be a good read.
And yes, you're right: order of evaluation in a purely functional language like Haskell does not matter the way it does in non-functional languages, because if there are no side effects, there is no reason to do something before/after something else, unless the input of one depends on the output of the other, or means like monads come into play.
I don't really know about the truth-table question.
Here's my stab at answering the question:
Any system can be described as a combinatorial function, large or small.
There's nothing wrong with the reasoning that pure functions can only deal with combinatorial logic -- it's true, just that functional languages hide that from you to some extent or another.
You could even describe, say, the workings of a game engine as a truth table or a combinatorial function.
You might have a deterministic function that takes in "the current state of the entire game" as the RAM occupied by the game engine and the keyboard input, and returns "the state of the game one frame later". The return value would be determined by the combinations of the bits in the input.
Of course, in any meaningful and sane function, the input is parsed down to blocks of integers, decimals and booleans, but the combinations of the bits in those values is still determining the output of your function.
Keep in mind also that basic digital logic can be described in truth tables. The only reason that that's not done for anything more than, say, arithmetic on 4-bit integers, is because the size of the truth table grows exponentially.
The error in your reasoning is the following:
"that means that such a function could be represented as a truth table".
You conclude that from a functional language's property of referential transparency. So far the conclusion sounds plausible, but you overlook that a function is able to accept collections as input and process them, in contrast to the fixed inputs of a logic gate.
Therefore a function does not equal a logic gate, but rather a construction plan for such a logic gate depending on the actual (runtime-determined) input!
To comment on your comment: functional languages can, although stateless, implement a state machine by constructing the states from scratch each time they are accessed.

Is there a strongly typed programming language which allows you to define new operators?

I am currently looking for a programming language to write a math class in. I know that there are lots and lots of them everywhere around, but since I'm going to start studying math next semester, I thought this might be a good way to get a deeper insight in to what I've learned.
Thanks for your replies.
BTW: If you are wondering what I wanted to ask:
"Is there a strongly typed programming language which allows you to define new operators?"
Like EFraim said, Haskell makes this pretty easy:
    % ghci
    ghci> let a *-* b = (a*a) - (b*b)
    ghci> :type (*-*)
    (*-*) :: (Num a) => a -> a -> a
    ghci> 4 *-* 3
    7
    ghci> 1.2 *-* 0.9
    0.6299999999999999
    ghci> (*-*) 5 3
    16
    ghci> :{
    let gcd a b | a > b = gcd (a - b) b
                | b > a = gcd a (b - a)
                | otherwise = a
    :}
    ghci> :type gcd
    gcd :: (Ord a, Num a) => a -> a -> a
    ghci> gcd 3 6
    3
    ghci> gcd 12 11
    1
    ghci> 18 `gcd` 12
    6
You can define new infix operators (symbols only) using an infix syntax. You can then use them as infix operators, or enclose them in parens to use them as a normal function. You can also use normal functions (letters, numbers, underscores and single-quotes) as operators by enclosing them in backticks.
Well, you can redefine a fixed set of operators in many languages, like C++ or C#. Others, like F# or Scala allow you to define even new operators (even as infix ones) which might be even nicer for math-y stuff.
Maybe Haskell? Allows you to define arbitrary infix operators.
Ted Neward wrote a series of articles on Scala aimed at Java developers, and he finished it off by demonstrating how to write a mathematical domain language in Scala (which, incidentally, is a statically-typed language).
Part 1
Part 2
Part 3
In C++ you can define operators that work on your own classes, but I don't think you can define them on primitive types like ints, since those can't have instance methods. You could make your own number class in C++ and redefine ALL the operators, including +, *, etc.
To make new operators on primitive types you have to turn to functional programming (it seems from the other answers). This is fine; just keep in mind that functional programming is very different from OOP. But it will be a great new challenge, and functional programming is great for math, as it comes from the lambda calculus. Learning functional programming will teach you different skills and help you greatly with math and programming in general. :D
good luck!
Eiffel allows you to define new operators.
http://dev.eiffel.com
Inasmuch as the procedure you apply to the arguments in a Lisp combination is called an “operator,” then yeah, you can define new operators till the cows come home.
Ada has support for overriding infix operators: here is the reference manual chapter.
Unfortunately you can't create your own new operators; it seems you can only override the existing ones.
    type wobble is new integer range 23..89;

    function "+" (A, B: wobble) return wobble is
    begin
        ...
    end "+";
Ada is not a hugely popular language, it has to be said, but as far as strong typing goes, you can't get much stronger.
EDIT:
Another language which hasn't been mentioned yet is D. It also is a strongly typed language, and supports operator overloading. Again, it doesn't support user-defined infix operators.
From http://www.digitalmars.com/d/1.0/rationale.html
Why not allow user definable operators?
These can be very useful for attaching new infix operations to various unicode symbols. The trouble is that in D, the tokens are supposed to be completely independent of the semantic analysis. User definable operators would break that.
Both OCaml and F# have infix operators. They have a special set of characters that are allowed within operator syntax, but both can also be made to use any function infix (see the OCaml discussion).
I think you should probably think deeply about why you want to use this feature. It seems to me that there are much more important considerations when choosing a language.
I can only think of one possible meaning for the word "operator" in this context, which is just syntactic sugar for a function call, e.g. foo + bar would be translated as a call to a function +(a, b).
This is sometimes useful, but not often. I can think of very few instances where I have overloaded/defined an operator.
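As an aside, Haskell makes that reading literal: an infix operator really is just a function with a symbolic name, and the two spellings below are the same call.

    main :: IO ()
    main = do
      print (1 + 2)                -- infix sugar
      print ((+) 1 2)              -- the same call, written prefix
      print (foldr (+) 0 [1,2,3])  -- the operator passed around like any value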
As noted in the other answers, Haskell does allow you to define new infix operators. However, a purely functional language with lazy evaluation can be a bit of a mouthful. I would probably recommend SML over Haskell if you feel like trying out a functional language for the first time. The type system is a bit simpler, you can use side effects, and it is not lazy.
F# is also very interesting and also features units of measure, which AFAIK is unique to that language. If you have a need for the feature it can be invaluable.
Off the top of my head I can't think of any statically typed imperative languages with infix operators, but you might want to use a functional language for math programming anyway, since it is much easier to prove facts about a functional program.
You might also want to create a small DSL if syntax issues like infix operators are so important to you. Then you can write the program in whatever language you want and still specify the math in a convenient way.
What do you mean by strong typing? Do you mean static typing (where everything has a type that is known at compile time, and conversions are restricted) or strong typing (everything has a type known at run time, and conversions are restricted)?
I'd go with Common Lisp. It doesn't actually have operators (for example, adding a and b is (+ a b)), but rather functions, which can be defined freely. It has strong typing in that every object has a definite type, even if it can't be known at compile time, and conversions are restricted. It's a truly great language for exploratory programming, and it sounds like that's what you'll be doing.
Ruby does.
require 'rubygems'
require 'superators'
class Array
superator "<---" do |operand|
self << operand.reverse
end
end
["jay"] <--- "spillihp"
You can actually do what you need with C# through operator overloading.
Example:
    public static Complex operator -(Complex c)
    {
        Complex temp = new Complex();
        temp.x = -c.x;
        temp.y = -c.y;
        return temp;
    }
