Struggling with the basics - functional-programming

I am trying to teach myself SML. Though I can write some SML code, I have not yet had the "aha" moment.
In:
val x = 5;
how does binding a value of 5 to the name x differ from assigning a value of 5 to the memory location/variable x in imperative programming?
How does the above expression elucidate "a style that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data"?
What do I have to throw away about imperative programming to catch on FP quickly?
Please be gentle with me.

How does binding a value of 5 to the name x differ from assigning a value of 5 to the memory location/variable x in imperative programming?
The key difference between variables in functional programming and variables in imperative programming is that variables in functional programming cannot be modified.
Then why are they called “variables”?
Variables in functional programming languages don't vary in the sense that you can modify their value. However, they do vary in the mathematical sense in that they represent some unknown constant value. For example, consider the following quadratic expression:
2x^2 + 7x + 6
This is a quadratic expression in one variable. The variable x varies in the sense that you can select any value for x. However, once you select a certain value for x you cannot change that value.
Now, if you have a quadratic equation then the choice of x is no longer arbitrary. For example, consider the following quadratic equation:
2x^2 + 7x + 6 = 0
The only choices for x are those which satisfy the equation (i.e. x = -2 or x = -1.5, since 2x^2 + 7x + 6 = (2x + 3)(x + 2)).
A mathematical function, on the other hand, is a relation between two sets, called the domain (input set) and the codomain (output set). For example, consider the following function:
f(x) = 2x^2 + 7x + 6
This function relates every x belonging to the set of real numbers to its corresponding 2x^2 + 7x + 6, also belonging to the set of real numbers. Again, we are free to choose any value for x. However, once we choose some value for x we are not allowed to modify it.
The point is that they are called variables because they vary in the mathematical sense that they can assume different values. Hence, they vary with instance. However, they do not vary with time.
This immutability of variables is important in functional programming because it makes your program referentially transparent (i.e. invoking a function can be thought of as simply replacing the function call with the function body).
If variables were allowed to vary with time then you wouldn't be able to simply substitute the function call with the function body because the substituted variable could change over time. Hence, your substituted function body could become invalid over time.
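To make the substitution idea concrete, here is a minimal sketch (in Haskell, whose semantics match SML's on this point; the names are illustrative):
double :: Int -> Int
double x = x + x

-- Referential transparency: any call may be replaced by the function body
-- with the argument substituted for x, because x can never change.
--   double 5  ==>  5 + 5  ==>  10
result :: Int
result = double 5  -- always 10, no matter when or how often it is evaluated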
How does the above expression elucidate "a style that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data"?
The expression val x = 5; could be thought of as a function application (fn x => y) 5 where y is the rest of the program. Functional programming is all about functions, and immutability in the sense that variables only vary with instance and not with time.
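The same desugaring can be sketched in Haskell, where let x = 5 in ... corresponds to SML's val binding (illustrative only):
-- A binding followed by "the rest of the program" ...
withLet :: Int
withLet = let x = 5 in x * 2

-- ... is the application of an anonymous function to the bound value:
withLambda :: Int
withLambda = (\x -> x * 2) 5

-- Both evaluate to 10: the binding is function application in disguise.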
In particular, mutability is considered bad because it is implicit. Anything that is implicit could potentially cause unforeseeable errors in your program. Hence, the Zen of Python explicitly states that:
Explicit is better than implicit.
In fact, mutable state is the primary cause of software complexity. Hence, functional programming tries to eschew mutability as much as possible. However, that doesn't mean that we only use immutable structures. We can use mutable structures. We just need to be explicit about it.
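As a hedged illustration of "explicit" mutability, Haskell puts mutation in the type: the sketch below uses IORef from base's Data.IORef (SML's ref cells make the same point):
import Data.IORef (modifyIORef', newIORef, readIORef)

-- The IO in the type advertises that this computation manipulates state.
explicitCounter :: IO Int
explicitCounter = do
  ref <- newIORef 0       -- allocate an explicit mutable cell
  modifyIORef' ref (+ 1)  -- mutation is a visible, first-class operation
  readIORef ref           -- returns 1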
What do I have to throw away about imperative programming to catch on FP quickly?
Nothing. Functional programming and imperative programming are two ways of thinking about computation. Computation is a very abstract idea. What exactly is computation? How do we know which problems can be computed? These were some of the questions that mathematicians struggled with in the early twentieth century.
To think about computation we need to have a formal model of computation (i.e. a formal system that captures the notion of computation). There are various models of computation. However, the most famous are the lambda calculus (which gave rise to functional programming) and Turing machines (which gave rise to imperative programming).
Both the lambda calculus and the Turing machine are equivalent in power. They both capture the notion of computing, and every problem that can be computed can be expressed as either a lambda calculus expression or an equivalent Turing machine. The only difference is the way in which you express your problem.
You don't have to throw away anything that you learned about imperative programming to understand functional programming. Neither one is better than the other. They are both just different ways of expressing the same problem. However, you do need to start thinking like a functional programmer to write good functional programs.

Related

Functional vs Procedural programming [duplicate]

I've read the Wikipedia articles for both procedural programming and functional programming, but I'm still slightly confused. Could someone boil it down to the core?
A functional language (ideally) allows you to write a mathematical function, i.e. a function that takes n arguments and returns a value. If the program is executed, this function is logically evaluated as needed.[1]
A procedural language, on the other hand, performs a series of sequential steps. (There's a way of transforming sequential logic into functional logic called continuation passing style.)
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined; which means that uncertain values like user input or random values are hard to model in purely functional languages.
[1] As everything else in this answer, that’s a generalisation. This property, evaluating a computation when its result is needed rather than sequentially where it’s called, is known as “laziness”. Not all functional languages are actually universally lazy, nor is laziness restricted to functional programming. Rather, the description given here provides a “mental framework” to think about different programming styles that are not distinct and opposite categories but rather fluid ideas.
Basically the two styles are like Yin and Yang. One is organized, while the other is chaotic. There are situations when Functional programming is the obvious choice, and other situations where Procedural programming is the better choice. This is why there are at least two languages that have recently come out with a new version that embraces both programming styles. ( Perl 6 and D 2 )
#Procedural:#
The output of a routine does not always have a direct correlation with the input.
Everything is done in a specific order.
Execution of a routine may have side effects.
Tends to emphasize implementing solutions in a linear fashion.
##Perl 6 ##
sub factorial ( UInt:D $n is copy ) returns UInt {
    # modify "outside" state
    state $call-count++;
    # in this case it is rather pointless as
    # it can't even be accessed from outside
    my $result = 1;
    loop ( ; $n > 0 ; $n-- ){
        $result *= $n;
    }
    return $result;
}
##D 2##
int factorial( int n ){
    int result = 1;
    for( ; n > 0 ; n-- ){
        result *= n;
    }
    return result;
}
#Functional:#
Often recursive.
Always returns the same output for a given input.
Order of evaluation is usually undefined.
Must be stateless. i.e. No operation can have side effects.
Good fit for parallel execution.
Tends to emphasize a divide and conquer approach.
May have the feature of Lazy Evaluation.
##Haskell##
(copied from Wikipedia):
fac :: Integer -> Integer
fac 0 = 1
fac n | n > 0 = n * fac (n-1)
or in one line:
fac n = if n > 0 then n * fac (n-1) else 1
##Perl 6 ##
proto sub factorial ( UInt:D $n ) returns UInt {*}
multi sub factorial ( 0 ) { 1 }
multi sub factorial ( $n ) { $n * samewith $n-1 } # { $n * factorial $n-1 }
##D 2##
pure int factorial( invariant int n ){
    if( n <= 1 ){
        return 1;
    }else{
        return n * factorial( n-1 );
    }
}
#Side note:#
Factorial is actually a common example to show how easy it is to create new operators in Perl 6 the same way you would create a subroutine. This feature is so ingrained into Perl 6 that most operators in the Rakudo implementation are defined this way. It also allows you to add your own multi candidates to existing operators.
sub postfix:< ! > ( UInt:D $n --> UInt )
is tighter(&infix:<*>)
{ [*] 2 .. $n }
say 5!; # 120␤
This example also shows range creation (2..$n) and the list reduction meta-operator ([ OPERATOR ] LIST) combined with the numeric infix multiplication operator. (*)
It also shows that you can put --> UInt in the signature instead of returns UInt after it.
( You can get away with starting the range with 2 as the multiply "operator" will return 1 when called without any arguments )
I've never seen this definition given elsewhere, but I think this sums up the differences given here fairly well:
Functional programming focuses on expressions
Procedural programming focuses on statements
Expressions have values. A functional program is an expression whose value is a sequence of instructions for the computer to carry out.
Statements don't have values and instead modify the state of some conceptual machine.
In a purely functional language there would be no statements, in the sense that there's no way to manipulate state (they might still have a syntactic construct named "statement", but unless it manipulates state I wouldn't call it a statement in this sense). In a purely procedural language there would be no expressions, everything would be an instruction which manipulates the state of the machine.
Haskell would be an example of a purely functional language because there is no way to manipulate state. Machine code would be an example of a purely procedural language because everything in a program is a statement which manipulates the state of the registers and memory of the machine.
The confusing part is that the vast majority of programming languages contain both expressions and statements, allowing you to mix paradigms. Languages can be classified as more functional or more procedural based on how much they encourage the use of statements vs expressions.
For example, C would be more functional than COBOL because a function call is an expression, whereas calling a subprogram in COBOL is a statement (that manipulates the state of shared variables and doesn't return a value). Python would be more functional than C because it allows you to express conditional logic as an expression, using short-circuit evaluation (test and path1 or path2, as opposed to if statements). Scheme would be more functional than Python because everything in Scheme is an expression.
You can still write in a functional style in a language which encourages the procedural paradigm and vice versa. It's just harder and/or more awkward to write in a paradigm which isn't encouraged by the language.
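As a small sketch of the distinction (in Haskell, with illustrative names), a conditional here is an expression with a value rather than a statement:
-- "if" has a value and can appear anywhere a value can.
classify :: Int -> String
classify n = if n >= 0 then "non-negative" else "negative"

-- A statement-oriented language would instead assign to a variable from two
-- branches; here the branches themselves are the value.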
Functional Programming
num = 1

def function_to_add_one(num):
    num += 1
    return num

function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
#Final Output: 2
Procedural Programming
num = 1

def procedure_to_add_one():
    global num
    num += 1
    return num

procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
#Final Output: 6
function_to_add_one is a function
procedure_to_add_one is a procedure
Even if you run the function five times, every time it will return 2
If you run the procedure five times, at the end of the fifth run it will give you 6.
DISCLAIMER: Obviously this is a hyper-simplified view of reality. This answer just gives a taste of "functions" as opposed to "procedures". Nothing more. Once you have tasted this superficial yet deeply penetrating intuition, start exploring the two paradigms, and you will start to see the difference quite clearly.
Helps my students, hope it helps you too.
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the procedural programming style that emphasizes changes in state.
I believe that procedural/functional/objective programming are about how to approach a problem.
The first style would plan everything into steps and solve the problem by implementing one step (a procedure) at a time. On the other hand, functional programming would emphasize the divide-and-conquer approach, where the problem is divided into sub-problems, each sub-problem is solved (creating a function to solve that sub-problem), and the results are combined to create the answer for the whole problem. Lastly, objective programming would mimic the real world by creating a mini-world inside the computer with many objects, each of which has (somewhat) unique characteristics and interacts with others. From those interactions the result would emerge.
Each style of programming has its own advantages and weaknesses. Hence, doing something such as "pure programming" (i.e. purely procedural - no one does this, by the way, which is kind of weird - or purely functional or purely objective) is very difficult, if not impossible, except some elementary problems specially designed to demonstrate the advantage of a programming style (hence, we call those who like pureness "weenie" :D).
Then, from those styles, we have programming languages that are designed to be optimized for each style. For example, Assembly is all about procedural. Okay, most early languages are procedural, not only Asm: C, Pascal (and Fortran, I heard). Then we have the famous Java in the objective school (actually, Java and C# are also in a class called "money-oriented," but that is a subject for another discussion). Also objective is Smalltalk. In the functional school, we would have the "nearly functional" (some consider them to be impure) Lisp family and ML family, and the many "purely functional" Haskell, Erlang, etc. By the way, there are many general languages such as Perl, Python, Ruby.
To expand on Konrad's comment:
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined;
Because of this, functional code is generally easier to parallelize. Since there are (generally) no side effects of the functions, and they (generally) just act on their arguments, a lot of concurrency issues go away.
Functional programming is also used when you need to be capable of proving your code is correct. This is much harder to do with procedural programming (not easy with functional, but still easier).
Disclaimer: I haven't used functional programming in years, and only recently started looking at it again, so I might not be completely correct here. :)
One thing I hadn't seen really emphasized here is that modern functional languages such as Haskell rely more on first-class functions for flow control than on explicit recursion. You don't need to define factorial recursively in Haskell, as was done above. I think something like
fac n = foldr (*) 1 [1..n]
is a perfectly idiomatic construction, and much closer in spirit to using a loop than to using explicit recursion.
Functional programming is identical to procedural programming in which global variables are not used.
Procedural languages tend to keep track of state (using variables) and tend to execute as a sequence of steps. Purely functional languages don't keep track of state, use immutable values, and tend to execute as a series of dependencies. In many cases the status of the call stack will hold the information that would be equivalent to that which would be stored in state variables in procedural code.
Recursion is a classic example of functional style programming.
Konrad said:
As a consequence, a purely functional program always yields the same value for an input,
and the order of evaluation is not well-defined; which means that uncertain values like
user input or random values are hard to model in purely functional languages.
The order of evaluation in a purely functional program may be hard(er) to reason about (especially with laziness) or even unimportant but I think that saying it is not well defined makes it sound like you can't tell if your program is going to work at all!
Perhaps a better explanation would be that control flow in functional programs is based on when the value of a function's arguments is needed. The Good Thing about this is that in well-written programs, state becomes explicit: each function lists its inputs as parameters instead of arbitrarily munging global state. So on some level, it is easier to reason about order of evaluation with respect to one function at a time. Each function can ignore the rest of the universe and focus on what it needs to do. When combined, functions are guaranteed to work the same[1] as they would in isolation.
... uncertain values like user input or random values are hard to model in purely
functional languages.
The solution to the input problem in purely functional programs is to embed an imperative language as a DSL using a sufficiently powerful abstraction. In imperative (or non-pure functional) languages this is not needed because you can "cheat" and pass state implicitly and order of evaluation is explicit (whether you like it or not). Because of this "cheating" and forced evaluation of all parameters to every function, in imperative languages 1) you lose the ability to create your own control flow mechanisms (without macros), 2) code isn't inherently thread safe and/or parallelizable by default, 3) and implementing something like undo (time travel) takes careful work (imperative programmer must store a recipe for getting the old value(s) back!), whereas pure functional programming buys you all these things—and a few more I may have forgotten—"for free".
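For instance, Haskell models user input as an explicitly sequenced action rather than implicit state; a minimal sketch:
-- getLine is not a String; it is an action that yields a String.
-- The do-block makes the order of effects explicit instead of implicit.
main :: IO ()
main = do
  line <- getLine
  putStrLn ("You typed: " ++ line)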
I hope this doesn't sound like zealotry, I just wanted to add some perspective. Imperative programming and especially mixed paradigm programming in powerful languages like C# 3.0 are still totally effective ways to get things done and there is no silver bullet.
[1] ... except possibly with respect to memory usage (cf. foldl and foldl' in Haskell).
To expand on Konrad's comment:
and the order of evaluation is not
well-defined
Some functional languages have what is called Lazy Evaluation, which means a function is not executed until the value is needed. Until that time the function itself is what is passed around.
Procedural languages are step 1, step 2, step 3... if in step 2 you say add 2 + 2, it does it right then. In lazy evaluation you would say add 2 + 2, but if the result is never used, it never does the addition.
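A small Haskell sketch of that behaviour (names are illustrative):
-- The addition below is never performed: fst never demands the second component.
neverAdded :: Int
neverAdded = fst (1, 2 + 2)

-- Laziness also makes infinite structures usable: only five cells are forced.
firstFive :: [Int]
firstFive = take 5 [1 ..]  -- [1,2,3,4,5]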
If you have a chance, I would recommend getting a copy of Lisp/Scheme, and doing some projects in it. Most of the ideas that have lately become bandwagons were expressed in Lisp decades ago: functional programming, continuations (as closures), garbage collection, even XML.
So that would be a good way to get a head start on all these current ideas, and a few more besides, like symbolic computation.
You should know what functional programming is good for, and what it isn't good for. It isn't good for everything. Some problems are best expressed in terms of side-effects, where the same question gives different answers depending on when it is asked.
@Creighton:
In Haskell there is a library function called product:
product list = foldr (*) 1 list
or simply:
product = foldr (*) 1
so the "idiomatic" factorial
fac n = foldr (*) 1 [1..n]
would simply be
fac n = product [1..n]
Procedural programming divides sequences of statements and conditional constructs into separate blocks called procedures that are parameterized over arguments that are (non-functional) values.
Functional programming is the same except that functions are first-class values, so they can be passed as arguments to other functions and returned as results from function calls.
Note that functional programming is a generalization of procedural programming in this interpretation. However, a minority interpret "functional programming" to mean side-effect-free, which is quite different but irrelevant for all major functional languages except Haskell.
None of the answers here show idiomatic functional programming. The recursive factorial answer is great for representing recursion in FP, but the majority of code is not recursive so I don't think that answer is fully representative.
Say you have an array of strings, and each string represents an integer like "5" or "-200". You want to check this input array of strings against your internal test case (using integer comparison). Both solutions are shown below.
Procedural
arr_equal(a : [Int], b : [Str]) -> Bool {
    if(a.len != b.len) {
        return false;
    }
    bool ret = true;
    for( int i = 0; i < a.len /* Optimized with && ret*/; i++ ) {
        int a_int = a[i];
        int b_int = parseInt(b[i]);
        ret &= a_int == b_int;
    }
    return ret;
}
Functional
eq = i, j => i == j       # This is usually a built-in
toInt = i => parseInt(i)  # Of course, parseInt === toInt here, but this is for visualization

arr_equal(a : [Int], b : [Str]) -> Bool =
    zip(a, b.map(toInt))  # Combines into [Int, Int]
    .map(eq)
    .reduce(true, (i, j) => i && j)  # Start with true, and continuously && it with each value
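The same pipeline rendered as a hedged Haskell sketch (read stands in for the hypothetical parseInt, and will throw on malformed input, matching the pseudocode's assumption):
-- Zip the integers with the parsed strings, compare pairwise, then fold with (&&).
arrEqual :: [Int] -> [String] -> Bool
arrEqual a b =
  length a == length b
    && and (zipWith (\x s -> x == read s) a b)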
While pure functional languages are generally research languages (as the real world likes its free side effects), real-world procedural languages will use the much simpler functional syntax when appropriate.
This is usually implemented with an external library like Lodash, or available built-in with newer languages like Rust. The heavy lifting of functional programming is done with functions/concepts like map, filter, reduce, currying, partial, the last three of which you can look up for further understanding.
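A hedged sketch of those building blocks in Haskell, where partial application gives currying for free (names are illustrative):
-- map, filter, and a fold, plus currying via partial application:
addTen :: Int -> Int
addTen = (+ 10)  -- "+" partially applied to one argument

pipeline :: [Int] -> Int
pipeline = sum . map addTen . filter even
-- pipeline [1..4] == (2 + 10) + (4 + 10) == 26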
Addendum
In order to be used in the wild, the compiler will normally have to work out how to convert the functional version into the procedural version internally, as function-call overhead is too high. Recursive cases such as the factorial shown will use tricks such as tail calls to remove the O(n) memory usage. The fact that there are no side effects allows functional compilers to implement the && ret optimization even when the .reduce is done last. Using Lodash in JS obviously does not allow for any optimization, so it is a hit to performance (which isn't usually a concern with web development). Languages like Rust will optimize internally (and have functions such as try_fold to assist the && ret optimization).
To understand the difference, one needs to understand that the "godfather" paradigm of both procedural and functional programming is imperative programming.
Basically procedural programming is merely a way of structuring imperative programs in which the primary method of abstraction is the "procedure" (or "function" in some programming languages). Even Object Oriented Programming is just another way of structuring an imperative program, where the state is encapsulated in objects, each becoming an object with a "current state," plus a set of functions, methods, and other stuff that let you, the programmer, manipulate or update the state.
Now, in regards to functional programming, the gist of its approach is that it identifies what values to take and how these values should be transformed (so there is no state and no mutable data: it takes functions as first-class values and passes them as parameters to other functions).
PS: understanding what every programming paradigm is used for should clarify the differences between all of them.
PPS: at the end of the day, programming paradigms are just different approaches to solving problems.
PPPS: this Quora answer has a great explanation.

Correct way of writing recursive functions in CLP(R) with Prolog

I am very confused about how CLP works in Prolog. Not only do I find it hard to see the benefits (I do see them in specific cases but find it hard to generalise those), but more importantly, I can hardly work out how to correctly write a recursive predicate. Which of the following would be the correct form in a CLP(R) way?
factorial(0, 1).
factorial(N, F):- {
    N > 0,
    PrevN = N - 1,
    factorial(PrevN, NewF),
    F = N * NewF}.
or
factorial(0, 1).
factorial(N, F):- {
    N > 0,
    PrevN = N - 1,
    F = N * NewF},
    factorial(PrevN, NewF).
In other words, I am not sure when I should write code outside the constraints. To me, the first case would seem more logical, because PrevN and NewF belong to the constraints. But if that's true, I am curious to see in which cases it is useful to use predicates outside the constraints in a recursive function.
There are several overlapping questions and issues in your post, probably too many to coherently address to your complete satisfaction in a single post.
Therefore, I would like to state a few general principles first, and then—based on that—make a few specific comments about the code you posted.
First, I would like to address what I think is most important in your case:
LP ⊆ CLP
This means simply that CLP can be regarded as a superset of logic programming (LP). Whether it is to be considered a proper superset or if, in fact, it makes even more sense to regard them as denoting the same concept is somewhat debatable. In my personal view, logic programming without constraints is much harder to understand and much less usable than with constraints. Given also that even the very first Prolog systems had a constraint like dif/2, and that essential built-in predicates like (=)/2 perfectly fit the notion of "constraint", the boundaries, if they exist at all, seem at least somewhat artificial to me, suggesting that:
LP ≈ CLP
Be that as it may, the key concept when working with CLP (of any kind) is that the constraints are available as predicates, and used in Prolog programs like all other predicates.
Therefore, whether you have the goal factorial(N, F) or { N > 0 } is, at least in principle, the same concept: Both mean that something holds.
Note the syntax: The CLP(ℛ) constraints have the form { C }, which is {}(C) in prefix notation.
Note that the goal factorial(N, F) is not a CLP(ℛ) constraint! Neither is the following:
?- { factorial(N, F) }.
ERROR: Unhandled exception: type_error({factorial(_3958,_3960)},...)
Thus, { factorial(N, F) } is not a CLP(ℛ) constraint either!
Your first example therefore cannot work for this reason alone already. (In addition, you have a syntax error in the clause head: factorial (, so it also does not compile at all.)
When you are learning to work with a constraint solver, check out the predicates it provides. For example, CLP(ℛ) provides {}/1 and a few other predicates, and has a dedicated syntax for stating relations that hold about floating point numbers (in this case).
Other constraint solvers provide their own predicates for describing the entities of their respective domains. For example, CLP(FD) provides (#=)/2 and a few other predicates to reason about integers. dif/2 lets you reason about any Prolog term. And so on.
From the programmer's perspective, this is exactly the same as using any other predicate of your Prolog system, whether it is built-in or stems from a library. In principle, it's all the same:
A goal like list_length(Ls, L) can be read as: "The length of the list Ls is L."
A goal like { X = A + B } can be read as: The number X is equal to the sum of A and B. For example, if you are using CLP(Q), it is clear that we are talking about rational numbers in this case.
In your second example, the body of the clause is a conjunction of the form (A, B), where A is a CLP(ℛ) constraint, and B is a goal of the form factorial(PrevN, NewF).
The point is: The CLP(ℛ) constraint is also a goal! Check it out:
?- write_canonical({a,b,c}).
{','(a,','(b,c))}
true.
So, you are simply using {}/1 from library(clpr), which is one of the predicates it exports.
You are right that PrevN and NewF belong to the constraints. However, factorial(PrevN, NewF) is not part of the mini-language that CLP(ℛ) implements for reasoning over floating point numbers. Therefore, you cannot pull this goal into the CLP(ℛ)-specific part.
From a programmer's perspective, a major attraction of CLP is that it blends in completely seamlessly into "normal" logic programming, to the point that it can in fact hardly be distinguished at all from it: The constraints are simply predicates, and written down like all other goals.
Whether you label a library predicate a "constraint" or not hardly makes any difference: All predicates can be regarded as constraints, since they can only constrain answers, never relax them.
Note that both examples you post are recursive! That's perfectly OK. In fact, recursive predicates will likely be the majority of situations in which you use constraints in the future.
However, for the concrete case of factorial, your Prolog system's CLP(FD) constraints are likely a better fit, since they are completely dedicated to reasoning about integers.

A Functional-Imperative Hybrid

Pure functional programming languages do not allow mutable data, but some computations are more naturally/intuitively expressed in an imperative way -- or an imperative version of an algorithm may be more efficient. I am aware that most functional languages are not pure, and let you assign/reassign variables and do imperative things but generally discourage it.
My question is, why not allow local state to be manipulated in local variables, but require that functions can only access their own locals and global constants (or just constants defined in an outer scope)? That way, all functions maintain referential transparency (they always give the same return value given the same arguments), but within a function, a computation can be expressed in imperative terms (like, say, a while loop).
IO and such could still be accomplished in the normal functional ways - through monads or passing around a "world" or "universe" token.
My question is, why not allow local state to be manipulated in local variables, but require that functions can only access their own locals and global constants (or just constants defined in an outer scope)?
Good question. I think the answer is that mutable locals are of limited practical value but mutable heap-allocated data structures (primarily arrays) are enormously valuable and form the backbone of many important collections including efficient stacks, queues, sets and dictionaries. So restricting mutation to locals only would not give an otherwise purely functional language any of the important benefits of mutation.
On a related note, communicating sequential processes exchanging purely functional data structures offer many of the benefits of both worlds because the sequential processes can use mutation internally, e.g. mutable message queues are ~10x faster than any purely functional queues. For example, this is idiomatic in F# where the code in a MailboxProcessor uses mutable data structures but the messages communicated between them are immutable.
Sorting is a good case study in this context. Sedgewick's quicksort in C is short and simple and hundreds of times faster than the fastest purely functional sort in any language. The reason is that quicksort mutates the array in-place. Mutable locals would not help. Same story for most graph algorithms.
The short answer is: there are systems to allow what you want. For example, you can do it using the ST monad in Haskell (as referenced in the comments).
The ST monad approach is from Haskell's Control.Monad.ST. Code written in the ST monad can use references (STRef) where convenient. The nice part is that you can even use the results of the ST monad in pure code, as it is essentially self-contained (this is basically what you were wanting in the question).
The proof of this self-contained property is done through the type system. The ST monad carries a state-thread parameter, usually denoted with a type variable s. When you have such a computation you'll have a monadic result, with a type like:
foo :: ST s Int
To actually turn this into a pure result, you have to use
runST :: (forall s . ST s a) -> a
You can read this type like: give me a computation where the s type parameter doesn't matter, and I can give you back the result of the computation, without the ST baggage. This basically keeps the mutable ST variables from escaping, as they would carry the s with them, which would be caught by the type system.
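A small, self-contained example of the pattern (the function name is illustrative): the mutable STRef is created, updated, and read entirely inside runST, so the caller sees a pure function:
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Pure from the outside, imperative on the inside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0                        -- local mutable accumulator
  mapM_ (\x -> modifySTRef' ref (+ x)) xs  -- imperative loop over the list
  readSTRef ref                            -- the s parameter never escapes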
This can be used to good effect on pure structures that are implemented with underlying mutable structures (like the vector package). One can cast off the immutability for a limited time to do something that mutates the underlying array in place. For example, one could combine the immutable Vector with an impure algorithms package to keep most of the performance characteristics of the in-place sorting algorithms and still get purity.
In this case it would look something like:
import Control.Monad.ST (runST)
import Data.Vector (Vector, freeze, thaw)
import Data.Vector.Algorithms.Intro (sort)  -- assumed: the vector-algorithms package

pureSort :: Ord a => Vector a -> Vector a
pureSort vector = runST $ do
    mutableVector <- thaw vector
    sort mutableVector
    freeze mutableVector
The thaw and freeze functions are linear-time copying, but this won't disrupt the overall O(n lg n) running time. You can even use unsafeFreeze to avoid another linear traversal, as the mutable vector isn't used again.
"Pure functional programming languages do not allow mutable data" ... actually it does, you just simply have to recognize where it lies hidden and see it for what it is.
Mutability is where two things have the same name and mutually exclusive times of existence so that they may be treated as "the same thing at different times". But as every Zen philosopher knows, there is no such thing as "same thing at different times". Everything ceases to exist in an instant and is inherited by its successor in possibly changed form, in a (possibly) uncountably-infinite succession of instants.
In the lambda calculus, mutability thus takes the form illustrated by the following example: (λx (λx f(x)) (x+1)) (x+1), which may also be rendered as "let x = x + 1 in let x = x + 1 in f(x)" or just "x = x + 1, x = x + 1, f(x)" in a more C-like notation.
In other words, "name clash" of the "lambda calculus" is actually "update" of imperative programming, in disguise. They are one and the same - in the eyes of the Zen (who is always right).
So, let's refer to each instant and state of the variable as the Zen Scope of an object. One ordinary scope with a mutable object equals many Zen Scopes with constant, immutable objects that either get initialized if they are the first, or inherit from their predecessor if they are not.
When people say "mutability" they're misidentifying and confusing the issue. Mutability (as we've just seen here) is a complete red herring. What they actually mean (even unbeknownst to themselves) is infinite mutability; i.e. the kind which occurs in cyclic control flow structures. In other words, what they're actually referring to - as being specifically "imperative" and not "functional" - is not mutability at all, but cyclic control flow structures along with the infinite nesting of Zen Scopes that this entails.
The key feature that lies absent in the lambda calculus is, thus, seen not as something that may be remedied by the inclusion of an overwrought and overthought "solution" like monads (though that doesn't exclude the possibility of it getting the job done) but as infinitary terms.
A control flow structure is the wrapping of an unwrapped (possibly infinite) decision tree structure. Branches may re-converge. In the corresponding unwrapped structure, they appear as replicated, but separate, branches or subtrees. Goto's are direct links to subtrees. A goto or branch that back-branches to an earlier part of a control flow structure (the very genesis of the "cycling" of a cyclic control flow structure) is a link to an identically-shaped copy of the entire structure being linked to. Corresponding to each structure is its Universally Unrolled decision tree.
More precisely, we may think of a control-flow structure as a statement that precedes an actual expression that conditions the value of that expression. The archetypical case in point is Landin's original case, itself (in his 1960's paper, where he tried to lambda-ize imperative languages): let x = 1 in f(x). The "x = 1" part is the statement, the "f(x)" is the value being conditioned by the statement. In C-like form, we could write this as x = 1, f(x).
More generally, corresponding to each statement S and expression Q is an expression S[Q] which represents the result Q after S is applied. Thus, (x = 1)[f(x)] is just (λx f(x)) 1. The S wraps around the Q. If S contains cyclic control flow structures, the wrapping will be infinitary.
When Landin tried to work out this strategy, he hit a hard wall when he got to the while loop and went "Oops. Never mind." and fell back into what became an overwrought and overthought solution, while this simple (and in retrospect, obvious) answer eluded his notice.
A while loop "while (x < n) x = x + 1;" - which has the "infinite mutability" mentioned above - may itself be treated as an infinitary wrapper, "if (x < n) { x = x + 1; if (x < n) { x = x + 1; if (x < n) { x = x + 1; ... } } }". So, when it wraps around an expression Q, the result is (in C-like notation) "x < n? (x = x + 1, x < n? (x = x + 1, x < n? (x = x + 1, ...): Q): Q): Q", which may be directly rendered in lambda form as "x < n? (λx x < n? (λx x < n? (λx ...) (x + 1): Q) (x + 1): Q) (x + 1): Q". This shows directly the connection between cyclicity and infinitariness.
This is an infinitary expression that, despite being infinite, has only a finite number of distinct subexpressions. Just as we can think of there being a Universally Unrolled form to this expression - which is similar to what's shown above (an infinite decision tree) - we can also think of there being a Maximally Rolled form, which could be obtained by labelling each of the distinct subexpressions and referring to the labels, instead. The key subexpressions would then be:
A: x < n? goto B: Q
B: x = x + 1, goto A
The subexpression labels, here, are "A:" and "B:", while the references to the subexpressions so labelled as "goto A" and "goto B", respectively. So, by magic, the very essence of Imperativitity emerges directly out of the infinitary lambda calculus, without any need to posit it separately or anew.
This way of viewing things applies even down to the level of binary files. Every interpretation of every byte (whether it be a part of an opcode of an instruction that starts 0, 1, 2 or more bytes back, or as part of a data structure) can be treated as being there in tandem, so that the binary file is a rolling up of a much larger universally unrolled structure whose physical byte code representation overlaps extensively with itself.
Thus, emerges the imperative programming language paradigm automatically out of the pure lambda calculus, itself, when the calculus is extended to include infinitary terms. The control flow structure is directly embodied in the very structure of the infinitary expression, itself; and thus requires no additional hacks (like Landin's or later descendants, like monads) - as it's already there.
This synthesis of the imperative and functional paradigms arose in the late 1980's via the USENET, but has not (yet) been published. Part of it was already implicit in the treatment (dating from around the same time) given to languages, like Prolog-II, and the much earlier treatment of cyclic recursive structures by infinitary expressions by Irene Guessarian LNCS 99 "Algebraic Semantics".
Now, earlier I said that the magma-based formulation might get you to the same place, or to an approximation thereof. I believe there is a kind of universal representation theorem of some sort, which asserts that the infinitary based formulation provides a purely syntactic representation, and that the semantics that arise from the monad-based representation factors through this as "monad-based semantics" = "infinitary lambda calculus" + "semantics of infinitary languages".
Likewise, we may think of the "Q" expressions above as being continuations; so there may also be a universal representation theorem for continuation semantics, which similarly rolls this formulation back into the infinitary lambda calculus.
At this point, I've said nothing about non-rational infinitary terms (i.e. infinitary terms which possess an infinite number of distinct subterms and no finite Minimal Rolling) - particularly in relation to interprocedural control flow semantics. Rational terms suffice to account for loops and branches, and so provide a platform for intraprocedural control flow semantics; but not as much so for the call-return semantics that are the essential core element of interprocedural control flow semantics, if you consider subprograms to be directly represented as embellished, glorified macros.
There may be something similar to the Chomsky hierarchy for infinitary term languages; so that type 3 corresponds to rational terms, type 2 to "algebraic terms" (those that can be rolled up into a finite set of "goto" references and "macro" definitions), and type 0 for "transcendental terms". That is, for me, an unresolved loose end, as well.

Algebraic structure and programming

Can anyone give me an example of how we can improve our code reusability using algebraic structures like groups, monoids and rings? (Or how can I make use of these kinds of structures in programming, knowing at least that I didn't learn all that theory in high school for nothing?)
I heard this is possible but I can't figure out a way of applying them in programming, or generally applying hardcore mathematics in programming.
It is not really the mathematical stuff that helps so much as the mathematical thinking. Abstraction is the key in programming. Transforming real-life concepts into numbers and relations is what we do every day. Algebra is the mother of it all: algebra is the set of rules that defines correctness, and it is the highest level of abstraction, so understanding algebra means you can think more clearly, faster, and more efficiently. From set theory to category theory, domain theory, etc., everything comes from practical challenges and from the requirements of abstraction and generalization.
In common practice you will not need to actually know these, although if you are thinking of developing stuff like AI agents, programming languages, or fundamental concepts and tools, then they are a must.
In functional programming, esp. Haskell, it's common to structure programs that transform states as monads. Doing so means you can reuse generic algorithms on monads in very different programs.
The C++ standard template library features the concept of a monoid. The idea is again that generic algorithms may require an operation to satisfy the axioms of monoids for their correctness.
E.g., if we can prove the type T we're operating on (numbers, strings, whatever) is closed under the operation, we know we won't have to check for certain errors; we always get a valid T back. If we can prove that the operation is associative (x * (y * z) = (x * y) * z), then we can reuse the fork-join architecture: a simple but effective way of parallel programming implemented in various libraries.
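The same idea sketched in Haskell, where the monoid is explicit (Sum is the integers-under-addition monoid from base's Data.Monoid):
import Data.Monoid (Sum (..))

-- Because (<>) on Sum is associative with identity 0, the reduction may be
-- bracketed in any order, e.g. split into chunks and combined in parallel.
total :: [Int] -> Int
total = getSum . foldMap Sum

-- foldMap Sum [1,2,3,4] == Sum ((1 + 2) + (3 + 4)) == Sum (1 + (2 + (3 + 4)))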
Computer science seems to get a lot of mileage out of category theory these days. You get monads, monoids, functors -- an entire bestiary of mathematical entities that are being used to improve code reusability, harnessing the abstraction of abstract mathematics.
Lists are free monoids with one generator; binary trees are free magmas. You have either the finite or infinite variant.
Starting points:
http://en.wikipedia.org/wiki/Algebraic_data_type
http://en.wikipedia.org/wiki/Initial_algebra
http://en.wikipedia.org/wiki/F-algebra
You may want to learn category theory, and the way category theory approaches algebraic structures: it is exactly the way functional programming languages approach data structures, at least shapewise.
Example: the type Tree A is
Tree A = () | Tree A | Tree A * Tree A
which reads as the existence of an isomorphism (*) (I set G = Tree A)
1 + G + G x G -> G
which has the same shape as a group structure
phi : 1 + G + G x G -> G
() ∈ 1 -> e
x ∈ G -> x^(-1)
(x, y) ∈ G x G -> x * y
Indeed, binary trees can represent expressions, and they form an algebraic structure. An element of G reads as either the identity, an inverse of an element or the product of two elements. A binary tree is either a leaf, a single tree or a pair of trees. Note the similarity in shape.
(*) as well as a universal property, but there are two of them (finite trees or infinite lazy trees) so I won't delve into details.
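Written as a Haskell datatype, the shape correspondence looks like this (a sketch; the constructor names are illustrative):
-- One constructor per summand of 1 + G + G x G:
data Expr
  = E              -- ()     in 1     : the identity
  | Inv Expr       -- x      in G     : a formal inverse
  | Mul Expr Expr  -- (x, y) in G x G : a formal product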
As I had no idea this stuff existed in the computer science world, please disregard this answer ;)
I don't think the two fields (no pun intended) have any overlap. Rings/fields/groups deal with mathematical objects. Consider a part of the definition of a field:
For every a in F, there exists an element −a in F, such that a + (−a) = 0. Similarly, for any a in F other than 0, there exists an element a^−1 in F, such that a · a^−1 = 1. (The elements a + (−b) and a · b^−1 are also denoted a − b and a/b, respectively.) In other words, subtraction and division operations exist.
What the heck does this mean in terms of programming? I surely can't have an additive inverse of a list object in Python (well, I could just destroy the object, but that is like the multiplicative inverse. I guess you could get somewhere trying to define a Python-ring, but it just won't work out in the end). Don't even think about dividing lists...
As for code readability, I have absolutely no idea how this can even be applied, so this application is irrelevant.
This is my interpretation, but being a mathematics major probably makes me blind to other terminology from different fields (you know which one I'm talking about).
Monoids are ubiquitous in programming. In some programming languages, eg. Haskell, we can make monoids explicit http://blog.sigfpe.com/2009/01/haskell-monoids-and-their-uses.html

What is the difference between procedural programming and functional programming? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 4 years ago.
The community reviewed whether to reopen this question 17 days ago and left it closed:
Original close reason(s) were not resolved
Improve this question
I've read the Wikipedia articles for both procedural programming and functional programming, but I'm still slightly confused. Could someone boil it down to the core?
A functional language (ideally) allows you to write a mathematical function, i.e. a function that takes n arguments and returns a value. If the program is executed, this function is logically evaluated as needed.1
A procedural language, on the other hand, performs a series of sequential steps. (There's a way of transforming sequential logic into functional logic called continuation passing style.)
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined; which means that uncertain values like user input or random values are hard to model in purely functional languages.
1 As everything else in this answer, that’s a generalisation. This property, evaluating a computation when its result is needed rather than sequentially where it’s called, is known as “laziness”. Not all functional languages are actually universally lazy, nor is laziness restricted to functional programming. Rather, the description given here provides a “mental framework” to think about different programming styles that are not distinct and opposite categories but rather fluid ideas.
Basically the two styles, are like Yin and Yang. One is organized, while the other chaotic. There are situations when Functional programming is the obvious choice, and other situations were Procedural programming is the better choice. This is why there are at least two languages that have recently come out with a new version, that embraces both programming styles. ( Perl 6 and D 2 )
#Procedural:#
The output of a routine does not always have a direct correlation with the input.
Everything is done in a specific order.
Execution of a routine may have side effects.
Tends to emphasize implementing solutions in a linear fashion.
##Perl 6 ##
sub factorial ( UInt:D $n is copy ) returns UInt {
# modify "outside" state
state $call-count++;
# in this case it is rather pointless as
# it can't even be accessed from outside
my $result = 1;
loop ( ; $n > 0 ; $n-- ){
$result *= $n;
}
return $result;
}
##D 2##
int factorial( int n ){
int result = 1;
for( ; n > 0 ; n-- ){
result *= n;
}
return result;
}
#Functional:#
Often recursive.
Always returns the same output for a given input.
Order of evaluation is usually undefined.
Must be stateless. i.e. No operation can have side effects.
Good fit for parallel execution
Tends to emphasize a divide and conquer approach.
May have the feature of Lazy Evaluation.
##Haskell##
( copied from Wikipedia );
fac :: Integer -> Integer
fac 0 = 1
fac n | n > 0 = n * fac (n-1)
or in one line:
fac n = if n > 0 then n * fac (n-1) else 1
##Perl 6 ##
proto sub factorial ( UInt:D $n ) returns UInt {*}
multi sub factorial ( 0 ) { 1 }
multi sub factorial ( $n ) { $n * samewith $n-1 } # { $n * factorial $n-1 }
##D 2##
pure int factorial( invariant int n ){
if( n <= 1 ){
return 1;
}else{
return n * factorial( n-1 );
}
}
#Side note:#
Factorial is actually a common example to show how easy it is to create new operators in Perl 6 the same way you would create a subroutine. This feature is so ingrained into Perl 6 that most operators in the Rakudo implementation are defined this way. It also allows you to add your own multi candidates to existing operators.
sub postfix:< ! > ( UInt:D $n --> UInt )
is tighter(&infix:<*>)
{ [*] 2 .. $n }
say 5!; # 120␤
This example also shows range creation (2..$n) and the list reduction meta-operator ([ OPERATOR ] LIST) combined with the numeric infix multiplication operator. (*)
It also shows that you can put --> UInt in the signature instead of returns UInt after it.
( You can get away with starting the range with 2 as the multiply "operator" will return 1 when called without any arguments )
I've never seen this definition given elsewhere, but I think this sums up the differences given here fairly well:
Functional programming focuses on expressions
Procedural programming focuses on statements
Expressions have values. A functional program is an expression who's value is a sequence of instructions for the computer to carry out.
Statements don't have values and instead modify the state of some conceptual machine.
In a purely functional language there would be no statements, in the sense that there's no way to manipulate state (they might still have a syntactic construct named "statement", but unless it manipulates state I wouldn't call it a statement in this sense). In a purely procedural language there would be no expressions, everything would be an instruction which manipulates the state of the machine.
Haskell would be an example of a purely functional language because there is no way to manipulate state. Machine code would be an example of a purely procedural language because everything in a program is a statement which manipulates the state of the registers and memory of the machine.
The confusing part is that the vast majority of programming languages contain both expressions and statements, allowing you to mix paradigms. Languages can be classified as more functional or more procedural based on how much they encourage the use of statements vs expressions.
For example, C would be more functional than COBOL because a function call is an expression, whereas calling a sub program in COBOL is a statement (that manipulates the state of shared variables and doesn't return a value). Python would be more functional than C because it allows you to express conditional logic as an expression using short circuit evaluation (test && path1 || path2 as opposed to if statements). Scheme would be more functional than Python because everything in scheme is an expression.
You can still write in a functional style in a language which encourages the procedural paradigm and vice versa. It's just harder and/or more awkward to write in a paradigm which isn't encouraged by the language.
Funtional Programming
num = 1
def function_to_add_one(num):
num += 1
return num
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
#Final Output: 2
Procedural Programming
num = 1
def procedure_to_add_one():
global num
num += 1
return num
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
#Final Output: 6
function_to_add_one is a function
procedure_to_add_one is a procedure
Even if you run the function five times, every time it will return 2
If you run the procedure five times, at the end of fifth run it will give you 6.
DISCLAIMER: Obviously this is a hyper-simplified view of reality. This answer just gives a taste of "functions" as opposed to "procedures". Nothing more. Once you have tasted this superficial yet deeply penetrative intuition, start exploring the two paradigms, and you will start to see the difference quite clearly.
Helps my students, hope it helps you too.
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the procedural programming style that emphasizes changes in state.
I believe that procedural/functional/objective programming are about how to approach a problem.
The first style would plan everything in to steps, and solves the problem by implementing one step (a procedure) at a time. On the other hand, functional programming would emphasize the divide-and-conquer approach, where the problem is divided into sub-problem, then each sub-problem is solved (creating a function to solve that sub problem) and the results are combined to create the answer for the whole problem. Lastly, Objective programming would mimic the real world by create a mini-world inside the computer with many objects, each of which has a (somewhat) unique characteristics, and interacts with others. From those interactions the result would emerge.
Each style of programming has its own advantages and weaknesses. Hence, doing something such as "pure programming" (i.e. purely procedural - no one does this, by the way, which is kind of weird - or purely functional or purely objective) is very difficult, if not impossible, except some elementary problems specially designed to demonstrate the advantage of a programming style (hence, we call those who like pureness "weenie" :D).
Then, from those styles, we have programming languages that is designed to optimized for some each style. For example, Assembly is all about procedural. Okay, most early languages are procedural, not only Asm, like C, Pascal, (and Fortran, I heard). Then, we have all famous Java in objective school (Actually, Java and C# is also in a class called "money-oriented," but that is subject for another discussion). Also objective is Smalltalk. In functional school, we would have "nearly functional" (some considered them to be impure) Lisp family and ML family and many "purely functional" Haskell, Erlang, etc. By the way, there are many general languages such as Perl, Python, Ruby.
To expand on Konrad's comment:
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined;
Because of this, functional code is generally easier to parallelize. Since there are (generally) no side effects of the functions, and they (generally) just act on their arguments, a lot of concurrency issues go away.
Functional programming is also used when you need to be capable of proving your code is correct. This is much harder to do with procedural programming (not easy with functional, but still easier).
Disclaimer: I haven't used functional programming in years, and only recently started looking at it again, so I might not be completely correct here. :)
One thing I hadn't seen really emphasized here is that modern functional languages such as Haskell rely more on first-class functions for flow control than on explicit recursion. You don't need to define factorial recursively in Haskell, as was done above. I think something like
fac n = foldr (*) 1 [1..n]
is a perfectly idiomatic construction, and much closer in spirit to using a loop than to using explicit recursion.
Functional programming is identical to procedural programming in which no global variables are used.
Procedural languages tend to keep track of state (using variables) and tend to execute as a sequence of steps. Purely functional languages don't keep track of state, use immutable values, and tend to execute as a series of dependencies. In many cases, the call stack holds the information that would be stored in state variables in procedural code.
Recursion is a classic example of functional style programming.
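A minimal Haskell sketch of that point (names are my own): summing a list recursively keeps the running computation in the pending calls on the stack, rather than in a mutable accumulator variable:
-- No mutable counter: the "state" lives in the pending stack frames.
sumList :: [Int] -> Int
sumList []       = 0
sumList (x : xs) = x + sumList xs  -- each frame remembers its own x

main :: IO ()
main = print (sumList [1, 2, 3, 4])  -- prints 10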
Konrad said:
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined; which means that uncertain values like user input or random values are hard to model in purely functional languages.
The order of evaluation in a purely functional program may be hard(er) to reason about (especially with laziness), or even unimportant, but I think that saying it is not well-defined makes it sound like you can't tell whether your program is going to work at all!
Perhaps a better explanation would be that control flow in functional programs is based on when the value of a function's arguments is needed. The good thing about this is that in well-written programs, state becomes explicit: each function lists its inputs as parameters instead of arbitrarily munging global state. So on some level, it is easier to reason about order of evaluation with respect to one function at a time. Each function can ignore the rest of the universe and focus on what it needs to do. When combined, functions are guaranteed to work the same[1] as they would in isolation.
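As a deliberately tiny Haskell sketch of state-as-parameters (the names are hypothetical): a counter whose state goes in as an argument and comes out in the result, so nothing is hidden from the caller:
-- The counter's state is an explicit input and an explicit output.
tick :: Int -> (Int, Int)
tick n = (n, n + 1)  -- (current value, next state)

main :: IO ()
main = do
  let (a, s1) = tick 0
      (b, s2) = tick s1
      (c, _)  = tick s2
  print [a, b, c]  -- prints [0,1,2]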
... uncertain values like user input or random values are hard to model in purely functional languages.
The solution to the input problem in purely functional programs is to embed an imperative language as a DSL using a sufficiently powerful abstraction. In imperative (or non-pure functional) languages this is not needed, because you can "cheat" and pass state implicitly, and the order of evaluation is explicit (whether you like it or not). Because of this "cheating" and the forced evaluation of all parameters to every function, in imperative languages (1) you lose the ability to create your own control-flow mechanisms (without macros), (2) code isn't inherently thread-safe and/or parallelizable by default, and (3) implementing something like undo (time travel) takes careful work (the imperative programmer must store a recipe for getting the old value(s) back!), whereas pure functional programming buys you all these things, and a few more I may have forgotten, "for free".
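Haskell's IO type with do notation is exactly such an embedded imperative DSL; this minimal sketch reads user input in imperative-looking code while the rest of the language stays pure:
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine                  -- input enters through IO, not a global
  putStrLn ("Hello, " ++ name)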
I hope this doesn't sound like zealotry, I just wanted to add some perspective. Imperative programming and especially mixed paradigm programming in powerful languages like C# 3.0 are still totally effective ways to get things done and there is no silver bullet.
[1] ... except possibly with respect to memory usage (cf. foldl and foldl' in Haskell).
To expand on Konrad's comment:
and the order of evaluation is not well-defined
Some functional languages have what is called lazy evaluation, which means a function is not executed until its value is needed. Until that time, the unevaluated expression itself is what is passed around.
Procedural languages are step 1, step 2, step 3...; if in step 2 you say add 2 + 2, it does it right then. With lazy evaluation you would say add 2 + 2, but if the result is never used, the addition is never performed.
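A minimal Haskell sketch of this: because evaluation happens only on demand, even an infinite list is harmless as long as you only ask for part of it:
main :: IO ()
main = do
  let evens = map (* 2) [1 ..]  -- an infinite list, defined lazily
  print (take 5 evens)          -- only 5 elements are ever computed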
If you have a chance, I would recommend getting a copy of Lisp/Scheme and doing some projects in it. Most of the ideas that have lately become bandwagons were expressed in Lisp decades ago: functional programming, continuations (as closures), garbage collection, even XML.
So that would be a good way to get a head start on all these current ideas, and a few more besides, like symbolic computation.
You should know what functional programming is good for, and what it isn't good for. It isn't good for everything. Some problems are best expressed in terms of side effects, where the same question gives different answers depending on when it is asked.
#Creighton:
In Haskell there is a library function called product:
product list = foldr (*) 1 list
or simply:
product = foldr (*) 1
so the "idiomatic" factorial
fac n = foldr (*) 1 [1..n]
would simply be
fac n = product [1..n]
Procedural programming divides sequences of statements and conditional constructs into separate blocks called procedures that are parameterized over arguments that are (non-functional) values.
Functional programming is the same except that functions are first-class values, so they can be passed as arguments to other functions and returned as results from function calls.
Note that functional programming is a generalization of procedural programming in this interpretation. However, a minority interpret "functional programming" to mean side-effect-free, which is quite different but irrelevant for all major functional languages except Haskell.
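A minimal Haskell sketch of that difference: twice accepts a function as an argument, and addN returns a newly built function as its result:
-- A function passed in as an argument...
twice :: (a -> a) -> a -> a
twice f x = f (f x)

-- ...and a function returned as a result.
addN :: Int -> (Int -> Int)
addN n = \x -> x + n

main :: IO ()
main = print (twice (addN 3) 10)  -- prints 16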
None of the answers here show idiomatic functional programming. The recursive factorial answer is great for representing recursion in FP, but the majority of code is not recursive, so I don't think that answer is fully representative.
Say you have an array of strings, where each string represents an integer like "5" or "-200". You want to check this input array of strings against your internal test case (using integer comparison). Both solutions are shown below.
Procedural
arr_equal(a : [Int], b : [Str]) -> Bool {
    if (a.len != b.len) {
        return false;
    }
    ret : Bool = true;
    for (i = 0; i < a.len /* Optimized with && ret */; i++) {
        a_int = a[i];
        b_int = parseInt(b[i]);
        ret = ret && (a_int == b_int);
    }
    return ret;
}
Functional
eq    = (i, j) => i == j   # This is usually a built-in
toInt = s => parseInt(s)   # Of course, parseInt === toInt here, but this is for visualization

arr_equal(a : [Int], b : [Str]) -> Bool =
    zip(a, b.map(toInt))                 # Combines into [(Int, Int)]
    .map((i, j) => eq(i, j))             # Compare each pair
    .reduce(true, (acc, x) => acc && x)  # Start with true, and continuously && it with each value
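For comparison, here is a minimal Haskell rendering of the same pipeline (the names are my own): zipWith fuses the zip-and-map steps, and "and" plays the role of the reduce:
-- zip + map collapses to zipWith; reduce(true, &&) collapses to and.
arrEqual :: [Int] -> [String] -> Bool
arrEqual a b =
  length a == length b
    && and (zipWith (\i s -> i == read s) a b)

main :: IO ()
main = print (arrEqual [5, -200] ["5", "-200"])  -- prints True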
While pure functional languages are generally research languages (as the real world likes free side effects), real-world procedural languages will use the much simpler functional syntax when appropriate.
This is usually implemented with an external library like Lodash, or built into newer languages like Rust. The heavy lifting of functional programming is done with functions/concepts like map, filter, reduce, currying, and partial application, the last three of which you can look up for further understanding.
Addendum
In order to be used in the wild, the compiler will normally have to work out how to convert the functional version into the procedural version internally, since function-call overhead is too high. Recursive cases such as the factorial shown above will use tricks such as tail calls to remove the O(n) memory usage. The fact that there are no side effects allows functional compilers to implement the && ret optimization even when the .reduce is done last. Using Lodash in JS obviously does not allow for any such optimization, so it is a performance hit (which isn't usually a concern in web development). Languages like Rust will optimize internally (and have functions such as try_fold to assist the && ret optimization).
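A minimal Haskell sketch of the tail-call trick mentioned above: with an accumulator, the recursive call is the last thing the function does, so the compiler can reuse the stack frame (in Haskell specifically, a strict accumulator such as foldl' provides is also needed for truly constant space):
-- The accumulator carries the running product, so each recursive call
-- is in tail position and needs no new stack frame.
fac :: Integer -> Integer
fac n = go 1 n
  where
    go acc 0 = acc
    go acc k = go (acc * k) (k - 1)  -- nothing left to do after this call

main :: IO ()
main = print (fac 10)  -- prints 3628800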
To understand the difference, one needs to understand that the "godfather" paradigm of both procedural and functional programming is imperative programming.
Basically, procedural programming is merely a way of structuring imperative programs in which the primary method of abstraction is the "procedure" (or "function" in some programming languages). Even object-oriented programming is just another way of structuring an imperative program, where state is encapsulated in objects, each object having a "current state" plus a set of functions, methods, and other machinery that let you, the programmer, manipulate or update that state.
Now, as regards functional programming, the gist of its approach is that it identifies what values to take and how those values should be transformed (so there is no state and no mutable data; functions are treated as first-class values and passed as parameters to other functions).
PS: understanding what each programming paradigm is used for should clarify the differences between all of them.
PPS: at the end of the day, programming paradigms are just different approaches to solving problems.
PPPS: this quora answer has a great explanation.
